Fahad Javaid Siddiqui
Profile URL: fahad-javaid-siddiqui
Researcher at Duke-NUS Medical School
Since the beginning of the COVID-19 outbreak in December 2019, a substantial body of COVID-19 medical literature has been generated. As of May 2020, gaps in the existing literature remain unidentified and, hence, unaddressed. In this paper, we summarise the medical literature on COVID-19 published between 1 January and 24 March 2020 using evidence maps and bibliometric analysis in order to systematically identify gaps and propose areas for valuable future research. The examined COVID-19 medical literature originated primarily from Asia and focussed mainly on clinical features and diagnosis of the disease. Many areas of potential research remain underexplored, such as mental health research, the use of novel technologies and artificial intelligence, research on the pathophysiology of COVID-19 within different body systems, and research on the indirect effects of COVID-19 on the care of non-COVID-19 patients. Research collaboration at the international level was limited, although improved collaboration could aid global containment efforts.
Background: Little is known about the role of artificial intelligence (AI) as a decisive technology in the clinical management of COVID-19 patients. We aimed to systematically review and critically appraise the current evidence on AI applications for COVID-19 in intensive care and emergency settings, focusing on methods, reporting standards, and clinical utility.

Methods: We systematically searched PubMed, Embase, Scopus, CINAHL, IEEE Xplore, and ACM Digital Library databases from inception to 1 October 2020, without language restrictions. We included peer-reviewed original studies that applied AI for COVID-19 patients, healthcare workers, or health systems in intensive care, emergency, or prehospital settings. We assessed predictive modelling studies using PROBAST (prediction model risk of bias assessment tool) and a modified TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) statement for AI. We critically appraised the methodology and key findings of all other studies.

Results: Of fourteen eligible studies, eleven developed prognostic or diagnostic AI predictive models, all of which were assessed to be at high risk of bias. Common pitfalls included inadequate sample sizes, poor handling of missing data, failure to account for censored participants, and weak validation of models. Studies had low adherence to reporting guidelines, with particularly poor reporting on model calibration and on blinding of outcome and predictor assessment. Of the remaining three studies, two evaluated the prognostic utility of deep learning-based lung segmentation software and one studied an AI-based system for resource optimisation in the ICU. These studies had similar issues in methodology, validation, and reporting.

Conclusions: Current AI applications for COVID-19 are not ready for deployment in acute care settings, given their limited scope and poor quality. Our findings underscore the need for improvements to facilitate safe and effective clinical adoption of AI applications, for and beyond the COVID-19 pandemic.