ORIGINAL ARTICLE
|
Year: 2023 | Volume: 2 | Issue: 1 | Page: 4-12
Comparison of diagnostic accuracy of the artificial intelligence system with human readers in the diagnosis of portable chest x-rays during the COVID-19 pandemic
Leena R David1, Wiam Elshami1, Aisha Alshuweihi2, Abdulmunhem Obaideen2, Bashar Afif Issa1, Shishir Ram Shetty3
1 Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
2 Department of Medical Diagnostic Imaging, University Hospital Sharjah, Sharjah, United Arab Emirates
3 Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
Date of Submission: 23-Apr-2022
Date of Decision: 01-Jul-2022
Date of Acceptance: 02-Jul-2022
Date of Web Publication: 23-Sep-2022
Correspondence Address: Dr. Leena R David, Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
Source of Support: None; Conflict of Interest: None
DOI: 10.4103/abhs.abhs_29_22
Background: Evaluating the performance of available machine learning software is fundamental to ensuring trustworthiness and improving automated diagnosis. This study compared the diagnostic accuracy of artificial intelligence (AI) system reporting with human readers for portable chest anteroposterior (AP) x-rays acquired from patients in a semi-recumbent position.

Methods: Ninety-four patients who underwent portable chest AP x-rays with clinical suspicion of or confirmed COVID-19 were included in the study; among them, 65 were COVID-19 positive and 29 had symptoms. High-resolution computed tomography (HRCT) Chest was available for 39 patients. Images were read by two radiologists (R1, R2) and AI. In case of disagreement between R1 and R2, a third radiologist (R3) read the images; however, if HRCT Chest was available, we counted HRCT Chest instead of R3. Thus, the gold standard was HRCT, or R1 = R2, R1 = R3, or R2 = R3.

Results: The sensitivity of the AI system in detecting pleural effusion and consolidation was 100% and 91.3%, respectively. The specificity of the AI system in detecting pleural effusion and lung consolidation was 84% and 61%, respectively. However, there was no good agreement between the gold standard and AI for other chest pathologies.

Conclusion: Significant moderate agreement between AI and the gold standard was shown for pleural effusion and consolidation. There was no significant agreement between the gold standard and AI for the widened mediastinum, collapse, and other pathologies. Future studies with larger sample sizes, multicentric designs with multiple clinical indications, and additional radiographic views are recommended.
Keywords: Artificial intelligence, chest x-ray, COVID-19, high-resolution computed tomography
How to cite this article: David LR, Elshami W, Alshuweihi A, Obaideen A, Issa BA, Shetty SR. Comparison of diagnostic accuracy of the artificial intelligence system with human readers in the diagnosis of portable chest x-rays during the COVID-19 pandemic. Adv Biomed Health Sci 2023;2:4-12
How to cite this URL: David LR, Elshami W, Alshuweihi A, Obaideen A, Issa BA, Shetty SR. Comparison of diagnostic accuracy of the artificial intelligence system with human readers in the diagnosis of portable chest x-rays during the COVID-19 pandemic. Adv Biomed Health Sci [serial online] 2023 [cited 2023 Jun 9];2:4-12. Available from: http://www.abhsjournal.net/text.asp?2023/2/1/4/356788
Background
Since the COVID-19 outbreak began in December 2019, chest x-rays (CXR) and computed tomography (CT) have been vital diagnostic tools in the worldwide fight against the disease [1]. Reverse transcription-polymerase chain reaction (RT-PCR) is well known to be the gold standard test for COVID-19, although it may give false negatives when the disease is still at an early stage. Because it is less expensive and easier to perform, a CXR is usually the first imaging procedure. Furthermore, it can be obtained with a portable machine in isolation rooms or at the patient's bedside, which greatly simplifies the required sanitization process [2].
Interpreting a CXR is subjective, with interobserver variability that depends on the type of pathology [3]. A deep learning algorithm can detect multiple abnormalities, support radiologists in clinical decision-making, and can even outperform radiologists on CXRs [4] and chest CT [5-8].
Artificial intelligence (AI) technology is empowering imaging tools and aiding medical imaging and diagnosis [9]. A recent study assessed the current understanding of AI and found a significant lack of knowledge among radiology workers [10]. A study among radiography technologists in the MENA region showed that they have basic knowledge and technical information [11]; another study found that the main challenges for AI implementation in the MENA region were the lack of AI knowledge and development skills. Nevertheless, radiology staff showed high interest in integrating AI into radiology practice [12]. Globally, intensive efforts have been made to obtain fast and reliable solutions by incorporating AI systems to diagnose COVID-19 from chest radiographs. Adopting AI can enhance accurate image interpretation, reduce medical errors, and improve workflow in a radiology department [13]; still, Thrall et al. [14] argued that AI applications in medical imaging may perform worse on patient datasets than on the training data. Nevertheless, the integration of AI into medical imaging and radiology cannot serve its purpose without assessment, because of the possible occurrence of errors [15].
Therefore, evaluating the performance of the available machine learning software is fundamental to ensure trustworthiness and improve the automated diagnosis of chest images. The current study aimed to evaluate AI applied to CXRs and compare the diagnostic accuracy of the AI system with human readers for suspected and confirmed COVID-19 cases with more emphasis on pathologies detected or missed by the AI while reporting CXR.
Materials and methods
This research aimed to compare the diagnostic accuracy of AI system reporting with human readers during the COVID-19 pandemic using portable chest anteroposterior (AP) x-rays acquired in a supine and/or semi-recumbent position (CXR-AP).
Research ethics committee approval was obtained from the University Hospital Sharjah (ref no: UHS-HERC-050-23022021). The need for written informed consent was waived, as the images and reports were anonymized during data collection. We prospectively collected images of 94 adult patients at the University Hospital Sharjah with clinical suspicion of or confirmed COVID-19 who underwent portable CXR-AP using a FujiFilm FDR nano DR-XD 1000 and/or high-resolution chest computed tomography (HRCT, Hitachi Supra 16 slice) from March 1, 2021, to March 30, 2021. The chest radiographs were performed in isolation wards. We used the Lunit INSIGHT CXR v. 3.0.0.1 AI software installed on the FujiFilm FDR nano DR-XD 1000, which is trained to report CXRs.
Data collection
We created a spreadsheet to document the radiological findings [Appendix A (https://www.abhsjournal.net/articles/0/0/0/images/AdvBiomedHealthSci_0_0_0_0_356788_sm9.pdf)], such as pneumothorax, pleural effusion, lung collapse, consolidation, and other incidental findings. Images with the two radiologists' diagnostic reports (R1, R2) and the AI findings [Figure 1] were collected first. R1 and R2 had more than 25 and 20 years of experience in CXR reporting, respectively; they reported the images independently and were blinded to the AI report. To quantify the data, the COVID status of each patient (determined by RT-PCR on nasopharyngeal swabs), the presence of each pathology, and the availability of HRCT Chest were coded as one for positive/present/available and zero for suspected-only/absent/unavailable. One of the co-authors then compiled the reports from R1 and R2 and added the AI findings to the data. Before statistical analysis, the dataset was checked for missing data and for disagreement between the reviewers (R1 and R2). When there was disagreement, we used a noncontrast volumetric HRCT Chest scan at 1–1.25 mm section thickness, acquired within 2 days, as the gold standard if it was available, as HRCT has high sensitivity for the diagnosis of COVID-19 [16]. We did not perform HRCT Chest solely for the purposes of this research, considering that the risk would outweigh the benefit. Otherwise, the images were sent to a third radiologist (R3), with more than 20 years of reporting experience, who was blinded to the reports of R1 and R2. For the cases R3 reported, the gold standard was taken as the common finding between R1, R2, and R3. In short, images were read by two radiologists (R1, R2) and AI; in case of disagreement between R1 and R2, a third radiologist (R3) read the images, and if HRCT Chest was available, it was counted instead of R3.
This multiple-reviewer approach and the CT examinations helped to analyze the images without bias during data collection [Figure 2].
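The adjudication rule described above can be sketched as a small function. This is an illustrative reconstruction, not the authors' actual tooling; the function name and encoding (booleans per pathology, `None` when an HRCT scan or R3 read is absent) are assumptions.

```python
# Hypothetical sketch of the gold-standard adjudication described above.
# Reader inputs are booleans: True = pathology present. `hrct` is None
# when no HRCT Chest scan was available for the patient.

def gold_standard(r1, r2, hrct=None, r3=None):
    """Adjudicate one pathology for one patient.

    HRCT Chest, when available, overrides the readers; otherwise R1 and R2
    must agree, and a third reader (R3) resolves any disagreement.
    """
    if hrct is not None:   # HRCT available: use it as the gold standard
        return hrct
    if r1 == r2:           # readers agree: their common finding stands
        return r1
    if r3 is None:
        raise ValueError("R3 read required when R1 and R2 disagree and no HRCT exists")
    # R3 necessarily sides with one initial reader (R1 = R3 or R2 = R3)
    return r3

# Example: readers disagree, no HRCT, R3 confirms the finding
print(gold_standard(r1=True, r2=False, r3=True))  # True
```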
Statistical analysis
We used the mean and standard deviation to summarize normally distributed data, and binary/categorical variables were presented as counts (frequencies) and percentages. Three human readers read the images, and HRCT Chest scans were available for a subset of patients. As the gold standard, we used HRCT Chest or, when there was disagreement between the human readers, the common response among the three readers. Thus, for the assessment of the accuracy of AI, we created a gold standard for each pathology based on these criteria. The chi-square test or Fisher's exact test was used to assess the association between COVID status and the gold standard for each pathology. Cohen's kappa statistic was used to assess the agreement of AI with each of the human readers and with the gold standard. For the assessment of the diagnostic accuracy of AI for each pathology, we used the receiver operating characteristic (ROC) curve and area under the curve (AUC), along with confidence limits and accuracy measures such as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). A P value less than 0.05 was considered statistically significant. Data were entered in Microsoft Excel and analyzed using SPSS version 20.0.
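For readers unfamiliar with the agreement statistic, Cohen's kappa corrects observed agreement for the agreement expected by chance from each rater's marginal frequencies. The sketch below (not the authors' SPSS workflow; the example labels are made up) shows the computation for two paired sets of binary findings, such as AI versus the gold standard:

```python
# Minimal sketch of Cohen's kappa for agreement between two raters
# (e.g., AI vs. the gold standard) on a binary finding.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two paired label lists."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # observed agreement: fraction of cases where the raters match
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, from each rater's marginal label frequencies
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels only (1 = pathology present, 0 = absent)
ai   = [1, 1, 0, 0, 1, 0, 0, 0]
gold = [1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(ai, gold), 3))  # 0.467
```

A value of 0.41–0.60 is conventionally read as "moderate" agreement, which is the band the effusion, cardiomegaly, and consolidation results fall into.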
Results
This study included 94 patients with COVID-19 confirmed by RT-PCR or clinically suspected on the basis of symptoms. The study aimed to assess the diagnostic accuracy of the AI system against human readers in the diagnosis of pathologies such as pneumothorax, pleural effusion, cardiomegaly, consolidation, widened mediastinum, collapse, and any other pathologies that are common on CXRs.
The age range was 17 to 98 years, with a mean of 62.50 ± 20.30 years. One-third of the patients belonged to the 17–44 years age group and the majority to the older age groups (≥45 years). The sex distribution was 40/94 males and 54/94 females. COVID-19 status was positive for 69.2% of patients, and HRCT was available for 42% [Figure 3].

Figure 3: Distribution of baseline characteristics of the study population
[Table 1] shows an overall summary of the responses for each pathology. We report the counts of responses by the human readers and AI, and the criteria-wise distribution of the gold standard. The gold standard classification was HRCT, R1 = R2, or either R1 = R3 or R2 = R3. Reader 1, reader 2, and AI reported all 94 cases. In formulating the gold standard, for the 39 patients who underwent HRCT Chest, the HRCT finding was taken as the gold standard; for the 55 patients without HRCT, we checked whether R1 = R2, and when the initial readers disagreed, reader 3 adjudicated.

Table 1: Summary of responses of readers, AI, and the gold standard for each pathology.
Further, we assessed the association between the outcome of each pathology and COVID status and found a statistically significant association between consolidation and COVID status. Of the 46 cases with consolidation, the majority (38, 82.6%) were COVID positive, with a small P value (0.006) [Table 2], indicating a statistically significant relationship; among patients without consolidation, COVID status was equally distributed. For all other pathologies, we found no evidence of an association with COVID status.

Table 2: Association of COVID status with the gold standard of different pathologies.
For pleural effusion, cardiomegaly, and consolidation, the kappa statistic fell in the 0.41–0.60 range, indicating significant moderate agreement between AI and the gold standard [pleural effusion: kappa = 0.489 (P < 0.0001); cardiomegaly: kappa = 0.527 (P < 0.0001); consolidation: kappa = 0.528 (P < 0.0001)]. Pleural effusion showed good sensitivity (100%) and specificity (84.9%); cardiomegaly showed good sensitivity (96.2%) but lower specificity (69.1%); for consolidation, sensitivity was high (91.3%) but specificity was lower (61.7%).
For the other parameters, the widened mediastinum, collapse, and other pathologies, we did not find good agreement between the gold standard and AI. The diagnostic accuracy measures sensitivity and specificity were also low [Table 3].

Table 3: Assessment of diagnostic accuracy of AI with the gold standard for each pathology.
The assessment of the diagnostic accuracy of AI against the gold standard is shown in [Table 4] and the ROC curves [Figure 4]. For pleural effusion, AUC = 0.924 with a 95% confidence interval (CI) of 0.869–0.980; we observed good diagnostic accuracy (sensitivity = 100%, specificity = 87.9%, PPV = 38.1%, and NPV = 100%). For cardiomegaly, AUC = 0.826 (95% CI = 0.742–0.911); sensitivity and NPV were high (>90%), but specificity and PPV were lower (<70%). Consolidation also had a significant AUC (0.765, 95% CI = 0.665–0.865); here too, sensitivity and NPV were high (>80%), but specificity and PPV were lower (<80%). The other pathologies, the widened mediastinum, collapse, and any other pathologies, did not yield results significant enough to support a valid conclusion.

Figure 4: ROC curve for diagnostic accuracy of various pathologies between AI and the gold standard
Table 4: Diagnostic accuracy of various pathologies between AI and the gold standard.
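The four accuracy measures reported here all follow from a 2x2 confusion matrix. The sketch below is illustrative only (the counts are invented, not the study's underlying data), and it shows why a rare finding can pair perfect sensitivity with a low PPV, as seen for pleural effusion:

```python
# Illustrative sketch of the accuracy measures reported above, derived
# from a 2x2 confusion matrix. Counts are made up for demonstration.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # how trustworthy a positive call is
        "npv": tn / (tn + fn),          # how trustworthy a negative call is
    }

# With few true positives and some false positives, sensitivity can be
# perfect while PPV stays low.
m = diagnostic_accuracy(tp=8, fp=13, fn=0, tn=73)
print(m["sensitivity"], round(m["ppv"], 3))  # 1.0 0.381
```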
Discussion
This study aimed to assess the diagnostic accuracy of the AI system against human readers in the diagnosis of pathologies that are common on CXRs. We found an association between consolidation and COVID status. Pleural effusion, cardiomegaly, and consolidation showed significant moderate agreement between the AI and the gold standard (HRCT, R1 = R2, or either R1 = R3 or R2 = R3). The widened mediastinum, collapse, and other pathologies showed no significant agreement between the AI and the gold standard.
Because of the complex healthcare scenario and ever-increasing demand for data in the healthcare sector, the scope and demand for AI will be on a steep rise in the years to come [17]. The initial trend of AI application in healthcare mainly concentrated on the conventional machine learning approach trying to predict the possible prognosis of a disease condition or aid in choosing the most likely treatment option for a disease condition [18].
AI is among the most promising clinical applications in the field of medical imaging. This has generated enormous research interest, focused mainly on creating and refining AI tools capable of detecting and quantifying disease conditions [19].
Several recently published studies based on the diagnostic efficacy of AI focused mainly on determining the sensitivity and specificity [20]. However, few other studies have also concentrated on AI-based clinical outcomes determination [21].
Coronavirus disease has had calamitous consequences for global populations, evolving into the most significant global health problem of this century [22]. Among the potential diagnostic modalities for COVID-19, CXRs have proven to be a rapid and economical imaging tool [23].
Besides speed and cost, portable and fixed x-ray machines are commonly available in most healthcare settings throughout the world, making them easily accessible [24]. The RT-PCR test is usually used for the diagnosis of COVID-19, whereas CXRs and HRCT Chest usually act as first-line medical imaging tools [25]. In the present study, the RT-PCR test was used to determine the COVID status of all patients.
Current research publications suggest that deep learning techniques may improve the specificity of chest imaging in COVID-19 cases, in line with other studies using AI systems and CXRs of COVID patients [23,26-31].
In the present study, we used Lunit INSIGHT CXR v. 3.0.0.1 to determine the diagnostic parameters of AI in the study subjects. One group of researchers [23] developed and employed a convolutional neural network (CNN) in which a dense layer was added on top of a pretrained baseline CNN (EfficientNetB0). Other researchers [27] used a deep transfer learning method with three CNN architectures (InceptionV3, ResNet50, and VGG19) on CXRs, while another research team [28] classified x-rays as COVID or normal using baseline ResNet, Inception-v3, Inception ResNet-v2, DenseNet169, and NASNetLarge. Another group of investigators [29] augmented five deep learning architectures and evaluated the robustness and diagnostic performance of these models for COVID-associated radiographic changes on the CXR. Likewise, some researchers [30] used three previously verified CNN models (VGG-16, ResNet50, and MobileNetV2), whereas other investigators [31] used the M-qXR algorithm in their experiment.
In our research, CXRs of 94 patients diagnosed with COVID-19 were used to test Lunit INSIGHT CXR v. 3.0.0.1, whereas another group of scientists [23] used 15,153 CXRs for training, validating, and testing their AI model. Other investigators [27] used a binary classification dataset of 708 x-ray images, comprising 354 images of COVID-19 patients and 354 images of normal individuals. Another team [28] utilized a dataset of 30,000 CXR images, which included 15,000 pneumonia cases, 7500 nonpneumonia cases, and 7500 cases with no findings. One group of scientists [29] employed 1,171 CXRs from their hospital dataset for their AI-based project. Ting et al. [30] used a total of 6,432 CXR images to train and test their AI model, while another study [31] used 625 CXR images for validation. The dataset in the present study was smaller than those of the other studies mainly because the present study focused on validating a preexisting AI model rather than training and testing a new one.
In the present study, the sensitivity of the AI system in detecting lung pathologies varied from 100% for pleural effusion to 91.3% for consolidation. Similarly, the AI model used by one research group [23] showed 90% sensitivity, while another model [29] displayed sensitivity in the range of 91%–96%. The three AI models used by some researchers [30] exhibited sensitivity in the range of 93%–96%, and the M-qXR algorithm used by another research team [31] exhibited a comparable sensitivity of 94%. The sensitivity of the AI model in the present study is therefore comparable to those of the AI models used in contemporary studies. However, the software we used on a mobile x-ray machine is designed to detect only certain pathologies on CXR. The significant P value may reflect the fact that we applied it during the COVID-19 pandemic, when most pathologies were infectious and therefore appeared as consolidation; nevertheless, this finding gives us more assurance in using this software to detect chest infections, particularly during pandemic triage.
In this study, the specificity of the AI system in detecting lung pathologies varied from 84% for pleural effusion to 61% for lung consolidation. However, the AI model used in a previous study [23] showed a comparatively higher specificity of 97%, and another study [29] also reported higher specificity values (94%–98%). The three AI models in another study [30] exhibited specificity in the range of 87%–99%, while other researchers [31] noted a specificity of 84%, comparable to the specificity value of the present study.
In this research, the AUC with a 95% CI was 0.924 for pleural effusion and 0.765 for lung consolidation. In some studies [27,28], the AUC ranged from 0.978 to 0.989 among the AI models used, and other studies [29,30] showed AUC values of up to 0.99. The AUC values of our study are comparable to those of the other studies.
In the present work, the positive and negative predictive values for pleural effusion were 38.1% and 3.7%, respectively. However, some researchers [23,29] reported higher PPVs of 96% and 91%, respectively. The probable cause of the difference could be the smaller dataset used in the present study. Furthermore, all radiographs in the present study were acquired in the anteroposterior projection, which is well known to lower the rate of detection/interpretation of some pathologies, such as pleural effusion, cardiomegaly, and widened mediastinum.
Conclusion
Our study found moderate, significant agreement between the AI and the gold standard (HRCT Chest, or R1 = R2, or R3 = R1/R2) for pleural effusion and consolidation. There was no significant agreement between the gold standard and AI for the widened mediastinum, collapse, and other pathologies. Nevertheless, the findings of the current study indicate that the AI application used is capable of detecting chest infections. This might help improve radiologists' efficiency under increased workload, especially during a pandemic.
Study limitations
The images used in this study were acquired only in the anteroposterior projection. Therefore, further studies are required to validate the same AI program on upright posteroanterior chest radiographs. Future studies with different populations, larger sample sizes, various clinical indications, and additional radiographic views are also recommended.
Authors' contributions
LRD conceived the research concept and developed the research design; AA and AO conducted the data collection; WE, BI, and SS supported the statistical analysis and discussion. All authors contributed substantially to the write-up of the article, and all take responsibility for the contents and integrity of this article.
Ethical statement
The study was approved by the Research and Ethics Committee, University Hospital Sharjah, with a reference number: UHS-HERC-050-23022021. The need for written informed consent from the patients was waived, as the images and reports were anonymized while collecting the data.
Financial support and sponsorship
Not applicable.
Conflict of interests
No conflict of interests declared.
Data availability statement
LRD and AA hold the data, and it will be available if requested.
References
1. Joob B, Wiwanitkit V. Radiology management and COVID-19 in resource limited setting. Acad Radiol 2020;27:750.
2. Castiglioni I, Ippolito D, Interlenghi M, Monti CB, Salvatore C, Schiaffino S, et al. Artificial intelligence applied on chest x-ray can aid in the diagnosis of COVID-19 infection: A first experience from Lombardy, Italy. MedRxiv 2020. Available from: https://www.medrxiv.org/content/10.1101/2020.04.08.20040907v1. [Last accessed on 21 Jun 2022].
3. Moifo B, Pefura-Yone EW, Nguefack-Tsague G, Gharingam ML, Tapouh JR, Kengne AP, et al. Inter-observer variability in the detection and interpretation of chest x-ray anomalies in adults in an endemic tuberculosis area. Open J Med Imaging 2015;5:143.
4. Putha P, Tadepalli M, Reddy B, Raj T, Chiramal JA, Govil S, et al. Can artificial intelligence reliably report chest x-rays?: Radiologist validation of an algorithm trained on 2.3 million x-rays. arXiv preprint 2018. Available from: https://arxiv.org/abs/1807.07455. [Last accessed on 21 Jun 2022].
5. Hardy M, Harvey H. Artificial intelligence in diagnostic imaging: Impact on the radiography profession. Br J Radiol 2020;93:20190840.
6. Jin C, Chen W, Cao Y, Xu Z, Tan Z, Zhang X, et al. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat Commun 2020;11:5088.
7. Wang L, Lin ZQ, Wong A. COVID-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest x-ray images. Sci Rep 2020;10:19549.
8. Chowdhury ME, Rahman T, Khandakar A, Mazhar R, Kadir MA, Mahbub ZB, et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020;8:132665-76.
9. Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, et al. Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Rev Biomed Eng 2021;14:4-15.
10. Abuzaid MM, Elshami W, Tekin H, Issa B. Assessment of the willingness of radiologists and radiographers to accept the integration of artificial intelligence into radiology practice. Acad Radiol 2022;29:87-94.
11. Abuzaid MM, Elshami W, McConnell J, Tekin HO. An extensive survey of radiographers from the Middle East and India on artificial intelligence integration in radiology practice. Health Technol (Berl) 2021;11:1045-50.
12. Abuzaid MM, Tekin HO, Reza M, Elhag IR, Elshami W. Assessment of MRI technologists in acceptance and willingness to integrate artificial intelligence into practice. Radiography (Lond) 2021;27 Suppl 1:83-7.
13. Topol EJ. High-performance medicine: The convergence of human and artificial intelligence. Nat Med 2019;25:44-56.
14. Thrall JH, Fessell D, Pandharipande PV. Rethinking the approach to artificial intelligence for medical image analysis: The case for precision diagnosis. J Am Coll Radiol 2021;18:174-9.
15. Walsh C, Larkin A, Dennan S, O'Reilly G. Exposure variations under error conditions in automatic exposure controlled film-screen projection radiography. Br J Radiol 2004;77:931-3.
16. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020;296:200642.
17. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6:94-8.
18. Lee SI, Celik S, Logsdon BA, Lundberg SM, Martins TJ, Oehler VG, et al. A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia. Nat Commun 2018;9:42.
19. Oren O, Gersh BJ, Bhatt DL. Artificial intelligence in medical imaging: Switching from radiographic pathological data to clinically meaningful endpoints. Lancet Digit Health 2020;2:e486-8.
20. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit Health 2019;1:e271-97.
21. Bello GA, Dawes TJW, Duan J, Biffi C, de Marvao A, Howard LSGE, et al. Deep learning cardiac motion analysis for human survival prediction. Nat Mach Intell 2019;1:95-104.
22. Cascella M, Rajnik M, Aleem A, Dulebohn SC, Di Napoli R. Features, evaluation, and treatment of coronavirus (COVID-19). StatPearls [Internet]; 2022. Available from: https://www.ncbi.nlm.nih.gov/books/NBK554776/. [Last accessed on 04 May 2022].
23. Nikolaou V, Massaro S, Fakhimi M, Stergioulas L, Garn W. COVID-19 diagnosis from chest x-rays: Developing a simple, fast, and accurate neural network. Health Inf Sci Syst 2021;9:36.
24. Ng MY, Lee EYP, Yang J, Yang F, Li X, Wang H, et al. Imaging profile of the COVID-19 infection: Radiologic findings and literature review. Radiol Cardiothorac Imaging 2020;2:e200034.
25. Baratella E, Crivelli P, Marrocchio C, Bozzato AM, Vito A, Madeddu G, et al. Severity of lung involvement on chest x-rays in SARS-coronavirus-2 infected patients as a possible tool to predict clinical progression: An observational retrospective analysis of the relationship between radiological, clinical, and laboratory data. J Bras Pneumol 2020;46:e20200226.
26. Ghaderzadeh M, Asadi F. Corrigendum to "Deep learning in the detection and diagnosis of COVID-19 using radiology modalities: A systematic review." J Healthc Eng 2021;2021:9868517.
27. Awan MJ, Bilal MH, Yasin A, Nobanee H, Khan NS, Zain AM. Detection of COVID-19 in chest x-ray images: A big data enabled deep learning approach. Int J Environ Res Public Health 2021;18:10147.
28. Punn NS, Agarwal S. Automated diagnosis of COVID-19 with limited posteroanterior chest x-ray images using fine-tuned deep neural networks. Appl Intell (Dordr) 2021;51:2689-702.
29. Baltazar LR, Manzanillo MG, Gaudillo J, Viray ED, Domingo M, Tiangco B, et al. Artificial intelligence on COVID-19 pneumonia detection using chest x-ray images. PLoS One 2021;16:e0257884.
30. Ting P, Kasam A, Lan K. Applications of convolutional neural networks in chest X-ray analyses for the detection of COVID-19. Ann Biomed Sci Eng 2022;6:001-007.
31.