Open Access
29 November 2022
Intelligent diagnosis of coronavirus with computed tomography images using a deep learning model
Marko Sarac, Milos Mravik, Dijana Jovanovic, Ivana Strumberger, Miodrag Zivkovic, Nebojsa Bacanin
Abstract

The coronavirus (COVID-19) disease appeared as a respiratory system disorder and has triggered pneumonia outbreaks globally. As COVID-19 spread drastically around the world, computed tomography (CT) has helped to diagnose it rapidly. It is imperative to implement a faultless computer-aided model for detecting COVID-19-affected patients through CT images. Therefore, a detail extraction pyramid network (DEPNet) is proposed to predict COVID-19-affected cases from CT images of the COVID-CT-MD dataset. In this study, the COVID-CT-MD dataset is applied to evaluate the accuracy of the deep learning technique; the dataset has CT scans of 169 patients, among whom 60 are COVID-19 positive and 76 are normal. The affected patients were clinically verified at a standard hospital. The deep learning-oriented CT diagnosis model is implemented to detect COVID-19-affected patients. The experiment revealed that the proposed model categorized COVID-19 cases from other respiratory-oriented diseases with 99.45% accuracy. Further, this model selected the exact lesion parts, mainly ground-glass opacity, which helped the doctors to diagnose visually.

1. Introduction

Coronavirus (COVID-19) has dragged the world into endangering conditions, and the World Health Organization (WHO) has categorized the contagious disease as a pandemic. This infection is a kind of virus that severely affects the respiratory tract and has a visible likeness to SARS.1,2 This virus is a highly infectious disease, and the number of affected cases increases at a tremendous rate. To bring an end to this situation, the entire world is fighting against this virus with the fullest effort. Further, the COVID-19 virus is seriously threatening healthcare systems due to its highly contagious character. Moreover, the researchers are encouraged to battle against COVID-19 by implementing novel techniques in its treatment to eradicate this virus from the younger generation. The effectiveness of the infection assessment is essential to offer instant treatment to COVID-19-affected patients.3 Based on the statement of the WHO on April 3, 2022, 489 million infected cases have been recorded, and around 6 million deaths have occurred globally. The affected cases have been recorded in nearly the whole world from the United Kingdom, United States, Germany, India, Italy, the Korean peninsula, and Japan.4–6

The above situation motivates us to perform research on automatic detection and timely prediction methods to manage the disease. Even though many kinds of diagnostic tests are available to determine the infected patients, these are not efficient at stopping the disease from spreading.7 A smart health care system was designed using the tremendous capability of the cloud, mobile computing, and IoT sensors.8 To store the information about patients and monitor them, several medical services have been moved to IoT-oriented models to enhance the services and benefits of the health care system.9 In the current scenario, developing technologies, such as real-time smart monitoring, smart protection methods, and services, are not effectively carried out in public places, schools, or hospitals.10,11 The contribution of assistance with artificial intelligence (AI) has many success stories in the clinical diagnostic area of COVID-19 cases, and it provides a system for remote automatic surveillance.12 The early prediction of COVID-19 is a significant problem because the infectious area is scattered and small.13 Computed tomography (CT) is playing an important role in the early prediction and diagnosis of COVID-19.14

Sometimes the results of radiologists produce higher rates of false positives (FPs),15 which shows that a sophisticated computerized lung CT identification technique is desperately required for exact detection of the affected cases, patient screening, and virus surveillance. As in the classification of general images, AI is applied to detect some kinds of diseases, and medical images are analyzed in AI by applying classification techniques.15,16 A convolutional neural network (CNN)17 is a broadly applied feed-forward network that has several models and helps to extract the features in the given images. The above models are not efficient at classifying CT images because these images are fine-grained and possess very small interclass disparities; hence, differentiating them is complicated.18,19 Lately, some techniques have been implemented in the learning methods of fine-grained images, with deep learning models for CT images being convincing.20 Mainly, these techniques need annotation only at the image level, and the classification results reach higher accuracy by training the models and refining the annotations.21 The above methods have some limitations, and to rectify these limitations, this research contributes the proposed DEPNet technique to predict COVID-19-affected cases from CT scans by applying a classification algorithm.

The remaining sections of this article are organized as follows: Sec. 2 states the related studies to the proposed work in detail, Sec. 3 consists of the explanation of the DEPNet methodology, Sec. 4 describes the performance analysis, and Sec. 5 explains the conclusion of the paper.

2. Related Study

In Ref. 22, the authors proposed a model by developing a pristine signal algorithm after removing the unwanted data termed a low-cost pervasive sensor. This model is involved in many experiments, such as coughing, breathing, and finding the respiratory tract. Moreover, the model obtained 98.99% respiratory system estimation accuracy and 97.34% accuracy in cough estimation. This model helps to screen patients who are affected by COVID-19 and diagnose and monitor patients on a large scale.

In Ref. 23, Hossain et al. introduced deep learning techniques for evaluating the direction of humans by examining their threshold range. Vedaei et al.24 implemented a novel intelligent model for categorizing x-ray data into three various groups. In that method, they employed three algorithms with the deep learning CNN architecture. The accuracy of the model was evaluated with the benchmark x-ray image dataset of the affected and nonaffected patients, and it produced three different accuracies for three algorithms. The maximum accuracy was recorded as 90.3%.

In Ref. 25, the authors introduced an AI system for detecting COVID-19-affected cases via a recorded-voice model using a smartphone. They designed an AI voice recognition model by collecting the features of COVID-19-affected cases and obtained an accuracy of 98.49%. Xu et al.26 introduced a beyond-5G (B5G) framework that applies 5G networks, which possess minimal latency and high bandwidth, to analyze the x-ray images and CT scans of patients. Gozes et al.27 designed a robust IoT model for social distance monitoring to defend against COVID-19 cases. In this COVID-19 period, deep learning-oriented models have been implemented effectively for the analysis and classification of chest CT data.28,29 Several deep learning models have been introduced for COVID-19 testing,29,30 observation,31,32 and detection in the hospital.33 In Ref. 17, the authors designed a model based on the pretrained 50-layer residual neural network (ResNet-50), which proved to be robust for predicting the features in images. Further, they added a feature-pyramid-like network that helped to collect the top-n details from every image.15 To learn the important features in the image, an attention unit was combined based on the extracted features.

In the current research, the COVID-CT-MD dataset was applied; it has chest CT scan images of 169 people. Out of 169, 60 cases were affected by COVID-19, and 76 were healthy people. To detect the abnormality in the chest CT scan images efficiently, a deep learning model named detail extraction pyramid network (DEPNet) was designed. First, the proposed model predicted the main lesion area at several scales by combining the already trained weakly supervised deep learning network (WSDNet) and the feature pyramid network (FPN). Based on the predicted regions, a WSDNet was used to collect the features in every region and the common features among the regions. Then, the collected attributes were combined with the global features of the original image, and for image-level detection, the features were applied to a multilayer perceptron. Finally, for patient-level diagnosis, the detections on every chest CT image frame of a single patient were integrated. The proposed model showed 99.45% accuracy for differentiating the COVID-19 patients from healthy people by training and validating on the COVID-CT-MD dataset. Moreover, the ground-glass opacity and lesion features extracted by the proposed DEPNet help the doctors to diagnose visually.

3. Proposed DEPNet

The proposed method introduces a deep learning-oriented chest CT scan analysis technique to predict COVID-19-induced pneumonia and to trace the prime lesions. The fully automated CT diagnosis model was implemented on three important levels. In the first level, the main portion of the lung was extracted; in the second level, DEPNet was applied to retrieve the image-level detections; and in the third level, the image-level detections were integrated to obtain each patient-level diagnosis.
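The three levels described above can be expressed as a minimal pipeline sketch. The function names, the thresholded stand-in for the network, and the toy volume are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def extract_lung(ct_slice):
    """Level 1 (placeholder): segment/crop the lung region of one CT slice."""
    return ct_slice  # the real OpenCV-based extraction is omitted

def image_level_detect(lung_slice):
    """Level 2 (placeholder): a DEPNet-style score for one slice, in [0, 1]."""
    return float(lung_slice.mean() > 0.5)  # simple stand-in for the network

def diagnose_patient(ct_slices):
    """Level 3: integrate the slice-level detections (average pooling)."""
    scores = [image_level_detect(extract_lung(s)) for s in ct_slices]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
volume = [rng.random((96, 96)) for _ in range(4)]  # toy 4-slice "scan"
risk = diagnose_patient(volume)
```

The pooling in the last step mirrors the average pooling the paper applies for patient-level detection.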

3.1. Data Preprocessing

The 3D CT scan of every patient had nearly 200 images, and adjacent images were highly similar. Nearly 15 images, taken at equal intervals, were selected as representatives. The speed of the prediction increased after removing the redundant images. The Open Source Computer Vision Library (OpenCV) package was applied in the research to detect the incomplete images, determine the whole lung images in the CT scan, and remove the boundary of the image. To reduce the overfitting problem in the deep learning model, the blank portion of the image was replaced with other lung portion images. Finally, 60 COVID-19 cases with 565 CT images, 33 bacterial pneumonia cases with 232 slices, and 76 normal people with 634 slices were obtained in the research.
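For instance, the equal-interval slice selection can be sketched as follows; the OpenCV-based lung-boundary cropping described above is omitted, and the toy volume is hypothetical:

```python
import numpy as np

def subsample_slices(volume, n_keep=15):
    """Keep about n_keep representative slices at equal intervals,
    since adjacent CT slices are highly similar."""
    n_keep = min(n_keep, len(volume))
    idx = np.linspace(0, len(volume) - 1, num=n_keep, dtype=int)
    return [volume[i] for i in idx]

# toy volume: 200 "slices", each tagged with its index
volume = [np.full((8, 8), i, dtype=np.uint8) for i in range(200)]
kept = subsample_slices(volume)
```

`np.linspace` always includes the first and last slice, so the kept set spans the whole scan.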

3.2. DEPNet

The DEPNet was designed according to the already trained WSDNet, which was proved to be efficient in predicting the features of the images. Moreover, the FPN was included to collect the top-n information in every image. As shown in Fig. 1, the COVID-CT-MD dataset is given as input to the WSDNet to collect the features, and then these features are the input to dense and pooling layers for collecting the global features. Then, the FPN model extracts the local information in the images.

Fig. 1

Architecture of the DEPNet.


Then, feature maps sized 14×14 and 7×7 and regions of scales 48×48 and 96×96, respectively, were used for small portions because lesion portions were commonly tiny, and applying a bigger feature map would produce more noise in the model. Thus, the FPN detected the important regions with high scores. Aggregation of the detected features is performed, and a multilayer perceptron is implemented for the calculation. Finally, the image-level frame detection is performed, and the selected frames are integrated to get the output.
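The FPN's selection of high-scoring regions can be illustrated with a sliding-window sketch over a small score map; the window size, map size, and scores here are hypothetical:

```python
import numpy as np

def top_n_regions(score_map, region=(2, 2), n=3):
    """Slide a region-sized window over a small feature score map and
    return the n windows with the highest summed score as (row, col, score)."""
    h, w = score_map.shape
    rh, rw = region
    cands = [
        (r, c, float(score_map[r:r + rh, c:c + rw].sum()))
        for r in range(h - rh + 1)
        for c in range(w - rw + 1)
    ]
    cands.sort(key=lambda t: -t[2])
    return cands[:n]

fmap = np.zeros((7, 7))   # a 7x7 feature score map, as in the text
fmap[1:3, 1:3] = 5.0      # a strong "lesion" response
best = top_n_regions(fmap)
```

A small window keeps the search sensitive to tiny lesions, matching the rationale above for avoiding large feature maps.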

Figure 1 describes the DEPNet analysis model. The dataset is preprocessed and sent to the WSDNet for selecting the appropriate global features. Weak supervision helps to annotate and label the unknown dataset variables with heuristic pattern-identification techniques. After processing the features with multiple WSDNets, feature aggregation is performed. Further, a multilayer perceptron helps to classify the features, and the images are detected. Finally, the image frames are integrated for diagnosing COVID-19 with high accuracy.

Based on the extracted portions, top-n subimages are cropped from the real image. Then, the real image is multiplied by the relative portions of the top-n subimages with the high scores to create a fresh image, and the area not in the top-n subimages is set to null. After that, these subimages and the newly created image are given as input to the WSDNet to gather important features; the WSDNet is depicted in Fig. 2.
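The nulling of everything outside the top-n regions can be sketched as a masking step; the box coordinates and toy image are hypothetical:

```python
import numpy as np

def mask_outside_regions(image, boxes):
    """Build a fresh image that keeps only the given boxes (r0, c0, r1, c1);
    the area not covered by any top-n box is set to zero (null)."""
    out = np.zeros_like(image)
    for r0, c0, r1, c1 in boxes:
        out[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return out

img = np.arange(16.0).reshape(4, 4)
masked = mask_outside_regions(img, [(0, 0, 2, 2)])  # keep one 2x2 box
```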

Fig. 2

WSDNet architecture.


Finally, the collected features of the original image, the top-n sub-images, and the created image were aggregated and converted to a one-dimensional (1D) vector, which was sent to a multilayer perceptron to detect image-level features. According to the FPN, the proposed model not only produces the patient-level detection for helping medical practitioners but also translates the detection by measuring every pixel of the real image. For every affected case, the detection was done on every image frame, and the image-level measured outputs were integrated for the patient-level detection. In the proposed method, average pooling was applied to aggregate the image-level illness measurements of every person for the patient-level detection.
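The aggregation into one 1D vector for the perceptron can be sketched as a concatenation of the global feature with the per-region features; the feature sizes here are hypothetical:

```python
import numpy as np

def aggregate_features(global_feat, region_feats):
    """Concatenate the global feature of the original image with the
    features of the top-n sub-images into one 1D vector for the MLP head."""
    return np.concatenate([global_feat] + list(region_feats))

g = np.ones(4)                             # global feature of the original image
regions = [np.zeros(2), np.full(2, 2.0)]   # two region features
vec = aggregate_features(g, regions)
```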

Algorithm 1

DEPNet

Step 1: Load the input images – COVID-CT-MD dataset.
Step 2: Preprocess the images – extract the lung portion.
Step 3: Give the preprocessed COVID-CT-MD dataset as input to the WSDNet to collect the features.
Step 4: Feed the collected features to the dense and pooling layers to collect the global features.
Step 5: Extract the local information of the images with the FPN model.
Step 6: Aggregate the collected features of the original image and the top-n sub-images.
Step 7: Convert the created images to a 1D vector and send them to the multilayer perceptron to detect the image-level features.
Step 8: Produce the patient-level detection to help the doctors and translate the detection by measuring every pixel of the real image.
Step 9: Integrate the image-level measured outputs for the patient-level detection.

In the proposed technique, the COVID-CT-MD dataset was used to evaluate the DEPNet model. The benchmark dataset was preprocessed and given as input to the WSDNet to collect the features. After that, those features were forwarded to dense and pooling layers for collecting the global features, and the FPN extracted the local information of the images. The collected features of the original image were aggregated with the top-n sub-images. The created images were converted to a 1D vector and sent to the multilayer perceptron to detect the image-level features. Therefore, the model not only produces the patient-level detection to help the doctors but also translates the detection by measuring every pixel of the original image. The image-level measured outputs are integrated for the patient-level detection.

4. Experimental Evaluation

In the experimental analysis of this proposed work, n=3, which means that three subimages are collected from every input image. To prove it practically, a model was implemented with two functions: classifying COVID-19 cases from normal people and separating COVID-19 cases from other disease-affected cases and normal people. There are two final prediction classes in our proposed model: class 0 (non-COVID) and class 1 (COVID). For every function, the patient-level partition method was applied by employing random division with 60%, 30%, and 10% for training, testing, and validation, respectively. The training images were applied to train the proposed model, and the validation images were employed to tune the hyperparameters for improved effectiveness. The final tuned design was separately evaluated on the test images. The precision, accuracy, recall, specificity, and F1 score are estimated using Eqs. (1)–(5).
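A patient-level 60/30/10 random division, so that all slices of one patient fall into a single partition, can be sketched as follows; the seed is a hypothetical choice:

```python
import random

def patient_split(patient_ids, seed=0):
    """Shuffle patient IDs and cut them into 60% training, 30% testing,
    and 10% validation partitions at the patient level."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_test = int(0.6 * n), int(0.3 * n)
    return ids[:n_train], ids[n_train:n_train + n_test], ids[n_train + n_test:]

train, test, val = patient_split(range(169))  # 169 patients in COVID-CT-MD
```

Splitting by patient rather than by slice prevents near-duplicate adjacent slices of one scan from leaking between the training and test sets.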

The evaluated metrics are calculated as

Eq. (1)

Accuracy_DEPNet = (TP + TN) / (TP + TN + FP + FN),

Eq. (2)

Precision_DEPNet = TP / (FP + TP),

Eq. (3)

Recall_DEPNet = TP / (FN + TP),

Eq. (4)

Specificity_DEPNet = TN / (FP + TN),

Eq. (5)

F1 score = (Precision + Recall) / 2.
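A direct evaluation of Eqs. (1)–(5) from confusion-matrix counts can be sketched as follows; note that the paper defines the F1 score as the arithmetic mean of precision and recall (the conventional F1 is the harmonic mean). The sample counts are hypothetical:

```python
def depnet_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall, specificity, and F1 score
    exactly as written in Eqs. (1)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    specificity = tn / (fp + tn)
    f1 = (precision + recall) / 2  # arithmetic mean, per Eq. (5)
    return accuracy, precision, recall, specificity, f1

acc, prec, rec, spec, f1 = depnet_metrics(tp=90, tn=80, fp=10, fn=20)
```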

Here, true positives (TPs), FPs, true negatives (TNs), and false negatives (FNs) were the values used to calculate the metrics, and the area under the receiver operating characteristic curve (AUC) was computed using scikit-learn. For the three-way classification function, the proposed work calculated the F1 score, recall, and precision for every class and reported the average by default. Initially, an experiment was done by partitioning 60 COVID-19 cases with 565 CT images, 33 bacterial pneumonia cases with 232 slices, and 76 normal persons with 634 slices. The integration of the image-level detections produced small rises in the patient-level AUC values, from 0.95 to 0.99, for the validation and individual tests. To prove the efficiency of the implemented DEPNet structure, DEPNet was compared with existing deep learning networks, in particular, VGG16, DenseNet, and ResNet.15 As depicted in Table 1 and Fig. 3, DEPNet obtained the maximum AUC value among all of the deep learning models, Visual Geometry Group-16 (VGG16) and ResNet scored nearly the same value, and DenseNet scored the minimal value. When evaluated by F1-score, the other balanced evaluation, DEPNet scored the highest and DenseNet had the lowest value, whereas ResNet was marginally higher than VGG16.
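The patient-level AUC above was computed with scikit-learn; an equivalent rank-based sketch, the probability that a randomly chosen positive patient scores higher than a randomly chosen negative one, needs no extra dependency. The labels and pooled scores below are hypothetical:

```python
def auc_score(y_true, y_score):
    """Rank-based AUC over positive/negative score pairs; ties count half.
    Equivalent to the area under the ROC curve."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical patient-level pooled scores: 1 = COVID-19, 0 = non-COVID
y_true = [1, 1, 0, 0]
y_score = [0.9, 0.6, 0.7, 0.2]
auc = auc_score(y_true, y_score)
```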

Table 1

Person-level performances comparisons on the individual evaluation.

Technique   Recall   Specificity   Accuracy   AUC    Precision   F1-score
VGG16       0.88     0.81          0.85       0.92   0.81        0.85
DenseNet    0.94     0.74          0.83       0.88   0.77        0.84
ResNet      0.94     0.81          0.87       0.91   0.82        0.85
DEPNet      0.98     0.98          0.99       0.99   0.97        0.99

Fig. 3

Comparisons of patient-level performances.


Comparison of the patient-level prediction with the other existing deep learning techniques was done, and the measured AUC, precision, recall, specificity, accuracy, and F1-score values of the DEPNet are higher than those of the other models. VGG16 and ResNet had nearly similar scores, and DenseNet scored slightly lower than the other models.

Figure 4 represents the receiver operating characteristic curves for comparison with similar benchmark techniques. DEPNet has the maximum TP score in the portions with a minimized FP rate (FPR) (FPR<0.02), which are also considered very essential regions because there were more cases of bacterial pneumonia than COVID-19. At a maximum FPR (>0.2), all techniques other than DenseNet exactly predict the COVID-19 cases.

Fig. 4

AUC - diagnosis of COVID-19.


Figure 5 shows the result of the proposed DEPNet that correctly predicted the COVID-19 pneumonia patients. Figure 6 depicts the CT images of bacterial pneumonia that were erroneously predicted as COVID-19 cases. Figure 7 shows CT images of COVID-19 cases that were falsely detected as bacterial pneumonia cases.

Fig. 5

Visualization of two accurately detected COVID-19 pneumonia cases.


Fig. 6

Bacterial pneumonia cases erroneously determined as COVID-19.


Fig. 7

COVID-19 cases falsely detected as bacterial pneumonia.


5. Conclusion

This proposed technique has demonstrated the usefulness of a deep learning technique to help medical practitioners in predicting COVID-19 cases and automatically determining the feasible lesions in the CT images. This model enables categorizing COVID-19-affected persons rapidly and accurately. In the existing model, the authors did not apply a benchmark dataset for evaluation because it was implemented in the earlier period of the COVID-19 pandemic; however, in this proposed method, there are many benchmark datasets to evaluate the designed techniques. This model was designed with the combination of a pretrained WSDNet, which enhanced the accuracy of DEPNet. The performance of DEPNet was compared with VGG16, ResNet, and DenseNet. DEPNet produced 99.45% accuracy, which was much higher than that of the other models.

References

1. 

B. Shen et al., “Proteomic and metabolomic characterization of COVID-19 patient sera,” Cell, 182 (1), 59 –72.e15 https://doi.org/10.1016/j.cell.2020.05.032 CELLB5 0092-8674 (2020). Google Scholar

2. 

P. Yang et al., “Feasibility study of mitigation and suppression strategies for controlling COVID-19 outbreaks in London and Wuhan,” PLoS One, 15 (8), e0236857 https://doi.org/10.1371/journal.pone.0236857 POLNCL 1932-6203 (2020). Google Scholar

3. 

M. Shorfuzzaman and M. S. Hossain, “MetaCOVID: a Siamese neural network framework with the contrastive loss for n-shot diagnosis of COVID-19 patients,” Pattern Recognit., 113 107700 https://doi.org/10.1016/j.patcog.2020.107700 PTNRA8 0031-3203 (2021). Google Scholar

4. 

B. N. Rome and J. Avorn, “Drug evaluation during the Covid-19 pandemic,” N. Engl. J. Med., 382 (24), 2282 –2284 https://doi.org/10.1056/NEJMp2009457 NEJMBH (2020). Google Scholar

5. 

P. Yang et al., “The effect of multiple interventions to balance healthcare demand for controlling COVID-19 outbreaks: a modeling study,” Sci. Rep., 11 (1), 1 –13 https://doi.org/10.1038/s41598-021-82170-y (2021). Google Scholar

6. 

G. Muhammad and M. S. Hossain, “COVID-19 and non-COVID-19 classification using multi-layers fusion from lung ultrasound images,” Inf. Fusion, 72 80 –88 https://doi.org/10.1016/j.inffus.2021.02.013 (2021). Google Scholar

7. 

H. Lin et al., “Privacy-enhanced data fusion for COVID-19 applications in intelligent Internet of Medical Things,” IEEE Internet Things J., 8 (21), 15683 –15693 https://doi.org/10.1109/JIOT.2020.3033129 (2020). Google Scholar

8. 

F. Chirico et al., “COVID-19: protecting healthcare workers is a priority,” Infect. Control Hosp. Epidemiol., 41 (9), 1117 –1117 https://doi.org/10.1017/ice.2020.148 (2020). Google Scholar

9. 

G. Muhammad et al., “EEG-based pathology detection for home health monitoring,” IEEE J. Sel. Areas Commun., 39 (2), 603 –610 https://doi.org/10.1109/JSAC.2020.3020654 (2020). Google Scholar

10. 

A. Allawilaiwi et al., “Enhanced engineering education using smart class environment,” Comput. Hum. Behav., 51 852 –856 https://doi.org/10.1016/j.chb.2014.11.061 0747-5632 (2015). Google Scholar

11. 

M. A. Rahman and M. S. Hossain, “An internet-of-medical-things-enabled edge computing framework for tackling COVID-19,” IEEE Internet Things J., 8 (21), 15847 –15854 https://doi.org/10.1109/JIOT.2021.3051080 (2021). Google Scholar

12. 

G. Rathee et al., “Decision-making model for securing IoT devices in smart industries,” IEEE Trans. Ind. Inf., 17 (6), 4270 –4278 https://doi.org/10.1109/TII.2020.3005252 (2020). Google Scholar

13. 

J. Lei et al., “CT imaging of the 2019 novel coronavirus (2019-nCoV) pneumonia,” Radiology, 295 (1), 1 –18 https://doi.org/10.1148/radiol.2020200236 RADLAX 0033-8419 (2020). Google Scholar

14. 

T.-Y. Lin et al., “Feature pyramid networks for object detection,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 2117 –2125 (2017). https://doi.org/10.1109/CVPR.2017.106 Google Scholar

15. 

K. He et al., “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 770 –778 (2016). https://doi.org/10.1109/CVPR.2016.90 Google Scholar

16. 

B. Zhao et al., “A survey on deep learning-based fine-grained object classification and semantic segmentation,” Int. J. Autom. Comput., 14 (2), 119 –135 https://doi.org/10.1007/s11633-017-1053-3 (2017). Google Scholar

17. 

K. Yan et al., “Holistic and comprehensive annotation of clinically significant findings on diverse CT images: learning from radiology reports and label ontology,” in Proc. IEEE/CVF Conf. Comput. Vision and Pattern Recognition, 8523 –8532 (2019). https://doi.org/10.1109/CVPR.2019.00872 Google Scholar

18. 

Y. Zhou et al., “Weakly supervised instance segmentation using class peak response,” in Proc. IEEE Conf. Compute. Vision and Pattern Recognition, 3791 –3800 (2018). https://doi.org/10.1109/CVPR.2018.00399 Google Scholar

19. 

X. Chen et al., “A pervasive respiratory monitoring sensor for COVID-19 pandemic,” IEEE Open J. Eng. Med. Biol., 2 11 –16 https://doi.org/10.1109/OJEMB.2020.3042051 (2020). Google Scholar

20. 

R. Rodriguez et al., “Deep learning applied to capacity control in commercial establishments in times of COVID-19,” in 12th Int. Conf. Comput. Intell. Commun. Networks (CICN), 423 –428 (2020). https://doi.org/10.1109/CICN49253.2020.9242584 Google Scholar

21. 

J. De Moura et al., “Deep convolutional approaches for the analysis of covid-19 using chest x-ray images from portable devices,” IEEE Access, 8 195594 –195607 https://doi.org/10.1109/ACCESS.2020.3033762 (2020). Google Scholar

22. 

J. Laguarta et al., “COVID-19 artificial intelligence diagnosis using only cough recordings,” IEEE Open J. Eng. Med. Biol., 1 275 –281 https://doi.org/10.1109/OJEMB.2020.3026928 (2020). Google Scholar

23. 

M. S. Hossain et al., “Explainable AI and mass surveillance system-based healthcare framework to combat COVID-I9 like pandemics,” IEEE Network, 34 (4), 126 –132 https://doi.org/10.1109/MNET.011.2000458 IENEET (2020). Google Scholar

24. 

S. S. Vedaei et al., “COVID-SAFE: an IoT-based system for automated health monitoring and surveillance in post-pandemic life,” IEEE Access, 8 188538 https://doi.org/10.1109/ACCESS.2020.3030194 (2020). Google Scholar

25. 

F. Shan et al., “Lung infection quantification of COVID-19 in CT images with deep learning,” (2020). Google Scholar

26. 

X. Xu et al., “A deep learning system to screen novel coronavirus disease 2019 pneumonia,” Engineering, 6 (10), 1122 –1129 https://doi.org/10.1016/j.eng.2020.04.010 (2020). Google Scholar

27. 

O. Gozes et al., “Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis,” (2020). Google Scholar

28. 

X. Qi et al., “Machine learning-based CT radionics model for predicting hospital stay in patients with pneumonia associated with SARS-CoV-2 infection: a multicenter study,” (2020). Google Scholar

29. 

S. Hu et al., “Weakly supervised deep learning for covid-19 infection detection and classification from ct images,” IEEE Access, 8 118869 –118883 https://doi.org/10.1109/ACCESS.2020.3005510 (2020). Google Scholar

30. 

J. Fu et al., “Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 4438 –4446 (2017). https://doi.org/10.1109/CVPR.2017.476 Google Scholar

31. 

G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O’Reilly Media, Inc. (2008). Google Scholar

32. 

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” (2014). Google Scholar

33. 

F. Iandola et al., “DenseNet: implementing efficient ConvNet descriptor pyramids,” (2014). Google Scholar

Biography

Marko Šarac received his BSc degree in 2007 from the Faculty of Organizational Sciences, University of Belgrade, and his MSc and PhD degrees from the Faculty of Informatics, Singidunum University, Serbia, in 2009 and 2013, respectively. He has been the head of IT Department at Singidunum University since November 2006. His areas of expertise are informatics and computing, electrical engineering and computing, and e-business. He is certified in administration and database field by many major IT companies, such as Cisco, Microsoft, Mikrotik, Google, SAP, IBM, HP, Solidworks, and Oracle. He is the author of three books and a number of journals and conference scientific papers. His current research interests include security, computer networks, data bases, and IoT.

Milos Mravik is an assistant of the Computer Science Department at Singidunum University. He is currently a PhD candidate in the Department of Computer Science. His teaching focuses on performance improvement, machine learning, contemporary technologies, and communication technology. He is certified in administration and database fields by many major IT companies, such as Microsoft, Cisco, IBM, and Juniper. He is the author of one University book. His research interests are computer networks, cloud technologies, IoT, and machine learning.

Dijana Jovanovic received her BSc and MSc degrees from the Department of Informatics at the College of Academic Studies “Dositej” in 2018 and 2019, respectively. Currently, she is working as a research assistant at the College of Academic Studies “Dositej.” Since 2019, she has been a PhD student at the Faculty of Computer Science, Megatrend University of Belgrade, Serbia.

Ivana Strumberger started her university career in 2013 as teaching assistant at the Faculty of Computer Science in Belgrade. She received her PhD in computer science from Singidunum University in 2020. Currently, she is working as a teaching assistant at the Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia. However, she is in the process of being elected for assistant professor. She conducts research in the domain of computer science and her specialty includes swarm intelligence, machine learning, optimization and modeling, cloud computing, computer networks, and distributed computing. She has published around 50 scientific papers in high-quality journals and international conferences indexed in Clarivate Analytics JCR, Scopus, WoS, and IEEExplore. She has also published 10 book chapters in Springer Lecture Notes in Computer Science series. She has also published one book from the domain of cloud computing. She is a regular reviewer of many international state-of-the-art journals with high Clarivate Analytics and WoS impact factor, such as Applied Soft Computing, Journal of Ambient Intelligence and Humanized Computing, Soft Computing, Swarm and Evolutionary Computation, etc. She has been included in prestigious list of Stanford University with best 2% world scientists for the year 2021.

Miodrag Zivkovic received his PhD from the School of Electrical Engineering, University of Belgrade in 2014. Currently, he is working as an associate professor at the Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia. He is involved in scientific research in the field of computer science and his specialty includes stochastic optimization algorithms, swarm intelligence, human–computer interaction, and artificial intelligence algorithms.

Nebojsa Bacanin received his PhD in computer science from the Faculty of Mathematics, University of Belgrade in 2015. He started his university career in Serbia 15 years ago at the Graduate School of Computer Science in Belgrade. Currently, he is working as a full professor and as a vice-rector for scientific research at Singidunum University, Belgrade, Serbia. He is involved in scientific research in the field of computer science, and his specialty includes stochastic optimization algorithms, swarm intelligence, soft computing, and optimization and modeling, as well as artificial intelligence algorithms, swarm intelligence, machine learning, image processing, and cloud and distributed computing. He has published more than 240 scientific papers in high-quality journals and international conferences indexed in Clarivate Analytics JCR, Scopus, WoS, IEEExplore, and other scientific databases. He has also been included in the prestigious Stanford University career list with 2% best world researchers in the field of artificial intelligence the years 2020 and 2021.

CC BY: © 2022 SPIE and IS&T
Marko Sarac, Milos Mravik, Dijana Jovanovic, Ivana Strumberger, Miodrag Zivkovic, and Nebojsa Bacanin "Intelligent diagnosis of coronavirus with computed tomography images using a deep learning model," Journal of Electronic Imaging 32(2), 021406 (29 November 2022). https://doi.org/10.1117/1.JEI.32.2.021406
Received: 17 May 2022; Accepted: 8 November 2022; Published: 29 November 2022
KEYWORDS: Computed tomography, COVID-19, Deep learning, Performance modeling, Data modeling, Diagnostics, Artificial intelligence
