Background: Cervical cancer disproportionately harms women in low- and middle-income countries (LMICs). There is increasing interest in automated visual evaluation (AVE) – using artificial intelligence to analyze cervical images at the point of care (PoC) – for managing patients in LMICs. AVE has a diagnostic component (for pathology) and a quality component (to ensure image adequacy). The quality component must run on the imaging device at the PoC and is therefore constrained by the device's processing power. Methods: A novel, multiple-module algorithm for assessing cervical image quality was developed in an Android application. One module located the cervix and another sought objects obstructing the transformation zone. The cervix locator module was an object detection model that determined the bounding box of the cervix. Models trained on multiple architectures (YOLOv5 and EfficientDet-Lite2) with the same data were compared. For obstruction classification, a multi-task model was trained to detect 5 common obstructions (blood, SCJ inside of os, loose vaginal walls, mucus, blur/glare) as well as an obstruction-free cervix. Performance of the model's tasks was compared across 2 different imaging devices. Results and Discussion: The cervix locator performed better and was faster with YOLOv5, although differences between the architectures were minimal. In the obstructions classifier, 4 of the tasks (loose vaginal walls, blood, SCJ inside of os, and obstruction-free) performed satisfactorily. Across all modules, the full computation time was <10 sec. Both modules met the desired performance thresholds for image adequacy assessment. The algorithm shown here is, to our knowledge, the first AVE quality classifier running on a mobile device.
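The multiple-module flow described above — locate the cervix, then screen the detection for obstructions before accepting the image — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the model calls are stubbed, and all function names, class labels, and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class BBox:
    """Bounding box with detection confidence (illustrative)."""
    x: float
    y: float
    w: float
    h: float
    score: float

# The 5 obstruction tasks named in the abstract, plus the obstruction-free task.
OBSTRUCTION_TASKS = ["blood", "scj_inside_os", "loose_vaginal_walls",
                     "mucus", "blur_glare", "obstruction_free"]

def locate_cervix(image) -> Optional[BBox]:
    """Stub for the object-detection module (e.g. a YOLOv5 model)."""
    return BBox(0.2, 0.2, 0.6, 0.6, 0.93)  # placeholder detection

def classify_obstructions(image) -> Dict[str, float]:
    """Stub for the multi-task obstruction classifier (one score per task)."""
    scores = {t: 0.05 for t in OBSTRUCTION_TASKS[:-1]}
    scores["obstruction_free"] = 0.90
    return scores  # placeholder scores

def assess_adequacy(image, det_thresh: float = 0.5,
                    obs_thresh: float = 0.5) -> bool:
    """Chain the two modules into a single image-adequacy decision."""
    box = locate_cervix(image)
    if box is None or box.score < det_thresh:
        return False  # cervix not found: image inadequate, retake
    # A real pipeline would crop the image to `box` before classification.
    scores = classify_obstructions(image)
    blocking = [t for t in OBSTRUCTION_TASKS[:-1] if scores[t] >= obs_thresh]
    return not blocking  # adequate only if no obstruction task fires

print(assess_adequacy(object()))  # with the stub scores above, prints True
```

The key design point, as described in the abstract, is that both stages must complete on-device within the <10 sec budget, so the deployed models are lightweight architectures (YOLOv5, EfficientDet-Lite2) suitable for mobile inference.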