Endodontic disease detection: digital periapical radiography versus cone-beam computed tomography—a systematic review

Open Access | 24 February 2021
Abstract

Purpose: To assess the comparative diagnostic performance of digital periapical (PA) radiography and cone-beam computed tomography (CBCT) imaging in endodontic disease detection and to provide study methodology and design recommendations for future studies comparing the diagnostic performance of imaging modalities in this task.

Approach: A search of the Medline, Embase, Scopus, Web of Science, and Cochrane Central Register of Controlled Trials databases was conducted. Studies were included if they compared the performance of CBCT with digital PA radiography for detecting endodontic disease, used an independent reference standard to determine the presence of endodontic disease, and reported data analysis including at least one of sensitivity, specificity, receiver operating characteristic (ROC) analysis, or free-response operating characteristic analysis. Of the 20,530 identified studies, only 3 fulfilled the inclusion criteria.

Results: Most studies assessed for eligibility were excluded due to limitations and biases in study design—15 of 18 studies had no reference standard. Only one retrospective clinical study reported on the diagnostic performance of CBCT and showed a sensitivity of 86% and specificity of 26%. Two cadaver studies reported sensitivity ranging from 60% to 100%, specificity ranging from 79% to 100%, and an area under the ROC curve of 0.943 for CBCT. The reported sensitivity for digital PA radiography ranged from 27% to 60%, specificity was 99%, and the area under the ROC curve was 0.629.

Conclusions: There is a lack of quality evidence and insufficient data to compare diagnostic performance of digital PA and CBCT imaging. This emphasizes the need for well-designed studies to inform clinicians about the relative diagnostic performance of these imaging modalities.

1. Introduction

Endodontic disease prevalence has been reported to range from 7% to 86%,1 and an estimated 22 million endodontic procedures are performed annually in the United States of America.2 Prior to these procedures, dental imaging is required not only for diagnostic purposes but also for medico-legal and treatment-planning purposes.3 Diagnosis of dental and endodontic abnormalities follows a Bayesian approach, as in medicine: patient history and examination data are gathered to generate the pre-test odds (prior probability) of a disease being present, which are then multiplied by the weight of new testing information (the likelihood ratio) to yield the post-test odds (posterior probability) of the disease being present.4 Dental imaging has historically used intra-oral and extra-oral diagnostic radiographs, and an early cadaver study showed the limitations of radiographs in depicting simulated pathologic changes in cancellous bone; these radiolucent changes could be detected radiographically only if there was cortical bone perforation.5 A later clinical study showed that periapical (PA) radiography had high diagnostic value in endodontic disease detection.6
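The odds-form Bayesian updating described above can be sketched numerically. The following fragment is our illustration with hypothetical numbers, not data from the review; `post_test_probability` is a helper name we introduce.

```python
# Illustrative sketch (hypothetical numbers): Bayes' rule in odds form,
# as used in diagnostic reasoning.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, multiply by the likelihood ratio,
    and convert back to a probability."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical case: 30% pre-test probability of endodontic disease, and a
# positive test with LR+ = sensitivity / (1 - specificity) = 0.86 / 0.20 = 4.3
lr_positive = 0.86 / (1 - 0.80)
print(round(post_test_probability(0.30, lr_positive), 2))  # -> 0.65
```

A test with a likelihood ratio near 1 barely moves the post-test probability, which is why the diagnostic performance of an imaging modality matters for the decisions that follow it.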

Medical imaging is constantly evolving; three-dimensional (3D) cone-beam computed tomography (CBCT) has recently been introduced into the clinical dental setting and is gaining popularity.7 Newer imaging modalities can be considered fit for purpose if their diagnostic performance is comparable to, or better than, that of current modalities. Diagnostic efficacy analysis is therefore needed to establish the diagnostic ability of these new imaging tools.8 Since the introduction of CBCT into the clinical evaluation of endodontic diseases, several studies have attempted to investigate its diagnostic efficacy compared with two-dimensional PA radiography. They showed differences in the diagnostic performance of these modalities; however, these studies differed in design and results. First, a majority of the studies were based on imaging examination records of PA, panoramic, and CBCT imaging and included patients with different presentations of endodontic infections or patients referred to a specialist endodontic practice for endodontic treatment, with the consensus opinion of a panel being used to establish the presence of disease.9 Second, most of these studies did not evaluate CBCT against an established independent reference standard.10 Third, many of the studies were based on conventional PA radiography and assessed either the agreement between CBCT and PA reporting or between PA and panoramic image reporting. Most utilized only images with disease, which precluded the calculation of diagnostic performance metrics such as specificity and false positive rates. Some of the published studies used CBCT as a “reference standard” to assess the sensitivity of PA imaging. These differences in methodology and reporting emphasize the need for a review of the literature to understand the diagnostic efficacy of CBCT relative to PA radiography.

Previous systematic reviews comparing CBCT and PA radiography in endodontic disease detection11–15 were mostly based on plain film PA radiography, included studies that assessed the agreement between the two imaging modalities, or used cadaver findings as a reference standard. Some used an artificial reference standard, “mechanically or chemically induced lesions,” which does not establish the truth about endodontic disease presence or absence.12,14 Two of these reviews, which focused on the diagnostic efficacy of CBCT and PA radiography using a hierarchical model, reported that the diagnostic efficacy was unclear13 and that human CBCT studies using a histological reference standard were needed.12 A meta-analysis of ex-vivo studies with artificial apical periodontitis found that CBCT imaging had a greater area under the receiver operating characteristic (ROC) curve than PA radiography.14 A more recent review showed that the odds ratio of CBCT detecting endodontic disease was double that for PA radiography.15 However, these reviews have some limitations: the diagnostic performance of digital PA and CBCT imaging was not compared directly,11–13,15 ex-vivo studies were included, which limits the external validity of these findings,11,12,14 and therapeutic efficacy rather than diagnostic accuracy was evaluated.12,13 Therefore, the comparative diagnostic performance of these imaging modalities in the endodontic domain is poorly understood.

In addition, digitization has improved the quality of radiological images and allows post-processing of acquired images to suit different diagnostic tasks. In dentistry, digitization of the imaging process has been shown to improve image quality, which may optimize the detection of dental caries and the assessment of bony anomalies.16 Thus, a review of the literature on the diagnostic performance of CBCT relative to digital PA radiography in the digital era will help patients and clinicians requesting dental imaging make informed choices among imaging options. This review aims to assess the comparative diagnostic performance of digital PA and CBCT imaging in endodontic disease detection and to provide study methodology and design recommendations for future studies comparing the diagnostic performance of imaging modalities in this task.

2. Methods

2.1. Databases and Search Strategy

The literature search was conducted based on the Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Testing Studies (PRISMA-DTA) statement.17 Medline, Embase, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials databases were searched for relevant articles published from database inception to January 12, 2021. Google Scholar was also used to search for relevant articles and the reference lists of published articles were manually screened to identify additional publications. Search terms were combined with “OR” and included the following main terms: “Cone beam computed tomography” OR “cone beam” OR “periapical radiography” OR “periapical” OR “endodontics” OR “pulp disease” OR “apical periodontitis” OR “periapical disease” OR “periapical lesion” OR “endodontic pathosis” OR “apical pathology” OR “apical radiolucency” OR “receiver operating characteristic” OR “free response.”

2.2. Eligibility Criteria

Inclusion and exclusion criteria were based on the population, intervention, comparator, and outcome (PICO) elements (Table 1). The clinical research question we sought to address was, in permanent human teeth, does CBCT have greater diagnostic performance in endodontic disease detection than PA radiography? Studies were included if they: compared the performance of CBCT to digital PA radiography for detecting endodontic disease, included humans with permanent teeth, had an independent reference standard determining the presence of endodontic disease, conducted data analysis including at least one of the following outcomes: sensitivity, specificity, ROC analysis, or free response operating characteristic (FROC) analysis, and were published in English. Studies were excluded if they did not meet these inclusion criteria. Literature reviews, conference papers, letters to editors, and posters were also excluded. Initial triage of the abstracts was performed by two authors (K.Y. and E.E.). Disagreements were resolved by objectively evaluating the inclusion and exclusion criteria and establishing a consensus.

Table 1

The PICO method regarding inclusion and exclusion criteria.

Element | Characteristics
Population | Permanent human teeth
Intervention | CBCT imaging
Comparator | Digital PA radiography
Outcome | Diagnostic performance in endodontic disease detection: ROC curve analysis, FROC analysis, sensitivity, and specificity

2.3. Quality Assessment

Quality assessment was performed by two authors (K.Y. and E.E.) using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool.18 It consists of four main domains: patient selection, index test, reference standard, and flow and timing of the index tests and reference standard. The QUADAS-2 tool is mainly recommended for judging the risk of bias and the applicability of original diagnostic accuracy studies. A weighted kappa was used to assess the agreement between the two assessors. Kappa was interpreted as follows: <0.20 = poor; 0.21 to 0.40 = fair; 0.41 to 0.60 = moderate; 0.61 to 0.80 = substantial; and 0.81 to 0.99 = almost perfect.19 Any discrepancies in the quality assessment were discussed and resolved through consensus.
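To make the weighted kappa concrete, the sketch below implements a linearly weighted kappa for two raters on a three-level ordinal scale (e.g., QUADAS-2 judgments coded 0 = low, 1 = uncertain, 2 = high risk). The ratings are invented for demonstration and `weighted_kappa` is our own helper, not the software used in the review.

```python
# Linearly weighted kappa for two raters (hypothetical demonstration data).

def weighted_kappa(r1, r2, n_categories):
    n = len(r1)
    # Linear agreement weights: full credit for exact agreement,
    # partial credit for near-misses on the ordinal scale.
    w = [[1 - abs(i - j) / (n_categories - 1) for j in range(n_categories)]
         for i in range(n_categories)]
    obs = sum(w[a][b] for a, b in zip(r1, r2)) / n  # observed weighted agreement
    p1 = [sum(a == k for a in r1) / n for k in range(n_categories)]
    p2 = [sum(b == k for b in r2) / n for k in range(n_categories)]
    exp = sum(w[i][j] * p1[i] * p2[j]            # chance-expected agreement
              for i in range(n_categories) for j in range(n_categories))
    return (obs - exp) / (1 - exp)

rater1 = [0, 0, 1, 2, 1, 0, 2, 1]
rater2 = [0, 1, 1, 2, 1, 0, 2, 0]
print(round(weighted_kappa(rater1, rater2, 3), 2))  # -> 0.70, "substantial"
```

Equivalent results can be obtained with library routines such as scikit-learn's `cohen_kappa_score` with `weights="linear"`.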

2.4. Data Extraction Process

Data were extracted in two phases. First, the authors determined the study characteristics (e.g., study design, reported outcome measures, provision of clinical history, and recruitment method for patients and readers), population characteristics (e.g., sample size, disease prevalence, and distribution of disease severity), reader characteristics (observer clinical experience, CBCT experience, and qualifications), and interpretation protocol. Second, the diagnostic performance of PA imaging was compared to CBCT. The performance metrics analyzed were number and location of detected abnormalities, ROC curve construction, relationship between true positive fraction (TPF) at a given false positive fraction (FPF), area under the ROC curve, FROC analysis, and diagnostic accuracy measures such as jackknife FROC figure of merit, sensitivity, and specificity. All authors reviewed the full text articles and any discrepancies regarding data analysis or interpretation were resolved by objectively evaluating the reported findings and establishing a consensus.

3. Results

3.1. Identification of Included Studies

The search strategy identified a total of 20,530 studies. After the screening of titles and abstracts, 18 studies were selected for full-text reading (Fig. 1). Only three studies fulfilled the inclusion criteria.20–22 One study used clinical information20 and two used data from cadavers21,22 as the reference standard to assess the diagnostic performance of CBCT and digital PA radiography (Table 2). Fifteen studies that examined disease detection using CBCT and digital PA radiography but were excluded are summarized in Table 3. These studies were excluded for the following reasons: none had an independent reference standard; they either did not provide information about disease prevalence or used only images with disease, limiting the assessment of other diagnostic performance metrics; they assessed only agreement between CBCT and PA radiography reports; or they did not provide diagnostic performance metrics. None of these studies accounted for case difficulty or the severity of diseased or non-diseased patients.

Fig. 1

A Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement flowchart of the search and selection strategy.


Table 2

Characteristics of included studies.

Study | Sample size (teeth) | Participant characteristics | Disease prevalence | Index test | Reference standard | Readers | Disease severity distribution | Performance metrics provided
Pope et al.20 | 200 | Retrospective record analysis from endodontic practices | 7.8% (14/180) | Diameter of PA radiolucency using modified CBCT-PA index (PAI) scale | Clinical information | Two: one endodontist and one endodontic postgraduate student | No | CBCT sensitivity (86%) and specificity (26% to 80%)
Kanagasingam et al.21 | 67 | Cadavers | Not provided | Presence of a “PA lesion” | Histopathology findings | Five endodontists | No | Sensitivity, specificity, and area under the ROC curve: CBCT (89%, 100%, and 0.943), PA (27%, 99%, and 0.629)
Kruse et al.22 | 222 | Cadavers | Not provided | Presence of apical periodontitis using a 5-point rating scale | Histopathology findings | Three: two endodontists and one oral radiologist | No | CBCT and PA sensitivity (80%, 60%) and CBCT specificity (79%)

Table 3

Characteristics of excluded studies.

Study | Sample size (teeth) | Disease prevalence | Index test | Reference standard | Readers | Performance metrics provided
Lofthag-Hansen et al.23 | 46 | 100% | Consensus report of a PA lesion | No | Three oral and maxillofacial radiologists | None
Estrela et al.10 | 1508 | 100% | PAI score | No | Three “calibrated examiners” | None
Low et al.24 | 74 | 100% | Consensus report of a PA lesion | No | Two: one oral radiologist and one endodontist | None
Lennon et al.25 | 10 | 100% | Presence of artificial bone lesions using a 5-point rating scale | No | Ten: two endodontists, two dental radiologists, and six postgraduate endodontic students | None
Abella et al.26 | 138 | 100% | Consensus report of an “apical periodontitis lesion” | No | Two endodontists | Number of “lesions” seen on PA and CBCT
Patel et al.9 | 151 | 100% | Consensus report of an apical periodontitis lesion | No | Two endodontists | Number of lesions seen on PA and CBCT
Abella et al.27 | 161 | 100% | Consensus report of an apical periodontitis lesion | No | Two endodontists | Number of lesions seen on PA and CBCT
Venskutonis et al.28 | 35 | Not provided | Consensus report of a PA lesion | No | Two endodontists | None
Bornstein et al.29 | 58 | 100% | Report of either “cyst” or “granuloma” | No | Four: two oral surgeons and two oral surgery residents | Number of radiographic reports designated as granuloma or cyst
Davies et al.30 | 100 | 100% | Consensus report of a PA lesion | No | Two endodontists | Number of roots with a PA lesion detected
Weissman et al.31 | 67 | Not provided | Presence of apical radiolucency | No | Three: two endodontists and one oral and maxillofacial radiologist | Number of lesions seen on PA and CBCT
Davies et al.32 | 98 | Not provided | Consensus report on the change in PA status at review | No | Two endodontists | Healing or non-healing category
Beacham et al.33 | 18 imaging studies | Not provided | Report on the location of any finding considered “notable or important” | No | Nine: four endodontists and five endodontic residents | Number of radiographic findings assigned by an “expert reviewer” that were identified by the observer
Kruse et al.34 | 74 | Not provided | Consensus score determining the level of healing and the treatment plan | No | Three: two endodontists and one oral radiologist | Change in treatment plan based on CBCT report
Chang et al.35 | 68 imaging studies | Not provided | Presence of a PA lesion | No | Two: one endodontist and one oral and maxillofacial radiologist | Number of lesions seen on PA and CBCT

3.2. Quality Assessment

The three included studies had differing risks of bias and a range of applicability concerns regarding patient selection and the reference standard. For all three, the risk of bias for the index test was low and the risk for flow and timing was uncertain. All studies had high applicability concerns for the index test. The quality assessment results are summarized in Table 4. Inter-reader agreement between the two quality assessors showed a weighted kappa of k = 0.92 (95% CI: 0.767 to 1.000).

Table 4

QUADAS-2 tool results.

Study | Risk of bias (patient selection / index test / reference standard / flow and timing) | Applicability concerns (patient selection / index test / reference standard)
Pope et al.20 | High / Low / Low / Uncertain | Low / High / Low
Kanagasingam et al.21 | Uncertain / Low / Uncertain / Uncertain | Uncertain / High / Uncertain
Kruse et al.22 | Uncertain / Low / Uncertain / Uncertain | Uncertain / High / Uncertain

3.3. Diagnostic Performance of Periapical Radiography versus Cone-Beam Computed Tomography

When clinical information was used as the reference standard and disease was defined as a PA radiolucency with a diameter >0.5 mm, CBCT had a sensitivity of 86% (12/14).20 When non-disease was defined as an intact PA bone structure, specificity was 26% (43/166). When the threshold for non-disease was relaxed to a PA radiolucency with a diameter no greater than 1 mm, CBCT specificity was 80% (133/166). Although both modalities were compared using different methods of analysis, no data were reported on the diagnostic performance of PA radiography.
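The point estimates above follow directly from the reported counts. A minimal sketch (the helper names are ours) reproduces them and shows how the specificity estimate shifts with the chosen non-disease threshold:

```python
# Recomputing the reported point estimates from their raw counts.
# Counts are taken from the text; helper names are our own.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# CBCT, disease threshold: PA radiolucency diameter > 0.5 mm
print(round(sensitivity(12, 2), 2))    # 12/14  -> 0.86

# Non-disease defined as intact PA bone: 43 of 166 non-diseased called negative
print(round(specificity(43, 123), 2))  # 43/166 -> 0.26

# Non-disease threshold relaxed to radiolucency <= 1 mm: 133 of 166
print(round(specificity(133, 33), 2))  # 133/166 -> 0.80
```

The threshold dependence illustrated here is exactly why a single sensitivity/specificity pair, without the operating point that produced it, is hard to compare across studies.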

In the cadaver studies, the sensitivity of CBCT for detecting endodontic disease ranged from 80% (66/83), calculated for individual roots with no figures provided for teeth,22 to a mean of 89%.21 The sensitivity of PA radiography ranged from a mean of 27%21 to 60% (134/223) for individual roots.22 The specificity of CBCT, calculated for individual roots, varied from 79%22 to 100%.21 PA radiography had a reported mean specificity of 99%.21 Where ROC data were given, the area under the curve was 0.629 for PA radiography and 0.943 for CBCT.21 Data on the relationship between true and false positive fractions (the sensitivity at a given specificity, and vice versa) were not provided.
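For context, an area under the ROC curve such as the reported 0.943 or 0.629 can be computed from confidence ratings as the probability that a diseased case receives a higher rating than a non-diseased case, with ties counted as half; this equals the trapezoidal ROC area. The sketch below uses invented ratings, not data from the included studies.

```python
# Rating-based AUC sketch (hypothetical data, not from the included studies).

def auc_from_ratings(diseased, non_diseased):
    """Probability that a diseased case outranks a non-diseased case
    (ties count half) -- equivalent to the trapezoidal ROC area."""
    pairs = len(diseased) * len(non_diseased)
    wins = sum((d > n) + 0.5 * (d == n) for d in diseased for n in non_diseased)
    return wins / pairs

# Hypothetical 5-point confidence ratings (5 = definitely diseased)
diseased_ratings     = [5, 5, 4, 4, 3, 5, 2]
non_diseased_ratings = [1, 2, 1, 3, 2, 1, 4, 1]
print(round(auc_from_ratings(diseased_ratings, non_diseased_ratings), 3))  # -> 0.902
```

This pairwise formulation is the Mann–Whitney U view of the AUC; it makes clear that the AUC summarizes ranking ability across all operating points rather than performance at any single threshold.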

The three included studies each had different index tests and displayed methodological heterogeneity in reporting measures. The cadaver histology results used the presence of inflammation as the reference standard; however, the relationship to disease in humans was not shown. Due to the methodological heterogeneity, meta-analysis was not performed.

4. Discussion

The analysis shows that there is a lack of high-level evidence, with notable uncertainty about study quality and bias, regarding the diagnostic performance of digital PA radiography and CBCT for endodontic disease detection. The purpose of evaluating diagnostic performance is to determine diagnostic accuracy efficacy,8 so that the truth of an abnormal or normal diagnosis can be ascertained. Evaluation at this level is clinically relevant, as it forms part of the framework in a model of understanding decision making.8 Without the truth value of the test being evaluated, diagnostic performance is unknown.36 Only one study provided sensitivity, specificity, and area under the ROC curve metrics for both modalities,21 using cadaver samples. The identified studies had issues with study design that limit the external validity of the available data. A major finding in studies that examined endodontic disease detection using CBCT and PA radiography was that observers were tasked with identifying radiographic findings, such as PA radiolucency, which may not always be pathognomonic for disease.37 Conversely, endodontic disease such as irreversible pulpitis can occur in the absence of PA radiolucency,26 and the presence or absence of a PA radiolucency has not been shown to be a relevant or valid proxy for disease. Instead of using these reported test indices, observers should rate their confidence in the presence of an abnormality.38

Population sampling across studies on CBCT and PA radiography has been skewed to contain only diseased cases. The only retrospective clinical study that fulfilled the inclusion criteria had a very low disease prevalence,20 while the cadaver studies, which focused on roots rather than teeth, had an unknown disease prevalence.21,22 The main limitation of this skewed sampling strategy is that indices of test accuracy calculated in one patient group cannot be generalized to other groups with a different clinical spectrum.39 The rationale for assessing the performance of a diagnostic system using a sample of cases, observers, and readings is to estimate how the imaging system would perform “on the average” in similar cases, observers, and readings that were not studied.40 Therefore, it is important that a diagnostic test performance study encompasses cases with a wide distribution of clinical features and includes a broad range of patients both with and without the disease.41 Such inclusion criteria provide opportunities to assess other diagnostic metrics, considering the variability in population characteristics and disease conditions encountered in the clinical setting. The exclusion of patients with a specific condition or prevalence may influence observer interaction with the images and lead to inflated diagnostic accuracy estimates,36 particularly when diagnostically difficult cases are excluded.18 Low-quality studies with a non-representative sample tend to overestimate the diagnostic performance of a test.42 A test set containing a wide distribution of cases with different levels of difficulty is needed to represent the variation in the clinical setting,43 where diagnostic performance decreases as disease findings become more subtle.44 None of the three included studies reported on their population case spectrum, so the extent to which their findings can be extrapolated to the broader population is unknown.
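The dependence of test indices on the sampled population can be made concrete: even with sensitivity and specificity held fixed, the positive predictive value shifts sharply with prevalence. The sketch below is our illustration; the sensitivity/specificity pair is hypothetical, and the 7.8% figure echoes the prevalence reported by Pope et al.

```python
# Positive predictive value as a function of prevalence, with test
# characteristics held fixed (illustrative values only).

def ppv(sens: float, spec: float, prev: float) -> float:
    """Bayes' theorem: P(disease | positive test)."""
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

sens, spec = 0.86, 0.80  # hypothetical fixed test characteristics
for prev in (0.078, 0.30, 0.90):  # 7.8% as in Pope et al., then higher
    print(f"prevalence {prev:.1%}: PPV {ppv(sens, spec, prev):.2f}")
```

The same positive report means something very different at 7.8% prevalence than in a sample enriched to contain mostly diseased cases, which is why metrics from skewed samples do not transfer.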
Only one study reported the sensitivity and specificity of both modalities using histopathology findings as the reference standard21 and found that sensitivity was higher for CBCT than for PA radiography (89% and 27%, respectively), with no significant difference in specificity (100% and 99%, respectively). An animal study45 with a similar design also found that CBCT had higher sensitivity (91%) than PA imaging (77%) with no difference in specificity (both 100%). Because disease severity was not reported in either study, it is unknown to which case types of disease or non-disease these results apply. Future studies should include cases with a range of severity in both diseased and non-diseased patients.44

Intrinsic human limitations can influence the diagnostic performance of imaging modalities, and there are variations in the human ability to interpret radiological images. Diagnostic accuracy efficacy is not just a function of the image; it is a joint function of the images and of an observer.8 Reader variability has been shown in previous endodontic studies on PA radiography46,47 and CBCT.48 Therefore, studies assessing diagnostic image performance should include a sufficient number of readers. The number of observers in the identified studies tended to be low: most studies had between two and five, with two studies having nine and ten readers, respectively. Given that every case was read once, or a consensus report was used, the total number of opinions used to establish the performance of CBCT relative to PA radiography was low. The excluded studies also suffered from low observer numbers. In other radiology domains, factors such as training49 and the number of cases read annually50 have been shown to be associated with diagnostic performance, but no study has explored how these factors affect diagnostic performance in digital PA radiography and CBCT interpretation. No information was provided on the effect of reader experience and expertise on diagnostic performance, which is of clinical relevance; diagnostic accuracy has been shown to increase with reader experience.51 Therefore, future studies should account for variation in reader characteristics that affect diagnostic performance. Importantly, endodontic CBCT images are interpreted by both dentists and radiologists.7 A comparison of the diagnostic performance of these professionals, and of the factors that affect their performance, would help inform strategies for improving the diagnostic efficacy of dental imaging interpretation.

Across the literature on endodontic disease imaging, there is a lack of an independent and valid reference standard for assessing the performance of CBCT or PA radiography. A reference standard is needed to establish the truth about disease presence or absence and to measure the sensitivity and specificity of these imaging technologies;52 without it, the true test results are unknown.36 Biopsy has been used as a reference standard in medicine;50 however, for endodontic disease, histology results do not provide the same dichotomy. Inflammatory cells in the PA tissues have been shown to be present in healed teeth,53 and while histological findings are independent, the presence of inflammation does not necessarily indicate disease and has not been shown to be a valid reference standard. Furthermore, the use of cadaver histology is limited by the lack of clinical evidence corroborating histological findings in endodontics. An example of a valid and independent reference standard in radiology studies is the Delphi panel, in which examination and follow-up clinical information are given to a consensus panel who collectively determine the presence or absence of disease.54 It should be emphasized that these panelists are not involved in reporting the images in the study. A Delphi panel approach should be used as the reference standard in future dental diagnostic imaging studies.

Observer performance measurement in the identified studies demonstrated significant limitations. The assigned observer task was to report on a type of radiographic finding without indicating its location. This is inconsistent with the teaching in medical radiology that an observer’s task is not only to detect but also to locate the abnormality.55 When the decision task involves more than a determination of whether the patient is diseased or non-diseased, the bivariate ROC method has significant limitations in assessing diagnostic efficacy.56 To inform treatment interventions on the correct tooth, the exact location of the disease is required; therefore, dental imaging studies should include location information. Without location assignment on images, errors can be disguised as correct calls.57 This is overcome by reporting using the free-response paradigm: on an image, the reader reports a “mark,” a region suspicious for abnormality, and assigns a “rating,” the corresponding confidence level.38 This search paradigm accounts for ambiguities that can pass unnoticed in the ROC paradigm, such as when a location-level false positive and a location-level false negative occur on the same image.57 In this situation, ROC analysis records an image-level true positive for the wrong reasons: an incorrect abnormality location was reported and an abnormality was missed. Without free-response analysis, these potentially significant errors are overlooked. For this reason, future dental diagnostic imaging studies should use the free-response paradigm.
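The mark-rating scoring of the free-response paradigm can be sketched as follows. The coordinates, ratings, and acceptance-radius criterion are all hypothetical, chosen only to show how marks are split into lesion localizations and non-lesion localizations before an FROC curve or figure of merit is computed.

```python
# Free-response scoring sketch (all data and the proximity criterion are
# hypothetical): each reader mark carries a location and a confidence rating,
# and counts as a lesion localization only if it falls within an acceptance
# radius of a true lesion.

def score_marks(marks, lesions, radius=1.0):
    """Split marks into lesion-localization (LL) and
    non-lesion-localization (NL) ratings."""
    ll, nl = [], []
    for (x, y, rating) in marks:
        hit = any((x - lx) ** 2 + (y - ly) ** 2 <= radius ** 2
                  for lx, ly in lesions)
        (ll if hit else nl).append(rating)
    return ll, nl

lesions = [(2.0, 3.0), (7.0, 7.0)]                      # ground-truth lesion centres
marks = [(2.2, 3.1, 4), (5.0, 5.0, 2), (6.8, 7.3, 5)]   # (x, y, confidence)
ll, nl = score_marks(marks, lesions)
print(ll, nl)  # -> [4, 5] [2]
```

An image-level ROC analysis of the same data would only record "abnormal image called abnormal," hiding whether the correct lesions were actually marked; the LL/NL split is what lets free-response analysis reward correct localizations and penalize incorrect ones.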

This review has highlighted the limitations of the current literature on assessing the diagnostic performance of dental imaging modalities, identified methodological issues, and provided examples of study designs to address these limitations. The lack of data on sensitivity, specificity, and the relationship between TPF and FPF in published studies emphasizes the need for further studies to establish the diagnostic efficacy of CBCT relative to digital PA radiography. Only three studies were included, which further highlights the need for properly designed studies comparing digital PA and CBCT imaging in endodontic disease detection. In particular, future studies need to overcome the limitations of the existing studies and avoid repeating previous errors, in order to provide valid and relevant data that can improve clinical decision making. Without further research, the comparative performance of these endodontic imaging modalities, and the factors that influence their diagnostic efficacy, cannot be determined.

5. Conclusion

There is a lack of evidence to establish the diagnostic performance of digital PA radiography relative to CBCT in endodontic disease detection. Well-designed studies are required in order to inform clinicians about the diagnostic performance of commonly used digital imaging modalities in detection of endodontic disease. These should reflect the task in clinical practice, use a valid reference standard, allow for measuring multiple abnormalities per image, include localization of abnormalities, reward correct and penalize incorrect abnormality locations, and encompass the entire spectrum of disease and non-disease severity present within the study population.

Disclosures

None.

References

1. I. F. Persoon and A. R. Özok, “Definitions and epidemiology of endodontic infections,” Curr. Oral Health Rep., 4(4), 278–285 (2017). https://doi.org/10.1007/s40496-017-0161-z

2. “2005–2006 survey of dental services rendered,” (2007).

3. M. Alrahabi, M. S. Zafar and N. Adanir, “Aspects of clinical malpractice in endodontics,” Eur. J. Dent., 13(3), 450–458 (2019). https://doi.org/10.1055/s-0039-1700767

4. C. J. Gill, L. Sabin and C. H. Schmid, “Why clinicians are natural Bayesians,” BMJ, 330(7499), 1080–1083 (2005). https://doi.org/10.1136/bmj.330.7499.1080

5. I. B. Bender and S. Seltzer, “Roentgenographic and direct observation of experimental lesions in bone. I,” J. Am. Dent. Assoc., 62(2), 152–160 (1961). https://doi.org/10.14219/jada.archive.1961.0030

6. C. Reit and H. G. Gröndahl, “Application of statistical decision theory to radiographic diagnosis of endodontically treated teeth,” Scand. J. Dent. Res., 91(3), 213–218 (1983). https://doi.org/10.1111/j.1600-0722.1983.tb00804.x

7. L. F. Brown and P. Monsour, “The growth of Medicare rebatable cone beam computed tomography and panoramic radiography in Australia,” Aust. Dent. J., 60(4), 511–519 (2015). https://doi.org/10.1111/adj.12250

8. D. G. Fryback and J. R. Thornbury, “The efficacy of diagnostic imaging,” Med. Decis. Making, 11(2), 88–94 (1991). https://doi.org/10.1177/0272989X9101100203

9. S. Patel et al., “The detection of periapical pathosis using periapical radiography and cone beam computed tomography—Part 1: pre-operative status,” Int. Endod. J., 45(8), 702–710 (2012). https://doi.org/10.1111/j.1365-2591.2011.01989.x

10. C. Estrela et al., “Accuracy of cone beam computed tomography and panoramic and periapical radiography for detection of apical periodontitis,” J. Endod., 34(3), 273–279 (2008). https://doi.org/10.1016/j.joen.2007.11.023

11. A. Petersson et al., “Radiological diagnosis of periapical bone tissue lesions in endodontics: a systematic review,” Int. Endod. J., 45(9), 783–801 (2012). https://doi.org/10.1111/j.1365-2591.2012.02034.x

12. C. Kruse et al., “Cone beam computed tomography and periapical lesions: a systematic review analysing studies on diagnostic efficacy by a hierarchical model,” Int. Endod. J., 48(9), 815–828 (2015). https://doi.org/10.1111/iej.12388

13. E. Rosen et al., “The diagnostic efficacy of cone-beam computed tomography in endodontics: a systematic review and analysis by a hierarchical model of efficacy,” J. Endod., 41(7), 1008–1014 (2015). https://doi.org/10.1016/j.joen.2015.02.021

14. K. L. Dutra et al., “Diagnostic accuracy of cone-beam computed tomography and conventional radiography on apical periodontitis: a systematic review and meta-analysis,” J. Endod., 42(3), 356–364 (2016). https://doi.org/10.1016/j.joen.2015.12.015

15. A. Aminoshariae, J. C. Kulild and A. Syed, “Cone-beam computed tomography compared with intraoral radiographic lesions in endodontic outcome studies: a systematic review,” J. Endod., 44(11), 1626–1631 (2018). https://doi.org/10.1016/j.joen.2018.08.006

16. P. F. van der Stelt, “Better imaging: the advantages of digital radiography,” J. Am. Dent. Assoc., 139, S7–S13 (2008). https://doi.org/10.14219/jada.archive.2008.0357

17. M. D. F. McInnes et al., “Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement,” JAMA, 319(4), 388–396 (2018). https://doi.org/10.1001/jama.2017.19163

18. P. F. Whiting et al., “QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies,” Ann. Intern. Med., 155(8), 529–536 (2011). https://doi.org/10.7326/0003-4819-155-8-201110180-00009

19. A. J. Viera and J. M. Garrett, “Understanding interobserver agreement: the kappa statistic,” Fam. Med., 37(5), 360–363 (2005).

20. O. Pope, C. Sathorn and P. Parashos, “A comparative investigation of cone-beam computed tomography and periapical radiography in the diagnosis of a healthy periapex,” J. Endod., 40(3), 360–365 (2014). https://doi.org/10.1016/j.joen.2013.10.003

21. S. Kanagasingam et al., “Diagnostic accuracy of periapical radiography and cone beam computed tomography in detecting apical periodontitis using histopathological findings as a reference standard,” Int. Endod. J., 50(5), 417–426 (2017). https://doi.org/10.1111/iej.12650

22. 

C. Kruse et al., “Diagnostic accuracy of cone beam computed tomography used for assessment of apical periodontitis: an ex vivo histopathological study on human cadavers,” Int. Endod. J., 52 (4), 439 –450 (2019). https://doi.org/10.1111/iej.13020 IENJEA 1365-2591 Google Scholar

23. 

S. Lofthag-Hansen et al., “Limited cone-beam CT and intraoral radiography for the diagnosis of periapical pathology,” Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod., 103 (1), 114 –119 (2007). https://doi.org/10.1016/j.tripleo.2006.01.001 Google Scholar

24. 

K. M. T. Low et al., “Comparison of periapical radiography and limited cone-beam tomography in posterior maxillary teeth referred for apical surgery,” J. Endod., 34 (5), 557 –562 (2008). https://doi.org/10.1016/j.joen.2008.02.022 Google Scholar

25. 

S. Lennon et al., “Diagnostic accuracy of limited-volume cone-beam computed tomography in the detection of periapical bone loss: 360 degree scans versus 180 degree scans,” Int. Endod. J., 44 (12), 1118 –1127 (2011). https://doi.org/10.1111/j.1365-2591.2011.01930.x IENJEA 1365-2591 Google Scholar

26. 

F. Abella et al., “Evaluating the periapical status of teeth with irreversible pulpitis by using cone-beam computed tomography scanning and periapical radiographs,” J. Endod., 38 (12), 1588 –1591 (2012). https://doi.org/10.1016/j.joen.2012.09.003 Google Scholar

27. 

F. Abella et al., “An evaluation of the periapical status of teeth with necrotic pulps using periapical radiography and cone- beam computed tomography,” Int. Endod. J., 47 (4), 387 –396 (2014). https://doi.org/10.1111/iej.12159 IENJEA 1365-2591 Google Scholar

28. 

T. Venskutonis et al., “Accuracy of digital radiography and cone beam computed tomography on periapical radiolucency detection in endodontically treated teeth,” J. Oral Maxillofac. Res., 5 (2), e1 –e1 (2014). https://doi.org/10.5037/jomr.2014.5201 Google Scholar

29. 

M. M. Bornstein et al., “Comparison between radiographic (2-dimensional and 3-dimensional) and histologic findings of periapical lesions treated with apical surgery,” J. Endod., 41 (6), 804 –811 (2015). https://doi.org/10.1016/j.joen.2015.01.015 Google Scholar

30. 

A. Davies et al., “The detection of periapical pathoses in root filled teeth using single and parallax periapical radiographs versus cone beam computed tomography—a clinical study,” Int. Endod. J., 48 (6), 582 –592 (2015). https://doi.org/10.1111/iej.12352 IENJEA 1365-2591 Google Scholar

31. 

J. Weissman et al., “Association between the presence of apical periodontitis and clinical symptoms in endodontic patients using cone-beam computed tomography and periapical radiographs,” J. Endod., 41 (11), 1824 –1829 (2015). https://doi.org/10.1016/j.joen.2015.06.004 Google Scholar

32. 

A. Davies et al., “The detection of periapical pathoses using digital periapical radiography and cone beam computed tomography in endodontically retreated teeth—part 2: a 1 year post-treatment follow-up,” Int. Endod. J., 49 (7), 623 –635 (2016). https://doi.org/10.1111/iej.12500 IENJEA 1365-2591 Google Scholar

33. 

J. T. Beacham et al., “Accuracy of cone-beam computed tomographic image interpretation by endodontists and endodontic residents,” J. Endod., 44 (4), 571 –575 (2018). https://doi.org/10.1016/j.joen.2017.12.012 Google Scholar

34. 

C. Kruse et al., “Impact of cone beam computed tomography on periapical assessment and treatment planning five to eleven years after surgical endodontic retreatment,” Int. Endod. J., 51 (7), 729 –737 (2018). https://doi.org/10.1111/iej.12888 IENJEA 1365-2591 Google Scholar

35. 

L. Chang et al., “Periradicular lesions in cancellous bone can be detected radiographically,” J. Endod., 46 (4), 496 –501 (2020). https://doi.org/10.1016/j.joen.2019.12.013 Google Scholar

36. 

J. F. Cohen et al., “STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration,” BMJ Open, 6 (11), e012799 (2016). https://doi.org/10.1136/bmjopen-2016-012799 Google Scholar

37. 

O. Molven et al., “Periapical changes following root-canal treatment observed 20–27 years postoperatively,” Int. Endod. J., 35 (9), 784 –790 (2002). https://doi.org/10.1046/j.1365-2591.2002.00568.x IENJEA 1365-2591 Google Scholar

38. 

D. P. Chakraborty, “A brief history of free-response receiver operating characteristic paradigm data analysis,” Acad. Radiol., 20 (7), 915 –919 (2013). https://doi.org/10.1016/j.acra.2013.03.001 Google Scholar

39. 

I. A. Scott, P. B. Greenberg and P. J. Poole, “Cautionary tales in the clinical interpretation of studies of diagnostic tests,” Intern Med. J., 38 (2), 120 –129 (2008). https://doi.org/10.1111/j.1445-5994.2007.01436.x Google Scholar

40. 

J. A. Hanley, “Receiver operating characteristic (ROC) methodology: the state of the art,” Crit. Rev. Diagn. Imaging, 29 (3), 307 –335 (1989). Google Scholar

41. 

D. F. Ransohoff and A. R. Feinstein, “Problems of spectrum and bias in evaluating the efficacy of diagnostic tests,” N. Engl. J. Med., 299 (17), 926 –930 (1978). https://doi.org/10.1056/NEJM197810262991705 NEJMAG 0028-4793 Google Scholar

42. 

J. G. Lijmer et al., “Empirical evidence of design-related bias in studies of diagnostic tests,” JAMA, 282 (11), 1061 –1066 (1999). https://doi.org/10.1001/jama.282.11.1061 JAMAAP 0098-7484 Google Scholar

43. 

J. T. Philbrick, R. I. Horwitz and A. R. Feinstein, “Methodologic problems of exercise testing for coronary artery disease: groups, analysis and bias,” Am. J. Cardiol., 46 (5), 807 –812 (1980). https://doi.org/10.1016/0002-9149(80)90432-4 AJNCE4 0258-4425 Google Scholar

44. 

J. Khademi, “The effects of digitization on diagnostic performance in endodontics and periodontics: an ROC study,” Iowa (1994). Google Scholar

45. 

F.W.G. de Paula-Silva et al., “Accuracy of periapical radiography and cone-beam computed tomography scans in diagnosing apical periodontitis using histopathological findings as a gold standard,” J. Endod., 35 (7), 1009 –1012 (2009). https://doi.org/10.1016/j.joen.2009.04.006 Google Scholar

46. 

M. Goldman, A. H. Pearson and N. Darzenta, “Endodontic success—who’s reading the radiograph?,” Oral. Surg. Oral Med. Oral Pathol., 33 (3), 432 –437 (1972). https://doi.org/10.1016/0030-4220(72)90473-2 OSOMAE 0030-4220 Google Scholar

47. 

D. D. Antrim, “Reading the radiograph: a comparison of viewing techniques,” J. Endod., 9 (11), 502 –505 (1983). https://doi.org/10.1016/S0099-2399(83)80167-8 Google Scholar

48. 

J. M. Parker et al., “Cone-beam computed tomography uses in clinical endodontics: observer variability in detecting periapical lesions,” J. Endod., 43 (2), 184 –187 (2017). https://doi.org/10.1016/j.joen.2016.10.007 Google Scholar

49. 

C. Buissink et al., “The influence of experience and training in a group of novice observers: a jackknife alternative free-response receiver operating characteristic analysis,” Radiography, 20 (4), 300 –305 (2014). https://doi.org/10.1016/j.radi.2014.06.016 RADIAO 0033-8281 Google Scholar

50. 

M. Rawashdeh et al., “Markers of good performance in mammography depend on number of annual readings,” Radiology, 269 (1), 61 –67 (2013). https://doi.org/10.1148/radiol.13122581 RADLAX 0033-8419 Google Scholar

51. 

P. Kasprowski, K. Harezlak and S. Kasprowska, “Development of diagnostic performance and visual processing in different types of radiological expertise,” in Proc. ACM Symp. Eye Tracking Res. and Appl., 40 (2018). Google Scholar

52. 

C. E. Metz, “Basic principles of ROC analysis,” Semin. Nucl. Med., 8 (4), 283 –298 (1978). https://doi.org/10.1016/S0001-2998(78)80014-2 SMNMAB 0001-2998 Google Scholar

53. 

A. Khayat, “Histological observations of periradicular healing following root canal treatment,” Aust. Endod. J., 31 (3), 101 –105 (2005). https://doi.org/10.1111/j.1747-4477.2005.tb00313.x Google Scholar

54. 

S. E. Stheeman et al., “Use of the Delphi technique to develop standards for quality assessment in diagnostic radiology,” Commun. Dent. Health, 12 (4), 194 –199 (1995). CDHEES Google Scholar

55. 

C. E. Metz, “ROC analysis in medical imaging: a tutorial review of the literature,” Radiol. Phys. Technol., 1 (1), 2 –12 (2008). https://doi.org/10.1007/s12194-007-0002-1 Google Scholar

56. 

D. P. Chakraborty and K. S. Berbaum, “Observer studies involving detection and localization: modeling, analysis, and validation,” Med. Phys., 31 (8), 2313 –2330 (2004). https://doi.org/10.1118/1.1769352 MPHYA6 0094-2405 Google Scholar

57. 

P. Bunch et al., “A free response approach to the measurement and characterization of radiographic observer performance,” Proc. SPIE, 0127 124 –135 (1977). https://doi.org/10.1117/12.955926 PSISDG 0277-786X Google Scholar

Biographies of the authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Kehn E. Yapp, Patrick C. Brennan, and Ernest U. Ekpo "Endodontic disease detection: digital periapical radiography versus cone-beam computed tomography—a systematic review," Journal of Medical Imaging 8(4), 041205 (24 February 2021). https://doi.org/10.1117/1.JMI.8.4.041205
Received: 3 December 2020; Accepted: 28 January 2021; Published: 24 February 2021
KEYWORDS: Diagnostics, Radiography, Teeth, Imaging systems, Computed tomography, Databases, Bone