Hand x-rays are used for tasks such as detecting fractures and investigating joint pain. The choice of x-ray view plays a crucial role in a medical expert’s ability to make an accurate diagnosis. This is particularly important for the hand, where the small, overlapping carpal bones can make diagnosis challenging even with proper positioning. In this study, we develop a prototype that uses deep learning models, iterative methods, and a depth sensor to estimate hand and x-ray machine parameters, which are then used to generate feedback that helps ensure proper radiographic hand positioning. Our method consists of five steps: detector plane parameter estimation, 2D hand joint landmark prediction, hand joint landmark depth estimation, radiographic positioning parameter extraction, and radiographic protocol constraint verification. Detector plane parameters are estimated by fitting a plane to randomly queried depth points with RANSAC. Google’s MediaPipe HandPose model predicts the 2D hand joint landmarks, and the corresponding landmark depths are measured with the OAK-D Pro sensor. Finally, hand positioning parameters are extracted and checked against the constraints of the selected radiographic viewing protocol. We focus on three commonly used hand positioning protocols: the posteroanterior, oblique, and lateral views. The prototype also includes a user interface and a feedback system designed for practical use in the x-ray room. We undertake two evaluations to validate the prototype. First, with the help of a radiology technician, we rate the quality of the tool’s positioning feedback. Second, using a bespoke left-hand x-ray phantom and an x-ray machine, we acquire images with and without the prototype’s guidance for a double-blind study in which the images are rated by a radiologist.
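To make the detector plane estimation step concrete, the following is a minimal sketch of RANSAC plane fitting to randomly queried depth points, assuming the depth sensor returns an (N, 3) array of 3D points in metres. The function name, iteration count, and inlier threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.005, rng=None):
    """Fit a plane to 3D points with RANSAC; returns (normal, d) for n·x + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        # Sample three points and form a candidate plane from them.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within the inlier distance of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = int((dist < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

if __name__ == "__main__":
    # Synthetic test: noisy depth points on the z = 0 plane.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 0.4, 2000),
                           rng.uniform(0, 0.3, 2000),
                           rng.normal(0, 0.002, 2000)])
    normal, d = fit_plane_ransac(pts)
    print("estimated normal:", normal)  # expected ≈ (0, 0, ±1)
```

Once the detector plane normal is known, positioning parameters such as the tilt of the hand relative to the detector can be derived from the landmark depths and compared against the tolerances of the selected protocol.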