Clear identification of bone structures is crucial for ultrasound-guided lumbar interventions, but it is challenging due to the complex shapes of the self-shadowing vertebral anatomy and the extensive speckle noise from the surrounding soft tissue. In this work, we present a method for estimating vertebral bone surfaces using a spatiotemporal U-Net architecture that learns from B-mode images and aggregated feature maps of hand-crafted filters. We further integrate this solution with our patch-like wearable ultrasound system to capture repeating anatomical patterns and to image the bone surfaces from multiple insonification angles, from which 3D bone representations can be created for interventional guidance. The methods are evaluated on spine phantom image data collected with our proposed "Patch" scanner, and a systematic ablation study shows that the proposed architecture improves estimation accuracy. Equipped with this surface estimation network, our wearable ultrasound system can potentially provide intuitive and accurate interventional guidance for clinicians in an augmented reality setting.