Live tracked ultrasound processing with PyTorch
Key Investigators
- Tamas Ungi (Queen's University)
- Rebecca Hisey (Queen's University)
- Róbert Szabó (Queen's University / Óbuda University)
- Colton Barr (Queen's University / BWH)
- Tina Kapur (Brigham and Women's Hospital)
Presenter location: In-person
Project Description
Our previous code for training and deploying real-time ultrasound segmentation was based on TensorFlow. Example project:
https://youtu.be/WyscpAee3vw
The goal for this project week is to provide a new open-source implementation built on PyTorch and modern AI tools such as MONAI and wandb. A Slicer module will also be provided to deploy trained models on recorded or live ultrasound streams.
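As a rough illustration of the intended stack, the sketch below trains a MONAI UNet with wandb logging. It is a minimal sketch, assuming frames and labels have already been exported as NumPy arrays; the file names, network configuration, and hyperparameters are placeholders, not the project's actual settings.

```python
# Minimal training sketch: MONAI UNet with wandb logging on exported frames.
# File names, network configuration, and hyperparameters are placeholders.
import numpy as np
import torch
import wandb
from monai.losses import DiceLoss
from monai.networks.nets import UNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

frames = torch.from_numpy(np.load("frames.npy")) / 255.0  # (N, 1, H, W) intensities
labels = torch.from_numpy(np.load("labels.npy"))          # (N, 1, H, W) binary masks
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(frames, labels), batch_size=16, shuffle=True
)

model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=1,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
).to(device)
loss_fn = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

wandb.init(project="ultrasound-segmentation")  # hypothetical project name
for epoch in range(10):
    epoch_loss = 0.0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    wandb.log({"epoch": epoch, "loss": epoch_loss / len(loader)})

torch.jit.script(model).save("model.pt")  # TorchScript export for deployment
```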
Objective
- Export annotated ultrasound and tracking data for training (a sketch of the export step follows this list)
- Example code for training
- Slicer module to run trained models on ultrasound data in Slicer
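A minimal sketch of the first objective, assuming the data were recorded with a sequence browser and the snippet is run from Slicer's Python console; all node names are placeholders for whatever the scene actually contains, and annotation masks would be exported the same way from a labelmap proxy node.

```python
# Export sketch (Slicer Python console): step through a recorded ultrasound
# sequence and save each frame plus its tracking transform as NumPy arrays.
# Node names are placeholders; match them to the nodes in your scene.
import numpy as np
import vtk
import slicer

browser = slicer.util.getNode("SequenceBrowser")          # hypothetical node name
image_node = slicer.util.getNode("Image_Image")           # ultrasound image proxy
transform_node = slicer.util.getNode("ImageToReference")  # tracking transform

frames, transforms = [], []
for i in range(browser.GetNumberOfItems()):
    browser.SetSelectedItemNumber(i)  # updates the proxy nodes to frame i
    frames.append(slicer.util.arrayFromVolume(image_node).copy())
    m = vtk.vtkMatrix4x4()
    transform_node.GetMatrixTransformToParent(m)
    transforms.append(slicer.util.arrayFromVTKMatrix(m))

np.save("frames.npy", np.array(frames, dtype=np.float32))
np.save("transforms.npy", np.array(transforms, dtype=np.float32))
```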
Approach and Plan
- All data processing and training code will be here: https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation
- Slicer module will be here: https://github.com/SlicerIGT/aigt/tree/master/SlicerExtension/LiveUltrasoundAi/TorchLiveUs
Progress and Next Steps
- Model training and testing are implemented in this repository: https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation
- Successfully used RunNeuralNet from DeepLearnLive to run a trained PyTorch segmentation model on live ultrasound data. OpenIGTLink data transfer is a good way to run AI models in a separate process alongside Slicer (a rough sketch of this pattern follows this list). https://github.com/SlicerIGT/aigt/tree/master/DeepLearnLive/RunNeuralNet
- We still need precise performance measurements to find the maximum frame rate we can handle from an ultrasound scanner, and to explore how AI model size affects accuracy and speed (see the benchmark sketch below).
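The OpenIGTLink pattern described above could look roughly like the sketch below, written against the pyigtl package rather than RunNeuralNet itself: a standalone Python process receives image messages, runs the model, and streams predictions back for Slicer to display. The host, ports, device names, and model file are all placeholders.

```python
# Sketch of running inference in a separate process, exchanging images with
# Slicer over OpenIGTLink via pyigtl. Ports and device names are placeholders;
# match them to the OpenIGTLinkIF connector configuration.
import numpy as np
import pyigtl
import torch

model = torch.jit.load("model.pt").eval()  # hypothetical TorchScript model

client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)  # ultrasound source
server = pyigtl.OpenIGTLinkServer(port=18945)                    # predictions to Slicer

while True:
    message = client.wait_for_message("Image_Image", timeout=5)  # placeholder device
    if message is None:
        continue  # no new frame within the timeout
    frame = message.image.astype(np.float32) / 255.0             # (1, H, W) voxels
    with torch.no_grad():
        pred = torch.sigmoid(model(torch.from_numpy(frame).unsqueeze(0)))
    mask = (pred.squeeze(0).numpy() > 0.5).astype(np.uint8)
    server.send_message(pyigtl.ImageMessage(mask, device_name="Prediction"))
```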
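For the frame-rate question, a first-order estimate can come from timing repeated forward passes at the scanner's frame size. This sketch assumes a 512×512 single-channel frame and a TorchScript model file, and deliberately ignores OpenIGTLink transfer and pre/post-processing overhead, which a real measurement would have to include.

```python
# Rough inference-rate benchmark on dummy frames. Frame size, iteration counts,
# and the model file are placeholders for the real setup.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("model.pt").eval().to(device)  # hypothetical model file
frame = torch.rand(1, 1, 512, 512, device=device)     # assumed scanner frame size

with torch.no_grad():
    for _ in range(10):  # warm-up so lazy initialization is excluded
        model(frame)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before timing
    start = time.perf_counter()
    n = 200
    for _ in range(n):
        model(frame)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{n / elapsed:.1f} frames/s, {1000 * elapsed / n:.2f} ms/frame")
```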
Illustrations
Background and References