Researchers from the School of Biomedical Engineering & Imaging Sciences have used artificially generated data to train a machine learning model to track the motion of retinal points despite occlusions, a step towards safer retinal surgery.
Mr Claudio Ravasio, a researcher in the Department of Surgical & Interventional Engineering, created synthetic data from retinal images selected from a large dataset of intraoperative videos.
“I selected a number of clear images with an unobstructed retinal fundus (retina), cut out a few surgical instrument shapes to serve as templates, and used both in quite a complex algorithm that can create synthetic data based on that,” he said.
“We can use this data to train a machine learning model which is then able to track motions in real intraoperative videos.”
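As a rough illustration of this kind of data synthesis (a hypothetical sketch with assumed names and parameters, not the published algorithm), an instrument silhouette cut from surgical video can be pasted onto a clear fundus image to produce one synthetic training frame:

```python
import numpy as np

def composite_tool(fundus, template, top_left):
    """Hypothetical helper: paste an instrument silhouette onto a clear
    fundus image to create one synthetic training frame.

    fundus:   HxW greyscale retinal image, values in [0, 1)
    template: hxw array; non-zero pixels mark the instrument shape
    top_left: (row, col) where the template is placed
    """
    h, w = template.shape
    y, x = top_left
    out = fundus.copy()
    region = out[y:y + h, x:x + w]          # view into the output image
    # Darken instrument pixels to mimic a tool occluding the retina
    # (the 0.1 factor is an arbitrary choice for this sketch).
    region[template > 0] = 0.1 * region[template > 0]
    return out

# Toy usage: random values stand in for a real fundus image.
rng = np.random.default_rng(42)
fundus = rng.random((128, 128)).astype(np.float32)
template = np.zeros((30, 10), dtype=np.uint8)
template[:, 3:7] = 1                        # crude shaft-shaped "instrument"
synthetic = composite_tool(fundus, template, top_left=(40, 60))
```

A real pipeline would add randomised tool placement, shadows, blur and illumination changes so the model sees the variability of genuine intraoperative footage.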
The development of such an algorithm has the potential to help surgeons gain more information about the retina as they’re operating. If particular points of interest can be tracked robustly, augmented reality could be applied to increase the information available to the surgeon, such as merging a second imaging modality.
Together with novel robotic tools in development in the Robotics and Vision in Medicine lab led by Dr Christos Bergeles, this will also enable new therapies to be delivered to patients, combating many widespread conditions such as Age-related Macular Degeneration (AMD).
“There’s potentially a huge impact once robotic-tool-enhanced therapies enter the OR,” Mr Ravasio said.
The difficulty in tracking retinal points lies in the quality of the images acquired during surgery.
Currently, vitreoretinal surgery - keyhole or minimally invasive surgery to treat eye problems involving the retina, macula, and vitreous fluid - is performed entirely manually.
During these operations, a light pipe consisting of a fiberglass thread is used to shine light into the eye. As a result, illumination levels can change considerably as the pipe is moved around, while at the same time the image can be out of focus or contain motion blur and other imaging artefacts.
“We see the same images that a surgeon would see during an operation because essentially, we have a frame-grabber, a camera that records through the same microscope that the surgeon also looks through,” Mr Ravasio said.
“The instruments moving around cast shadows on the retina, and from a computer’s perspective, they look like a separate moving object,” he added.
While traditional methods offer little control over what the algorithm focuses on, machine-learning-based methods can address these issues by leveraging synthetic data specifically adapted to the task.
“Exploring how this optical flow prediction works on retinal images when you suppress certain things in the synthetic data is novel at this stage,” Mr Ravasio said.
“We’ve shown that we can indeed use synthetic data to make the whole system work on real data, and we can steer it in certain directions – for example, we can teach it to ignore tool and shadow motions and track retinal points underneath such occlusions. That’s the novelty aspect of it.”
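The “steering” Mr Ravasio describes can be illustrated with a toy example (a sketch under assumed names, not the authors’ pipeline): a synthetic training pair in which the retina moves by a known shift, a static instrument occludes part of both frames, and the ground-truth optical flow records the retinal motion even underneath the occlusion:

```python
import numpy as np

def make_training_pair(fundus, tool_mask, shift):
    """Hypothetical sketch: build a synthetic frame pair with known
    retinal motion and a tool occlusion.

    fundus:    HxW greyscale retinal image
    tool_mask: HxW boolean mask marking an instrument silhouette
    shift:     (dy, dx) ground-truth retinal motion in pixels
    """
    dy, dx = shift
    # Second frame: the whole retina shifts by (dy, dx).
    frame2 = np.roll(fundus, (dy, dx), axis=(0, 1))
    # Paste a *static* instrument on top of both frames.
    frame1 = fundus.copy()
    frame1[tool_mask] = 0.0
    frame2[tool_mask] = 0.0
    # Ground-truth flow is the retinal motion everywhere, including
    # under the tool -- training on such labels teaches the model to
    # ignore tool/shadow motion and track the retina beneath.
    flow = np.zeros(fundus.shape + (2,), dtype=np.float32)
    flow[..., 0] = dy
    flow[..., 1] = dx
    return frame1, frame2, flow

# Toy usage with random data standing in for a real fundus image.
rng = np.random.default_rng(0)
fundus = rng.random((64, 64)).astype(np.float32)
tool = np.zeros((64, 64), dtype=bool)
tool[20:40, 28:36] = True            # crude rectangular "instrument"
f1, f2, gt_flow = make_training_pair(fundus, tool, shift=(3, 5))
```

Because the label under the occlusion is the retinal motion rather than the (zero) tool motion, a model trained on such pairs learns to report where the hidden retinal points went, which is the behaviour demonstrated on real intraoperative videos.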
Dr Bergeles said: "We are thankful for the support of NIHR Communities in this research. We hope to be able to evaluate this technology in the clinic in advanced interventions."