Surgical phase recognition and video labelling to improve outcomes in robot-assisted radical prostatectomy

Project reference: SIE_05_21  
First supervisor: Sebastien Ourselin 
Second supervisor: Prokar Dasgupta 

Start date: October 2021

Project summary: At our centre, regarded as one of the highest-volume robotic surgery institutes in Europe, we generate many thousands of hours of video recordings during complex surgery. This project will aim to label specific parts of videos from robot-assisted radical prostatectomy and use deep learning to understand the surgical workflow, enhance team training and improve clinically relevant outcomes.

Project description:

This project will establish an industry-clinical-academic partnership on automatic surgical phase recognition, video labelling, and the development of performance metrics. The index procedure chosen for this purpose is robot-assisted radical prostatectomy (RARP).

Prostate cancer is the most common cancer in men, now causing more deaths in men than breast cancer causes in women. Many men with localised or locally advanced prostate cancer are treated surgically with RARP, which is the most common robotic surgical procedure worldwide.

The King’s Institute of Robotic Surgery at Guy’s Hospital houses three clinical robots and five high-volume surgeons performing 8-10 RARPs every week. Each procedure generates 2-3 hours of video, recorded to hard disk and CD. Overarching ethical approval for data science and the use of such anonymised video is already in place through KCL.

The project will perform surgical phase recognition and video labelling of key steps of RARP such as anterior and posterior dissection, nerve sparing, apical dissection, and anastomosis. In addition, external videos of the entire operating room will be recorded simultaneously to study the interactions between the robotic surgeon and the rest of the team.
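As a rough illustration only, surgical phase recognition is commonly framed as frame-level classification with a convolutional backbone, with a temporal model later smoothing predictions across the procedure. The sketch below assumes a PyTorch/torchvision environment; the phase list mirrors the steps named above, but the backbone, class names and hyperparameters are illustrative assumptions, not the project's committed design.

```python
# Minimal sketch of frame-level surgical phase recognition (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical label set, mirroring the RARP steps listed above.
PHASES = [
    "anterior_dissection",
    "posterior_dissection",
    "nerve_sparing",
    "apical_dissection",
    "anastomosis",
]

class PhaseClassifier(nn.Module):
    """ResNet-101 backbone with a linear head scoring each phase per frame."""

    def __init__(self, num_phases: int = len(PHASES)):
        super().__init__()
        # weights=None for brevity; in practice one would likely start from
        # ImageNet-pretrained weights (torchvision >= 0.13 API).
        self.backbone = models.resnet101(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_phases)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) RGB frames -> (batch, num_phases) logits
        return self.backbone(frames)

if __name__ == "__main__":
    model = PhaseClassifier()
    logits = model(torch.randn(4, 3, 224, 224))  # four dummy frames
    print(logits.shape)  # torch.Size([4, 5])
```

Per-frame logits would typically be smoothed with a temporal model (for example an LSTM or a temporal convolutional network) before phase boundaries are extracted.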

Using a deep learning approach, 100 videos will be used for training and 400 will be held out as the test set.
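A minimal sketch of how such a split might be drawn reproducibly, assuming one video per case and a fixed random seed; the identifier naming scheme below is hypothetical, while the counts follow the 100/400 split stated above.

```python
# Reproducible case-level train/test split (illustrative only).
import random

def split_videos(video_ids: list[str], n_train: int = 100, seed: int = 42):
    """Shuffle case identifiers and reserve the first n_train for training;
    the remainder form the held-out test set."""
    rng = random.Random(seed)
    ids = sorted(video_ids)  # deterministic starting order
    rng.shuffle(ids)
    return ids[:n_train], ids[n_train:]

# Hypothetical identifiers for the 500 recorded procedures.
train_ids, test_ids = split_videos([f"rarp_{i:03d}" for i in range(500)])
assert len(train_ids) == 100 and len(test_ids) == 400
```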

Tracking of the robotic instruments and team interactions will aid understanding of the surgical workflow and will be correlated with clinical outcomes such as surgical margins, urinary continence at 3 months, and erectile function at 12 months, as measured with validated questionnaires.
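One illustrative way to test such a correlation, assuming per-case workflow summaries and questionnaire-derived outcomes are available; the metric name, data values and choice of test below are placeholders, not project data or the project's committed analysis plan.

```python
# Relating a continuous workflow metric to a binary outcome (illustrative only).
from scipy.stats import pointbiserialr

# Hypothetical per-case data: nerve-sparing phase duration (minutes) and
# urinary continence at 3 months (1 = continent, 0 = not continent).
nerve_sparing_minutes = [22.5, 31.0, 18.2, 27.4, 35.1, 20.0]
continent_at_3_months = [1, 0, 1, 1, 0, 1]

# Point-biserial correlation between a binary and a continuous variable.
r, p_value = pointbiserialr(continent_at_3_months, nerve_sparing_minutes)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```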

Previous work in this emerging field includes analysis of fetoscopic videos with a deep learning approach using a ResNet-101 architecture, polyp detection during colonoscopy, diabetic retinopathy and breast cancer detection with reinforcement learning by DeepMind, and automated performance metrics using the da Vinci black-box recorder.
