Multimodal AI for real-time intraoperative performance assessment and outcome prediction in robotic-assisted radical prostatectomy

This PhD will develop and validate an automated framework for intraoperative performance assessment during robotic-assisted radical prostatectomy (RARP), directly linking technical skill to postoperative functional and oncological outcomes. Using multimodal inputs (robotic kinematics, instrument-tracking data and endoscopic video), the project will quantify surgeon hand motion and tool–tissue interactions across key tasks such as suturing, dissection and knot-tying. Bench-top phantom exercises performed by surgeons across a range of expertise will provide a controlled environment in which to characterise motion signatures and train predictive algorithms, which will then be adapted to, and validated on, intraoperative surgical videos. Longitudinal modelling will capture individual learning curves, while edge-based real-time inference will support intraoperative feedback and next-step prediction. Integration of additional clinical and patient-specific factors, including tumour location, will enable combined technical–clinical models to predict outcomes such as continence, erectile function, positive surgical margins and extracapsular extension. The ultimate goal is a real-time, outcome-driven assessment and feedback system to improve training, credentialing and quality assurance in robotic urological surgery.
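As an illustration of the motion-quantification step described above, skill metrics such as path length, mean tool-tip speed and movement smoothness are commonly derived from robot kinematics. The sketch below is a minimal, hypothetical example (not the project's actual pipeline): it computes path length, mean speed and one common variant of the log dimensionless jerk smoothness measure from a sampled tool-tip trajectory.

```python
import numpy as np

def motion_metrics(positions: np.ndarray, dt: float) -> dict:
    """Basic kinematic skill metrics from a tool-tip trajectory.

    positions: (N, 3) array of tool-tip coordinates in metres,
               sampled uniformly every dt seconds.
    """
    vel = np.gradient(positions, dt, axis=0)    # velocity, m/s
    acc = np.gradient(vel, dt, axis=0)          # acceleration, m/s^2
    jerk = np.gradient(acc, dt, axis=0)         # jerk, m/s^3
    speed = np.linalg.norm(vel, axis=1)

    # Total distance travelled by the tool tip.
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    duration = len(positions) * dt

    # Log dimensionless jerk (one common scaling: duration^5 / path_length^2).
    # Larger (less negative) values indicate smoother motion.
    squared_jerk = np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)
    ldj = -np.log(squared_jerk * duration ** 5 / path_length ** 2)

    return {
        "path_length_m": float(path_length),
        "mean_speed_mps": float(speed.mean()),
        "log_dimensionless_jerk": float(ldj),
    }
```

In practice, such hand-crafted metrics would serve as baseline features alongside learned representations when training the predictive models on phantom and intraoperative data.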