Learning 2D Surgical Camera Motion From Demonstrations

Publication Date: August 1, 2018

Learning 2D Surgical Camera Motion From Demonstrations. Jessica J. Ji, Sanjay Krishnan, Vatsal Patel, Danyal Fer, Ken Goldberg. IEEE International Conference on Automation Science and Engineering (CASE), Munich, Germany, August 2018.


Abstract: Automating camera movement during robot-assisted surgery has the potential to reduce the burden on surgeons by removing the need to manually reposition the camera. An important sub-problem is automatic viewpoint selection: proposing camera poses that focus on important anatomical features at the beginning of a task. We use the 6-DoF Stewart Platform Research Kit (SPRK) to simulate and study camera motion in surgical robotics. To collect demonstrations, we link the platform's control directly to the da Vinci Research Kit (dVRK) master control system, allowing the platform to be controlled with the same pedals and tools as a clinical movable endoscope. We propose a probabilistic model that identifies image features that "dwell" close to the camera's focal point in expert demonstrations. Our experiments consider a surgical debridement scenario on silicone phantoms with foreign bodies of varying color and shape. We evaluate how well the system segments candidate debridement targets (box accuracy) and ranks those targets (rank accuracy). After 100 training data points: for debridement of a single uniquely colored foreign body, box accuracy is 80% and rank accuracy is 100%; for multiple foreign bodies of the same color, box accuracy is 70.8% and rank accuracy is 100%; for foreign bodies of a particular shape, box accuracy is 70.5% and rank accuracy is 90%. A demonstration video is available at: https://vimeo.com/260362958
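As a rough illustration of the "dwell" idea, the sketch below scores tracked image features by how long they stay near the camera's focal point across demonstration frames, using a Gaussian proximity kernel. This is a minimal Python sketch under stated assumptions, not the paper's actual probabilistic model: the function names, the use of the image center as a focal-point proxy, and the kernel bandwidth sigma are all illustrative choices.

    import numpy as np

    def dwell_scores(feature_tracks, image_center, sigma=40.0):
        """Score each tracked image feature by how long it 'dwells'
        near the camera's focal point across demonstration frames.

        feature_tracks: dict mapping feature id -> (T, 2) array of
            pixel coordinates over T frames (NaN where not visible).
        image_center: (2,) array, proxy for the camera's focal point.
        sigma: bandwidth (pixels) of the Gaussian proximity kernel.
        """
        scores = {}
        for fid, track in feature_tracks.items():
            track = np.asarray(track, dtype=float)
            visible = ~np.isnan(track).any(axis=1)
            if not visible.any():
                scores[fid] = 0.0
                continue
            d2 = np.sum((track[visible] - image_center) ** 2, axis=1)
            # Average Gaussian proximity over visible frames: features
            # that stay near the focal point accumulate a high score.
            scores[fid] = float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        return scores

    # Example: rank two hypothetical feature tracks in a 640x480 image.
    center = np.array([320.0, 240.0])
    tracks = {
        "lesion": np.tile(center + [5.0, -3.0], (100, 1)),  # dwells near center
        "clutter": np.tile([50.0, 400.0], (100, 1)),        # stays peripheral
    }
    ranked = sorted(dwell_scores(tracks, center).items(),
                    key=lambda kv: kv[1], reverse=True)
    print(ranked)  # "lesion" ranks first

Features with high dwell scores would then be candidates for the viewpoint the camera should propose at the start of a task.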
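The box- and rank-accuracy numbers reported above can be read as standard detection and ranking metrics. The abstract does not define them precisely, so the rendering below is an assumption: box accuracy as the fraction of proposed boxes whose intersection-over-union (IoU) with a ground-truth box exceeds a threshold, and rank accuracy as the fraction of trials where the top-ranked candidate is the true target.

    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    def box_accuracy(predicted, ground_truth, thresh=0.5):
        """Fraction of predicted boxes that overlap some ground-truth
        box with IoU above the threshold."""
        hits = sum(any(iou(p, g) >= thresh for g in ground_truth)
                   for p in predicted)
        return hits / len(predicted)

    def rank_accuracy(trials):
        """Fraction of trials whose top-ranked candidate is the true
        target. `trials` is a list of (ranked_ids, true_id) pairs."""
        return sum(ranked[0] == true for ranked, true in trials) / len(trials)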