Surgical robots

At the Hamlyn Centre, we believe that future medical robotics research should focus on the design of lightweight, cost-effective, flexible manipulators with a minimal footprint in the operating theatre.

Such surgical robots are intrinsically complex and intelligent, yet simple, lightweight and natural to use, with seamless user control. They should enhance the current surgical workflow, rather than alter it radically or become a hindrance to normal procedures. To this end, we are currently working towards the development of a new generation of miniaturised, intelligent mechatronic devices and robots for flexible access surgery, as well as investigating new techniques for providing synergistic control between the surgeon and the robot.


Research themes

One of our research themes is the development of fully articulated robots incorporating integrated imaging and sensing, with interchangeable instruments, for both intraluminal and transluminal surgical procedures. One example platform developed by the Centre is the i-Snake® (Imaging-Sensing Navigated And Kinematically Enhanced) surgical robot for MIS. It uses a biologically inspired mechanical design with fully flexible locomotion control, allowing the device to reach otherwise inaccessible anatomical regions with enhanced precision and dexterity. In particular, the design of the joint module is based on a novel hybrid micromotor-tendon joint mechanism, offering a modular configuration for a lightweight, articulated flexible access device. The i-Snake® robot is continually evolving and is being adapted for a range of procedures that require navigation of curved intraluminal or extraluminal anatomical pathways.
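To make the idea of a modular chain of joint modules concrete, the sketch below composes a sequence of relative bending angles into a tip pose using a generic planar serial-link model. The link length, joint angles and the planar simplification are illustrative assumptions, not the i-Snake®'s actual hybrid micromotor-tendon kinematics.

```python
import numpy as np

def planar_forward_kinematics(joint_angles, link_length=0.02):
    """Compose a chain of identical planar joint modules into a tip pose.

    joint_angles : relative bending angle of each module, in radians
    link_length  : assumed length of each module in metres (illustrative)
    Returns (x, y, heading) of the distal tip in the base frame.
    """
    x, y, heading = 0.0, 0.0, 0.0
    for theta in joint_angles:
        heading += theta                    # each module bends relative to the previous one
        x += link_length * np.cos(heading)  # advance the tip along the new heading
        y += link_length * np.sin(heading)
    return x, y, heading

# Example: a gentle, uniform bend distributed over ten modules
x, y, heading = planar_forward_kinematics(np.full(10, np.deg2rad(9.0)))
print(f"tip at ({x:.3f} m, {y:.3f} m), heading {np.rad2deg(heading):.0f} deg")
```

Distributing the bend over many short modules is what allows such a device to follow curved anatomical pathways that a rigid instrument cannot.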

One of the main goals of medical robotics research at the Hamlyn Centre is to develop smart, functional tools that enhance the abilities of surgeons and are smaller, cheaper and easier to integrate into the existing surgical workflow. Among these, robotic hand-held surgical instruments are enhanced mechatronic tools with a certain degree of intelligence and autonomy. They assist surgeons by enabling ultra-precise manoeuvres during micro-surgical tasks, supporting delicate tissue manipulation involving micromanipulation forces, and constraining the level of interaction with the operative field for improved safety and consistency. Several hand-held devices have already been developed by our team and are used in tasks that are beyond human precision, steadiness and dexterity.
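One generic way such instruments can achieve ultra-precise manoeuvres is to combine motion scaling with tremor suppression; the sketch below illustrates that idea with a first-order low-pass filter and a fixed scaling factor. The cut-off frequency, scaling factor and sample rate are illustrative assumptions, not parameters of the Centre's devices.

```python
import numpy as np

def stabilise_motion(raw_positions, dt=0.001, cutoff_hz=2.0, scale=0.2):
    """Scale down and low-pass-filter a sensed hand trajectory.

    raw_positions : (N, 3) array of sensed handle positions in metres
    dt            : sample period in seconds (assumes 1 kHz sensing)
    cutoff_hz     : low-pass cut-off; physiological tremor sits around 8-12 Hz
    scale         : motion scaling factor applied to hand displacements
    Returns an (N, 3) array of commanded tool-tip positions.
    """
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))  # first-order filter coefficient
    filtered = np.empty_like(raw_positions, dtype=float)
    filtered[0] = raw_positions[0]
    for k in range(1, len(raw_positions)):
        # Exponential smoothing attenuates high-frequency tremor components
        filtered[k] = filtered[k - 1] + alpha * (raw_positions[k] - filtered[k - 1])
    # Scale displacements about the starting pose so large hand motions
    # map to small, precise tool-tip motions
    return filtered[0] + scale * (filtered - filtered[0])
```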

With recent developments in smart materials and composites, the Centre has focussed significant effort on the development of continuum robots and their application to endoluminal procedures. Rather than using rigid links, these robots exploit new materials and typically bio-inspired mechanical designs to offer continuous curvature, articulation and navigation, while posing new design and control challenges. Current examples include a continuum catheter robot for endovascular intervention, a miniaturised ophthalmic surgical platform based on a concentric tube robot, and flexible bimanual micro-surgical manipulators for neurosurgical procedures. Issues related to design, manufacturing, modelling and control are key topics being addressed by our research team.
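A widely used way to model such continuously curving sections, offered here only as an illustrative approximation rather than the models used for the platforms above, is the constant-curvature assumption: each section is treated as a circular arc parameterised by curvature, bending plane and arc length. The sketch below maps those arc parameters to a tip position; the numerical values in the example are made up.

```python
import numpy as np

def constant_curvature_tip(kappa, phi, length):
    """Tip position of a single constant-curvature continuum section.

    kappa  : curvature in 1/m (0 gives a straight section)
    phi    : bending-plane angle about the base axis, in radians
    length : arc length of the section in metres
    Returns the (x, y, z) tip position in the section's base frame.
    """
    if np.isclose(kappa, 0.0):
        return np.array([0.0, 0.0, length])           # straight section
    r = 1.0 / kappa                                    # bend radius
    in_plane = r * (1.0 - np.cos(kappa * length))      # in-plane offset of the arc end
    z = r * np.sin(kappa * length)                     # height of the arc end
    # Rotate the bending plane about the base z-axis by phi
    return np.array([in_plane * np.cos(phi), in_plane * np.sin(phi), z])

# Example: a 40 mm section bent to a 50 mm radius in the x-z plane
print(constant_curvature_tip(kappa=20.0, phi=0.0, length=0.04))
```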

'Perceptual docking' is a novel approach to synergistic control introduced by Professor Guang-Zhong Yang and his team in 2006 with the aim of achieving seamlessly shared control between the surgeon and the robot. The fundamental idea is to gain knowledge of subject-specific motor and perceptual behaviour through in situ sensing. One example is to integrate eye tracking as an additional human-robot interface modality and perform gaze-contingent attention selection. Within the gaze-contingent perceptual docking framework, information derived in situ from the subject's eye gaze using binocular eye tracking has been used effectively to recover the 3-D motion and deformation of soft tissue, aiding surgical navigation and focussed energy delivery.

The same idea has been used to achieve hands-free control of an articulated robotic probe. Through motor channelling, real-time binocular eye tracking can be exploited to improve the performance and accuracy of robotic manipulation. This is achieved by generating a haptic force whose intensity is proportional to the separation between the fixation point and the instrument tip. This effectively bridges the visual and motor modalities through a perceptually enabled channel, alleviating both the burden of instrument control and the cognitive demand on the surgeon. The latest development of the gaze-contingent framework is the integration of a real-time gaze gesture recognition method, which has been used to control a KUKA lightweight robotic arm equipped with a laparoscopic camera to achieve seamless and intuitive hands-free control during surgery.
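The motor-channelling idea described above can be stated compactly: the guidance force grows in proportion to the separation between the gaze fixation point and the instrument tip. The sketch below is a minimal interpretation of that proportional law; the gain and saturation limit are illustrative assumptions, not the published controller.

```python
import numpy as np

def gaze_contingent_force(fixation_point, instrument_tip, gain=5.0, max_force=3.0):
    """Haptic force pulling the instrument tip towards the gaze fixation point.

    fixation_point : (3,) fixation point recovered from binocular eye tracking
    instrument_tip : (3,) current instrument tip position, same frame and units
    gain           : stiffness-like gain in N/m (illustrative value)
    max_force      : saturation limit in N that keeps the cue gentle
    Returns a (3,) force vector whose magnitude grows with the separation.
    """
    error = np.asarray(fixation_point, dtype=float) - np.asarray(instrument_tip, dtype=float)
    distance = np.linalg.norm(error)
    if distance < 1e-9:
        return np.zeros(3)                         # tip is already at the fixation point
    magnitude = min(gain * distance, max_force)    # proportional to separation, then saturated
    return magnitude * (error / distance)          # directed towards the fixation point

# Example: a fixation point 20 mm to the right of the tip yields a small corrective pull
print(gaze_contingent_force([0.02, 0.0, 0.0], [0.0, 0.0, 0.0]))
```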

In order to enhance the safety of robot-assisted procedures, we are exploring new ways of providing cooperative control between the surgeon and the robot. In this context, the robot can perform certain surgical tasks autonomously under the supervision of the surgeon. With continuing advances in back-drivable robots with fully controllable, variable stiffness, it is now possible to gain knowledge in situ directly from the surgeon through learning by demonstration, rather than pre-programming. In this case, a robot can collect information from a surgeon's demonstrated trajectories and extract knowledge for improved execution of surgical tasks. The ability of the robot to learn in situ from the operator also offers the opportunity for seamless integration of dynamic active constraints: forbidden anatomical regions surrounding the target operative area can be determined implicitly from the trajectories described by the instrument when driven by a skilled surgeon.
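As a rough illustration of how constraints might be inferred implicitly from demonstrations, the sketch below builds a simple safe corridor from several time-aligned expert trajectories: the mean path plus an allowed per-step deviation, with anything outside it treated as a constrained region. This is a deliberately simplified stand-in for dynamic active constraints, with made-up tolerances rather than the Centre's actual formulation.

```python
import numpy as np

def build_safe_corridor(demonstrations, margin=0.002):
    """Derive a safe corridor from demonstrated instrument trajectories.

    demonstrations : list of (N, 3) arrays, time-aligned expert trajectories in metres
    margin         : extra clearance in metres added to the observed spread
    Returns (mean_path, radius): the average path and the allowed deviation at each step.
    """
    stacked = np.stack(demonstrations)                    # (D, N, 3)
    mean_path = stacked.mean(axis=0)                      # average expert path, (N, 3)
    spread = np.linalg.norm(stacked - mean_path, axis=2)  # deviation of each demo, (D, N)
    radius = spread.max(axis=0) + margin                  # per-step corridor radius, (N,)
    return mean_path, radius

def violates_constraint(tip_position, step, mean_path, radius):
    """True if the instrument tip has left the corridor at the given path step."""
    return np.linalg.norm(np.asarray(tip_position) - mean_path[step]) > radius[step]
```

At run time, a cooperative controller could use such a check to stiffen the robot or generate a resistive force whenever the tip drifts outside the demonstrated corridor.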