The Swiss National Science Foundation has established a National Centre of Competence in Research (NCCR) for COmputer aided and image guided MEdical interventions (CO-ME). The objective of this program is to investigate, understand, and demonstrate the potential that information technology offers to improve medical treatment.
Our goal is to enable surgeons to benefit from pre- and intra-operative data during minimally invasive surgery. Traditional Computer Assisted Surgery (CAS) techniques are not effective for minimally invasive surgery, especially when soft tissues (i.e. deformable objects) are involved. We are therefore investigating 3D model reconstruction (from CT, MRI, etc.), 3D model updates from intra-operative data and their fusion with preoperative data, and 3D intra-operative positioning techniques using augmented reality (video with graphical overlays and haptic feedback). Our research focuses on:
- Real-time processing of deformable, 3D models of organs (including internal 3D structure)
- Multi-sensor navigation using both visual and haptic feedback within these 3D models (surface and internal structures)
- Fusion of multiple pre- and per-operative imaging modalities, with emphasis on per-operative data from 2D/3D endoscopic images and ultrasound
It is standard medical practice for surgeons to analyze preoperative imaging data carefully and intensively for planning purposes. Although these data contain complete 3D information, they are often not available during the operation itself. We intend to use them as direct input for visual and tactile feedback, helping to guarantee the successful completion of complex surgical procedures.
The surgeon’s ability to perform interventions is limited in accuracy and repeatability, and by what the surgeon’s senses can perceive. For example, in an oncologic liver resection or a related liver transplantation, combining patient-specific pre-operative data with images obtained during surgery would allow very accurate cuts to remove the tumor.
The software package we will supply to surgeons, via our partners among endoscope manufacturers, contains three modules (processing stages). First, a 3D model is reconstructed from 2D DICOM files. In the operating room, the second module fuses this 3D model with endoscopic images. Finally, the third module updates the fusion in real time. This package guides the surgeon throughout the entire intervention and therefore allows a resection optimized with respect to vascularization criteria.
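As an illustration of the first processing stage, the sketch below stacks decoded 2D slices into a 3D volume, ordering them by position along the scan axis. In practice the pixel data and positions would come from the DICOM files themselves (e.g., via a DICOM parsing library); here they are simulated with small arrays, so this is a sketch of the idea rather than the project's implementation.

```python
import numpy as np

def stack_slices(slices):
    """Build a 3D volume from decoded 2D slices.

    `slices` is a list of (z_position, pixel_array) pairs, such as a
    DICOM reader would provide. Slices are sorted by their position
    along the scan axis so the volume's first index follows anatomy,
    even if the files were read out of acquisition order.
    """
    ordered = sorted(slices, key=lambda s: s[0])
    return np.stack([pixels for _, pixels in ordered], axis=0)

# Synthetic stand-in for three 4x4 CT slices read out of order.
slices = [(10.0, np.full((4, 4), 2)),
          (0.0,  np.full((4, 4), 0)),
          (5.0,  np.full((4, 4), 1))]
volume = stack_slices(slices)
print(volume.shape)                      # (3, 4, 4)
print(volume[0, 0, 0], volume[2, 0, 0])  # 0 2
```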
The package will be the basis for an active interface in which surgical instruments mounted on an electro-mechanical device generate tactile feedback. Consequently, the tactile feedback will provide a stimulus to the surgeon that is based on the previously registered 3D model. This feedback will help the surgeon navigate the instruments through the internal structure of the target organ.
Achievements to date
3D model generation. The first software module (automatic 3D surface model generation for orthopaedics) was relatively easy to develop, since bone always produces good contrast in CT. To date, we have clearly demonstrated that our concept fulfills the requirements of compatibility with 2D DICOM images from CT as well as easy, fast operation by surgeons. The power of this low-cost software module has been shown for several orthopaedic tasks.
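The good contrast of bone in CT is what keeps this module simple: cortical bone sits well above soft tissue on the Hounsfield scale, so a single threshold already yields a usable mask. A minimal sketch (the 300 HU threshold and the toy volume are illustrative values, not taken from the project):

```python
import numpy as np

# CT intensities are in Hounsfield units (HU). Cortical bone is
# typically well above ~300 HU, while soft tissue sits near 0-100 HU
# and air at -1000 HU, which is why bone segments so cleanly.
BONE_HU = 300  # illustrative threshold

def segment_bone(volume_hu):
    """Return a binary bone mask from a CT volume given in HU."""
    return volume_hu > BONE_HU

# Toy 1-slice volume: air (-1000), soft tissue (~40), bone (~1000).
ct = np.array([[[-1000,   40, 1000],
                [   40, 1000,   40],
                [ 1000,   40, -1000]]])
mask = segment_bone(ct)
print(int(mask.sum()))  # 3 voxels classified as bone
```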
A similar module has been developed as part of a 3D planner for total hip replacement. It is based on the individual anatomy and the selected implant; specifically, the software module supports the implant choice while accounting for individual characteristics. This module is currently undergoing clinical beta testing at the Hopital Orthopedique de la Suisse Romande (HOSR).
Because low-contrast images and complex structures are much more difficult to segment, we have also developed a semi-automatic module for 3D model generation. This module allows the surgeon or anatomist to update 3D models of veins, arteries, and ducts for hepatectomy and nephrectomy purposes. The communication between the surgeon and the computer through this semi-automatic module will further be used for pre-operative 3D model updates.
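Semi-automatic segmentation of this kind is often built on region growing: the user seeds a structure such as a vessel, and the region expands to neighbouring pixels of similar intensity. A simplified 2D sketch of the idea (the image and tolerance are illustrative; the project's actual module is not described at this level of detail):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a user-placed seed, accepting 4-connected
    neighbours whose intensity is within `tol` of the seed value.
    Returns the set of (row, col) coordinates in the region."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Bright "vessel" (value 9) winding through darker tissue (value 1).
img = [[1, 9, 1],
       [1, 9, 1],
       [1, 9, 9]]
print(sorted(region_grow(img, (0, 1), tol=2)))
# [(0, 1), (1, 1), (2, 1), (2, 2)]
```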
Pre- and intra-operative data fusion. To validate the principle of sensor fusion and augmented reality, the field of hepatectomy was chosen; the necessary hardware and software systems are now available. In the first year we achieved the first step: static registration of 3D surface models of internal liver structure onto 2D images for “open sky” surgery, using both endoscopic images and a conventional camera above the surgical zone.
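Static rigid registration of this kind can be illustrated with the Kabsch algorithm, which recovers the least-squares rotation and translation between two matched point sets. This is a generic sketch, not the project's registration method, and it assumes 3D point correspondences have already been extracted from the images:

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: find rotation R and translation t that best
    map point set `src` onto `dst` (rows are matched 3D points)."""
    src_c = src - src.mean(axis=0)           # centre both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src @ R_true.T + np.array([2., 0., 0.])
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [2., 0., 0.]))  # True True
```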
An enhancement module has been designed to meet the requirements of minimally invasive surgery. This package is now under clinical validation and has already been used to statically register 3D vein and artery network models for live-donor kidney transplants. We are currently analyzing the results of these operations in detail in order to improve and refine the package.
The interaction between the surgeon and the hardware and software is the limiting factor during minimally invasive surgery. To overcome this limitation, we are planning to investigate vision-based gesture tracking as an additional interaction modality.
Haptic interfaces. We previously developed the Delta Haptic Device (DHD), a high-fidelity, large-workspace haptic input device based on the Delta manipulator. During the first year we adapted this device for medical applications. Preliminary experiments have demonstrated that a combination of force and visual feedback can enhance medical gestures, e.g., by providing navigational feedback during tool movements. In addition, we have modified several surgical instruments for mounting on our force feedback device.
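Navigational force feedback of this kind is often rendered as a virtual wall: once the tool penetrates a forbidden region of the registered model, a spring-like force pushes it back. A minimal sketch (the stiffness and force cap are illustrative values, not DHD specifications):

```python
def wall_force(tool_depth_mm, stiffness_n_per_mm=0.5, max_force_n=8.0):
    """Virtual-wall haptic rendering: return the restoring force (N)
    for a tool penetrating `tool_depth_mm` into a forbidden region.
    Zero outside the wall; a capped spring force inside it."""
    if tool_depth_mm <= 0.0:     # tool still in free space
        return 0.0
    return min(stiffness_n_per_mm * tool_depth_mm, max_force_n)

print(wall_force(-1.0))   # 0.0  (free motion, no feedback)
print(wall_force(4.0))    # 2.0  (resistance grows with penetration)
print(wall_force(100.0))  # 8.0  (clamped to the device's force limit)
```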