Past Projects

CHAI 3D

CHAI 3D is an open source, freely available set of C++ libraries for computer haptics, visualization and interactive real-time simulation. CHAI 3D supports several commercially available three- and six-degree-of-freedom haptic devices and makes it simple to support new custom force-feedback devices. CHAI 3D is especially suitable for education and research purposes, offering a lightweight platform on which extensions can be developed. CHAI 3D’s support for multiple haptic devices also makes it easy to send your applications to remote sites that may use different hardware.
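
As a quick illustration of how a device-independent haptic loop typically looks with CHAI 3D, here is a minimal sketch. It assumes the cHapticDeviceHandler / cGenericHapticDevice interface of recent CHAI 3D releases; exact class and method names may differ between versions, so treat it as a sketch rather than copy-paste code.

    // Minimal haptic loop: render a spring toward the origin on whichever
    // supported device is connected (CHAI 3D-style API; names may vary by version).
    #include "chai3d.h"
    using namespace chai3d;

    int main()
    {
        cHapticDeviceHandler handler;          // enumerates connected haptic devices
        cGenericHapticDevicePtr device;
        handler.getDevice(device, 0);          // grab the first available device
        device->open();
        device->calibrate();

        const double stiffness = 200.0;        // N/m, illustrative value
        for (int i = 0; i < 100000; ++i)       // haptic servo loop
        {
            cVector3d position;
            device->getPosition(position);     // device position in metres
            cVector3d force = -stiffness * position;
            device->setForce(force);           // same code for any supported device
        }

        device->close();
        return 0;
    }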

Ultrasound Based Preoperative Planning Updates During Hepatectomy Surgery

The goal of this project is to develop a method to update preoperative 3D virtual surgery plans during the surgery itself. The medical context is liver surgery, in particular hepatectomy. We use a 3D deformable model to represent the liver in the virtual world; for speed, it is a mass-spring model. The masses and springs are interconnected to form a skeleton, whose shape is given by the medial axis of the liver’s vascular tree. Intra-operatively tracked 2D ultrasound images provide fresh data to update the model: an image processing step recovers the 3D positions of the vessels sliced by each image. This cloud of 3D points is then used as a force field that attracts the skeleton until the residual forces are minimized.
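
To make the attraction step concrete, the sketch below (plain C++ with hypothetical names and parameters, not the project’s actual code) pulls each skeleton node toward its nearest ultrasound-derived point while the springs resist deviation from their rest lengths, and iterates until the largest residual force falls below a tolerance.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double norm(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    struct Spring { int i, j; double rest, k; };

    // Relax the mass-spring skeleton under the attraction of the ultrasound point cloud.
    void updateSkeleton(std::vector<Vec3>& nodes, const std::vector<Spring>& springs,
                        const std::vector<Vec3>& cloud,
                        double attraction, double step, double tolerance)
    {
        double residual = std::numeric_limits<double>::max();
        while (residual > tolerance)
        {
            std::vector<Vec3> force(nodes.size(), Vec3{0, 0, 0});

            // each node is attracted toward its closest ultrasound point
            for (std::size_t n = 0; n < nodes.size(); ++n)
            {
                double best = std::numeric_limits<double>::max();
                Vec3 target = nodes[n];
                for (const Vec3& p : cloud)
                {
                    double d = norm(sub(p, nodes[n]));
                    if (d < best) { best = d; target = p; }
                }
                force[n] = scale(sub(target, nodes[n]), attraction);
            }

            // springs along the skeleton resist stretching and compression
            for (const Spring& s : springs)
            {
                Vec3 d = sub(nodes[s.j], nodes[s.i]);
                double len = norm(d);
                if (len < 1e-9) continue;
                Vec3 f = scale(d, s.k * (len - s.rest) / len);
                force[s.i] = {force[s.i].x + f.x, force[s.i].y + f.y, force[s.i].z + f.z};
                force[s.j] = {force[s.j].x - f.x, force[s.j].y - f.y, force[s.j].z - f.z};
            }

            // explicit relaxation step; stop when the forces are (nearly) balanced
            residual = 0.0;
            for (std::size_t n = 0; n < nodes.size(); ++n)
            {
                nodes[n] = {nodes[n].x + step * force[n].x,
                            nodes[n].y + step * force[n].y,
                            nodes[n].z + step * force[n].z};
                residual = std::max(residual, norm(force[n]));
            }
        }
    }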

Characterization of 3D Deformable Models

In real-time 3D simulation, deformable models are used increasingly often, especially in the medical field. Based on a skeleton representation of deformable objects, the goal of this project is to determine a set of stiffness parameters such that the simulated physical model behaves like the real object under varying force constraints. The characterization strategy is to apply external forces of several known magnitudes to the real object and to measure the resulting displacements. Based on these measurements and the model’s skeleton, a custom Bayesian filter algorithm estimates the spring parameters as the model evolves.
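
The project filters over the full skeleton; the toy below (our naming, a deliberately simplified single-spring case) only illustrates the underlying idea of recursive Bayesian estimation: each known-force / measured-displacement trial updates a Gaussian belief over one stiffness value.

    #include <cstdio>

    // Recursive Gaussian (Kalman-style) estimate of a single spring stiffness k [N/m]:
    // apply a known force F, measure the displacement x, and fuse the implied
    // stiffness F / x with the running estimate.
    struct StiffnessEstimator
    {
        double k;          // current stiffness estimate
        double variance;   // uncertainty of that estimate

        void update(double force, double displacement, double measVariance)
        {
            double measured = force / displacement;             // Hooke's law: k = F / x
            double gain = variance / (variance + measVariance);
            k += gain * (measured - k);                         // blend estimate and measurement
            variance *= (1.0 - gain);                           // uncertainty shrinks each trial
        }
    };

    int main()
    {
        StiffnessEstimator est{100.0, 1e4};   // rough prior: 100 N/m, large uncertainty
        const double trials[][2] = {          // {applied force [N], measured displacement [m]}
            {1.0, 0.0051}, {2.0, 0.0098}, {4.0, 0.0203}
        };
        for (const auto& t : trials)
            est.update(t[0], t[1], 25.0);     // 25 (N/m)^2 assumed measurement noise
        std::printf("estimated stiffness: %.1f N/m\n", est.k);
        return 0;
    }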

Soft Tissue Modeling

Virtual-reality-based simulations of surgery have shown promise in assisting surgical training, surgical planning, pre-operative rehearsal, and intra-operative execution. Developing an effective virtual environment requires real-time interactivity and realistic visualization, yet simulating deformable organs is computationally demanding. This project investigates an original method that allows real-time deformation of complex virtual objects: much simpler models are built from filling spheres connected by three-dimensional elastic links.
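
A rough sketch of the idea (plain C++ with hypothetical names, not the project’s implementation): the organ volume is approximated by spheres, a probe that overlaps a sphere pushes it out of the way, and the elastic links spread that displacement to neighbouring spheres.

    #include <cmath>
    #include <vector>

    struct P3 { double x, y, z; };
    static P3 sub(P3 a, P3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double length(P3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    struct Sphere { P3 center; double radius; };
    struct Link   { int a, b; double restLength, stiffness; };

    // One update step: the probe pushes overlapping spheres out of the way,
    // then the elastic links spread the deformation to neighbouring spheres.
    void deform(std::vector<Sphere>& spheres, const std::vector<Link>& links,
                P3 probe, double probeRadius, double step)
    {
        for (Sphere& s : spheres)
        {
            P3 d = sub(s.center, probe);
            double dist = length(d);
            double overlap = (s.radius + probeRadius) - dist;
            if (overlap > 0.0 && dist > 1e-9)
            {
                s.center.x += d.x / dist * overlap;   // resolve the penetration
                s.center.y += d.y / dist * overlap;
                s.center.z += d.z / dist * overlap;
            }
        }
        for (const Link& l : links)
        {
            P3 d = sub(spheres[l.b].center, spheres[l.a].center);
            double dist = length(d);
            if (dist < 1e-9) continue;
            double f = step * l.stiffness * (dist - l.restLength) / dist;
            spheres[l.a].center.x += f * d.x;         // pull back toward rest spacing
            spheres[l.a].center.y += f * d.y;
            spheres[l.a].center.z += f * d.z;
            spheres[l.b].center.x -= f * d.x;
            spheres[l.b].center.y -= f * d.y;
            spheres[l.b].center.z -= f * d.z;
        }
    }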

Optical Tracking

Together with Atracsys Ltd., the VRAI group is developing new tracking techniques suited to high-end requirements such as high update rates and high spatial measurement accuracy, without compromising on size, weight, or price. Within the VRAI group, these techniques are used in fields such as computer-assisted surgery (CAS) and semi-active tool-holder positioning.

Abdominal Aortic Aneurysm

An abdominal aortic aneurysm (AAA) is a bulge in the wall of an artery. An estimated 1.5 million Americans have an AAA, although only approximately 200,000 are diagnosed each year. AAAs are almost always caused by arteriosclerosis. As plaques accumulate, the pressure of the blood flowing through the weakened section of the artery causes the artery to balloon, forming an aneurysm. If the aneurysm is not detected in time, the weakened aorta can rupture, often causing death.

Active, a framework to build intelligent assistants

Active is a unified tool and associated set of methodologies for building intelligent systems. Its goal is to ease the development of AI-based software by making the required technologies more accessible to programmers. The Active framework provides a unified approach for rapidly developing applications incorporating natural language interpretation, dialog management, multimodal fusion, adaptable presentation generation, reactive execution, and dynamic brokering of web services.

Haptics for surgical navigation

We are developing a needle insertion simulator and CT/US navigation system designated the BiopsyNavigator. In this system, the biopsy needle is directly connected to a haptic feedback device. During an intervention, the system provides the surgeon with navigational information as well as force guidance to improve needle insertion.
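
As an example of what force guidance can mean in practice, the sketch below (illustrative code and names, not the BiopsyNavigator’s implementation) computes a spring-like force that pulls the needle tip back onto the planned insertion axis whenever it deviates laterally.

    // The planned trajectory is the line through 'entry' along the unit vector 'axis';
    // lateral deviation of the tracked tip produces a spring force back toward the path.
    struct P3 { double x, y, z; };

    P3 guidanceForce(P3 tip, P3 entry, P3 axis /* unit length */, double stiffness)
    {
        P3 d = {tip.x - entry.x, tip.y - entry.y, tip.z - entry.z};
        double along = d.x * axis.x + d.y * axis.y + d.z * axis.z;   // progress along the path
        P3 lateral = {d.x - along * axis.x,                          // component off the path
                      d.y - along * axis.y,
                      d.z - along * axis.z};
        return {-stiffness * lateral.x, -stiffness * lateral.y, -stiffness * lateral.z};
    }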

Computer Aided Laser Treatment of Hard Tissue: Laser Positioning System

We are developing a laser-based, non-contact cutting tool to perform osteotomy without mechanical stress and vibration. Compared with conventional mechanical approaches, the laser method reduces the invasiveness of the procedure while significantly increasing the accuracy and precision of the cut. Furthermore, the development of a positioning system will allow 3D cuts that are currently impossible with a saw.

Collaborative Control: A robot-centric model for vehicle teleoperation

Telerobotic systems have traditionally been designed and operated solely from a human point of view. Though this “robot as a tool” approach suffices for some domains, it is sub-optimal for tasks such as operating multiple vehicles or controlling planetary rovers. Thus, we believe it is worthwhile to examine a new system model for teleoperation, one that provides a new paradigm for human-robot interaction: collaborative control.

Fast coarse-to-fine model-based elastic registration of medical data

This project presents a new approach to the fast, coarse-to-fine, model-based elastic (non-rigid) registration of medical data. The same technique can also be applied to model-based segmentation.
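
To show only the coarse-to-fine control structure (the real method registers 3D medical data with a model-based elastic transform, which the toy below does not attempt), here is a 1D, translation-only sketch with hypothetical names: each pyramid level halves the resolution, and the shift found at a coarse level seeds a small search at the next finer level.

    #include <cstddef>
    #include <limits>
    #include <vector>

    static std::vector<double> downsample(const std::vector<double>& s)
    {
        std::vector<double> out;
        for (std::size_t i = 0; i + 1 < s.size(); i += 2)
            out.push_back(0.5 * (s[i] + s[i + 1]));
        return out;
    }

    // Exhaustive search for the best integer shift within a small window around 'center'.
    static int bestShift(const std::vector<double>& fixed, const std::vector<double>& moving,
                         int center, int radius)
    {
        double bestErr = std::numeric_limits<double>::max();
        int best = center;
        for (int shift = center - radius; shift <= center + radius; ++shift)
        {
            double err = 0.0;
            for (std::size_t i = 0; i < fixed.size(); ++i)
            {
                long j = static_cast<long>(i) + shift;
                if (j < 0 || j >= static_cast<long>(moving.size())) continue;
                double d = fixed[i] - moving[static_cast<std::size_t>(j)];
                err += d * d;
            }
            if (err < bestErr) { bestErr = err; best = shift; }
        }
        return best;
    }

    int registerCoarseToFine(const std::vector<double>& fixed,
                             const std::vector<double>& moving, int levels)
    {
        std::vector<std::vector<double>> fp{fixed}, mp{moving};
        for (int l = 1; l < levels; ++l)
        {
            fp.push_back(downsample(fp.back()));
            mp.push_back(downsample(mp.back()));
        }
        int shift = 0;
        for (int l = levels - 1; l >= 0; --l)
        {
            shift = bestShift(fp[l], mp[l], shift, 2);   // refine within a small window
            if (l > 0) shift *= 2;                       // propagate to the finer level
        }
        return shift;                                    // estimated translation between the signals
    }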

GestureDriver: Visual Gesturing for Vehicle Teleoperation

Using visual gestures to pilot a vehicle offers several advantages. The interface is passive: it does not require the user to hold any hardware or to wear special tags or clothing. The interface is therefore easy to deploy and can be used virtually anywhere within the field of view of the tracking camera. This flexibility is hard to achieve with hand controllers such as rate-control joysticks. Vision also allows different gesture interpretations to be used, depending on the user’s preferences and the tasks to be performed. Since the interpretation is software-based, the human-machine interaction can be customized to accommodate any user operating a vehicle in any remote environment. Furthermore, the interaction can adapt to the user over time, which is not possible with hardware devices. As a result, we have the potential to minimize sensorimotor workload on a per-user basis.
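
One possible interpretation mode, sketched below with hypothetical names and gains, treats the tracked hand position relative to a neutral reference point as a “virtual joystick”; because the mapping is pure software, the gains, the dead zone, or the whole mapping can be swapped per user.

    // Hand position from the vision tracker, relative to a neutral point, sets the
    // vehicle's speed (forward/back offset) and turn rate (left/right offset).
    struct HandPosition { double x, y; };   // metres, from the tracker
    struct DriveCommand { double speed, turnRate; };

    DriveCommand interpretGesture(HandPosition hand, HandPosition neutral,
                                  double deadZone, double speedGain, double turnGain)
    {
        double dx = hand.x - neutral.x;      // lateral offset  -> turning
        double dy = hand.y - neutral.y;      // forward offset  -> speed
        DriveCommand cmd{0.0, 0.0};
        if (dy > deadZone || dy < -deadZone) cmd.speed    = speedGain * dy;
        if (dx > deadZone || dx < -deadZone) cmd.turnRate = turnGain  * dx;
        return cmd;                          // a zero command inside the dead zone
    }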

HapticDriver: Remote Driving with Force Feedback

The most difficult aspect of remote driving is that the operator is usually limited to visual information (e.g., camera video) for perception. Consequently, the operator often misreads the remote environment and makes judgement errors. This problem is most acute when precise motion is required, such as maneuvering in cluttered spaces or approaching a target. The HapticDriver addresses this problem by providing force feedback to the operator: range sensor information is transformed into spatial forces using a linear model and then displayed to the operator with the Delta Haptic Device. The HapticDriver thus enables the operator to feel the remote environment and leads to better performance in precise driving tasks.
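
A minimal sketch of such a linear range-to-force mapping (illustrative parameter names and values; the actual gains and geometry used by the HapticDriver may differ): every range return closer than a cutoff contributes a repulsive force away from the obstacle, and the clamped sum is sent to the haptic device.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Force2D { double x, y; };

    // Force magnitude grows linearly with proximity; direction points away from the obstacle.
    Force2D rangeToForce(const std::vector<double>& ranges,     // metres, one per beam
                         const std::vector<double>& bearings,   // radians, one per beam
                         double maxRange, double gain, double maxForce)
    {
        Force2D f{0.0, 0.0};
        for (std::size_t i = 0; i < ranges.size(); ++i)
        {
            if (ranges[i] >= maxRange) continue;                // distant obstacles: no force
            double magnitude = gain * (maxRange - ranges[i]);   // linear in proximity
            f.x -= magnitude * std::cos(bearings[i]);           // push away from the obstacle
            f.y -= magnitude * std::sin(bearings[i]);
        }
        double n = std::sqrt(f.x * f.x + f.y * f.y);
        if (n > maxForce) { f.x *= maxForce / n; f.y *= maxForce / n; }  // respect device limit
        return f;
    }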

M/ORIS – Medical / Operating Room Interaction System

During Computer-Assisted Surgery (CAS), the surgeon must interact with the computerized equipment in the Operating Room (OR). Currently, Surgeon-Computer Interaction (SCI) is limited by environmental and human factors. First, the sterility requirements of the surgical environment prevent the use of classic Human-Computer Interaction tools such as the mouse and keyboard. More importantly, interaction with the computer adds to the surgeon’s cognitive load, requiring frequent interruptions of the procedure and leading to frustration, loss of focus, and reduced situational awareness. To overcome both issues, M/ORIS provides a way for surgeons to interact directly with the Graphical User Interfaces (GUIs), while reducing the surgeon’s workload by automating computer configuration and the display of relevant information. To achieve these goals, M/ORIS combines vision-based tracking of the surgeon’s head and hands with other sensors readily available in ORs, such as tool trackers and pedals, to determine the progress of the procedure and to let the surgeon point and click on GUIs with simple gestures.

Orthopedic planning and navigation for Total Hip Replacement

The goal of this project is to develop a complete orthopedic positioning system to improve Total Hip Replacement. The system is based on a combination of software and a mechanical device. The software first processes 2D CT scans (in DICOM format), transforming the images into 3D graphical models of the pelvis and femur. These models are used for pre-operative implant planning and visualization. During surgery, the software and hardware provide navigational guidance to the surgeon.
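
As an illustration of the very first processing step only (simplified, with hypothetical names and an illustrative threshold; the actual pipeline is more involved and ends with surface extraction of the pelvis and femur meshes), the sketch below marks as bone every CT voxel above a Hounsfield-unit threshold, producing a binary volume for later meshing.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct CtVolume
    {
        int width, height, depth;
        std::vector<int16_t> hu;   // Hounsfield units, slice-major order
    };

    // Bone is much denser than soft tissue, so a simple threshold gives a first mask.
    std::vector<uint8_t> segmentBone(const CtVolume& ct, int16_t threshold = 300)
    {
        std::vector<uint8_t> mask(ct.hu.size(), 0);
        for (std::size_t i = 0; i < ct.hu.size(); ++i)
            mask[i] = (ct.hu[i] >= threshold) ? 1 : 0;
        return mask;
    }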

PdaDriver: Vehicle Teleoperation

Remote driving systems have remained largely unchanged for the past 50 years: an operator uses hand-controllers (joysticks) to continuously drive a remote vehicle while watching data and video displays. These systems are expensive, cumbersome, time-consuming to set up, and require significant training. To remedy this, we are developing PdaDriver, a Personal Digital Assistant (PDA) interface for remote driving. PdaDriver is designed to let any user, novice or expert alike, remotely drive a mobile robot from anywhere at any time.

PerceptOR Software Systems: Handheld user interface and human-robot dialogue

The Perception for Off-Road Robots (PerceptOR) program is one of six key supporting technology programs of the United States DARPA/Army Future Combat Systems (FCS) program. The PerceptOR program is developing prototype approaches to advance outdoor obstacle detection for robotic systems and to enable the higher levels of autonomous mobility needed for FCS operations. PerceptOR is designed to push the state of the art in perception to real-world conditions. Perception algorithms utilizing both onboard and overhead sensor data are expected to yield significant improvements in obstacle avoidance, especially in off-road or complex urban conditions. Experimentally backed performance data will enable the U.S. Army to better understand how to design and deploy field robots, as well as the level of human involvement required for robot navigation.

Advanced UI for Paraendoscopic Surgery (PICO)

The VRAI group is building the user interface module of a robotized endoscope holder used in neurosurgery. This project is conducted within the EU Sixth Framework CRAFT program.

TLIB: a Real-time Computer Vision Library for HCI

A computer vision software library is a key component of vision-based applications. While there are several existing libraries, most are large and complex or limited to a particular hardware/platform combination. These factors tend to impede the development of research applications, especially for non-computer vision experts. To address this issue, we have developed TLIB, an easy-to-learn, easy-to-use software library that provides a complete set of real-time computer vision functions, including image acquisition, 2D/3D image processing, and visualization. In this project, we present the motivation for TLIB and its design. We then summarize some of the applications that have been developed with TLIB, and discuss directions for future work.

Advanced Teleoperation Interfaces

Vehicle teleoperation has traditionally been a domain for experts. Figuring out where the vehicle is, determining where it should go, and remotely driving it are complex problems. These problems can be difficult to solve, especially if the vehicle must operate in a hazardous environment, over a poor communications link, or with limited operator resources. As a result, expert operators are needed far more often than not. Our goal is to make vehicle teleoperation accessible to all users, novices and experts alike. Thus, we are creating easy-to-use user interfaces and effective human-robot interaction methods to enable robust vehicle teleoperation (mobile robot remote driving) in unknown, unstructured environments, both indoor and outdoor.

Nanomanipulation of Carbon Nanotubes

The aim of this project is to develop a force-feedback interface that enables a user to manipulate nanometer-sized objects with an Atomic Force Microscope (AFM). Our current interface integrates a high-performance force-feedback system (the Delta Haptic Device), real-time 3D graphics, and physics-based simulation of nanoscale AFM interaction. We have recently begun integrating our system with a commercial AFM and are now evaluating the suitability of different operation modes for nanomanipulation. By allowing bilateral scaling (geometric, kinematic and force), the DHD can make such operations easier and faster than traditional tools.
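
The sketch below shows what bilateral scaling amounts to in code (illustrative factors and names, not our actual values): hand motion at the haptic handle is scaled down by orders of magnitude to position the AFM tip, and the tiny tip-sample forces are scaled up until they are perceptible at the handle.

    struct V3 { double x, y, z; };

    // Symmetric scaling between the macroscopic handle and the nanoscale tip.
    struct BilateralScaler
    {
        double motionScale;   // e.g. 1e-6: 10 mm of hand motion -> 10 nm at the tip
        double forceScale;    // e.g. 1e9:  1 nN at the tip      -> 1 N at the handle

        V3 handleToTip(V3 handlePos) const
        {
            return {handlePos.x * motionScale, handlePos.y * motionScale, handlePos.z * motionScale};
        }
        V3 tipToHandle(V3 tipForce) const
        {
            return {tipForce.x * forceScale, tipForce.y * forceScale, tipForce.z * forceScale};
        }
    };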

Web Pioneer: Vehicle Teleoperation on the World Wide Web

The Web Pioneer lets you drive a Pioneer via the World Wide Web! Using the Web Pioneer you can remotely drive a Pioneer from anywhere in the world and see live video images from either an external camera or the one mounted on the Pioneer. To access the Web Pioneer, you must use Netscape Navigator 3.0 or higher (sorry, it is not compatible with Internet Explorer). Both the driving performance and the video quality depend on your network connection. If you are connecting to the Web site (coming soon to www.activmedia.com), or if you are using a low-speed Internet connection, you may notice some delay in sending driving commands or in receiving the video signal.

Laser for osteotomy

An inherent drawback of using mechanical tools for osteotomy is that they are in direct contact with the hard tissue and can transmit unwanted vibrations to the patient. Moreover, the force and torque applied to the bone may generate 3D movements of the structure that must be compensated for in order to maintain overall system accuracy. Furthermore, mechanical tools are rather bulky, and the heat generated by friction degrades the otherwise obtainable precision of the cut. The best width precision obtained in osteotomy with current surgical procedures is approximately 1 mm, and no quantitative information exists regarding the accuracy of the osteotomy line determined using pre-operative planning tools. This project aims to perform the osteotomy with a laser beam (conducted with AOT and Universitatsspital Basel).