Fachhochschule Ravensburg-Weingarten, Doggenriedstrasse, D-88250 Weingarten, Germany
GREYC ISMRA & University of Caen, 6, boulevard du Maréchal Juin, F-14050 Caen, France
For pre-operational planning and surgery simulators, the reconstruction and segmentation of multidimensional medical images become more and more important. This development raises a need for techniques to represent anatomical knowledge. Displaying such 3-dimensional data on screen is a topic of virtual reality. Interacting with 3-dimensional virtual systems through conventional input devices, like the mouse or the keyboard, is often difficult, because the desired operations have to be described on a flat 2-dimensional screen. Spoken dialogue enables more comfortable interaction with such systems, as it is closer to the human way of describing 3-dimensional operations.
The 3D surface of the brain is extracted directly from an MR volumetric image by a thresholding operation. It is made of squares named surfels, which are the facets of the voxels (volume elements) of the volumetric image. Arbitrary points are chosen on the surface and associated with an arbitrary colour. The colours are then diffused over the surface, and their intensity is modulated to take into account whether the area is inside a sulcus of the brain (darker) or at the top of a gyrus (lighter).
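The surfel extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the MR volume is a NumPy array, and the function name and return format are chosen for the example. A surfel is emitted for every facet where an above-threshold voxel touches a below-threshold (or out-of-volume) neighbour.

```python
import numpy as np

def extract_surfels(volume, threshold):
    """Extract surfels (voxel facets) separating foreground from
    background in a 3D volume, via a simple thresholding operation.

    Returns a list of (voxel_index, face_normal) pairs: one surfel
    for each facet of an above-threshold voxel whose neighbour in
    that direction is below threshold or outside the volume.
    """
    inside = volume >= threshold
    # Pad with background so boundary voxels also emit surfels.
    padded = np.pad(inside, 1, constant_values=False)
    # The six axis-aligned face normals of a voxel.
    normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surfels = []
    for x, y, z in zip(*np.nonzero(inside)):
        for n in normals:
            if not padded[x + 1 + n[0], y + 1 + n[1], z + 1 + n[2]]:
                surfels.append(((int(x), int(y), int(z)), n))
    return surfels

# Tiny example: a single foreground voxel exposes all six facets.
volume = np.zeros((3, 3, 3))
volume[1, 1, 1] = 1.0
print(len(extract_surfels(volume, 0.5)))  # → 6
```

Two adjacent foreground voxels would yield ten surfels, since the two facets they share are interior and therefore suppressed.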
The primary system inputs are spoken user commands such as 'Go forward.', 'I would like to see the right hemisphere.', 'Turn the brain to the left.', 'Stop!', 'Go back!' and 'Where am I?'. These sentences are recognised by a commercial device, split into their syntactical structures by a CHART parser, and interpreted in their semantic context. To make the interaction more natural, a dialogue automaton reacts to the user's input by generating answers and requests for further information. If the user requests a manipulation of the brain image, the automaton calls the corresponding functions of the motion planning. To enable smooth transitions between the previous and the desired position, the motion planning prepares animation sequences, generating a motion list which is displayed by a separate execution unit.
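The motion-planning step can be illustrated with a small sketch. The source does not specify the interpolation scheme or the frame format, so the linear interpolation, the parameter names (`yaw`, `pitch`, `distance`) and the function name below are assumptions for illustration only: the planner turns a current and a desired view position into a motion list of intermediate frames that the execution unit displays one after another.

```python
def plan_motion(current, target, steps=30):
    """Interpolate between the current and the desired view
    parameters, producing a motion list of intermediate frames
    for a separate execution unit to display.

    Assumption: views are dicts of scalar parameters and linear
    interpolation is an acceptable transition.
    """
    motion_list = []
    for i in range(1, steps + 1):
        t = i / steps
        frame = {k: current[k] + t * (target[k] - current[k])
                 for k in current}
        motion_list.append(frame)
    return motion_list

# 'Turn the brain to the left': rotate 90 degrees about the vertical axis.
frames = plan_motion({'yaw': 0.0, 'pitch': 0.0, 'distance': 2.0},
                     {'yaw': -90.0, 'pitch': 0.0, 'distance': 2.0})
print(len(frames), frames[-1]['yaw'])  # → 30 -90.0
```

Precomputing the whole list, rather than moving the view in one jump, is what gives the smooth transition the text describes: the execution unit only consumes frames and needs no knowledge of the dialogue state.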
We have implemented a system which visualises medical brain images in a 3-dimensional way, and we are currently implementing a dialogue system for spoken navigation in this virtual world. We think this type of system can be useful in application areas such as telemedicine and medical education.