A new 'language' for surgery?

After watching surgeons using robotic systems for a while, computer scientists at Johns Hopkins University have decided to borrow ideas from speech recognition research to build what they're calling a 'Language of Surgery'. In speech recognition, basic sounds are called phonemes. For surgery tasks, such as suturing, dissecting and joining tissue, the basic steps have been named 'surgemes.' The mathematical models used by the researchers have several goals: evaluate a surgeon's work and help doctors to improve their operating room skills. They also want to 'enable robotic surgical tools to perform with greater precision.'

This interdisciplinary project, which includes computer scientists, surgeons and experts in robotics, has been led by Gregory Hager, a professor in the Computer Science Department at the Johns Hopkins University. He's also in charge of the Computational Interaction and Robotics Lab (CIRL) and deputy director of the NSF Center for Computer-Integrated Surgical Systems and Technology (CISST).

Why did he decide to apply speech recognition research to surgery? Here are his answers.

"Surgery is a skilled activity, and it has a structure that can be taught and acquired," said Hager. "We can think of that structure as 'the language of surgery.' To develop mathematical models for this language, we're borrowing techniques from speech recognition technology and applying them to motion recognition and skills assessment."
Complicated surgical tasks, Hager said, unfold in a series of steps that resemble the way that words, sentences and paragraphs are used to convey language. "In speech recognition research, we break these down to their most basic sounds, called phonemes," he said. "Following that example, our team wants to break surgical procedures down to simple gestures that can be represented mathematically by computer software."

Below is a series of pictures showing eight 'surgemes,' or basic surgical gestures, in an analogy with phonemes (Credit: Gregory Hager et al.). "The eight rudimentary surgical gestures, or surgemes, common in a four-throw suturing task, as defined by a senior cardiac surgeon. 1) Reach for needle. 2) Position needle. 3) Insert and push needle through tissue. 4) Move to middle with needle (left hand). 5) Move to middle with needle (right hand). 6) Pull suture with left hand. 7) Pull suture with right hand. 8) Orient needle with both hands."
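To make the phoneme analogy concrete, here is a minimal sketch of how a sequence of surgemes might be modeled. The surgeme labels come from the figure caption above; the bigram-style transition counting is a generic sequence-modeling idea borrowed from speech recognition, not the Johns Hopkins team's actual implementation, and the sample trial is invented for illustration.

```python
# Illustrative sketch only: the eight surgeme labels are taken from the
# figure caption; the transition-probability estimate is a generic
# bigram-style model, analogous to phoneme modeling in speech recognition.
from collections import defaultdict

SURGEMES = [
    "reach_for_needle",       # 1
    "position_needle",        # 2
    "insert_push_needle",     # 3
    "move_to_middle_left",    # 4
    "move_to_middle_right",   # 5
    "pull_suture_left",       # 6
    "pull_suture_right",      # 7
    "orient_needle_both",     # 8
]

def transition_probabilities(sequences):
    """Estimate P(next surgeme | current surgeme) from labeled sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        probs[cur] = {n: c / total for n, c in nxts.items()}
    return probs

# A hypothetical (invented) suturing trial, encoded as surgeme indices:
trial = [1, 2, 3, 4, 6, 8, 2, 3, 5, 7, 8]
labels = [SURGEMES[i - 1] for i in trial]
model = transition_probabilities([labels])
```

With enough labeled trials, a model like this captures which gesture typically follows which, just as a speech recognizer learns which phonemes tend to follow one another.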


Another goal of the project is to evaluate the quality of a surgeon's work. But how are they doing this?

The computer scientists hope to be able to recognize when a surgical task is being performed well and also to identify which movements can lead to operating room problems. Just as a speech recognition program might call attention to poor pronunciation or improper syntax, the system being developed by Hager's team might identify surgical movements that are imprecise or too time-consuming.
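As a toy illustration of the kind of check described above, the sketch below flags surgemes whose duration runs well past an expert baseline. The durations, names, and threshold are invented for the example; the actual system analyzes full motion data, not just timing.

```python
# Hypothetical example: flag surgemes that took noticeably longer than an
# expert baseline. Data and the 1.5x tolerance are invented for illustration.
def flag_slow_surgemes(trial_durations, expert_durations, tolerance=1.5):
    """Return surgemes exceeding `tolerance` times the expert duration (seconds)."""
    return [
        name
        for name, secs in trial_durations.items()
        if secs > tolerance * expert_durations.get(name, float("inf"))
    ]

expert = {"position_needle": 2.0, "pull_suture_left": 1.5}
trainee = {"position_needle": 4.5, "pull_suture_left": 1.8}
print(flag_slow_surgemes(trainee, expert))  # -> ['position_needle']
```

This is the timing analogue of a speech tutor flagging a mispronounced word: it points at the specific gesture that needs work rather than grading the whole procedure.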

As an example, you can see below an analysis of the movements of two different surgeons practicing 'identical' operations (Credit: Gregory Hager et al.). "The motions of the expert surgeon (left) separate more distinctly than those of the intermediate surgeon (right)."


The above analysis was performed with data recorded by Intuitive Surgical's da Vinci surgical system. But will it lead to a new surveillance system for surgeons? Apparently not.

Hager cautioned that the project is not intended to produce a "Big Brother" system that would critique a surgeon's every move. "We're trying to find ways to help them become better at what they do," he said.

Will this new 'language' of surgery be available to all surgeons, or only to Intuitive Surgical's customers? The Johns Hopkins Gazette doesn't provide an answer.

For more information, this research has been presented at several medical conferences, including Medicine Meets Virtual Reality (MMVR) in 2006 and the 8th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2005. The papers presented at these conferences share the same theme: "Detection and segmentation of robot-assisted surgical motions."

The scientific journal Computer Aided Surgery recently published a paper about this research (Vol. 11, No. 5, Pages 220–230, September 2006). Here is a link to this paper (PDF format, 11 pages, 2.50 MB). The illustrations above have been extracted from this article.

Sources: Phil Sneiderman, Homewood, Johns Hopkins Gazette, October 23, 2006; and various websites
