December 04, 2017

Surgical automation is here to stay. Market leader Intuitive Surgical reported $2.7 billion in revenue last year and has projected a growth rate of nearly 20% for the surgical robotics market between 2017 and 2020.

Even so, surgical robots today are merely teleoperated “master-slave” machines: they are neither data-driven nor intelligent. That is expected to change soon, however. Intuitive Surgical and new entrants such as Verb Surgical have expressed their intention to make surgical robots more intelligent in the near future.

Business Takeaways:

  • Surgical automation is a profitable and growing market, despite the high initial cost of robots.
  • Presently, surgical robots are master-slave teleoperated machines without data-driven AI.
  • Computer vision systems will soon enable surgical depth imaging for more complex operations.

Surgical depth-imaging challenges

For an assistive device to acquire data during a surgical procedure, and thereby become semi-autonomous, the robot must use a number of sensors. In particular, it must have sensors that can analyze the surgical environment and identify where the instruments are placed in relation to that environment in real time.

In other words, surgical automation needs to understand how and where to navigate the surgical instruments with a high level of accuracy during the operation. This requires implementation of a computer vision system.

Surgical automation gets better vision, thanks to data

The surgical environment sharply constrains the imaging options available for machine perception.

Computer vision is well known from other applications, such as autonomous cars and mobile phones like Apple’s iPhone X. Yet transferring these approaches to surgical automation is not as straightforward.

From a technical point of view, a minimally invasive surgical environment poses the worst possible conditions for implementing computer vision sensors that can deliver precise and reliable real-time data. The sensors have to fit through very small incisions in the body, which challenges the conventional triangulation-based methods, such as stereo.

Stereo methods rely on a certain baseline (or distance) between two cameras to calculate precise depth in an image. In principle, human vision works the same way: we need a certain distance between our eyes to perceive depth. If this distance becomes too small, we lose our three-dimensional perception.
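
To make the baseline point concrete, here is a minimal sketch of triangulation-based stereo depth; the focal length, working distance, and baselines are hypothetical, chosen only to illustrate the scale of the problem:

```python
# A minimal sketch of triangulation-based stereo depth: Z = f * B / d,
# where f is the focal length (px), B the baseline (mm), and d the
# disparity (px). All numbers are hypothetical, chosen to illustrate scale.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Depth (mm) of a point observed with the given stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive to triangulate depth.")
    return focal_px * baseline_mm / disparity_px

f_px = 700.0   # hypothetical focal length in pixels
z_mm = 50.0    # a plausible laparoscopic working distance, for illustration

# Compare a laparoscope-scale baseline with a human-eye-scale baseline.
for baseline_mm in (4.0, 65.0):
    disparity = f_px * baseline_mm / z_mm  # ideal disparity at 50 mm depth
    z_noisy = depth_from_disparity(f_px, baseline_mm, disparity + 1.0)
    print(f"baseline {baseline_mm:5.1f} mm: disparity {disparity:6.1f} px, "
          f"depth error from 1 px of noise {abs(z_noisy - z_mm):.2f} mm")
```

The smaller the baseline, the smaller the disparity at a given depth, so each pixel of matching noise costs far more depth accuracy. That is the core difficulty of squeezing a stereo pair through a small incision.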

Another challenge is the surgical field itself, which consists of deformable, absorptive, and reflective surfaces that limit the use of time-of-flight and lidar technologies. Moreover, smoke or blood in the surgical field poses a challenge for most computer vision systems.
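
One common mitigation, sketched below with hypothetical thresholds, is to mask out pixels whose readings are likely unreliable, such as saturated specular highlights on wet tissue or low-contrast regions obscured by smoke, before attempting triangulation:

```python
import numpy as np

def reliability_mask(gray: np.ndarray,
                     saturation_thresh: int = 250,
                     contrast_thresh: float = 4.0,
                     window: int = 5) -> np.ndarray:
    """Boolean mask of pixels considered reliable for depth estimation.

    Pixels are rejected if nearly saturated (typical of specular
    reflections on wet tissue) or if local contrast is very low (typical
    of smoke or defocus). All thresholds are hypothetical.
    """
    saturated = gray >= saturation_thresh

    # Local standard deviation as a crude contrast measure.
    pad = window // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    patches = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    low_contrast = patches.std(axis=(-1, -2)) < contrast_thresh

    return ~(saturated | low_contrast)

# Usage on a synthetic 8-bit image containing a "specular" blob:
rng = np.random.default_rng(0)
img = np.full((64, 64), 120, dtype=np.uint8)
img += rng.integers(0, 20, img.shape).astype(np.uint8)  # mild texture
img[20:28, 20:28] = 255                                  # simulated highlight
mask = reliability_mask(img)
print(f"{mask.mean():.0%} of pixels kept for triangulation")
```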

Data-driven surgical automation

Given the difficult conditions, surgical automation must use a combination of sensors to deliver robust and precise data. Some startups have begun to take on the challenge.

For example, the Danish startup 3Dintegrated has developed a computer vision sensor system that relies on both stereo and structured light. The technology consists of a miniature structured-light probe and a stereo laparoscope (a camera used for minimally invasive surgery). The software that reads and analyzes the images uses advanced algorithms built on a combination of techniques, including deep learning, triangulation, and structure from motion.
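
3Dintegrated has not published its fusion algorithm, but the general idea of combining two depth sources can be sketched as a per-pixel, confidence-weighted average; the arrays and confidence values below are hypothetical:

```python
import numpy as np

def fuse_depth_maps(depth_stereo: np.ndarray, conf_stereo: np.ndarray,
                    depth_sl: np.ndarray, conf_sl: np.ndarray) -> np.ndarray:
    """Per-pixel, confidence-weighted fusion of a stereo depth map with a
    structured-light depth map. NaN marks pixels a sensor could not resolve.
    """
    w_stereo = np.where(np.isnan(depth_stereo), 0.0, conf_stereo)
    w_sl = np.where(np.isnan(depth_sl), 0.0, conf_sl)
    total = w_stereo + w_sl
    weighted = (np.nan_to_num(depth_stereo) * w_stereo
                + np.nan_to_num(depth_sl) * w_sl)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(total > 0, weighted / total, np.nan)

# Tiny example: each sensor fails on one pixel; fusion fills the gaps.
depth_stereo = np.array([[50.0, np.nan], [52.0, 49.0]])
conf_stereo = np.array([[0.8, 0.0], [0.5, 0.9]])
depth_sl = np.array([[50.4, 51.0], [np.nan, 49.2]])
conf_sl = np.array([[0.6, 0.9], [0.0, 0.7]])
print(fuse_depth_maps(depth_stereo, conf_stereo, depth_sl, conf_sl))
```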

The 3Dintegrated technology can, within the same data set, provide ultra-precise depth readings of the surgical surface and determine the position of the surgical instruments and camera in relation to the surface. Such data is exactly what is needed to build data-driven applications for surgical robots.
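
The article does not detail how the camera and instrument poses are computed. One standard building block in the triangulation family is Perspective-n-Point (PnP) pose estimation, sketched here with OpenCV and made-up fiducial coordinates:

```python
import numpy as np
import cv2

# Hypothetical 3D positions (mm) of four fiducial points on an instrument,
# expressed in the instrument's own coordinate frame.
object_points = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]],
                         dtype=np.float64)

# Where those points were detected in the camera image (px); made up.
image_points = np.array([[320, 240], [380, 242], [318, 300], [379, 303]],
                        dtype=np.float64)

# Hypothetical pinhole intrinsics of the laparoscope camera.
camera_matrix = np.array([[700.0, 0.0, 320.0],
                          [0.0, 700.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume images are already undistorted

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("Instrument pose relative to camera:")
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation (mm):", tvec.ravel())
```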

Surgical automation can combine multiple sensors for greater precision: a structured-light probe, a stereo laparoscope, and other imaging together improve the system.

The data obtained by such computer vision systems can provide a digital understanding of the surgical field. For example, the surgeon will be able to “see through tissue” by augmenting pre-operative data onto the surgical field during the procedure.
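
As a simplified illustration of such an overlay, a pre-operative landmark that has already been registered into the camera's coordinate frame can be projected into the live image with the standard pinhole model; all numbers here are hypothetical:

```python
def project_to_image(point_cam_mm, fx, fy, cx, cy):
    """Project a 3D point in the camera frame (mm) to pixel coordinates
    using the pinhole model. Intrinsics here are hypothetical."""
    x, y, z = point_cam_mm
    if z <= 0:
        raise ValueError("Point must lie in front of the camera.")
    return fx * x / z + cx, fy * y / z + cy

# A pre-operative tumor-boundary point, already registered into the camera
# coordinate frame (a real system would have to compute that registration).
tumor_point = (5.0, -3.0, 48.0)  # mm, hypothetical
u, v = project_to_image(tumor_point, fx=700.0, fy=700.0, cx=320.0, cy=240.0)
print(f"Draw the overlay marker at pixel ({u:.1f}, {v:.1f})")
```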

Furthermore, the data establishes the foundation for computer-driven robotic surgery, as the robot can now see and understand the field in which it is operating. This enables the robot-assisted surgeon to operate safely, without the risk of damaging critical structures through unintended collisions or hidden anatomy.
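
The article does not specify how such collision avoidance would be implemented. One ingredient might be a distance check between the tracked instrument tip and a reconstructed point cloud of a critical structure, sketched here with SciPy's KD-tree and hypothetical coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical reconstructed point cloud (mm) of a critical structure,
# e.g. a major vessel segmented from the depth data.
rng = np.random.default_rng(42)
vessel_cloud = rng.normal(loc=(20.0, 5.0, 50.0), scale=1.5, size=(500, 3))

tree = cKDTree(vessel_cloud)
SAFETY_MARGIN_MM = 5.0  # hypothetical safety threshold

def tip_is_safe(tip_xyz) -> bool:
    """True if the tracked instrument tip keeps a safe distance from the
    critical structure; False would trigger a warning or motion constraint."""
    distance_mm, _ = tree.query(tip_xyz)
    return bool(distance_mm >= SAFETY_MARGIN_MM)

for tip in [(40.0, 5.0, 50.0), (21.0, 5.0, 50.0)]:
    print(f"tip at {tip}: {'safe' if tip_is_safe(tip) else 'TOO CLOSE'}")
```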

In the future, surgeons could also use robots for repetitive tasks such as suturing, freeing time for the more critical parts of the surgery. For patients, this would make surgery safer and less prone to complications, even in rural hospitals with fewer capabilities.

Use cases for improved surgical automation

This new technology is interesting, said Giulio Santoro, a surgeon in Treviso, Italy, specializing in general and colorectal procedures, as well as robotic-assisted surgery.

“An accuracy of 1 mm may be very important, especially in complex procedures with fine dissection, sparing nerves, lymph nodes dissection around major vessels, micro-anastomoses — such as rectal surgery, upper GI, and HPB surgery,” he said. “It’s very interesting, especially in multi-quadrant surgery, [such as in] a surgical procedure with a lot of changes in anatomical areas/targets.”

“This allows the surgeon to plan where the instruments should be positioned for an optimized angle before performing the procedure, avoid collision of instruments, avoid touching — and potentially damaging — surrounding organs outside the surgical field of interest, and ultimately improving the navigation of the surgical instrument,” Santoro added.

Of all colorectal surgeries in the U.S., only 22% are performed by laparoscopic methods, and 18% were done by robotic laparoscopy in 2015. A major reason for the limited uptake of laparoscopic methods is that rectal cancer surgery takes place in the deep, narrow pelvis, close to vital anatomic structures.

The procedure is technically complex and requires a high degree of precision.

“Robot-assisted laparoscopy enables smaller and more precise movements compared to conventional laparoscopy and could potentially enable more surgeons to perform the complex procedure of rectal cancer surgery,” said Santoro.

How it all started

There is a general consensus that brilliant medical devices are invented when doctors and engineers join forces. However, 3Dintegrated started from a very different perspective: that of the patient.

Henriette Kirkegaard, CEO of 3Dintegrated, knows firsthand the need for more and better surgical automation.

Henriette Kirkegaard had just undergone her last melanoma cancer surgery when she had to say goodbye to her mother, who died from the same disease. From that moment, Kirkegaard said, it was clear to her that she had to use her second chance at life to prevent other families from going through the same trauma.

She then founded 3Dintegrated with doctor Steen Hansen and set out to find a solution to the need she knew from her own experience.

“The applications for the technology are endless,” Kirkegaard said. “The most important thing is that it can assist surgeons in performing cancer surgery that is not performed today due to lack of experience and lack of information. The goal for me and the team is to bring the technology to the market and make it available to as many patients as possible. In that way, someone’s mother, father, child, or friend will have better chances of surviving cancer, and in the end, that is what matters.”

Computer vision promises to make surgical automation more precise, allowing robots to better assist human surgeons and make procedures safer and more accessible for everyone.