Results

The SAFROS project resulted in the following advancements:

  • Practical technological developments in each surgical robotics area concerned with patient safety;
  • A methodological framework to integrate the practical solutions into one coherent approach. 

While the former were carried out by the individual members of the project's consortium (listed below), the latter integration effort was supervised by the World Health Organization, the University of Verona and San Raffaele Hospital. The SAFROS project therefore combines practical and theoretical approaches to the issue of patient safety in robotic surgery.

The “SAFROS method”

The first major outcome of the project is the adaptation of an existing safety framework to the needs of robotic surgery, drawing on experience in the assessment of patient safety in a broader context.

We established a practical method to bring surgery and engineering – disciplines quite disconnected until recently – under the same conceptual framework of patient safety. This procedure was successfully tested throughout the project and is a concrete example of how our method allows a complex scenario to be tackled.

In practical terms, the key steps of our method are summarized below (an illustrative sketch of the resulting loop follows the list):

  • Given a surgical procedure, identify and group its major steps in classes based on their clinical domain;

  • Perform a risk analysis on each step and sort its results accordingly;

  • Identify the most significant adverse events in each domain and the corresponding root causes;

  • Identify the appropriate safety measures to detect the occurrence of each trigger;

  • Develop local solutions to prevent the adverse events or counteract them with specific safety measures;

  • Check globally that the solution does not negatively affect the overall safety status, avoiding local minima in the safety landscape;

  • Iterate the process.
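
Purely as an illustration of how the steps above chain into an iterative loop, the following Python sketch walks through them on hypothetical data structures; the names (Step, AdverseEvent, risk_analysis, etc.) are ours and do not correspond to any actual SAFROS software component or deliverable.

    # Illustrative sketch of the iterative safety-analysis loop (hypothetical names).
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        domain: str            # clinical domain used to group the step

    @dataclass
    class AdverseEvent:
        description: str
        root_causes: list
        severity: float        # e.g. a risk-priority score from the risk analysis

    def risk_analysis(step):
        """Placeholder: adverse events identified for one procedure step."""
        return []              # filled in by clinical and technical experts

    def design_safety_measure(event):
        """Placeholder: local solution preventing or counteracting one event."""
        return f"measure for: {event.description}"

    def global_safety_ok(measures):
        """Placeholder global check that local fixes do not degrade overall safety."""
        return True

    def safros_iteration(procedure_steps, severity_threshold=0.5):
        # Steps 1-2: group procedure steps by clinical domain and analyse each one
        events_by_domain = {}
        for step in procedure_steps:
            events_by_domain.setdefault(step.domain, []).extend(risk_analysis(step))
        # Step 3: keep the most significant adverse events in each domain
        significant = {d: [e for e in evs if e.severity >= severity_threshold]
                       for d, evs in events_by_domain.items()}
        # Steps 4-5: derive detection and prevention measures for each event
        measures = [design_safety_measure(e)
                    for evs in significant.values() for e in evs]
        # Step 6: global check before accepting the local solutions
        if not global_safety_ok(measures):
            raise RuntimeError("local solutions degrade global safety; revise and iterate")
        return measures        # Step 7: the loop is then repeated on the updated procedure

    steps = [Step("expose pancreas", domain="surgical access"),
             Step("ultrasound scan of lesion", domain="imaging")]
    print(safros_iteration(steps))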

Each step took form as one or more Project Deliverables throughout the course of our research, and the method itself was described in detail in the final document “Deliverable 1.4: Report on Building Safety into New Technologies”.

In particular, the risk analysis produced a list of Evaluation Dimensions (EDs) that target very specific aspects of patient safety, closely bound to the technology being assessed. For example, some of these EDs evaluate the precision of the tools and the accuracy of the robot movement; they are referred to throughout the document.

Research Outcomes

This section briefly lists the outcomes of the research carried out by each institution or through their collaboration.

Project Consortium

The SAFROS consortium that took part in the research effort is composed of:

  • World Health Organization (WHO)

  • Università degli Studi di Verona (UNIVR)

  • Fondazione Centro San Raffaele del Monte Tabor (HSR – now FCSR)

  • Tallinna Tehnikaülikool (TUT)

  • Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)

  • Karlsruher Institut für Technologie (KIT-U)

  • Higher School of Pedagogic & Technological Education (ASPETE)

  • École Polytechnique Fédérale de Lausanne (EPFL)

  • Holografika Hologramelőállító Fejlesztő és Forgalmazó Kft. (HOLO)

  • Force Dimension s.a.r.l. (FORCE)

Requirements, Methodology and Human Factors

San Raffaele Hospital (FCSR/HSR)

The research carried out by FCSR can be grouped into three main fields of investigation: the identification of safety requirements, the delineation of the project methodology for the identification and mitigation of risks, and the analysis of the Human Factors component in robotic surgery.

The first field aimed to study all the influencing factors, key objectives and challenges to be accounted for in a patient-safety-driven development of the SAFROS technological solutions and in their consistent, safe introduction into a well-established surgical workflow.

The results of this preliminary analysis showed that patient safety in robotic surgery involves several concurrent criticalities, linked not only to technological and procedural aspects but also emerging from the overlap of these two categories.

This consideration constituted the starting point for the second area of analysis: the definition of a methodological framework for the identification and mitigation of risks that could affect patient safety. The methodology was implemented as a threefold schema, assessing the safety of the SAFROS solutions first individually (product safety analysis), then by analysing their impact when introduced into a surgical process (process safety analysis), and finally by integrating them into a wider organizational context (organizational safety analysis). These analyses were supported by multidisciplinary means such as literature reviews, on-site observations, structured interviews, questionnaires and formal risk-assessment methods. Thanks to this approach, it was possible to derive a set of technical and medical safety metrics reflecting the improvements in patient safety expected from the SAFROS products.

Exploiting the identified safety metrics, the SAFROS solutions underwent a specific safety assessment, designed to reflect the different demonstration purposes coherently with the safety analysis carried out. The SAFROS products were first investigated from a technological point of view and then from a medical, patient-related perspective. At the end of the testing activities, each SAFROS product could be paired with its potential benefits for patient safety.

FCSR's third field of investigation, the Human Factors (HF) analysis, highlighted the importance, in the context of robotic surgery, of teamwork dynamics and appropriate behavioural attitudes, whose absence is a source of hazards that can lead to human errors affecting patient safety. In line with the project's safety-driven perspective, the FCSR team introduced Human Factors competency requirements into the SAFROS training curriculum for robotic surgery. The final aim, confirmed by the results of a dedicated testing activity, was to prove that dedicated Human Factors training favours an appropriate behavioural response to specific HF issues and contributes to delivering a higher level of safety to the patient.

In conclusion, the FCSR research demonstrated the necessity and efficacy of a systems approach for achieving the project objectives, one that considers all the factors contributing to the level of patient safety attainable in robotic surgery. Safety proves to be a global property from the technical, medical, environmental and methodological points of view.

Expanding the discussion to the more general concept of patient safety, the work done so far is characterized by generalizability and flexibility of application. The methodology developed for the identification and mitigation of risks can be applied in any safety-critical healthcare environment and to any innovative product that needs to be introduced into it.

Because it relies on formal methods of investigation within a well-structured methodology and yields customizable metrics with objective value, this approach is valuable from several viewpoints in the healthcare field. It can help producers demonstrate the potential of their innovations, and it can help process stakeholders and healthcare organizations decide, on an informed basis, whether to adopt these new solutions. Moreover, these methods of analysis can also reach and be understood by patients and the general public, who can thus be properly informed about the relevant features in terms of patient safety.

All these factors combine to create the basis for an appropriate patient safety culture, one intended to permeate all levels of healthcare organizations and to spread among the professionals directly or indirectly involved in the healthcare process.

Organ Phantoms

TUT

Our research within the SAFROS project allowed technologies for patient safety in robotic surgery to be validated first on surgical phantoms rather than directly on animals. Phantoms provide an easy and inexpensive way to learn and experiment with close-to-reality artificial organs: they require no operating room environment and, most importantly, carry no ethical issues.

Two different surgeries, pancreatic enucleation and cardio-vascular surgery, were selected for validation purposes. The two tasks are very different in scope but at the same time challenging enough to validate the project concepts. The artificial phantoms therefore had to match the topology of the organs involved closely and replicate their mechanical properties so as to feel realistic to the touch. Furthermore, we chose to pursue similarity not only in haptic and tactile feedback but also in medical imaging (ultrasound and CT).


Figure : Operative test (AAA repair) on aortic phantom


Figure : A detail of the aortic phantom with a surgical graft

During the course of the project, TUT provided partners with the properties and virtual 3D models of pig abdominal organs for the surgical simulator and for organ production. The virtual models were segmented semi-automatically from actual pig CT scans and were validated by radiologists/veterinarians. Based on the obtained models, an abdominal box with numerous organ phantoms was produced during the course of the project [2]. The pancreas phantom is designed to have realistic mechanical properties (geometry and elasticity of human organ tissue) and imaging properties (US properties for intra-operative guidance and CT properties for pre-operative model creation) [3 and 4], while the dilated aorta phantom for AAA [1] and the nearby organs for PTE are designed to have realistic mechanical properties only.


Figure : Organ phantom with realistic Ultrasound properties


Figure : Organ phantom with realistic CT properties

The main contribution of TUT towards improving the quality and safety of robotic surgery is the development and production of realistic, low-cost organ phantoms for practising and assessing robotic surgery procedures within the scope of the SAFROS project and beyond.


Figure : The pig abdomen phantom and the MIRO robot


Figure : Inner view of the pig phantom


Figure : Organ mould fabrication from human CT scan

 

The anatomically shaped organ phantoms with imaging properties were needed in the SAFROS project as controlled models for assessing the quality and performance of medical image processing algorithms and surgical interventions. The phantoms also allowed project partners to minimise the extent of animal experimentation during the validation of technologies for patient safety, by substituting animal organs and body parts with realistic artificial phantoms.

Beyond the scope of the project, such low-cost phantoms with correct mechanical and imaging properties, if applied on a large scale, would reduce the amount of "on the job" training by young surgeons and radiologists, reduce the need for animal experimentation, and enable hazardous procedures to be tested on realistic physical models (potentially patient-specific) before real operations. The project thus has the potential to partially resolve some ethical issues (e.g. experimentation on animals) as well as to increase patient safety directly (young surgeons and radiologists will start operating on humans better prepared, and complicated or hazardous procedures can be rehearsed prior to the actual intervention) and indirectly (the high availability of low-cost phantoms allows interventions to be practised more often).

Medical Imaging Segmentation

TUT, UNIVR

Understanding the content of medical images has become of paramount importance in everyday clinical practice, since most patients start their clinical experience with a medical image acquisition. Whatever the modality used to acquire the data, images can greatly help the radiologist or surgeon understand the clinical condition of the patient and improve the diagnosis. Originally, medical images were used mainly in the pre-operative phase, but with the widespread adoption of minimally invasive procedures, images have also become a key factor in compensating for the lack of direct visual feedback and in guiding the surgical action with precision.

Segmenting the content of a medical image (i.e. localizing its different structures) is a very time-consuming task for an expert user, and it is also highly user-dependent, since the image quality and the experience of the clinician strongly influence the outcome of manual segmentation. Computational methods can greatly speed up the segmentation procedure while also reducing the variability of the outcome.

During the SAFROS project we studied the problems connected to image segmentation, both in the pre-operative and in the intra-operative phase. Pre-operative image segmentation requirements were defined and evaluated on realistic phantoms with known geometric properties. We studied the low-level properties of different medical image modalities, comparing real patient datasets with phantom datasets, and from these studies obtained a better characterization of the differences between the two experimental setups. We also studied the results of manual segmentations performed by different users on the same datasets, finding that a single user is not a reliable source of ground truth.
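
To make the inter-observer comparison concrete, the short sketch below computes the Dice similarity coefficient between two binary manual segmentations; it is a generic, widely used overlap measure given here for illustration only, not necessarily the metric mandated in the project deliverables.

    import numpy as np

    def dice_coefficient(seg_a, seg_b):
        """Dice overlap between two binary segmentation masks (True = structure)."""
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0                       # both masks empty: perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / total

    # Example: two observers delineating the same structure on a toy image
    observer_1 = np.zeros((64, 64), dtype=bool); observer_1[20:40, 20:40] = True
    observer_2 = np.zeros((64, 64), dtype=bool); observer_2[22:42, 22:42] = True
    print(f"Dice overlap between observers: {dice_coefficient(observer_1, observer_2):.2f}")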

Intra-operative image segmentation was also studied, and computational methods were developed to segment the images in real time during data acquisition. The output of the segmentation methods was analysed and used for registration with the pre-operative models, in order to define technical requirements for both the segmentation and the registration methods.
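
The real-time methods themselves are described in the cited publications; purely to illustrate the kind of per-frame pipeline involved, the sketch below segments a single ultrasound frame by smoothing, automatic thresholding and contour extraction with OpenCV. The filter sizes and area threshold are arbitrary assumptions, not the project's parameters.

    import cv2
    import numpy as np

    def segment_us_frame(frame_gray):
        """Toy per-frame segmentation: denoise, threshold (Otsu), extract contours."""
        blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)               # reduce speckle
        _, mask = cv2.threshold(blurred, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # automatic threshold
        # OpenCV >= 4 returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) > 100.0]      # drop tiny blobs

    # Example on a synthetic frame: a bright ellipse standing in for an organ boundary
    frame = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(frame, (128, 128), (60, 40), 0, 0, 360, 200, -1)
    print(f"structures found: {len(segment_us_frame(frame))}")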

Errors in pre-operative segmentation may result in a wrong diagnosis or in the selection of a suboptimal surgical approach, increasing the risk for the patient and thus decreasing safety. Errors in intra-operative segmentation can lead to a wrong evaluation of the surgical field and to the execution of wrong surgical actions, with dire consequences for the patient's life.

Computer-assisted diagnosis and planning systems could greatly help the radiologist perform diagnostic evaluations with a better understanding of the patient's anatomy and pathological condition. This can lead to a better choice of surgical approach, improving the pre-operative planning of the surgery and making it possible to rehearse the surgery on a patient-specific simulator.

Intra-operative image-guided systems with automatic segmentation could enable the adoption of image guidance in surgical procedures where it would otherwise not be possible to use such systems. Intra-operative image segmentation, together with registration methods, can also link the pre-operative plan with the actual scenario during surgery, increasing the amount of information available to the surgeon. These technologies could enable an intra-operative surgical navigation system that helps the surgeon localize target lesions and critical surrounding areas faster and more accurately, improving patient safety during the surgery.

Lastly, another contribution of the SAFROS project is the research and development of real-time US image segmentation algorithms for intra-operative situation assessment [7 and 8], as well as the development and application of a semi-automatic CT-scan segmentation procedure to create anatomically correct, patient-specific models of organs, usable in surgical simulations and phantom organ production [9].


Figure : US image segmentation result: segmented ultrasound image


Figure : US image segmentation results – blue organ walls, green cyst walls and red duct walls. A number of scans overlaid on top of pre-operative model (white mesh)

Surgical Simulation and Training

UNIVR

An important aspect that UNIVR investigated during the SAFROS project is the training of surgeons in the use of the robot. The main result of this study is Chiron, a surgical simulator developed to answer the need for a simulator of advanced skills. Chiron extends the features of the surgical simulators currently available on the market and ensures that the simulation behaves correctly by accurately modelling the interactions between objects, introducing realistic friction and other properties of the simulated elements. In this way we maximize training effectiveness and can reduce its overall duration. In addition, Chiron supports tasks that train non-technical skills and that evaluate the cognitive load of the trainee, in order to assess his/her proficiency in stressful situations.
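
As a small illustration of the kind of contact property mentioned above, the sketch below implements a textbook Coulomb-plus-viscous friction force; it is a generic model shown for clarity, not the actual contact model implemented in Chiron, and the coefficients are arbitrary.

    import numpy as np

    def friction_force(normal_force, tangential_velocity, mu=0.3, viscous_coeff=0.05):
        """Coulomb + viscous friction opposing the tangential sliding velocity.

        normal_force         magnitude of the contact normal force [N]
        tangential_velocity  tangential velocity of the contact point [m/s], 3D vector
        mu, viscous_coeff    illustrative friction coefficients
        """
        v = np.asarray(tangential_velocity, dtype=float)
        speed = np.linalg.norm(v)
        if speed < 1e-9:                       # no sliding, no kinetic friction
            return np.zeros_like(v)
        direction = v / speed
        return -(mu * normal_force + viscous_coeff * speed) * direction

    # Example: instrument tip sliding over simulated tissue at 1 cm/s under 2 N of contact
    print(friction_force(2.0, [0.01, 0.0, 0.0]))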

Chiron is being distributed to the hospitals that are partners in the project as well as to a few external hospitals and training centres around Europe. The use of Chiron for the training of surgeons has a twofold advantage: it greatly reduces the cost of training (thus spreading the availability of training platforms) and it increases the proficiency of the trainee beyond the bare dexterous manipulation of the robot. This, in turn, results in greater safety for the patient, as the surgeon learns how to correctly perform all the steps of the intervention.

This first phase of testing should result in the validation of Chiron as an effective tool for robotic surgery training. If the results are successful, we will turn it into a commercial product. To this end, Chiron already supports full customization and extension of the training tasks, so that training curricula specific to the needs of the different surgical disciplines can be created and adapted.

Image Registration

UNIVR

Robotic surgical systems use images as the principal source of information for navigation and for the execution of surgical actions during the intervention. Besides the video imaging systems that are part of the robotic system (e.g. the da Vinci), there is increasing use of ultrasound (US) images in the operating room (OR) during both robotic and classical laparoscopic surgery. We are also aware of the growing interest in including the US probe in a robotic system to be used in the OR. The adoption of US images during surgery could greatly improve patient safety by enabling the localization of internal structures not visible to other visual systems.

Another important image modality is used by the surgeon before surgery to better understand the clinical condition of the patient: the pre-operative images. In abdominal surgery, pre-operative images come most of the time from computed tomography (CT) or magnetic resonance imaging (MRI). These images have very high resolution and accuracy and help the surgeon plan the operation.

The integration of these different types of images (i.e. video, US and pre-operative images), known as registration, is certainly a key component of the future workflow of robotic surgery. By integrating a registration system into the robotic system, the physician benefits from a better view of the interventional area, both outside the organ (video) and inside it (US); it also becomes possible to correlate this information with the pre-operative images (CT, MRI) on which the plan was defined. The intervention therefore becomes more precise and quicker and, on the whole, patient safety increases.

During the three years of the SAFROS project, we investigated the main requirements such a system must satisfy in order to be effective and efficient, and we built and tested algorithms that can solve the registration problem. The tests were done on specially developed phantoms that gave us complete control over the radiological properties and the geometry of the models.
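
The algorithms we developed are documented in the project deliverables; to make the registration problem concrete, the sketch below shows one standard building block, the SVD-based least-squares rigid alignment of corresponding point sets (e.g. intra-operative landmarks against a pre-operative model). It is a generic textbook method given for illustration, not the specific SAFROS algorithm.

    import numpy as np

    def rigid_registration(source_pts, target_pts):
        """Least-squares rigid transform (R, t) mapping source points onto target points.

        Both inputs are (N, 3) arrays of corresponding 3D points.
        """
        src = np.asarray(source_pts, dtype=float)
        tgt = np.asarray(target_pts, dtype=float)
        src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)     # centroids
        H = (src - src_c).T @ (tgt - tgt_c)                   # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                              # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return R, t

    # Example: recover a known rotation/translation from noiseless correspondences
    rng = np.random.default_rng(0)
    pts = rng.random((10, 3))
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.1, -0.2, 0.05])
    R_est, t_est = rigid_registration(pts, pts @ R_true.T + t_true)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))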

Operating Room Supervision

KIT

In the scope of the SAFROS project, KIT has developed and implemented a supervision system for the operation room. Based on different 3D cameras, the unique system is able to acquire and interpret information about the current environment in real-time. The system offers various safety features for robot assisted interventions, both pre- and intraoperative. In order to be applicable to different surgical robotic systems, the operation room supervision system (OSS) has been specifically designed to offer generic safety features instead of supporting only one type of surgical robot. In the SAFROS project it has been successfully used with both the DLR MiroSurge system and the KIT OP:Sense research platform.

The SAFROS OSS has been designed and implemented as a modular, redundant, distributed system. Redundancy on all layers minimizes the probability of system failure even in the case of malfunctions of one or more components. The distributed architecture allows the sensors to be spatially separated from the data processing, thereby minimizing the footprint of the system close to the OR table.

On the technical side, the system integrates three different camera systems. Industrial-grade time-of-flight (ToF) 3D cameras are used for reliable and fast scene acquisition at low resolution. In order to deal with (partial) occlusions, a seven-camera configuration has been established. As the use of multiple ToF cameras would normally lead to incorrect measurements due to crosstalk between the cameras, a special time- and frequency-multiplexing triggering library has been developed, and its effectiveness has been verified in extensive tests. In addition to the original work plan, the Microsoft Kinect 3D camera, which was released during the SAFROS project, was evaluated for integration into the OSS. As the Kinect provides a significantly higher resolution than the ToF cameras as well as additional colour information, it was decided to integrate four Kinects alongside the ToF cameras. For highly precise marker-based tracking, the ARTtrack2 system has also been integrated into the SAFROS OSS in a six-camera configuration.
To comprehensively deal with the data acquired by both 3D camera subsystems, a two-level scene model has been implemented. The first level is the safety layer that processes the low-latency data acquired by the ToF camera subsystem. It is used for safety critical features such as collision checking, collision avoidance and robot pose verification. The second level consists of a highly detailed environment model created by combining data provided by the Kinect subsystem. This is the basis for semantic scene analysis such as human detection and tracking.

The safety features that have been developed and implemented in the OSS can be categorized into two groups: preoperative and intraoperative safety features. During the setup of the operation room, the surgical robotic system has to be positioned correctly in order to deliver the best performance. Different localization methods have been designed and implemented that allow the SAFROS OSS to automatically verify that the robot's location complies with the pre-planned position. In addition, by using marker-based tracking of the robot's end effector and of the robotic surgical instruments, the OSS can verify the correct coupling of the instruments. These preoperative safety features ensure that the intervention can start under optimal preconditions.


Figure : Scenario illustration of safe interaction between a surgeon and two robots in the same workspace;
overlaid illustration of virtual scene representation as captured by the SAFROS operation room supervision system

Thanks to the different sensing capabilities integrated into the SAFROS OSS, the system can redundantly monitor the robot's performance through three independent modalities: tracking of the end effector and/or surgical instrument, calculating the pose from the robot's internal sensors, and analysing the shape of the robot acquired by the 3D cameras. If a discrepancy between these modalities is detected, the surgical staff can be notified immediately in order to react before the robot's performance degrades significantly.
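
A minimal sketch of such a cross-check is given below: three independent estimates of the instrument-tip position are compared pairwise and an alert is raised when they disagree by more than a threshold. The threshold value and function names are our assumptions, not taken from the SAFROS OSS implementation.

    import numpy as np

    def check_pose_consistency(tracked, kinematic, camera_shape, threshold_mm=5.0):
        """Compare three independent estimates of the instrument-tip position (in mm).

        tracked       position from marker-based tracking
        kinematic     position from the robot's internal sensors (forward kinematics)
        camera_shape  position from the 3D-camera shape analysis
        Returns the pairs of modalities whose disagreement exceeds the threshold.
        """
        estimates = {"tracking": np.asarray(tracked, float),
                     "kinematics": np.asarray(kinematic, float),
                     "3D cameras": np.asarray(camera_shape, float)}
        names = list(estimates)
        discrepancies = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                dist = np.linalg.norm(estimates[names[i]] - estimates[names[j]])
                if dist > threshold_mm:
                    discrepancies.append((names[i], names[j], dist))
        return discrepancies

    # Example: the kinematic estimate drifts by 8 mm relative to the other two
    for a, b, d in check_pose_consistency([0, 0, 0], [8.0, 0, 0], [0.5, 0, 0]):
        print(f"WARNING: {a} and {b} disagree by {d:.1f} mm - notify surgical staff")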

In order to guarantee the safety of the surgical staff when interacting with the robot or acting in the same workspace (see the figure above), a special safety concept has been developed and implemented. Based on the geometric information of the scene, captured by the 3D cameras, and on the known current pose of the robot, a safe hull is constructed around the robot. The OSS constantly checks whether this safe hull is violated, e.g. by a person accidentally reaching into the motion path of the robot or by an object placed too close to the robot. If a violation of the safe zone is detected, different reactions can be triggered, such as slowing down the robot before a collision happens while simultaneously notifying the surgical staff. By avoiding collisions caused by the robot, the supervision of the safe hull can increase the safety of both the surgical staff and the patient.
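
For illustration only, the sketch below checks a much simplified "safe hull": the robot is approximated by line segments between its joint positions, and any observed 3D point closer than a safety margin triggers a slow-down reaction. The geometry, margin and reaction are simplifying assumptions, not the actual OSS implementation.

    import numpy as np

    def point_to_segment_distance(p, a, b):
        """Shortest distance from 3D point p to the segment between a and b."""
        p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
        ab = b - a
        denom = float(np.dot(ab, ab))
        t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
        return float(np.linalg.norm(p - (a + t * ab)))

    def safe_hull_violated(scene_points, robot_joint_positions, margin_m=0.15):
        """True if any observed point enters the safety margin around the robot links."""
        links = list(zip(robot_joint_positions[:-1], robot_joint_positions[1:]))
        return any(point_to_segment_distance(p, a, b) < margin_m
                   for p in scene_points for a, b in links)

    # Example: a hand reaching about 10 cm from one of the robot links
    robot = [(0, 0, 0), (0, 0, 0.4), (0.3, 0, 0.4)]   # simplified joint positions [m]
    scene = [(0.15, 0.1, 0.4)]                        # one point from the 3D cameras
    if safe_hull_violated(scene, robot):
        print("Safe hull violated: slow down the robot and notify the staff")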

As a measure to guarantee the correctness of the OSS itself, different consistency checks have been integrated that constantly check if the data acquired by the various cameras is plausible. If a camera is found to provide implausible data, e.g. outside of its specifications or inconsistent with other cameras, its measurements are excluded from being integrated into the virtual scene.

In summary, KIT has designed and successfully established a new safety concept for surgical robots based on 3D camera supervision of the operation room. The full integration of both pre- and intraoperative safety features ensures a safe and reliable execution of the intervention, thereby increasing the safety of the surgical personnel and, ultimately, of the patient. Given the massive growth of 3D sensing in various areas of robotics, it is very likely that the adaptation and integration of sensing technologies into a truly "smart" operation room will soon become a major topic. The SAFROS technologies developed by KIT to increase patient safety through operation room supervision can be a strong foundation for further developments in this field.

Robotic Control

DLR, FORCE

The DLR MiroSurge surgical robotic system is one of the most advanced research platforms for minimally invasive robotic surgery (MIRS) and has been an ongoing research project at DLR over the last years. With the maturing of the robotic technology, the need has become obvious for software that provides it with novel methods for increasing patient safety. Consequently, DLR's focus within the SAFROS project has been on novel software technology for robotic surgery systems, further investigating promising methods such as robotic system simulation, planning and setup strategies, and innovative control algorithms for haptic input devices. The figure below shows the DLR MiroSurge system for MIRS, with the robotic system (slave side) on the left and the surgeon's console (master side) on the right; the software components developed by DLR within the SAFROS consortium to increase patient safety are highlighted in the blocks.


Figure : The DLR MiroSurge System for Minimally Invasive Robotic Surgery with Emphasized SAFROS Contribution from DLR – Software Technology for Surgical Robotic Systems to Increase Patient Safety

In a first step, metrics were identified by the SAFROS consortium to enable the evaluation of the new methods to be developed within the project. These metrics are referred to as Evaluation Dimensions. DLR contributed three key topics to improve the results in terms of the Evaluation Dimensions within the SAFROS consortium:

  • Robotic System Simulator (RSS): Development of a robotic system simulator (RSS), fully interchangeable with the real robotic system, to enable high-performance control and system performance monitoring.

  • System Setup Strategies: Development of system setup strategies that allow for efficient setup planning and fast transfer of the surgical robotic system into the operating room, while maintaining full reachability of the task-specific workspace within the patient, independently of the patient's individual anatomy.

  • Control Algorithms for Haptic Input Devices: Development of control algorithms for haptic input devices for robotic surgery, namely the sigma.7 from Force Dimension, allowing for high transparency and performance as well as system performance monitoring.

Robotic System Simulator (RSS): The figure below depicts the main parts of a modular simulator for robotic surgery. The user interface is represented by the Surgeon Console; the Robotic System Simulator represents the manipulators, i.e. the robots and tools of a robotic surgery system; and the World Simulator represents the patient in his environment.

In order to scale the modelling and computational effort, different use cases of such a simulator were identified and appropriate levels of modelling detail, i.e. classes of model complexity, were defined. The most relevant use cases are Surgeon Training, User Interface Design, Workflow Design and Validation, Robot Design, and System Monitoring. Three hierarchies of modelling detail along the three distinct aspects Application, System and Patient define a classification of how closely the model approximates the real world. To allow for real-time interaction between the modules, the Systems Modelling Language (SysML) is used to define the system components and interfaces.


Figure : A Modular Simulator for Minimally Invasive Robotic Surgery (MIRS).

Besides enabling high-performance control of the real robotic system, the Robotic System Simulator is of major importance for the System Monitoring use case. Novel calibration algorithms were therefore developed to approximate the real robotic system as closely as possible. Although focused mainly on lightweight robots such as the DLR MIRO, the developed calibration method is generally valid, also for industrial lightweight robots (e.g. the KUKA LWR) and for robots without integrated torque sensors. The validity of the derived models is proven in experiments considering not only the controller performance but also the monitoring use case of the simulation, in which the RSS is directly compared to the real robotic system. The results are mapped to the corresponding Evaluation Dimensions, and the predefined accuracies required to reach a certain level of patient safety are achieved thanks to the calibrated robotic system simulator.
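
The actual DLR calibration algorithms are described in the corresponding deliverables; as a generic illustration of the underlying idea (fitting model parameters so that the simulator reproduces the measured robot), the sketch below calibrates the joint offsets of a planar two-link arm by non-linear least squares. Link lengths, offsets and function names are illustrative assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    LINK_LENGTHS = (0.30, 0.25)   # illustrative link lengths [m]

    def forward_kinematics(joint_angles, offsets):
        """End-effector (x, y) of a planar 2-link arm with joint offsets to calibrate."""
        q1 = joint_angles[0] + offsets[0]
        q2 = joint_angles[1] + offsets[1]
        l1, l2 = LINK_LENGTHS
        return np.array([l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
                         l1 * np.sin(q1) + l2 * np.sin(q1 + q2)])

    def residuals(offsets, joint_samples, measured_positions):
        """Difference between the model's prediction and the measured positions."""
        predicted = np.array([forward_kinematics(q, offsets) for q in joint_samples])
        return (predicted - measured_positions).ravel()

    # Synthetic "measurements": the real robot has small, unknown joint offsets
    true_offsets = np.array([0.02, -0.015])                                   # [rad]
    joint_samples = np.random.default_rng(1).uniform(-1.5, 1.5, size=(20, 2))
    measured = np.array([forward_kinematics(q, true_offsets) for q in joint_samples])

    fit = least_squares(residuals, x0=np.zeros(2), args=(joint_samples, measured))
    print("estimated joint offsets [rad]:", fit.x)   # recovers approximately [0.02, -0.015]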

The presented classification and modularization have successfully been used for the implementation of the Robotic System Simulator within the SAFROS project. The developed concepts are also applicable to tele-robotics in general, e.g. the operation of robots in hazardous environments such as space or disaster zones.

System Setup Strategies: System setup strategies were developed that allow for efficient setup planning and fast transfer of the surgical robotic system into the operating room, while maintaining full reachability of the task-specific workspace within the patient, independently of the patient's individual anatomy.

To this end, a novel approach is taken to perform the setup procedure of the robotic system based on virtual reality (VR) assistance: instead of relying on preoperative data, which is often not comparable to the actual patient pose in the OR (due to e.g. soft-tissue displacement and insufflation), the VR system provides the surgeon with an augmented view that shows the workspace of each robot. The surgeon can then compare these workspaces with the patient anatomy and the desired region of surgery. When the system is repositioned, the workspace is updated online. In this way, neither preoperative data nor a time-consuming registration is necessary. A validation with surgeons proved the potential of this method and showed that the requirements defined in the relevant Evaluation Dimensions are fulfilled.

Control Algorithms for Haptic Input Devices: Two developments were made: on the one hand, a novel control algorithm that scales down the inertia and friction effects of the sigma.7 device; on the other hand, a disturbance observer that detects collisions with parts of the sigma.7 device other than the handle. Both are based on the developed dynamics model and on the 6-degree-of-freedom force/torque sensor integrated by Force Dimension.
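
As a simplified illustration of the disturbance-observer idea (and explicitly not the actual sigma.7 controller), the sketch below compares the force measured by a wrist force/torque sensor against the force predicted by a very reduced dynamics model; a residual above a threshold is treated as an unexpected collision. The model, coefficients and threshold are assumptions.

    import numpy as np

    def predicted_force(mass_kg, acceleration, viscous_coeff, velocity):
        """Force expected from the device's own (much simplified) dynamics model."""
        return (mass_kg * np.asarray(acceleration, dtype=float)
                + viscous_coeff * np.asarray(velocity, dtype=float))

    def collision_detected(measured_force, mass_kg, acceleration, velocity,
                           viscous_coeff=0.1, threshold_n=2.0):
        """Flag a collision when the measured-minus-model force residual is too large."""
        residual = np.asarray(measured_force, dtype=float) - predicted_force(
            mass_kg, acceleration, viscous_coeff, velocity)
        return np.linalg.norm(residual) > threshold_n

    # Example: the sensor reads about 3 N more along x than the model explains
    if collision_detected(measured_force=[3.2, 0.0, 0.0], mass_kg=0.5,
                          acceleration=[0.2, 0.0, 0.0], velocity=[0.05, 0.0, 0.0]):
        print("Unexpected contact detected: hold the input device and warn the operator")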

Numerous technical experiments prove that the proposed controller improves the Evaluation Dimensions concerning the surgeon console in terms of movement precision, transparency and usability. The same results were obtained in a user study based on the modular simulator for MIRS in its typical configuration. A positive impact of the disturbance observer on patient safety can also be shown: unintended movements of the input device, and therefore of the robot inside the patient, can be avoided by reacting to the detected collisions. Naturally, the proposed methods are beneficial not only for MIRS but also for other applications of tele-robotics and haptics.

As future work, the potential of the modular simulator for MIRS can be investigated further. For the Surgeon Training use case, for example, the interface between the Robotic System Simulator and the World Simulator could be improved so that a robust and stable co-simulation with high haptic transparency can be performed. Regarding the Monitoring use case, merging the developed model-based system monitoring with model-free monitoring approaches from the data mining and fault detection and isolation communities could provide further knowledge of the state of the robotic system during robotic surgery, and thus increase patient safety further.

Besides the described contributions, DLR hosted several meetings and gave a demonstration of the SAFROS results to the Commission in November 2012. Several publications on the results have appeared or are forthcoming. Naturally, proving the improvement by performing real surgeries on patients using commercial systems is beyond the scope of this project. However, the results achieved in this project will be kept in view and may contribute to the successful commercialization of DLR's MiroSurge system or similar ones.

Surgical Interface and Telepresence

EPFL

During the first year, the EPFL team studied the required specifications of a tele-operated surgical setup, and especially the technical requirements needed to perform two pilot surgeries: abdominal aortic aneurysm (AAA) repair and pancreatic surgery. A dedicated force sensor, compatible with the surgical clamps used on the aorta, was also developed.

The two main achievements of the remainder of the project concern a study on telepresence and the development of a supervisory interface for the MiroSurge setup.

In surgical robotics, the information from the remote environment (visual, aural, haptic, etc.) must be transmitted and rendered to the surgeon through a surgical workstation. In order to give the surgeon the feeling of being present at the remote environment, an appropriate level of telepresence is required. A good level of telepresence guarantees intuitiveness, which can greatly improve a surgeon's performance and thus patient safety. During the SAFROS project we therefore proposed an objective, neuroscience-based method to assess the level of telepresence and performed experiments to determine the role of different factors in it.

Regarding the supervisory interface, the lack of a general dashboard was first identified. Such an interface was then designed and integrated into the DLR middleware. It allows all components of the robot setup to report their state and displays any important information to the surgical staff in an understandable way. In case of a problem or failure of a component, an acoustic alert warns the staff and a short guide is displayed. The interface also allows recording of the name of the surgical procedure, the surgical staff and some patient data. The checklist provided by WHO is also integrated, the aim being to provide a central console where the surgical staff can rapidly obtain practical generic and specific information about the surgery. This new component was tested and validated in the context of patient safety during a survey session held at HSR, Milan, in September 2012, during which valuable comments and feedback were obtained.
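
Purely as a sketch of the dashboard concept just described (components reporting their state, an alert path and a short recovery guide), and not the actual interface integrated into the DLR middleware, a minimal status aggregator could look as follows; all names and states are hypothetical.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class SupervisoryDashboard:
        """Collects component states and raises an alert when one reports a failure."""
        states: Dict[str, str] = field(default_factory=dict)
        guides: Dict[str, str] = field(default_factory=dict)   # short recovery guides

        def report(self, component: str, state: str) -> None:
            self.states[component] = state
            if state != "ok":
                self.alert(component, state)

        def alert(self, component: str, state: str) -> None:
            # In the real interface this would also trigger the acoustic alert.
            print(f"ALERT: {component} reports '{state}'")
            print(self.guides.get(component, "No recovery guide available."))

    dashboard = SupervisoryDashboard(
        guides={"force_sensor": "Check the sensor cabling, then repeat the calibration step."})
    dashboard.report("robot_arm_left", "ok")
    dashboard.report("force_sensor", "communication lost")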

Training Curriculum

ASPETE, UNIVR, HSR/FCSR

In the SAFROS project, ASPETE was mainly involved in the drafting of a robotic surgery training curriculum.

Our first contribution was an analysis, from an educational point of view, of whether the interfaces under development were suitable for the project's objectives, based on an educational framework supported by three different theoretical learning models: Rasmussen's Skills-Rules-Knowledge-based behaviour framework, constructivism/constructionism, and adult learning theories.

At a later stage, we worked in cooperation with UNIVR and San Raffaele Hospital to design the SAFROS training curriculum, presented as a project deliverable: a problem-based curriculum for robotic surgical training (March 2012).

Lastly, we contributed to the validation of this curriculum by conducting the subjective evaluation of the training course and analysing the evaluative data from the questionnaires and diaries filled in by trainees.

In addition, ASPETE was involved in the validation of the robot-assisted procedures, developing the analysis method around the concepts of Product, Process and Organization, which resulted in the final Report on Safety Measures.

Medical Imaging Visualization

HOLO

Holografika's involvement in the SAFROS project comprised the following activities:

  • Replacing the display equipment currently used in the education and execution of robot-assisted surgery with a light field display that provides multi-user, glasses-free 3D viewing.

  • Collecting user requirements from medical personnel, testing our existing solutions against these requirements and determining how we can enhance our displays as training / diagnostic visualization terminals.

  • Replacing the previous integration point of our displays. The HoloVizio OpenGL wrapper was a lightweight, non-error-tolerant OpenGL replacement library; we determined that healthcare use would require a robust software stack, on top of which more reliable and better-performing applications could be built for the HoloVizio displays.

  • Implementing various medical visualization methods on our current line of HoloVizio displays.

  • Enhancing our existing HV80WLT monitor-type display to make it a better fit for the healthcare market.

Through the first requirement analysis conducted by HSR within the project, we established that user comfort and image fidelity are key factors determining overall system safety in robot-assisted surgery. Glasses-free 3D displays not only provide higher depth fidelity than stereoscopic displays, but also give users a more natural feel and less discomfort than competing solutions, resulting in more convenient long-term use.

To verify our statements about the display, we designed and conducted a series of perception tests with the help of EPFL. Combining the results with our previous tests regarding eye strain and depth, we determined that our solutions are superior to the stereoscopic solutions currently used in the industry.

We also implemented a native rendering library for our displays called ClusteredRenderer. The library has a plug-in architecture and it currently provides clustered rendering capabilities for OpenCL and OpenGL. Fast marshalling and unmarshalling of events ensures easy integration and state consistency between the application and the cluster. Event persistence and playback can also enable quick recovery from system failures (e.g. application hang, memory error, etc.). Low overhead networking is available on both TCP/IP and RDMA networks. Loaders are available for various 3D file types and image files.

In cooperation with UNIVR, we integrated this rendering library into Chiron, the surgical simulator of the project. By adapting the lighting and rendering model used by the simulator to the viewing model of the light field display, we could provide superior rendering quality compared to the OpenGL-wrapper-based solution.

We have added real-time image processing and visualization for ultrasound imaging used in the project.

Using the lessons learned during the project and the feedback from user testing, we designed and implemented an enhanced version of our HV80WLT monitor-type display. We improved the angular resolution, which gives a greater depth range and better-focused images, enhanced the geometrical and colour calibration, and reduced noise.

Finally, to gain acceptance and make potential users, decision makers and the general public familiar with this technology, we have demonstrated the project’s solutions, findings and integrated system at various dissemination events.