Feasibility of A Multimodal Image-Assisted 3D Endoscopic Surgical Training System Using VR-HMD for Robotic-Assisted Endoscopic Radical Prostatectomy
Takuya Sueyoshi*, Maki Sugimoto
Innovation Lab, Teikyo University, Okinaga Research Institute, Chiyodaku, Tokyo, Japan
*Corresponding author: Takuya Sueyoshi, Innovation Lab, Teikyo University, Okinaga Research Institute, Chiyodaku, Tokyo, Japan
Received Date: 25 April 2023
Accepted Date: 28 April 2023
Published Date: 01 May, 2023
Citation: Sueyoshi T, Sugimoto M (2023) Feasibility of A Multimodal Image-Assisted 3D Endoscopic Surgical Training System Using VR-HMD for Robotic-Assisted Endoscopic Radical Prostatectomy. J Surg 8: 1799 DOI: https://doi.org/10.29011/2575-9760.001799
Abstract
Purpose: This study aimed to develop and evaluate the feasibility of a Virtual Reality Head-Mounted Display (VR-HMD) training system for Robot-Assisted Endoscopic Radical Prostatectomy (RARP) by combining 3D endoscopic images with 3D reconstructed CT images to improve surgeons’ spatial awareness and robotic surgical skills.
Methods: A VR-HMD training system was developed using the game engine Unity, incorporating 3D endoscopic images from past RARP surgeries and 3D reconstructed CT images of the patient’s anatomy. The system provided an immersive learning environment, allowing surgeons to practice robotic arm manipulation, anatomical understanding, RARP surgical procedures, and individual surgical maneuvers.
Results: The VR-HMD training system enabled surgeons to gain a better understanding of the three-dimensional relationship between organs and the robotic arm’s movement. The system facilitated structural and spatial recognition of pelvic organs, improved understanding of RARP surgical procedures, and provided a realistic sensation of being at the tip of the endoscopic scope. The training system also allowed surgeons to freely manipulate the images within the VR space using hand gestures, contributing to an interactive and immersive learning experience.
Conclusion: The developed VR-HMD training system demonstrated the potential to improve surgeons’ spatial awareness and overall surgical skills in RARP. The positive results obtained from the training could contribute to improving spatial recognition and avoiding misrecognition in other robotic and endoscopic surgeries.
Keywords: Endoscopic Surgery; Prostatectomy; Robot; Spatial Awareness; Surgical Training; Virtual Reality
Introduction
The usefulness of Virtual Reality (VR) and Augmented Reality (AR) has been reported in clinical practice and surgery. VR devices offer an immersive experience in a virtual environment, and their application in surgery is growing [1,2]. Positive outcomes have been reported in surgery, training, and medical education, particularly in urologic surgery [3,4]. In Robot-Assisted Endoscopic Radical Prostatectomy (RARP), understanding the narrow pelvic anatomy in three-dimensional (3D) space is crucial [5]. Virtual simulators are available for training [6], but they lack personalized patient data and the 3D effect of the surgeon’s console. To address these limitations, we developed a VR-HMD training system that combines 3D endoscopic images with 3D reconstructions of CT images to improve robotic surgical techniques and spatial awareness.
Methods
From the images of past RARP surgeries acquired by the da Vinci 3D endoscope, the portion covering the dissection around the prostate was imported to a PC as stereoscopic images and converted into a VR application using the game engine Unity, one of the most widely used engines in game development (Figure 1). To convey the three-dimensional relationships of the organs around the prostate that are not visible in the endoscopic images, the patient’s previously acquired CT images were reconstructed in 3D, and the prostate, bladder, arteries, rectum, urethra, ureters, and bones were segmented and exported as polygons. The polygon data of these organs were built into the VR application with the Unity engine, with each organ model color-coded (a minimal color-coding sketch follows the list below). The previously recorded 3D endoscopic images and the organ polygon data were imported into Unity and presented three-dimensionally on the VR-HMD in VR space (Figure 2); a sketch of the per-eye stereoscopic display follows the figure captions. The images and polygons presented on the VR-HMD were moved using hand gestures, with models of both hands superimposed in the VR space during manipulation. With these features, the 3D endoscopic images and 3D models were presented superimposed while following the RARP procedure, and training in dissection maneuvers was conducted by superimposing one’s hands on the forceps movements in the RARP surgical images. A surgeon evaluated the educational usefulness of this system on the four points below:
- Improve anatomical understanding: practicing in a virtual environment improves knowledge of the complex pelvic anatomy, allowing the user to more accurately identify and navigate the various structures during the actual surgery.
- Improve robotic arm manipulation skills: the simulator provides a safe and controlled environment for surgeons to practice robotic arm manipulation, allowing them to hone their skills and become proficient in controlling instruments during the actual surgery.
- Understand RARP surgical procedures.
- Understand how individual surgical maneuvers must be adapted depending on the surgical site.
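As a concrete illustration of the color-coded organ models, the following is a minimal Unity (C#) sketch rather than the authors’ actual implementation: the organ names, colors, transparency value, and the assumption that the segmented meshes are children of a single parent object are all illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: tints each segmented organ mesh with a distinct color so
// the organs can be told apart when superimposed on the endoscopic image.
// The names below are illustrative and must match the mesh names exported
// from the CT segmentation; transparency requires a transparent shader.
public class OrganColorCoder : MonoBehaviour
{
    static readonly Dictionary<string, Color> organColors =
        new Dictionary<string, Color>
    {
        { "Prostate", new Color(0.9f, 0.6f, 0.2f) },
        { "Bladder",  new Color(1.0f, 0.9f, 0.3f) },
        { "Arteries", Color.red },
        { "Rectum",   new Color(0.6f, 0.3f, 0.2f) },
        { "Urethra",  Color.cyan },
        { "Ureters",  Color.green },
        { "Bones",    Color.white },
    };

    void Start()
    {
        // The segmented organ meshes are assumed to be children of this object.
        foreach (var r in GetComponentsInChildren<Renderer>())
        {
            if (organColors.TryGetValue(r.gameObject.name, out var color))
            {
                color.a = 0.6f; // semi-transparent so the endoscopic image shows through
                r.material.color = color;
            }
        }
    }
}
```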
Figure 1: Side-by-side display of da Vinci’s recorded stereoscopic endoscopic images in the HMD.
Figure 2: Endoscopic images superimposed with organ polygon data.
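The per-eye presentation of the recorded side-by-side footage could be implemented roughly as follows. This is a minimal Unity (C#) sketch under stated assumptions, not the authors’ implementation: it assumes the recording is a side-by-side stereo video, that two display quads sit on user-defined layers named "LeftEyeOnly" and "RightEyeOnly", and that two cameras are each dedicated to one eye.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch: plays a side-by-side stereo endoscope recording so that
// each half of the frame is visible to only one eye of the HMD. The layer
// names and the 3840x1080 frame size are illustrative assumptions.
public class StereoEndoscopePlayer : MonoBehaviour
{
    public VideoPlayer videoPlayer;   // plays the recorded da Vinci footage
    public Renderer leftQuad;         // quad on the "LeftEyeOnly" layer
    public Renderer rightQuad;        // quad on the "RightEyeOnly" layer
    public Camera leftEyeCamera;      // renders to the left eye only
    public Camera rightEyeCamera;     // renders to the right eye only

    void Start()
    {
        // Route the decoded video into a RenderTexture shared by both quads.
        var rt = new RenderTexture(3840, 1080, 0);
        videoPlayer.renderMode = VideoRenderMode.RenderTexture;
        videoPlayer.targetTexture = rt;

        // Each quad samples one half of the side-by-side frame via UV tiling.
        leftQuad.material.mainTexture = rt;
        leftQuad.material.mainTextureScale = new Vector2(0.5f, 1f);
        leftQuad.material.mainTextureOffset = new Vector2(0f, 0f);    // left half
        rightQuad.material.mainTexture = rt;
        rightQuad.material.mainTextureScale = new Vector2(0.5f, 1f);
        rightQuad.material.mainTextureOffset = new Vector2(0.5f, 0f); // right half

        // Dedicate one camera per eye and hide the other eye's quad from it,
        // while the shared organ polygons on default layers stay visible to both.
        leftEyeCamera.stereoTargetEye = StereoTargetEyeMask.Left;
        leftEyeCamera.cullingMask &= ~LayerMask.GetMask("RightEyeOnly");
        rightEyeCamera.stereoTargetEye = StereoTargetEyeMask.Right;
        rightEyeCamera.cullingMask &= ~LayerMask.GetMask("LeftEyeOnly");

        videoPlayer.Play();
    }
}
```

Because each eye’s camera culls the other eye’s quad, the left and right halves of the recording reach only their corresponding eyes, reproducing the stereoscopic depth of the surgeon’s console.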
Results
By having the wearer’s hand follow the robotic arm’s movement within the VR-HMD, the surgeon gained a better understanding of the 3D relationship between each organ and the position and movement of the robotic arm. Specifically, spatial recognition improved, allowing the surgeon to quickly grasp the gaps between the prostate, rectum, and bladder, the extent of tumor resection, and how to prevent arterial injury, a bleeding risk. Multimodal image assistance allowed the 3D endoscopic image and the patient’s polygonised 3D model reconstructed from the CT images to be displayed simultaneously within the VR environment to guide the dissection and procedure process. Specifically, structural and spatial recognition of the pelvic organs and understanding of the RARP surgical procedure were facilitated (Figure 3). In addition, the stereoscopic endoscopic images and polygonal data could be freely enlarged and repositioned within the VR space using hand gestures (see the sketch below), providing a realistic sensation of being at the tip of the endoscopic scope. Furthermore, the background within the VR-HMD could be switched between virtual space and pass-through (Figure 4; a minimal toggle sketch follows the figure captions). In the pass-through state, stereoscopic images and 3D polygons were displayed in front of the eyes in real space, providing a realistic training experience. In addition, hand tracking using the infrared sensors of the VR-HMD allowed the user to grasp the 3D model with his or her own hands, providing real-time interactivity.
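The grab, move, and enlarge interactions described above could be realized with hand-tracked pinch gestures. The following minimal Unity (C#) sketch assumes the Meta Oculus Integration SDK’s OVRHand component; the paper does not name its hand-tracking SDK, so this is an assumption, and the choice of fingers and of the right hand for dragging is illustrative.

```csharp
using UnityEngine;

// Minimal sketch of the grab-and-scale gesture interaction, assuming the
// Meta Oculus Integration SDK's OVRHand hand-tracking component. A single
// right-hand pinch drags the target; pinching with both hands scales it
// with the change in distance between the hands.
public class PinchManipulator : MonoBehaviour
{
    public OVRHand leftHand;
    public OVRHand rightHand;
    public Transform target;   // endoscopic image quad or organ model

    Vector3 grabOffset;
    float baseHandDistance = 1f, baseScale = 1f;
    bool wasDragging, wasScaling;

    void Update()
    {
        bool leftPinch  = leftHand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        bool rightPinch = rightHand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        if (leftPinch && rightPinch)
        {
            // Two-handed pinch: scale proportionally to hand separation.
            float d = Vector3.Distance(leftHand.transform.position,
                                       rightHand.transform.position);
            if (!wasScaling) { baseHandDistance = d; baseScale = target.localScale.x; }
            target.localScale = Vector3.one * (baseScale * d / baseHandDistance);
            wasScaling = true;
            wasDragging = false;
        }
        else
        {
            wasScaling = false;
            if (rightPinch)
            {
                // One-handed pinch: keep the target at a fixed offset from the hand.
                if (!wasDragging) grabOffset = target.position - rightHand.transform.position;
                target.position = rightHand.transform.position + grabOffset;
                wasDragging = true;
            }
            else wasDragging = false;
        }
    }
}
```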
Figure 3: RARP training with multimodal image support.
Figure 4: Meta Quest Pro’s Color Pass-Through Function incorporates real-time color correction, layer-based editing, and a transparent effect. 4a: Meta Quest Pro’s stereo cameras for pass-through. 4b: Augmenting the stereo-3D immersion of real-world experiences.
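Switching the background between virtual space and the Quest Pro’s color pass-through could be done by toggling a passthrough layer at runtime. The sketch below assumes the Meta Oculus Integration SDK’s OVRPassthroughLayer component and uses a left-hand middle-finger pinch as the trigger; both the component choice and the trigger are assumptions, not the authors’ documented implementation.

```csharp
using UnityEngine;

// Minimal sketch: toggles between the virtual background and color
// pass-through, assuming the Oculus Integration SDK (OVRPassthroughLayer,
// OVRHand) with passthrough enabled on the OVRManager at startup.
public class BackgroundToggle : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer;
    public Camera centerCamera;
    public OVRHand leftHand;

    bool passthroughOn;
    bool wasPinching;

    void Update()
    {
        bool pinch = leftHand.GetFingerIsPinching(OVRHand.HandFinger.Middle);
        if (pinch && !wasPinching)
        {
            passthroughOn = !passthroughOn;
            passthroughLayer.enabled = passthroughOn;
            // Transparent black lets the camera feed show behind the
            // stereoscopic images and organ polygons; a solid color
            // restores the fully virtual background.
            centerCamera.clearFlags = CameraClearFlags.SolidColor;
            centerCamera.backgroundColor = passthroughOn
                ? new Color(0f, 0f, 0f, 0f)
                : new Color(0.1f, 0.1f, 0.15f, 1f);
        }
        wasPinching = pinch;
    }
}
```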
Discussion
Surgeons have difficulty receiving accurate image assistance during surgery because displays and input devices such as a mouse cannot be set up in the sterile environment of the operating room. With this system, however, 3D endoscopic images are presented stereoscopically on a VR-HMD, improving immersion, presence, and realism and providing a visual effect equivalent to that of the surgeon’s console. In other words, the system addresses not only learning but also this physical limitation, improving the visual quality of image assistance during surgery and potentially reducing the surgeon’s physical fatigue. The virtual simulator allows the surgeon to practice various surgical procedures repeatedly, and because there is no physical burden, concentration can be maintained even during long training sessions. Looking to the future, repeated learning with this system, contrasting CT and endoscopic images of actual patients, will help build confidence in future surgeries. Becoming familiar with surgical procedures and scenarios in a virtual environment could help surgeons gain confidence in their abilities before performing actual surgery, leading to better outcomes for patients.
Conclusion
The system provided an immersive learning environment by superimposing actual RARP 3D endoscopic images and a 3D model of the patient, and the results suggest that it could improve surgeons’ spatial awareness and overall surgical skills. Furthermore, the positive results obtained from the training in RARP could contribute to improving spatial recognition and avoiding misrecognition in other robotic and endoscopic surgeries.
References
- Sugimoto M (2022) Cloud XR Extended Reality: Virtual Reality, Augmented Reality, Mixed Reality and 5G Mobile Communication System for Medical Image-Guided Holographic Surgery and Telemedicine 2022: 381-387.
- Sugimoto M (2020) Extended Reality (XR:VR/AR/MR), 3D Printing, Holography, A.I., Radiomics, and Online VR Tele-Medicine for Precision Surgery 2020: 65-70.
- Yoshida S, Taniguchi N, Moriyama S, Matsuoka Y, Saito K, et al. (2020) Application of virtual reality in patient explanation of magnetic resonance imaging-ultrasound fusion prostate biopsy. Int J Urol 27: 471-472.
- Yoshida S, Sugimoto M, Fukuda S, Taniguchi N, Saito K, et al. (2019) Mixed reality computed tomography-based surgical planning for partial nephrectomy using a head-mounted holographic computer. Int J Urol 26: 681-682.
- Makary J, Van-Diepen DC, Arianayagam R, McClintock G, Fallot J, Leslie S, et al. (2021) The evolution of image guidance in robotic-assisted laparoscopic prostatectomy (RALP): a glimpse into the future. J Robot Surg 16: 765-774.
- Wang F, Zhang C, Guo F, Sheng X, Ji J, et al. (2021) The application of virtual reality training for anastomosis during robot-assisted radical prostatectomy. Asian J Urol 8: 204-208.
© by the Authors & Gavin Publishers. This is an Open Access article published under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).