Utilization of facial fat grafting augmented reality guidance system in facial soft tissue defect reconstruction
Head & Face Medicine volume 20, Article number: 51 (2024)
Abstract
Background
Successfully restoring facial contours continues to pose a significant challenge for surgeons. This study aims to utilize head-mounted display-based augmented reality (AR) navigation technology for facial soft tissue defect reconstruction and to evaluate its accuracy and effectiveness, exploring its feasibility in craniofacial surgery.
Methods
HoloLens 2 was utilized to construct the AR guidance system for facial fat grafting. Twenty artificial cases with facial soft tissue defects were randomly assigned to Group A and Group B, undergoing filling surgeries with the AR guidance system and conventional methods, respectively. All postoperative three-dimensional models were superimposed onto virtual plans to evaluate the accuracy of the system versus conventional filling methods. Additionally, procedure completion time was recorded to assess system efficiency relative to conventional methods.
Results
The error in facial soft tissue defect reconstruction assisted by the system in Group A was 2.09 ± 0.56 mm, significantly lower than the 3.23 ± 1.15 mm observed with conventional methods in Group B (p < 0.05). Additionally, the time required for facial defect filling reconstruction using the system in Group A was 25.45 ± 2.58 min, markedly shorter than the 37.05 ± 3.34 min needed with conventional methods in Group B (p < 0.05).
Conclusion
The visual navigation offered by the facial fat grafting AR guidance system presents clear advantages in facial soft tissue defect reconstruction, enabling enhanced precision and efficiency in these filling procedures.
Background
Craniofacial deformities represent a prevalent clinical condition, encompassing disorders such as hemifacial microsomia, congenital facial atrophy, and tissue defects resulting from trauma or tumors [1, 2]. Against the backdrop of an increasingly appearance-focused society, individuals affected by craniofacial deformities often encounter challenges to their self-esteem and social interactions, leading to significant psychological burdens [3, 4]. The therapeutic objective for managing craniofacial deformities is to achieve symmetrical and harmonious facial contours alongside normal occlusion. Maxillofacial surgical interventions, including orthognathic surgery, bone grafting, and prosthetic reconstruction, in conjunction with orthodontic treatment, can effectively address skeletal and dental occlusal issues [5, 6]. However, challenges persist with regard to soft tissue concerns, necessitating concurrent or staged interventions to comprehensively restore facial contour symmetry and attain balanced and harmonious facial aesthetics.
Autologous fat grafting represents a widely employed reconstructive technique in the management of craniofacial deformities, aimed at restoring facial contours and augmenting soft tissue volume [7]. This procedure entails harvesting fat from the patient’s adipose tissue and subsequently injecting it into targeted areas following meticulous processing protocols [8]. However, it is important to acknowledge the significant variability in facial deformities among patients, with each case presenting unique challenges. The efficacy of facial fat grafting hinges upon the surgeon’s expertise and visual assessment, which are influenced by a multitude of factors including technical proficiency and potential visual fatigue. Consequently, achieving precise control over the quantity and placement of fat grafts poses challenges, potentially resulting in under- or over-correction of volume in the recipient area, or an inability to ensure uniform distribution of fat grafts, thereby impacting surgical outcomes [9]. Despite the notable advancements in digital medicine facilitating the use of three-dimensional imaging software to simulate various morphological changes in facial soft tissue and customize personalized virtual filling plans for patients, surgeons still heavily rely on their experience and memory to execute these plans during surgery. This reliance often leads to discrepancies between the surgical field and virtual plans, as well as inadequate precision. Therefore, the successful restoration of facial contour symmetry remains a formidable challenge faced by surgeons.
Visualization is crucial for precision surgery. Augmented reality (AR) technology seamlessly integrates virtual information generated by computers with the real environment and presents it uniformly on display devices [10, 11]. When employing AR technology, individuals can perceive not only the real world but also a virtual three-dimensional world derived from objects in the real world, thereby enhancing their perception and understanding [12]. In recent years, AR technology has garnered significant acclaim as an innovative tool, particularly beneficial in the medical field for certain diagnostic and therapeutic procedures that necessitate guidance from medical imaging [13,14,15]. For instance, Cabrilo et al. [16] utilized AR technology to aid in cerebral bypass surgery, demonstrating that the fusion of virtual anatomical information with real surgical scene images through registration effectively facilitates the smooth implementation of minimally invasive surgery. Liu et al. [17, 18] developed an AR navigation system to assist in the reconstruction of craniofacial bone deformities. By integrating virtual surgical plans with patients’ physical anatomy and tracking surgical instruments in real-time, the system enables real-time visualization of three-dimensional virtual images, the surgical field, and the status of surgical instruments, effectively restoring the anatomical morphology of facial bone tissues.
Currently, there are no reports on the utilization of AR technology to guide autologous fat grafting, yet this prospect undoubtedly holds great promise in the medical field. As the demand for elevated standards in medical procedures continues to escalate, there is a growing aspiration for safer and more precise surgical interventions. The integration of AR technology into autologous fat grafting procedures has the potential to revolutionize this field. Consequently, this study proposes a framework for establishing an AR guidance system for facial fat grafting and substantiates the feasibility of this system through experimental validation.
Materials and methods
Research design
Twenty artificial cases of unilateral irregular facial soft tissue defect models (Guangyunda, Shanghai, China) were randomly divided into two groups, designated as Group A and Group B. The artificial models of facial soft tissue defects are composed of materials resembling human skin, subcutaneous tissue, and bone, with potential spaces between each layer. Additionally, the defect was created by reducing the volume of soft tissue on the affected side compared to the healthy side. In Group A, an AR navigation system was established to guide surgeons in the reconstruction of soft tissue defects by filling them, while Group B underwent conventional methods for filling and reconstruction.
Intervention strategies
Group A: Development of an augmented reality-guided system for facial fat grafting to assist in defect filling and reconstruction
HoloLens 2 (Microsoft, Redmond, Washington, United States) was used to develop the AR-guided system for facial fat grafting, as depicted in Fig. 1. The working principle is as follows: the HoloLens 2 camera captures real-world images that are input to the computer; through marker-based registration and tracking, the computer-generated three-dimensional virtual surgical plan is merged with objects in the real world and presented in the surgeon’s field of view, enhancing spatial perception during the surgical process. The following steps outline the construction of the AR guidance system for facial fat grafting.
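Conceptually, this working principle is a per-frame loop of capture, registration, and rendering. The following minimal Python sketch illustrates the loop only; every function here is a hypothetical stand-in, since the actual system runs on HoloLens 2 and Unity3D rather than Python.

```python
import numpy as np

# Hypothetical stand-ins for the headset's camera, tracking, and display
# APIs; the real system uses HoloLens 2 and Unity3D, not these names.
def capture_frame():
    """Grab the current camera image of the real scene."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def estimate_pose_from_markers(frame):
    """Marker-based registration: a 4x4 camera-to-model transform, or
    None when no QR marker is visible in the frame."""
    return None

def render_overlay(virtual_plan, pose):
    """Draw the virtual surgical plan into the surgeon's view at pose."""
    pass

def ar_guidance_loop(virtual_plan, n_frames=3):
    """One conceptual pass of the capture -> register -> render pipeline."""
    for _ in range(n_frames):
        frame = capture_frame()                   # 1. capture the real scene
        pose = estimate_pose_from_markers(frame)  # 2. tracking registration
        if pose is not None:
            render_overlay(virtual_plan, pose)    # 3. virtual-real fusion

ar_guidance_loop(virtual_plan=None)
```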
Designing a tracking registration device utilizing artificial markers
The methodology of registering virtual images by accurately and swiftly determining camera poses and calculating model view matrices through the strategic placement of specific artificial markers within authentic environments is denoted as marker-based tracking registration technology [18]. Quick Response (QR) codes often serve as the information carriers for marker-based tracking and registration and can be rapidly read and identified by HoloLens 2. In this study, QR codes were generated using https://www.the-qrcode-generator.com, as illustrated in Fig. 2A.
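HoloLens 2 reads QR codes through its built-in spatial tracking; purely as a desktop illustration (an assumed analogue, not the system’s actual code path), OpenCV’s QR detector shows what marker detection yields: the decoded payload plus the marker’s image-space corners, the raw correspondences from which a camera pose can be estimated.

```python
import cv2

# Desktop analogue of marker detection (illustrative only; the actual
# system uses HoloLens 2's built-in QR code tracking, not OpenCV).
img = cv2.imread("scene_with_qr.png")  # hypothetical input image
detector = cv2.QRCodeDetector()
data, corners, _ = detector.detectAndDecode(img)
if corners is not None:
    # corners: the QR code's four corner positions in pixels. Combined
    # with the marker's known physical size, these 2D-3D correspondences
    # suffice to solve for the camera pose (e.g., via cv2.solvePnP).
    print("decoded payload:", data)
    print("corner pixels:\n", corners.reshape(-1, 2))
```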
In AR, virtual heads obtained from CT scans have their respective coordinate systems, distinct from those in the real environment. The prerequisite for merging the two is to unify them into the same coordinate system. This necessitates an object that can be recognized in the CT scan of the head model and also identified by HoloLens in the real environment, serving as an intermediate “link.” Thus, in this step, we designed and fabricated a QR code-mountable bracket fixed to the head model, with the following features (the sketch after this list illustrates how the four slots define a coordinate frame):
(1) The bracket is equipped with four slots for QR codes;
(2) The vertices of the QR code slots are positioned equidistantly along the origin and the x, y, z axes of the coordinate system, as depicted in Fig. 2B;
(3) The resin bracket, produced using 3D printing technology, is installed onto the simulated head model (Fig. 2C).
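The rationale for this slot layout is that four fiducials, one at the origin and one along each arm, fully determine a rigid coordinate frame. A minimal NumPy sketch, assuming a 40 mm arm length (the actual bracket dimensions are not specified here), assembles such a frame from the four measured slot positions:

```python
import numpy as np

def frame_from_fiducials(o, px, py, pz):
    """Build a 4x4 rigid transform from the bracket's four fiducial
    positions: the origin slot o and the slots along the x, y, z arms
    (3-vectors in the observer's coordinate space). Axes are
    re-orthonormalized to absorb small measurement noise."""
    x = px - o
    x = x / np.linalg.norm(x)
    y = py - o
    y = y - x * np.dot(x, y)       # remove any component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)             # right-handed third axis
    assert np.dot(z, pz - o) > 0   # the z-arm slot confirms handedness
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, o
    return T  # maps bracket-local coordinates into observer coordinates

# Assumed 40 mm arm length; positions as measured in one coordinate space.
o = np.array([0.0, 0.0, 0.0])
px, py, pz = (np.array([40.0, 0.0, 0.0]), np.array([0.0, 40.0, 0.0]),
              np.array([0.0, 0.0, 40.0]))
print(frame_from_fiducials(o, px, py, pz))
```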
Three-dimensional digital modeling of the simulated head model using digital software
After obtaining Digital Imaging and Communications in Medicine (DICOM) data of the simulated head model, with the bracket attached, using computed tomography (GE Healthcare, Fairfield), the data were imported into the digital software RadiAnt DICOM Viewer (Medixant, Poland) to obtain a full-resolution 3D model in STL format. Subsequently, the generated STL file was processed using the digital software Blender (Blender Foundation, Netherlands) to create a three-dimensional virtual model. Within the three-dimensional virtual model, the origin (o) and axes (x, y, z) of the bracket were established, constructing the Coord-A coordinate system (Fig. 3).
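The DICOM-to-STL export in this study was performed with RadiAnt; the following open-source sketch (pydicom, scikit-image, and trimesh, with an assumed directory and iso-value) illustrates the equivalent reconstruction step:

```python
import glob
import numpy as np
import pydicom                 # pip install pydicom
import trimesh                 # pip install trimesh
from skimage import measure    # pip install scikit-image

# Stack the CT series into a volume, ordered by slice position, then
# extract an iso-surface with marching cubes. The directory and iso-value
# are assumptions; voxel spacing is omitted for brevity but must be passed
# via marching_cubes(..., spacing=...) for metrically correct meshes.
slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

verts, faces, _, _ = measure.marching_cubes(volume, level=300.0)
trimesh.Trimesh(vertices=verts, faces=faces).export("head_model.stl")
```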
Achieving “virtual-real fusion” in augmented reality display through the artificial marker tracking registration method
The three-dimensional virtual model was imported into Unity3D software (Unity Technologies, San Francisco, United States). The operator wears HoloLens 2, whose camera captures and identifies the QR codes, obtaining the coordinates (Coord-B) of the QR codes in the HoloLens coordinate space (Fig. 4A). Coord-A and Coord-B represent the three-dimensional coordinates of the simulated head model in different spaces. By tracking the simulated head model in the actual scene in real time and unifying Coord-A and Coord-B into the same coordinate system through the transformation matrix M(trs), the operator can seamlessly observe the three-dimensional virtual model superimposed on the simulated head model in the real world through HoloLens 2, achieving “virtual-real fusion” (Fig. 4B). The term “virtual-real fusion” refers to the AR system analyzing data to obtain scene position information and accurately overlaying computer-generated three-dimensional virtual images onto specific locations in the real scene, thereby achieving a perfect fusion of virtual objects with the real world [19].
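The transformation M(trs) can be estimated from the corresponding QR-code positions observed in Coord-A and Coord-B. A minimal sketch, assuming at least three non-collinear correspondences and no scale difference between the two spaces, is the standard SVD-based (Kabsch) least-squares rigid alignment:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD, no scaling) mapping
    points A (n x 3, Coord-A) onto corresponding points B (n x 3,
    Coord-B). Returns the 4x4 matrix playing the role of M(trs)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = cB - R @ cA
    return M

# Hypothetical corresponding QR-code positions in the two spaces (mm);
# here Coord-B is Coord-A shifted by a pure translation as a sanity check.
A = np.array([[0.0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]])
print(np.round(rigid_transform(A, A + np.array([5.0, -2.0, 10.0])), 3))
```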
Validation of tracking and registration accuracy in facial fat grafting augmented reality guidance system
Five points on the cheeks, nose, forehead, and chin of the head model were selected for fixing QR codes, as shown in Fig. 5. After completing steps 1–3 above, the origin of the bracket was designated as the origin (0, 0, 0) of the head model coordinate system, and the coordinates of each QR code in the Coord-A coordinate system (x1, y1, z1) were recorded. HoloLens 2 then identified each QR code in the real environment, with the origin of the bracket again defined as the origin (0, 0, 0) of the head model coordinate system, and the coordinates of each QR code in the Coord-B coordinate system (x1', y1', z1') were recorded. Comparing (x1, y1, z1) with (x1', y1', z1') verified whether the virtual head model and the physical head model were fully integrated in the real world. The procedure advanced to the next step only when the registration error was less than 1 mm.
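The acceptance test then reduces to per-marker Euclidean distances between corresponding coordinates, checked against the 1 mm threshold. A short sketch with illustrative (not measured) coordinates:

```python
import numpy as np

# Illustrative Coord-A positions (mm) for the five QR codes; Coord-B is a
# slightly perturbed copy standing in for the HoloLens 2 measurements.
coord_A = np.array([[62.1, 10.4, 33.0],    # right cheek
                    [0.5, 48.2, 21.7],     # nose
                    [3.9, 80.6, 55.1],     # forehead
                    [-1.2, 12.3, -40.8],   # chin
                    [-60.7, 11.0, 31.9]])  # left cheek
coord_B = coord_A + np.array([0.3, -0.2, 0.25])  # simulated tracking offset

errors = np.linalg.norm(coord_A - coord_B, axis=1)
print("per-marker error (mm):", np.round(errors, 2))
assert errors.max() < 1.0, "re-register: 1 mm accuracy threshold not met"
```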
Development of virtual surgical plans for facial soft tissue defect reconstruction using augmented reality
The sagittal plane of the head model was constructed using anatomical landmarks such as the inner canthi and the nasal tip. A sphere was established in the healthy facial region on one side; the region of intersection between the sphere and the face represents the area to be mirrored, as illustrated in Fig. 6A. The surgeon wears HoloLens 2 and activates the program menu to select a set of vertices for mirroring by adjusting the size and position of the sphere (Fig. 6B). With reference to the aforementioned sagittal plane, the selected set of vertices is mirrored to the contralateral defect area using a cloning algorithm to achieve a symmetrical facial contour, visually displaying the differences between the three-dimensional virtual filling scheme and the actual facial structure (Fig. 6C). Notably, based on the operator’s clinical experience, the volume can also be increased beyond the mirrored baseline to compensate for a certain degree of absorption after fat filling.
Fig. 6 Virtual filling plan for facial fat grafting developed using augmented reality. A: Intersection region between the sphere and the facial surface, indicating the area to be mirrored. B: Adjustment of sphere size and position to select the vertex set for mirroring. C: Visualization of the discrepancy between the three-dimensional virtual filling scheme and the actual facial structure
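Geometrically, this planning step is a reflection of the operator-selected vertex set across the sagittal plane. A minimal sketch follows, with assumed landmark coordinates and selection sphere; the study’s actual implementation runs in Unity3D on HoloLens 2:

```python
import numpy as np

def sagittal_plane(canthus_r, canthus_l, nasal_tip):
    """Plane through the inter-canthal midpoint with its normal along the
    inter-canthal direction; the nasal tip should lie near this plane and
    serves as a midline sanity check."""
    q = (canthus_r + canthus_l) / 2.0
    n = canthus_l - canthus_r
    n = n / np.linalg.norm(n)
    assert abs((nasal_tip - q) @ n) < 5.0  # tip within ~5 mm of midline
    return q, n

def mirror_points(points, q, n):
    """Reflect an (m x 3) vertex set across the plane (q, n):
    p' = p - 2 * ((p - q) . n) * n."""
    d = (points - q) @ n
    return points - 2.0 * np.outer(d, n)

def select_in_sphere(points, center, radius):
    """Vertices inside the selection sphere positioned by the operator."""
    return points[np.linalg.norm(points - center, axis=1) <= radius]

# Illustrative values (mm); real vertices come from the head model mesh.
verts = np.random.rand(1000, 3) * 100.0
q, n = sagittal_plane(np.array([-15.0, 60, 40]), np.array([15.0, 60, 40]),
                      np.array([0.0, 45, 70]))
healthy = select_in_sphere(verts, center=np.array([40.0, 30, 30]), radius=25.0)
target = mirror_points(healthy, q, n)  # mirrored fill-target vertex set
```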
Facial fat grafting augmented reality guidance system-assisted filling
Under real-time guidance from the visual differences between the three-dimensional virtual filling plan presented by the AR system and the actual facial features of the patient, the operator performed the filling procedure at the facial soft tissue defect site, as depicted in Fig. 7A.
Group B: Facial soft tissue defect filling with conventional methods
DICOM data of the facial defect model were obtained using CT, and a preoperative three-dimensional virtual filling plan was generated using digital software. The operator referred to the virtual plan and performed the filling procedure on the defect, as shown in Fig. 7B.
Assessment of filling accuracy and time
CT scans of the filled head models were conducted, and the software Unity3D (Unity Technologies, San Francisco, United States) was employed to register and overlay the virtual plan onto the postoperative models. Sampling was performed in the facial filling area, as illustrated in Fig. 8, to obtain two coordinates at each sampled location: one representing the point in the virtual surgical plan and the other the corresponding point in the postoperative model. The accuracy of the filling was analyzed by taking several hundred such samples and calculating the distance between each pair of coordinates.
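In effect, the reported accuracy is the mean ± SD of point-to-point distances over all sampled pairs. A sketch with placeholder arrays standing in for the sampled coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder arrays standing in for the several hundred sampled coordinate
# pairs: one point on the virtual plan, one at the same location on the
# registered postoperative model.
plan_pts = rng.random((300, 3)) * 100.0                       # mm
postop_pts = plan_pts + rng.normal(scale=1.5, size=(300, 3))  # simulated error

d = np.linalg.norm(plan_pts - postop_pts, axis=1)
print(f"filling error: {d.mean():.2f} ± {d.std(ddof=1):.2f} mm "
      f"over {d.size} samples")
```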
Additionally, the filling operation duration for both groups was recorded.
Statistical analysis
Independent sample t-tests were performed using IBM SPSS Statistics 17.0 software (IBM Corp., Armonk, New York, USA) to compare the differences in surgical accuracy and filling time between Groups A and B, with p < 0.05 considered statistically significant.
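For reference, the same independent-samples t-tests can be recomputed from the summary statistics reported in the Results (10 models per group), here with SciPy in place of SPSS:

```python
from scipy import stats

# Independent-samples t-tests recomputed from the reported summary
# statistics (mean, SD, n = 10 per group); the study itself used SPSS.
acc = stats.ttest_ind_from_stats(2.09, 0.56, 10, 3.23, 1.15, 10)
dur = stats.ttest_ind_from_stats(25.45, 2.58, 10, 37.05, 3.34, 10)
print(f"accuracy: t = {acc.statistic:.2f}, p = {acc.pvalue:.4f}")
print(f"time:     t = {dur.statistic:.2f}, p = {dur.pvalue:.6f}")
```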
Results
This study successfully developed an AR guidance system for facial fat grafting, achieving a seamless integration of computer-generated three-dimensional facial fat filling virtual surgical plans with real-time spatial-temporal visualization of facial entities during surgery, thus achieving a “virtual-real fusion” effect. With the assistance of this AR guidance system, surgeons proficiently performed facial soft tissue defect filling procedures (Fig. 9). Comparative analysis revealed that the error associated with facial defect filling reconstruction using this system was significantly lower than that of conventional methods (p < 0.05). Meanwhile, the time required for facial defect filling reconstruction using this system was markedly shorter than that of conventional methods (p < 0.05), as shown in Table 1.
Discussion
Autologous fat grafting has emerged as a prominent method for addressing craniofacial deformities. This procedure entails the extraction of adipose tissue from the patient, which is subsequently transplanted into the targeted reconstruction area, effectively restoring facial soft tissue deficits and refining the desired contour [20]. Accurately achieving the intended volume and position of the grafted fat at the recipient site is crucial for the aesthetic outcome of the procedure. The application of digital surgical planning systems enables preoperative simulation of morphological changes in facial soft tissue, facilitating the development of more scientific and personalized virtual surgical plans. However, implementing these virtual plans accurately remains a challenge. A previous study by Lo et al. [3] proposed a method for fat grafting based on preoperative simulation of facial defects, guiding actual surgical procedures by outlining injection layers with different colors. Nevertheless, this method is still subject to the subjective experience of the surgeon, particularly for patients with large and irregular defect areas.
The use of AR in surgical procedures undoubtedly holds immense value in advancing medical practice. The application of AR technology is progressively revolutionizing traditional medical procedures. As an innovative approach, AR enhances surgeons’ visual perception by overlaying digital virtual images onto the real-world surgical environment, aiding surgeons in making precise intraoperative decisions [21, 22]. The facial fat grafting AR guidance system developed in this study exemplifies the practical application of AR technology in reconstructing facial soft tissue defects. By integrating preoperative digitally generated ideal filling effects, i.e., virtual three-dimensional filling surgical plans, with the patient’s physical form, it creates an enhanced visual sensory experience. This fusion technology allows surgeons to directly observe the visual differences between the virtual plans and the patient’s actual facial structure during the filling process, providing real-time, intuitive visual guidance. The results of this study demonstrate that the average error in soft tissue defect filling assisted by the facial fat grafting AR guidance system is 2.09 mm, significantly lower than the average error of 3.23 mm observed with conventional methods (p < 0.05). This indicates that the AR guidance system meets the surgeons’ need for meticulous observation of filling operations and surgical plans, aiding in accurately determining the height and extent of fat filling. Moreover, this experimental outcome underscores once again that relying solely on personal experience and visual observation cannot ensure surgical precision.
The ultimate goal of AR is to achieve the visual effect of “virtual-real fusion”, highlighting the crucial role of display technology in visual perception. HoloLens integrates multiple sensor components capable of projecting virtual images as holograms, facilitating the fusion of virtual and real-world scenes. As a wearable AR terminal that integrates video-based display and computational capabilities, HoloLens finds extensive application in the research and development of mobile medical AR systems [23, 24]. In this study, a facial fat grafting AR guidance system was constructed based on HoloLens. The system synchronously captures real filling surgical scenes and inputs the captured video information into the integrated computer. Through computation, the corresponding virtual filling plan is seamlessly fused with the facial entity in real time and projected directly in front of the surgeon’s field of view. Surgeons can observe the operation from various angles and depths, which enhances their visual perception during the procedure, aids in the understanding of complex facial defect structures, and effectively improves operational efficiency. Furthermore, the system mitigates the problem of hand-eye separation during surgery, facilitating smoother surgical procedures. The results of this study also indicate that the application of this AR guidance system significantly reduces the time required for surgeons to perform soft tissue defect filling compared to conventional methods (p < 0.05). Moreover, during the execution of filling procedures, the system aids surgeons in accurately positioning surgical instruments relative to the patient’s face, thereby reducing invasiveness.
AR enriches the perception of the real world by precisely integrating computer-generated three-dimensional virtual images into real-life scenes. This integration is achieved through the continual computation of real-time pose estimation by mobile AR display devices, aligning virtual images with the real world based on camera pose information, a process referred to as tracking registration. The accuracy of tracking registration is pivotal, as it directly influences the effectiveness of the “virtual-real fusion” presentation in AR systems. Marker-based tracking registration, one of the most mature and extensively researched tracking registration techniques, entails the analysis of videos or images containing artificial markers captured by the camera to ascertain the camera’s pose. Subsequently, through a coordinate transformation process, virtual objects can be seamlessly integrated into real scenes. This method presents advantages such as low cost, high speed, and precision, thereby facilitating the efficient completion of tracking registration tasks [18]. In this study, a marker-based tracking registration method was utilized. The QR codes were affixed to the model via a bracket, which subsequently underwent CT scanning for the reconstruction of a digital three-dimensional virtual model. With the spatial positioning technology of HoloLens 2 used for QR code recognition, the fusion of the three-dimensional virtual model with the real scene was successfully achieved. Research has indicated that HoloLens 2 is adept at swiftly capturing QR codes in real-world environments and accurately identifying them as spatial location coordinates, facilitating rapid positioning. Given the present technological landscape, this method stands as a relatively precise means of spatial positioning [18]. The registration accuracy of the AR guidance system for facial fat grafting established in this study was less than 1 mm. Given the unique and stable nature of human centric occlusion, in clinical practice the tracking registration device with artificial markers can be securely fixed to the patient’s head by connecting it to a dental splint positioned between the upper and lower teeth. This method guarantees accurate tracking registration [18].
In the process of employing AR-guided systems for actual facial fat grafting, the formulation of three-dimensional virtual filling surgical plans is crucial. Facial fat grafting is characterized by a certain absorption rate: research indicates that over time, the volume of fat in the transplanted area gradually diminishes, ultimately stabilizing within 6 months to 1 year [25]. The success of fat transplantation hinges on various factors, with each specific step (i.e., harvesting, processing, and transplantation) potentially affecting the degree of fat volume loss in the recipient area [26, 27]. In future research, we will continue to evaluate the retention rate of transplanted fat and adjust the planned filling volume in facial defect areas to compensate for expected absorption, ensuring better alignment of the surgical plan with real-world outcomes to guide surgeons in clinical practice. Additionally, in this study, we obtained three-dimensional data of the model in the supine position using CT scans for AR-assisted filling. In clinical practice, facial morphology may exhibit slight variations between supine and upright positions. Consequently, future research will investigate the implications of employing this technique to assist in filling facial soft tissue defects across various patient positions in real clinical scenarios, along with an assessment of the advantages and limitations inherent to each position.
Conclusion
This study successfully established an AR guidance system for facial fat grafting, achieving “virtual-real fusion” display of three-dimensional virtual filling surgical plans with the actual facial structure, providing visual navigation for surgeons during the filling process. The accuracy and efficiency of facial soft tissue defect reconstruction assisted by this system were significantly superior to traditional methods, indicating its potential to become an important auxiliary tool in clinical practice for facial soft tissue defect reconstruction in the future.
Data availability
No datasets were generated or analysed during the current study.
References
1. Wang KY, Yang Z, Wang WL, Xu H, Liu FY. Autologous free fat graft for repair of concave deformity after total parotidectomy. J Craniofac Surg. 2019;30(3):834–7.
2. Liu K, Sun H, Zhang L, Li B, Chakraborty S, Wang X. Do patient-specific cutting guides and plates improve the accuracy of maxillary repositioning in hemifacial microsomia? Br J Oral Maxillofac Surg. 2020;58(5):590–6.
3. Lo LJ, Yamaguchi K, Niu LS, Liao CH, Lin HH. Fat grafting in patients with extensive unilateral facial deficiency: three-dimensional computer-assisted planning, implementation, and outcome assessment. Ann Plast Surg. 2020;84(1S Suppl 1):S94–9.
4. Kaipainen AE, Sieber KR, Nada RM, Maal TJ, Katsaros C, Fudalej PS. Regional facial asymmetries and attractiveness of the face. Eur J Orthod. 2016;38(6):602–8.
5. Wang YC, Wallace CG, Pai BC, Chen HL, Lee YT, Hsiao YC, Chang CS, Liao YF, Chen PK, Chen YR. Orthognathic surgery with simultaneous autologous fat transfer for correction of facial asymmetry. Plast Reconstr Surg. 2017;139(3):693–700.
6. Denadai R, Raposo-Amaral CA, Buzzo CL, Raposo-Amaral CE. Isolated autologous free fat grafting for management of facial contour asymmetry in a subset of growing patients with craniofacial microsomia. Ann Plast Surg. 2016;76(3):288–94.
7. Dayal A, Bhatia A, Hsu JT. Fat grafting in aesthetics. Clin Dermatol. 2022;40:35–44.
8. Tuin AJ, Schepers RH, Spijkervet FKL, Vissink A, Jansma J. Volumetric effect and patient satisfaction after facial fat grafting. Plast Reconstr Surg. 2022;150(2):307e–18e.
9. Lei H, Ma GE, Liu Z. Evaluation of repairing facial depression deformities secondary to lupus erythematosus panniculitis with autologous fat grafting. J Craniofac Surg. 2016;27(7):1765–9.
10. Vávra P, Roman J, Zonča P, Ihnát P, Němec M, Kumar J, Habib N, El-Gendi A. Recent development of augmented reality in surgery: a review. J Healthc Eng. 2017;2017:4574172.
11. Wang J, Suenaga H, Yang L, Kobayashi E, Sakuma I. Video see-through augmented reality for oral and maxillofacial surgery. Int J Med Robot. 2017;13(2).
12. Al Janabi HF, Aydin A, Palaneer S, Macchione N, Al-Jabir A, Khan MS, Dasgupta P, Ahmed K. Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: a simulation-based feasibility study. Surg Endosc. 2020;34(3):1143–9.
13. Condino S, Turini G, Parchi PD, Viglialoro RM, Piolanti N, Gesi M, Ferrari M, Ferrari V. How to build a patient-specific hybrid simulator for orthopaedic open surgery: benefits and limits of mixed-reality using the Microsoft HoloLens. J Healthc Eng. 2018;2018:5435097.
14. Brun H, Bugge RAB, Suther LKR, Birkeland S, Kumar R, Pelanis E, Elle OJ. Mixed reality holograms for heart surgery planning: first user experience in congenital heart disease. Eur Heart J Cardiovasc Imaging. 2019;20(8):883–8.
15. Cao C, Cerfolio RJ. Virtual or augmented reality to enhance surgical education and surgical planning. Thorac Surg Clin. 2019;29(3):329–37.
16. Cabrilo I, Schaller K, Bijlenga P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg. 2015;83(4):596–602.
17. Liu K, Gao Y, Abdelrehem A, Zhang L, Chen X, Xie L, Wang X. Augmented reality navigation method for recontouring surgery of craniofacial fibrous dysplasia. Sci Rep. 2021;11(1):10043.
18. Gao Y, Liu K, Lin L, Wang X, Xie L. Use of augmented reality navigation to optimise the surgical management of craniofacial fibrous dysplasia. Br J Oral Maxillofac Surg. 2022;60(2):162–7.
19. Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol. 2011;20(3):189–201.
20. Liao X, Wang X, Xu Z, Guo S, Gu C, Jin Z, Su T, Chen Y, Xue H, Yang M. Assessment of facial autologous fat grafts using Dixon magnetic resonance imaging. Quant Imaging Med Surg. 2022;12(5):2830–40.
21. Fritz J, U-Thainual P, Ungi T, Flammang AJ, Cho NB, Fichtinger G, Iordachita II, Carrino JA. Augmented reality visualization with image overlay for MRI-guided intervention: accuracy for lumbar spinal procedures with a 1.5-T MRI system. AJR Am J Roentgenol. 2012;198(3):W266–73.
22. Fritz J, U-Thainual P, Ungi T, Flammang AJ, Fichtinger G, Iordachita II, Carrino JA. Augmented reality visualization with use of image overlay technology for MR imaging-guided interventions: assessment of performance in cadaveric shoulder and hip arthrography at 1.5 T. Radiology. 2012;265(1):254–9.
23. Andress S, Johnson A, Unberath M, Winkler AF, Yu K, Fotouhi J, Weidert S, Osgood G, Navab N. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J Med Imaging (Bellingham). 2018;5(2):021209.
24. Fotouhi J, Alexander CP, Unberath M, Taylor G, Lee SC, Fuerst B, Johnson A, Osgood G, Taylor RH, Khanuja H, Armand M, Navab N. Plan in 2-D, execute in 3-D: an augmented reality solution for cup placement in total hip arthroplasty. J Med Imaging (Bellingham). 2018;5(2):021205.
25. Herold C, Ueberreiter K, Busche MN, Vogt PM. Autologous fat transplantation: volumetric tools for estimation of volume survival. A systematic review. Aesthetic Plast Surg. 2013;37(2):380–7.
26. Mojallal A, Shipkov C, Braye F, Breton P, Foyatier JL. Influence of the recipient site on the outcomes of fat grafting in facial reconstructive surgery. Plast Reconstr Surg. 2009;124(2):471–83.
27. Shi Y, Yuan Y, Dong Z, Gao J, Lu F. The fate of fat grafts in different recipient areas: subcutaneous plane, fat pad, and muscle. Dermatol Surg. 2016;42(4):535–42.
Funding
This project was supported in part by the Interdisciplinary Program of Shanghai Jiao Tong University (No. YG 2021QN71), the National Natural Science Foundation of China (No. 81970973), the Science and Technology Commission of Shanghai Municipality (No. 22010502600), and the Shanghai Professional Service Platform of Oral-Cranio-Maxillofacial Digital Technology Research and Application (21DZ2294600).
Author information
Authors and Affiliations
Contributions
K.L. and S.C. authored the main manuscript text, and K.L. prepared Figs. 1, 2, 3, 4, 5, 6, 7, 8 and 9. X.W., Z.M., and S.G.S. revised the manuscript. All authors reviewed the manuscript.
Corresponding authors
Ethics declarations
Consent for publication
Written informed consent was obtained from the patient’s legal guardian(s) for publication of this article and any accompanying images.
Competing interests
The authors declare no competing interests.
Ethical approval
This study was approved by the Shanghai Ninth People’s Hospital Institutional Review Board (approval number SH9H-2019-T142-1).
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Liu, K., Chen, S., Wang, X. et al. Utilization of facial fat grafting augmented reality guidance system in facial soft tissue defect reconstruction. Head Face Med 20, 51 (2024). https://doi.org/10.1186/s13005-024-00445-x
DOI: https://doi.org/10.1186/s13005-024-00445-x