Securing Trustworthy XR Interactions through Human-Machine Networks in Healthcare

Description

Securing trustworthy XR interactions is a core requirement for the Future of Work built on immersive and mixed-reality technologies. Our specific focus is remote collaboration between VR and AR users in a telemedicine context. For example, we are exploring how users trust mixed-reality devices for this type of application, including current medical students, who will be at the forefront of integrating immersive technology into medicine.

Securing trust is critical for adoption, and even more critical in a medical context when it comes to data security and privacy. Our project addresses this by investigating the role authentication plays in connecting with remote medical experts, and whether users trust the instructions or medical diagnoses they receive from remote caregivers through this system. We will be developing a VR/AR remote collaboration system as part of this project. To increase trust in this ecosystem, we plan to explore authentication methods and to conduct expert evaluations of the anticipated data flows, identifying areas that need enhanced security given the growing number of sensors on mixed-reality devices and the telemedicine applications they are likely to support.
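As a rough illustration of the kind of authentication flow under consideration, the Python sketch below issues and verifies a short-lived signed session token before a remote expert joins a session. The HMAC scheme, function names, and constants are assumptions for illustration only, not the project's chosen design.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical shared secret provisioned to credentialed remote experts.
# In a real deployment this would come from an identity provider, not a constant.
SHARED_SECRET = b"replace-with-provisioned-key"
TOKEN_TTL_SECONDS = 300  # short-lived tokens limit replay risk


def issue_session_token(expert_id: str) -> str:
    """Issue a signed, time-stamped token for a remote medical expert."""
    issued_at = str(int(time.time()))
    nonce = secrets.token_hex(8)
    payload = f"{expert_id}|{issued_at}|{nonce}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"


def verify_session_token(token: str) -> bool:
    """Verify the signature and freshness before opening the XR session."""
    try:
        expert_id, issued_at, nonce, signature = token.split("|")
    except ValueError:
        return False
    payload = f"{expert_id}|{issued_at}|{nonce}"
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    return time.time() - int(issued_at) <= TOKEN_TTL_SECONDS


if __name__ == "__main__":
    token = issue_session_token("expert-042")
    print("token accepted:", verify_session_token(token))
```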

Documentation

Reports

The primary research activities conducted during Fall fell into three areas: a literature review of XR collaborative systems in a medical context; DepthKit Studio configuration, in collaboration with University Libraries/ARIES; and system exploration of alternative collaboration systems.

Literature Review 

We reviewed research results and existing systems that fit our collaboration model (a local user and a remote expert). Papers were first characterized by their collaboration configuration, the data transferred, and the application context.

We then synthesized new labels for the most common sensors/systems to examine trade-offs between deployment feasibility, user experience, and potential privacy risks.
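To make the coding scheme concrete, the sketch below shows one way such a characterization could be recorded; the field names and the 1-to-5 rating scale are illustrative stand-ins, not the exact labels used in our review.

```python
from dataclasses import dataclass


@dataclass
class ReviewedPaper:
    """One coded entry in the literature review (illustrative schema)."""
    title: str
    configuration: str           # e.g. "local AR user + remote VR expert"
    data_transferred: list[str]  # e.g. ["first-person video", "depth mesh", "audio"]
    context: str                 # e.g. "surgical telementoring"
    # Synthesized trade-off labels, each rated 1 (low) to 5 (high):
    deployment_feasibility: int = 0
    user_experience: int = 0
    privacy_risk: int = 0


example = ReviewedPaper(
    title="Example telementoring system",
    configuration="local AR user + remote desktop expert",
    data_transferred=["first-person video", "audio"],
    context="remote wound care guidance",
    deployment_feasibility=4,
    user_experience=3,
    privacy_risk=2,
)
```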

DepthKit Studio Testing 

The project team initiated a collaboration with ARIES and the University Libraries to begin configuring a tele-operation system for VR/AR. This was inspired by prior work from Cornell Tech published at an IEEE VR workshop.

However, during this process we encountered several roadblocks: calibrating the system, capturing the user and the environment in a single run, networking the captured data to a nearby computer, and ensuring compatibility with the Magic Leap 2 AR headset.

System Exploration 

Beyond DepthKit Studio, we reviewed other tele-operation systems for VR/AR, including Magic Leap's Remote Assist and Workshop software, MakeSEA Catapult, and building a tele-operation networked application from scratch using Unity and 360-degree videos. From this evaluation, we determined that MakeSEA Catapult and Magic Leap Remote Assist are the most viable options. Additionally, we explored the potential to integrate generative AI for the target task through Microsoft's Copilot AI, which promises automated annotations and responses for frontline workers using AR headsets. So far, we have not been able to identify a usable system for this situation, but we plan to explore this recent breakthrough in AI in the context of remote healthcare on spatial computing/mixed-reality devices.
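To illustrate the kind of generative AI integration we have in mind, the sketch below sends a first-person AR frame and a question to a multimodal model and reads back an annotation. Since no usable public API was identified for this workflow, the endpoint URL, request schema, and authorization scheme here are placeholders for illustration, not a real service.

```python
import base64
import json
from urllib import request

# Placeholder endpoint for a multimodal (vision-language) model; the URL,
# JSON schema, and auth header are hypothetical, not an actual Copilot API.
VLM_ENDPOINT = "https://example.invalid/v1/annotate"
API_KEY = "placeholder-key"


def annotate_frame(frame_jpeg: bytes, question: str) -> str:
    """Send a first-person AR frame plus a question; return the model's guidance text."""
    body = json.dumps({
        "image_base64": base64.b64encode(frame_jpeg).decode(),
        "prompt": question,
    }).encode()
    req = request.Request(
        VLM_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["annotation"]
```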

Spring 2024 Plans 

Based on our results, we have determined a modified plan for Spring 2024. First, we will conduct up to two design or focus group workshops on emerging technologies, in line with the original proposal. The only modification is a more interactive workshop format centered around and using generative AI mechanisms (language and image synthesis models).

Second, we narrowed the scope of the preliminary remote user study to use Magic Leap 2 Remote Assist, as this plug-and-play solution allows us to execute a user study with three clear conditions: no remote assist (baseline), remote assist using only the first-person view from the local client, and remote assist using both the first-person view and the third-person view enabled by the environment sensing built into the AR device. The user study can then evaluate the ability of remote experts (medical students) to guide novices (general participants from the VT community) through a remote medical task, assessing both usability and privacy concerns. Based on our literature review, this setting is representative of a practical, feasible deployment to areas with limited access, as it only requires sending a lightweight mobile headset to the client rather than a ten-sensor DepthKit Studio setup that demands precise calibration. The study remains in line with the originally proposed work; it only removes the requirement that the remote expert use a VR headset, falling back to a traditional desktop setting for guidance, and it retains the tools and gestures originally proposed for a VR remote expert, along with the associated risks of exposing the local client's home environment during guidance.
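For planning purposes, the three study conditions can be captured as a simple within-subjects design. The sketch below uses a plain Latin-square rotation to counterbalance condition order across participants (balancing position, though not carryover, effects); the condition names are our own shorthand, not a committed protocol.

```python
from enum import Enum


class Condition(Enum):
    BASELINE = "no remote assist"
    FIRST_PERSON = "remote assist, first-person view only"
    FIRST_PLUS_THIRD = "remote assist, first- and third-person views"


def latin_square_order(participant_index: int) -> list[Condition]:
    """Rotate condition order per participant so each condition appears
    once in each serial position across every block of three participants."""
    conditions = list(Condition)
    offset = participant_index % len(conditions)
    return conditions[offset:] + conditions[:offset]


for p in range(3):
    print(p, [c.name for c in latin_square_order(p)])
```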

Impact