[Video: Europa ship concept]

Our project aims to seamlessly blend real-time stereoscopic video capture of a live actor into a synthetic virtual environment, and to build an interactive, branched narrative in which a live participant (the audience member) can engage with the story and navigate toward different endings based on their choices. To achieve this goal, we built a diverse team of faculty and students from disciplines including Computer Science, Cinema, and the Visual Arts. Our process has considered the limitations and opportunities of each of these areas, with the goal of creating an experience that significantly advances the possibilities of virtual reality.

Our efforts this semester culminated in a live demonstration of our prototype at ICAT Creativity + Innovation Day 2019, on May 6, 2019. The exhibit allowed visitors to experience an early version of the first scene of our story, which unfolds inside a spacecraft interior. Participants came in pairs and rotated between two roles: the Captain of the spaceship and the Co-Pilot. The Captain role was experienced virtually inside an HTC Vive Pro headset. The Co-Pilot role (which ultimately will be portrayed by an actor in our final experience) was lit with several cinema lights and filmed against a green screen using two Blackmagic Micro Studio Camera 4K cameras (for 3D stereoscopy), which were connected to a rendering computer through a DeckLink 8K Pro video capture card. The bridge was modeled and textured in Autodesk Maya, and the scene was rendered to the Vive Pro headset using the Unity3D engine.
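
To give a concrete sense of how the two camera feeds reach the headset, here is a minimal Unity (C#) sketch of the receiving end. It assumes the capture inputs appear as standard capture devices, which is generally not true of a DeckLink card without a native plugin (our actual pipeline uses a custom capture path); the device names and quad objects are hypothetical placeholders.

```csharp
using UnityEngine;

// Minimal sketch: pull two capture inputs into Unity as textures, one per eye quad.
// Assumes the capture card exposes its inputs as standard capture devices, which a
// DeckLink card typically does NOT without a native plugin -- the team's real
// pipeline is custom; this only illustrates the overall flow.
public class StereoFeedReceiver : MonoBehaviour
{
    public Renderer leftEyeQuad;   // quad visible only to the left-eye camera
    public Renderer rightEyeQuad;  // quad visible only to the right-eye camera
    public string leftDeviceName;  // e.g. first capture input (hypothetical name)
    public string rightDeviceName; // e.g. second capture input (hypothetical name)

    void Start()
    {
        var left = new WebCamTexture(leftDeviceName, 1920, 1080, 30);
        var right = new WebCamTexture(rightDeviceName, 1920, 1080, 30);
        left.Play();
        right.Play();

        // The keying shader (see the chroma-key sketch later in this report)
        // reads these textures when the quads are rendered.
        leftEyeQuad.material.mainTexture = left;
        rightEyeQuad.material.mainTexture = right;
    }
}
```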

We are very encouraged by the results of this preliminary testing. Most visitors seemed to enjoy the experience and appeared highly engaged by the ability to interact with another person through live, 3D-integrated video. We are excited to expand our explorations in the Fall 2019 term.

Below is a summary of our process and trajectory to date:

We have applied a holistic approach to the intersecting disciplines our project incorporates -- Computer Science, Cinema, and the Visual Arts -- moving from story, to technology, to the visual arts.

One of the key components of our project is the technical integration of live-action video with the synthetic virtual reality. We have been working diligently this semester to combine our Computer Science and Cinema expertise toward this end. Drawing from the cinematic arts, we built a dual-camera 3D rig to capture HD video of a performer against a green screen. The 3D system should provide a more natural perception of depth and, hopefully, a more immersive experience for the participant. On the Computer Science side, we are continually improving a custom-built pipeline for streaming the HD video feed in real time into the Unity engine, using a dedicated shader to key out the green screen from the video. The ultimate goal is to make the performer indistinguishable from the virtual environment. This is an extremely challenging task, however, and it will require fine-tuning as we proceed, as well as thoughtful consideration of cinema production elements (e.g., designing a costume that covers the actor's fine hairs, which are very difficult to key cleanly; maintaining precise, separate lighting of the green screen and the actor; etc.). Another challenge will be optimizing the system for low latency, so that it can be used in real time inside our VR environment.
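
For illustration, the core of a green-screen keying step can be sketched as follows. This is a CPU reference in C# of one common approach (green-dominance keying with simple spill suppression); our actual shader runs on the GPU, and its exact math and thresholds are not shown here.

```csharp
using UnityEngine;

// CPU reference of the keying logic that, in the real pipeline, runs in a shader.
// A sketch of one common approach (green-dominance keying); thresholds are illustrative.
public static class ChromaKeyReference
{
    // Returns the source color with alpha reduced where green dominates red/blue.
    public static Color Key(Color src, float threshold = 0.1f, float softness = 0.1f)
    {
        // "Green dominance": how much the green channel exceeds the larger of red/blue.
        float dominance = src.g - Mathf.Max(src.r, src.b);

        // Fully opaque below the threshold, fading to fully transparent across 'softness'.
        float alpha = 1f - Mathf.Clamp01((dominance - threshold) / softness);

        // Simple spill suppression: pull green down toward the other channels
        // so keyed edges don't glow green against the virtual scene.
        float spill = Mathf.Clamp01(dominance);
        Color result = src;
        result.g = Mathf.Lerp(src.g, Mathf.Max(src.r, src.b), spill);
        result.a = alpha;
        return result;
    }
}
```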

One of the greatest challenges we encountered this semester, which affected our progress on this first and fundamental front of blending live-action video with synthetic VR, related to equipment procurement. It took quite a long time for our camera and computer parts to arrive, and when they finally did, only one of our two cameras worked and the capture card was dead on arrival. It then took nearly two months to receive replacements for the camera and capture card -- and the replacement camera arrived with missing and incorrect accessories (no battery, no breakout cable, and the wrong power adapter). Once we finally had all our main gear together and were able to begin testing the video stream within Unity, we discovered two difficulties: 1) our two cameras do not output equal exposures (a difference of more than one stop); and 2) our computer could not process the video signal, because its motherboard did not provide enough PCIe lanes to carry the capture card's input. We found a temporary workaround in order to keep working, and we will pursue a longer-term solution next fall: purchasing a new motherboard and processor with additional funds we sourced from a $1,000 CHCI partial-matching grant. These equipment difficulties definitely put us behind schedule, yet we learned a lot in the process and gained a clearer understanding of where we need to go.

In addition to the new motherboard and processor, we still need to procure camera-rigging parts in order to construct a proper mount for the two cameras. So far, we have been using two tripods positioned tightly together, which is a temporary and limited arrangement. Our ultimate goal is to build a camera mount that lets us adjust the spacing between the left and right camera “eyes” so it can be matched to each participant’s interpupillary distance (IPD) for an optimal, customized 3D viewing experience.
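
The mapping from a participant's IPD to the rig is simple: each camera sits half the IPD from the rig's centerline. A minimal Unity (C#) sketch of that symmetry is below; the 63 mm default is only a common average IPD, not a project specification, and the transforms stand in for the physical camera positions (or their virtual proxies).

```csharp
using UnityEngine;

// Sketch: position the two camera stand-ins symmetrically about the rig center
// at half the participant's IPD. The adjustable hardware mount described above
// would be set to the same spacing.
public class InteraxialRig : MonoBehaviour
{
    public Transform leftCamera;
    public Transform rightCamera;
    [Range(50f, 75f)] public float ipdMillimeters = 63f; // placeholder average

    void Update()
    {
        float halfIpdMeters = (ipdMillimeters / 1000f) * 0.5f;
        leftCamera.localPosition  = new Vector3(-halfIpdMeters, 0f, 0f);
        rightCamera.localPosition = new Vector3( halfIpdMeters, 0f, 0f);
    }
}
```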

We also need to invest time and money in an audio relay system between the performer and the participant. Just as the video signal needs to feel seamless and natural to the participant, the spoken dialogue exchange needs to feel “real.” This will require microphones for both actor and participant; headphones for the actor (we plan to use bone-conduction headphones as part of the actor’s costume); and the necessary cabling, accessories, and audio interface. One challenge we anticipate on this front is audio latency relative to the video. We need to figure out how to keep the visual and auditory signals in sync throughout the experience; otherwise the participant will surely be pulled out of the immersion.
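
One simple way to keep the two in sync, assuming the video path has a roughly constant and measurable latency, is to delay the actor's audio by the same amount. The Unity (C#) sketch below inserts a fixed ring-buffer delay on an AudioSource; the 80 ms value is only a placeholder, and a real system would measure the end-to-end video latency and may need a more adaptive approach.

```csharp
using UnityEngine;

// Sketch: delay the actor's microphone feed by a fixed amount so it stays in sync
// with the (higher-latency) video path. The delay value must be measured for the
// real pipeline; 80 ms here is only a placeholder. Assumes stereo audio output.
[RequireComponent(typeof(AudioSource))]
public class FixedAudioDelay : MonoBehaviour
{
    public float delaySeconds = 0.08f; // placeholder; measure the real video latency

    private float[] buffer;
    private int writeIndex;

    void Awake()
    {
        int sampleRate = AudioSettings.outputSampleRate;
        // Interleaved stereo buffer sized for the requested delay.
        buffer = new float[Mathf.CeilToInt(sampleRate * delaySeconds) * 2];
    }

    // Runs on the audio thread; inserts a simple ring-buffer delay into the chain.
    void OnAudioFilterRead(float[] data, int channels)
    {
        if (buffer == null) return;
        for (int i = 0; i < data.Length; i++)
        {
            float delayed = buffer[writeIndex];
            buffer[writeIndex] = data[i];
            data[i] = delayed;
            writeIndex = (writeIndex + 1) % buffer.Length;
        }
    }
}
```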

From a sound design standpoint, we need to invest more time and energy in creating a lush, spatial sonic environment, one that adapts in real time to the participant and to the story as it unfolds.
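
As one small illustration of what "adapting in real time" could mean in Unity, the C# sketch below crossfades between looping ambience layers as the story moves between spaces; the layer names and triggering methods are hypothetical.

```csharp
using UnityEngine;

// Sketch: crossfade between looping ambience layers (e.g. cockpit hum, under-ice
// ocean) as the scene state changes. Layers and states are hypothetical examples.
public class AdaptiveAmbience : MonoBehaviour
{
    public AudioSource cockpitLayer; // looping, starts audible
    public AudioSource oceanLayer;   // looping, starts silent
    public float fadeSpeed = 0.5f;   // volume change per second

    private AudioSource current;

    void Start() { current = cockpitLayer; }

    public void EnterOcean()   { current = oceanLayer; }
    public void EnterCockpit() { current = cockpitLayer; }

    void Update()
    {
        // Each layer keeps playing; only its volume is faded toward the target.
        Fade(cockpitLayer, current == cockpitLayer ? 1f : 0f);
        Fade(oceanLayer,   current == oceanLayer   ? 1f : 0f);
    }

    void Fade(AudioSource src, float target)
    {
        src.volume = Mathf.MoveTowards(src.volume, target, fadeSpeed * Time.deltaTime);
    }
}
```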

With regard to the Visual Design of the virtual world, we drew from the discipline of Visual Art and spent a lot of time creating 3D models in Maya of a spaceship cockpit, with an eye to realistic engineering of the craft. Eventually, we will design all the details of the cockpit, along with various views of the exterior world contained within our story, including: the surface of Europa (the smallest of Jupiter’s Galilean moons), where our story begins; a thick surface layer of ice, through which our spaceship drills and melts its way down; a region of dark ocean void, as we descend beneath the ice layer; and then a magical world full of bioluminescent sea creatures. (One of our endings also includes an additional modeling of volcanic ocean floor.)

From this Visual Art and VR perspective, our goal is to create a virtual environment that looks and feels real. To achieve this, we will build a physical set that matches the virtual 3D models in shape, size, and texture (though not necessarily in color, since the participant will be wearing a VR headset). In this way, the user can touch nearby objects that correspond to the virtual world, significantly increasing presence. The visual design also takes into account the restrictions of the capture process (for example, a static camera) and the demands of the narrative. Since the 3D world will contain real-time video of the actor, we intend to use cinema techniques to physically light the actor so that the footage matches the lighting of the virtual scene. (One idea for this task is to place a 65-inch monitor off camera to display the light and color of the exterior view the actor would see in the virtual environment.)
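
The off-camera monitor idea maps directly onto Unity's multi-display support: a second camera can render the virtual exterior view to the monitor placed near the actor. The C# sketch below assumes the monitor is connected as the rendering machine's second display; whether its light output is strong enough to plausibly match the virtual lighting is something we would still have to test.

```csharp
using UnityEngine;

// Sketch of the off-camera "lighting monitor" idea: render the virtual exterior
// view to a second physical display (the 65-inch monitor near the actor) so the
// color and movement of the virtual scene spills matching light onto the performer.
public class LightingMonitor : MonoBehaviour
{
    public Camera exteriorViewCamera; // looks out the virtual cockpit window

    void Start()
    {
        // Activate the second display if present (displays[0] is the main display).
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate();
            exteriorViewCamera.targetDisplay = 1;
        }
    }
}
```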

Finally, our experience is an interactive narrative in which the actor’s and participant’s interactions are fundamental to the development of the story. Our script has multiple decision points, which will be supported by a set of narrative controls and by the actor’s improvisational skills. The narrative controls are the software support for the narrative, allowing the director to activate events in response to the user’s actions during the experience. Some of these controls progress the narrative from pre-established decision points; others serve as recovery mechanisms to bring the narrative back on track. We have completed a rough draft of the branched-narrative script. We plan to refine the text in the Fall 2019 term, once we have our technical system in place and have cast our performer. We anticipate a highly iterative coordination among performer, writer, and technology in order to deliver this kind of real-time, interactive storytelling. (On this note: we decided to hold off on casting an actor or developing the story much further this term, so that our camera and streaming system can first be properly in place to support our unique creative process.)
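
To make the idea of narrative controls concrete, here is a minimal Unity (C#) sketch of a branched-story state machine that a director could drive from a console, including a recovery node. The node names, transitions, and console hookup are hypothetical; the real script and its decision points are still in draft.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: the director advances a branched story by triggering named transitions
// from the current node, including "recovery" transitions that steer the scene
// back on track. Nodes and transitions here are hypothetical examples.
public class NarrativeController : MonoBehaviour
{
    private class Node
    {
        public string Name;
        public Dictionary<string, string> Transitions = new Dictionary<string, string>();
    }

    private readonly Dictionary<string, Node> nodes = new Dictionary<string, Node>();
    private Node current;

    void Awake()
    {
        AddNode("Launch", ("descend", "IceDescent"), ("abort", "Recovery_Launch"));
        AddNode("IceDescent", ("breakthrough", "OpenOcean"), ("stall", "Recovery_Launch"));
        AddNode("OpenOcean");
        AddNode("Recovery_Launch", ("resume", "Launch"));
        current = nodes["Launch"];
    }

    // Called by the director's console (e.g. a keyboard or tablet UI) during the show.
    public void Trigger(string transition)
    {
        if (current.Transitions.TryGetValue(transition, out string nextName))
        {
            current = nodes[nextName];
            Debug.Log($"Narrative moved to: {current.Name}");
            // Here the controller would fire the events tied to this node:
            // lighting cues, sound cues, exterior-view changes, etc.
        }
        else
        {
            Debug.LogWarning($"No transition '{transition}' from node '{current.Name}'");
        }
    }

    private void AddNode(string name, params (string trigger, string target)[] transitions)
    {
        var node = new Node { Name = name };
        foreach (var (trigger, target) in transitions)
            node.Transitions[trigger] = target;
        nodes[name] = node;
    }
}
```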

Our efforts thus far on this project have only increased our enthusiasm for this research. While we have encountered many challenges along the way, each has created an opportunity for discovery and problem-solving, with a strongly net-positive result. Promisingly, we have seen several early examples of similar immersive projects exhibited at major film festivals like Sundance -- which tells us we are really onto something! Hopefully, we can leverage this “buzz” and momentum to gain additional funding and continue improving our idea.