Drosera Obscura: A Multi-Sensory XR Installation and Performance

Description

Drosera Obscura is an immersive multi-sensory art installation that combines virtual reality, animatronics, sound, scent, and touch to blur the lines between digital and physical worlds. By integrating these sensory elements, the installation invites participants into a deeper, more engaging form of interaction, exploring how technology can evoke emotion and connection. The project is designed to inspire and captivate audiences, showcasing the potential of XR to create unforgettable experiences beyond sight and sound alone.

Documentation

The Drosera Obscura XR Project made major strides this semester, building on our vision of merging virtual reality with real-time, physical interaction. This spring, we moved beyond static VR to create a truly multisensory experience—integrating sight, sound, scent, touch, and responsive motion. Scent cannons were redesigned and embedded into robotic forms, while real-time audio feedback and interactive puppeteering connected user gestures to both virtual and physical reactions.

One of the biggest leaps forward was the transition to fully interactive animatronics. Initially, these were controlled in real time using Autodesk Maya Python scripts connected to Pololu board controllers. We integrated VIVE trackers to allow for live puppeteering—creating a feedback loop where a participant’s movement in VR affects the animatronic’s behavior and vice versa. This evolution redefines immersion and positions the project at the intersection of performance, XR, and robotics.
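
As a rough illustration of that control path, the sketch below streams one rig rotation from Maya’s Python interpreter to a servo channel on a Pololu Maestro over USB serial. The port name, joint name, channel, and angle range are assumptions for illustration, not the project’s actual configuration.

```python
# Hypothetical sketch: stream a Maya joint rotation to a Pololu Maestro servo
# channel over USB serial, using the Maestro's compact "Set Target" protocol
# (0x84, channel, low 7 bits, high 7 bits; target in quarter-microseconds).
import serial                    # pyserial
import maya.cmds as cmds         # available inside Maya's Python interpreter

PORT = "COM4"                    # assumption: the Maestro's command port
maestro = serial.Serial(PORT, 9600)

def set_target(channel, microseconds):
    """Send a Maestro Set Target command (quarter-microsecond units)."""
    target = int(microseconds * 4)
    maestro.write(bytes([0x84, channel, target & 0x7F, (target >> 7) & 0x7F]))

def drive_neck():
    """Map the rig's neck rotation (-45..45 deg) onto a 1000-2000 us pulse."""
    angle = cmds.getAttr("neck_joint.rotateY")   # hypothetical joint name
    angle = max(-45.0, min(45.0, angle))
    set_target(0, 1500 + (angle / 45.0) * 500)
```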

Previously exhibited at venues such as SEAMUS (NYC), Ars Electronica (Austria), and SIGGRAPH (Colorado), the project has reached thousands and gathered crucial feedback from experts in animation, robotics, and interactive design. SIGGRAPH alone drew an audience of nearly 10,000. These presentations and exhibitions helped refine the project while underscoring the value of our transdisciplinary team—visual artists, coders, puppeteers, sound engineers, singers, and a wide range of students—all working together to overcome complex technical challenges, like syncing Unreal Engine with animatronics in real time.

Thanks to support from the ICAT Mini SEAD Grant and the McGill + ICAT Grant, our next phase focuses on designing five non-tethered robots. These bots will be wirelessly connected, able to communicate with users and each other using Unreal Engine (for XR) and Max (for immersive audio). A key goal is developing autonomy in these robotic forms without losing the collaborative spirit of puppetry.
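
One plausible shape for that bot-to-bot messaging is OSC over UDP, a common transport for Max patches; the project may well use a different protocol. The sketch below, using the python-osc package, shows a bot broadcasting its state to an assumed Max address—the host, port, and address pattern are all illustrative.

```python
# Hypothetical sketch of one bot announcing its state over OSC so that
# Max (audio) and Unreal (XR) can react. Addresses and arguments are invented.
from pythonosc.udp_client import SimpleUDPClient

MAX_HOST, MAX_PORT = "192.168.1.50", 7400   # assumed address of the Max patch

client = SimpleUDPClient(MAX_HOST, MAX_PORT)

def broadcast_state(bot_id, gesture, intensity):
    """Send this bot's current gesture and intensity as one OSC message."""
    client.send_message(f"/drosera/bot/{bot_id}/state", [gesture, float(intensity)])

broadcast_state(1, "unfurl", 0.8)
```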

Design progress this semester includes a more compact scent cannon, now small enough to embed in the robot’s body, releasing scent upon user interaction.
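
A minimal sketch of how such a trigger might look, assuming the cannon’s microcontroller accepts single-byte serial commands (an invented protocol, not the project’s):

```python
# Hypothetical scent-cannon trigger: send a timed "puff" command to the
# robot's microcontroller over serial when a user interacts.
import time
import serial

cannon = serial.Serial("/dev/ttyUSB0", 115200)   # assumed port

def puff(duration_ms=250):
    """Open the scent valve for duration_ms, then close it."""
    cannon.write(b"P")           # hypothetical 'puff on' opcode
    time.sleep(duration_ms / 1000.0)
    cannon.write(b"p")           # hypothetical 'puff off' opcode
```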

Additional systems include Houdini-based particle effects and a haptic feedback vest that simulates sensations like wind or rain in response to the animatronic environment. We built custom circuitry and breakout boards for precision motor control, amplifying the responsiveness of the environment. The setting—a night-time cranberry bog—brings an eerie richness to the experience, amplified by layered haptic pulses and ambient sound.
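
As a sketch of how such environmental cues could map to vest output, the functions below shape wind as a slow sinusoidal swell and rain as sparse random taps; the curves and constants are invented, not tuned values from the installation.

```python
# Hypothetical mapping from environment state to haptic motor levels (0..255).
import math
import random

def wind_pattern(strength, t):
    """Slow sinusoidal swell: strength 0..1, t in seconds."""
    return int(255 * strength * (0.5 + 0.5 * math.sin(2 * math.pi * 0.4 * t)))

def rain_pattern(strength):
    """Sparse random taps whose probability scales with rain intensity."""
    return 200 if random.random() < 0.1 * strength else 0
```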

Collaborators like Yamin Xu (Bowling Green University) have added external insight, helping push our iterations further. The collaborative nature of the project is key—we are constantly refining through weekly feedback and making design decisions that reflect this evolving process.

Undergraduate team members made vital contributions. 

Sydney Deco (BFA Industrial Design Student) has been instrumental in designing the robot’s form, from flexible neck housings to ergonomic handles—balancing an industrial core with an organic, lifelike aesthetic. Laser-cut structures and fabric details have helped us test materials that can survive performance conditions while maintaining visual impact.

Matt Finn (BFA Creative Tech Student) redesigned the system baseplate for ease of rotation and transport, adding wheel stabilizers and an integrated power storage unit. He also installed proximity touch sensors—small but impactful—that light up on human approach using changes in capacitance.
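
A minimal sketch of that capacitive approach behavior, written as CircuitPython and assuming a board with touchio support; the pin names, onboard pixel, and scaling are placeholders:

```python
# Hypothetical CircuitPython sketch: brighten an LED as raw capacitance rises
# when a hand nears the electrode.
import time
import board
import touchio
import neopixel

sensor = touchio.TouchIn(board.A1)               # assumed electrode pin
pixel = neopixel.NeoPixel(board.NEOPIXEL, 1)     # assumed onboard pixel

BASELINE = sensor.raw_value                      # capacitance with no one near

while True:
    approach = sensor.raw_value - BASELINE       # grows as a hand gets closer
    pixel[0] = (0, min(255, max(0, approach // 4)), 0)
    time.sleep(0.02)
```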

Jason Hodge (BFA Creative Tech Student) simplified our workflow by reducing dependencies: he removed Max from the chain, rewrote the Arduino firmware, and created a direct pipeline between Maya, Arduino, and Unreal. This allows real-time VR gestures to directly control physical movements. Each puppeteer now manages specific “bones” that drive corresponding physical joints.
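
A rough sketch of the Maya side of that pipeline might look like the following, with bone names, serial port, and the line-based packet format all invented for illustration:

```python
# Hypothetical Maya -> Arduino leg of the pipeline: sample each puppeteer's
# assigned "bones" and stream them as plain-text channel/angle pairs.
import serial
import maya.cmds as cmds

BONES = {"jaw_joint": 0, "neck_joint": 1, "tendril_a": 2}   # assumed rig names
arduino = serial.Serial("COM5", 115200)                     # assumed port

def stream_pose():
    """Send every mapped bone's Y rotation as a 'channel:angle' line."""
    for bone, channel in BONES.items():
        angle = cmds.getAttr(bone + ".rotateY")
        arduino.write(f"{channel}:{angle:.1f}\n".encode())
```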

Tohm Judson (Music, Sound, Narrative) has been essential in shaping both the story and the technical setup for Drosera Obscura. Early in the process, he noticed that the design sketches resembled carnivorous Drosera plants. This insight inspired the project’s title and helped form a narrative set in a posthuman cranberry bog in the Pacific Northwest—an environment shaped by climate collapse where evolved plants digest insects, communicate through sound, and repurpose microplastics. This backstory became a creative anchor for the team, guiding the design of textures, sound, and behavior across the experience. Tohm also supports the technical development, helping set up a system where five animatronic bots can wirelessly communicate with each other and the environment through Max and Unreal Engine. His work helps tie together the story and the interactive systems, making the experience more cohesive and immersive.

Brook Kennedy (Industrial Design Faculty Member) provided critical design consultation in the early stages of Drosera Obscura, helping the team conceptualize the visual language and ergonomic qualities of the animatronic forms. His background in speculative and sustainable design supported material exploration and the integration of form with function—especially in components like the scent cannon and interface elements. Brook’s input helped bridge practical fabrication needs with the immersive, organic aesthetic of the project.

Matthew Swarts (Georgia Tech Faculty Researcher) is developing a multi-component integration system—combining touch sensors, haptics, audio feedback, stepper motors, and servos. The system runs on a custom Feather board that outputs audio and takes input from sensor arrays, all driven through wireless communication with the Arduino R4 board. It’s been a deep hardware challenge but central to getting the bots to react with nuance.
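
As a hedged sketch of the Feather side of that link, the CircuitPython loop below reads one analog sensor and pushes readings to an assumed Arduino R4 address as small UDP packets; the network credentials, addresses, and packet format are placeholders, not the project’s actual protocol.

```python
# Hypothetical CircuitPython sketch: stream sensor readings from the Feather
# to the Arduino R4 over Wi-Fi as small UDP packets.
import time
import board
import analogio
import wifi
import socketpool

wifi.radio.connect("drosera-net", "password")        # assumed network
pool = socketpool.SocketPool(wifi.radio)
sock = pool.socket(pool.AF_INET, pool.SOCK_DGRAM)
R4_ADDR = ("192.168.1.60", 8888)                     # assumed R4 address

sensor = analogio.AnalogIn(board.A0)                 # assumed sensor pin

while True:
    sock.sendto(f"touch:{sensor.value}".encode(), R4_ADDR)
    time.sleep(0.05)
```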

Dongsoo Choi (School of Visual Arts Faculty/IT Member) has been essential in hardware assembly—soldering boards with up to 300 contact points each. With three boards in operation, we’ve had to manage nearly 900 potential points of failure. Preparing these systems for travel to Canada next year adds another layer of complexity.

Thomas Tucker (School of Visual Arts Faculty Member) has led the effort to get Unreal Engine communicating in real time with Arduino, enabling synced, live interaction between the virtual and physical elements. He has also played a key role in iterative modeling and fabrication throughout the project. In addition to his technical contributions, he is serving as the project’s Principal Investigator and will manage performances at McGill and ICAT in the upcoming academic year.

We’re now preparing for a fall performance at the ICAT Cube and a spring 2026 showcase at McGill University. Several early prototypes are being preserved as documentation of the iterative process—each a small step in a larger, more complex system.

We're also continuing to explore how puppeteers and animatronics can complement each other. While some puppet functions are being automated, we still believe in the power of the human touch to bring forms to life. Interactive elements like buttons will trigger particle effects, scents, or responses, adding layers of engagement. Safety remains a priority, with kill switches and protective housings integrated into every build.
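
One simple way such triggers and the kill switch could compose is a dispatch table gated by a global flag, sketched below with placeholder actions standing in for the real particle, scent, and sound systems:

```python
# Hypothetical event dispatch: each button ID maps to an effect, and a global
# kill switch gates every physical action. Names and actions are illustrative.
EFFECTS = {
    1: lambda: print("spawn Houdini particle burst"),   # placeholder action
    2: lambda: print("fire scent cannon"),              # placeholder action
    3: lambda: print("play ambient bog call"),          # placeholder action
}

kill_switch_engaged = False

def on_button(button_id):
    """Run the mapped effect unless the kill switch has cut physical output."""
    if kill_switch_engaged:
        return
    action = EFFECTS.get(button_id)
    if action:
        action()
```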

To support documentation and momentum, we’ve created 18 weekly YouTube videos showcasing progress. These updates have been key in tracking the project’s arc, keeping our team focused, and communicating goals clearly.

View Channel: Thomas Tucker

Impact