
Breath-driven augmented audio interface

The proposed breath-driven augmented audio interface will take respiratory input from biometric sensors and use it to manipulate audio playback and augment sampled sounds in real time. The interface introduces two main innovations: 1) the use of respiratory signals to control the playback parameters of an audio file, and 2) the augmentation of sampled ambient sounds (starting with the sound of the breath itself) to create a functionally aesthetic musical experience grounded in mindful music generation. The result will be an interactive soundtrack that uses the user’s breath as both its conductor and its palette.

The intention is to adapt this interface for immersive group meditation sessions in the CUBE during Fall 2021. Biometric input from multiple participants will be analyzed for group coherence and synchronicity and used, together with geographic data from any additional remote participants, to spatially manipulate the sonic feedback in the space.
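As a rough illustration of the first innovation, the sketch below maps a short window of respiration-sensor samples to two hypothetical playback parameters: playback rate and filter cutoff. The sensor format, sample rate, window length, and mapping ranges are all assumptions made for illustration, not the project's actual signal chain.

```python
import numpy as np

def breath_to_playback_params(breath_window, sensor_rate_hz=25.0):
    """Map a recent window of respiration samples to playback parameters.

    `breath_window` is a 1-D array of raw sensor readings (e.g. chest
    expansion); the sensor format and mapping ranges are illustrative.
    Returns (playback_rate, filter_cutoff_hz).
    """
    # Normalize the current sample within the window so the mapping
    # does not depend on the sensor's absolute units.
    lo, hi = breath_window.min(), breath_window.max()
    depth = (breath_window[-1] - lo) / (hi - lo + 1e-9)

    # Estimate breathing rate from zero crossings of the detrended signal.
    centered = breath_window - breath_window.mean()
    crossings = np.count_nonzero(np.diff(np.signbit(centered).astype(np.int8)))
    window_s = len(breath_window) / sensor_rate_hz
    breaths_per_min = (crossings / 2) / window_s * 60.0

    # Hypothetical mappings: slower breathing slows playback slightly,
    # and a deeper inhale opens a low-pass filter.
    playback_rate = np.interp(breaths_per_min, [4.0, 20.0], [0.75, 1.25])
    filter_cutoff_hz = 200.0 + depth * 4000.0
    return playback_rate, filter_cutoff_hz
```

In a live setting, values like these would be sent on to whatever engine hosts the sampled sounds, updated once per sensor window.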
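For the group sessions, one simple way to quantify coherence is the mean pairwise correlation of the participants' breath signals over a shared window. The sketch below assumes equal-length, time-aligned signals; the actual coherence and synchronicity measures used in the CUBE may well differ.

```python
import numpy as np

def group_breath_coherence(signals):
    """Score group coherence as mean pairwise correlation of breath signals.

    `signals` is a 2-D array with one row per participant, all covering
    the same time window. Returns a value in roughly [-1, 1]; values
    near 1 indicate synchronized breathing across the group.
    """
    corr = np.corrcoef(signals)                # participant-by-participant matrix
    mask = ~np.eye(corr.shape[0], dtype=bool)  # drop self-correlations
    return float(corr[mask].mean())
```

A score like this could then inform the spatial manipulation of the sonic feedback, for example widening or tightening the sound field as the group falls in and out of sync; that mapping is left open here.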