ICAT STUDIOne (Collegiate Square - 460 Turner Street)
As Al Bregman has so eloquently noted, the auditory system routinely accomplishes a phenomenal feat: parsing the motion of waves in a three-dimensional soup of gas to uniquely identify individual auditory objects located anywhere in the space around us.
Using timing discrepancies that range from less than a millisecond for phase shifts at high frequencies to several seconds for the reverberation times of large rooms, we are able to organise the motion of air around us into the temporal, spectral and spatial attributes of sounding objects and their reflections. From this, we can identify such properties as the size of a room, the voice of a friend or the number of violins playing in an orchestra.
In this talk, I will propose that the mechanisms underlying our ability to perform Auditory Scene Analysis, such as fusion and stream segregation, can be drawn upon to reveal patterns and relationships within large, complex, dynamic data sets. To illustrate this, I will discuss the work of a Ph.D. student, currently collaborating with scientists at NASA Goddard, that is revealing hitherto undiscovered relationships between frequency components in solar wind data.
- Sile O'Modhrain
We welcome any interested students, faculty, and community members to join this meeting and enjoy coffee and pastries from the Next Door Bake Shop!