A computational framework for modelling active exploratory listening
that assigns meaning to auditory scenes
—Reading the world with Two!Ears—
Download Two!Ears Auditory Model
TwoEars-1.5.zip – 45 MB
As a starting point, have a look at the documentation and begin with the installation guide.
This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 618075.
Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a vast variety of ICT systems and applications. Human beings do not react simply according to what they perceive, but rather on the basis of what the percepts mean to them in their current action-specific, emotional and cognitive situation. Thus, while many models that mimic the signal processing involved in human visual and auditory processing have been proposed, these models cannot predict the experience and reactions of human users. The model developed in the Two!Ears project incorporates both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes.
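To make the interplay of the two processing directions concrete, here is a minimal, purely illustrative sketch of a shared-workspace ("blackboard") loop in which a signal-driven stage deposits evidence and a hypothesis-driven stage interprets it. All names and the energy threshold are hypothetical; the actual Two!Ears framework is a MATLAB-based system and differs in detail.

```python
# Hypothetical sketch of the bottom-up / top-down loop described above.
# Class names, the "energy" feature, and the 0.1 threshold are illustrative
# assumptions, not the Two!Ears implementation.

class Blackboard:
    """Shared workspace where bottom-up evidence and top-down hypotheses meet."""
    def __init__(self):
        self.evidence = {}     # features extracted from the signal (bottom-up)
        self.hypotheses = {}   # candidate interpretations (top-down)

def bottom_up(signal, board):
    # Signal-driven stage: extract a crude feature (here: mean signal energy).
    board.evidence["energy"] = sum(x * x for x in signal) / len(signal)

def top_down(board):
    # Hypothesis-driven stage: assign a meaning to the evidence; a fuller
    # model could also trigger further exploration (e.g. a head turn).
    label = "active source" if board.evidence["energy"] > 0.1 else "silence"
    board.hypotheses["scene"] = label
    return label

board = Blackboard()
bottom_up([0.5, -0.4, 0.6, -0.5], board)
label = top_down(board)
print(label)
```

The point of the sketch is only the control flow: evidence flows upward onto the shared workspace, while interpretation flows back down onto the same workspace, so either side can react to what the other has written.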
Funding period: 01.12.2013 – 30.11.2016
Project number: 618075
Coordinator: Prof. Dr. Alexander Raake, TU Ilmenau