SOUNDVISION on stage | AP School Of Arts


SOUNDVISION is a Unity-based artistic toolset for reactive real-time visualizations of sound, music and movement, with the goal of making performances more visually perceivable, for example for a deaf audience. Max Schweder developed this toolset together with his mentors Chikashi Miyama and Naoto Hiéda during his fellowship at the Academy for Theater and Digitality (Dortmund, Germany).

As a researcher in the research group ‘CREATIE’ at the Royal Conservatoire Antwerp, Max Schweder will build on this first phase of programming research in Dortmund, now developing reactive sound/vision performances that focus in particular on accessibility for deaf and hearing-impaired people, both on and off stage. He aims to provide open-source tools for artistic audio visualization that make performances more accessible. He will collaborate with deaf dancers from the Un-Label network as well as the deaf community in Antwerp.

In this research it is important not to use visuals as a mere translational system. The main interest, which is increasingly becoming Max Schweder's expertise, lies in working on music and its visuals simultaneously, so that one art form reciprocally informs and inspires the other: decisions and actions echo back and forth throughout the creative process.

Reactively visualizing all dimensions of a sound is a complex task. Not only common, essentially bipolar parameters such as dynamics, pitch or articulation can be visualized, but also the more complex relations and connections between sounds. Additionally, a sound-reactive virtual embodiment of a performer, provided for example by 3D camera input, can further amplify the connection between performer and sound.
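To illustrate the kind of mapping described above, here is a minimal sketch in Python. It is not part of the SOUNDVISION toolset, and every name in it is a hypothetical assumption: it extracts two simple audio features from a frame of samples (RMS amplitude as a proxy for dynamics, zero-crossing rate as a rough proxy for pitch/brightness) and maps them to illustrative visual parameters. A real Unity implementation would instead receive audio analysis data (for example over OSC) and drive shader, light or particle properties.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an audio frame (proxy for dynamics)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of sign changes between adjacent samples
    (a very rough proxy for pitch/brightness)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

def visual_params(samples):
    """Map audio features to hypothetical visual parameters in [0, 1].
    The feature names, scaling factors and targets are illustrative only."""
    loudness = min(1.0, rms(samples) * 2.0)                   # could drive object scale
    brightness = min(1.0, zero_crossing_rate(samples) * 4.0)  # could drive color hue
    return {"scale": loudness, "hue": brightness}

# Example: one 1024-sample frame of a 440 Hz sine tone at 44.1 kHz
sr, freq = 44100, 440.0
frame = [0.5 * math.sin(2 * math.pi * freq * t / sr) for t in range(1024)]
params = visual_params(frame)
```

Even this toy mapping shows why a purely translational system falls short: each visual parameter tracks one sound dimension in isolation, while the relations between sounds that the research targets would require mappings designed alongside the music itself.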