Thursday, September 12, 2013

Sonification Techniques

There are three major techniques commonly used to represent data with sound:

Auditory Icons / Earcons

Auditory Icons and Earcons are sounds triggered when particular defined events occur in the data, e.g. a value exceeding a certain threshold. Although they are used in similar ways, there is one main difference between the two: Auditory Icons are recorded sounds that are simply played back when certain events occur, whereas Earcons are more closely tied to the data, which can influence parameters of the sound itself. The line between the two, however, is quite blurry.
This is an effective way to represent events.
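As a minimal sketch of the Earcon idea (the threshold and the pitch mapping here are illustrative assumptions, not a standard), an event detector could scan a series and let the data value shape the pitch of each triggered sound:

```python
# Earcon-style triggering: scan a data series and emit a sound event
# whenever a value crosses a defined threshold. Unlike a fixed auditory
# icon, the earcon's pitch is shaped by the data value itself.

def earcon_events(series, threshold, base_freq=440.0):
    """Return (index, frequency) pairs for every upward threshold crossing."""
    events = []
    for i in range(1, len(series)):
        if series[i - 1] <= threshold < series[i]:
            # Scale pitch with how far the value exceeds the threshold.
            freq = base_freq * (1.0 + (series[i] - threshold))
            events.append((i, freq))
    return events

data = [0.2, 0.5, 1.3, 0.9, 1.6, 1.1]
print(earcon_events(data, threshold=1.0))
```

A recorded auditory icon would simply be played at each returned index; an earcon would additionally use the returned frequency.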


Parameter Mapping

This technique maps values of a dataset to parameters of a sound source, such as a digital synthesizer. It is often used to represent time series data. Each signal can be mapped to a different waveform, manipulating different parameters of the sound source, which makes the signals distinguishable from each other.
This technique is well suited to showing how a signal changes over time.
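A simple parameter mapping can be sketched as follows (the frequency range, segment length, and sample rate are illustrative choices): each data value sets the frequency of a sine oscillator for a short segment, so a rising series is heard as a rising pitch.

```python
import math

def map_to_samples(series, lo_freq=220.0, hi_freq=880.0,
                   seg_dur=0.1, sample_rate=8000):
    """Render a series as concatenated sine segments, one per value."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    samples = []
    phase = 0.0
    for v in series:
        # Linear mapping from the data range to the frequency range.
        freq = lo_freq + (v - lo) / span * (hi_freq - lo_freq)
        for _ in range(int(seg_dur * sample_rate)):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / sample_rate
    return samples

audio = map_to_samples([1, 3, 2, 5, 4])
print(len(audio))  # 5 segments of 800 samples each
```

To sonify a second signal alongside the first, the same mapping could drive a different parameter (e.g. amplitude or timbre) or a different waveform, keeping the two streams distinguishable.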

Model-Based Sonification

The approach of Model-Based Sonification is mainly focused on interaction. Auditory Icons, Earcons and Parameter Mapping are interactive only to a certain extent, whereas Model-Based Sonification turns the data itself into an instrument that the user can interact with, producing sound through that interaction. Roughly speaking, without interaction there is no sound.
This technique makes the data and its sonification far more tangible and engaging. It is, however, not very practical for continuous time series data; it is better suited to exploratory analysis of large static data sets.
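The "data as instrument" idea can be sketched like this (the resonator pitch formula and strike radius are hypothetical choices for illustration): each data point behaves like a small resonator, and nothing sounds until the user excites a position; points near the strike then ring, so what is heard reflects the local structure of the data.

```python
# Model-based sonification sketch: each data point is a resonator whose
# frequency derives from its value. Sound only arises when the user
# "strikes" a position in the data space.

def excite(points, strike_pos, radius=1.0):
    """Return (frequency, amplitude) for each point within the strike radius."""
    voices = []
    for pos, value in points:
        dist = abs(pos - strike_pos)
        if dist <= radius:
            freq = 200.0 + 100.0 * value   # value -> resonator pitch
            amp = 1.0 - dist / radius      # closer points ring louder
            voices.append((freq, amp))
    return voices

data = [(0.0, 1.0), (0.5, 2.0), (3.0, 4.0)]
print(excite(data, strike_pos=0.4))
```

Striking different regions of the data set produces different chords, which is how a user would explore clusters or outliers by ear.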

