Audification and Sonification of Seismic Waveform Data
Ryan McGee - Software Development and Design
D.V. Rogers - Concept Design and Research Direction

Sonification is the use of non-speech audio to represent data. Auditory display of data may serve monitoring purposes (alarms, Geiger counters, etc.) or analytic ones. This research focuses on the latter, developing sonification techniques in the hope of fostering new comprehension of seismic data and revealing characteristics that are not obvious from visualizations of the same data.

The most straightforward way to sonify data is to use the data as the sound itself: each sample of data becomes a digital audio sample. This technique is known as direct conversion sonification or, more commonly, "audification." After scaling the data to the range of digital audio samples, one may alter the rate at which the samples are played back, thus altering the pitch of the perceived sound. Most arbitrary data, however, is inherently noisy and does not work well with audification; seismic waves obey physical laws that make them well suited to it.

Typically, any modification to the sound beyond playback rate adjustment is considered a more general sonification, since the sound no longer represents the data exactly. Still, many audifications apply a high-pass filter at around 20 Hz to eliminate frequencies below the range of human hearing; discarding that inaudible energy simply allows the audible portion of the sound to be normalized to a higher level.

Most forms of sonification involve using data to control parameters (frequency, timbre, amplitude, tempo, etc.) of one or more sound generators (synthesizers, samplers, etc.). For example, one may map the price of a stock to the frequency of a synthesized violin sound. Musical sonification results when a data set determines aspects of a composition; for instance, one may use the Fibonacci numbers to determine the rhythms or pitches of a piece.
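To make the mapping idea concrete, here is a minimal parameter-mapping sketch in Python. It is not part of the original project; the random data set, the 220-880 Hz range, and the note duration are arbitrary choices for illustration.

import numpy as np
import soundfile as sf

fs = 44100                      # audio sample rate in Hz
data = np.random.rand(100)      # stand-in data set, one value per note
freqs = 220.0 + data * 660.0    # map values in [0, 1] to 220-880 Hz

note_dur = 0.1                  # seconds of sound per data point
t = np.arange(int(fs * note_dur)) / fs
out = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

sf.write('parameter_mapping.wav', 0.5 * out, fs)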

Audification Process of Seismic Data

Since seismic waves obey the general wave equation (http://en.wikipedia.org/wiki/Wave_equation), making them audible is simply a matter of rescaling the seismometer data values to the range of digital audio samples, [-1.0, 1.0], and playing the result back at a rate fast enough to enter our range of hearing (20 Hz - 20 kHz). The typical range for seismic waves is 0.1-3 Hz, so increasing the playback rate by a factor of 100 to 1000 is suitable for making them audible.
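A minimal Python sketch of this process, assuming NumPy, SciPy, and the soundfile package. The input file name and the 100 Hz seismometer rate are illustrative assumptions, not the project's actual data format; the 20 Hz high-pass mentioned above is included.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

seismic = np.loadtxt('christchurch_seismogram.txt')  # hypothetical 1-column data
fs_seis = 100          # assumed seismometer sampling rate in Hz
speedup = 276          # playback-rate factor, e.g. 276X

audio = seismic / np.max(np.abs(seismic))  # rescale to [-1.0, 1.0]
fs_audio = fs_seis * speedup               # 100 Hz * 276 = 27,600 Hz playback

# 20 Hz high-pass removes energy below the range of human hearing
sos = butter(4, 20.0, btype='highpass', fs=fs_audio, output='sos')
audio = sosfilt(sos, audio)

sf.write('Christchurch_276X_HP20.wav', audio, fs_audio)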

Our experiments work with data collected from the February 21st, 2011 Mw 6.1 Christchurch, New Zealand earthquake. The following examples range from 184X to 1103X speed, so the resulting durations are only a few seconds each. Each sound is high-pass filtered at 20 Hz.

Audification Sound Examples:


Christchurch_184X_HP20.mp3

Christchurch_276X_HP20.mp3

Christchurch_551X_HP20.mp3

Christchurch_1103X_HP20.mp3

Sonification Process of Seismic Data

Working with seismic waves is an unusual sonification scenario in which the data works well with audification and therefore produces the sound itself; it is not necessary to map the data to a sound generator to produce coherent sounds. In a monitoring application, a live stream of seismic data would produce a loud banging sound that could act as an alarm for significant events. Yet auditory analysis of these events would be challenging, since we can only perceive changes in loudness and decay time for each event. Our sonification process therefore explores techniques that enhance certain characteristics of an event's sound to foster further interpretation: granulation, time stretching, pitch shifting, and filtering.

Synchronous Granulation

Granulation is the process of slicing a sound into many short grains, segments lasting roughly 1 to 100 milliseconds. If the grains are played back in order, the original sound results. One can also repeat each grain a number of times; with 5 repetitions of each grain, the result becomes 5 times longer than the original. Synchronous granulation means that all grains share the same duration (and any other properties in use). Choosing an arbitrary grain duration (1 to 100 ms) will introduce discontinuities in the sound: a grain's amplitude may start at 0.23 and end at -0.72, and when the grain repeats, this jump produces an undesirable click. To solve this, an amplitude window (envelope or ramp) is applied to each grain so that it always starts and ends at 0. The grains are then overlapped so that the rise and fall of successive windows is heard less as beating.
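A sketch of synchronous granulation along these lines, assuming NumPy. The Hann window and 50% overlap are one common choice, not necessarily the exact parameters used for the examples below; the defaults mirror the 45 ms / 9-repetition example.

import numpy as np

def granulate(x, fs, grain_ms=45, reps=9):
    n = int(fs * grain_ms / 1000)          # grain length in samples
    hop = n // 2                           # 50% overlap so windows cross-fade
    win = np.hanning(n)
    out = np.zeros(len(x) * reps + n)
    pos = 0
    for start in range(0, len(x) - n, hop):
        grain = x[start:start + n] * win   # window removes boundary clicks
        for _ in range(reps):              # repeats stretch the result ~reps times
            out[pos:pos + n] += grain      # overlap-add each repetition
            pos += hop
    return out / max(1e-9, np.max(np.abs(out)))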

Synchronous Granulation Sound Examples:

REF: Christchurch_276X_HP20.mp3

Christchurch_276X_synchgran_45ms_reps9.mp3
Audification at 276X granulated with 45ms grains repeated 9 times

Audified earthquakes sound like an explosion or a slamming door. Any such impulsive sound is characterized by an initial high-frequency, high-amplitude onset that decays over time. Granulation has the effect of time-stretching the sound, allowing us to audibly zoom in on the unique decay of a given earthquake.

Asynchronous Granulation Based on Zero-Crossings

Asynchronous granulation implies that each sound grain has different characteristics; in our case, the duration of each grain varies over time. An algorithm chooses the start and end points of each grain based on the locations of zero crossings within the sound. Zero crossings are points where the wave's amplitude equals 0 (and thus crosses the horizontal axis). A convenience of using zero crossings is that no window need be applied to the grains, since they already start and end at 0. Another quality of zero crossings is that they usually indicate the beginning of an impulse or large transient within the sound, so repeating the grains multiple times emphasizes the major impulsive portions of each earthquake.
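A sketch of the zero-crossing segmentation, assuming NumPy. The upward-crossing detection and the minimum grain length are illustrative choices, not details taken from the original implementation.

import numpy as np

def granulate_at_zero_crossings(x, reps=9, min_len=32):
    # Indices where the signal crosses zero going upward
    zc = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    bounds = np.concatenate(([0], zc, [len(x)]))
    grains = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b - a >= min_len:               # skip vanishingly short grains
            grains.append(np.tile(x[a:b], reps))  # no window needed: ends at ~0
    return np.concatenate(grains)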

Asynchronous Granulation Sound Examples:

REF: Christchurch_276X_HP20.mp3

Christchurch_276X_asychgranzc_reps9.mp3
Audification at 276X with grains at zero crossings repeated 9 times

When the grains are longer, the sound seems to freeze, or time-stretch, on a certain point. Notice how this occurs at the major low-frequency impulses of the sound. High frequencies cross zero more often, producing shorter grains that are less emphasized by this technique.

Time-Stretching, Pitch Shifting, and Filtering via Phase Vocoding

The phase vocoder is essentially a more advanced granulation tool. It breaks the sound into segments (which can be thought of as grains) of equal duration and uses a Fast Fourier Transform to analyze the frequency spectrum of each segment. One may interpolate additional spectra between two segments to extend the duration of a sound while maintaining its frequency content. If one time-stretches a sound in this fashion and then alters the playback rate, the result is a change in pitch without a change in duration (unlike our audifications). We can also manipulate the spectrum of each segment to apply filtering effects. In addition to high-pass filtering at 20 Hz, we sometimes remove frequencies below a certain amplitude threshold, which de-noises the sound and leaves only the most prominent frequencies.
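A sketch of the per-segment amplitude threshold using an STFT, with time stretching and pitch shifting delegated to librosa's phase-vocoder-based effects. This approximates, rather than reproduces, the exact processing used for the examples below; the file name is a placeholder.

import numpy as np
import librosa
from scipy.signal import stft, istft

def threshold_spectrum(x, fs, thresh=0.02, nperseg=2048):
    # Zero every bin quieter than thresh * the loudest bin in its frame
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mask = np.abs(Z) >= thresh * np.abs(Z).max(axis=0, keepdims=True)
    _, y = istft(Z * mask, fs=fs, nperseg=nperseg)
    return y

y, fs = librosa.load('Christchurch_551X_HP20.wav', sr=None)
denoised = threshold_spectrum(y, fs, thresh=0.02)
stretched = librosa.effects.time_stretch(y, rate=1/8)        # 8 times longer
shifted = librosa.effects.pitch_shift(y, sr=fs, n_steps=24)  # up a factor of 4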

Time-Stretching, Pitch Shifting, and Filtering Sound Examples:

REF: Christchurch_276X_HP20.mp3

Christchurch_276X_phasevoc_strech2.mp3
Audification at 276X time-stretched by a factor of 2


Christchurch_276X_phasevoc_strech8.mp3
Audification at 276X time-stretched by a factor of 8

The examples above demonstrate the smoother time stretching effects of the phase vocoder when compared to simple granulation techniques.

REF: Christchurch_551X_HP20.mp3

Christchurch_551X_phasevoc_thresh0.02.mp3
551X Audification filtered so that frequencies below 0.02 of the max amplitude for each segment are removed


Christchurch_551X_phasevoc_pitch4_thresh0.9.mp3
551X Audification filtered so that frequencies below 0.9 of the max amplitude for each segment are removed and pitch shifted by a factor of 4

The first example above produces a slightly more tonal, pitched sound from the original event. The second example applies a much higher threshold, leaving far fewer frequencies in the sound; we hear a micro-melody as the strongest frequency of each segment changes over time. The examples below build upon this notion by increasing the time-stretch factor to 32. By drastically "zooming in" on the time dimension of a sound and retaining only the most prominent frequencies, we can extract subtle melodies from single events. As the last example shows, retaining more frequencies per segment moves the result from melody back toward tone.


REF: Christchurch_1103X_HP20.mp3

Christchurch_1103X_phasevoc_stretch4_pitch2_thresh0.3.mp3
Audification filtered so that frequencies below 0.3 of the max amplitude for each segment are removed, time-stretched by a factor of 4, and pitch shifted by a factor of 2.


Christchurch_1103X_phasevoc_stretch32_pitch2_thresh0.3.mp3
Audification filtered so that frequencies below 0.3 of the max amplitude for each segment are removed, time-stretched by a factor of 32, and pitch shifted by a factor of 2.


Christchurch_1103X_phasevoc_maxpartials100_stretch32_pitch2.mp3
Audification filtered so that the 100 most prominent frequencies of each segment remain, time-stretched by a factor of 32, and pitch shifted by a factor of 2.

Lastly, rather than using the phase vocoder to extend the duration of our sounds, we can audify a sound at a lower playback rate, where it would normally be inaudible, and then pitch shift it with the phase vocoder to bring it into hearing range.

(The original 20X audification is entirely below 20 Hz and cannot be heard)

Christchurch_20X_phasevoc_pitch5.mp3
20X Audification pitch shifted by a factor of 5
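A sketch of this last step, assuming librosa and a hypothetical 20X audification file. A pitch factor of 5 corresponds to 12 x log2(5), roughly 27.9 semitones upward.

import numpy as np
import librosa

y, fs = librosa.load('Christchurch_20X.wav', sr=None)  # hypothetical file
# Shift the sub-audio rumble up by a factor of 5 into the audible range
audible = librosa.effects.pitch_shift(y, sr=fs, n_steps=12 * np.log2(5))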

Composition of the Christchurch Earthquake Mw 6.1 by Ryan McGee (duration 2:11)
Christchurch.mp3 > More info at http://soundcloud.com/seismicsounds/christchurch-earthquake
© Ryan McGee and D.V. Rogers, March 2012

RESEARCH

"Philosophical and psychological research results show that there is a substantial difference between seeing and hearing a data set, because both evolve and accentuate different aspects of a phenomenon. From a philosophical point of view the eye is good for recognizing structure, surface and steadiness, whereas the ear is good for recognizing time, continuum, remembrance and expectation. In studying aspects like tectonic structure, surface deformation and regional seismic risk the visual modes of depiction are hard to surpass. But in questions of timely development, of characterization of a fault's continuum and of tension between past and expected events the acoustic mode of representation seems to be very suitable." - Florian Dombois
Earthquake Sound of the Mw9.0 Tohoku, Japan earthquake - Zhigang Peng
http://geophysics.eas.gatech.edu/people/zpeng/Japan_20110311/

Listen, Watch, Learn: SeisSound Video Products - SRL: Electronic Seismologist
http://www.seismosoc.org/publications/SRL/SRL_83/srl_83-2_es/

Earthquake Music - Zhigang Peng
http://geophysics.eas.gatech.edu/people/zpeng/EQ_Music/

Sounds of the Parkfield Earthquake (2004)
http://www.cisn.org/special/evt.04.09.28/sounds.html

Listening to Earthquakes - Andy Michael
http://earthquake.usgs.gov/learn/listen/index.php

The Sound of Seismic - John N. Louie
http://crack.seismo.unr.edu/ftp/pub/louie/sounds/index.html

Auditory Seismology - Florian Dombois
http://www.auditory-seismology.org/

Time Compression - Florian Dombois
http://www.auditory-seismology.org/version2004/time-compression.html

EARTH INSTRUMENT - BLDG BLOG
http://bldgblog.blogspot.com/2006/11/earth-instrument.html

Seismic Sound Art and Music

Inner Earth - A Seismosonic Symphony - KooKoon (1996)
http://www.agnld.uni-potsdam.de/~shw/ABSTRACTS/ScherbaumSeismoSound/InnerEarth.html

Earthquake Quartet #1 - Andy Michael (1997)
http://earthquake.usgs.gov/learn/music/

Mori: An Internet-Based Earthwork - Ken Goldberg (1999)
http://goldberg.berkeley.edu/art/mori/

Singing Songs of Volcanoes - Domenico Vicinanza (2007)
http://www.youtube.com/watch?v=FNRQ_LuzMt4 (0:30 - 3:30)
ref: http://www.agiweb.org/geotimes/apr07/article.html?id=trends.html
Listen to Mount Etna > grid.ct.infn.it/etnasound/page4/page8/etna.aif

Near-Realtime SoundQuakes (Author Unknown) (2007)
http://www.flyrok.org/

Tectonic - Micah Frank (2010)
http://micahfrank.com/tagged/tectonic

Not of This Earth Sonifications

NASA Solar Wind Sonification
http://cse.ssl.berkeley.edu/stereo_solarwind/sounds_programs.html

Sonifying the Cosmic Microwave Background
http://www.mat.ucsb.edu/res_proj7.php

REFERENCE PAPERS

Listening to the Earth Sing - 1992
Chris Hayward
Published in Auditory Display: Sonification, Audification, and Auditory Interfaces, Edited by Gregory Kramer. 1994

Using Audification in Planetary Seismology - 2001
Florian Dombois

Auditory Seismology on Free Oscillations, Focal Mechanisms, Explosions and Synthetic Seismograms - 2002
Florian Dombois


Using Audification to Distinguish Foreshocks and Aftershocks - 2006
David Eberhard


Sonic Explorations with Earthquake Data - 2008
Manuela Meier & Anna Saranti

Sonifyer: A Concept, a Software, a Platform - 2008
Florian Dombois, Oliver Brodwolf, Oliver Friedli, Iris Rennert, Thomas Koenig
Sonifyer-Dombois2008.pdf


Auditory Displays and Sonification: Introduction and Overview - 2009
Ryan McGee

http://www.lifeorange.com/

Audification, Chapter 12. The Sonification Handbook - 2011
Florian Dombois and Gerhard Eckel


Earthquake Sounds - 2011
Andrew J. Michael


Listening to the 2011 Magnitude 9.0 Tohoku-Oki, Japan, Earthquake - 2012
Zhigang Peng, Chastity Aiken, Debi Kilb, David R. Shelly and Bogdan Enescu

http://www.seismosoc.org/publications/SRL/SRL_83/srl_83-2_eq/

DVR NOTES Jan/Feb 2012


BOOKS:

Auditory Display: Sonification, Audification, And Auditory Interfaces - 1994
Edited by Gregory Kramer
http://www.amazon.com/Auditory-Display-Sonification-Audification-Proceedings/dp/0201626047

The Sonification Handbook - 2011
Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff
http://sonification.de/handbook/