Spatial encoding modes

In sound, and especially in electroacoustic sound, spatial information can originate from different modes. The images below attempt to illustrate some of these modes. The observation I would like to make is that these modes can, and will, co-exist in a single compositional work. The implication of this co-existence is that the spatial information contained in different modes may complement, or perhaps interfere and compete with, each other. When they compete, the coherence of the spatial-audio image is reduced and so the ‘virtual reality’ is weakened.

Symbolic mode

Ligeti’s Atmospheres is an orchestral work that makes extensive use of long slow volume fades.

I’m suggesting that these long, slow volume fades are a symbolic representation of distance. For example, an increasing volume symbolises movement towards the listener, which might result in greater engagement with, or focus on, a particular set of sounds. One of the points I will make in my thesis is that there is actually *a lot* of spatial symbolism in orchestral and stereo/mono electroacoustic works.
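To make that symbolism concrete, the inverse-distance law (a standard acoustic relation, not something stated in this post) lets a gain change be read as an implied change of distance: roughly every 6 dB of attenuation doubles the apparent distance. A minimal Python sketch, assuming a 1 m reference distance:

```python
import numpy as np

def implied_distance(gain_db, reference_distance_m=1.0):
    """Map a gain change (dB) to the distance implied by the
    inverse-distance law: every -6 dB roughly doubles the distance."""
    return reference_distance_m * 10 ** (-np.asarray(gain_db) / 20.0)

# A long, slow fade from 0 dB down to -18 dB reads, symbolically,
# as a source receding from 1 m to roughly 8 m.
fade_db = np.linspace(0.0, -18.0, 10)
print(implied_distance(fade_db))
```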

Live-room mode

The Berlin Philharmonic performs Atmospheres in the Berliner Philharmonie concert hall.

That physical space adds a layer of spatial encoding on top of Ligeti’s symbolic spatial encoding.

Concert halls each have their own identifiable acoustic signature, which has varying effects on the sounds performed within them. The concert hall’s reverberation will have an effect (positive or not) on Ligeti’s use of volume to symbolise distance.

Virtual-reality (simulation) mode

Now imagine that the Berlin Philharmonic’s performance of Ligeti’s Atmospheres (in the previous mode) was recorded using a SoundField microphone (a high-quality spatial-audio microphone).

If this recording is now played back on a large multi-speaker array in a separate performance hall, then the Berliner Philharmonie’s acoustics will be re-created as a virtual reality (which itself contains Atmospheres’ symbolic spatial encoding).
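For context (my assumption; the post doesn’t go into the signal chain): a SoundField microphone outputs first-order Ambisonics (B-format: W, X, Y, Z), which is then decoded to whatever loudspeaker layout the second hall provides. A very rough sketch of a horizontal projection decode, noting that real decoders and normalisation conventions (FuMa, SN3D, and so on) are considerably more involved:

```python
import numpy as np

def decode_bformat_horizontal(W, X, Y, speaker_azimuths_deg):
    """Illustrative 'projection' decode of first-order B-format
    (W, X, Y signals) to a horizontal ring of loudspeakers.
    Real-world decoders use more careful weighting."""
    feeds = []
    for az in np.radians(speaker_azimuths_deg):
        feeds.append(0.5 * (W + X * np.cos(az) + Y * np.sin(az)))
    return np.stack(feeds)  # one feed per loudspeaker

# e.g. an eight-speaker ring at 45-degree spacing
ring_azimuths = np.arange(0, 360, 45)
```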

But this virtual-reality recreation is done within the context of a new physical space, which itself adds a new layer of spatial encoding. So we now have 3 layers (or modes) of spatial encoding.

How do the spatial encodings within the different modes affect each other? Ligeti’s Atmospheres was designed specifically to be performed in concert halls, so Ligeti would have intuitively composed the volume fades to achieve their desired effect in such a space. I am postulating that the sense of distance would be conveyed with varying degrees of success depending on the acoustics of different performance venues.
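One plausible reason (my gloss, borrowed from standard room acoustics rather than from anything stated here): a genuinely receding source loses direct level while the hall’s reverberant level stays roughly constant, whereas a volume fade reduces both together. How audible that difference is depends on the room. A small sketch of the classic diffuse-field approximation:

```python
import numpy as np

def direct_and_reverberant_db(distance_m, room_constant_m2, Q=1.0):
    """Diffuse-field approximation: direct intensity falls as 1/r^2,
    while the reverberant level is roughly constant for a given room
    (set by its 'room constant'). Returns relative levels in dB."""
    r = np.asarray(distance_m, dtype=float)
    direct_db = 10 * np.log10(Q / (4 * np.pi * r ** 2))
    reverberant_db = np.full_like(r, 10 * np.log10(4 / room_constant_m2))
    return direct_db, reverberant_db

# In a drier hall (larger room constant) the direct/reverberant cue
# survives over a wider range of distances than in a very live one.
distances = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
print(direct_and_reverberant_db(distances, room_constant_m2=200.0))
```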

Now consider that a spatial-audio composer uses a volume fade within an entirely new and original spatial-audio work. I would like to suggest that a volume fade will most often be a symbolic use of spatial encoding to communicate distance (volume fades simply don’t exist in the real world). But consider that this sound is placed in a constructed virtually-real sound landscape and doesn’t move at all while its volume is being reduced. The sound then uses two separate spatial encoding modes: the first symbolic, the second virtually-real. And they contradict each other. One says the sound is receding from the listener; the second says it is not.

This will weaken the coherence of the spatial-audio image, and with it the listener’s ‘suspension of disbelief’. This conflict of spatial information, originating from different spatial encoding modes, is (I’m arguing) the principal reason spatial-audio composition has not met expectations in producing immersive, realistic soundscapes.

Of course, composers have only recently had access to reality-equivalent systems. Until now, the listener’s experiential memory of sound in space could only be drawn upon by referencing it symbolically. Such things as the use of volume must now be *co-ordinated* to be consistent with the virtual reality that is being presented. Producing the believable, immersive experiences we have come to expect from reality-equivalent systems will take a lot of re-consideration of how we use spatial information communicated through the symbolic mode.

As a demonstration of this separation of spatial encoding modes, I will re-create Ligeti’s Atmospheres, but in pure virtual reality. In other words, I will attempt to extract the spatial encoding from the symbolic mode and re-implement it in the virtually-real mode. In the first instance, this will principally concern volume fades being interpreted as movement in distance.
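A sketch of what that extraction step might look like, reusing the inverse-distance relation from earlier (the function name and parameters are hypothetical, not a description of any existing tool): the score’s gain automation becomes a distance trajectory, and the virtual-reality renderer moves the source rather than simply attenuating it in place.

```python
import numpy as np

def fade_to_trajectory(gain_envelope_db, start_distance_m=2.0):
    """Hypothetical translation step: read a notated volume fade as a
    distance trajectory (inverse-distance law), so a virtual-reality
    renderer moves the source instead of attenuating a static one."""
    return start_distance_m * 10 ** (-np.asarray(gain_envelope_db) / 20.0)

# A gradual 12 dB fade-out becomes a retreat from 2 m to roughly 8 m,
# which the renderer can then reproduce with consistent distance cues.
fade_db = np.linspace(0.0, -12.0, 8)
print(fade_to_trajectory(fade_db))
```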
