Thinking about Music

All content on this site is copyright mjkmercer


Recognising Sounds – Knowing What we are Hearing

Note: see my essay ‘Sounds, Location and Perception’, which is a prelude to this and covers the journey of a wave and its transformations from source to the ear.

Once the ear has received the complex array of waves (now collected at the ear canal as a pulsating tube of air) it falls to the rest of the auditory system to ‘work out’ the content and its meanings, and to identify which components go together to form each discrete auditory stream. The apparatus now involves the brain and its experience of the world to carry out this almost instantaneous sifting – first to detect threat (which would make us jump before we knew why), then to look for messages with meaning for us – ones we need to respond to by focusing and deciding to do so – all while still hearing the background sounds. This is Auditory Scene Analysis.

The Oncoming Wave

As I sit writing this, I am aware of the following:

There is the slightest whisper of short strokes of my pen like silk on paper; the paper emits a tiny hollowness as the nib touches the surface and behaves like a membrane (I’m resting on a pad, not a desk). There is also the soft breath of my computer fan – so continuous that I usually don’t hear it. Outside (I know it is outside because of my location perception apparatus) a bird unwinds its minimalist song. I cannot identify what sort of bird it is. If I could, the experience would be different. Its sound stream would be that of an ‘x’ and would form an impression complete with a label, and my familiarity might downgrade the event to the level of the computer fan. But being a musician I enjoy the tune, and it is something I have heard before and recognise. This is a special sort of sound stream in that it could be interrupted and I would still know what was coming next. Consider the inevitability of each note of a wood pigeon’s call. I’ll come back to this point about known sounds from knowledge templates versus unknown sounds, which become categorised by extrapolation from templates of similar sounds.


Theory of Forms and Sound Recognition Templates


I need to digress into Plato’s Theory of Forms briefly – so here is an early philosophy warning.

Sounds have form that exists in time and space. The sound of a car going by is such an event: it is a unique event yet totally familiar. Were we to record it and look at the waveform in great detail we would know, in keeping with our knowledge of things like snowflakes, that we will never see exactly the same waveform again – and yet there will be plenty of cars passing by. Each event has sufficient characteristics in common for us to be able to recognise the sound and label it, because we are able to access the idea of a car sound in our minds and instantaneously realise that it conforms to that general form.

The sound recognition templates (forms) that we have in our minds can only be broad in scope. We cannot hope to match a sound identically – it might appear at a different distance, be a different car, in a different surrounding acoustic; we might have the window open or closed. So many factors guarantee that we will almost never hear the same sound twice.

We recognise the sound of a violin easily, and based on our wide experience of violins we are able to get a close match to the templates we have. Were we to experience an er-hu (Chinese two-string violin) having never heard one before, our cognitive processes would go into overdrive looking for a match. The mind will offer the template of a violin to our understanding, but it will also inform us that it was not a veridical match – that something about the sound was different and defied quick labelling. We might choose then to focus on the sound, to describe to our inner process a wider experience of it. On being told what it is, we then understand what was heard and what its name is, and we file away a template for the future. (Imagine if we described an er-hu as being a little like a crude violin – how that might change our experience of ‘World Music’.) However, this is only one instance of an er-hu so far. If, in future, we heard a very similar sound it might not match the template that was formed from one instance alone. Perhaps we hear the er-hu in a different context – say in a Chinese orchestra – and the mind struggles because it has a hunch about what it is hearing; we might look at the programme and find the word. Then we can re-categorise the template with two instances attached to it. This is a gathering experience which, in time, will form a life of experience and cause us to say ‘yes, we know what an er-hu is,’ without having to go through the room marked ‘violin samples.’
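The template story above can be caricatured in a few lines of code. This is purely an illustrative sketch of the idea, not a model of cognition: the feature vectors, the ‘veridical’ threshold and the numbers attached to each instrument are all invented for the example.

```python
import math

# Made-up two-number 'feature vectors' standing in for sound templates.
templates = {"violin": [0.8, 0.6], "flute": [0.2, 0.9]}

def identify(features, threshold=0.3):
    """Return the nearest template's label and whether the match is
    close enough to count as 'veridical'."""
    best = min(templates, key=lambda name: math.dist(templates[name], features))
    return best, math.dist(templates[best], features) <= threshold

# A familiar sound matches its template exactly.
print(identify([0.8, 0.6]))          # ('violin', True)

# An unfamiliar er-hu is nearest to 'violin' but not a veridical match,
# so (once told what it is) we file away a new template for the future.
erhu = [0.6, 0.3]
name, veridical = identify(erhu)
if not veridical:
    templates["er-hu"] = erhu
```

The point of the sketch is only that recognition returns two things at once: a best label, and a sense of whether the label really fits.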

This is like Plato’s forms – or, to be more up to date and put a similar spin on it (perhaps these different theories of forms aspire to an ideal theory of forms too?), I’ll cite Wittgenstein’s ‘family resemblances’. (In the next draft of this I’ll find the necessary quotes and references to keep us all connected to the world of ‘stuff already said by others’.)


Auditory Scene Analysis


I’m still in my study, listening… The bird continues to make its sound. But how do I know it is one bird and not two? It might well be two, of course – I cannot know each bird so intimately as to be able to distinguish individual sounds (though the bird probably can). But I can extrapolate from the incoming sound wave a single thread of sound. This is where I either recount everything Albert Bregman wrote or just send you off to read his work (see bibliography below), but to save time here is a rapidly and easily digested summary from Wiki…


What do we learn from this? That the auditory system has powerful tools to sort, label and understand streams even when they appear mixed. There is intelligence at work, and it may give you a small thrill of pleasure to know that even a powerful computer struggles to do this – but hold on to your humour, because they are coming…


The Sum of all Sounds


More sounds are layered onto each other in my room: vehicles, voices, violinists (daughter at practice) and so on. All that I can hear is to some extent familiar but in another aspect unique. Similar though they may be, I have never heard them in this context (this mix of other sounds), at this distance, with this room reverberation, or with this physical make-up (my ears are particularly acute today) and so on.


To sum up the listening experience of this moment:

1          The low hum of a large vehicle

2          The bird sound

3          The sound of the pen on paper

4          The fan in the computer

5          Murmuring female voices somewhere – indiscernible words

6          Slight creak from my old faithful chair

7          Distant violin music being played

8          Clock softly ‘chucking’ (It’s a deeper sound than a tick)

…I shall pause and sit still in contemplative listening for a moment….


9          A very distant jet somewhere high above

10        A dog barked.


In the moment of the dog barking, all the other sounds became masked, but I know they had not stopped – and sure enough they re-appeared a moment later when I widened my attention again. Masking, and the illusion of continuity behind the masking, is another piece of lovely hearing perception theory; I am very keen on Brian Moore’s book (see bibliography).


My point: all this is happening early on a quiet morning, before the day has really got going. All these sounds are there to be picked out of the incoming stream and labelled. And I know what each one was. Being of a musical disposition, if I heard something I could not label, I would normally feel compelled to investigate – especially if I could hear musical potential.


So all these sounds are conforming to the templates I have for them, and they have sorted themselves out into labelled streams. I do not confuse the hum of the lorry with the distant music – though I could imagine that happening in some circumstances. As a composer I could make all sorts of things come together in the studio through careful mixing, but at the moment the sounds are from different directions and different distances, and they are therefore not confused.

This single ‘mix’ of the moment ‘now’ (surely grounds for a Cagean aleatory composition?) presents a single vibrating pipe of air to the eardrum, and the mind sorts it out for us and labels each experience with words or familiar feelings. It also works to identify the location of each source and so on. The interesting thing is that I seem able to give my attention to only one at a time, or I can choose to let the whole unlabelled sound wash over me as if it were a single source – and this ability is important because of its implications for music.


Musical Implications


My ears have to work out how to hear an orchestra. I can, of course, hear the different parts that make up the sound I hear. But is that really so? What I actually hear is groups. I cannot, for example, discern individual violins (unless one is particularly loud or bad), but I can pick out the flute. But then the flute is joined in unison by the oboe, forming a new, single-sounding texture. The two combine because they are part of the same auditory scene: their timing events are identical and their pitches change in unison. A good example of this blending of different sounds is the way in which organists create sounds by mixing and layering different sets of pipes in ‘registrations’. We hear a single event, or a single-event texture. We hear a single violin line – unless suddenly the leader plays a melody above the texture.

This auditory scene analysis is critical to our understanding of how we hear music and how therefore we are going to record it.


When we listen to music we hear the whole thing or we choose to attend to parts. (Listening to a fugue on the piano is a supreme example of this, and the best advice anybody ever gave me for listening to a fugue is to ‘go with the flow’.)


Separating sounds out for us – such that we do not wrongly mix them up – are:


  • Pitch differences and  similarities
  • Timing differences and similarities
  • Following or not following a pattern (a sequence of timed events, like a ticking clock)
  • Timbre
  • Location or apparent source
  • Event (part of the show/not part of the show)
  • Visual information about the sources


These factors help us to match different parts of the sound to templates to be able to recognise each as a separate element.
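As a toy illustration of just one of these cues – timing similarity – here is a sketch that fuses events whose onsets nearly coincide and keeps the rest separate. The 30 ms window and the event labels are assumptions made up for the example, not figures from the auditory scene analysis literature.

```python
def group_by_onset(events, window=0.03):
    """Group (onset_seconds, label) events whose onsets fall within
    `window` of the previous event -- a crude stand-in for the
    'timing similarity' cue."""
    groups = []
    for onset, label in sorted(events):
        if groups and onset - groups[-1][-1][0] <= window:
            groups[-1].append((onset, label))   # fuse with the current group
        else:
            groups.append([(onset, label)])     # start a new perceptual event
    return groups

events = [(0.000, "flute"), (0.002, "oboe"),   # near-simultaneous: fuse
          (0.500, "clock"), (1.000, "clock")]  # distinct onsets: separate
print(group_by_onset(events))
```

A real auditory system weighs all the listed cues together, of course; a single threshold on onset time is the barest caricature of that process.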

Sorting out Source Location

I wrote previously about how a sound accrues information as it travels to the ear concerning the source location in relation to the listener. The listener also has to process that data at the same time as carrying out the scene analysis.

In the same way that we have templates to recognise sound types and patterns, I suggest that we have a template or model of the world that helps us sort out source locations. I am not sure if I should try to confound things by suggesting that each sound template that guides our recognition holds all the possible variants of location within it – that seems an inefficient way for the brain to do things. I am going to suggest (and try to research further) that localisation (spatial) processing draws on a different set of templates from those we use to identify the sound itself. Some of those templates for location might be linked to the fight-or-flight sound identifiers and cause rapid alarm in us primitive beings, who still want to jump and run if we hear something like a wasp approaching.


As I mentioned earlier, at the eardrum there is only a vibrating column of air. (I will keep it simple, but I am aware that there is other information available through vibrations reaching the back of the eardrum through the head, through bone conduction and so on.) The intelligent ear has a means of assessing how far away a sound is. Sound changes with distance: it changes in level, reverberation and tone. When we hear a sound thus modified we know immediately that we are hearing, say, a trumpet at a distance rather than one close up and then processed. (This we can register from a microphone recording, filtering out for now extraneous information about its general direction.)
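Two pieces of textbook physics sit behind these distance cues and can be sketched numerically: the dry-air approximation for the speed of sound, c ≈ 331.3 + 0.606·T (with T in °C), and the inverse square law for intensity. The functions below are just a convenience wrapper around those standard results.

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temp_c degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def relative_intensity(distance_m, ref_m=1.0):
    """Inverse square law: intensity relative to a reference distance."""
    return (ref_m / distance_m) ** 2

print(speed_of_sound(20))       # about 343 m/s at room temperature
print(relative_intensity(10))   # a hundredth of the intensity at 1 m
```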


But what if we were to do just that – take a close-up trumpet sound and drop it to the rear of a mix? This happens all the time in the studio, and good engineers know it is not just volume and reverb but careful adjustment of the EQ that gives the desired result. The level of verisimilitude seems linked closely to the engineer’s understanding of sound propagation. In my experience, engineers brought up in the ‘hands-on’ school of mixing – or the ‘do it the same way as Bob’ school, as apprentices soon find out – are lost when it comes to working in these more subtle ways. The rock techniques for moving a sound to the rear will not work very well when trying to create a realistic soundstage that contains reproducible distance information in particular and location information in general.
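As a hedged sketch of that idea: level alone does not move a sound back; the highs must roll off too (and in a real session reverb would be added on top). The one-pole low-pass filter below, and the way its coefficient is made to scale with distance, are illustrative assumptions for the sketch, not a studio standard.

```python
def push_back(samples, distance_m, close_m=1.0):
    """Crudely 'move' a close-miked sound to distance_m metres:
    drop the level roughly as 1/r and smooth off high frequencies."""
    gain = close_m / distance_m            # amplitude falls roughly as 1/r
    alpha = min(0.9, 0.1 * distance_m)     # further away -> duller tone
    out, state = [], 0.0
    for s in samples:
        state = alpha * state + (1.0 - alpha) * gain * s  # one-pole low-pass
        out.append(state)
    return out

# An impulse pushed to 5 m comes back quieter and smeared in time.
print(push_back([1.0, 0.0, 0.0], 5.0))
```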


So we have in us an innate ability to assess how far away a sound is. This must be informed by knowledge about where we are. We will know that we are in a cathedral or outdoors, for example. Experience tells us something about what to expect and how a sound will behave in these environments.


Recreating The Experience


The problem for recording engineers is to recreate what the ear has heard convincingly. There is not space here for a general review of stereo and  multi-channel techniques (I’ll write it soon though).

There is much that can be improved in the stereo recording and mixing process by understanding how sound gets to the sentient mind, and much that we can design as a solution to improve it. It is vital to understand that:


At the point where the microphone receives the sound, most of the location information will not be recorded. It will give some distance information and an approximate direction, but it will not pick up what the eardrum receives. Were we to insert microphone capsules in the ear and record the sound that gets to the drum we might have more information to work with, but because of Head-Related Transfer Functions the sound will present uniquely to the individual whose head is being measured. It was in the hope of getting round all these problems that binaural recording was invented – placing microphones in a dummy head to mimic the way our own heads work.


More soon.


MjkM August 2013

Training Sound Engineers to read music

I have been campaigning (unsuccessfully) for many years for improvements to the training that sound engineers get. I have argued that to spend three years at university in a studio learning all about rock guitars and drums and so on is fine, but why not also learn a lot more about music? It seems a terrible disappointment to me that after three years the graduates seek work with, in some cases, no knowledge of music at all. As a bare minimum they should be able to play one instrument to a reasonably credible level.

I have good reasons for saying this…

1   It’s not that hard to learn enough about music to be able to follow a score and know what is going on. You don’t have to be a sight reader – you are not the artist – but if the player (who can read music) says ‘let’s go back to the…’ (whatever musical term you like here: the rit., the Bm chord, the piano entry and so on), the engineer with little understanding of music will be lost. Why would you not want to be able to speak the same language as the musicians? I have often given a single 90-page book to young people wanting a career in music and told them just to get familiar with its content and practise following a score (a Beethoven sonata will do).

2    You need the work. You need to stand above the other applicants, and a flexibly skilled person is far more likely to get the job. Studios are not full of rock bands day in, day out – unless you are lucky enough to work at a major specialist studio (and most of those jobs are filled by long-term staff who will only move on by dying or going deaf – and some still hang on after that!). The real world of studio work is a mixture of rock, folk, pop, classical, jazz, schools music, karaoke and so on.

3     You will never be a great engineer if you don’t understand the thing you are working with. How could you?

4     If you don’t play an instrument how will you know when somebody is playing well or not – or even – will you notice if it is in tune and in time? Some engineers, I grant, can do this without being able to play but they still lack any empathy with their guests.

Let me nail the point for you. When I interview staff to work in my studio it is a prerequisite that they can read music and play – and that is just for the non-classical side. It was my frustration at finding too many people trained on a production-line basis with theory diagrams and no real listening experience, or with only one string to their bow (‘I do drum and bass’), that led me to wonder what on earth they were doing for three years. Really – the technical knowledge of how to use the equipment can be taught in a single term. Reading music can also be taught in a single term. Love of music in general – a lifetime.

Other things that are not taught well everywhere:

Understanding and appreciation of all musics

Understanding and appreciation of all instruments

Listening training on somewhere in the region of 1,000 critical tracks across a wide spectrum

Attending a wide range of live concerts to see what the real thing is like

Learning to manage projects

Learning to get the best out of people and encourage musicians


There are some universities that understand what it takes, but too many institutions are keen to claim successful graduations rather than train their students for the real world. I could go on…



Sounds, Location and Perception

What is a Sound?

Reading various writers on the subject of sounds and their identity, I note what appears to be a level of confusion about what we mean by the word ‘sound’. What is a sound? Where is a sound? What is its ontological status? The various writers (see bibliography below, but a very good introductory essay can be found at: ) speak of four main theories: the Proximal Theory, that sounds are where we are; the Medial Theory, that sounds are in the medium (usually air); the Distal Theory, that sounds are at the site of their generation; and finally an Aspatial Theory, that space is not relevant to sound.

What is bothering me is the lack of a human dimension in the discourse. I don’t think it is necessary to complete a theory about things apart from persons. It is too remote from us to have the relevance which the topic demands, and all talk of sound is predicated on a human auditor. We want to know what these things are in relation to ourselves. To discourse on trees falling in forests (which is the unstated question that lies behind all thoughts about the ontology of sounds) is too abstract to be of use. I take a general philosophical stance that things signify in relation to the entity to whom they have significance – in short, ourselves. (I call this Robust Pragmatism informed by Naivety – though I would welcome a professional philosopher’s comments to help make this work out more rigorously.) I’m happy to call upon support from such a professional:

‘…sounds directly perceived are sensations of some sort produced in the observer when the sound waves strike the ear.’ (Maclachlan, 1989)

I want to say that a sound is only a sound when experienced by an auditor who has the apparatus to detect it and the intellect to interpret the incoming waves in a consistent manner. If we consider sounds to be just waves in the air of between 20 Hz and 20 kHz (an ambitious range for the over-twenties in reality), then what of waveforms that lie above and below? Are they still to be called ‘sounds’? There is also an implicit duration assumption of there being a repetition of the alternating compression and rarefaction of the air – were it to happen just once, might we not really hear a sound? To paint an extreme picture: consider going up a mountain in a car until your ears pop, then down the other side. Technically you have just experienced an ultra-low frequency (one cycle per half hour, say – about 0.00056 Hz) of one rarefaction followed by one compression (on top of many other wave events). Could we justifiably (and according to some theories it might be so) be said to have ‘heard’ a sound at this frequency?
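The arithmetic of that mountain drive is easy to check: one full pressure swing over roughly half an hour gives a frequency several orders of magnitude below the hearing range.

```python
period_s = 30 * 60             # one rarefaction-plus-compression cycle, in seconds
frequency_hz = 1 / period_s    # about 0.00056 Hz
print(frequency_hz)            # versus a roughly 20 Hz lower hearing limit
```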

I want also to say that a sound contains information that has meaning to the receiver. Let me start with cats, or some simple animal – the information it receives may only be fight or flight, come for food and so on, but it is a sound to the cat, because it elicits a response that is causally linked to the wave stimulation it receives. I know that sometimes the cat appears not to hear (but that’s between me and my cat). What can we really know of an animal’s experience of sound other than its response or lack of it? If I communicate with a friend and process my words electronically to be in the 40 kHz to 60 kHz range, they will not hear it. Can I be said fairly to have communicated simply because I created a wave in the air? I would suggest not.

Some writers have spoken about objects vibrating and still ‘sounding’ even in a vacuum. I would contend that I could still ‘hear’ a sound in a vacuum if, for example, I were to bounce a laser beam off the object’s surface and create a sound from the reflection – it is still a transmission of a wave using only a bit of translation equipment, just like the air waves translated by the ears.

Location of a sound

Much gets written about the location of a sound. Where is this sound in relation to us? Bear in mind: the locational information about the source of the sound is not encoded in the waves of energy being emitted by the object. The source knows nothing of the space in which it takes place, and yet writers speak as if it did – as if the information about its location were part of the wave. Let’s go on a journey with a wave:

1        A sound is generated. Let’s keep it as simple as a finger snap. (The reason for this is that it is a point source – I could enlarge for many pages on the problems of musical instruments, which are rarely point sources but complex arrays of sources.) There is a sphere of sound waves (pulses of rarefaction and compression) radiating from the source at a speed of about 350 m/s – not forgetting that this is an approximate number depending on air composition and density, pressure and temperature, plus other subtle factors. As soon as the wave has travelled an infinitesimally small distance it will have changed – albeit undetectably at this point: it will have been subject to the air’s variable elasticity and have been modified in tone and volume within a nanosecond. This process of tonal and volume change over distance is continuous right to the point of audition at the eardrum, at which point these factors cease to change. If we go too far away from the sound source then the energy of the waves (spreading and diminishing according to the inverse square law) reduces to Brownian motion in the air and is therefore as undetectable and lost as a homeopathic remedy.

2          The sound travels in air, constantly transforming, and can be said to be accruing information as it travels. It does not know how far it needs to go to meet an auditor, but it collects tonal transformation and volume modification as it goes. Within a very short time it will also encounter a surface – often the floor or ceiling first – and as soon as it does so, it generates a reflection of itself (losing a tiny bit of energy to the surface in the process). Another part of the expanding sphere of sound waves will encounter another surface – possibly a wall – and another reflection will be created, and so on until all the primary surfaces have been encountered. By the time the furthest surface has been met, secondary reflections from the first surfaces will be following the main sound and will also be travelling to the next surface they encounter, and so on. We soon get to tertiary and quaternary layers until we end up with something we clearly call reverb. (I will leave the special case of echo aside for now, but the keen reader can quickly fish out some determining factors and add them to this account, along with commentary on early reflections.) The totality of reflections and complexities added to the wave also gives us information about the building it is in. To a musician, nothing describes a building as accurately as the sound it makes. The sound still has no regard for the listener, who may interpose her ears at any point or time in this journey, but it is still accruing information as to the distance from its source and the surfaces it has encountered.

3   Finally it will encounter the listening body, but let me start with the simplified case of a single monophonic microphone, which will be a dumb witness to the sound as it arrives. The apparatus will record the energy-impeded and tonally modified sound, and from that a later auditor to the recording might infer some things about its genesis (such as what sort of object made the sound and how it was excited) but certainly not whence it came. At best they might name the source to the extent of being able to repeat the sound: ‘It’s a finger snap.’ But the sound at this point has also accrued its multiple reflections, and the characteristics of these might lead an auditor to make some assumptions based on their experience of listening and their experience of spaces. They might quickly know that it was recorded in a church or a studio or a domestic environment. In a special set of circumstances, a trained engineer working with established recording venues might even be able to name the space – ‘That’s St. Paul’s Cathedral’, or ‘that was recorded in Air Studios’.

I won’t go into other philosophical inferences that can be made from the sound – such as that it was a person with a hand and fingers, and that there was intent. Such inferences go back to the big bang (and beyond! I hear a philosopher wanting to rush in and fill our existential vacuum).

Let me add a second microphone to the set-up so that we can consider the direction of the sound through use of stereo recording (remembering at all times that ‘stereo’ just means ‘solid’, though we will proceed with the standard assumptions about what the word means). If we are to set up a stereo pair, however, how will we orientate it? An engineer will have set it up to be more or less tangential to the circle of sound waves arriving – i.e. by pointing the microphones in the general direction of the sound source – so they will be adding information to the recorded sound by this simple act (but not to the sound in the room, of course). A true recording of the source might be better if it were made without regard for this and simply picked up, in stereo, what happened in the room. But this leads to difficulties: in some orientations – such as side-on to the source – it will not yield directional information easily.

This ‘adding of information’ to the sound is either witting or unwitting, and is manifest in a vast array of interlacing frequencies and amplitudes that in theory could be reduced to sine waves (though this never seems to work very well when synthesising sounds – we are seldom fooled).

Here though, we have a clue as to what sound and location is all about. The act of placing a stereo microphone reveals that the information added by the selection of its location and its orientation is exactly what we as the listener might do when we go to listen to something.

Our left and right ears intercept the sound wave and ‘hear’ what is going on. It is only at the point when the sound wave reaches the outer ear that all the location information is added. The stereo microphone, correctly aligned, will have picked up a couple of important clues as to location. The interaural level difference (ILD), which is the difference in volume between left and right ears, will be captured more or less faithfully by a correctly rigged microphone pair. The interaural time difference (ITD), the difference in arrival time at the two ears, will also be captured and is an important directional clue – but a moment’s reflection will show that ITD and ILD are not sufficient to tell front from back, still less up from down. ITD and ILD give as much lateral information as can be captured in a microphone.
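The ITD at least can be estimated with a classic back-of-envelope model: Woodworth’s spherical-head approximation, ITD ≈ (r/c)(θ + sin θ), where r is the head radius, c the speed of sound and θ the source azimuth. The 8.75 cm radius below is a common nominal figure, not a measurement of anyone’s head.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head estimate of interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(itd_seconds(0))    # 0.0 -- dead ahead, no time difference
print(itd_seconds(90))   # well under a millisecond for a source hard to one side
```

Note that the model is symmetric front-to-back, which is exactly why ITD alone cannot distinguish a source ahead of us from one behind.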

Note that it is only possible to speak of these things at the microphone. In the air in front of the microphone there is no such measurement; it is the presence of the equipment and its orientation that creates the point of measurement.

4          The microphones have still not captured the full spatial location, however. They will give us clues as to right and left, and the sound itself will carry distance information, but now we have to look at the function of the ears and the head. The ears and head form a complex system, and these have been measured and calibrated to understand the function of all the parts of that system. The spatial information we depend on to locate sounds in space is a combination of the distance and direction information available at the microphone with the information the physical structure and orientation of the head adds about where we are in relation to the sound source. Each part of the ear modifies the sound in tiny, subtle ways which are individual to each of us – we all learned how to use the shape of ear and head we were born with to translate information about sounds and their location. (Note how a newborn child rotates its head to learn the link between the two.)

Head-Related Transfer Functions (HRTFs) tell us how the head itself and its orientation affect how we hear. The trouble is that our personal HRTF is unique and cannot be codified into a recording – though there are some generic data sets that might be applied to a recording to recreate a sense of location.

5          We move our heads. When listening we ‘cock an ear’, as it were. By moving our heads we add to the information layer and include more about phase relationships in the sound (not dealt with here, but again sufficient material for another day); we get to create small shifts in volume and tone that give us more clues as to location, based on our primitive sense of locating danger, and we employ all of this almost like an analytic instrument to locate the sound.

6          Brain processing takes place. Bregman’s book on Auditory Scene Analysis is too large to summarise here, but it describes and analyses how we sort out incoming data streams in our minds and separate out what belongs in which group of auditory experience – hence we can pay attention to one sound sequence whilst another is sounding, and we are not confused by a flute melody and simultaneous chatter from a child, for example.

One can envisage scenarios where all the above criteria are in place but we do not hear the sound as a sound, because the cognitive faculties are not correctly aligned or trained to respond and ‘tell the mind’ that such and such is a sound with content and meaning that we have to respond to (even ignoring it is a conscious response).


When we talk of what a sound is, I suggest it is the net effect of all the above factors and until it has been interpreted by a conscious entity it is only vibrations in the air with the potential to yield information in the right circumstances.

To speak of where a ‘sound’ is located seems like a senseless question unless we recognise that it is only a sound when the brain has processed the incoming impulses from the ear – and it is only an information rich sound to which we can respond when it  has been matched with our experience of sounds and we know what it means to us.

Defining sound as being 'in the source object' seems wrong because of the transformative journey the waves must take. Vibrations with the capacity to excite a transmission medium, shaped by the material of the object, might be a start. Medial wave propagation has some useful features as long as we realise and acknowledge that it is a time-dependent thing (as is all sound, of course). We might take an infinity of time slices to define it (I don't have room here for differential calculus, but other researchers have), yet that seems to me not to be the sound – but an (unreliable!) transmission medium. And the proximal theory leaves us with hearing what is at the ear – but it still needs the intercession of intelligence to make an air vibration, no matter how sophisticated, into a sound that has a meaning.

I will have to write up all the steps between my assertions and show all my citations (work underway!)

But here are a few key sources:

Blauert, J. (1997) Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, MA: MIT Press.

Bregman, A.S. (1990; reprinted 1994) Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Nudds, M. and O'Callaghan, C. (eds.) (2009) Sounds and Perception: New Philosophical Essays. Oxford: Oxford University Press.

Maclachlan, D.C.L. (1989) Philosophy of Perception. Englewood Cliffs, NJ: Prentice Hall.

O'Callaghan, C. (2007) Sounds: A Philosophical Theory. Oxford: Oxford University Press.

The bibliographies in these books alone will give you a lifetime of reading.

mjkm August 2013

Electronic music and the idea of Performance

Something that always bothered me about performances of electronic music was the lack of a human touch. It might be that this was a desired outcome – certainly some composers would agree that the mechanical nature of the sound is within the aesthetic realm they wish to explore. It was whilst working with Csound [1] the other day that I got thinking. I had earlier spent quite some time working on the performance of a guitar piece, fiddling with slurs and fingerings until it sounded right, then trying out a different plucking position to see if the sounds might blend a little better, and it struck me that there is no equivalent refining process in electronic music. (I am sure I am wrong about this, so please flood me with counter-ideas – a conservatoire of electronic performance, maybe?)

There might be a number of reasons for this.

Firstly, in the early days there were few musicians working in the field – it was lab coats and degrees in physics and maths, and it showed. Much of the output was clever but didn't move anybody – it was not an emotional experience, and this is still true to some extent today. We are fascinated by what we hear but not moved.

Then, there is usually one and only one performance, which may become a CD and is never recorded again. Some of the recording artefacts are arbitrary and unrepeatable; often the composer (who is also the performer) has moved on to new ideas; perhaps the set-up (swinging microphones) will always produce something so different through the mechanical arrangement that the human touch is not relevant to the realisation. (Institute for excellence in microphone swinging?)

Nobody gets the chance to go back and re-examine how the piece might be better realised. Some of the great electro-acoustic pieces are fixed, in that what was done was done and is complete – sealed in its time-capsule recording forever. Whether we are discussing the 'Étude aux chemins de fer' of Pierre Schaeffer, a more recent work such as 'Mutations' by Jean-Claude Risset, or, even more recently, the wonderful work of Robert Normandeau (google and enjoy), we are talking about works whose realisation is also a closed door on their future. I'm sure not many scores exist for future generations, and fewer instructions other than the CD itself.

The other reason that little is said about performance is that there is not much 'hands-on' work (except where live acoustic instruments are processed). Working in Csound, or in an electronic sound lab processing a recording, you get the feeling of something much more like either computer programming or hand-knitting an item of winter wear. There is little or no discussion over the expression of a particular sequence (melodic) or the balancing of a chord (harmonic). Perhaps this is also because it takes months to programme something in Csound, so the idea of slightly stressing and sharpening a leading note in a violin line is from another world. In the electronic work the composer is more likely to require that a filter be opened in the last five seconds of the piece.

Where I am going with this is that they are different worlds. Performance of acoustic instruments is to some extent a black art; some never get it right, some train for twenty years to become the great performer we hear. But in electronic music, excellence is more concerned with mathematical innovation and sound mixing than with the idea of performance itself. And I mean to include live performance in this. Performances I have been involved with were more concerned with just getting the technology to work as specified; no musician had rehearsed the piece, nobody had worked out a series of options that would serve the sound better, nobody was interested in the man/machine interface.

There has been a lot of work on MIDI interfaces for performance to address this lack of human touch, but there is no literature on how to realise any particular piece, what refinements might be available and so on. There is little in the way of musicological analysis that might be the starting point for interpretive discourse. Perhaps it is time I wrote one.

Maybe this situation comes about because of the fundamental unrepeatability of the event.  There is no wrong reading of the score, no better interpretation, no comparative score from a similar composer  that would inform its realisation.  For this reason it is to me a different world. But this is not to denigrate it – I love the things it offers and I am listening to Robert Normandeau as I write.  But it is with very different ears that I will later assay the new recording of the  Britten Cello suites I have just bought.

I don't think I have said all I want to on this topic yet…. keep an eye out for a sequel.

[1] Csound is the lowest-level sound programming language for electronic music – it's free software but a ten-year study to master, and each composition takes an eternity to realise simply because every event has to be specified in incredible detail to get it right. It's a bit like writing down everything involved in making a cup of tea – down to the three-dimensional mathematical trajectory of your hand as it lifts the kettle – an exercise in Zen mastery.

Stuart O’Connor and his Music


This is not a review of an album as such – more a review of a life (so far) work.  Stuart is an artist who works with words and music to engage his  audience in his ideas. There – I made a distinction that says more than just  ‘musician’.  But I’m not going to demean Stuart or you readers by offering some shorthand pigeon hole in which to place him. That would be unfair to him and to pigeons.  Stuart is what I like to call ‘ the real deal’. We have all been to gigs and seen  people strutting their stuff and we know at some instinctive level that this is just an act. It is a construct designed as a platform for the ego or a cynical money getter or something else. We don’t go away humming the tunes and we certainly don’t care to ponder the lyrics. There would be no point. I don’t want to share my consciousness with them.

Stuart has developed his own style under the tutelage of his own good taste and determination to innovate. In this innovation is a basic honesty – you feel that if he found a false note in anything he wrote he would quickly get rid of it and work the material again. His songs, like those of all the best songwriters, come from his life: his life in a white van, travelling around the country, performing wherever he can get an audience – and further afield, swapping gigs with people in Australia, New Zealand and Japan. His creativity involves the way he runs his whole venture. It is as if he decided to re-invent how the whole touring-musician thing should operate.

I’ve known Stuart for more than a decade and in that time seen his  output develop in maturity to the point where the new song video just released (why do we say released? it wasn’t in captivity or being artificially held back) (see link)  sums up the standard of his work so far. Yes, he really does pull all that off live. The use of pedals and so on is not new – but he doesn’t let it dominate as an ‘axeman’ might, he lets the fireworks in the guitar rest below his  lyrics, supporting and  sustaining the target of his performance – the songs.

I have been involved at the periphery of Stuart's music in all its guises – as front man for the alternative band 'My Pet Junkie', as a solo artist and now as the progenitor of the 'Stuart O'Connor Band'. All incarnations are excellent and I recommend hearing and seeing any of them if you can.

For a fuller picture, hunt around the web (or just type the name into Google) and you'll find a trove of music that defies compartmentalisation and ought to be heard more widely.

Support Stuart too – even if you don’t rush out and buy his albums (which you should) just go and  give him some good feedback on his Facebook – to let him know  you appreciate  the dedication to his art that he exhibits.

Helicopter Quartet – Stockhausen

This work gets a lot of stick for its seeming excess and pointlessness, but hold on. It is only a logical development of some of the spatialisation ideas that Stockhausen had been writing about for several decades. The early Gesang der Jünglinge (critical listening for an understanding of contemporary electronic music) and Gruppen (critical listening for an understanding of space in music) both looked to greater and more expansive spatial expression. The Helicopter Quartet is an extension of that idea.

The quartet members each fly in their own helicopter. Their sound is miked and relayed to the ground, and the sound of the rotors and the engines remains mixed with it. The pitches of the instruments rise with the whine of the machines as each transits into airspace. They play from locations around the airfield, and the listeners listen on in wonder.

All pretty good – except for one small detail: the helicopters whizzing around is only a visual spatialisation. As far as the listener is concerned, the sound being relayed to static loudspeakers, the helicopter is not moving. The movement is all visual. Now, many philosophers of sound can say many things about the links between the senses – and much about what-you-see-is-what-you-hear illusions – but this seems to be a missed opportunity. The better route would be for the players to have their sound relayed through large loudspeakers slung under the helicopters – and thence to spatialise by flying around. (Consider the opening of Apocalypse Now.)

But I’m game for it – anybody got  a helicopter?

Here is a terrible sound quality version ….

Bach and Improvisation

Improvising Bach

I can't help thinking that there is too much slavish following of non-definitive scores by players of Bach. From the top, let us be clear – Bach did not produce Sibelius-software-style copies of his scores. He didn't even mark dynamics, and often the intended instrument was not clear. Everything we hear is generally an interpretation of those basic open scores – usually by a scholar well versed in Bach performance scholarship, but still not somebody who actually knew the composer.

I find, when playing Bach, that the music wants to find its own way; you can almost sense the composer improvising (brilliantly!), and that he would hate what has happened to his music. Anecdote alert… coming shortly…

I play many pieces by Bach and they are not easy. Bach wrote for nothing but master performers, so beginners' pieces are always redactions, edits or downright bowdlerisations – dismiss them at once. Get back to the best scores you can find, preferably facsimile editions where possible, to see what he actually wrote. Then learn them, then let your imagination go and work them.

Example: the very first bars of the 'cello suite in G are well known to every 'cellist. It is not possible to get anywhere without coming across them and for them to become favourites. The bowing pattern for those opening bars varies according to which edition you read, and yet none of them appears in the original manuscript. I found the same when transposing the piece to play on guitar – I had free choices as to how to slur the notes, and I experimented freely to find a reading I liked the sound of. But this is still not improvisation – it is taking a first step towards liberty.

My true liberation came when I decided, in a repeat, to play something that was more inspired by the music I had just played. It felt good, it still felt Bachian, and it was a joy to play. The audience appreciated it too. I'd like to hear more players loosen up their sensibilities, leave the minute following of the score to students, and let go and explore the music. Anecdote… my friend was playing the Goldberg Variations in Cambridge. In the audience was a line of elderly gentlemen all following the score carefully. I wondered if they were ticking off the bars as they went by. At the end, one of them said it was okay but not very accurately played. I noticed he was following from a different edition. You have to wonder what pleasure somebody might be getting from reading their way through a concert and not listening.

Here is a link to an article on a similar theme.

But finally, I'll leave you with this thought – good improvisation skills can cover up the bar you totally forgot, or smear over the wrong note you just played. Don't forget that unless you are very unlucky and playing in Cambridge, most of the audience won't have the score in front of them, and you are only ever a semitone away from a note that will fit.

Space in Music – locating sounds


The spaces in which we perform, record or listen to music confer a layer of 'meaning' on the music, or invite interpretation of it. That meaning might be created wittingly, in that one of the people involved in the process of music making made a decision or gave an instruction regarding the perceived space and arrangement of the sounds. Or it might arise seemingly unwittingly, where a particular environment such as a church has imposed its acoustic on the sound. I will, however, argue that even this is part of the cultural heritage of our music making, based on decisions made centuries ago when it was decided – for example – that churches should host choral music, the invention and performance of which is not so easily imaginable in a field or a small building, though just about tolerable in a castle, which would then lack the cultural milieu in which to make music. You need a strong sense of worship or the numinous to kick-start choral disciplines.

It is interesting that many textbooks, theses and commentaries on perceptual space in music approach it from the points of view of psychoacoustics, physics and even perceptual psychology (see Brian Moore 1997). This text approaches the same considerations from the musician's point of view – more specifically, from the points of view of the composer, the performer, the producer/engineer and the listener. I am interested in the implications of the decisions made by each of the participants in the music reproduction process.

We can begin with intentionality on the part of the composer, ranging from the acknowledgement and utilisation of acoustic reverberation in the composition of choral works for performance in cathedral spaces, through Wagner's demanding 'space' requirements for staging his works – in particular, the Ring Cycle – to the modern era, in which sound recording has been the largest catalyst for innovation and development in music.

To what extent does a composer consider the space involved in his composition? In most cases it is an arbitrary arrangement born of necessity: the seating of the orchestra and the acoustics of the environment in which it is performed. Coupled with this are the 'rule book' ways in which recordings are made. The composer can more or less assume that the recording will be from the point of view of 'the best seat in the house'.

But might composers be freer to specify more? Might there be a whole language of spatial articulation and implication, were it to be made available? Examples might involve the difference in meaning between a song sung in a dry acoustic and one performed in a vast stone space – the song might be the same but the meanings differ.

From the performer's point of view, the spaces in which we perform, and the implied space in which a recording takes place, tell the listener how to 'read' the piece. Most obviously, a singer-songwriter recorded in a small dry space will have an intimate feel, whereas the same performer recorded in a church will have a wider, more broadcast aspect (terminology to be determined within the text below). Performers require feedback from the space around them: singers require reverb, otherwise they cannot hear themselves. The right acoustic can enliven a performance by an individual or an orchestra. A singer might well move within a space to be heard better; pianos and instruments may be moved on a stage to favour a particular acoustic. Performers seat themselves in space and choose where to sit based primarily on tradition. Composers communicate through the score to the performers – often with very specific staging instructions (John Culshaw, Ring Resounding, 1967).

Producers make interpretive decisions concerning the recording, often in conjunction with conductors and musicians and, in some serendipitous cases, with the composers themselves. The decisions they make concern the illusion of the space in which the sound takes place – and the location of instruments within it. This is more important than might at first seem obvious and has a direct bearing on hearing music correctly (a good example being the loss of identity of a chord that gets spread too widely in space). Composers in the modern world are more involved in the recording process than ever before, as they have the knowledge, training and awareness of the techniques available, and have thoughts about the controls they would exercise on the sound. Very few, however, make notations or demands on the recording process in their scores, though the possibility has been available for some time. One of my key outputs is to suggest a means of communication between composer and producer to ensure that the right recording gets made. The score always contains instructions to performers where necessary, but rarely is there information on – for example – the size of room acoustic that would suit the piece best.

I know that many view the presence of a composer in the studio as a blessing at times and a nightmare at others so I have tried to separate out what a composer might intend in contrast to what the producer might think.

And the listener. How do we cater for a listener who these days consumes their music on rather lower orders of equipment than in the heyday of hi-fi? It is true that the equipment has become more reliable and standardised, but little attention is paid today to seating location with respect to loudspeaker placement – indeed, most homes do not permit such considerations. Most people's enjoyment of music over loudspeakers is of a spread acoustic with little precision in location, or it is through the extensive use of earpieces, which in their own way limit the tonal range and experience of the music.


Space and the reproduction of space have developed since the invention of sound recording. In the early days of monaural recording, space and reverberation were nevertheless available to some degree; however, the technology of the time demanded close proximity to the microphone to permit direct mechanical transmission from the instrument being played to the soft wax recording the sound. The subtlety of any present acoustic was largely ignored.

It was only later in the development of sound recording that sufficient fidelity existed for the acoustic to  be captured and heard.

The other dimension taken for granted today is the location in space of the sound source. A monophonic loudspeaker might be able to reproduce a sense of depth through the three prime depth indicators – roll-off of high-end tone, lowered volume and increased reverberation – but it was only with the advent of stereophonic reproduction that the location of the sound became a consideration.
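Those three depth indicators are easy to sketch in code. The mapping below from distance to gain, high-end roll-off and reverb proportion is invented purely for illustration – it is not an acoustic model, just the three cues applied in one pass to an impulse.

```python
# Sketch of the three monaural depth cues: with distance, the direct
# sound gets quieter, duller (one-pole low-pass) and proportionally
# more reverberant. The distance-to-cue mappings are illustrative only.

def with_distance(signal, distance, sample_delay=3):
    gain = 1.0 / max(distance, 1.0)      # lowered volume
    smooth = min(0.9, 0.2 * distance)    # more high-end roll-off
    wet = min(0.8, 0.15 * distance)      # more reverberation in the mix

    # One-pole low-pass: y[n] = (1 - a) * x[n] + a * y[n-1]
    lp, y = [], 0.0
    for x in signal:
        y = (1.0 - smooth) * x + smooth * y
        lp.append(y)

    # Crude "reverb": a single attenuated echo mixed back in.
    out = [0.0] * (len(lp) + sample_delay)
    for i, v in enumerate(lp):
        out[i] += (1.0 - wet) * v * gain
        out[i + sample_delay] += wet * v * gain
    return out

near = with_distance([1.0, 0.0, 0.0, 0.0], distance=1.0)
far = with_distance([1.0, 0.0, 0.0, 0.0], distance=5.0)
```

The 'far' rendering of the same impulse has a much weaker, duller onset and relatively more of its energy in the echo – which is all a mono loudspeaker needs to suggest depth.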

Spatial location in concert music has of course been a matter of convention – placing violins on the left of the conductor and so on – and early listeners to monophonic recordings may well have transcribed their concert-going experiences onto their listening.

It seems strange to us now to hear stories of singers miming to recordings of themselves and fooling everyone in the early days of reproduction. Perhaps we do project our mental image of the sound onto the recording. Certainly, we listen now at a lower level of high fidelity than we did in the 60s and 70s – sales of high-end equipment are to a very few, generally classically oriented, listeners.

For the purposes of this essay, I will take sound recording from the era of high-fidelity stereophonic recording in the 1950s as the beginning, but I will not ignore the fact that space and sound figure in earlier forms of recording.

Space and location in live performance of course cannot be ignored, and consideration of those features will be included – particularly with regard to how we demand our recordings be made. (Imagine, if you will, Allegri's Miserere recorded in a dead studio environment, and you will immediately see how much the spatial soundscape has become embedded in the performance process and the listener's expectations.) Liturgical sounds belong in ecclesiastical spaces, we might say; concert sounds belong in places that sound like concert halls; and electro-acoustic music belongs in a fictive space created internally by computer.

But spatial listening begins with our environment and our listening equipment. We were born with two ears placed to give precise spatial location, which, the evolutionary biologists will presumably argue and demonstrate, gives us a competitive advantage in the world of self-preservation and the fight for food. Further discussion of this is not in my scope.

The early dawn of man as we recognise him does, however, contain some items of relevance to us. Recent theorists (Cross et al 2012) have posited that cave dwellers were aware of, and placed significance on, the acoustic properties of caves, and that particularly 'rich' spots held such significance for them that this is where they left their marks: cave paintings. Such assertions cannot be proved, but there are clear indications and correlations to show that sensitivity to location and its sound was a more important facet of early man's selection of places to be than was previously thought.

Clearly the sound of the environment 'meant' something to them. What it might have been we can only guess. To go to caves now ourselves and experience the sound in them – by intoning long vowels, for example – leads us to discover echo, reverberation, and then high and low spots, sweet and sour spots and so on. Recent reports indicate that we echo-locate in subtle ways, and that many blind people have learned to listen and steer around objects, reporting lampposts as 'a dark shadow past my ear', for example.

We also understand from physicists that the higher the frequency, the more specific the location of the sound; hence low bass frequencies do not require stereo reproduction and may be left to the work of a single specialised loudspeaker somewhere in the room. The higher up the frequency bands we travel, the more 'on centre' we have to be to experience a strong sense of location in listening, with the highest of instruments (unless the reverberation is out of control in the recording) being easy to 'point to'.

Sound without a space in which to manifest is a largely unthinkable construct. Even if we take electroacoustic sounds generated in a computer, until we are able to 'enjoy' cochlear-implant music the sounds depend upon air for their transmission, and it is in that medium that all spatial clues are generated.

Key features involved in spatial location:

  • Tone balance (EQ)
  • Reverberation
  • Left/right location
  • Up and down (which has no meaning in stereo listening environments but is still a human perception, largely ignored by musicians)
  • Back and front – again, with the exception of experiments with quadraphonic and theatre sound, largely ignored by most musicians and composers (except, e.g., Berlioz)
  • Relative volumes
  • Proportion of mechanical noise from the instrument (bow noises, breathing, key noises etc.)
  • Signal to noise – as distance increases, our perception of the noises in between increases. This is particularly true of a recording in which a more distant sound gets mixed higher than intended for musical reasons (off-stage instruments, for example: the trumpets in Wagner's Lohengrin, the oboe in Berlioz's Symphonie fantastique, etc.)
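The left/right item in the list above is the one every mixing desk implements directly, and it can be sketched in a few lines as constant-power panning: one gain per channel derived from a pan position, chosen so the summed power stays roughly even across the stereo field. This is a standard pan law shown in minimal form, not a claim about any particular desk or software.

```python
# Constant-power stereo panning: map a pan position to a (left, right)
# gain pair whose squared sum is always 1, so perceived loudness does
# not dip or bulge as a source moves across the image.
import math

def pan_gains(position):
    """position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1.0) * math.pi / 4.0  # map -1..+1 to 0..pi/2
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

l, r = pan_gains(0.0)      # centre: both channels at ~0.707
hl, hr = pan_gains(-1.0)   # hard left: all signal in the left channel
```

The same two numbers, applied sample by sample, are the entire 'left/right location' cue of a level-panned mix – everything else in the list (reverb, tone, delay) has to be added separately.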

Some of these might be necessarily linked to the reproductive fidelity of an experience – i.e. to experiencing it as if you were in the concert hall with your eyes closed. In other cases, certain decisions might be defined or directed by composers and performers in order to say something else – a simple example being the movement of a singer across the speakers in an operatic scene.

The  area that concerns this work most though is our interpretation of those  acoustic triggers and how they inform our enjoyment and interpretation of the music.

I will cover the four steps from composition through performance and recording to listening, discussing where these considerations matter, with particular reference to interviews with exponents of each element of the process.

I will be interested in what composers say about how they envisage their work being presented (if they think about it at all), or whether they simply leave it to the performers. How do performers treat these things? We know in the studio that singers with headphones on need a little reverb to give them some audio feedback, but is that the same thing as an operatic soprano needing a large acoustic in which to perform, or is that more psychological?

Space and the soundscape are more than just a place in which music can take place – they confer meaning.

There is a temptation – largely in musicological circles – to focus on music as being the interaction between score and performance. "Until recently, musicology and music theory have had little or nothing to say about space in music, for a combination of reasons connected with their focus on the score, their comparative lack of interest in recordings and their intense focus on pitch and rhythm to the exclusion of almost everything else." (Clarke, 'Music, Space and Subjectivity', in Born, Music, Sound and Space). This would seem to ignore a prime function of music: that of enjoyment and delight through our erotic relationship with it. Roger Scruton (1997) went so far as to argue that listeners focus on the sounds themselves, space being refined away by intense listening, so that space plays only an attenuated part in music. Were he to be presented with a large choir singing in a dead studio environment, in which the notes of the chords do not get the chance to blend as in the environment the composer had in mind for the performance, he might rethink his statement. In extremis he might find pleasure in focusing on the score itself and leave us poor mortals to suffer the vagaries of the realised piece. But the realisation of the piece is an essential part of its manifestation. Without the performance or recording, the piece is just a score.

Space separates the elements of the music, and in the early days of recording it was 'flattened', leaving the orchestra to organise itself around the early recording devices. Spatial proximity then becomes a major factor in our appreciation of the music.

Should the listener suggest that music is not a pleasurable activity, I would argue that today – particularly with contemporary composition, in which the realistic record of the piece is the recording itself – the study of music making might now encompass a realisation of the importance, and indeed the transformative effect, of the modern recording studio and its associated techniques.

Since the early stereo recordings of the Ring cycle produced by John Culshaw, a studio realisation has brought a new language to the reading of the score – that is to say, an ability to present an audial stage for the listener rather than simply to record a semi-staged event in the studio. Such control gave rise to special effects hinted at in Wagner's score – and perhaps envisaged for future days when they might be realised. The forces demanded of stage thunder were always unlikely to be co-ordinated with nature to bring a realistic sound, so Wagner's thunder special effect of a sheet of steel was used – and later improved in the studio through the use of separate rooms with controlled acoustics to bring the effect to life, i.e. to a higher degree of realism.

Becoming inured

The space in which music is performed or recorded is something which – if agreeable to the listener – will not be registered for more than a few moments whilst they orientate themselves to the sound world being presented. For example, on playing a recording of a Bach Mass, they will quickly establish the ecclesiastical setting (whether real or fictive) and 'settle' to that sound. The acoustic environment will not be registered (though it will be heard, of course) unless:

It becomes apparent that there has been a shift for some reason – a good example being takes edited together from a couple of performances during which the audience density changed, and with it a slight shade of reverberation.

Or, it becomes apparent that the acoustic is not supporting the music well. A soloist may make an appearance that seems too  distant and blurry – at which point we become aware of the acoustic setting again and possibly have some thoughts about it.

So now it becomes something to which we take exception and which we wish to criticise.

Interestingly, we seldom recall the acoustic environment of a recording unless it was either badly chosen or had some special effect or other reason for a forefront presence. An example would be Paul Horn's famous recording of solo flute in the Taj Mahal, where the whole recording is about the building, in which he played arpeggios and let them blend into chords that came bouncing back to the listener fully formed. The same performance in a dry acoustic would generally be unplayable and would certainly not be an enjoyable experience.

I cannot, for example, recall in any particular way the acoustics of an all-time favourite album – Joni Mitchell's 'Blue'. I know it will be an average performance-space acoustic, in proportion to the need, and that if I go back and listen with the intention of hearing the acoustics specifically I will not be surprised. … I wasn't – it was as I thought – professionally invisible to the process, hence correct.

By contrast, I can recall very well some of the Deutsche Grammophon string quartet recordings from the seventies, which were recorded, to my ear, in large halls with too big an acoustic, losing detail in the recording. Generally, the faster the music, the more detail gets lost in the overlapping of sounds.

The only other circumstance in which the acoustics are recalled is when one has had a professional interest in the recording or in the selection of the location for the performance.

As a guitarist I have a ‘professional’ interest in recordings of guitar and can recall the acoustic of most – possibly because it is such a difficult instrument to site that I look for the better recordings, which to my mind come from small churches or larger chambers appropriate to the flow of the music. There is an argument that Alhambra palaces are fine for the Spanish repertoire that might have been composed – or performed – in such places: the technique of letting notes ring as long as possible, paying attention only to the rhythm at the front of the note. The same is not true of the Northern European repertoire, in which the details of a Bach lute suite would be lost if all notes rang on in this way; it therefore demands a less lively acoustic, so that the player’s damping of a string is not frustrated by an acoustic that competes with it.

Discussions about how the performance space will affect both the players’ ability to hear themselves and the eventual listener’s ability to discern detail often involve long conversations with the players and sometimes the composer. (A compromise that can be made is to use the noises in close proximity to the instrument, slightly mixed into the larger acoustic, to give presence to the front edge of the note, while the larger acoustic supplies the duration and blending of the sound.)

I have no idea about the acoustic in Alfred Brendel’s piano recordings, much loved though they are. They must be agreeable and not ‘present’ to the level where a comment becomes necessary. To a certain extent the recordings fit the canon of piano recordings. We are used to hearing the piano played in small concert halls or large rooms, and the wise producer will usually arrange for the recording to sound the same way. We are only disconcerted when something else has taken place, such as a recording in a vast auditorium with microphones too far off the piano. As a matter of interest, it takes between four and eight microphones to get a good balanced sound from a fine Steinway sited on the stage of a large concert hall – most are fairly close, and a distant microphone is mixed in to set the final level of the acoustic space present. The music itself will dictate how it is to be mixed. (This is a major theme in this thesis.)

This inurement is true for all recordings and performances in which the space is only a setting. It breaks down where space is a dimension being deliberately and musically exploited and the instrument placing is not natural – for example, recordings made in a dry studio acoustic, as was the fashion for dead spaces during the seventies and eighties. Studios these days are more commonly designed to have a rich character of their own (see AIR Studios, for example). The dead studios depended on high-end reverberation boxes to supply the necessary space acoustic. This was not the case for classical recordings, which have always taken place in a live space, though artificial reverberation is often added later in post-production to sweeten the overall presentation.

Where space is a dimension being exploited for effect and instrument placing is not natural but set in an imagined space, albums such as ‘Dark Side of the Moon’ are prime examples of a highly controlled and largely illusory acoustic. The same is evident in ‘Art music’ and electroacoustic music.

Early electroacoustic works such as “Gesang der Jünglinge”, and later works such as “Different Trains” by Steve Reich, depend upon the sampler and the modern studio for the realisation of the piece. Here the sound world is ‘constructed’ to suit the composer’s intention – but it still leans towards a naturalness.

XXX is clearly something in which we are expected to be disturbed or alerted by the ‘hopping’ location of the sounds.

There are only a few reasons why  a composer might want to  specify the spatial elements of the piece.

1 To subvert the normal run of things – perhaps to cause the listener to listen afresh

2 To suggest dramatic content (particularly in programme music) such as  emotional distance, movement across the stage, softness, etc.

3 As an effect to delight the ear – more as an extra ‘entertainment’

Note – you don’t find reference to the ‘entertainment’ of music in many books on musicology. This might suggest that music has far more purposes – true, but not to mention its prime component seems slipshod.

Such composers feel more in control of the elements of their work and thus use spatial  controls to the full.

Awareness of the sound world

Generally we are not aware of the spatial characteristics of a recording – unless they are wrong. We may be alerted to its nature when something changes – a movement of the acoustic or of an instrument in the sound field. This happens more in rock and pop than in concert music, but it is a mainstay of the electroacoustic composer, whose whole audio world is about setting the ears alight (find quote).

We might listen to a song such as ‘Strawberry Fields Forever’ by the Beatles as an indication of what can be done when we ‘play’ the acoustic field as a part of the song. What ‘Strawberry Fields’ shows us is that there are levels of meaning unfolding with the song that are reflected in the acoustic treatments. To single out just the spatialisation would be a mistake. Engineers and producers usually work by feel, and in the moment, on such ideas; they are seldom programmed by the songwriter. The recording is a relatively early demonstration in the popular domain of what can be accomplished. The electroacoustic musicians ten years earlier had paved the way.


A sound source might be made to move within the performance space (as opposed to the recorded space). There are few occasions when this might occur naturally (i.e. without an instruction from the composer) – live or staged opera being one instance, marching bands being another (Charles Ives).

Other movements in the soundscape will be more or less a special effect – for example the ‘dramatic’ fly chase and swatting in Pink Floyd’s Ummagumma (which track?), in which we are entertained by the realism of a stereo recording depicting movement in space. In the same album there is a track, ‘Grantchester Meadows’, in which a bird flies across the loudspeakers. Not, in themselves, however, musical events.

Jimi Hendrix’s Electric Ladyland shows us rapid panning effects that break the sound field into a fictive space giving no real image of a performance space. The space is in your head, and to try to extrapolate a real space would be a mistake. We are expected to live with the uncertainties. It is to real space what a roller coaster is to ordinary travel.

It is hard today to separate acoustic decisions made simply to sound good from those that are to some extent involuntary or arbitrary. Many composers leave such questions in the hands of the production team at the studio, or of the performers. (Popular music producers constantly search for novelty, and as new sounds or techniques were invented they were readily lapped up by the industry: the phasing in ‘Rainbow Chaser’, Thunderclap Newman’s ‘Something in the Air’, the Beach Boys’ use of the theremin, the very early synthesiser on Abbey Road, and so on.)

One can find few examples of ‘meaning’ attached to spatial phenomena – perhaps a sense of remoteness, with offstage, behind or distant instruments being the most common cliché for such emotions. In contemporary works space is treated as a firm dimension of the sound and is considered as a part of the composition process.

Spatialisation of sound by ‘placing’ it in a sound field will give greater clarity and the ability to focus on individual melodic lines.

Seating and spatialisation in a String Quartet.

We expect a quartet to be seated in one particular way, and with this image we are comfortable:

V1 V2 Vla Vc

Actually and more accurately they will be seated:

V2 Vla

V1 Vc


This semicircle gives them eye contact and the ability to take non-musical cues from each other. It also faces the sound into a central sweet spot (SS), and this of course is where the wise recordist will place the principal microphone. In recent years we have seen a few examples of the following variation:


V1 Vla

This arrangement brings the lower register more to centre and assists in the spatialisation and separation of parts between V1 and Vla. It is possible that this seating position is in recognition of the role of recording, and of the basic tenet of recording that bass belongs in the middle. This principle (more or less adhered to in 100% of cases) began as a means of dividing equally between the two speakers of a stereo system the hard work that a loudspeaker undergoes. (There are early stereo pop albums in which the image is subverted – the placement of sounds in Beatles recordings is somewhat counter-intuitive or even bizarre at times: an example of subverting a principle almost before it had become established.)

It is true that modern loudspeakers no longer need such kind consideration, but the listener’s ear and expectations are set as they are: to expect the treble balance to favour the left, from the quartet right up to the full symphony orchestra. It might seem that the natural playing position of the violin demands that the violins sit to the left of the conductor, but that is more a matter of historic custom. I can think of no instance when the violins have appeared en masse to the right, with the embarrassing exception of a record company that put out an opera with left and right channels swapped by mistake (a hastily withdrawn and re-released Peter Grimes).

We ‘read’ sound from left to right with respect to treble and bass (which may seem counter intuitive given the placement of the treble on the right of a piano).

Of course a composer would be free to specify a different seating for a quartet and I can think of many reasons to offer:

VC Vla

V1 V2

Such a seating might suggest that dialogue effects across the space between V1 and V2 are being made prominent, confining the lower parts to the centre. This might still give good eye contact between the players.

Recording String Quartets


I have reviewed a number of CDs recently where my overall comment has been that the recording was ‘too wide and too close’. Perfectly good musicians playing well and creating a masterful rendition of a piece, only to have it ruined by the recording quality. When I use the expression ‘recording quality’ I am not referring to the technology of the microphones, pre-amps, recorder and so on, all of which work brilliantly these days, but there are some basics that seem to me to be not right. I will also hazard that they are not right because training in recording technology is focused on rock and pop technique (I hope I am wrong about this). What we hear in these recordings of classical repertoire is, firstly, that the microphones are too close and that there are too many of them. This is very much a rock technique where, to get detail and to reject noise, the engineer moves in to capture the sound. But an engineer not fluent with classical sound and form does not really know how a ‘cello should sound in a recording, so getting close in seems right to them (about two feet away from the bridge). Similarly, the other instruments in a quartet will have close microphones at about the same distance. The wise engineer will also have placed a stereo pair a couple of metres away.

It’s what happens next that brings about the poor sound. The engineer has captured a sound that is too close (you can tell it’s too close if you hear the musicians breathing too much, or finger noises, or too much bow sound), coupled with an out-of-balance sound of the instrument. Let me dive down a little.

Too Close

There is a distance for capturing the whole sound of an instrument without it feeling as if you were on top of the player. You need a little instrument and ambient noise to help the articulation of the rhythm (hearing a tiny tap of keys on a flute helps the rhythm become clear, for example). You need a little instrument noise for realism – one of the criticisms of samples is that they have no body noises, no extraneous human artefacts. There is a distance that makes sense – it also balances the sound from the top of the instrument with the sound emanating from the bottom. Theorists have suggested that the microphone’s minimum distance in these cases should be at the apex of an equilateral triangle whose base is the major dimension of the instrument. This is fine if there are no directional projections of sound – the piano, for example, requires a lot more finesse.
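As a rough illustration of that rule of thumb: the apex of an equilateral triangle sits √3/2 times the base length above the base’s midpoint, so the arithmetic is simple (a minimal sketch; the ‘cello body length here is an assumed round figure, not a measurement):

```python
import math

def min_mic_distance(instrument_size_m: float) -> float:
    """Height of the apex of an equilateral triangle whose base is the
    instrument's major dimension -- the rule-of-thumb minimum microphone
    distance described above."""
    return instrument_size_m * math.sqrt(3) / 2

# Assume a 'cello body of roughly 1.2 m (illustrative figure)
print(round(min_mic_distance(1.2), 2))  # about 1.04 m from the instrument
```

In other words, the rule keeps the microphone at least about one instrument-length away, which is why it fails for instruments with strongly directional projection.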

One of the reasons for getting in close is to be able to exclude sounds of other instruments (I am sure some engineers would like to put their artists in booths to achieve this). The players however, being friendly and in need of eye contact, like to sit quite close together which makes sound separation quite tricky – hence they put the microphones in close.

But why do they need separate channels for each instrument? Because they feel they might need to do some ‘mixing’ and balancing later? Most quartets balance the sound themselves as they play – they have been trained to. Some top names I have heard were recorded in this multi-miked way, and it is a little insulting even to dream of touching the fader except in consultation with the players.

Another reason for close miking might be to suppress the acoustic of the recording venue (too wild? too boxy?) with a view to adding artificial reverb later. We all do it, but usually just a touch in mastering, to put the varnish layer on the recording – not as a prime component of the sound.

All of this leads me to say: first make sure the venue you have chosen is good for the job. Generally, studios (unless very good indeed) lack the sound to make the players blossom and blend as they would in concert. Secondly, recognise that the sound of a quartet does not exist in close proximity to the players – it blends in the air somewhere about 3 metres away, and the use of a good stereo pair should suffice. Now, if you need spot mikes on each instrument, they are for the gentlest touches – to bring out, perhaps, a weak viola sound. But here is a golden rule: never move the faders in the middle of the piece – I can hear you doing it.

I’ll say that again – if you ‘push’ the cello part in a section to help it along, I can hear the increase in volume and it makes the whole thing sound comical. Less well-trained ears might not know what has happened, but they will experience the image wandering. Don’t forget that panning is more correctly called amplitude panning and works by dropping one volume and raising another, so if you raise the volume of violin 1 the sound will move over to the left and the shift will be evident.
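The image-wandering effect can be sketched numerically. A minimal illustration assuming a constant-power pan law (the function names are my own, not from any particular console or DAW): boosting one channel of a centred source moves the apparent image, exactly as a careless fader push does.

```python
import math

def pan_gains(pos: float) -> tuple[float, float]:
    """Constant-power pan law: pos runs from -1 (hard left) to +1
    (hard right). Returns (left_gain, right_gain)."""
    theta = (pos + 1) * math.pi / 4   # maps -1..+1 onto 0..pi/2
    return math.cos(theta), math.sin(theta)

def apparent_position(left_gain: float, right_gain: float) -> float:
    """Invert the pan law to find where the ear places the image."""
    return math.atan2(right_gain, left_gain) / (math.pi / 4) - 1

left, right = pan_gains(0.0)            # an instrument panned dead centre
shifted = apparent_position(left * 1.41, right)  # left channel up ~3 dB
print(shifted)  # negative: the image has drifted towards the left
```

The point of the sketch is only that level and position are not independent: a 3 dB push on one side is simultaneously a pan move, which is why the wandering is audible.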

Too Wide

Here, though, is the biggest sin of all and the greatest evidence of a pop engineer working with material he or she does not understand. Just because the violin is on the left, and the pan control suggests left by twisting it all the way to the left, does not mean you get a stereo image. Back to the beginning… stereo does not mean anything other than ‘solid’ (look it up). Just because the quartet sits left to right, and you have knobs that suggest left and right, does not mean it is a good idea to put Violin 1 far off to the left of(f) the stage and the ‘cello far out to the right – about 50 feet from his fellow. This is to make the sound too wide and to break down the ensemble. By which I mean: sounds which go together – such as a chord between the players – no longer resonate together but are spread across the sound field. One recording I heard recently, in which melodic lines are handed from violin to viola then ‘cello and back (it was a Beethoven quartet), had me feeling like a tennis umpire, so twisted was the sound, and so broken down that the identity of a few of the chords was in doubt. When a major triad is formed by two instruments on the left and the 7th degree of the scale by the ‘cello on the right, you run the risk of hearing a split ensemble – it is too wide for the ear to put back together.


Well – I have re-mastered a few recordings that were worth the time and trouble, narrowing the stereo field to one in which it feels as if the quartet actually sat together. But better, and more generally: we need to train more people in classical music recording and not pretend that it is the same as rock. It isn’t, and by a very wide margin. The ability to read music is a prerequisite, but few engineering courses require it. A knowledge and love of classical music is also required. You cannot move from rock to Bach with the same set of skills.
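For what it’s worth, that narrowing is conventionally done with mid/side processing – a standard remastering technique, sketched minimally here per sample pair (the width value is illustrative, not a recipe):

```python
def narrow_stereo(left: float, right: float, width: float = 0.5):
    """Mid/side width control: width=1 leaves the stereo image
    unchanged, width=0 collapses it to mono."""
    mid = (left + right) / 2    # what the channels share
    side = (left - right) / 2   # what separates them
    return mid + width * side, mid - width * side

# A sample hard-panned to the left channel, pulled halfway in:
print(narrow_stereo(1.0, 0.0, width=0.5))  # (0.75, 0.25)
```

Because only the side component is scaled, the overall tonal balance survives while the hard-panned extremes move back towards the centre – the quartet ‘sits together’ again.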

Am I ranting? I hope not. Ranters don’t solve anything, and I hope above I have suggested enough to give pointers on what is wrong with so many new recordings. If you are a player about to record, get to know the engineer and their limits – if they don’t know what you are talking about, then get another one, or be prepared to work very hard alongside them, telling them how to do it.

Grimes on the Beach

Grimes on the Beach final scene

Grimes on the Beach – the Event and the CD

The live performance was possibly the best opera I have ever attended – but mere enthusiasm for a loved opera in a loved setting is not enough to commend it. The concept works on many levels, but let me begin with the magic of seeing events performed in the place in which they were set. Peter Grimes is the story of an Aldeburgh fisherman who loses apprentices in unfortunate circumstances, and the setting of the North Sea as a backdrop was electrifying. Importantly, though, we could hear the sea in the music; we could feel the wind and smell the salt air. Many of us sat on the shingle for a few hours of pelvic discomfort, but with little grumbling.

Of course the music and singers were superb. The cast, working with head microphones (there must have been fifty channels to mix!), did a solid job of not letting the technology, the novelty, the weather or anything else divert them from doing their jobs well. It was odd to have the music pre-recorded, but it would probably not have worked to have live musicians – the humidity alone would have put paid to any tuning.

What struck me most, though, was the terrific sense of space. The stage, unconfined by a building, was over 50 metres wide, and the cast walked off to its wider parts, the sound of their footsteps in the shingle creating another of the signature sounds of the beach. Cleverly, the director (Tim Albery) chose to set the story in 1945, the year of its first performance, and he managed to time a Spitfire to fly overhead in the first scene. So our soundstage was 100 metres wide and hundreds of metres deep, with aeroplanes and sea sounds, and further dimensioned through the smells and sensations of being there.

Grimes on the Beach – Signum Records – The CD
This is a terrific presentation of one of the greatest British operas. The recording has captured the spirit of the drama brilliantly. One thing I missed in the recording is the ever-present sound of the sea that we had on the beach – it added something, and whilst you can’t have it running through the whole thing, a touch of it might have been nice. In case it’s not clear, this recording was not done on the beach but at Snape Maltings. There will be a DVD at some point, from which you should get a better sense of the outdoor setting (including the sound of feet moving off stage on the shingle – also an evocative sound).
Alan Oke as Grimes was the perfect singer for the role and a credible successor to Pears. He exuded the grim greyness of the North Sea in a way that an Italian tenor could never muster (imagine a Pavarotti trying this out!). Giselle Allen’s Ellen Orford was convincing and sensitive. It’s hard to recommend this highly enough. Buy it, enjoy it, and see the DVD.