Thinking about Music

All content on this site is copyright mjkmercer

A quick listen

 

This is a short study I did in January which later turned into a nine-minute composition called “Piano Factory”.

 

A New Beginning

I have had a passion for electronic music all my life, so I decided to do something about it. I have recently completed an MA in Sonic Art and have just begun PhD studies in electroacoustic music. I will be updating this site with sound samples and thoughts as I progress – more soon.

Lost in Translation

I was watching the DVD about Arvo Pärt (24 Preludes for a Fugue), which is naturally in Estonian; the subtitles are in Spanish, which I clicked on for an instant translation.

The expression con moto is rendered as ‘With Bike’.

The Perception of Value

We ascribe value to artworks by a far more complex system than their simple worth (i.e. what somebody will pay for them).

Consider:

1) Tape found in an old art student’s locker – 10 minutes of traffic noise, somewhat edited and organised.

2) The same tape found in John Cage’s study, carefully preserved and labelled Opus 56.

3) The same tape in a Hollywood special-effects department.

4) The same tape preserved at the Ministry of Transport to measure traffic noise levels.

I have to wonder if it is actually just the ideas that we pay for. If I hold the idea that the T-shirt I bought that ‘belonged’ to John Lennon is a real artefact, should I be disappointed if I see that 500 similar items were sold by a fraudster? Until the moment that the fraud was revealed I was very happy with my purchase.

Here is an economics scenario given to me many years ago by a man who advised government economists:

A man arrives at a farm in the morning and the farmer puts him to work picking the harvest at an agreed £50 for the day’s labour.

At lunch-time another person arrives asking for work and again the farmer (realising he needs more help to finish in time) takes him on also for £50 for the day.

Does the first person have any genuine complaint?

and….

How much should I pay for a tape of traffic noise?

The Fog of the Auditory World

The other day I was in my study playing guitar. My neighbour decided that ‘all day’ was the perfect time frame for 80s rock music to be playing. It wasn’t loud but it bothered me, so I closed the window. The sound was still there, but reduced. It made me think about the way we are supposed to tolerate sounds from all around: the constant drone of traffic; the private pilot’s right to make THAT much noise over a village, where I would surely be arrested for the same level of sound; mobile phones!

So imagine, if you will, that on going to a new art exhibition in a famous gallery you found the rooms filled with various levels of mist, from a light film of whiteness to a dense, impenetrable cloud such that you could only see the paintings close to. This is how the sound world has become. People cough in concerts without even trying to mask the sound with their sleeves or even their hands. People eat noisily through films (can a box of popcorn really last that long?). At a solo cello recital in Aldeburgh the other day, a couple – obviously not used to the fact that this was not television at home – swapped facts about the composer’s life and the soloist’s career in barely whispered asides, all through the playing.

Coming home, I notice that one of my computers delivers a slight wind sound from its fan. It has smeared over everything I listen to. Ironically, it is the computer with my music library on it. Does anybody make soundproof cases for computers?

There is another sound default at work – the assumption that everybody will love pop music blasting out, often drowning conversation, especially in public places like restaurants and shops. Why always pop? Is it only young people they are trying to attract? If we must have other people’s music, might it not occasionally be jazz, classical, world, singer-songwriter, alternative and so on? I no longer go to the gym I used to attend because ‘Eye of the Tiger’-type songs play there on endless loops. As a preference we should expect no background music at all – that silence be as golden as it once was.

Let’s win back our auditory space and remove the fog from our listening environment.

 

 

 

Recognising Sounds – Knowing What We Are Hearing

Note: see my essay ‘Sounds, Location and Perception’, which is a prelude to this and covers the journey of a wave and its transformations from source to ear.

Once the ear has received the complex array of waves (now collected at the ear canal as a pulsating tube of air), it falls to the rest of the auditory system to ‘work out’ the content and its meanings, and to identify which components go together to form each discrete auditory stream. The apparatus now involves the brain and its experience of the world in this almost instantaneous sifting – first to detect threat (which would make us jump before we knew why), then to look for messages with meaning for us that we need to respond to, and then, by focusing and deciding to do so, to hear all the background sounds. This is Auditory Scene Analysis.

The Oncoming Wave

As I sit writing this, I am aware of the following:

There is the slightest whisper of short strokes of my pen, like silk on paper; the paper emits a tiny hollowness as the nib touches the surface and behaves like a membrane (I’m resting on a pad, not a desk). There is also the soft breath of my computer fan – so continuous that I usually don’t hear it. Outside (I know it is outside because of my location-perception apparatus) a bird unwinds its minimalist song. I cannot identify what sort of bird it is. If I could, the experience would be different. Its sound stream would be that of an ‘x’ and would form an impression complete with a label, and my familiarity might downgrade the event to the level of the computer fan. But being a musician I enjoy the tune, and it is something I have heard before and recognise. This is a special sort of sound stream in that it could be interrupted and I would still know what was coming next. Consider the inevitability of each note of a wood pigeon’s call. I’ll come back to this point about known sounds from knowledge templates versus unknown sounds, which become categorised by extrapolation from templates of similar sounds.

 

Theory of Forms and Sound Recognition Templates

 

I need to digress into Plato’s Theory of Forms briefly – so here is an early philosophy warning.

Sounds have form that exists in time and space. The sound of a car going by is such an event: it is a unique event, yet totally familiar. Were we to record it and look at the waveform in great detail, we would know – in keeping with our knowledge of things like snowflakes – that we will never see exactly the same waveform again, and yet there will be plenty of cars passing by. Each event has sufficient characteristics in common for us to be able to recognise the sound and label it, because we are able to access the idea of a car sound in our minds and instantaneously realise that it conforms to that general form.

The sound recognition templates (forms) that we have in our minds can only be broad in scope. We cannot hope to match a sound identically – it might appear at a different distance, be a different car, have a different surrounding acoustic; we might have the window open or closed. So many factors guarantee that we will almost never hear the same sound twice.

We recognise the sound of a violin easily, and based on our wide experience of violins we are able to get a close match to the templates we have. Were we to experience an er-hu (a Chinese two-string violin) having never heard one before, our cognitive processes would go into overdrive looking for a match. The mind will offer the template of a violin to our understanding, but it will also inform us that it was not a veridical match – that something was different about the sound that defied quick labelling. We might then choose to focus on the sound, giving our inner process a wider experience of it. On being told what it is, we then understand what was heard and what the name is, and we file away a template for the future. (Imagine if we described an er-hu as being a little like a crude violin – how that might change our experience of ‘World Music’.) However, this is only one instance of an er-hu so far. If, in future, we heard a very similar sound, it might not match the template that was formed from one instance alone. Perhaps we hear the er-hu in a different context – say in a Chinese orchestra – and the mind struggles: it has a hunch about what it is hearing, so we might look at the programme and find the word. Then we can re-categorise the template with two instances attached to it. This is a gathering of experience which, in time, will amount to a life of experience and cause us to say, ‘yes, we know what an er-hu is,’ without having to go through the room marked ‘violin samples’.
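To make the template idea concrete, here is a minimal sketch in Python. The feature values and the ‘veridical’ threshold are purely illustrative assumptions, not measurements; the point is the shape of the process – nearest match, a confidence judgement, and a new template filed away for next time.

    import numpy as np

    # Hypothetical feature vectors: [spectral centroid (kHz), attack (ms), inharmonicity].
    # The numbers are illustrative guesses, not measurements of real instruments.
    templates = {
        "violin": np.array([2.5, 40.0, 0.10]),
        "flute":  np.array([1.8, 60.0, 0.02]),
        "car":    np.array([0.5, 200.0, 0.90]),
    }

    def recognise(sound, threshold=1.0):
        """Return the nearest template and whether the match feels 'veridical'."""
        name, dist = min(((n, np.linalg.norm(sound - v)) for n, v in templates.items()),
                         key=lambda pair: pair[1])
        return name, dist <= threshold

    erhu = np.array([2.7, 45.0, 0.25])   # a bit like a violin, but not quite
    print(recognise(erhu))               # -> ('violin', False): like a violin, but...

    # On being told what it was, we file away a new template for the future.
    templates["er-hu"] = erhu
    print(recognise(erhu))               # -> ('er-hu', True)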

This is like Plato’s Forms; or, to be more up to date and put a similar spin on it (perhaps these different theories of forms aspire to an ideal theory of forms too?), I’ll cite Wittgenstein’s ‘family resemblances’. (In the next draft of this I’ll find the necessary quotes and references to keep us all connected to the world of ‘stuff already said by others’.)

 

Auditory Scene Analysis

 

I’m still in my study, listening… The bird continues to make its sound. But how do I know it is one bird and not two? It might well be two, of course – I cannot know each bird so intimately as to be able to distinguish individual voices (though the bird probably can). But I can extrapolate from the incoming sound wave a single thread of sound. This is where I either recount everything Albert Bregman wrote or just send you off to read his work (see bibliography below), but to save time here is a rapidly and easily digested summary from Wikipedia:

http://en.wikipedia.org/wiki/Auditory_scene_analysis

 

What do we learn from this? That the auditory system has powerful tools to sort, label and understand streams even when they appear mixed. There is intelligence at work, and it may give you a small thrill of pleasure to know that even a powerful computer struggles to do this – but hold on to your humour, because they are coming…

 

The Sum of all Sounds

 

More sounds are layered onto each other in my room: vehicles, voices, violinists (daughter at practice) and so on. All that I can hear is to some extent familiar, but in another respect unique. Similar though they may be, I have never heard them in this context (this mix of other sounds), at this distance, in this room reverberation, or with this physical make-up (my ears are particularly acute today), and so on.

 

To sum up the listening experience of this moment:

1. The low hum of a large vehicle

2. The bird sound

3. The sound of the pen on paper

4. The fan in the computer

5. Murmuring female voices somewhere – indiscernible words

6. A slight creak from my old faithful chair

7. Distant violin music being played

8. The clock softly ‘chucking’ (it’s a deeper sound than a tick)

…I shall pause and sit still in contemplative listening for a moment….

….

9. A very distant jet somewhere high above

10. A dog barked.

 

In the moment of the dog barking, all the other sounds became masked, but I knew they had not stopped, and sure enough they re-appeared a moment later when I widened my attention again. Masking, and the illusion of continuity behind the masking, is another piece of lovely hearing-perception theory; I am very keen on Brian Moore’s book (see bibliography).

 

My point: all this is happening early on a quiet morning, before the day has really got going. All these sounds are there to be picked out of the incoming stream and labelled. And I knew what each one was. Being of a musical disposition, if I heard something I could not label, I would normally feel compelled to investigate – especially if I could hear musical potential.

 

So all these sounds conform to the templates I have for them, and they have sorted themselves out into labelled streams. I do not confuse the hum of the lorry with the distant music – though I could imagine that happening in some circumstances. As a composer I could make all sorts of things come together in the studio through careful mixing, but at this moment the sounds are from different directions and different distances, and they are therefore not confused.

This single ‘mix’ of the moment ‘now’ (surely grounds for a Cagean aleatory composition?) presents a single vibrating pipe of air to the eardrum, and the mind sorts it out for us and labels each experience with words or familiar feelings. It also works to identify the location of each source, and so on. The interesting thing is that I seem able to give my attention to only one at a time, or I can choose to let the whole unlabelled sound wash over me as if it were a single source – and this ability is important because of its implications for music.

 

Musical Implications

 

My ears have to work out how to hear an orchestra. I can, of course, hear the different parts that make up the sound I hear. But is that really so?  What I actually hear is groups. I cannot, for example, discern individual violins (unless one is particularly loud or bad) but I can pick out the flute. But then the flute is joined in unison by the oboe forming a new single sounding texture. The two combine because they are part of the same auditory scene, their timing events are identical and their pitches change in unison. A good example of this blending of different sounds is the way in which organists create sounds by mixing and layering different sets of pipes in ‘registrations’.  We hear a single event or single event texture. We hear a single violin line – unless suddenly the leader plays a melody above the texture.

This auditory scene analysis is critical to our understanding of how we hear music and how therefore we are going to record it.

 

When we listen to music we hear the whole thing, or we choose to attend to parts. (Listening to a fugue on the piano is a supreme example of this, and the best advice anybody ever gave me for listening to a fugue is to ‘go with the flow’.)

 

Separating sounds out for us – such that we do not wrongly mix them up – are:

 

  • Pitch differences and similarities
  • Timing differences and similarities
  • Following or not following a pattern (a sequence of timed events, like a ticking clock)
  • Timbre
  • Location or apparent source
  • Event (part of the show/not part of the show)
  • Visual information about the sources

 

These factors help us to match different parts of the sound to templates, so that we can recognise each as a separate element.
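The timing cue in that list is easy to demonstrate. Here is a small sketch (Python, assuming numpy and scipy are available) that builds two harmonic tones twice: given a common onset they tend to fuse into a single texture, much as the flute and oboe did above; stagger the onsets and they fall apart into two streams.

    import numpy as np
    from scipy.io import wavfile

    sr = 44100
    t = np.linspace(0.0, 2.0, int(sr * 2.0), endpoint=False)

    def tone(f0, onset=0.0, n_harmonics=6):
        """A simple harmonic tone with a 50 ms fade-in starting at 'onset' seconds."""
        env = np.clip((t - onset) * 20.0, 0.0, 1.0)
        return env * sum(np.sin(2.0 * np.pi * f0 * k * t) / k
                         for k in range(1, n_harmonics + 1))

    fused = tone(440.0) + tone(660.0)              # common onset: one fused texture
    split = tone(440.0) + tone(660.0, onset=0.5)   # staggered onsets: two streams

    for name, y in (("fused.wav", fused), ("split.wav", split)):
        wavfile.write(name, sr, (0.3 * y / np.abs(y).max() * 32767).astype(np.int16))

Listen to the two files and the difference is immediate: the common-onset version reads as one new instrument, the staggered version as two events.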

Sorting out Source Location

I wrote previously about how a sound accrues information as it travels to the ear concerning the source location in relation to the listener. The listener also has to process that data at the same time as carrying out the scene analysis.

In the same way that we have templates to recognise sound types and patterns, I suggest that we have a template, or model, of the world that helps us sort out source locations. I am not sure if I should confound things by suggesting that each sound template that guides our recognition holds all the possible variants of location within it – that seems an inefficient way for the brain to do things. I am going to suggest (and try to research further) that localisation (spatial) processing draws on a different set of templates from those we use to identify the sound itself. Some of those templates for location might be linked to the fight-or-flight sound identifiers and cause rapid alarm in us primitive beings, who still want to jump and run if we hear something like a wasp approaching.

 

As I mentioned earlier, at the eardrum there is only a vibrating column of air. (I will keep it simple, but I am aware that there is other information available through vibrations reaching the back of the eardrum through the head, through bone conduction and so on.) The intelligent ear has a means of assessing how far away a sound is. Sound changes with distance: in level, reverberation and tone. When we hear a sound thus modified we know immediately that we are hearing, say, a trumpet at a distance, rather than one close up and processed. (This we can register from a microphone, filtering out, for now, extraneous information about its general direction.)

 

But what if we were to do just that – take a close-up trumpet sound and drop it to the rear of a mix? This happens all the time in the studio, and good engineers know it is not just volume and reverb but careful adjustment of the EQ that gives the desired result. The level of verisimilitude seems closely linked to the engineer’s understanding of sound propagation. In my experience, engineers brought up in the ‘hands-on’ school of mixing – or, as apprentices soon find out, the ‘do it the same way as Bob’ school – are lost when it comes to working in these more subtle ways. The rock techniques for moving a sound to the rear will not, however, work very well when trying to create a realistic soundstage that contains reproducible distance information in particular and location information in general.
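As a rough illustration of ‘not just volume and reverb’, here is a hedged sketch (Python, assuming numpy and scipy) of pushing a dry sound to the rear of a mix: the level falls, the highs dull, and the direct-to-reverberant balance shifts. Every coefficient here is an illustrative guess, not a calibrated value.

    import numpy as np
    from scipy.signal import lfilter

    def push_back(dry, distance_m, sr=44100):
        """Crudely 'move' a dry mono signal (float array) back by distance_m."""
        # Level: pressure amplitude falls roughly as 1/r, about -6 dB per doubling.
        gain = 1.0 / max(distance_m, 1.0)
        # Tone: air absorption dulls the highs; a one-pole low-pass whose
        # cutoff falls with distance (this mapping is an illustrative guess).
        cutoff = 8000.0 / (1.0 + distance_m)
        alpha = np.exp(-2.0 * np.pi * cutoff / sr)
        dulled = lfilter([1.0 - alpha], [1.0, -alpha], dry)
        # Space: fake a reverberant field with a crude 50 ms feedback delay,
        # and mix more of it in as the source recedes.
        wet = np.zeros_like(dulled)
        d = int(0.05 * sr)
        for n in range(d, len(dulled)):
            wet[n] = 0.4 * dulled[n - d] + 0.5 * wet[n - d]
        mix = min(distance_m / 20.0, 0.9)
        return gain * ((1.0 - mix) * dulled + mix * wet)

A real engineer would reach for a proper reverb and a measured air-absorption curve; the point is only that three separate controls have to move together to read as ‘distance’.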

 

So we have in us an innate ability to assess how far away a sound is. This must be informed by knowledge about where we are. We will know that we are in a cathedral, or outdoors, for example. Experience tells us something about what to expect and how a sound will behave in these environments.

 

Recreating The Experience

 

The problem for recording engineers is to recreate what the ear has heard convincingly. There is not space here for a general review of stereo and  multi-channel techniques (I’ll write it soon though).

There is much that can be improved in the stereo recording and mixing process by understanding how sound gets to the sentient mind, and much that we can design as a solution to improve it. It is vital to understand that:

 

At the point where the microphone receives the sound, most of the location information will not be recorded. It will give some distance information and an approximate direction, but it will not pick up what the eardrum receives. Were we to insert microphone capsules in the ear and record the sound that gets to the drum, we might have more information to work with, but because of Head-Related Transfer Functions the sound will present uniquely to the individual whose head is being measured. It was in the hope of getting round all these problems that binaural recording was invented – placing microphones in a dummy head to mimic the way our own heads work.
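The binaural idea reduces to a pair of convolutions. Below is a minimal sketch in Python; the file names are hypothetical, and the head-related impulse responses would have to come from a measured set (the public MIT KEMAR data, for instance), one per ear for the chosen direction.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Hypothetical files: a dry mono source and one impulse response per ear,
    # both HRIRs the same length, measured for (say) 30 degrees to the left.
    sr, mono = wavfile.read("trumpet_mono.wav")
    _, hrir_l = wavfile.read("kemar_az30_left.wav")
    _, hrir_r = wavfile.read("kemar_az30_right.wav")

    mono = mono.astype(np.float64)
    # Convolving with each ear's impulse response bakes in the ILD, ITD and
    # spectral cues that the measured head and pinna would have added.
    left = fftconvolve(mono, hrir_l.astype(np.float64))
    right = fftconvolve(mono, hrir_r.astype(np.float64))

    out = np.stack([left, right], axis=1)
    out = (out / np.abs(out).max() * 32767).astype(np.int16)
    wavfile.write("trumpet_binaural.wav", sr, out)   # listen on headphones

Played over headphones, the result carries a sense of direction that the dry mono file does not – though, as noted above, it carries the measured head’s cues, not yours.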

 

More soon.

 

MjkM August 2013

Training Sound Engineers to Read Music

I have been campaigning (unsuccessfully) for many years for improvements to the training that sound engineers get. I have argued that to spend three years at university in a studio learning all about rock guitars and drums and so on is fine, but why not learn a lot more about music? It seems a terrible disappointment to me that after three years some graduates seek work with no knowledge of music at all. As a bare minimum they should be able to play one instrument to a reasonably credible level.

I have good reasons for saying this…

1. It’s not that hard to learn enough about music to be able to follow a score and know what is going on. You don’t have to be a sight-reader – you are not the artist – but if the player (who can read music) says ‘let’s go back to the…’ (whatever musical term you like here: the rit., the Bm chord, the piano entry and so on), the engineer with little understanding of music will be lost. Why would you not want to be able to speak the same language as the musicians? I have often given a single 90-page book to young people wanting a career in music and told them just to get familiar with its content and to practise following a score (a Beethoven sonata will do).

2. You need the work. You need to stand above the other applicants, and a flexibly skilled person is far more likely to get the job. Studios are not full of rock bands day in, day out – unless you are lucky enough to work at a major specialist studio (and most of those jobs are very over-filled with long-term staff who will only move on by dying or going deaf – and some still hang on after that!). The real world of studio work is a mixture of rock, folk, pop, classical, jazz, schools music, karaoke and so on.

3. You will never be a great engineer if you don’t understand the thing you are working with. How could you?

4. If you don’t play an instrument, how will you know when somebody is playing well or not – or even notice whether it is in tune and in time? Some engineers, I grant, can do this without being able to play, but they still lack empathy with their guests.

Let me nail the point for you. When I interview staff to work in my studio it is a prerequisite that they can read music and play – and that is just for the non-classical side. It was my frustration at finding too many people trained on a production-line basis, with theory diagrams and no real listening experience, or with only one string to their bow (‘I do drum and bass’), that led me to wonder what on earth they were doing for three years. Really – the technical knowledge of how to use the equipment can be taught in a single term. Reading music can also be taught in a single term. Love of music in general – a lifetime.

Other things that are not taught well everywhere:

Understanding and appreciation of all musics

Understanding and appreciation of all instruments

Listening training on somewhere in the region of 1,000 critical tracks across a wide spectrum

Attending a wide range of live concerts to see what the real thing is like

Learning to manage projects

Learning to get the best out of people and encourage musicians

 

There are some universities that understand what it takes, but too many institutions are keener to claim successful graduations than to train students for the real world. I could go on…

 

 

Sounds, Location and Perception

What is a Sound?

Reading various writers on the subject of sounds and their identity, I note what appears to be a level of confusion about what we mean by the word ‘sound’. What is a sound? Where is a sound? What is its ontological status? The various writers (see bibliography below, but a very good introductory essay can be found at http://plato.stanford.edu/entries/sounds/) speak of four main theories: the Proximal theory, that sounds are where we are; the Medial theory, that sounds are in the medium (usually air); the Distal theory, that sounds are at the site of their generation; and finally an Aspatial theory, that space is not relevant to sound.

What is bothering me is the lack of a human dimension in the discourse. I don’t think it is necessary to complete a theory about things apart from persons. That is too remote from us to have the relevance which the topic demands, and all talk of sound is predicated on a human auditor. We want to know what these things are in relation to ourselves. To discourse on trees falling in forests (the unstated question that lies behind all thoughts about the ontology of sounds) is too abstract to be of use. I take a general philosophical stance that things signify in relation to the entity to whom they have significance – in short, ourselves. (I call this Robust Pragmatism informed by Naivety – though I would welcome a professional philosopher’s comments to help make this work out more rigorously.) I’m happy to call upon support from such a professional:

‘…sounds directly perceived are sensations of some sort produced in the observer when the sound waves strike the ear.’ (Maclachlan, 1989)

I want to say that a sound is only a sound when experienced by an auditor who has the apparatus to detect it and the intellect to interpret the incoming waves in a consistent manner. If we consider sounds to be just waves in the air of between 20 Hz and 20 kHz (an ambitious range for the over-twenties, in reality), then what of waveforms that lie above and below? Are they still to be called ‘sounds’? There is also an implicit duration assumption: that there is a repetition of the alternating compression and rarefaction of the air – were it to happen just once, would we really hear a sound? To paint an extreme picture, consider going up a mountain in a car until your ears pop, then down the other side. Technically you have just experienced an ultra-low frequency (one cycle per half hour, say – about 0.00056 Hz) of one rarefaction followed by one compression (on top of many other wave events). Could we justifiably be said to have ‘heard’ a sound at this frequency (according to some theories it might be so)?

I want also to say that a sound contains information that has meaning for the receiver. Let me start with cats, or some other simple animal: the information a cat receives may only be fight or flight, come for food and so on, but it is a sound to the cat, because it elicits a response that is causally linked to the wave stimulation it receives. I know that sometimes the cat appears not to hear (but that’s between me and my cat). What can we really know of an animal’s experience of sound other than its response, or lack of it? If I communicate with a friend and process my words electronically to lie in the 40 kHz to 60 kHz range, they will not hear it. Can I fairly be said to have communicated simply because I created a wave in the air? I would suggest not.

Some writers have spoken of objects vibrating and still ‘sounding’ even in a vacuum. I would contend that I could still ‘hear’ a sound in a vacuum if, for example, I were to bounce a laser beam off the object’s surface and create a sound from the reflection – it is still the transmission of a wave using only a bit of translation equipment, just like the air waves translated by the ears.

Location of a sound

Much gets written about the location of a sound. Where is this sound in relation to us? Bear in mind: the locational information about the source of the sound is not encoded in the waves of energy being emitted by the object. The source knows nothing of the space in which it sounds, and yet writers speak as if it did – as if the information about its location were part of the wave. Let’s go on a journey with a wave:

1. A sound is generated. Let’s keep it as simple as a finger snap. (The reason for this is that it is a point source – I could enlarge for many pages on the problems of musical instruments, which are rarely point sources but complex arrays of sources.) There is a sphere of sound waves (pulses of rarefaction and compression) radiating from the source at a speed of about 343 m/s – not forgetting that this is an approximate number depending on air composition and density, pressure and temperature, plus other subtle factors. As soon as the wave has travelled an infinitesimally small distance it will have changed – albeit undetectably at this point – because it will have been subject to the air’s variable elasticity and been modified in tone and volume within a nanosecond. This process of tonal and volume change over distance is continuous right to the point of audition at the eardrum, at which point these factors cease to change. (I put some rough numbers on this journey after step 6 below.) If we go too far away from the sound source then the energy of the waves (spreading and diminishing according to the inverse square law) reduces to Brownian motion in the air, and is therefore as undetectable and lost as a homeopathic remedy.

2. The sound travels in air, constantly transforming, and can be said to be accruing information as it travels. It does not know how far it needs to go to meet an auditor, but it collects tonal transformation and volume modification as it goes. Within a very short time it will also encounter a surface – often the floor or ceiling first – and as soon as it does so, it generates a reflection of itself (losing a tiny bit of energy to the surface in the process). Another part of the expanding sphere of sound waves will encounter another surface – possibly a wall – and another reflection will be created, and so on until all the primary surfaces have been encountered. By the time the furthest surface has been met, secondary reflections from the first surfaces will be following the main sound and travelling on to the next surfaces they encounter, and so on. We soon get to tertiary and quaternary layers, until we end up with something we clearly call reverb. (I will leave the special case of echo aside for now, but the keen reader can quickly fish out some determining factors and add them to this account, along with commentary on early reflections.) The totality of reflections and the complexity being added to the wave also give us information about the building it is in. To a musician, nothing describes a building as accurately as the sound it makes. But the sound still has no regard for the listener, who may interpose her ears at any point or time in this journey; it simply goes on accruing information as to the distance from its source and the surfaces it has encountered.

3. Finally it will encounter the listening body, but let me start with the simplified case of a single monophonic microphone, which will be a dumb witness to the sound as it arrives. The apparatus will record the energy-depleted and tonally modified sound, and from that a later auditor of the recording might infer some things about its genesis (such as what sort of object made the sound and how it was excited), but certainly not whence it came. At best they might name the source to the extent of being able to repeat the sound: ‘It’s a finger snap.’ But the sound at this point has also accrued its multiple reflections, and the characteristics of these might lead an auditor to make some assumptions based on their experience of listening and of spaces. They might quickly know that it was recorded in a church, a studio or a domestic environment. In a special set of circumstances, a listener (a trained engineer working with established recording venues, say) might even be able to name the space – ‘That’s St Paul’s Cathedral’, or ‘that was recorded in Air Studios’.

I won’t go into other philosophical inferences that can be made from the sound – such as that it was a person with a hand and fingers, and that there was intent. Such inferences go back to the Big Bang (and beyond! I hear a philosopher wanting to rush in and fill our existential vacuum).

Let me add a second microphone to the set-up so that we can consider the direction of the sound through stereo recording (remembering at all times that ‘stereo’ just means ‘solid’, but we will proceed with the standard assumptions about what the word means). If we are to set up a stereo pair, however, how will we orientate it? An engineer will have set it up to be more or less tangential to the circle of sound waves arising – i.e. by pointing the microphones in the general direction of the sound source – so they will be adding information to the recorded sound by this simple act (but not to the sound in the room, of course). A true recording of the source might be better if it were made without regard for this and simply picked up, in stereo, what happened in the room. But this leads to difficulties: in some orientations – such as side-on to the source – it will not yield directional information easily.

This ‘adding of information’ to the sound is either witting or unwitting, and is manifest in a vast array of interlacing frequencies and amplitudes that in theory could be reduced to sine waves (though this never seems to work very well when synthesising sounds – we are seldom fooled).
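A small experiment makes the sine-wave point. In this sketch (Python, assuming numpy) a toy ‘finger snap’ – a burst of noise with a fast decay – is reduced to sine components with a Fourier transform; rebuilding it from only the strongest few hundred components leaves an obvious error, which is one reason we are so seldom fooled.

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    rng = np.random.default_rng(0)
    snap = rng.standard_normal(sr) * np.exp(-t * 60.0)   # toy 'finger snap'

    # The discrete Fourier transform expresses the event as a sum of sinusoids.
    spectrum = np.fft.rfft(snap)

    # Keep only the 200 strongest components and resynthesise.
    keep = np.argsort(np.abs(spectrum))[-200:]
    reduced = np.zeros_like(spectrum)
    reduced[keep] = spectrum[keep]
    resynth = np.fft.irfft(reduced)

    err = np.sqrt(np.mean((snap - resynth) ** 2) / np.mean(snap ** 2))
    print(f"relative error using 200 of {len(spectrum)} sine components: {err:.2f}")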

Here, though, we have a clue as to what sound and location are all about. The act of placing a stereo microphone reveals that the information added by the selection of its location and its orientation is exactly what we as listeners might add when we go to listen to something.

Our left and right ears intercept the sound wave and ‘hear’ what is going on. It is only at the point when the sound wave reaches the outer ear that all the location information is added. The stereo microphone, correctly aligned, will have picked up a couple of important clues as to location. The interaural level difference (ILD) – the difference in volume between the left and right ears – will be captured more or less faithfully by a correctly rigged microphone pair. The interaural time difference (ITD) – the difference in arrival time at the two ears – will also be captured, and is an important directional clue. But a moment’s reflection will show that ITD and ILD are not sufficient to tell front from back, still less up from down. ITD and ILD give as much radial information as can be captured in a microphone.

Note that it is only possible to speak of these things at the microphone. In the air in front of the microphone, it is the presence of the equipment and its orientation that creates the point of measurement.
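To give a sense of the sizes involved, here is a sketch of Woodworth’s spherical-head approximation of the ITD (the head radius is a commonly used average; real heads differ, which is partly why the HRTFs discussed below are personal).

    import math

    HEAD_RADIUS = 0.0875   # metres; a commonly used average, individual heads differ
    C = 343.0              # speed of sound in air, m/s, at about 20 degrees C

    def itd_seconds(azimuth_deg):
        """Woodworth's spherical-head approximation of the interaural time
        difference for a distant source (azimuth 0 = straight ahead)."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS / C) * (theta + math.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg: ITD = {itd_seconds(az) * 1e6:4.0f} microseconds")
    # The maximum, at 90 degrees, is roughly 650 microseconds. A source
    # mirrored front-to-back produces the same ITD and ILD - the front/back
    # ambiguity described above.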

4. The microphones have still not captured the full spatial location, however. They will give us clues as to right and left, and the sound itself will carry distance information, but now we have to look at the function of the ears and the head. The ears and head form a complex system, and these have been measured and calibrated to understand the function of all the parts of that system. The spatial information we depend on to locate sounds in space is a combination of the distance and direction information available at the microphone; but the physical structure and orientation of the head now add their own information about where we are in relation to the sound source. Each part of the ear modifies the sound in tiny, subtle ways which are individual to each of us – we all learned how to use the shape of ear and head we were born with to translate information about sounds and their locations. (Note how a newborn child rotates its head to learn the link between the two.)

Head-Related Transfer Functions (HRTFs) tell us how the head itself and its orientation affect how we hear. The trouble is that our personal HRTF is unique and cannot be codified into a recording – though there are some generic data sets that might be applied to a recording to recreate a sense of location.

5. We move our heads. When listening we ‘cock an ear’, as it were. By moving our heads we add to the information layer and include more about phase relationships in the sound (not dealt with here, but again sufficient material for another day); we create small shifts in volume and tone that give us more clues as to location, rooted in our primitive sense of locating danger; and we employ all of this almost like an analytic instrument to locate the sound.

6. Brain processing takes place. Bregman’s book on Auditory Scene Analysis is too large to summarise here, but it describes and analyses how we sort incoming data streams in our minds and separate out what belongs to which group of audial experience – hence we can pay attention to one sound sequence whilst another is sounding, and we are not confused by a flute melody and the simultaneous chatter of a child, for example.

One can envisage scenarios where all the above criteria are in place but we do not hear the sound as a sound, because the cognitive faculties are not correctly aligned or trained to respond and ‘tell the mind’ that such-and-such is a sound with content and meaning that we have to respond to (even ignoring it is a conscious response).
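As promised in step 1, here are some rough numbers for the journey, in a minimal Python sketch (a point source in free air, with the usual room-temperature speed of sound):

    import math

    C = 343.0  # speed of sound in air at about 20 degrees C, m/s

    def arrival(distance_m, ref_m=1.0):
        """Arrival delay and level drop for a point source in free air.
        Pressure amplitude falls as 1/r, so intensity falls as 1/r^2
        (the inverse square law): about -6 dB per doubling of distance."""
        delay_ms = distance_m / C * 1000.0
        level_db = 20.0 * math.log10(ref_m / distance_m)
        return delay_ms, level_db

    for d in (1, 2, 4, 8, 16):
        ms, db = arrival(d)
        print(f"{d:2d} m: {ms:5.1f} ms, {db:+6.1f} dB")
    # ->  1 m:   2.9 ms,  +0.0 dB  ...  16 m:  46.6 ms,  -24.1 dB

None of this, note, says anything yet about direction – that only appears when a head, or a microphone pair, interposes itself.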

Summary

When we talk of what a sound is, I suggest it is the net effect of all the above factors and until it has been interpreted by a conscious entity it is only vibrations in the air with the potential to yield information in the right circumstances.

To speak of where a ‘sound’ is located seems a senseless question unless we recognise that it is only a sound when the brain has processed the incoming impulses from the ear – and it is only an information-rich sound to which we can respond when it has been matched with our experience of sounds and we know what it means to us.

Defining sound as being ‘in the source object’ seems wrong because of the transformative journey the waves must take; ‘vibrations with the capacity to excite a transmission medium, shaped by the material of the object’ might be a start. The Medial theory of wave propagation has some useful features, as long as we realise and acknowledge that it is a time-dependent thing (as is all sound, of course). We might take an infinity of time slices to define it (I don’t have room here for differential calculus, but other researchers have), yet it seems to me not to be the sound but a (sometimes unreliable!) transmission medium. And the Proximal theory leaves us with hearing what is at the ear – but it still needs the intercession of intelligence to make an air vibration, no matter how sophisticated, into a sound that has meaning.

I will have to write up all the steps between my assertions and show all my citations (work underway!)

But here are a few key sources:

Blauert, J. (1997) Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, MA: MIT Press.

Bregman, A.S. (1994 [1990]) Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Maclachlan, D.L.C. (1989) Philosophy of Perception. Englewood Cliffs, NJ: Prentice Hall.

Moore, B.C.J. (2012) An Introduction to the Psychology of Hearing. 6th edn. Bingley: Emerald.

Nudds, M. and O’Callaghan, C. (eds.) (2009) Sounds and Perception: New Philosophical Essays. Oxford: Oxford University Press.

O’Callaghan, C. (2007) Sounds: A Philosophical Theory. Oxford: Oxford University Press.

The bibliographies in these books alone will give you a lifetime of reading.

mjkm August 2013

Electronic Music and the Idea of Performance

Something that always bothered me about performances of electronic music was the lack of a human touch. It might be that this is a desired outcome – certainly some composers would agree that the mechanical nature of the sound is within the aesthetic realm they wish to explore. It was whilst working with Csound [1] the other day that I got thinking. I had earlier spent quite some time working on the performance of a guitar piece, fiddling with slurs and fingerings until it sounded right, then trying out a different plucking position to see if the sounds might blend a little better, and it struck me that there is no equivalent refining process in electronic music. (I am sure I am wrong about this, so please flood me with counter-ideas – a conservatoire of electronic performance, maybe?)

There might be a number of reasons for this.

Firstly, in the early days there were few musicians working in the field – it was lab coats and degrees in physics and maths, and it showed. Much of the output was clever but didn’t move anybody – it was not an emotional experience – and this is still true to some extent today. We are fascinated by what we hear, but not moved.

Then, there is usually one and only one performance, which may become a CD and is never recorded again. Some of the recording artefacts are arbitrary and unrepeatable; often the composer (who is also the performer) has moved on to new ideas; perhaps the set-up (swinging microphones) will always produce something so different through its mechanics that the human touch is not relevant to the realisation. (An institute for excellence in microphone swinging?)

Nobody gets the chance to go back and re-examine how a piece might be better realised. Some of the great electro-acoustic pieces are fixed, in that what was done was done and is complete – sealed in its time-capsule recording forever. Whether we are discussing the ‘Étude aux chemins de fer’ of Pierre Schaeffer, a more recent work such as ‘Mutations’ by Jean-Claude Risset, or, more recently still, the wonderful work of Robert Normandeau (Google and enjoy), we are talking about works whose realisation is also a closed door on their future. I’m sure not many scores exist for future generations, and few instructions other than the CD itself.

The other reason that little is said about performance is that there is not much ‘hands-on’ work (except where live acoustic instruments are being processed). Working in Csound, or in an electronic sound lab processing a recording, you get the feeling of something much more like computer programming, or hand-knitting an item of winter wear. There is little or no discussion over the expression of a particular sequence (melodic) or the balancing of a chord (harmonic). Perhaps this is also because it takes months to programme something in Csound: the idea of slightly stressing and sharpening a leading note in a violin line is from another world. In electronic work the composer is more likely to require that a filter be opened in the last five seconds of the piece.

Where I am going with this is that they are different worlds. Performance of acoustic instruments is to some extent a black art; some never get it right, and some train for twenty years to become the great performers we hear. In electronic music, however, excellence is more concerned with mathematical innovation and sound mixing than with the idea of performance itself. And I mean to include live performance in this. Performances I have been involved with were more concerned with just getting the technology to work as specified; no musician had rehearsed the piece, nobody had worked out a series of options that would serve the sound better, nobody was interested in the man/machine interface.

There has been a lot of work on MIDI interfaces for performance to address this lack of human touch, but there is no literature on how to realise any particular piece, what refinements might be available, and so on. There is little in the way of musicological analysis that might be the starting point for interpretive discourse. Perhaps it is time I wrote one.

Maybe this situation comes about because of the fundamental unrepeatability of the event. There is no wrong reading of the score, no better interpretation, no comparative score from a similar composer that would inform its realisation. For this reason it is, to me, a different world. But this is not to denigrate it – I love the things it offers, and I am listening to Robert Normandeau as I write. But it is with very different ears that I will later assay the new recording of the Britten cello suites I have just bought.

I don’t think I have said all I want to on this topic yet… keep an eye out for a sequel.

[1] Csound is the lowest-level sound programming language for electronic music – it’s free software, but a ten-year study to master, and each composition takes an eternity to realise simply because every event has to be specified in incredible detail to get it right. It’s a bit like writing down everything involved in making a cup of tea – down to the three-dimensional mathematical trajectory of your hand as it lifts the kettle – an exercise in Zen mastery. (www.csound.com)

Stuart O’Connor and his Music


This is not a review of an album as such – more a review of a life’s work (so far). Stuart is an artist who works with words and music to engage his audience in his ideas. There – I made a distinction that says more than just ‘musician’. But I’m not going to demean Stuart, or you readers, by offering some shorthand pigeonhole in which to place him. That would be unfair to him and to pigeons. Stuart is what I like to call ‘the real deal’. We have all been to gigs and seen people strutting their stuff, and we know at some instinctive level that it is just an act – a construct designed as a platform for the ego, or a cynical money-getter, or something else. We don’t go away humming the tunes and we certainly don’t care to ponder the lyrics. There would be no point. I don’t want to share my consciousness with them.

Stuart has developed his own style under the tutelage of his own good taste and determination to innovate. In this innovation is a basic honesty – you feel that if he found a false note in anything he wrote he would quickly get rid of it and work the material again. His songs, like those of all the best songwriters, come from his life: a life in a white van, travelling around the country performing wherever he can get an audience – and further afield, swapping gigs with people in Australia, New Zealand and Japan. His creativity extends to the way he runs his whole venture. It is as if he decided to re-invent how the whole touring-musician thing should operate.

I’ve known Stuart for more than a decade, and in that time I have seen his output mature to the point where the newly released song video (why do we say ‘released’? it wasn’t in captivity or being artificially held back) (see link) sums up the standard of his work so far. Yes, he really does pull all that off live. The use of pedals and so on is not new – but he doesn’t let it dominate as an ‘axeman’ might; he lets the fireworks in the guitar rest below his lyrics, supporting and sustaining the target of his performance – the songs.

http://www.youtube.com/watch?v=lKAMXZD-Q_Y

I have been involved at the periphery of Stuart’s music in all its guises – as front man for the alternative band ‘My Pet Junkie’, as a solo artist, and now as the progenitor of the ‘Stuart O’Connor Band’. All incarnations are excellent, and I recommend hearing and seeing any of them if you can.

I’ll offer you www.stuartoconnor.com for a fuller picture, but hunt around the web (or just type the name into Google) and you’ll find a trove of music that defies compartmentalisation and ought to be heard more widely.

Support Stuart too – even if you don’t rush out and buy his albums (which you should), just go and give him some good feedback on his Facebook page, to let him know you appreciate the dedication to his art.