The tree can have several shapes, depending on how the performer moulds it. One picture of a tree has climbed through various stages of filtering; the print has reached the stage where wind shapes the frequencies and patterns, in a way symbolising the feedback process.
I have used various kinds of noise-generating UGens, contained in SynthDefs, to shape this piece. These SynthDefs are modulated by their own signal through the feedback engine. Finding 'just' the right volume without excessive feedback is the key to the performance. After switching on the engine, the player can decide either to let the incoming signal feed on itself or to play the table, clap or speak to generate more textures for the tree: a user-driven sound-generation process feeding on itself, played back from the same set of speakers. I played the table on which my laptop was sitting and also used my laptop's body to generate sound. An important point to consider: my laptop is acting as a sound-generating body both physically and electronically.
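To make the idea concrete, here is a minimal sketch of the kind of patch described above. The SynthDef names, UGen choices and parameter values are my own placeholders, not the original code: one noise texture built from dust-like impulses and filtered noise, and a 'feedback engine' that reads the laptop's input and sends it straight back out of the same speakers, so the room closes the loop.

(
s.waitForBoot({

    // A noise texture of the kind described: dust-like impulses plus wind-like noise.
    SynthDef(\dustStorm, { |out = 0, amp = 0.2|
        var sig = LPF.ar(Dust2.ar(40 ! 2), LFNoise1.kr(0.1).exprange(300, 4000));
        sig = sig + (BrownNoise.ar(0.05 ! 2) * LFNoise2.kr(0.3).range(0, 1));
        Out.ar(out, sig * amp);
    }).add;

    // The "feedback engine": the laptop's input is granulated/pitch-shifted and
    // reproduced by the same speakers the microphone hears.
    SynthDef(\feedbackEngine, { |out = 0, amp = 0.3, pitch = 1, grain = 0.2,
        timeDisp = 0.05, pitchDisp = 0.01|
        var in = SoundIn.ar([0, 1]);
        var sig = PitchShift.ar(in, grain, pitch, pitchDisp, timeDisp);
        Out.ar(out, Limiter.ar(sig * amp, 0.9));    // limiter guards ears and speakers
    }).add;
});
)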
I recorded this piece by close miking the speakers in Studio 2. This created another feedback environment in the chain and affected the overall timbre of the tree. "Life feeds on life, feeds on life and feeds on life." A similar concept applies to sOuNd.
In other words, the speakers used to play back the SynthDefs feed their own signal through the feedback engine, and this modified signal is reproduced by the same set of speakers. The end signal is then captured by two Neumann U 87 microphones as left and right and recorded into Pro Tools.
The recorded waveform has two sections: one constructed from dust particles and storms, and the next recorded by playing the table. This is very interesting. I have found that when my feedback engine is on and my computer's internal microphone is used as the 'line in', it turns the table into a drum. And not just the table; the whole environment is affected. Speech, abstract noises and even breathing get amplified, with control over parameters like pitch, grain size, time dispersion and pitch dispersion. As the player increases the output of the amplifier, a dense cluster of sonic cloud builds up (use this very carefully: if the volume is high, your ears are at risk, and so are the speakers).
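As a hedged illustration of that playing situation, the hypothetical \feedbackEngine sketched earlier could be started quietly and then steered with the same four controls named above; the values here are arbitrary, and in this sketch the grain (window) size is fixed when the synth is created.

// Start the feedback engine quietly, with the grain size chosen up front.
~fb = Synth(\feedbackEngine, [\amp, 0.05, \grain, 0.1]);

~fb.set(\pitch, 0.5);       // drop the captured table/speech/breath an octave
~fb.set(\timeDisp, 0.2);    // smear the grains in time
~fb.set(\pitchDisp, 0.02);  // slight random detune per grain

// Raise the output only in small steps; the loop can run away very quickly.
~fb.set(\amp, 0.15);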
The AB section is then arranged as Left = A B and Right = B(rev) A(rev). A mono file is bounced down and imported back into Pro Tools. I made another mono track of the bounce; this time I have A B B(rev) A(rev). I then made another mono channel, copied the track and hard-panned both channels, giving Left = A B B(rev) A(rev) and Right = A(rev) B(rev) B A.
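Purely to illustrate the ordering of the two channels, the layout can be mirrored in a few lines of SuperCollider; the actual edit was done in Pro Tools, and the arrays below are placeholders standing in for the recorded sections.

// Toy illustration of the channel layout (placeholder data, not the actual audio).
a = [1, 2, 3];    // section A
b = [4, 5, 6];    // section B

~left  = a ++ b ++ b.reverse ++ a.reverse;    // A B B(rev) A(rev)
~right = a.reverse ++ b.reverse ++ b ++ a;    // A(rev) B(rev) B A

[~left, ~right].postln;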
I then applied panning automation to both channels. The two tracks change position simultaneously, swapping speakers several times during the piece.
Score: my code contains instances of Synth which can be run and paused at any given time by the performer. It also contains user-changeable yet fixed scheduling durations for the various Synths, as well as user-defined random-duration scheduling for playing the different Synths. The player can decide when to switch the feedback engine on, and can then choose whether to let the signal feed on itself or to start playing anything and everything he desires in that acoustic space. Nothing is fixed by the author; go crazy with it.
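A minimal sketch of that score logic, assuming the hypothetical SynthDef names from the earlier examples and made-up durations: one routine pauses and resumes a running synth on a fixed cycle, another spawns textures at random intervals, and the feedback engine is left for the performer to switch on.

(
~dust = Synth(\dustStorm);

// Fixed but user-changeable durations: pause and resume the running synth.
~fixed = Routine({
    loop {
        ~dust.run(false); 4.wait;    // silent for 4 seconds
        ~dust.run(true);  8.wait;    // playing for 8 seconds
    }
}).play;

// Random-duration scheduling: spawn a texture, let it run, then rest, all at random.
~random = Routine({
    loop {
        var t = Synth(\dustStorm, [\amp, rrand(0.05, 0.2)]);
        rrand(2.0, 6.0).wait;
        t.free;
        rrand(1.0, 8.0).wait;
    }
}).play;

// The performer decides when to switch the feedback engine on:
// ~fb = Synth(\feedbackEngine, [\amp, 0.05]);
)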
jai rAm ji ki..
===================================================================
Audio Arts :
Tracks:
mumma
Cheap as a cigarette
Robbers Collar
contact: ezulai@optusnet.com.au
Lizabella Baker – Vocals, Flamenco dance/drums
Maura O’Reagan – Vocals, Whistle
Julius Crawford – Bass, keyboards
Christian Hodgson - Guitar
Mousiour Duffur - Drums
===========================================================================
Music in Context 3A: Music since 1900
" Mosque in war times "- 6'55 mins
download score
References:
Whittington, Stephen. 2007. Lectures presented for Music in Context 3A: Music Since 1900. Elder Conservatorium of Music, University of Adelaide.