Friday, November 16, 2007

Projects

Audio Arts : Sound Design for Film
"What is Music?" : film by Daniel Murtagh : music & sound design by Vinny Bhagat



Creative Computing : Programming with SuperCollider..
click here to download the project file..

Now, thinking back on this project, I wonder whether I have successfully completed it. I really don't know; it depends on what you call success. The amount of time I put into this did not yield the outcome I expected. I have certainly come to know a new language, a new tool for constructing sounds and music. My programming skills definitely came between my ideas and their implementation. There is some disappointment: towards the end I could not handle all this information properly. I am waiting to see what Christian has to say about it.

Tuesday, October 30, 2007

Week 12

Creative Computing : Spatialisation

Materials for the week

* Mix.rtf : Mixes an array of channels down to a single channel or an array of arrays of channels down to a single array of channels.

// Mix an array of channels
{ Mix.new([ ClipNoise.ar(0.2), FSinOsc.ar(440, 0.1), LFSaw.ar(20, 0.1)]) }.play;

( // fill an array of channels
{
n = 18; // number of voices
Mix.fill(n, { SinOsc.ar(50 + 500.0.rand, pi, 0.05) });
}.play;
)

* MultiOutUGen.rtf : A superclass for all UGens with multiple outputs. MultiOutUGen creates the OutputProxy UGens needed for the multiple outputs.

* OutputProxy.rtf : A placeholder for multiple outputs, sometimes used by UGens.
// Pan2 uses an OutputProxy for each of its two outputs.
({
var out;
out = Pan2.ar(WhiteNoise.ar, SinOsc.kr(0.5));
out}.play;
)
* BiPanB2.rtf : Encodes a two-channel signal to two-dimensional ambisonic B-format. This puts two channels at opposite poles of a 2D ambisonic field, and is one way to map a stereo sound onto a soundfield. It is equivalent to:
PanB2(inA, azimuth, gain) + PanB2(inB, azimuth + 1, gain)
BiPanB2.kr(inA, inB, azimuth, gain)
{
var w, x, y, source1, source2, a, b, c, d;

source1 = SinOsc.ar(200);
source2 = SinOsc.ar(202);

// B-format encode
#w, x, y = BiPanB2.ar(inA: source1,
inB: source2,
azimuth: MouseX.kr(-1,1),
gain: 0.1);

// B-format decode to quad
#a, b, c, d = DecodeB2.ar(numChans: 4,
w: w,
x: x,
y: y
);
[a, b, d, c] // reorder to speaker arrangement: Lf Rf Lr Rr
}.play;

* DecodeB2.rtf : 2D Ambisonic B-format decoder. Decodes a two-dimensional ambisonic B-format signal to a set of speakers in a regular polygon. The outputs are in clockwise order; the position of the first speaker is either centre or left of centre.
( // Theremin model
{
var theremin, w, x, y, a, b, c, d;

theremin = SinOsc.ar(freq: MouseY.kr(3200, 200, lag: 0.5, warp: 1)
*
SinOsc.kr(freq: 6, mul: 0.02, add: 1), // Vibrato
mul: abs(MouseX.kr(0.02, 1))
); //Amplitude

// B-format encode
# w, x, y = PanB2.ar(in: theremin,
azimuth: SinOsc.kr(2* pi),
gain: 0.5
);

// B-format decode to quad
#a, b, c, d = DecodeB2.ar(numChans: 4,
w: w,
x: x,
y: y
);

[a, b, d, c] // reorder to speaker arrangement: Lf Rf Lr Rr
}.play;
)
* LinPan2.rtf : Two channel linear panner.
"Sounds more like the Rhodes tremolo than Pan2."
{ Out.ar(0, LinPan2.ar(VarSaw.ar([200, 201], pi, 0.1), SinOsc.kr(1))) }.play;

* LinXFade2.rtf :
Two channel linear crossfader.
{ LinXFade2.ar(FSinOsc.ar([800,804], 0, 0.2), VarSaw.ar(0.2), FSinOsc.kr(pi)) }.play;
* Pan2.rtf : Two channel equal power panner.
(
{Pan2.ar( //pan position
FSinOsc.ar(exprand(700, 2000), 0,
max(0, LFNoise1.kr(3/5, 0.9))),
LFNoise1.kr(1))
}.play(s)
)
* Pan4.rtf : Four channel equal power panner.
( // phone ring
{
var lfo,in;
lfo = LFPulse.ar(freq: 15, mul: 200, add: 1000);
in = SinOsc.ar(lfo, mul: 0.5);
in = Pan4.ar(in, SinOsc.kr(2), VarSaw.kr(1.2),1);
}.play
)
* PanAz.rtf : Azimuth panner : Multichannel equal power panner.
Server.internal.boot;
(
{
var trig, out, delay;
trig = Impulse.kr(freq: 10);
out = Blip.ar(
freq: TRand.kr(0, 50, trig).midicps,
numharm: TRand.kr(1, 12, trig),
mul: max(0, TRand.kr(-0.5, 0.4, trig))
);
out = Pan2.ar(in: out,
pos: TRand.kr(-1.0, 1.0, trig)
);

out = out * EnvGen.kr(Env.perc(attackTime: 0,
releaseTime: 1),
gate: trig
);
out = Mix.ar({out}.dup(12))*0.2;
delay = CombL.ar(in: out,
maxdelaytime: 2.0,
delaytime: 4/6,
decaytime: 0.1
);
out = out + delay;

// five channel circular panning
PanAz.ar(
numChans: 5,
in: out,
pos: LFSaw.kr(MouseX.kr(0.1, 10, 'exponential')),
level: 0.5,
width: 3
);
}.play(Server.internal);
Server.internal.scope;
)

* PanB.rtf : Ambisonic B-format panner. Encodes a mono signal to B-format, outputting four channels (W, X, Y, Z).
* PanB2.rtf : 2D Ambisonic B-format panner. Encodes a mono signal to two dimensional ambisonic B-format.
* Rotate2.rtf : Rotates a sound field : Rotate2.kr(x, y, pos)
Rotate2 can be used for rotating an ambisonic B-format sound field around an axis. It does an equal-power rotation, so it also works well on stereo sounds. It takes two audio inputs (x, y) and an angle control (pos), and outputs two channels (x, y).
It computes this:
xout = cos(angle) * xin + sin(angle) * yin;
yout = cos(angle) * yin - sin(angle) * xin;
where angle = pos * pi, so that -1 becomes -pi and +1 becomes +pi.
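The help file's own style of example can be sketched like this: two distinct sources sit at the poles of the field, and MouseX supplies pos; the particular sources and ranges here are illustrative.

```supercollider
(
{
    var x, y;
    x = PinkNoise.ar(0.4);                            // one pole of the field
    y = LFTri.ar(800) * LFPulse.kr(3, 0, 0.3, 0.2);   // the other pole
    // pos runs from -1 (angle -pi) to +1 (angle +pi)
    #x, y = Rotate2.ar(x, y, MouseX.kr(-1, 1));
    [x, y]
}.play;
)
```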

* SelectX.rtf : Mix one output from many sources.
The output is mixed from an array of inputs, linearly interpolating from two adjacent channels.
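A minimal illustrative sketch, with three arbitrary sources and the mouse as the (fractional) index:

```supercollider
(
{
    // MouseX sweeps the index 0..2; in-between values crossfade
    // linearly between the two adjacent channels
    SelectX.ar(MouseX.kr(0, 2), [
        SinOsc.ar(440, 0, 0.2),    // index 0
        Saw.ar(440, 0.2),          // index 1
        Pulse.ar(440, 0.5, 0.2)    // index 2
    ])
}.play;
)
```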

* SelectXFocus.rtf : Mix one output from many sources.

The output is mixed from an array of inputs, linearly interpolating from a number of adjacent channels. A focus argument controls how many adjacent sources are mixed. (by adc)

* Splay.rtf :
Splay spreads an array of channels across the stereo field.
* SplayZ.rtf : SplayZ spreads an array of channels across a ring of channels.
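For example, an illustrative one-liner (the ten detuned saws are arbitrary):

```supercollider
// ten saws at random frequencies, spread evenly across the stereo field
{ Splay.ar(Saw.ar({ exprand(200, 800) } ! 10, 0.1)) }.play;
```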
* XFade2.rtf : Equal power two channel cross fade.
* Select.rtf : Select one output from many sources.
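Two short illustrative sketches of the difference, using arbitrary sources: XFade2 interpolates with an equal-power curve, while Select hard-switches between channels.

```supercollider
// equal-power crossfade between two sources, pan from -1 to +1
{ XFade2.ar(SinOsc.ar(440, 0, 0.3), PinkNoise.ar(0.3), MouseX.kr(-1, 1)) }.play;

// hard switch: only the selected channel is heard
(
{
    Select.ar(MouseX.kr(0, 2).round, [
        SinOsc.ar(440, 0, 0.2),
        Saw.ar(440, 0.2),
        Pulse.ar(440, 0.5, 0.2)
    ])
}.play;
)
```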

References :

* Haines, Christian. "Workshop 12, Semester 2: Spatialisation. Programming with SuperCollider". 25 October 2007. Electronic Music Unit, University of Adelaide, South Australia.
* McCartney, James et al. 2007, SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Audio Arts : Film Sound
Class Notes:
* Surround Sound in Film
* Dialogue : it must be crystal clear in the mix. You can cut the music in the range of roughly 600 Hz to 3 kHz so that the dialogue sits clearly on top of the mix.
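As a rough SuperCollider sketch of that idea: the 600 Hz - 3 kHz band comes from the class notes, but the centre frequency, bandwidth and cut amount below are my own placeholder values.

```supercollider
(
{
    // stand-in for the music bed
    var music = PinkNoise.ar(0.3 ! 2);
    // broad cut centred inside the 600 Hz - 3 kHz speech band,
    // leaving room for the dialogue to sit on top
    MidEQ.ar(music, freq: 1500, rq: 1.5, db: -9);
}.play;
)
```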
Ducking :
It is an effect where the level of one signal is reduced by the presence of another signal, through the use of side chain compression.
Side-chaining uses the dynamic level of another input to control the compression level of the signal.
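A hedged sketch of side-chain ducking with Compander, using synthetic stand-ins for the dialogue and the music; all thresholds and slopes are arbitrary placeholder values.

```supercollider
(
{
    var dialogue, music;
    // a pulsing tone stands in for intermittent dialogue
    dialogue = Decay2.ar(Impulse.ar(0.5), 0.01, 1.5) * SinOsc.ar(300, 0, 0.4);
    // a drone stands in for the music bed
    music = Mix(Saw.ar([220, 221, 330], 0.1)) ! 2;
    // the control input is the dialogue, so the music level drops
    // whenever the dialogue is present
    music = Compander.ar(music, dialogue,
        thresh: 0.05,
        slopeBelow: 1,
        slopeAbove: 0.2,   // heavy gain reduction above threshold
        clampTime: 0.01,
        relaxTime: 0.3);
    music + (dialogue ! 2)
}.play;
)
```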
* THX (Tomlinson Holman's eXperiment) is the trade name of a high-fidelity sound reproduction standard for movie theaters, screening rooms, home theaters, computer speakers, gaming consoles, and car audio systems. It was developed by Tomlinson Holman at George Lucas's company Lucasfilm in 1982. THX is mainly a quality assurance system: a playback environment that ensures any film soundtrack mixed in THX will sound as near as possible to the intentions of the mixing engineer.

Films with either digital (Dolby Digital, SDDS) or analog (Dolby SR, Ultra-Stereo) soundtracks can be "shown in THX".

References:
* Harrald, Luke. 2007. Audio Arts 3: Film Sound. Electronic Music Unit, University of Adelaide, South Australia.
* http://en.wikipedia.org/wiki/THX , viewed 31 October 2007
* http://www.thx.com , viewed 31 October 2007

Monday, October 22, 2007

Week 11

Creative Computing : FFT(2)

Keywords :
* Josh_PV_Ugens
PV_EvenBin : Returns the even-numbered bins in an FFT buffer, resynthesizing only the even bins; similarly, PV_OddBin resynthesizes only the odd bins.
PV_FreqBuffer(buffer, databuffer) : stores the freq values from an FFT analysis into a buffer to be used outside the FFT process
databuffer - a buffer of (fft buffer size / 2) for storing freq or mag data in
PV_Invert :
PV_MagBuffer(buffer, databuffer) : Store FFT data in another buffer for other uses.
databuffer - a buffer of (fft buffer size / 2) for storing freq or mag data in
PV_MagMap : Remap magnitudes to a new mag curve
PV_MaxMagN : return the N strongest bins
PV_MinMagN : return the N weakest bins
PV_NoiseSynthF : Decisions are based on whether or not freq data across numFrames is within a given threshold; returns only the bins that are unstable.
PV_NoiseSynthF(buffer, threshold, numFrames)
PV_NoiseSynthP : PV_NoiseSynthP and PV_PartialSynthP base these decisions on whether or not phase data across numFrames is within a given threshold.
buffer - the FFT buffer
threshold - a phase value (in radians) with which to allow values to pass through or be zeroed out
numFrames - the number of FFT frames needed to make the above decision
initflag - if 0, all bins are zeroed out while the initial is calculated, if 1, all bins pass through.

PV_OddBin : Return the odd numbered bins in an FFT buffer

MCLD Ugens

CQ_Diff : Logarithmic spectral difference measure.
CQ_Diff.kr(in1, in2, databufnum)
FFTDiffMags: Compares the spectra of two signals, finding the magnitude of the difference for each frequency bin. These differences are averaged onto the (control-rate) output.
FFTDiffMags.kr(chain1, chain2)

FFTFlatness : Calculates the Spectral Flatness Measure, defined as a power spectrum's geometric mean divided by its arithmetic mean. This gives a measure which ranges from approx 0 for a pure sinusoid, to approx 1 for white noise.
FFTFlatnessSplitPercentile : Splits the FFT power spectrum into two parts - above and below a given percentile and then calculates the spectral flatness measure for the two parts of the spectrum.
# lower, upper = FFTFlatnessSplitPercentile.kr(chain, fraction)
FFTFlux : Calculates the spectral flux of the signal, which is a measure of the rate of change of the FFT power spectrum. It measures the difference between the current and previous FFT frames, by calculating the 2-norm (the Euclidean distance between the two spectra) after normalising for power.
FFTInterleave : Takes two FFT "chain" signals and mixes them together. The FFT data is not actually combined, rather the trigger signals which indicate that a new FFT buffer is ready for processing are combined. The first chain takes priority: if both chains fire at the same time, then the frame from the second will be ignored.
FFTPercentile : Calculates the cumulative distribution of the frequency spectrum, and outputs the frequency value which corresponds to the desired percentile.
FFTPower : Sum of instantaneous FFT magnitudes. Operates on the frequency-domain rather than the time-domain representation.
FFTSubbandPower : Calculates the spectral power measure, in the same manner as FFTPower, but divides the spectrum into (adjacent, non-overlapping) subbands, so returns separate power measures for the different subbands.
#[power1, power2, ... powerN+1] = FFTSubbandPower.kr(chain, [cutfreq1, cutfreq2, ... cutfreqN], incdc)
FFTTriggered : Based on [FFT], but analyses the signal only when triggered, rather than in a continuous sequence. The point is to be able to synchronise analysis windows exactly with trigger signals. Its purpose is for spectral analysis rather than "phase vocoder" manipulation, since IFFT typically won't be able to reconstruct a continuous audio signal.
chain = FFTTriggered(buffer, input, trig, maxoverlap=0.5)
FincoSprottL : chaotic system UGen
FincoSprottM : chaotic system UGen
FincoSprottS : chaotic system UGen
ListTrig : Emit a sequence of triggers at specified time offsets
Logger : Store values to a buffer, whenever triggered
RosslerL : A strange attractor discovered by Otto Rossler based on work in chemical kinetics.
The system is composed of three ordinary differential equations:

x' = - y - z
y' = x + ay
z' = b + z(x - c)

Readings :
More Simple Chaotic Flows with ABS Nonlinearity : http://sprott.physics.wisc.edu/chaos/finco/abs.html
some of the examples might use the FFT plugins from the library of Bhob Rainey
http://bhobrainey.net

References:
* Haines, Christian. "Workshop 11, Semester 2: Fast Fourier Transform. Programming with SuperCollider". 18 October 2007. Electronic Music Unit, University of Adelaide, South Australia.
* Parmenter, Josh. 2007, JoshPV SuperCollider Library.
* Stowell, Dan. 2007, Signal Analysis - SuperCollider Plugins - MCLD UGens.
* IXI Tutorial 10 on FFT. www.ixi-software.net
* McCartney , James et al . 2007,SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Audio Arts : Film Music
This week we presented a draft for our allotted films. If you are not aware, I am doing the film made by Dan. I like the feel of the film and the question it raises: "What is Music?". So far I have added the background track and the audio present with the video. I somehow like the minimalist character of what I presented. After spending a bit more time on the musical ideas, I decided they are not really required. After adding the foley sounds, the film can take on quite a different and unique character. I do want to introduce a paced motif when the car scene comes up. There are two interviews in the film, with a little more voice to be added on. The sound design work has to be quite precise and effective. That's where I am. What is Music: successfully conveying the idea across and convincing the listener that this is music - Mark Carroll.

Wednesday, October 17, 2007

Week 10

Creative Computing : GUI(3)

Keywords and lecture Notes:

* BoxGrid.help.rtf :
* Grid.help.rtf :
* MIDIKeyboard.help.rtf :
* ParaSpace.help.rtf :
* ScrollNumberBox.help.rtf :
* Software, IXI 2006, Experimental Music Software -
Backyard, IXI, 2007... www.ixi-software.net

Task of the Week:
===================================================================
/*
VarSaw : Variable duty saw
Lag2 : It is equivalent to Lag.kr(Lag.kr(in, time), time), thus resulting in a smoother transition. This saves on CPU as you only have to calculate the decay factor once instead of twice.
*/
// Sound Source
(
SynthDef("saw+tri",{|freq, amp=0.90|

var signal,
signal1;

signal = VarSaw.ar(Lag2.kr(freq, 60.1), 0, amp)!2;
signal1 = LFTri.ar(Lag3.kr(freq, 62.1), 0, amp)!2;
signal = (signal * signal1) ;
Out.ar(0, signal);
}).load(s);
)
/* List is a subclass of SequenceableCollection with unlimited growth in size.
SCWindow : User interface Window
ParaSpace is a GUI widget, similar to the SCEnvelopeView, but has some
additional functionality and it is easier to use. One can select many nodes
at the same time and drag them around.
A Synth is the client-side representation of a synth node on the server. It represents a single sound producing unit.
*/
// Set up the ParaSpace
(
l = List.new;
w = SCWindow("ParaSpace", Rect(10, 500, 800, 300));
a = ParaSpace.new(w, bounds: Rect(20, 20, 760, 260));

76.do({arg i;
a.createNode(3+(i*10), 130);
l.add(Synth("saw+tri", [\freq, 150, \amp, 0.04])); // starting frequency
});
75.do({arg i;
a.createConnection(i, i+1);
});

/* Task is a pauseable process. It is implemented by wrapping a PauseStream around a Routine. Most of its methods (start, stop, reset) are inherited from PauseStream.
SystemClock is more accurate, but cannot call Cocoa primitives.
AppClock uses NSTimers and is less accurate, but it can call Cocoa primitives.
*/
t = Task({
var d;
inf.do({arg i;
76.do({arg j;
d = ((i*(j/100)).sin*120)+130;
a.setNodeLoc_(j, 3+((j%76)*10), d);
l[j].set(\freq, 500+(400 - (d*4))); // emerging frequency,travelling range
});
0.25.wait;
})
}, AppClock);
t.start;
)
t.stop;
===========================================================

References :
* Haines, Christian. "Workshop 10, Semester 2: GUI (3). Programming with SuperCollider". 11 October 2007. Electronic Music Unit, University of Adelaide, South Australia.
* IXI softwares, www.ixi-software.net
* McCartney , James et al . 2007,SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Tuesday, October 09, 2007

Week 9

Creative Computing : GUI(2)
Keywords for the Week:
* Color : Each component has a value from 0.0 to 1.0, except in new255 :
Color.new(red, green, blue, alpha), Color.rand(hue, sat, val, alpha)
* Document : Opens up a text document. Document(title, text, isPostWindow);
background_ sets the background color of a Document
(
a = Document("background", "'documentwindow");
a.background_(Color.blue(alpha:0.8));
)

* Font : command-T to look for Font names.
* SC2DSlider : 2 dimensional slider
* SC2DTabletSlider : a 2D slider with support for extended wacom data
* SCButton : each state: [ name, text color, background color ] . Failure to set any states at all results in an invisible button.
SCCompositeView : A view that contains other views.
SCEnvelopeView : opens the envelope window whose nodes can be moved.
SCHLayoutView : Puts the sliders etc horizontally in a formatted layout.
SCVLayoutView : Puts the sliders etc vertically in a formatted layout.
SCMultiSliderView : Multisliders in a single window.
SCNumberBox : Number Box
SCPopUpMenu : Pop up menu
SCRangeSlider : sliders within a specified range
SCTabletView : Tablet view. Can receive data from a Wacom tablet or the mouse.
SCTextField : Text based entry
SCView : SCView is the abstract superclass for all SC GUI widgets.
SCWindow : User Interface window : SCWindow.new(name, bounds, resizable, border);
resize : resize behavior for SCView..
1 - fixed to left, fixed to top
2 - horizontally elastic, fixed to top
3 - fixed to right, fixed to top

4 - fixed to left, vertically elastic
5 - horizontally elastic, vertically elastic
6 - fixed to right, vertically elastic

7 - fixed to left, fixed to bottom
8 - horizontally elastic, fixed to bottom
9 - fixed to right, fixed to bottom
-------------------------------------------------------------------------
Quiz : The error was that it should be "states" instead of "state".
Right : but.states = [["suffering", Color.black, Color.red], ["living", Color.black, Color.blue]];
Wrong : but.state = [["suffering", Color.black, Color.red], ["living", Color.black, Color.blue]];
code
(
var win, but;

win = SCWindow(
name: "Panel"
);
win.front;

but = SCButton(
parent: win,
bounds: Rect(20, 20, 230, 50)
);
but.states = [["suffering",Color.black,Color.red],
["living",Color.black,Color.blue]];
but.action = {"hello".postln;}
)

References :
* Haines, Christian. "Workshop 9, Semester 2: GUI (2). Programming with SuperCollider". 4 October 2007. Electronic Music Unit, University of Adelaide, South Australia.
* IXI softwares, www.ixi-software.net
* McCartney , James et al . 2007,SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Film Sound:
Forum : Bent Leather Band

Mid sem break

What did I do? I had to move house, one of the key problems I have faced being here as an international student (which was obviously my decision). Do I need that? Wouldn't it be nice to go back home, have a stable place, my own place, a permanent drumkit/percussion setup/piano and my own studio setup, and spend all my time without worrying literally about anything? The answer is: I have already spent that time. Now I am away from home, far away, and the time for that is over. It will come back, though not now. Meanwhile I am in search of another home. Is it Adelaide? Not so far, not in 4 years, so how come now.. {i hate to move * inf;}.println
I did have some great time in the holidays, although not a lot of study. Alas, I am kicking now, baby, don't you worry, I will fly high, I believe I can fly, I believe I can touch the sky, tara ra ra ra rara, I want to spread my wings and fly away.. shhhhhhhh

Thursday, September 13, 2007

Instrument

So far the piezo seems to be the basis of my instrument. I want to scratch the surface of the piezo with different objects like a bow, a brush or a hand grinder. Putting it under water and then hitting it on the surface of the glass also gives interesting metallic sounds. The brush and grinder give quite different characters to the resulting piezo vibrations/sound. Generating complex rhythmic patterns at a very high tempo by touching the surface of the piezo with a grinder fascinates me quite a lot: an easy and interactive way of generating metallic rhythmical patterns.
Further exploring these ideas, I want to apply all these techniques, take the piezo as a line-in into the computer and use SuperCollider (or perhaps Arduino) to process the incoming signal. So far I have experimented with pitch-shifting the incoming piezo signal and convolving it with other sound sources.
I need to look more into the Arduino patches and explore them to make interesting sound material. However, I am more interested in sticking to SuperCollider, since I am in a learning phase with it and spending more time on it will be more beneficial for me (I would need approval to do that).
Breadboarding is interesting, however it is quite fragile and a temporary solution (at least for me). Old-school technology is fine, but I find it very time consuming and not quite an ideal approach for me to make music with at this stage. There is so much I can do with computers, and playing with toys I can leave for other people to explore. Playing with ICs and resistors is OK, but I find it somewhat pointless at this stage: (1) so far it has not been a priority and I would need to spend a lot more time; (2) time is very crucial and I have to be very sure where I am putting it. I like electronics and their construction, and I understand they might open up new possibilities for me; that is why I am not ruling them out totally. I will definitely be exploring them in the holidays, and hopefully I can use them as part of my instrument.
I would actually like to have a kind of percussion kit (a basket of a few hand-constructed objects or miscellaneous tools), all hooked up to my computer through a line-in or a microphone, changing their character and timbre by applying various synthesis techniques.
This is a sketch; I will have more ideas as time goes by in the holidays.

Wednesday, September 12, 2007

Week 7

Creative Computing : Physical Modelling (2)

Quoting the pmpd documentation intro: "These objects provide real-time simulations, especially physical behaviours. pmpd can be used to create natural dynamic systems, like a bouncing ball, string movement, Brownian movement, chaos, fluid dynamics, sand, gravitation, and more. It can also be used to create displacements, thus allowing a completely dynamic approach. With pmpd, physical dynamics can be modelled without knowing the global equation of the movement. Only the cause of the movement and the involved structure are needed for the simulation. pmpd provides the basic objects for this kind of simulation; assembling them allows the creation of a very large variety of dynamic systems."

CODE
Sound Examples:

References:
* Haines, Christian. "Workshop 7, Semester 2: Physical Modelling - the PMPD library by Cyrille Henry for Pure Data. Programming with SuperCollider". 6 September 2007. Electronic Music Unit, University of Adelaide, South Australia.
* PMPD : Documentation and examples
* Collins, Nick and Olofsson, Fredrik. Tutorials on SuperCollider. Chapter 11.1
* McCartney , James et al . 2007,SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Tuesday, September 04, 2007

Week 6: mid way

Creative Computing: Physical Modelling

'Physical modeling' refers to the use of computers to model or simulate the sounds of traditional musical instruments or any analogue sound.
"There are two different general methods for synthesis of musical instrument sounds. One approach is to look at the spectrum of a real instrument and try to recreate it. This includes methods such as additive synthesis and frequency modulation (FM). These produce sounds with similar structure, but the parameters involved have no relation to the physical parameters of an instrument. The other popular approach is to use a sample of the instrument, such as in wavetable synthesis and samplers. In both of these cases, you're creating sounds without any consideration for how the real instrument actually creates those sounds.

"Karplus-Strong string synthesis is a method of physical modelling synthesis that loops a short waveform through a filtered delay line to simulate the sound of a hammered or plucked string or some kind of percussion . This is a subtractive synthesis technique based on a feedback loop similar to that of a comb filter."

References:

* Haines, Christian. "Workshop 6, Semester 2: Physical Modelling - Karplus-Strong string synthesis. Programming with SuperCollider". 30 August 2007. Electronic Music Unit, University of Adelaide, South Australia.
* Anon. 2007, Karplus-Strong string synthesis.
* "Tutorial 9". IXI 2007, Software, IXI.
* "Chapter 16". Cottle, David Michael 2005, Computer Music (with examples in SuperCollider 3).

Audio Arts : Film Concepts
Feedback is the signal that is looped back to control a system within itself. This loop is called the feedback loop. A control system usually has input and output to the system; when the output of the system is fed back into the system as part of its input, it is called the "feedback."
In cybernetics and control theory, feedback is a process whereby some proportion of the output signal of a system is passed (fed back) to the input. This is often used to control the dynamic behavior of the system.

My work is based on feedback theory and digital manipulation. Visuals are generated by the process of video feedback and then manipulated using various filters. The end result will be a collage of various shots and filtered material applied to the generated feedback.
(my film analysis coming soon)


Friday, August 31, 2007

Week5

Creative Computing: Granular Synthesis (2)

Keywords of the Week:
granulation, granular synthesis, source sound, buffer, duration, amplitude, interpolation,
overlap, rate, pitch, envelope, tone, sound file, panning, spatialisation, 'hand coded',
customised.

Task of the Week:
CODE
Sound Examples:
1) Pure tone granulation
2) Sine granulation inspired by Zannos's method
3) Buffer granulation
4) Signal arithmetic and granulation
Any number of permutations can be applied to various oscillators for the desired result.
Any number of permutations can be applied to various oscillators for the desired result:
((osc1 / osc2) + (osc / osc3) * osc4) * env;
((osc1 * osc2) - (osc * osc3) / osc4) * env;
((osc1 * osc3) / (osc * osc2) * osc4) * env;
Etc.

Steps - granular synthesis and the various parameters:

Grain size: Create an envelope - the envelope duration will then be the grain size
Grain amplitude: Change the maximum value of the envelope mentioned above
Grain amplitude dispersion: Add a random factor to the above amplitude
Pitch: Change the rate value of the PlayBuf
Pitch Dispersion: Add a random factor to the Pitch value
Density: Use some sort of signal (e.g. Pulse) to trigger the envelope - if you use Pulse, then a higher frequency will generate a higher density, a lower frequency a lower density.
Time Dispersion: Add a random factor to the frequency of your Pulse trigger signal
No. of Grains: Create/destroy more instances of the synth

Step 1: Creating an envelope
add the variable env

env = EnvGen.kr(Env.new([0,1,1,0], [1,1,1]), trigger, grainamplitude, 0, grainsize);

Work out trigger, grainamplitude and grainsize:

trigger = LFPulse.kr(density+timedispersion)
grainamplitude = Grain Amplitude + Grain Amplitude Dispersion
grainsize = Grain Size + random factor if wanted

Once this is done multiply the output of the synthdef by env.
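Putting the steps above together, a minimal hand-coded sine-grain sketch; every parameter value below is an arbitrary placeholder.

```supercollider
(
{
    var trigger, grainsize, grainamp, env;
    var density = 20, timedispersion = 4;

    // density (plus a random spread) drives the grain trigger
    trigger = LFPulse.kr(density + LFNoise0.kr(1, timedispersion));

    grainsize = 0.03 + 0.02.rand;            // grain size + random factor
    grainamp  = 0.2 + LFNoise1.kr(2, 0.05);  // grain amplitude + dispersion

    // envelope duration = grain size; its peak = grain amplitude
    env = EnvGen.kr(Env.new([0, 1, 1, 0], [1, 1, 1]), trigger,
        grainamp, 0, grainsize);

    // pitch + pitch dispersion, multiplied by the grain envelope
    SinOsc.ar(600 + TRand.kr(-50, 50, trigger)) * env ! 2
}.play;
)
```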

References:
* Haines, Christian. "Workshop 5, Semester 2: Granular Synthesis. Programming with SuperCollider". 23 August 2007. Electronic Music Unit, University of Adelaide, South Australia.
* Granular Synthesis by Eric Kuehnl : http://music.calarts.edu/~eric/gs.html , viewed 21 August 2007
* "Tutorial 08 - Granular Synthesis". SuperCollider Tutorial, 2007.
* "Granular Synthesis. A very simple Granular Synthesis case:", "First steps". McCartney, James et al. 2007, SuperCollider Inbuilt Help, SourceForge.
* Collins, Nick and Olofsson, Fredrik. Tutorials on SuperCollider. Chapter 7.2
* McCartney, James et al. 2007, SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Wednesday, August 22, 2007

Week 4

Creative Computing :Granulation

"Granular synthesis is an innovative approach to the representation and generation of musical sounds" (DePoli 139).The grain is a unit of sonic energy possessing any waveform, and with a typical duration of a few milliseconds, near the threshold of human hearing. The typical duration of a grain is somewhere between 5 and 100 milliseconds. If the duration of the grain is less than 2 milliseconds it will be perceived as a click. The most musically important aspect of an individual grain is its waveform. The variability of waveforms from grain to grain plays a significant role in the flexibility of granular synthesis. Fixed-waveforms (such as a sine wave or saw wave), dynamic-waveforms (such as those generated by FM synthesis), and even waveforms extracted from sampled sounds may be used within each grain.

Asynchronous granular synthesis (AGS) was an early digital implementation of granular representations of sound. When performing AGS, the granular structure of each "cloud" is determined probabilistically in terms of the following parameters:

1. Start time and duration of the cloud
2. Grain duration (Variable for the duration of the cloud)
3. Density of grains per second (Also variable)
4. Frequency band of the cloud (Usually high and low limits)
5. Amplitude envelope of the cloud
6. Waveforms within the grains
7. Spatial dispersion of the cloud

References:
* Haines, Christian. "Workshop on Granular Synthesis. Programming with SuperCollider". 16 August 2007. Electronic Music Unit, University of Adelaide, South Australia.
* Granular Synthesis by Eric Kuehnl : http://music.calarts.edu/~eric/gs.html , viewed 21 August 2007
* Collins, Nick and Olofsson, Fredrik. Tutorials on SuperCollider. Chapter 7.2
* McCartney, James et al. 2007, SuperCollider Inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Audio Arts: Film Analysis
I somehow have to accept the casualness of this class. The topic, as you can see, is huge and can get really interesting and informative. (In any case, self-research is the key to learning, and I don't expect to become an expert by attending those 12 hours.) However, I do expect inspiration and valuable sessions from my lecturers. I certainly believe that a classroom with better audio-visual facilities would enhance the experience of any examples we talk about in class.
Next week is the presentation of our movie, and throughout the week I will be working on that. A few results I got from video feedback are really interesting. This is done by pointing the camera at the computer screen while Final Cut Pro is capturing the live input. I will base my work around that and more digital manipulations.

This week, we briefly talked about the various aspects of film music.
Theme music, as the word suggests, is the title music of the film. It is the sonic image of the film and refers to a piece of music that is written specifically for it.
Mood music refers to setting the mood of the film or of the sequence coming ahead.
Character music / leitmotifs : represent a character or remind us of a place/motive.
Environment music : sounds from the place/space/environment projected on the screen.
Action music : fast-paced music accompanying action scenes and climaxes, often using high-tempo arpeggios.

In the later part of the hour we looked at the film Team America: World Police by Trey Parker and Matt Stone. Thanks, Luke, for the DVD; I confess I have watched the whole film. It is not the kind of film I would normally choose to watch, because of the political message it carries. That said, I think it is a very well made film: it contains a clear message, some good sound design components and some great arrangements. Music plays an important role in setting the mood of the film. Music suggesting a part of the world or a culture is very well used, for example the Middle Eastern vocal singing; at the beginning an accordion is used to depict Paris. The theme music represents an American way, and the string arrangements play an important role in depicting the mood.

I was going through the film scoring course material of the Berklee School of Music. It is quite elaborate and different from how we approach it, but I thought there are some good individual topics in it worth studying.

Film Scoring 101: Berklee Music

Lesson 1: Drama and Music

  • Absolute Music vs. Functional Music
  • List Situations Where Music Provides Support
  • Early Film and Sound Technology
  • Identifying Dramatic Intent
  • Identify Emotions for Music
  • Think Like a Director
  • Quiz

Lesson 2: Dramatic Functions

  • Focusing on the Visual
  • A Symbiotic Relationship
  • Relationship Between Visuals and Music of Scenes
  • More Symbiosis
  • Dramatic Function: Three General Categories
  • Identify Dramatic Functions of Scenes from "To Kill a Mockingbird"

Lesson 3: Spotting for Music

  • Spotting
  • Considerations When Spotting
  • Analyze the Spotting Process With a Scene
  • Analyze the Spotting/Scoring of Several Scenes
  • Spot Two Scenes

Lesson 4: Film Terminology and Dramatic Application

  • The Stages of Film Production
  • Setting Up and Shooting a Scene
  • Analysis of Scene Structure
  • Film Grammar and Linear Structure
  • Scene Comparison with and without Music
  • Basic Film Terminology
  • Photographic Processing Effects
  • Identify Editing Techniques and Photographic Effects
  • Camera Movement and Perspective
  • Breaking Down a Scene

Lesson 5: Working with SMPTE Time Code

  • SMPTE Time Code
  • Digital Audio – Digital Video
  • SMPTE and Relative Time (aka-Clock Time or Running Time)
  • Determine Relative Time
  • SMPTE and Relative Time Conversion Tools
  • Importing Video and Creating an Offset Start Point
  • Using Markers in Your Video and Sequence
  • Adding Markers in a Movie Clip
  • Quiz
  • Assignment
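The SMPTE-to-relative-time conversion in Lesson 5 boils down to simple arithmetic; here is a sketch in SuperCollider (my own helper, not Berklee's material):

```supercollider
(
// hh:mm:ss:ff at a given frame rate -> seconds of relative (running) time
~smpteToSeconds = { |h, m, s, f, fps = 25|
    (h * 3600) + (m * 60) + s + (f / fps)
};
~smpteToSeconds.(0, 1, 30, 12, 25).postln; // 90.48
)
```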

Lesson 6: Synchronization

  • Synchronization
  • Methods of Synchronization
  • Critical Synchronization
  • Music and Visual Sync
  • Applying Sync Leeway
  • Leeway- Early or Late
  • Identify the Sync Relationship with Hard and Soft Sync Events
  • Comparison of Timing Events to selected tempo - Bar/Beat Numbers
  • Bar/Beat Breakdown
  • Assignment
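The bar/beat breakdown in Lesson 6 is likewise just arithmetic; a sketch (my own illustration, assuming 4/4 and bar/beat numbering starting at 1):

```supercollider
(
// at a given tempo, which bar and beat does an event at time t (seconds) land on?
~timeToBarBeat = { |t, bpm = 120, beatsPerBar = 4|
    var beat = t * bpm / 60;              // elapsed beats
    [(beat / beatsPerBar).floor + 1,      // bar number
     (beat % beatsPerBar) + 1]            // beat within the bar
};
~timeToBarBeat.(10, 120).postln; // [6, 1]: 20 beats in = bar 6, beat 1
)
```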

Lesson 7: Synchronization Part 2

  • Frame Click Tempo Defined
  • Comparison of Frame Click tempo to Metronome
  • Subdivision of a Frame for Additional Tempi
  • Using the Beat Sync Spreadsheet with a Frame Click
  • Comparing Beat Sync Accuracy – Leeway of Neighboring Tempi
  • Bar Layout and Use of Mixed Meters
  • Starting a Cue
  • Ending a cue
  • Changing Tempo Directly
  • Changing Tempo Gradually
  • "Hiding" the Feel of Pulse

Lesson 8: Scoring Practicum – from spotting to mixing

  • Spotting a Short Film
  • Creating a Music Summary
  • Developing a Concept for the Score
  • Developing Thematic Material
  • Relationship of Cues to Story Content and to One Another
  • Assignment: Preparation for Creating a Complete Score

Lesson 9: Free Timing Techniques

  • Methods of Free Timing Defined/Examples
  • Comparison to Use of Clicks
  • Composing to a Stop Watch
  • The "Written" Click
  • Freedom from Pulse – Ritards, Rubato
  • Conducting/Performance Consideration
  • Free Timing Applied to Using a Sequencer

Lesson 10: Overlap Cues and Transitions

  • Overlap Cues Defined and Analyzed
  • Why and When to Use an Overlap
  • Four Techniques for Creating Overlaps
  • Musical Considerations: Tonality, Tempo, and Instrumental Color
  • Overlapping Source Cues and Underscore
  • Double Track Cues
  • Sweeteners and Overlays

Lesson 11: Scoring under Dialogue or Narration

  • The Realities of Soundtrack Balance: Dialogue is King
  • Open and Closed Scoring Situations
  • Female/Male Voice Timbre and Register
  • Narration vs. Dialogue – Other Considerations
  • Rhythm and Dynamics of Spoken Word
  • Instrumental Color: Choices for Blending
  • Orchestration Dynamics Texture
  • Dialogue and Music as Counterpoint

Lesson 12: Professional Scoring – preparations and application

  • Types of Scoring Jobs
  • Making Contacts
  • Self-Promotion – The Scoring Demo
  • Getting Organized for a Scoring Assignment
  • Going Solo
  • Building a Team
  • Work Methods and Creating Mock-Ups
  • Deadlines and Delivery
  • Contracts and Copyright

Tuesday, August 14, 2007

Week3

Audio Arts: This semester is about film music. The first six weeks are about film making and producing a short film, providing the visual material for the major project.
We talked about "time lapse photography" and later looked at Koyaanisqatsi, a film by Godfrey Reggio with music by Philip Glass. We only watched a couple of minutes, which had shots of buildings and sky/clouds crafted together with time lapse and slow motion effects. It is the first in the Qatsi Trilogy, an influential trilogy of films made over a period of almost 20 years.
Qatsi Trilogy:

* Koyaanisqatsi ( Life out of Balance );
* Powaqqatsi ( Life in Transformation );
* Naqoyqatsi ( Life as War );

Philip Glass - Koyaanisqatsi original soundtrack CD cover

Koyaanisqatsi was released in 1982. Music plays a vital role in this film. "The film provokes its audience into thinking about aspects of the modern world around them, and in particular how our cities and industry sit within and compete with the natural landscape."
( I will write more about the film, once I watch it)




References:
* Harrald, Luke. 2007. Lectures in Audio Arts, 9 August 2007, University of Adelaide.
* http://www.mfiles.co.uk/composers/Philip-Glass.htm ( viewed 13 August 2007 )
* http://www.qatsi.org/ ( viewed 13 August 2007 )

Forum : " Breadboarding" -- This was the first forum I attended this semester. I dont have my kit,so I shared with freddy. I completed soldereing a piezo microphone and the first exercise,which was to make an Square wave Oscillator. I could not reach till making of Modulating Square Oscillator and Ring Modulation effect. I should have my kit this week,so can complete previous weeks work and then I could post some sounds for you to listen..
I am glad we are doing something in the forum and not sitting listening to someones talk. Enough talking has been done, its time for some action guys. I dont mind so much that we are still not playing music,indeed we are working on our instruments that will be later used to make music .
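For reference, software analogues of the two exercises are easy to sketch in SuperCollider (my own illustrations, not the breadboard circuits themselves):

```supercollider
// modulating square-wave oscillator: the pitch jumps 4 times a second
{ Pulse.ar(LFNoise0.kr(4).range(100, 800), 0.5, 0.2) }.play;

// ring modulation: multiply the square wave by a sine
{ Pulse.ar(220, 0.5, 0.3) * SinOsc.ar(137) }.play;
```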

Wednesday, August 08, 2007

Week 1

Creative Computing : Splice and Dice(1)

BBCut Library : This is a collection of SuperCollider UGens, classes and patches for automated event analysis, beat induction and algorithmic audio splicing. It is released as public open source under the GNU General Public License, Copyright (C) 2001 Nick M. Collins.
Nick Collins is a London-based composer, computer music researcher and laptop performer. My first encounter with Nick was at ACMC'06 at the Jade Monkey during Klipp Av's performance.

Keywords
BBCut2 and associated BBCut classes and methods, including:

* BBCutBuffer : Represents a Buffer with some BBCut-specific refinements; holds data on a buffer, including any segmentation (event positions). Derived from the Buffer class, with methods for choosing playback segments.
+ new(filename, beatlength, eventlist)
+ array(filenames, beatlengths, eventlists)

* CutStream1 : Cuts up a stream via an intermediary buffer that holds the most recent cut for repeating. The stream can be any bus on the Server, so it might be a file streamed off disk, a current audio input or some synthesised data.
+ new(inbus, bufnum, dutycycle, atkprop, relprop, curve)

* CutBuf1 : Playback for a buffer, using a single fixed PlayBuf UGen with jumps in playback position, but no enveloping.
+ new(bbcutbuf, offset)

* CutMixer : Organises the final output from a given rendering CutGroup. It controls the final rendering bus, master volume, and the ampfunc and panfunc parameters of cuts.
+ CutMixer.new(bus, volume, ampfunc, panfunc)
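Putting these classes together, a minimal usage sketch (based on the signatures above and the BBCut help files; the file name is hypothetical and details may differ between BBCut versions):

```supercollider
(
// hypothetical breakbeat file, 8 beats long
var sf = BBCutBuffer("sounds/break.wav", 8);
// wrap a TempoClock (2.5 beats/sec = 150 BPM) for BBCut scheduling
var clock = ExternalClock(TempoClock(2.5)).play;
// splice the buffer with an automatic breakbeat-cutting procedure
BBCut2(CutBuf1(sf), BBCutProc11.new).play(clock);
)
```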

*sound examples coming soon

References:

* Collins, Nick. "Further Automatic Breakbeat Cutting Methods." Proceedings of Generative Art, Milan Politecnico, 12–14 December 2001.
* Collins, Nick. "The BBCut Library." Proceedings of the International Computer Music Conference, Goteborg, Sweden (2002).
* Collins, Nick. "The BBCut Library." Examples and help files.
* McCartney, James et al. 2007. SuperCollider inbuilt Help.
* Source Forge, http://supercollider.sourceforge.net/

Monday, August 06, 2007

Wednesday, July 04, 2007

IMNp

Interconnected Musical Network Performance with SuperCollider










Attempt 1 = Tyrell controlling frequency, Vinny controlling amplitude & pan

Attempt 2 = Vinny controlling frequency, Tyrell controlling amplitude & pan

Wednesday, June 27, 2007

Projects

Creative Computing : My first composition with SuperCollider. The basic model of the tree is constructed by defining the stems of the tree, which are the sound sources for SynthDefs. I have called my work "Timbre Tree". The process started by generating stills of a tree in Photoshop. I was amazed to see how visual representations can give elaborate ideas and become the foundation of a piece. I have tried to express what I was seeing in those stills.
The tree can take several shapes, depending on how the performer moulds it. One picture of a tree has climbed various stages of filtering. The print has reached the stage of wind shaping the frequencies and patterns, in a way symbolising the feedback process.
I have used various kinds of noise-generating UGens contained in SynthDefs to shape this piece. These SynthDefs are modulated by their own signal through the feedback engine. Having 'just' the right volume without excessive feedback is the key to the performance. After switching on the engine, the player can decide either to modulate the incoming signal feeding onto itself, or to play the table, clap or speak to generate more textures for the tree. It is a user-input sound generation process feeding itself, played back from the same set of speakers. I played the table on which my laptop was sitting and have also used my laptop's body to generate sound. An important point to consider: my laptop is acting as a sound generating body both physically and electronically.
I recorded this piece by close miking the speakers in Studio 2. This created another feedback environment in the chain and affected the overall timbre of the tree. "Life feeds on life, feeds on life and feeds on life". A similar concept, applied to sOuNd.
In other words, the speakers used to play back the SynthDefs feed their own signal through the feedback engine. This modified signal is reproduced back into the same set of speakers. The end signal is then captured by two Neumann U 87 microphones as 'left' and 'right' and recorded onto Pro Tools.
The recorded waveform had two sections: one constructed with dust particles and storms, and the next recorded by playing the table. This is very interesting. I have figured out that when my feedback engine is on and I am using the 'line in' as the internal microphone of my computer, it turns the table into a drum. Not just the table; the whole environment is affected. Speech, abstract noises and even breathing get amplified, with control over parameters like pitch, grain size, time dispersion and pitch dispersion, as the player increases the output of the amplifier. There will be a dense cluster of sonic cloud (use this very carefully; if the volume is high, your ears are at high risk, and so are the speakers).
The AB section is then made Left = A B and Right = B(rev) A(rev). A mono file is then bounced down and imported back into Pro Tools. I made another mono track of the bounce; this time I have A B B(rev) A(rev). I made another mono channel, copied the track, hard panned both channels, and then I got Left = A B B(rev) A(rev), Right = A(rev) B(rev) B A.
I then applied panning to both channels. Both tracks simultaneously change their position and move to the corresponding speaker several times in the piece.

Score : My code contains instances of Synth which can be run and paused at any given time by the performer. It also contains a user-changeable yet fixed scheduling duration for the various Synths, as well as user-input random duration scheduling for playing the different Synths. The player can decide when to switch 'on' the feedback engine. He can then decide whether to feed the signal onto itself, or to start playing anything and everything he desires in that acoustic space. Nothing is fixed by the author; go crazy with it.
jai rAm ji ki..
===================================================================
Audio Arts :
Ezulai is a highly unique band from Adelaide. Its influences draw from indie rock to funk and from punk to jazz. All compositions are short, song-based originals, with the vocals creating vivid stories and images resting upon a bed of twisted soundscapes. Ezulai seeks to utilise all forms of media to create a unique experience. I recorded a number of their songs but have only mixed three so far. A little more work remains, but the guys are all set to release their first EP.

Tracks:
mumma
Cheap as a cigarette
Robbers Collar

contact : ezulai@optusnet.com.au

Lizabella Baker – Vocals, Flamenco dance/drums
Maura O’Reagan – Vocals, Whistle
Julius Crawford – Bass, keyboards
Christian Hodgson - Guitar
Mousiour Duffur - Drums
===========================================================================
Music in Context 3A: Music since 1900

" Mosque in war times "- 6'55 mins
download score

References:
Whittington, Stephen. 2007. Lectures presented for Music in Context 3A: Music Since 1900. Elder Conservatorium of Music, University of Adelaide.

Thursday, June 14, 2007

PreProduction CC

The basis of my work will be exploring just a fraction of the concepts that can be applied to composition using mathematical structures.
I want to build a tree consisting of a few different stems. I call my tree a "timbre tree", whose construction is defined by different random functions. The length of the tree is the order of execution of the different stems, and the breadth of the tree is the acoustic space in which the tree will grow.
The tree can be understood as an Environment and the stems as functions in SuperCollider. These functions are SynthDefs that are modelled using different UGens. The parameters of these UGens will be generated using random decision-making procedures like coin, choose, scramble and curdle. I could model the growth of a particular stem on Brownian movement. I can also model the growth of other stems on random procedures like Dwhite, Dbrown, Diwhite, Dibrown, Drand or Dxrand.
I could also restrict myself to using only random UGens/signal generators, for example Dust, PinkNoise, GrayNoise, BrownNoise and Crackle. Although this is a significant restriction ...
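As a sketch of one such stem (my own illustration, not the finished piece): a Dust-based noise source whose density wanders in a Brownian-style random walk via Dbrown, sampled twice a second by a demand-rate reader:

```supercollider
(
SynthDef(\stem, { |out = 0, amp = 0.1|
    // density drifts between 5 and 50 impulses/sec, Brownian-style
    var density = Demand.kr(Impulse.kr(2), 0, Dbrown(5, 50, 4, inf));
    Out.ar(out, Dust.ar(density, amp) ! 2);
}).add; // .add in current SuperCollider; older versions use .send(s)
)
Synth(\stem);
```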






There could also be a function/stem in the tree which is identical to another stem in its construction. I can apply a "phase shift" to the stem by starting it at a different time from the previous one in the order of execution. Phase shifting smaller stems will result in grown stems, now resultant stems, which are thicker and more complex in nature. This method can be used on simple or already complex rhythmic patterns to generate new, more complex resultant patterns.
My tree will have a form of either ABBA, or ABB'A', or A B B(reverse) A(reverse).
When my tree is constructed, I could make a reverse copy of the waveform and then put the normal and reversed versions in the left and right channels respectively.
These are the combinations I can have:
Left = ABB'A & Right = ABB'A (default)
Left = ABB’A’ & Right = A’ (rev) B’ (rev) B (rev) A (rev)
Left = ABA’B’ & Right = A B’ (rev) A’ (rev)
Left = ABA’B’ & Right = A B A’ (rev) B’ (rev)
Left = A B B (rev) A (rev)& Right = A (rev) B (rev) B A
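The mirror construction itself is easy to test on plain arrays before applying it to audio; for instance, the last combination (a stand-in sketch, with numbers in place of the A and B sections):

```supercollider
(
// stand-ins for the A and B sections
var a = [1, 2, 3], b = [4, 5, 6];
var left  = a ++ b ++ b.reverse ++ a.reverse; // A B B(rev) A(rev)
var right = a.reverse ++ b.reverse ++ b ++ a; // A(rev) B(rev) B A
[left, right].postln;
)
```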

Saturday, June 02, 2007

WEEK 12

Audio Arts: We looked at and compared a few mixed/unmastered and mastered tracks in the workshop. A few things to remember:
Mixing with mastering in mind : It is very important to always mix with mastering in mind. First of all, the recording should be of a very high standard; secondly, never apply hard compression, maximum normalisation or extreme equalisation to your track at the mixing stage. Be aware of the things that can be enhanced at the mastering stage. Such processing can never be undone, and the quality will suffer.

Essentials: 4 things important for mastering
1. Ear : Everything depends on the hearing of the engineer doing the mastering. A healthy ear, familiarity with the client's expectations, and advanced knowledge of what you are doing.
2. Room : Having a balanced room is very important. A good room will always contribute to your ability to hear things properly.
3. Equipment : The equipment you use will make a huge difference to the final outcome. Software plugins only give you a starting point; professional mastering is always done using industry standard hardware.
4. Speed and efficiency : Be efficient in terms of time; the client will want more from you in less time. Basically, practise to the point where you have familiarised yourself with all the functions of your gear.

Drago's work : Here is the mastered version of Drago's piece.

Plugin screenshot : If you look at this screenshot, it shows the stages through which the sound has been processed. I have used 3 VU meters for analysing the change. The first one suggests inequality between the left and right channels. The second one shows the result of limiting (setting the threshold level so that the signal never peaks), equalisation, and the Joemeek equaliser: equal balancing between both channels, cutting down annoying frequencies, enhancing a little bit of bottom end and touching up with the Joemeek. In the second slot, which is a bus, I have used Bomb Factory, Big Bottom and then compression. The last VU meter suggests consistency in level, no peaks throughout the track, and a cleaner, healthier track. If you compare the 1st and 3rd VU meters there is not much difference in terms of levels, but there is definitely a difference in what we finally hear. The trouble was that any slight increase in gain would result in peaking, so I made sure the signal does not peak throughout the track, even though I could not get the levels any higher than this.

References:
(1)Grice, David.2007."Tutorials on Audio Arts (3) Semester 1, Week 12.Mastering(1)." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 29/05/2007.
(2) Readings on My Uni.Music Technology(3)2007.
(3)Waves Mastering Help
http://www.waves.com/objects/pdf/general/Update-and-Errata-web.pdf
viewed 29May'07

Forum : What's happening next semester: an improvisation group with a defined approach.

I must say the idea of student presentations in the forum was a waste of time for me. I think students should be given these opportunities in their respective classes. Building their confidence is one thing; punishing other people through the process is another.

Listening Examples :

2. Raga Sihendra madhyam by Dr. C Sardeshmuh

3. Terry Riley and John Cale (Velvet Underground) : improvisations; Church of Anthrax

4. Machine for Making Sense : Jim Denley, Rik Rue, Chris Mann, Stevie Wishart and Amanda Stewart

5. Ross Bolleter : Crow Country

6. Sun Ra and his Astro Intergalactic Infinity Arkestra : Space Is the Place

7. Ryoanji (John Cage, 1985)

8. Mr Ban'gal

Comprovisation :

"Free yourself from the idea of self-discipline to improvise" : an idea of Sun Ra. Contradictory, but you know what it means..

Scratch Orchestra : Schooltime Compositions

References :
(1) Whittington, Stephen. Forum, Semester 1, Week 12. Workshop presented at the Electronic Music Unit, University of Adelaide, South Australia, 31/05/2007.

Friday, June 01, 2007

A shoulder

I have to write this; last day of week 12. A few more lectures next week, but otherwise it's all over.. Friday started with me thinking there was no computer science lecture, but there was. For some reason, in the back of my mind, I thought it was Tuesday, even though I knew I have a classical theory lecture, which happens every Friday. Weird..!

Another great lesson with Stephen on the rhythmic ideas of Steve Reich: phasing effects, metric modulations and more.. He asked me to clap with him:

Both of us playing in 3,
then he moves to 4 and
then subsequently I have to move in 4 as well.

Stephen and I were not on the same page. The first time I changed too quickly, and the next time I thought I would let the rhythm settle a bit and waited too long.. So obviously I get zero. Stephen, it was a pleasure doing it with you, if only it had happened the way you wanted. A very interesting idea, and you explained it very clearly. It's also interesting how he points out things I have been practising in the past.

OK, I check my email now; this time Peter has sent me a warning about using the studios as studios and not as my private study space. I have used the space for that, mostly when no one is there, but sometimes booking it so that I can do some work an hour before forum and then an hour after, because I see Christian pretty much every Thursday. I totally understand, Peter; I know you are only doing your job. It did not upset me at all. I knew I was wrong..
OK, now the disaster: I check my email again and Chris writes to me saying Craig and John Aue can't come on Sunday for the recording. My Audio Arts major project, and it's falling apart. Let's see what happens then. It's all improvised music; I have to wait and see what's actually going to happen on Sunday. Uncertainty at its peak, which happens to me all the time, even when I am travelling. No wonder I have missed flights as well. I was lucky enough to catch my flight from Malaysia to Adelaide; the flight was closed, and some beautiful women helped me and logged me in.. just survived..
OK, the last disaster: my computer science prac exam, and I didn't pass. There was some error in my code and I couldn't find it: "Exception in the main thread" while running the main patch, even though the main and driver classes compiled.. I felt so down after that I decided to have a crappy McDonald's meal.
Came to the EMU kitchen, shared my pain with Ben, and now I am off for some practice.. I am OK now; I have to work very hard to do all right in Comp Sci-fi..
I don't think I am going home; I will sleep in the studio, at some stage.

Sunday, May 27, 2007

WEEK 11

Creative Computing: Computer Music Programming : Streams (1)

CODE @ Exercise :

(1) A synthDef
(2) A stream playback mechanism that controls different parameters of the synthDef and demonstrates concepts presented in "Streams-Patterns-Events1 - 3.html"
(3) Thoughts and issues on the process with an MP3 result.

This code will crash your computer. I can make it not crash, but then it still does not sound the way I want it to, so I have chosen to post the code that will crash the computer.
I still need to understand streams, patterns and events a lot better. I have actually spent more time making the SynthDef.
I can see where things are heading. SuperCollider, why didn't you come into my life earlier.. I guess I have my whole life to keep working on it. Thanks Christian.
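For the record, the basic stream/pattern mechanism from the help files looks like this (a minimal sketch, not my crashing code):

```supercollider
(
SynthDef(\blip, { |out = 0, freq = 440, amp = 0.1|
    var env = EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(out, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add; // .add in current SuperCollider; older versions use .send(s)
)
(
// a Pbind turns patterns into a stream of events played on the SynthDef
Pbind(
    \instrument, \blip,
    \degree, Pseq([0, 2, 4, 7], inf),  // cycle through a figure
    \dur, Prand([0.25, 0.5], inf),     // random rhythm
    \amp, Pwhite(0.05, 0.2)            // random dynamics
).play;
)
```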

References :
(1) Haines, Christian. "Creative Computing: Semester 1, Week 11. Streams, Events & Patterns." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 24/05/2007.
(2)McCartney , James et al 2007,SuperCollider Inbuilt Help.
(3)Source Forge,http://supercollider.sourceforge.net/
(4) SuperCollider Help Book
======================
Audio Arts : Mastering (1)
Mastering is the process of finalising a recording and preparing it for the final medium on which it will be played back, ensuring optimum clarity, balance and consistent levels across the entire recording. In the workshop we looked at the website of Eden Sound Mastering, Australia's first Focusrite-authorised Blue mastering suite, in Melbourne.


How did mastering come into existence?
There were leftover inaudible high frequencies on CD recordings which would give you a headache over long exposure; they are non-existent on vinyl. Hence came mastering.


EQ in mastering is often used to tuck away annoying frequencies and to enhance the frequencies that make the recording special. Through the use of EQ you can shape the final curve of the song so that it has the right amount of bass, midrange and treble. Applying the correct amount of compression and limiting, with "just the right" amount of attack and release, will make the track really special. Remember the settings will vary for each song and style of music. Some songs might just need subtlety and love, whereas another will need big size and excitement. It just depends on the genre/style of the music/song.
Through the use of "limiting" in mastering you can put a clamp on the entire mix so that it doesn't move above a given threshold. This ensures the loudest part of the mix does not go above the given limit. Another useful point in this week's readings was the use of a VU meter as a plugin. It ensures that you are not compressing too hard and helps you make the right decisions with the settings.
I will talk about compression techniques next week. There is certainly a lot more to compression at the mastering stage than just boosting everything.
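The clamp idea can be sketched in SuperCollider (a rough software analogue I wrote for illustration, not the mastering chain itself):

```supercollider
// push the input hot, but Limiter keeps peaks under 0.95
{ Limiter.ar(SoundIn.ar([0, 1]) * 4, 0.95, 0.01) }.play;
```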

Master Mix : Here is a (rough) master of the group 3F+1M
Previous Mix : Here is the older mix
First Mix : Here is the first mix

I took the final mix into a new Pro Tools session, named it the mastering session, and applied (in order from top to bottom):
* 7-band EQ = balancing the frequency ranges to what sounded nice to me: a little more gain in the low range (80 Hz), boosting the mid range for more clarity in the vocals, and cutting 75% of the gain in the high range (above 12 kHz).
* minimax = this was giving a bit of depth to the bottom end.
* limiter = setting the threshold level, I made sure the loudest signal does not peak.
* Compression = I still need to understand compression better. What's the right amount of attack and release, what's its relationship with the other parameters, and how does changing that relationship change the sound? I am confused about understanding it completely; however, I played with the knobs and did what sounded good.
* Mc2000Mc4 = I just played with the knobs; I could hear the change it made to the mix. Did I want that change? I am not sure. I have used it very carefully, just increasing its output a bit.
Something in this chain added noise to the mix; I think it's the compression or the Mc2000Mc4. It was really audible at silent spots. I have to work more on reducing the noise from the mix; otherwise I am sort of happy.

References:
(1)Grice, David.2007."Tutorials on Audio Arts (3) Semester 1, Week 1.Mastering(1)." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 22/05/2007.
(2) Readings on My Uni.Music Technology(3)2007.
(3)http://www.focusrite.com viewed 27May'07
(4)http://www.edensound.com.au viewed 27May'07
(5)Waves Mastering Help
http://www.waves.com/objects/pdf/general/Update-and-Errata-web.pdf
viewed 27May'07
====================
Forum :
====================

Memories of the forum are far apart now. My mind looked somewhat like this picture in those 2 hours. However, I remember Delany's presentation. A very interesting talk, as much as the sound examples he played, of people like Lustmord, Brian Williams, Aphex Twin and Robert Rich. Thanks for burning me a DVD full of ambient music, John; I have not forgotten about burning you some Trilok..

Genre : dark ambient music
Form : long-form timbral variation
Characteristic : visual music

John said "it feels like I am in a dark forest, far from anywhere"..
Stephen said "continuous low-level tension without any release"

References:

Whittington.Stephen . “Construction and Deconstruction.” Forum/Workshop presented at EMU space, Level 5 Schulz, Thursday 24' May' 2007.

Delany.John . “Construction and Deconstruction.” Student Presentations. EMU space, Level 5 Schulz building, Thursday 24' May' 2007.

Whitelock.Simon . “Construction and Deconstruction.” Student Presentations. EMU space, Level 5 Schulz building, Thursday 24' May' 2007.

Shea.Nathan . “Construction and Deconstruction.” Student Presentations. EMU space, Level 5 Schulz building, Thursday 24' May' 2007.

Sunday, May 20, 2007

Week 10

Creative Computing: Duplication

It is a method for duplicating a signal so that it is also sent to the next bus, hence achieving a stereo signal from a mono channel:
Out.ar(0, signal.dup) // signal.dup == [signal, signal], sent to buses 0 and 1

Some very interesting points were covered in the class, mainly to do with using arrays to generate a massive amount of data with very simple code.

Array.series(size, start, step)
Array.series(10, 1, 1) * n (any number):
multiplying by 2 gives the even harmonics,
multiplying by 3 gives every 3rd harmonic,
multiplying by 5 gives every 5th harmonic, and so on;
multiplying by 440 Hz gives the harmonic series on A.

Array.geom(5, 1, 3) == geometric series
Array.geom(10, 1, 3) % 5 == weirdo using modulo
Array.fill(3, "hello world") == an array of 3 copies of the string
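The harmonic-series trick, in a playable sketch (my own example; `.dup` then duplicates the mono mix to stereo):

```supercollider
(
{
    var n = 10;
    var freqs = Array.series(n, 1, 1) * 440;            // harmonics of A 440
    var amps  = Array.series(n, 1, 1).reciprocal * 0.1; // 1/n amplitude rolloff
    Mix(SinOsc.ar(freqs, 0, amps)).dup                  // mix down, duplicate to stereo
}.play;
)
```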

CODE : "Exercise" :Creation of a large synthesiser assembly using the concept of duplication.
(1) MIDI Input and playback
(2) A synthesiser definition incorporating carriers, modulators and modifiers created using large scale duplication
(3)An effects synthDef
(4) The synthDefs will be organised so the main synthesizer definition will have its output routed through an effects synthDef.
Mp3 result

References :
(1) Haines, Christian. "Creative Computing: Semester 1, Week 10. Duplication." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 17/05/2007.
(2)McCartney , James et al 2007,SuperCollider Inbuilt Help.
(3)Source Forge,http://supercollider.sourceforge.net/
(4) SuperCollider Help Book

Audio Arts : Creative Production (2)






The whole hour was dedicated to looking at the AD2022, Avalon's fourth-generation, fully discrete, symmetrical, pure Class A microphone preamplifier.

I tried a couple of things with this long awaited piece of hardware. My initial experiment failed because everything I tried resulted in feedback. I was trying to take the audio out of Pro Tools into the Avalon and then record the mix back into Pro Tools. Taking the auxiliary out from the C24 to the Avalon and recording that in Pro Tools didn't work either. How can I send the master to Pro Tools outs 3 & 4 as well, so that I can take those outs and send them through the Avalon?
Well, I was not muting the 2 new tracks that were assigned to record the mix. The signal being played back was recorded, and played back again while recording; hence feedback.
I simplified things and used my computer to play back the mix, recording it into Pro Tools through the Avalon's front panel unbalanced inputs.

Mix of 3F+1M : Here is the mix of the group I recorded.
I used the Avalon to record the vocals and then sent the whole mix through the Avalon. The mix you are hearing is [(MixNoVoicewithout Avalon + Voice) + Mixwith AvalonVoice + Voice].bounce;

For comparison,hear the old ones.
No Voice : Older mix with no voice without Avalon
With Voice : Older Mix with voice without Avalon

References:
(1)Grice, David.2007."Tutorials on Audio Arts (3) Semester 1, Week 10.Creative Production(2)." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 15/05/2007.
(2) Readings on My Uni.Music Technology(3)2007.

Forum :
Words replacing colours. What's the term for when you start seeing words as colours, letters become pixels and sentences form shapes? I am suffering from that.
In this life, let me hear sounds and see words.

*(all I want to say for this weeks forum)


Reference:

Whittington, Stephen. "Forum – Week 10 – Construction and Deconstruction." Workshop presented at EMU space, Level 5 Schulz building, University of Adelaide, 17 May 2007.

May, Fredrick. "Forum – Week 10 – Construction and Deconstruction." Student presentation. EMU space, Level 5 Schulz building, University of Adelaide, 17 May 2007.

Nastasie, Dragos. "Forum – Week 10 – Construction and Deconstruction." Student presentation. EMU space, Level 5 Schulz building, University of Adelaide, 17 May 2007.

Mazzone, Matt. "Forum – Week 10 – Construction and Deconstruction." Student presentation. EMU space, Level 5 Schulz building, University of Adelaide, 17 May 2007.

Sunday, May 13, 2007

WEEK 9


Creative Computing : Busses, Routing and Ordering
A Bus is the client-side representation of an audio or control bus on the server. It manages allocation and deallocation of bus indices. [read more about busses and routing]

code : This code is incomplete and has some errors.. I will update it very soon.
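In the meantime, here is a minimal bus-routing sketch (my own assumption of a typical setup, not the broken project code): one synth writes a source onto a private audio bus, and a second synth, placed after it in the node order, reads that bus, filters the signal, and sends it to the hardware outputs.

```supercollider
// Minimal bus-routing sketch (assumes the server is booted: s.boot)
(
b = Bus.audio(s, 1);   // allocate a private mono audio bus

// source synth: writes a saw wave onto whatever bus is passed in as \out
SynthDef(\src, { |out| Out.ar(out, Saw.ar(220, 0.2)) }).add;

// effect synth: reads the bus, low-pass filters it, sends stereo to out 0
SynthDef(\fx, { |in|
    var sig = In.ar(in, 1);
    Out.ar(0, RLPF.ar(sig, 800, 0.3) ! 2);
}).add;
)

(
// ordering matters: the effect node must sit *after* the source node,
// otherwise it reads the bus before the source has written to it
x = Synth(\src, [\out, b]);
y = Synth(\fx, [\in, b], addAction: \addAfter, target: x);
)

// cleanup when done:
// x.free; y.free; b.free;
```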

Audio Arts: Creative Production (2)

We were supposed to look at Auto-Tune, a pitch-correcting plug-in by Antares. Unfortunately we don't have it in the studios: a million-dollar studio without the necessary plug-ins. I am disappointed with myself too for not owning them. I guess I was always working at the uni and confessing that I had spent the money on other things. But now is the right time to get one; I am still torn between the Mbox Pro and the Digi 002, which is only $400 more right now.

Auto-Tune works in two modes:
* The Automatic Mode corrects the pitch of a vocal or solo instrument in real time, without distortion or artifacts, while preserving all the expression of the original performance.
* The Graphical Mode displays the performance's detected pitch envelope and allows you to draw in the desired pitch using a variety of graphics tools.

Important notes and features:
Auto-Tune does not like double notes or chords - it only works on a sustained single note.
"A side-effect of using very fast pitch correction is that shallow vibrato is ironed out altogether, while deeper vibrato turns into a trill. Auto-Tune also has the ability to add delayed vibrato to whatever sound is being processed, so in theory, you can strip out the vibrato from the original performance and replace it with something far more mechanical and precise."

One function of the hardware version of Auto-Tune is its ability to shift pitch according to a MIDI input. In theory, this means you can input a MIDI melody and whatever audio input you have will be forced to fit that melody.
Things to try: it can make the human voice - or any instrument, for that matter - sing impossible arpeggios. How about taking a single-note instrument like the tanpura or didgeridoo and turning its sound into a melodic bass line?
It all comes down to being as creative as you want to be, with just a little caution.
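Since we couldn't try the plug-in itself, here is a rough sketch of the underlying idea in SuperCollider (my own assumption of how to fake it, not Auto-Tune's actual algorithm): track the incoming pitch with Pitch.kr, snap it to the nearest semitone, and retune the signal with PitchShift.ar by the ratio between the target and detected frequencies.

```supercollider
// "Snap to the nearest semitone" sketch (assumes a booted server and a
// live mono signal on hardware input 0; this is the idea, not Auto-Tune)
(
{
    var in, freq, hasFreq, midi, target, ratio;
    in = SoundIn.ar(0);
    # freq, hasFreq = Pitch.kr(in);        // track the incoming pitch
    midi = freq.cpsmidi;                   // convert Hz to MIDI note number
    target = midi.round(1).midicps;        // snap to the nearest semitone
    ratio = (target / freq.max(20)).lag(0.01); // retune ratio, smoothed
    Pan2.ar(PitchShift.ar(in, 0.1, ratio), 0);
}.play;
)
```

Shortening the lag time makes the correction faster and more robotic, which is exactly the "ironed-out vibrato" effect described above; lengthening it gives a gentler correction.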

Task of the week :
Mix of the group (3F+1M) : NO VOCALS [Feli.Ban.Georgi];
Mix of the group (3F+1M) : WITH VOCALS [Bel.Feli.Ban.Georgi].scope

I am hoping to record the vocals separately in the coming week. There is a lot of spill from the other instruments in the vocal microphone, since it was recorded live in the EMU space along with the other cats. As you can hear, the mix with vocals sounds quite reverby compared to the one without, and I can hear the room, especially on the loud snare hits.. I have asked Belinda to come in and do the vocals separately over the recording.
I think the bass just needs a little cut in the bottom end, or maybe a small level drop will fix what I am hearing right now. I can't quite trust this room either (Studio 1). I might take a break, come back and listen again. Why am I worrying about the mix with vocals? A few things are not under my control.

References:
(1) Grice, David. 2007. "Tutorials on Audio Arts (3), Semester 1, Week 9: Creative Production (2)." Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 08/05/2007.
(2) Readings on MyUni, Music Technology (3), 2007.
(3) http://www.macmusic.org/software/version.php/lang/en/id/9107/ viewed 13 May 2007.

Forum :