This page was last modified on March 17, 2004.
A complete listing of Christopher Dobrian's publications on computer music can be found online.
The publications most pertinent to this course are listed below.
Here are the example C programs from class on January 14, 2004.
Here are the example C programs from class on January 21, 2004.
Here is the example Pd program from class on January 26, 2004.
(Note that although the file has the extension .pd it is just a plain text file. You can download it directly to disk--or copy/paste the text into a plain text word processor and save it--and then open it from within Pd.)
scaleplayer.pd shows several basics of Pd:
Here are a couple of examples of how an array can be used for control messages, for control signals, or as storage for an audio waveform.
arrayexample1.pd fills an array with pitches and uses them to play a melody, converting the pitch numbers to frequencies for an oscillator.
arrayexample2.pd lets you draw an audio waveform and use a phasor~ to read through it repeatedly at an audio rate. It also uses a second phasor~ to read through the same array at a sub-audio rate, providing a control signal for the frequency of the complex tone, and it uses a smaller array of amplitudes to control the amplitude envelope of the sound.
Here are three progressive examples of FM synthesis instruments.
Note: These three examples have been temporarily removed by the author.
FMexample1.pd plays a simple FM tone using the basic instrument design shown in Fig. 5.1 on page 116 of Dodge and Jerse.
FMexample2.pd shows a few examples of different kinds of tone on that instrument, using different harmonicity and modulation index settings.
FMexample3.pd demonstrates how to make an evolving timbre by providing continuous control (a time-varying envelope) for the modulation index and the carrier amplitude. This instrument is the same as Fig. 5.7 on page 122 of Dodge and Jerse.
Here are explanations/demonstrations of two ways to make an envelope function (for controlling amplitude, modulation depth, etc.).
envelope1.pd uses the "vline~" object to queue a series of line segments to be executed sequentially.
envelope2.pd uses an envelope shape stored in an array, and reads through the array with "line~" and "tabread4~".
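The line-segment idea can be sketched as a breakpoint function: given a list of target times and target values, the envelope value at any moment is found by linear interpolation within the current segment. The function and the demo breakpoints below are illustrative:

```c
/* Evaluate a piecewise-linear envelope at time t (ms).
   times[] are breakpoint times (ms, strictly increasing), vals[] the
   values reached at those times; segment i ramps linearly from
   vals[i-1] to vals[i]. Before the first breakpoint the envelope holds
   vals[0]; after the last, vals[n-1]. */
double envelope_at(const double *times, const double *vals, int n, double t)
{
    if (t <= times[0])
        return vals[0];
    for (int i = 1; i < n; i++) {
        if (t <= times[i]) {
            double frac = (t - times[i-1]) / (times[i] - times[i-1]);
            return vals[i-1] + frac * (vals[i] - vals[i-1]);
        }
    }
    return vals[n-1];
}

/* Example: a 0 -> 1 attack over 10 ms, then a decay back to 0 by 100 ms. */
static const double demo_times[] = {0.0, 10.0, 100.0};
static const double demo_vals[]  = {0.0, 1.0, 0.0};

double demo_envelope_at(double t_ms)
{
    return envelope_at(demo_times, demo_vals, 3, t_ms);
}
```

Halfway through the attack (t = 5 ms) the envelope is at 0.5; halfway through the decay (t = 55 ms) it is back down to 0.5.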
This next program demonstrates how to read samples from an external soundfile, such as an AIFF or WAVE file, and place them in an array, using the soundfiler object. Once the samples are loaded into the array (one cycle of an electric guitar note in this case), we can read cyclically through the array, treating it as a wavetable, with the tabosc~ object.
wavetable.pd uses one tabosc~ object to cycle through the table at a sonic frequency, while another tabosc~ object reads through it very slowly at a sub-sonic rate, treating the shape as a control function to make a pitch glissando that controls the frequency of the first tabosc~. (In order for this program to work, you'll also have to download the very brief AIFF file gtr515.aif.)
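Reading a table at an arbitrary fractional phase requires interpolating between stored points. tabosc~ itself uses a more elaborate interpolation scheme, but the idea can be sketched with simple linear interpolation (the demo table below is just a coarse test shape):

```c
/* Read a wavetable of n samples at a fractional phase in [0, 1),
   interpolating linearly between adjacent points and wrapping at the end. */
double wavetable_read(const double *table, int n, double phase)
{
    double pos = phase * n;      /* fractional index into the table */
    int i = (int)pos;
    double frac = pos - i;
    return table[i] + frac * (table[(i + 1) % n] - table[i]);
}

/* Example: a 4-point "wavetable" (very coarse sine shape) for testing. */
static const double demo_table[] = {0.0, 1.0, 0.0, -1.0};

double demo_wavetable_read(double phase)
{
    return wavetable_read(demo_table, 4, phase);
}
```

Sweeping the phase from 0 to 1 repeatedly at frequency f plays the table as a tone at f Hz; sweeping it very slowly instead turns the same stored shape into a control function, which is exactly what the second tabosc~ in wavetable.pd does.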
Here are two examples of ways to schedule a "score" (a pre-determined list) of timed events in Pd.
sequence1.pd fills one table with pitches and another table with durations (more correctly, with time intervals between note onsets), and uses each duration to delay the appropriate amount of time before reading the next values from the tables.
qlistdemo.pd shows how to do a very similar thing, using a pre-made score stored in a text file; it uses the qlist object to read through the messages in the score file, using the delay times (inter-onset intervals) included in the file. In order for this program to work, you'll also need to download londonbridge.q.
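The distinction between durations and inter-onset intervals matters for scheduling: each stored value is the wait before the next event, so absolute onset times are a running sum of the intervals. A sketch (names illustrative):

```c
/* Convert a list of inter-onset intervals (ms) to absolute onset times (ms).
   onset[0] is 0; each later onset adds the preceding interval.
   Returns the total elapsed time after the last interval. */
double schedule_onsets(const double *ioi, int n, double *onset)
{
    double t = 0.0;
    for (int k = 0; k < n; k++) {
        onset[k] = t;
        t += ioi[k];
    }
    return t;
}

/* Example: three notes separated by 500, 500, and 250 ms. */
double demo_total_time(void)
{
    double ioi[] = {500.0, 500.0, 250.0};
    double onset[3];
    return schedule_onsets(ioi, 3, onset);   /* onsets 0, 500, 1000 */
}

double demo_last_onset(void)
{
    double ioi[] = {500.0, 500.0, 250.0};
    double onset[3];
    schedule_onsets(ioi, 3, onset);
    return onset[2];                         /* 0 + 500 + 500 = 1000 */
}
```

This is the same arithmetic the delay objects in sequence1.pd, and the delay times in a qlist score file, perform implicitly.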
The following four files show how to implement intensity difference and time difference between two channels of audio to create localization illusions. The first three files use intensity panning, and the fourth file uses inter-aural time delay.
Note: These four examples have been temporarily removed by the author.
linearpan.pd does a straight linear intensity pan from one channel to the other. You can experience the "hole-in-the-middle" effect caused by the drop in perceived intensity (and thus the increase in perceived distance of the virtual location of the sound) when the sound is panned to the center versus when it is panned hard left or hard right.
constantpowerpan1.pd is exactly the same as the linear panning example, except that instead of using the linear interpolation between left and right to control the amplitude directly, it takes the square root of the interpolated values and uses those to scale the amplitude of each of the two audio channels. The result is the impression of panning along a radial arc from one speaker to the other, at a constant virtual distance from the listener.
constantpowerpan2.pd is exactly the same, except instead of calculating square roots, it uses a lookup table containing a sinusoid, and looks up the square root values using the first 1/4 of a cosine wave and the first 1/4 of a sine wave.
delaychannels.pd allows you to delay one channel more than the other to experience interaural delay (if you use an extremely short delay time difference) and other delay and echo effects; the program demonstrates the use of delwrite~ and delread~ to create and read from a circular delay line.
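The delwrite~/delread~ pair amounts to a circular buffer: a write position advances through a fixed-size array and wraps around, and a read taps the buffer some number of samples behind the writer. A sketch (buffer size and names are illustrative):

```c
#define DELAY_SIZE 44100  /* one second of delay memory at 44.1 kHz */

typedef struct {
    double buf[DELAY_SIZE];
    int writepos;
} delayline;

/* Write one input sample and advance (and wrap) the write position. */
void delay_write(delayline *d, double x)
{
    d->buf[d->writepos] = x;
    d->writepos = (d->writepos + 1) % DELAY_SIZE;
}

/* Read the sample written delaysamps samples ago
   (1 <= delaysamps <= DELAY_SIZE). */
double delay_read(const delayline *d, int delaysamps)
{
    int pos = d->writepos - delaysamps;
    if (pos < 0)
        pos += DELAY_SIZE;
    return d->buf[pos];
}

/* Small demonstration: write 1, 2, 3, then read delaysamps samples back. */
double delay_demo(int delaysamps)
{
    static delayline d;   /* static storage: zero-initialized */
    d.writepos = 0;
    delay_write(&d, 1.0);
    delay_write(&d, 2.0);
    delay_write(&d, 3.0);
    return delay_read(&d, delaysamps);
}
```

Reading one sample back returns the newest value, and reading three samples back returns the oldest; interaural delay amounts to reading the same delay line at two slightly different offsets, one per channel.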
Here is one more delay-based effect to simulate a moving sound source.
Note: This example has been temporarily removed by the author.
dopplermono.pd is a simple monophonic simulation of Doppler effect, calculating changes of amplitude and delay time based on the virtual location of a supposedly moving sound source.
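The core calculation in such a simulation is straightforward: the propagation delay is the distance divided by the speed of sound, and amplitude falls off with distance. As the virtual source moves, the continuously changing delay time is what produces the pitch shift. A sketch (the constant and the simple 1/d rolloff are illustrative choices, not taken from the patch):

```c
static const double SPEED_OF_SOUND = 344.0;  /* m/s, roughly, in room-temperature air */

/* Propagation delay in seconds for a source at the given distance (meters). */
double doppler_delay(double distance_m)
{
    return distance_m / SPEED_OF_SOUND;
}

/* Simple 1/distance amplitude rolloff, clamped to 1 inside one meter. */
double doppler_amplitude(double distance_m)
{
    return (distance_m < 1.0) ? 1.0 : 1.0 / distance_m;
}
```

A source 344 meters away is heard one second late; doubling the distance halves the amplitude.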
Here are the delay examples from the class session on February 9.
In order for the next four examples to work, you'll need this very small AIFF file (less than one second long) called Brazil01.aif.
basicdelay.pd lets you experiment with different delay times and modify the balance between the original sound and the delayed sound. Delay times longer than about 50 ms give a discrete echo effect, while very short delay times give more of a comb filtering effect; try delay times in the 1-20 ms region to hear the comb filtering produced by different short delays.
fbdelay.pd is exactly the same, except that the delayed sound is fed back into the delay line. The feedback must be scaled by some factor with magnitude less than 1 in order to keep it from accumulating and increasing to excessive amplitude.
flangedelay.pd demonstrates "flanging"--continually modulating the delay time with a low frequency oscillator (LFO)--to create a sweeping comb filter (or other more extreme effects). To do this, the delay time must be provided in the form of a signal, so we use the vd~ object instead of delread~. Experiment with varying the basic delay time and varying the LFO rate and depth. Try subtle and extreme values. (vd~ will clip the delay time to keep it within the range of the delwrite~ buffer.)
fbflangedelay.pd is the same flanging, but with the capability to feed the flanged sound back into the delay line. Experiment with different settings.
simplecomb.pd provides a way to hear the effects of comb filtering on a steady harmonic tone (square wave or sawtooth wave). Note that when the delay time is 5 ms and the tone is 100 Hz, the frequencies 100 Hz, 300 Hz, 500 Hz, etc. are completely suppressed, so the tone sounds as if its fundamental is 200 Hz. With all other delay times, the result will be a complex tone with different emphasis at each partial. Change the delay time to hear the effect of sweeping the comb filter. (The upper left portion of the program just shows some alternate ways you could make a square wave or a band-limited sawtooth wave; they're not important to the working of the program.)
Note that in each of the preceding five examples, the values are initialized by assigning a "receive symbol" (receiving name) to the user interface objects, using the "Properties" dialog box, and sending messages to those objects when the program is loaded.
The following three example MSP patches are from class on February 25, 2004.
simplesampler.txt demonstrates sample playback (monophonic, using a single sample, with no amplitude envelope). A real sampling synthesizer would include, at the least, a) multiple samples to choose from, to reduce the need for transposition, for better emulation of the real sound, b) capability to handle more than one note at a time, and c) an amplitude envelope to shape the sound. See MSP Tutorial 20 for a more developed example.
simplewaveshaper.txt implements waveshaping synthesis using a single cycle of an electric guitar sound as the transfer function. (See also MSP Tutorial 12.)
simpleenvelopefollowing.txt uses peak amplitude detection and linear interpolation to create a control signal that traces the amplitude envelope of a sound, and it uses that control function to control the frequency, timbre, and amplitude of a waveshaping synthesis instrument.
Here are two versions of a suggested solution for gating a signal below a certain threshold (suppressing low-level signal). The examples use the AIFF file GateTextSound.aif for testing.
The first version, basicgate, uses the peak amplitude, as detected by the peakamp~ object, and a logical comparison (greater than or equal to) to determine whether to turn the volume off or on.
The second version, basicgate2, allows the user to specify the threshold in decibels, and to specify ramp-on and ramp-off times for the gate that are independent of the polling rate of peakamp~. A suggested further improvement would be to introduce a short delay in the signal being heard--but not in the signal being analyzed--equal to or a bit less than the polling interval of peakamp~, so that the threshold crossing would be detected a little in advance, and the gate could ramp up or down before the heard signal passes the threshold.
No MP3 sound files are posted yet.
Here is an AIFF file that might be useful for testing a gate. It's some spoken text with some pauses, and some low-level background noise.
Bibliography of Other Relevant Publications
Useful Audio Software
Other Online Resources
The professor's research page from this course as it was taught in Winter 2001 contains examples of basic audio programming principles demonstrated in the "Csound" computer music programming language.
The professor's research page from this course as it was taught in Winter 2002 contains examples of basic audio programming principles demonstrated in the "MSP" computer music programming language.
This page is for Music 147 (ICS 180, ECE 195, ACE 277), Computer Audio: Musical Applications of Digital Signal Processing, offered Winter 2004 at UCI.