Computer Music Composition

Assignments


This page contains assignments for the class
Music 151: Computer Music Composition - Winter 2013
University of California, Irvine


The assignment for the upcoming class will be posted here after each class session.


For Thursday March 14, 2013:

There is no assignment. The final class session will be devoted to listening to students' composition projects.

For Tuesday March 12, 2013:

Study for the final exam, which will be administered in class.

Turn in your final composition project as a CD-quality audio file (.wav or .aif) in the EEE DropBox called "FinalComposition". To show your working process, you are welcome to turn in project session files of your work in Reason, Pro Tools, etc. You should also include a brief written explanation (.txt, .rtf, or .doc) of the musical concept you explored in your piece and the techniques you used to realize it.

For Thursday March 7, 2013:

Begin studying for the final exam by reviewing all the assigned readings and related questions listed in the final exam study guide. Come to class prepared to ask questions about topics you don't understand clearly.

For Tuesday March 5, 2013:

Continue working on your final composition project, bearing in mind the criteria discussed in class:
- Your piece should be at least 3 minutes long
- Use of Pro Tools and/or Reason is strongly encouraged
- Experimentalism is favored over sounding good in a traditional sense (i.e., don't be afraid of failing in traditional musical terms by trying something challenging); try to take advantage of some musical capability (or capabilities) made uniquely possible by the computer.

For Thursday February 28, 2013:

In the DropBox called "ProgressReport", submit a brief progress report on your final composition work. This might take the form of a text description of the plans you have made and the work you have done so far, and/or work-in-progress files of some elements of your actual project -- source audio files, a Pro Tools session, a Reason synthesis file, etc. The objective is to use this progress report as an opportunity to a) get going if you haven't yet done so(!), b) get your thoughts in order through the process of explaining your work to someone else in a semi-formal way, and c) get feedback from the TA about your work so far.

For Tuesday February 26, 2013:

Read about ReWire, and follow the link to the ReWire Help Pages, specifically the link to instructions on how to use ReWire in Pro Tools, where you will find several helpful tutorial chapters.

For more information about how to use ReWire, you can simply Google "ReWire tutorial" to find various helpful tutorial videos. If you have the opportunity to do so, try implementing ReWire as an Instrument plug-in in Pro Tools.

Read the article on "Advanced Reverberation", Part 1 and Part 2. Pay attention to the discussion of "predelay" (initial delay), "early reflections", "RT60 decay time", "high-frequency damping", and "direct-processed mix", noting what sonic effect each one plays a role in establishing. Keep some of these principles in mind as you use reverberation in your own work.

For Thursday February 21, 2013:

Using Reason's MIDI sequencer and any combination of Reason devices, compose an interesting rhythm made up of multiple sounds. The sounds you use may include any desired combination of synthesized and sampled sounds, and any desired combination of pitched and unpitched sounds. The main goal of the assignment is to employ the time grid provided by the sequencer to help you organize events in a way that creates an engaging "groove" or sense of rhythmic pattern, and at the same time to challenge yourself to discover new rhythmic possibilities opened up by Reason's time-grid interface, its pattern generators such as Matrix and ReDrum, and the possibility of establishing different periodicities.

If you don't have a clear concept in mind for a rhythm you want to create, it might be helpful to listen to music that has a rhythm you find interesting and, as demonstrated in class, analyze how the component sounds are combined to create that effect, to get some initial ideas. The examples analyzed in class focused on patterns that repeated (with variations) every 4 or 8 beats, with each beat divided into 4 parts. That model (or a "swung" version of it) is useful for recreating a great many of the grooves used in popular music. However, you are encouraged not to stick to that pop music format. For example, your groove could (and perhaps should) be in some other meter such as 3/4 or 7/8, or could use some other beat division such as triplets, or could employ patterns that repeat with some other periodicity (see the sketch below), or could use changing meters. Deciding on a meter, a tempo, and a predominant underlying pulse (main beat division) is a good first step in any case. You don't have to adopt the default settings unless those are the settings you actually want.
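
Here is a tiny illustration of the periodicity idea mentioned above (plain Python; nothing to do with Reason's file format): a layer that repeats every 4 steps combined with a layer that repeats every 3 steps yields a composite pattern that repeats only every 12 steps.

    # Two layers with different periodicities, as step patterns (1 = a hit).
    kick = [1, 0, 0, 0]                # repeats every 4 steps
    clave = [1, 0, 1]                  # repeats every 3 steps
    for step in range(12):             # 12 = least common multiple of 4 and 3
        hits = []
        if kick[step % len(kick)]:
            hits.append("kick")
        if clave[step % len(clave)]:
            hits.append("clave")
        print(step, hits or ["-"])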

Begin by constructing different "layers" of pattern -- individual events located at desired moments in time, rhythmic patterns of pitch, or patterns of percussive sounds -- and experiment with combining different layers. Note that these sounds need not be drum sounds; any sound can articulate a point in time. To keep the pattern interesting over a longer period of time, try adding or removing layers, or varying the layers slightly. Develop your pattern in a way that remains interesting for 30 seconds (or more). Although the main focus of this exercise is rhythm, attention to contour, timbre, accent (some things louder than others), panning, etc. can all add to the rhythmic interest.

Hand in your assignment as a Reason file (.reason) deposited in the EEE DropBox called "ReasonRhythm". Your Reason file should be saved such that it will make the exact sound you want when it is opened and played. Check to make sure that the file plays the sound you want just by clicking on the start button. (Before you hand it in, save it, close it, then reopen it and play it to be sure that the saved version works as you want it to.) Important: If your Reason file uses sound files that are not part of the normal sample sets included with the Application, you must be sure to include them in your Reason file by making your document "self-contained". Do that by invoking the "Song Self-Contain Settings..." command in the File menu before saving your document, and checking any sounds that are not a standard part of Reason.

For Tuesday February 19, 2013:

Listen to all of your classmates' musique concrète compositions, which you can download from the CourseFiles folder of the MusiqueConcrete DropBox on EEE. By the end of the day on Monday, February 18, post comments on the MessageBoard about at least three of the compositions. Post your comments as a "Reply" to the post that's already established for the person in question. Do your best to comment on compositions that don't already have three or more comments, so that all compositions receive at least three.

Your comments can be about aesthetics (the artistic decisions of the composer and the emotional impact of the composition) and/or technique (the apparent methods used or characteristics of the sound). Don't be afraid to be critical. "Constructive criticism" doesn't mean just "being nice"; it means criticizing in a way that is intended to be helpful. Your commentary will be most helpful if you remark on things that you think are demonstrably true or factual (e.g., "your piece appears to contain unintended distortion due to clipping at times 0:45 and 0:48") or if you can support your opinion with an explanation (e.g., "although the clicking sounds were repetitive, the fact that they varied in loudness, timbre, and localization within the stereo field kept the repetition from being boring"). Simple statements of "I liked..." or "I didn't like..." are okay, but should be supported with a reason.

For Thursday February 14, 2013:

Make a complex, evolving sound lasting approximately (at least) 30 seconds. You should produce this sound using only Reason software. Within Reason, restrict yourself to using only the Subtractor module for sound synthesis. Since the Subtractor is a monotimbral synthesizer (it can play only one preset sound at a time), you may want to do one or more of the following to make your final sound stereophonically and timbrally interesting: a) use more than one Subtractor, b) use mixer panning for spatialization, c) apply stereo effects processor(s) (e.g., delay, reverb, etc.) to the Subtractor's output. For example, you might create a rack that contains a mixer, one or more effects processors connected to the auxiliary sends/returns of that mixer, and one or more Subtractor modules connected to the inputs of the mixer.

The objective of this assignment is to explore and understand the components of the Subtractor synthesizer by designing a Subtractor patch that has interesting possibilities and using control automation to change its parameters in real time. The goal is to make a single, fairly unified sound (though it may be arbitrarily complex internally) that is interesting because of the way it changes over time, not to make a composition of rhythmic notes in a traditional musical sense. Obviously you will need to have at least one note of long duration stored in the MIDI sequencer part of your file, starting at the beginning of the file and lasting at least 30 seconds. If you need multiple notes to play an interesting chord or to activate multiple Subtractors, they should all occur at the beginning of the file. Even if you find that you need to have some new notes occur in the middle of the sound to add to its interest, refrain from relying on "melody" (the succession of different notes) for interest.
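
Reason's Subtractor is a graphical instrument, not something you program in code, but if it helps to see the underlying idea spelled out, here is a minimal sketch (Python with NumPy; all parameter values arbitrary, and in no way Reason's actual implementation) of what "subtractive synthesis plus parameter automation" amounts to: a harmonically rich oscillator, a lowpass filter whose cutoff is swept over time, and a slow amplitude envelope.

    import numpy as np

    sr, dur = 44100, 30.0
    n = int(sr * dur)
    t = np.arange(n) / sr

    # Oscillator: a naive 110 Hz sawtooth, rich in harmonics to be filtered.
    saw = 2.0 * ((110.0 * t) % 1.0) - 1.0

    # "Automation": sweep the lowpass cutoff from 200 Hz up toward 4000 Hz
    # and back down over the 30 seconds.
    cutoff = 200.0 + 3800.0 * np.sin(np.pi * t / dur) ** 2
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)  # one-pole coefficient

    y = np.zeros(n)
    state = 0.0
    for i in range(n):          # time-varying one-pole lowpass (slow but clear)
        state += alpha[i] * (saw[i] - state)
        y[i] = state

    # Envelope: slow 2-second attack, then a gradual decay.
    env = np.minimum(t / 2.0, 1.0) * np.exp(-t / 20.0)
    sound = (y * env).astype(np.float32)
    # e.g. scipy.io.wavfile.write("evolving.wav", sr, sound) to audition it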

You are strongly advised to read the chapter of the Reason Operation Manual (Macintosh HD/Applications/Reason/Documentation/English/Operation Manual.pdf) that explains the Subtractor synthesizer module in detail. It's very thorough and well-written, and can be quite helpful, so don't hesitate to consult the online documentation regarding any questions you have as you work.

If you make more than one such sound that you think is interesting, by all means feel free to hand in multiple versions of your assignment (properly named to make clear what each file is). Hand in your assignment as a Reason file (.reason) deposited in the EEE DropBox called "ReasonSound". Your Reason file should be saved such that it will make the exact sound you want when it is opened and played. Check to make sure that the file plays the sound you want just by clicking on the start button. (Before you hand it in, save it, close it, then reopen it and play it to be sure that the saved version works as you want it to.)

For Tuesday February 12, 2013:

Teach yourself as much as you can about the ReDrum drum synthesizer in Reason. The Operation Manual (Macintosh HD/Applications/Reason/Documentation/English/Operation Manual.pdf) has a detailed explanation beginning on page 729. (For now, you don't need to concern yourself with the section on Sampling.) To experiment on your own, create a new Reason file, use the Instrument submenu of the Create menu to create a ReDrum Drum Computer device, then follow the manual instructions to open a preset drum kit and enter rhythm information in the drum sequencer.

There are many tutorial videos to be found online. (Just Google "ReDrum tutorial".) For example, here is a decent basic video tutorial (based on Reason 5, but still valid for Reason 6), and here is a brief written tutorial with downloadable example files in Reason 6.

For Thursday February 7, 2013:

Make a sound composition in the spirit of musique concrète, relying primarily on editing, mixing, and sound processing, and using predominantly non-instrumental sounds. Your composition should be at least 60 seconds in duration, but need not be much more than that (unless you're inspired to do more). You are advised to use Pro Tools, but you may use whatever DAW you prefer. The source sounds may be sounds you have recorded yourself and/or sounds you have gathered from other recordings. You should employ some of the traditional techniques discussed in your readings and in class -- such as attack removal, looping, reversing, and speed change -- as well as mixing and panning, and some sound processing such as filtering, echo, and reverberation.

Turn in your completed composition as a single stereo audio file (WAVE or AIFF) in the EEE DropBox called "MusiqueConcrete". Also turn in a text description of your source materials, your methods of editing, mixing, and processing them, and your compositional thinking as you organized the sounds into a formal structure that you consider musical. If you want to include your Pro Tools session folder to show your working process, that will be helpful to our evaluation, but it's not obligatory. Please try to keep the total size of your submitted files under 200 MB.

Your composition will be evaluated on the basis of the quality and care evident in the recording, editing, and mixing, and the thoughtfulness evident in the organization and composition of the sounds.

Thursday's class will feature a guest lecture by composer Ivan Bellocq. You should plan, if at all possible, to attend the free concert of his music scheduled for that same evening at 8:00 pm in Winifred Smith Hall.

For Tuesday February 5, 2013:

Begin learning about MIDI, and about the Reason application. There's no shortage of information online; the main challenge is to find the level of information that's most helpful for you. It's also strongly advised that you get to a computer that has Reason on it -- in the AMC or the Sound Design Studio -- and try it out.

To learn about the MIDI data protocol, I suggest looking at this Tutorial on MIDI and Music Synthesis. Read from the beginning of the article through the section on "The General MIDI (GM) System".

Do enough of your own research to be able to answer questions such as the following. Can MIDI be used successfully to transmit audio? What information is MIDI designed primarily to transmit? What is a MIDI channel, and what is it good for? Do MIDI messages contain timing information about rhythms and durations? How does a computer deal with the timing of MIDI messages (in MIDI sequencer software and MIDI files)? What are some examples of useful MIDI channel messages? Does your computer have a MIDI port (jack)? If not, how does it receive MIDI information to pass it to/from applications in your computer?
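
To make the byte-level structure concrete, here is a tiny sketch in Python (the helper functions are made up for illustration, but the byte values come from the standard MIDI specification). Note that the messages describe performance gestures rather than audio, and carry no durations: a note's length is simply the time between its note-on and its note-off, supplied by the sequencer (or by delta-times in a MIDI file).

    def note_on(channel, pitch, velocity):
        # Status byte: high nibble 0x9 means note-on; low nibble is the
        # channel (0-15, displayed as channels 1-16). Data bytes are 0-127.
        return bytes([0x90 | channel, pitch, velocity])

    def note_off(channel, pitch):
        # High nibble 0x8 means note-off (a release velocity byte still follows).
        return bytes([0x80 | channel, pitch, 0])

    msg = note_on(0, 60, 100)   # middle C at velocity 100 on channel 1
    print(msg.hex())            # -> 903c64: three bytes, no timing, no audio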

To learn about Reason, it's best to get your hands on the program and try things out. You can open and explore any of the compositions in the Demo Songs folder within the Reason folder. But for more basic tutorial information, you can view tutorial videos on the Propellerhead website and additional tutorial videos on the Propellerhead Software YouTube channel. You can also simply Google "Reason tutorial" to easily find a great many other tutorials on the program.

Initially we will be examining the basic operations and paradigms of the program, and will be looking specifically at the Subtractor synthesizer module. So you can focus on instructional videos and text about the Subtractor on the Propellerhead website, or, again, simply Google "Subtractor synthesizer".

Also, Reason is one of those rare programs that actually has comprehensive and well-written manuals. So you should plan to do a lot of reading in the Operation Manual PDF file (in the Reason/Documentation/English folder) and in the Reason Help... that's available in the Help menu within Reason.

Come to class prepared to ask questions about things you don't understand about a) MIDI, b) Reason, c) the Subtractor.

For Thursday January 31, 2013:

Familiarize yourself with the policies for use of the Gassmann Electronic Music Studio.

Read the first five pages (through the top of page 6) of the tutorial explaining the basics of how to make a recording in the Gassmann Studio. You won't understand it all until you get an onsite demonstration of the studio, but it will provide some necessary information to help you understand that demonstration. (For now you can safely ignore items 6 and 7 on pp. 4-5.)

Thursday's class will be held in the Gassmann Studio, located in MOB 20, in the northeast corner of the first floor of the Mesa Office Building (to the north of the Mesa Parking Structure), shown as number 59 on the core campus map. Since the room is small, we will divide the class into two halves, with one half meeting 2:00-2:40 and the other half meeting 2:40-3:20, as shown below. Please try your best to be on time for your session; if you feel you might need to receive the demonstration twice, you're welcome to come to both sessions.


2:00-2:40
 - CHEN, RUBY
 - CHIEN, JENNIFER
 - HERNANDEZ, STEPHANIE GRACE
 - HUANG, SHERRY
 - THAI, VICKY
 - WANG, MING YUAN
 - WONG, MARC WILLIAM
 - ZEMA, ALLISON LEE

2:40-3:20
 - JUHAN, ALEXANDER JAMES
 - MAI, VIVIAN
 - NORRIS, DEVIN STEPAN
 - RIPLEY, MATTHEW TROY
 - ROST, RYAN
 - SONG, ANDREW
 - SOUPHIDA, CHRISTOPHER
 - SZWARCSZTEJN, NAOMI ELIZABETH


Digital audio transmission formats such as AES/EBU, S/PDIF, ADAT, T-DIF, and MADI are agreed-upon industry standards for interconnection of digital audio equipment. It's difficult to find literature that explains the different standard formats without getting into hopelessly geeky detail, but this article (published in 1996) is not bad. The three formats you're most likely to encounter are AES/EBU, S/PDIF, and ADAT, so do a little additional research to try to learn at least: a) Why transmit audio in digital form instead of analog form? b) What kinds of plugs/cables are used for the three most common formats? c) What are the potential problems one might encounter when transmitting audio in this way?

For Tuesday January 29, 2013:

1) Analyze one or more television commercials (on TV or YouTube) for their editing and composition of time. (TV ads are some of the most highly produced video clips you will see, in terms of $$ spent per second of video.) The point is not so much to analyze the use of music, nor the marketing techniques, nor the humor, nor the social implications and subtexts -- although all of those are important aspects of the ad's impact on the viewer -- but rather to note how the editing, pacing, juxtapositions, and internal motions of the scenes (and sounds) are used to control our sense of the passage of time, keeping us engaged and giving us a sense of moving forward. Notice the pacing and organization of the video editing and the content, and the balance and relationship of voice, sound effects, and music. Post a link to the video (if possible) and a commentary containing any interesting observations you made -- about its composition, editing, form, use of sound, and emotional impact -- on the class MessageBoard. Don't forget to check the MessageBoard regularly to answer any questions others might have posted there.

2) Michel Chion's Guide to Sound Objects (Guide des objets sonores) is a comprehensive discussion of the ideas contained in Pierre Schaeffer's Traité des objets musicaux (Treatise on Musical Objects). Read at least the following parts of Chion's work (in sections I and II of the PDF files provided online).

3) In a future assignment you will be asked to turn in a brief composition -- approximately one minute in duration -- of musique concrète, music made by editing, assembling, processing, and mixing recorded sounds (especially non-instrumental sounds). Begin some conceptual planning of a) the type or types of sounds you would like to use, focusing on similarities and contrasts, and b) how you think you might want to organize those sounds in time (a macrocosmic view of the entire formal structure, and homing in on specifics of content within that form). Then begin collecting the library of sounds you will use as source materials for your composition, obtained by finding existing sound recordings and/or by making your own recordings of new sounds.

4) Here are some pieces that demonstrate different aesthetics of, and approaches to, musique concrète, in that all of them were made by editing and assembling recorded sounds. Listen to some of them (as many of them as you can) to get an idea of different composers' approaches to this technique of making music, and perhaps to gain inspiration or ideas for your own approach. You can find links to some of these pieces on the Links page; those that are not listed there can be found via Google, YouTube, Spotify, etc.

Schaeffer, Pierre - Etude aux chemins de fer, 1948. This is considered a classic (one of the very first pieces of musique concrète ever) by the person who coined the term. He made the piece in 1948 with disc cutters, not tape!

Cage, John - Williams Mix, 1952/3. The source sounds and edits in this piece were chosen strictly according to a systematic score combining explicit instructions and chance operations. Thus the editing is an abstract process for creating rhythms that is largely independent of the source sonic materials.

Stockhausen, Karlheinz - Studie II, 1954. This piece consists entirely of electronically produced sounds, but was constructed by detailed editing of each individual sound. (You can find this piece on a CD in the Arts Media Center, the liner notes of which give a detailed description of the making of this work.)

Barron, Louis and Bebe - Forbidden Planet, 1956. Likewise, this piece is made up entirely of electronic sounds, designed to create futuristic imagery (for 1956).

Varèse, Edgard - Poème électronique, 1958. Another classic of early musique concrète, originally composed for multi-channel diffusion in the novel Philips Pavilion at the World's Fair.

Beatles, The - Revolution 9, 1968. This work demonstrates many classic tape techniques such as reversal, looping, speed changes, etc., and because of its distribution on a rock album it is an early instance of avant-garde electroacoustic music merging with popular recorded music.

Dobrian, Christopher - Overture from Microepiphanies: A Digital Opera, 2000. A mash-up of recorded operatic overtures, making inside jokes for those who know the operatic sources, and imitating what an artificially (not-too-)intelligent computer program might come up with if charged with a database of "knowledge" about opera overtures.

Hancock, Herbie (and Swift, Rob) - This is Rob Swift, 2001. Realtime musique concrète editing via virtuosic turntablism, using only non-musical vocal sound sources, performed with instrumental music.

For Thursday January 24, 2013:

Using equipment checked out from the Arts Media Center (and/or equipment you own or to which you can otherwise gain access), make the best quality recording you can of each of the following: a short instrumental or vocal music performance (played either by you or by someone else), non-sung vocal sounds (speech, whispering, coughing, vocal fry, whatever), and sound from some source(s) other than voice or a known musical instrument (a squeaking door, hitting pots and pans, pretty much anything).

Your final product should be three high quality mono or stereo audio files, one with each of the three types of sound. The final sounds should be sufficiently edited to remove extraneous silences or noises at the beginning and end (you can do additional editing to fix mistakes, modify the content, etc. if you'd like), and should be normalized to approximately the same amplitude. The main goal is to demonstrate your ability to produce a good-quality sound file (not particularly to show off your musical prowess, cleverness, etc., although you're welcome to show that off if you'd like). If you decide you want to include some sort of sound processing (reverberation, etc.) just because you like the way that it sounds, that's okay, but at least part of your sound should be relatively unaltered so that it demonstrates that you made a successful recording.

Write a brief text explanation of what you recorded, how you recorded it (what equipment you used, what microphone placement, etc.), and what work you had to do after the recording to produce the finished file. This can be submitted as a .txt, .rtf, .doc, or .pdf file.

Hand in your three AIFF or WAVE files and your one text document in the EEE DropBox called "Recording&Editing". If you want to include project files (Audacity, Pro Tools, Reason, etc.) in order to "show your work", that's OK, but please keep the total size of all the files you submit under 200 megabytes.

For Tuesday January 22, 2013:

Read the following articles. Do enough of your own research about the authors to feel that you have a sense of who they were and what may have influenced their thinking. Focus on at least one remark in these classic writings that seems to you to be particularly insightful, prophetic, or meaningful in relevance to technology and musical practice today. Describe in detail why you think it is a significant idea, and what its implications are to you as a maker of music. Post your writings on the MessageBoard by 12:00 noon on the due date.

Russolo, Luigi. "The Art of Noises", 1913.

Cage, John. "The Future of Music: Credo", 1937.

Cage, John. "Experimental Music: Doctrine", 1955.

Varèse, Edgard. "The Liberation of Sound", 1936-1962.

For Thursday January 17, 2013:

Familiarize yourself with the basic workings of the Pro Tools application. On each of the computers in the Arts Media Center you will find the program, the reference manual, and a set of instructional videos, "Pro Tools 9 Explained". The book Pro Tools 101 Official Courseware: Avid Training Official Curriculum Pro Tools 9.0 is available on the bookshelf in the AMC computer lab. You can also find the Pro Tools manuals online on the Avid website, and you can find some free tutorial videos online, as well.

Probably the best way to learn the program is to start to use it. You can create a blank session, create one or more new audio tracks, import one or more audio files, and begin to edit the sound(s) and apply different audio effects. We will discuss the underlying principles of most digital audio workstation (DAW) programs in upcoming classes.

Here are some questions to which you might want to try to find the answers. When creating a session, what sample rate, bit depth, and file type should I select? What are the reasons for choosing one over another? When creating a new track, what are some reasons for (or advantages of) creating a stereo or mono track? How can I best edit the sound to avoid clicks? How can I manage "regions" or segments of sound that I consider important? How can I control the amplitude of the sounds if I need to turn things up or down? How do I produce (export) a final sound file in AIFF or WAVE format?

For Tuesday January 15, 2013:

Read the Shure Educational Publication on Microphone Techniques for Live Sound Reinforcement (PDF file). Read at least pages 5-11, and more if you're interested. (This article contains quite a lot of useful information, fairly clearly explained. You might also be interested in pp. 32-33 discussing microphone placement.) You should try to learn the main differences between dynamic microphones and condenser microphones, the different kinds of directional sensitivity patterns available (and what those pattern diagrams really mean), the difference between balanced and unbalanced wiring, and the meaning of the terms phantom power, transient response, frequency response, proximity effect, and decibel.
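
Since the decibel trips up nearly everyone at first, a short worked example (plain Python) may help: a decibel expresses a ratio, computed as 20·log10 for amplitude quantities such as voltage or sound pressure.

    import math

    def ratio_in_db(v, v_ref):
        # 20*log10 for amplitude quantities (voltage, sound pressure);
        # it would be 10*log10 for power quantities.
        return 20 * math.log10(v / v_ref)

    print(ratio_in_db(2.0, 1.0))    # doubling an amplitude: about +6.02 dB
    print(ratio_in_db(0.1, 1.0))    # one tenth the amplitude: -20 dB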

Read the Soundcraft Guide to Mixing, an instructional brochure available online as a PDF document. Read at least pages 3-7, and more if you're interested. There is a lot of useful information in this brochure, fairly clearly explained. You might find particularly useful Section 3 (pp. 10-16) on mixing techniques and Section 6 G-J (pp. 28-30) on techniques in the studio. Familiarize yourself with the common features of most mixers: mic input, line input, gain, insert, EQ, auxiliary send, pan, solo, mute, fader, direct out, auxiliary return, main stereo mix out, submix, monitor (control room) out, and phantom power. The brochure introduces these terms as if you already know them, without a lot of explanation; however, there is a glossary on pp. 32-36.

For Thursday January 10, 2013:

Read the article on Digital Audio. You should try to understand the meaning of the following terms: simple harmonic motion, amplitude, frequency, fundamental mode of vibration, harmonics (overtones, partials), harmonic series, spectrum, amplitude envelope, loudness vs. amplitude, pitch vs. frequency, decibel, analog-to-digital converter, digital-to-analog converter, digital sampling, Nyquist theorem/rate/frequency, sampling rate, aliasing/foldover, quantization, quantization precision, signal-to-quantization-noise ratio, clipping. If you don't understand the explanation of those terms in the article, do some research to try to learn more about the things you don't understand. Come to class with specific questions regarding topics, italicized terms, or concepts discussed in the article that are unclear to you.
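
Two of these ideas lend themselves to a quick numerical check. The following sketch (Python with NumPy; the example frequencies are arbitrary) demonstrates aliasing/foldover and the roughly 6 dB of signal-to-quantization-noise ratio gained per bit of quantization precision.

    import numpy as np

    sr = 8000                      # sampling rate; Nyquist frequency = 4000 Hz
    t = np.arange(sr) / sr         # one second of sample times

    # Aliasing: a 5000 Hz sine sampled at 8000 Hz produces exactly the same
    # samples as a 3000 Hz sine (folded back below Nyquist, phase-inverted).
    s5000 = np.sin(2 * np.pi * 5000 * t)
    s3000 = np.sin(2 * np.pi * 3000 * t)
    print(np.allclose(s5000, -s3000))        # True

    # Quantization: rounding to 8-bit precision adds quantization noise; the
    # resulting SQNR is close to the textbook 6.02*N + 1.76 dB (about 50 dB).
    # (A float simulation of the rounding, not a real converter.)
    n_bits = 8
    levels = 2 ** (n_bits - 1)
    quantized = np.round(s3000 * levels) / levels
    noise = quantized - s3000
    sqnr = 10 * np.log10(np.mean(s3000 ** 2) / np.mean(noise ** 2))
    print(round(sqnr, 1))                    # roughly 50 dB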

Read How Hearing Works. You should be able to define words such as compression and rarefaction, frequency, amplitude, pinna, tympanic membrane, ossicles, cochlea, basilar membrane, and organ of Corti. What does any of this information have to do with music? Is it useful for a composer to understand how sound is perceived? Can you give reasons and/or examples to support your answer?


This page was last modified March 12, 2013.
Christopher Dobrian, dobrian@uci.edu