This page contains assignments for the class
Music 151: Computer Music Composition - Winter 2014
University of California, Irvine
The assignment for the upcoming class will be posted here after each class session.
For Tuesday March 18, 2014:
The final composition project for this class should meet the following requirements:
The composition should be accompanied by a prose description of the goals of the piece, the working methods used in its composition, and a descriptive analysis of the compositional ideas and the musical result. This document may draw upon ideas expressed in the original compositional plan (assigned for Tuesday March 4), but should refer to the actual compositional result in addition to initial intentions.
You should also turn in any and all working files that will help show your working methods (source audio files, Pro Tools session folder, Reason song file, etc.) compressed as a single .zip archive file. If the file is too big to upload to an EEE DropBox, you can instead upload it to a file-sharing site such as Google Drive, DropBox, WeTransfer, etc., and provide a URL link to the download site.
The final project should be a stereo AIFF file of a completed original composition approximately three minutes in duration. The prose description should be submitted as a .txt, .rtf, .doc, or .pdf file. The working files should be submitted as a .zip archive or as a .txt file containing the URL of the downloadable .zip file.
These files should be deposited in the EEE DropBox called "FinalProject" no later than 11:59 pm on Monday March 17, 2014. You are expected, as a requirement of the class, to attend the final critique session for discussion of all student final projects.
For Thursday March 13, 2014:
Study for the final exam by reviewing your class lecture notes, all the assigned readings, and all the related questions listed in the final exam study guide.
For Tuesday March 11, 2014:
Study for the final exam by reviewing all the assigned readings below and all the related questions listed in the final exam study guide. Come to class prepared to ask questions about topics you don't understand clearly.
For Thursday March 6, 2014:
Begin your final composition project, following your project proposal.
For Tuesday March 4, 2014:
Write as complete a description as possible of your plan for your final composition project. Your eventual composition may end up deviating somewhat from this initial plan, but the idea is to try to make as many decisions as possible about what you will do, trying to foresee the path to successful completion and giving yourself a list of plans and/or constraints to follow.
The description should include as much detail as possible about the form and content of the piece. How long will it be? Will it be sectional? If so, how many sections will it have, and how long will each one be? Will it have a sense of beat and meter? If so, what meter(s), and what tempo(s)? Will it allude to a known musical style? What type of sound materials will it use? Recorded sounds, synthesized sounds, sampled media sounds? What will the mood and overall energy level be in the various sections? How dense or sparse will the activity get at different points in the piece? Will there be a narrative and/or a central core concept (technical or extra-musical)? Etc., etc. Many of these decisions may be global guidelines, while others may be very specific.
The description should identify at least one specific compositional technique or sonic/musical idea you want to explore in your project that is uniquely possible using the computer. In other words, your musical result should take advantage of, and ideally should even exemplify, some musical idea or technique that does not commonly belong to instrumental music and is possible primarily because of capabilities afforded by the computer. Use your imagination to find a topic that interests you, and use that decision to help shape the musical content and working methods of your project.
The description should also address the technical methods you plan to use. What software? Pro Tools, Reason, or both? Will you need to use other software to achieve certain tasks? What equipment will you need for recording source material? To what extent will the composition focus on audio editing and/or MIDI sequencing and/or timbre modification with effects? Try to establish what you want to do, and imagine the method by which you'll achieve that goal. Say (just as an arbitrary example) you decide you want to explore rhythmic spatialization via panning, volume, and reverb, using extremely short (unrecognizable or barely recognizable) snippets of the music of a particular CD as your source material. You might choose to use iTunes to rip the songs of the CD and save them as AIFF or WAVE files. Then you might use Pro Tools to capture extremely short excerpts of those files as regions to be placed in specific rhythms with specific panning and effects. Or you might use Audacity to make small selections and export those as individual AIFF files for importation into Reason's ReDrum sampling drum machine(s). Each sample in each ReDrum might then be assigned a unique panning, loudness, envelope, and effect send amount. Or you might decide that you will change those attributes dynamically with automation. By trying to imagine your working method, and describing it in prose, you're making a game plan for yourself.
Once you have determined your artistic goals and your working methods, you can then set up for yourself a timeline of activities, and establish "milestones" of things you plan to have accomplished by a certain date, as a way of making a good working schedule for yourself. Include that timeline and plan in your description.
Post your full description on the class MessageBoard no later than 11:59 pm on Monday March 3. If you think it will be helpful to do so, you should feel free to ask your classmates, TA, or professor for advice or suggestions via the MessageBoard as you prepare your plan.
Read the article on "Advanced Reverberation", Part 1 and Part 2. Pay attention to the discussion of "predelay" (initial delay), "early reflections", "RT60 decay time", "high-frequency damping", and "direct-processed mix", noting what sonic effect each one plays a role in establishing. Keep some of these principles in mind as you use reverberation in your own work.
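As a supplement to the reading, here is a small sketch (my own illustration using a standard textbook relation, not a formula taken from the article) of how the RT60 decay time translates into a feedback gain in a Schroeder-style comb-filter reverberator:

```python
import math

def comb_feedback_gain(loop_delay_s: float, rt60_s: float) -> float:
    """Feedback gain for a comb filter so its tail decays by 60 dB in rt60_s.

    Each pass through the delay loop attenuates the signal by the gain g.
    After rt60_s seconds the signal has circulated rt60_s / loop_delay_s
    times; solving 20*log10(g) * (rt60_s / loop_delay_s) = -60 for g gives
    this formula.
    """
    return 10 ** (-3.0 * loop_delay_s / rt60_s)

# Example: a 30 ms loop delay and a 2-second RT60
print(round(comb_feedback_gain(0.030, 2.0), 4))  # -> 0.9016
```

Note how a longer RT60 requires a gain closer to 1.0, which is why small changes to a decay-time knob can make a reverb tail dramatically longer.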
Read about ReWire, and follow the link to the ReWire Help Pages, specifically the link to instructions on how to use ReWire in Pro Tools, where you will find several helpful tutorial chapters.
For more information about how to use ReWire, you can simply Google "ReWire Tutorial" to find various helpful tutorial videos. If you have the opportunity to do so, try implementing ReWire as an Instrument plug-in in Pro Tools.
For Thursday February 27, 2014:
Listen to the MP3 files of your classmates' second composition projects, rhythms created in Reason. Come to class prepared to discuss the musical and technical pros and cons of each piece.
Come to class prepared to contribute to establishing a final critique time that will work well for all members of the class. If you don't attend the class or at least provide your available times via email ahead of time, a decision will be made without your input.
For Tuesday February 25, 2014:
Listen to the MP3 files of your classmates' first composition projects. Come to class prepared to discuss the sonic, musical, and technical pros and cons of each piece.
Come to class knowing your final exam week schedule so that we can establish a final critique time that will work well for all members of the class.
For Thursday February 20, 2014:
Using Reason's MIDI sequencer and any combination of Reason devices, compose an interesting rhythm made up of multiple sounds. The sounds you use may include any desired combination of synthesized sounds and sampled sounds, and any desired combination of pitched and unpitched sounds. The main goal of the assignment is to employ the time grid provided by the sequencer to help you organize events in a way that creates an engaging "groove" or sense of rhythmic pattern, and at the same time challenge yourself to think of new rhythmic possibilities afforded by Reason's time-grid interface, its pattern generators such as Matrix and ReDrum, and the possibility of establishing different periodicities.
If you don't have a clear concept in mind for a rhythm you want to try to create, it might be helpful to listen to music that has a rhythm you find interesting, and analyze how the component sounds are combined to create that effect, as demonstrated in class, to get some initial ideas. The examples analyzed in class focused on patterns that repeated (with variations) every 4 or 8 beats, with each beat divided into 4 parts. That model (or a "swung" version of that model) is useful for recreating a great many of the grooves used in popular music. However, you are encouraged not to stick just to that pop music format. For example, your groove could (and perhaps should) be in some other meter such as 3/4 or 7/8, or could use some other beat division such as triplets, or could employ patterns that repeat with some other periodicity, or could use changing meters. Deciding on a meter, a tempo, and a predominant underlying pulse (main beat division) is a good first step in any case. You don't have to just adopt the default settings (unless those are the settings you actually want)! Music majors, especially, I hope you will challenge yourself to try to make something rhythmically novel and interesting, while hopefully still remaining "groovy".
Begin by constructing different "layers" of pattern -- individual events located at desired moments in time, or rhythmic patterns of pitch, or patterns of percussive sounds -- and experiment with combining different layers. Note that these sounds are not necessarily drum sounds; any sound can be an articulator of a point in time. To keep the pattern interesting over a longer period of time, try adding or removing layers, or varying the layers slightly. Develop your pattern in a way that remains interesting for 30 seconds (or more). Although the main focus of this exercise is rhythm, paying attention to contour, timbre, accent (some things louder than others), panning, etc. can all add to the rhythmic interest.
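The layering idea above has a simple arithmetic side worth noticing: layers of different lengths drift in and out of phase, and the combined pattern only repeats after their least common multiple. A small sketch (my own illustration, not a Reason feature):

```python
from math import gcd

def combined_period(*layer_lengths: int) -> int:
    """Number of grid steps before a set of looping layers realigns
    (the least common multiple of their lengths)."""
    period = 1
    for n in layer_lengths:
        period = period * n // gcd(period, n)
    return period

# A 16-step layer (one bar of 4/4 in 16th notes) looped against a
# 14-step layer (one bar of 7/8 in 16th notes): the composite pattern
# only repeats every 112 steps, i.e. seven bars of 4/4.
print(combined_period(16, 14))  # -> 112
```

Choosing layer lengths whose least common multiple is long is one easy way to keep a short set of patterns from sounding repetitive over 30 seconds or more.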
Hand in your assignment as a Reason file (.reason) deposited in the EEE DropBox called "ReasonRhythm". Your Reason file should be saved such that it will make the exact sound you want when it is opened and played. Check to make sure that the file plays the sound you want just by clicking on the start button. (Before you hand it in, save it, close it, then reopen it and play it to be sure that the saved version works as you want it to.) Important: If your Reason file uses sound files that are not part of the normal sample sets included with the application, you must be sure to include them in your Reason file by making your document "self-contained". Do that by invoking the "Song Self-Contain Settings..." command in the File menu before saving your document, and checking any sounds that are not a standard part of Reason.
For Tuesday February 18, 2014:
This class session will be held in the Colloquium Room of the Contemporary Arts Center (building 721 on the campus map), Room 3201. Leave yourself a few extra minutes to find it without being late.
In preparation for the presentation, you might want to read about the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University, and the Instrumented Bodies project undertaken at McGill's Input Devices and Music Interaction Laboratory (IDMIL).
For Thursday February 13, 2014:
We will be examining the basic operations and paradigms of the Reason program, and we'll be looking specifically at the Subtractor synthesizer module. Reason is one of those rare programs that actually has comprehensive and well-written manuals, so you should read the section of the Operation Manual PDF file (in the Reason/Documentation/English folder) that pertains to the Subtractor (pp. 561-586). If you can't get to that file in the AMC or the Gassmann Studio, you can find it on the Propellerhead website. Also, you can find instructional videos and text about the Subtractor on the Propellerhead website, or simply Google "Subtractor synthesizer".
Come to class prepared to ask questions about things you don't understand about a) MIDI, b) Reason, c) the Subtractor.
For Tuesday February 11, 2014:
Complete the following project of recording, editing, and mixing audio using Pro Tools.
Using good quality recording equipment at the Gassmann Studio, the Arts Media Center, or checked out from the AMC, record three types of sound material with the best quality you can: musical sound, spoken word, and other sound(s). The musical sound can be sound that you produced yourself by playing or singing, or music played by someone else. Likewise, the spoken voice can be yours or someone else's. You will probably want to choose the text and the voice type based on your eventual intended use of the material. The other sound(s) can be anything -- ambient noise such as traffic, close-up inspection of some interesting sound that often goes unnoticed, a sound that you find unique and interesting, whatever. You can use whatever software you'd like to do the recording. If you're recording in Gassmann or the AMC it makes most sense to record directly into Pro Tools. If you're using the AMC's remote recording equipment, you will then need to upload those files to a computer that has Pro Tools and import the audio into Pro Tools.
It is often useful to collaborate on the recording portion of this assignment, so that one person can act as recording technician while the other acts as the "talent" who produces the sound. You are welcome to team up with fellow students, so that you can trade roles and help each other. Each person is responsible, however, for producing her/his own unique recorded material. You should record at least as much sound material as you think you will actually need for your finished product, and should usually do multiple takes to ensure that you have a good recording.
Using only the sounds you have recorded, make a stereo sound composition or sound design lasting at least 30 seconds that includes each of the three types of sound in an interesting and effective way. The criterion of "interesting and effective" depends on what you're trying to achieve. Your sound composition could be intended as a musique concrète composition or it could be as clearly functional as a television advertisement or public service spot, or anywhere in between. Use your imagination and do something that is interesting and educational for you. The objective is to use well-recorded sound, edited and mixed skillfully, to produce an interesting listening experience of good technical quality.
For Thursday February 6, 2014:
Michel Chion's Guide to Sound Objects (Guide des objets sonores) is a comprehensive discussion of the ideas contained in Pierre Schaeffer's Traité des objets musicaux (Treatise on Musical Objects). Read at least the following parts of Chion's work (in sections I and II of the PDF files provided online).
For Tuesday February 4, 2014:
Work on your recording/editing/mixing project for this Thursday's class. Come to this Tuesday class with questions (or tips) based on your experience working on the project.
For Thursday January 30, 2014:
Familiarize yourself with the policies for use of the Gassmann Electronic Music Studio.
Read the first five pages (through the top of page 6) of the tutorial explaining the basics of how to make a recording in the Gassmann Studio. You won't understand it all until you get an onsite demonstration of the studio, but it will provide some necessary information to help you understand that demonstration. (For now you can safely ignore items 6 and 7 on pp. 4-5.)
Thursday's class will be held in the Gassmann Studio, located in MOB 20, in the northeast corner of the first floor of the Mesa Office Building (to the north of the Mesa Parking Structure), shown as number 59 on the core campus map. Since the room is small, we will divide the class into two halves, with one half meeting 12:30-1:10 and the other half meeting 1:10-1:50, as shown below. Please try your best to be on time for your session; if you feel you might need to receive the demonstration twice, you're welcome to come to both sessions.
12:30-1:10 - ASANO, CHRIS - BEPPU, HIDEAKI - ESPIRITU, ELWOOD - GANOTIS, MICHELLE - KUO, COLLEEN - MOHAN, ABHINAV
1:10-1:50 - ESCUTIA, DIEGO - GIESE, ERIC - JACINTO, REGINE - CALDERON, MICHAEL-DAVID - PENNELL, MAKA
Digital audio transmission formats such as AES/EBU, S/PDIF, ADAT, T-DIF, and MADI are agreed-upon industry standards for interconnection of digital audio equipment. It's difficult to find literature that explains the different standard formats without getting into hopelessly geeky detail, but this article (published in 1996) is not bad. The three formats you're most likely to encounter are AES/EBU, S/PDIF, and ADAT, so do a little additional research to try to learn at least: a) Why transmit audio in digital form instead of analog form? b) What kinds of plugs/cables are used for the three most common formats? c) What are the potential problems one might encounter when transmitting audio in this way?
For Tuesday January 28, 2014:
Read the Shure Educational Publication on Microphone Techniques for Live Sound Reinforcement (PDF file). Read at least pages 5-11, and more if you're interested. (This article contains quite a lot of useful information, fairly clearly explained. You might also be interested in pp. 32-33 discussing microphone placement.) You should try to learn the main differences between dynamic microphones and condenser microphones, the different kinds of directional sensitivity patterns available (and what those pattern diagrams really mean), the difference between balanced and unbalanced wiring, and the meaning of the terms phantom power, transient response, frequency response, proximity effect, and decibel.
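One of the listed terms, the decibel, is just arithmetic, and it helps to have the formula in hand while reading. A tiny sketch (my own illustration, not taken from the Shure publication) of the standard amplitude-ratio definition:

```python
import math

def gain_db(amplitude_ratio: float) -> float:
    """Express an amplitude (voltage or sound-pressure) ratio in decibels."""
    return 20 * math.log10(amplitude_ratio)

print(gain_db(2.0))   # doubling the amplitude is about +6.02 dB
print(gain_db(0.5))   # halving it is about -6.02 dB
print(gain_db(10.0))  # ten times the amplitude is exactly +20 dB
```

Keep in mind that decibels always express a ratio; an absolute level such as "94 dB SPL" implies a standard reference value.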
You can find some additional useful information on microphone choice and placement online:
A Wikipedia article on Microphone practice
Recording Basics, an article provided by the MXL microphone company
Recording Tips, a series of four articles by Brandon Ryan provided by the Roland audio/music company
For Thursday January 23, 2014:
Read the Soundcraft Guide to Mixing, an instructional brochure available online as a PDF document. Read at least pages 3-7, and more if you're interested. There is a lot of useful information in this brochure, fairly clearly explained. You might find particularly useful Section 3 (pp. 10-16) on Mixing Techniques, Section 6 G-J (pp. 28-30) on techniques in the studio, and the final page (p. 31) on Wiring & Connectors. Familiarize yourself with the common features on most mixers: mic input, line input, gain, insert, EQ, auxiliary send, pan, solo, mute, fader, direct out, auxiliary return, main stereo mix out, submix, monitor (control room) out, and phantom power. The brochure introduces these terms as if you already know them, without a lot of explanation; however, there is a glossary on pp. 32-36.
Inform yourself as well as possible about the following distinctions:
- line level vs. mic level vs. instrument level
- balanced audio cable vs. unbalanced audio cable
- common audio plug types
For Tuesday January 21, 2014:
There is no assignment for this class session, which will be a guest lecture by Professor Marc Battier on the topic of Electroacoustic music in France. You can prepare for the lecture by listening to Mortuos Plango by Jonathan Harvey.
For Thursday January 16, 2014:
In preparation for the presentation by Marc Battier, read about the applications iAnalyse and EAnalysis. If you have access to a Macintosh computer, you can download the programs and try them out.
For Tuesday January 14, 2014:
Analyze one or more television commercials (on TV or YouTube) for their editing and composition of time. (TV ads are some of the most highly produced video clips you will see, in terms of $$ spent per second of video.) The point is not so much to analyze the use of music, nor the marketing techniques, nor the humor, nor the social implications and subtexts -- although all of those are important aspects of the ad's impact on the viewer -- but rather to note how the editing, pacing, juxtapositions, and internal motions of the scenes (and sounds) are used to control our sense of the passage of time, keeping us engaged and giving us a sense of moving forward. Notice the pacing and organization of the video editing and the content, and the balance and relationship of voice, sound effects, and music. Comment on how the techniques and decisions that were made in the composition of the commercial's time structure might be analogous to techniques of musical composition that you could use in your own work. Post a link to the video (if possible) and a commentary containing whatever interesting observations you made about its composition, editing, form, use of sound, and emotional impact -- on the class MessageBoard. (The professor has already posted an example there.) Don't forget to check the MessageBoard regularly to read what others have posted and to answer any questions others might have posted there.
For Thursday January 9, 2014:
Read the article on Digital Audio. You should try to understand the meaning of the following terms: simple harmonic motion, amplitude, frequency, fundamental mode of vibration, harmonics (overtones, partials), harmonic series, spectrum, amplitude envelope, loudness/amplitude, pitch/frequency, decibel, analog-to-digital converter, digital-to-analog converter, digital sampling, Nyquist theorem/rate/frequency, sampling rate, aliasing/foldover, quantization, quantization precision, signal-to-quantization noise ratio, clipping. If you don't understand the explanation of those terms in the article, do some research to try to learn more about the things you don't understand. Come to class with specific questions regarding topics, italicized terms, or concepts discussed in the article that are unclear to you.
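As a quick illustration of the Nyquist theorem and aliasing from that list (my own sketch, not an example from the article): a sine wave above half the sampling rate produces exactly the same samples as a lower "alias" frequency, so the two are indistinguishable once digitized.

```python
import math

def sample_sine(freq_hz: float, sr_hz: float, n: int) -> list:
    """Return n samples of a sine wave at freq_hz, sampled at sr_hz."""
    return [math.sin(2 * math.pi * freq_hz * i / sr_hz) for i in range(n)]

sr = 1000  # sampling rate in Hz; Nyquist frequency is 500 Hz
hi = sample_sine(900, sr, 8)    # 900 Hz is above the Nyquist frequency...
lo = sample_sine(-100, sr, 8)   # ...and folds over to 900 - 1000 = -100 Hz
assert all(abs(a - b) < 1e-9 for a, b in zip(hi, lo))
```

This is why an analog-to-digital converter must filter out frequencies above half the sampling rate before sampling: once the alias is recorded, no later processing can tell it apart from a genuine low-frequency component.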
This page was last modified March 11, 2014.
Christopher Dobrian, email@example.com