Abstract

The data in a digitized video image can be parsed by software designed to track color, value, and change in the frame.

The interpreted data yields the speed, direction, and location of moving objects in the video image, and this information can serve as control input to music-generating software.
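
As a minimal illustrative sketch of this kind of analysis (not the demonstrated systems themselves), the location, speed, and direction of motion can be estimated with simple frame differencing; OpenCV and NumPy are assumed dependencies here.

    # Sketch: derive motion location, speed, and direction from a video stream
    # by differencing successive grayscale frames (assumed OpenCV/NumPy setup).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)          # camera or video file
    prev_gray, prev_centroid = None, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Pixels that changed since the last frame mark moving objects.
            diff = cv2.absdiff(gray, prev_gray)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            m = cv2.moments(mask)
            if m["m00"] > 0:
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # location
                if prev_centroid is not None:
                    dx, dy = cx - prev_centroid[0], cy - prev_centroid[1]
                    speed = (dx * dx + dy * dy) ** 0.5               # pixels per frame
                    direction = np.degrees(np.arctan2(dy, dx))       # angle of travel
                    print(f"pos=({cx:.0f},{cy:.0f}) speed={speed:.1f} dir={direction:.0f}")
                prev_centroid = (cx, cy)
        prev_gray = gray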

In this way, movement in a video image can be mapped directly into musical meaning. For example, a dancer can control the music, reversing the traditional model in which the dancer follows the music.
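
The mapping step might look like the following sketch, which scales motion measurements into MIDI control-change values that music-generating software can respond to. The specific controller numbers, ranges, and the mido library are assumptions for illustration, not the mappings used by the demonstrated systems.

    # Sketch: map motion measurements to MIDI control changes (assumed mido library;
    # CC numbers and value ranges are illustrative choices, not the systems' mappings).
    import mido

    def to_cc(value, lo, hi):
        # Clamp and scale a measurement into the 0-127 MIDI range.
        value = max(lo, min(hi, value))
        return int(round(127 * (value - lo) / (hi - lo)))

    out = mido.open_output()  # default MIDI output port

    def send_motion(x, speed):
        # Horizontal position of the dancer -> pan (CC 10),
        # speed of movement -> modulation depth (CC 1).
        out.send(mido.Message('control_change', control=10, value=to_cc(x, 0, 640)))
        out.send(mido.Message('control_change', control=1, value=to_cc(speed, 0, 50)))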

A variety of systems exist for motion tracking and capture in video, ranging from economical software solutions such as BigEye and SoftVNS for the Macintosh to high-end systems such as the Vicon infrared capture system for Windows NT.

This demonstration provides an introduction to the musical use and programming of these systems and compares their relative strengths and weaknesses.