A look back at some items in our archives.
Richard Andrews
Center for New Music and Audio Technologies (CNMAT)
University of California, Berkeley
Abstract
A live-performance musical instrument can be assembled around current laptop computer technology. One adds a controller such as a keyboard or other gestural input device, a sound diffusion system, some form of connectivity processor(s) providing audio I/O and gestural controller input, and reactive real-time native signal-processing software. A system consisting of a hand-gesture controller; software for gesture analysis and mapping, machine listening, composition, and sound synthesis; and a controllable-radiation-pattern loudspeaker is described.
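As a concrete illustration of the chain the abstract describes, here is a minimal Python sketch of the gesture-to-mapping-to-synthesis path. It is not CNMAT's software: the gesture source is simulated, the mapping is a toy, and audio is rendered block-by-block to a WAV file rather than through a live audio I/O callback. The names gesture_stream and map_gesture are hypothetical.

```python
import numpy as np
import wave

SR = 44100      # sample rate (Hz)
BLOCK = 512     # samples per block, as in a real-time processing callback

def gesture_stream(n_blocks):
    """Stand-in for a gestural controller: yields (x, y) pairs in [0, 1]."""
    for i in range(n_blocks):
        t = i / n_blocks
        yield 0.5 + 0.5 * np.sin(2 * np.pi * t), t   # a slow hand sweep

def map_gesture(x, y):
    """Map raw gesture coordinates to synthesis parameters."""
    freq = 110.0 * 2.0 ** (3.0 * x)   # x spans three octaves of pitch
    amp = 0.1 + 0.8 * y               # y controls loudness
    return freq, amp

phase, blocks = 0.0, []
for x, y in gesture_stream(200):
    freq, amp = map_gesture(x, y)
    inc = 2 * np.pi * freq / SR
    blocks.append(amp * np.sin(phase + inc * np.arange(BLOCK)))
    phase = (phase + inc * BLOCK) % (2 * np.pi)   # keep phase continuous

pcm = (np.clip(np.concatenate(blocks), -1, 1) * 32767).astype(np.int16)
with wave.open("gesture_demo.wav", "wb") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

Swapping the simulated stream for real controller input and the WAV writer for an audio callback yields the block-at-a-time architecture the abstract sketches.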
Matthew Wright
Center for New Music and Audio Technologies (CNMAT), UC Berkeley
matt@cnmat.berkeley.edu
Eric D. Scheirer
Machine Listening Group, MIT Media Laboratory
eds@media.mit.edu
Richard Andrews, CNMAT, UC Berkeley, 1750 Arch St., Berkeley, CA 94709
Timbre, usually defined as the collection of attributes of a sound other than pitch, loudness, and duration, plays a strong role in determining the perceptual organization of musical patterns. Timbre's primary organizational influence appears to be on perceptual grouping, as in auditory stream segregation and rhythmic segmentation. Grouping by timbre can influence the tonal implications of otherwise ambiguous pitch material.
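The stream-segregation effect described here can be heard with a simple stimulus of the kind used in such experiments. The following sketch (illustrative only, not material from the paper) alternates a single pitch line between a pure sine timbre and a brighter multi-harmonic timbre; listeners tend to split the sequence into two streams grouped by timbre rather than by pitch order.

```python
import numpy as np
import wave

SR = 44100

def tone(freq, dur, harmonics):
    """Additive tone; `harmonics` is a list of (partial_number, amplitude)."""
    t = np.arange(int(SR * dur)) / SR
    env = np.minimum(1.0, 50 * np.minimum(t, dur - t))   # 20 ms ramps
    return env * sum(a * np.sin(2 * np.pi * freq * k * t) for k, a in harmonics)

pure = [(1, 1.0)]                                  # sine timbre
bright = [(1, 0.5), (2, 0.4), (3, 0.3), (4, 0.2)]  # brighter timbre

pitches = [262, 330, 294, 349, 262, 392]           # one interleaved pitch line
notes = []
for i, f in enumerate(pitches * 4):
    timbre = pure if i % 2 == 0 else bright        # alternate timbre per note
    notes.append(0.3 * tone(f, 0.12, timbre))

pcm = (np.clip(np.concatenate(notes), -1, 1) * 32767).astype(np.int16)
with wave.open("streaming_demo.wav", "wb") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```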
In this paper, we present soft computing tools and techniques aimed at realizing musical instruments that learn. Specifically, we explore applications of neural network and fuzzy logic techniques to the design of instruments that form highly personalized relationships with their users through self-adaptation. We demonstrate techniques for adapting sensor arrays and for realizing highly expressive real-time sound-synthesis algorithms.
Adrian Freed, Tristan Jehan
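As a rough sketch of the neural-network side of the abstract above (the fuzzy-logic techniques and the authors' actual algorithms are not reproduced here), the following numpy-only example trains a tiny two-layer network by online gradient steps to map raw sensor readings to the synthesis parameters a performer demonstrates. All shapes, names, and the calibration scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_HIDDEN, N_PARAMS = 8, 16, 3   # e.g. pitch, brightness, loudness

W1 = rng.normal(0, 0.5, (N_HIDDEN, N_SENSORS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.5, (N_PARAMS, N_HIDDEN))
b2 = np.zeros(N_PARAMS)

def forward(s):
    h = np.tanh(W1 @ s + b1)
    return h, W2 @ h + b2                  # linear output: synthesis parameters

def adapt(s, target, lr=0.01):
    """One online gradient step toward the performer's intended parameters."""
    global W1, b1, W2, b2
    h, y = forward(s)
    err = y - target                       # gradient of 0.5 * ||y - target||^2
    dW2 = np.outer(err, h)
    dh = (W2.T @ err) * (1 - h ** 2)       # backprop through tanh
    dW1 = np.outer(dh, s)
    W2 -= lr * dW2; b2 -= lr * err
    W1 -= lr * dW1; b1 -= lr * dh
    return float(0.5 * err @ err)

# Simulated calibration: the performer demonstrates sensor poses and the
# parameters each should produce; repeated passes shrink the mapping error,
# personalizing the instrument to that performer.
demos = [(rng.uniform(0, 1, N_SENSORS), rng.uniform(-1, 1, N_PARAMS))
         for _ in range(20)]
for epoch in range(500):
    loss = sum(adapt(s, p) for s, p in demos)
print("final mapping error:", loss)
```

In a real instrument, the demonstration pairs would come from live sensor data rather than random draws, and the adapted network would run inside the synthesis loop at control rate.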