LIVE/ELECTRO-ACOUSTIC MUSIC - A PERSPECTIVE FROM HISTORY AND CALIFORNIA

This is an electronic version of an article published in Contemporary Music Review, 6: 1, 91-106, in 1991. Contemporary Music Review is available online at: http://www.informaworld.com.  This article in its published form is available online at:  http://dx.doi.org/10.1080/07494469100640101.

This is a preprint version of this article, not the version that was published.  It does not contain any of the illustrations that were published with the article, nor are the audio examples that were published with the article available in this preprint version. 

This article presents both a historical perspective on live/electro-acoustic music and a picture of what was happening in this field in the late 1980s, at least in California.

___________________________________________________________________ 

Live/electro-acoustic music - a perspective from history and California 

Barry Schrader

California Institute of the Arts

I. Before

Electro-acoustic music began as a performance medium. The naturalness of this approach seems obvious, since the idea of studio composition did not occur to anyone during the early part of the 20th century. With a few exceptions, the design of early electro-acoustic instruments copied that of acoustic instruments, replacing the mechanical sound-producing apparatus with electronics. For the most part, however, the traditional interfaces, usually keyboards, remained.

 

The history of acoustic instruments is complex, involving the invention and development of many designs over thousands of years. And yet all acoustical instruments are built around physical action/response mechanisms, which may be apparent or implied. A violin, for example, presents an apparent action/response mechanism since one can easily see and comprehend the bowing or plucking of a string and can connect it to the resulting sound. In some other instruments, such as a piano or a pipe organ, the physical action/response mechanism of the instrument is not apparent but rather implied. One does not usually see the hammers strike the strings or the pipes opening and closing, but the action of the performer's hands on the keyboards clearly initiates the sound events. In all cases, it is the human performer who is the catalyst of the action/response mechanism and who links the traditional music performance to the rest of one's experience. 

Sound, it may be argued, is an essentially accompanimental phenomenon. Being primarily visual beings, humans usually perceive sound as an aural response or accompaniment to a visually perceived action. In fact, there is a great likelihood that people learn the meaning of sounds primarily in this fashion. If one hears a door shut in another room, the sound conveys the meaning of a physical event by past association and is therefore instantly understood. By contrast, most people are uncomfortable if they experience a disembodied sound for which they have no reference, hence no meaning.

The traditional performance of music on acoustical instruments has developed as an extension of action/response perception. The action is perceived as visual and physical and the accompanimental response is aural.[i] Thus one learns that when the percussionist strikes the tympani with a stick, a certain sound will result. The art of "playing" an instrument is that of creating a series of meaningful action/response associations. What is meaningful is learned by experience, and the meanings of acoustical instrumental performance are generally acquired by people at a very early stage through experiencing live, filmed, or videotaped performances. Once the meanings of these action/response mechanisms are acquired, the responses alone are sufficient to recall the actions. Just as one understands the sound of a door shutting in another room, so one comprehends the sound recording of a piano performance.

The understanding of the action/response perception of traditional music performance raises several fascinating questions: Is music primarily accompanimental? How important is the sounding music in what we consider as "the musical experience"? Is the performer more important than the sounding music? While I have dealt with and shall continue to deal with these questions, they are not immediately germane to the present discussion of live/electro-acoustic music. Two more relevant questions that arise from observations of action/response perception are: How has this influenced the course of live/electro-acoustic music? And what is the relationship between action/response perception and recent electro-acoustic music technology?

 

Early electro-acoustic music essentially involved the traditional practices of performance composition, particularly as understood from an 18th and 19th century western perspective. While there were some new sounds, the medium of early electro-acoustic music could easily be seen as an extension of traditional performance practice.[ii]  Then came studio composition. Although practiced by a few composers in isolated work in the 1920s, 1930s, and early 1940s, a continuous history of studio composition is usually considered to begin in 1948 with the work of Pierre Schaeffer. Studio compositions do not provide the audience with the most important elements of performance practice. There is no real-time performance, so there can be no visual action/response relationships; there is no performer to watch.[iii]  Even more disconcerting to most listeners, there is no learned action/response mechanism to reference when listening to electro-acoustic music studio compositions.[iv]  Studio composition not only presented a new way of composing music but also often required a new way of listening to music. For these reasons, studio composition must be seen as the most radical development in the history of music and the most important musical concept of the 20th century. Initially, studio composition was welcomed by the adventuresome as something holding great promise. But people soon began to feel uneasy about loudspeaker music. From the inside came such excuses as the elemental nature of the timbres or the limitations of the technology. From the audience issued a more honest negative response: there was nothing and nobody to watch.

In a deeper sense, most people's uneasiness with the product of electro-acoustic music studio composition in art music circles arises from the lack of reference to even implied action/response mechanisms. While insiders may chortle over comments from the great unwashed such as "What does this music mean?", "What is this sound?", and "How is this music played?", such questions point to one of the main difficulties of the acceptance of electro-acoustic music in the art music world. The problem is one of comprehending music that avoids and transcends traditional musical instrumentalities. Interestingly, this has not been a problem in popular and commercial music. These musics, in their present incarnations, developed in the 20th century and to some extent along with recording technology. In fact, much of commercial music could not exist outside of the studio. Additionally, music for radio, film and video has become mostly studio composition by this time. So, while studio composition developed out of the art music tradition, it has not received a very favorable response due to the many ties of the art music world with the past, particularly ties with the proscenium concert hall. By contrast, studio composition in many variations has been widely accepted without apologies in the more popular forms of music, which exist largely by virtue of contemporary music technology.

 

From the beginning, then, many composers of electro-acoustic music have felt the need to involve elements of traditional performance in their works. Thus was born the world of live/electro-acoustic music that went beyond real-time performance composition. In the earliest examples of live/electro-acoustic music, composers usually sought to combine the results of studio compositions with live performers of traditional 18th and 19th century instruments. This created problems of coordination between the live and prerecorded elements, a problem which composers tried to solve in one of four different ways:

(1) The electro-acoustic material is prerecorded but controlled in real-time playback by a performer (John Cage's Imaginary Landscape No. 1 (1939); Morton Feldman's Marginal Intersections (1951));

(2) The electro-acoustic material is prerecorded and alternated with the live performers (Edgard Varèse's Déserts (1949-1954); Otto Luening and Vladimir Ussachevsky's Concerted Piece for Tape Recorder and Orchestra (1960));

(3) The prerecorded material does not precisely coordinate with the live elements (Bruno Maderna's Musica su due dimensioni (1952; rev. 1958); Mario Davidovsky's Synchronisms (1963-1970));

(4) The live performer(s) must follow and coordinate with the prerecorded music (Henk Badings's Capriccio for violin and two sound tracks (1952); Karlheinz Stockhausen's Kontakte (1960)).

The examples given illustrate these four solutions in early live/electronic compositions. In the Cage and Feldman works, the prerecorded electro-acoustic material is actually played by performers as if it were another instrumental source. Coordination is thus achieved by traditional means. In Déserts and parts of Concerted Piece the live and prerecorded elements merely alternate, so that no coordination is necessary. The Maderna and Davidovsky works call for only approximate synchronization, whereas the Badings and Stockhausen pieces require rather precise coordination with the taped music.

 

Most of the live/electro-acoustic art music of the 1950s, 1960s, 1970s, and 1980s involved exact or relative relationships between prerecorded studio-composed music and live performers, with the prerecorded material presented on tape. While there was always some real-time performance of studio-related electronics by such groups as ONCE, The Sonic Arts Union, and Musica Elettronica Viva, most real-time live/electro-acoustic music performance became the province of popular musicians, particularly in such genres as hard rock and heavy metal.

 

When I was asked to write this article on live/electro-acoustic music in California, it was made clear to me that I should not write about the use of old technologies such as combining live performers with prerecorded tape, but rather that I should discuss the application and interactions of more recent developments in music technology. The more I thought about new developments, however, the more my mind turned to the past and to how current practices reflect what has already happened. And so I felt that I could not simply describe the present without relating it to the past. For although it seems natural for composers to get caught up in their own hype, drastic innovations seldom occur. What follows, then, is a look at what a select group of composers in California has been doing over the past year or two, with some commentary relating these works to their heritage.

 

 

II.  Now

California is a large state and there is an enormous amount of musical activity of all kinds here. In many ways, California has become the center of activity for electro-acoustic music in the United States since there are many composers, schools, studios, research facilities, and manufacturers here. To try to discuss all of this activity, even just with regard to live/electro-acoustic music, would require a book instead of an article. Even if this were done, it would only demonstrate that large numbers of people are involved in the same relative areas of activity. And so I have decided to limit myself to the work of a few notable representative composers and groups who concentrate on new technology in live/electro-acoustic music. I would like to state, however, that studio composition and the use of older live/electro-acoustic music technologies are still alive and well here.

 

Since all of the works discussed will involve live performance in some manner, a distinction in the types of music needs to be drawn on historical and technological lines. I have adopted the following scheme:

A.  Electronics only

            1. Without prerecorded material

            2. With prerecorded material 

B.  Acoustic instruments and electronics

            1. Without prerecorded material

            2. With prerecorded material 

C. MIDI controllers

D. Combinations.

Many live/electro-acoustic music works use digital instruments in a traditional fashion. While there are great technical and aural differences among ensembles of Ondes Martenots, Hammond Organs, Moog Synthesizers, or Yamaha DX7s, the fact remains that these are essentially performance groups and they provide the expected action/response performance relationships. Solo and ensemble performances on electro-acoustic music instruments have become the norm for most popular music with the exception of jazz, but even there the trend is growing. In contemporary art music, few composers have written for live/electro-acoustic music ensembles. A notable exception is Roger Bourland, a Los Angeles composer who teaches at UCLA, where he directs the UCLA Synthesizer Ensemble. This group consists of 3 DX7s, a Prophet VS, a Prophet 2000, an Emulator II, and an Ensoniq ESQ1. Bourland has composed three works for the group: Broken Arrows (1986), Lindaraja (1987), and Dances from the Sacred Harp (1987). While Bourland is quite deliberate in composing music for this ensemble around specific timbres, the UCLA Synthesizer Ensemble is clearly an extension of traditional performance practice.

A more common attitude towards real-time electro-acoustic music performance in contemporary art music is that of solo and group improvisation with electronics. The history of non-traditional real-time performance with electro-acoustic means goes back to the 1950s, when composers began to do performances with equipment found in the classical studio. Performing these works meant manually controlling devices by means of knobs, switches, slide controls, ribbon controls, joysticks, foot pedals, and less physical interfaces such as photocells. Since the results of changing any of these controls are extremely variable, the traditional performance action/response mechanism cannot be learned by the audience in advance, if at all, and therefore usually does not exist. Although some physical action can be perceived in interacting with the controls used, they are objectively neutral. While striking a mallet on a xylophone always produces a similar result, the twirling of a knob changes a resistance, which could become a change in volume, pitch, harmonic content, location, vibrato, or any number of things. As the kinds and natures of the instrumentalities differ from traditional instruments, so do the performances of improvisational live/electro-acoustic music.

 

A continuing and increasingly popular trend in real-time improvisation is the concept of interactive music. Interactive music minimally involves the use of a live performer with a computer controlling sound generating or processing hardware. By means of special software, the composer/performer can interact with the computer in real-time, making direct changes and allowing for degrees of controlled randomness. Twilight (1988) is a work of mine written with Intelligent Music's Jam Factory software running on a Macintosh computer controlling a Yamaha TX816 by means of a Yamaha KX88 control keyboard. Different sections of the work store predetermined information, which includes transition tables for controlling the predictability of the order of stored pitch events. I can then "play" the computer at the same time I play the KX88 and simultaneously control the degree of randomness and variation of the prerecorded data. The timbres for Twilight are also interactive with this data and react differently, almost cybernetically, to its variations.
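
For readers unfamiliar with this kind of mechanism, the sketch below suggests, in Python, how a transition table can control the predictability of pitch ordering. It is a minimal illustration under my own assumptions (the pitches, weights, and function names are invented), not Jam Factory's actual implementation.

```python
import random

# A transition table maps each pitch to the pitches that may follow it,
# with weights that determine how predictable the ordering is.  These
# values are invented for illustration only.
TRANSITIONS = {
    60: {62: 0.7, 64: 0.2, 67: 0.1},   # C4 usually moves to D4
    62: {64: 0.6, 60: 0.4},
    64: {67: 0.5, 62: 0.5},
    67: {60: 1.0},                     # G4 always returns to C4
}

def next_pitch(current, randomness=0.0):
    """Choose the next pitch.  randomness=0 follows the stored weights;
    randomness=1 ignores them entirely, choosing uniformly."""
    candidates = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][p] for p in candidates]
    uniform = [1.0 / len(candidates)] * len(candidates)
    blended = [(1 - randomness) * w + randomness * u
               for w, u in zip(weights, uniform)]
    return random.choices(candidates, weights=blended)[0]

# Generate a short phrase whose predictability a performer could vary
# in real time by changing the randomness value.
pitch = 60
for _ in range(16):
    pitch = next_pitch(pitch, randomness=0.25)
    print(pitch, end=" ")
```

Raising the randomness value flattens the stored weights toward a uniform choice, which is one simple way a performer might move a passage between predictability and surprise.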

A more elaborate example of real-time interactive live/electro-acoustic music is found in San Francisco composer David Rosenboom's Systems of Judgement (1987). Although Systems of Judgement uses slightly different equipment depending on the circumstances of the performance, Rosenboom usually employs a Macintosh computer, Touch6 Computer Music Instrument, Gentle Electric Pitch Follower, Lexicon PCM-70 Digital Effects Processor, Emulator II Digital Sampling Instrument, and several software packages, including HMSL (Hierarchical Music Specification Language) and FOIL-85 (the Touch6 instrument language).[v]  The idea behind Systems of Judgement is to create and follow models of development. "There is a conceptual paradigm which guided the creation of the musical form. It attempts to elucidate parallel views of evolution by examining and speculating about processes which we, any organism, or any system, must use to learn to make differentiations, be self-reflexive, and arrive at judgements from which language may be formulated."[vi]  In order to accomplish this, a great deal of interaction must be possible between the composer/performer and his created universe. This is largely accomplished by the HMSL program, which allows algorithmic composition and musical artificial intelligence. Systems of Judgement consists of a group of themes and harmonic structures which are varied by the application of the interactive software.

Historically, the notion of interaction with electronics in music can be traced back to the work of Louis Barron, who began to apply Norbert Wiener's ideas of cybernetics to audio circuit design in the early 1950s. Individual composers such as San Diegan Mark Trayle and Gordon Mumma in Santa Cruz, and several early group performances, such as those done by members of the San Francisco Tape Music Center, set the scene for interaction in live/electro-acoustic music, a movement that eventually led to the League of Automatic Music Composers. Following in this tradition is the HUB, a consortium of composers based in the San Francisco Bay Area: John Bischoff, Chris Brown, Scot Gresham-Lancaster, Tim Perkis, Phil Stone and Mark Trayle. The aesthetics of the HUB are based on a relativistic attitude towards time and a cooperative view of music. "I see the aesthetic informing this work [as] perhaps counter to other trends in computer music; we seek more surprise through the lively and unpredictable response of these systems."[vii]

The members of HUB build their own instruments (acoustic and electro-acoustic) and design their own software. The term "HUB" actually refers to an interactive program, written largely in assembly language, that runs in ROM on a centralized computer. The purpose of this HUB-box is to send and receive messages to and from the composer/performer workstations. The various stations are idiosyncratically designed by the individual artists. Here are some of the main components of specific stations: (1) Commodore 64, Serge Modular System, Yamaha TX81Z; (2) Buchla Music Easel, TX81Z; (3) Macintosh computer, 2 TX81Zs. In addition to commercial devices, the group has also created new instruments such as the Gazamba (a "prepared" electronic percussion piano built from an old Wurlitzer), the Axe-Thing (a guitar-type body with touch switches on the neck and incorporating a track-ball), and the somewhat self-explanatory Mouseguitar.
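
The topology of this arrangement can be pictured with a short sketch. The Python fragment below is only a schematic of a central mailbox relaying data among stations; the station names and message format are invented for the example, and the actual HUB-box protocol was certainly different.

```python
from collections import defaultdict

class HubBox:
    """Schematic central mailbox: each station posts messages to the hub,
    and any station may collect what the others have posted.  This shows
    the hub-and-spokes topology only, not the HUB's actual protocol."""
    def __init__(self):
        self.mailboxes = defaultdict(list)   # station name -> pending messages

    def send(self, sender, recipient, data):
        self.mailboxes[recipient].append((sender, data))

    def broadcast(self, sender, stations, data):
        for s in stations:
            if s != sender:
                self.send(sender, s, data)

    def receive(self, station):
        pending, self.mailboxes[station] = self.mailboxes[station], []
        return pending

# Three hypothetical stations exchange pitch data through the hub.
hub = HubBox()
stations = ["station_a", "station_b", "station_c"]
hub.broadcast("station_a", stations, {"pitch": 64, "velocity": 90})
print(hub.receive("station_b"))   # [('station_a', {'pitch': 64, 'velocity': 90})]
```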

Most live/electro-acoustic music does not make use of prerecorded material, except, of course, for the data stored in the programs used. Carl Stone, however, a Los Angeles composer, makes extensive use of musical material prerecorded in the form of long digital samples. Using a Macintosh computer, two Prophet 2002s, and an SPX90, Stone performs sampled acoustic materials in unusual ways. In Hop Ken (1987), for example, Stone uses materials from Mussorgsky's Pictures at an Exhibition to create a very new work from a recognizable source. Stone's music is a kind of computer concrete music, clearly related to its studio ancestors but assembled in real time.

Much of the live/electro-acoustic music composed for real-time electronics is improvisational to some degree, and this music is also increasingly more interactive. The hope of many composer/performers is that by creating this kind of experience, the excitement of traditional performance in all of its various aspects can be preserved and enlarged. This is probably true to a smaller degree than most proponents of the genre believe, if for no other reason than the lack of the traditional action/response perception in almost everything that is not generated by means of traditional instrumentalities such as keyboard and guitar controllers. There is no question, of course, that the performers themselves can create interest in many theatrical ways, and this may often create an entertaining performance experience that goes beyond the usual concert event.

The combination of acoustic instruments with electronics continues to be a more popular way of creating live/electro-acoustic music, at least in the art music world. I suspect that this is because the use of a performer on a traditional instrument preserves the action/response mechanism along with the accepted instrumentalities of older art music. The majority of works composed in this genre still use magnetic tape to store the prerecorded electro-acoustic music. This will probably not change very rapidly, because this method has become a standard configuration over the past forty years and is very convenient to transport and set up. Nevertheless, many composers in California have been experimenting with ways of combining live electronics with live performers.

Carlos Rodriguez, a Los Angeles composer, has written a series of works for live performers in which the sounds of the acoustic instrument(s) are processed in real time. Rodriguez's first work in this medium was Crater Lizards (1986) for cello and Roland SDE-1000, in which the Roland delay unit provides various processings and loops of the cello sound as accompaniment to the live instrument. In 1987, Rodriguez composed two works of this nature: Four Preludes after Max Ernst for flute, guitar, and two Yamaha SPX90s, and Five Movements for Solo Amplified Flute for flute and SPX90, the latter work being presented at the 1988 version of the annual SCREAM Festival.[viii] Rodriguez has chosen this type of composition for several reasons. It is technically not too demanding, requiring little more equipment than would a work for instrument(s) and tape. Also, it allows for the real-time augmentation of the timbral capabilities of acoustic instruments without some of the constraints of working with a prerecorded tape. Instead of having the live performer(s) follow the tape, the soloist(s) can play as expressively as they wish, changing the processing program when they are ready by means of a foot switch. Depending on the particular processing unit, there can be a great variety of preprogrammed processings. In Five Movements, for example, Rodriguez uses 26 different SPX90 programs, but this is still less than half the number of programs that could be created.

There are still more possibilities in combining acoustic instruments with live electronics that not only process but also generate music. In my own Dance Suite for Harp and Computer (1987), a live harpist is combined with the real-time use of a Yamaha TX816, Emulator II, and 2 SPX90s controlled by a Macintosh computer, using a Soundcraft 600 mixer, which allows for pre- and post-fader processing. The harp is miked with a C-Ducer attached to the soundboard so that the sound can be processed along with the electronic and concrete material and can also be amplified considerably beyond normal expectations without feedback problems. A click track is sent from the computer to the harpist via earphone in order to coordinate the difficult subdivisions of the meter found in Tango, the first movement.

The relationship of the harp to the live electronics in Dance Suite is essentially concerto style in Tango and Waltz, the first and fourth sections of the work. In Jig, the second movement, the live harp plays with a sampled harp on the E2 (the Emulator II), producing a strange duet. The computer-controlled E2 plays with greater speed and precision than would be humanly possible, while the live harpist is able to simulate this attitude due to the nature of the notated music, which travels at the rate of a dotted quarter-note = 200 in a fast 6/8 meter. In the third movement, Sarabande, the harpist uses Energy Bows™ to play the metal strings, keeping them in constant vibration without touching them. This sound is combined with an analogous computer-altered sampled harp sound on the E2, and both live and sampled materials are similarly processed by the SPX90s.

While Dance Suite has received several performances in the past year, I have only been able to produce one using the original setup. Due to the expense and difficulty of acquiring and setting up the equipment, only the premiere of the work at the CalArts Contemporary Music Festival was done in real-time. For the other performances, I have used a tape containing the electro-acoustic material. When possible, I've used one or two SPX90s, although even this is not always obtainable. What does remain mandatory is the use of the C-Ducer and a sufficiently capable mixing board. Dance Suite, then, of necessity, exists in different sonic and technical versions. I expected this from the beginning, and composed the work accordingly. The realities of performances of live/electro-acoustic music works dictate, at least to me, that practicality must allow for considerations of this nature.

Perhaps the most important development in the world of live/electro-acoustic music in recent years is the introduction of MIDI controllers. MIDI controllers were initially offered in the physical and functional shape of keyboards. This continued the very old tradition of keyboard interfacing with electronic sound production that began at the start of this century. Soon there were guitar controllers, and now there are MIDI controllers in the shape of many other old instruments, or at least the capability of "MIDI-izing" traditional instruments. MIDI controllers allow performers to interface with electronic hardware in ways that are essentially those of traditional performance practice. Performing on a controller does not require much, if any, understanding of what is actually happening. Essentially, the interface and what it's controlling can be thought of as an instrument in the usual sense, even if they do not exist in one unified package.

Properly, the history of controllers needs to begin with the earliest use of traditional instrumentalities in interfacing with electro-acoustic sound generation and processing, as discussed in the beginning of this article. Most of what happened was centered on the keyboard. More recently, however, designers of instrumental interfaces have concerned themselves with other instrumental designs. An example of this is the Akai Electronic Valve Instrument (EVI), which was developed by Los Angeles performer Nyle Steiner. Steiner, who has for many years played trumpet in a wide variety of musical settings, began developing the EVI in 1964 as a way to connect his performance skills with analog electronic music equipment. The EVI went through several incarnations before acquiring its present MIDI mode. To Steiner, who probably remains the most accomplished player of the EVI, the EVI is a performance medium in its own right. Steiner most often uses the EVI with an Oberheim Xpander and a personally designed analog system through a modified Cooper MIDI-to-control-voltage box. Since breath data is updated every few milliseconds, he can easily use the EVI to control voltage-controlled oscillators, amplifiers, and filters. Thus he can produce live performance effects with any sound source, making even digital samples of acoustic instruments sound more "real" by virtue of dynamic control. That he is quite successful in this regard is attested to by his demand in the world of film music. Much of the music for films such as Witness, Fatal Attraction, and Apocalypse Now was created by Steiner performing both solo and background tracks in a multitrack studio. Most audiences are probably unaware that the production of the music for these and many other films is electro-acoustic.

Even in live performance situations, most people in audiences are not clear what is happening when a performer plays a controller. Most listeners, I believe, actually suppose that what they are hearing is an instrument in the traditional sense. They watch the performer, they experience the action/response phenomena, and they imagine that what they see is what they hear.

 

A few years ago, there was an interesting incident that took place at a Monday Evening Concert at the Bing Auditorium in Los Angeles. The California EAR Unit, a group of performers playing new works on mostly acoustic instruments, presented a concert in which all of the works were equally amplified; that is, all of the instruments used were miked, mixed, and amplified throughout the entire concert. About two-thirds of the way through the concert, a new work by Frank Zappa called While You Were Art was played. The audience sat quietly through the performance of this work while the EAR Unit dutifully went through the performance ritual on stage. After the conclusion of the piece, there was polite applause, and the concert continued on its way. It was only after the concert was over that the performers privately informed people that they had faked the performance, actually miming a tape of a performance that Zappa had made in his studio using digital sampling and other equipment. The reasons for this pantomimed performance have never been entirely clear to me, but Zappa was clearly delighted with the results. A few days later, on the NBC Tonight Show, he gleefully related the event to Johnny Carson, stating that he had clearly deceived the Los Angeles art music establishment. He even named composers in the audience, such as Morton Subotnick, whom he had fooled. Zappa finally said that this proved to him that the art music world was full of fakes and pompous pretenders.

There is no question that people in the Monday Evening audience were quite angry about what Zappa had done, particularly composers, but no one that I talked to after the fact would admit to having been fooled. Also, I never got a very good explanation from anyone as to why they were angry, but a common complaint was that the air of deception in the performance was bad because it was dishonest.

Personally, I could not get too excited about the Zappa incident, although I thought it mildly amusing. After all, popular music works, conceived and recorded as studio compositions, have been faked on television and in concert for as long as I can remember. I do not see much difference here. But if all of the fuss about deceiving the audience has some relevance in this matter, does it also become a consideration in live/electro-acoustic music works using controllers?

There are two main uses of controllers in live/electro-acoustic music. In one case, the controller serves as an interface with designated sound-generating hardware. The main elements of timbral definition of the sounds being heard are determined not by the performer, but by whoever designed the sounds. In this case, the action/response phenomenon as perceived by the audience is only partially valid, since note-on and note-off data will usually coincide with the performer's actions. But when a performer depresses a key on a Prophet 2002, you are as likely to hear a soprano voice as you are the sound of a thunderstorm; there is little or no direct connection between the timbral definition of the sound heard, the physical action of the performer, and the simulated nature of the controller. Nevertheless, the performer is controlling the entries of events and usually the pitch information as well. Is it correct, however, as some composers and performers contend, that one is actually playing, in the traditional sense, a cello, or anything else for that matter, from a keyboard controller? Of course there is velocity, aftertouch, breath control, and several other types of MIDI performance data possible, but, at best, these only allow better simulations of the action/response mechanism, never a perfect imitation. What one is always playing is the previously created electronic or sampled sound.

The other use of controllers in live/electro-acoustic music is considerably less direct. In these instances, the controllers are interfaces to computers and are used only to create event initiation and, perhaps, cessation (note-on, note-off) data. The computer serves as an intermediary between an outside stimulus and prerecorded data. When the performer "plays" a controller, the computer produces the desired response in controlling the sound generating and processing hardware. This kind of controller application is used both for generating step-by-step progressions in which each sonic event is initiated separately, and for initiating entire phrases or even larger sections with a single performance action. In this type of controller use, the performer is essentially delineating only a portion of the temporal displacement of musical information. The particular type of controller may be irrelevant, even to the performer, since tapping a key on a computer keyboard could generate the same results.

While many composers have engaged in these uses of controllers in their live/electro-acoustic music works, few have given much thought to what it is that they are doing vis-à-vis the performance situation. In most of the works that I have experienced, the main points of what is being done seem to be 1) to pretend that traditional instrumental performance practice is being extended in truly new ways, and 2) to allow the audience to believe that they are witnessing a performance that follows the action/response phenomena that they are used to. The only other reasons for these particular uses of controllers seem to me to be theatrical, an aspect that may prove to be the most important.

In 1987 I composed a piece entitled Extreme Variations on a Theme and Variations by Mel Powell. The work was written for six Yamaha Clavinovas (digitally sampled piano keyboard instruments) "played" by a Macintosh computer. In a concert presentation of this work, the audience sat surrounded by the six Clavinovas, which were played without the use of any human performers, but entirely by the computer, the sound of each instrument coming from its own internal speakers. Much of what the Clavinovas produced is not humanly possible to play, even though the basis for the piece was an actual work composed for the traditional piano. Some of the audience were confused, some bemused, and several seemed to take the whole thing as a matter of course. The intended irony of the situation may have escaped many who were there.

Little, however, about the way many composers are using controllers is intended as ironic. Even though there are often incredibly elaborate attempts to coordinate elements of compositions and performances using controller interfaces, the ghost of Zappa's prank haunts my mind.

 

One of the most elaborate works to make extensive use of controllers with live electronics is Hungers (1987) by Ed Emshwiller and Morton Subotnick.[ix]  Hungers is a multimedia work of large proportions. There are five performers on stage at various times: a soprano, a percussionist, a pianist, a cellist, and a dancer.[x]  MIDI controllers are associated with the soprano, who uses two air drums (each producing six possible streams of directional data); the percussionist, who plays a KAT (a MIDI mallet controller); and the pianist, who performs on a KX88. The cello is acoustic, but is processed. In addition to these, the work requires two music technicians to control the Macintosh computer, Yamaha QX sequencer[xi], 2 TX816s, 2 SPX90s, a Prophet 2002 Digital Sampler, and a Yamaha DMP7 MIDI mixer. Also needed are three camera operators to video the live action and two video technicians to control the three VCRs and the video switcher feeding more than a dozen monitors and video projectors. The equipment and personnel required to present Hungers live are so costly that the work has had only a few performances in its original 50-minute theatrical version. The work exists in a more accessible way as a 20-minute video piece created by video artist Ed Emshwiller from tapings of the extended version.

The heart of Hungers is a computer score-following program written by Mark Coniglio. This program takes MIDI data (essentially MIDI note data) from the four controllers and advances the computer score stored on the QX sequencer. Thus, to a large extent, the live performers are not creating but rather triggering sound events, actually sequences of events. The first incarnation of this program was called MIDI Baton, and was somewhat problematic in its use. The reliability factor was about 70%, which meant that the computer and sequencer had to have babysitters to initiate the proper events the other 30% of the time. Composers Greg Fish and Tod Winkler served this function for the first performance of Hungers, reading a notated score to make certain that the music actually did follow the live performers' actions properly, and manually triggering sequences when the interface and/or program failed. The latest incarnation of Coniglio's program is called Interactor. Written in PASCAL, this program is extremely reliable and almost never fails to respond properly.
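
The underlying logic of such score following can be suggested in a few lines. The Python sketch below shows one plausible scheme under my own assumptions: incoming MIDI notes are matched against a list of expected cues, and each match triggers the next stored sequence, with a small look-ahead window for recovery. The cue values and names are invented; this is not Coniglio's MIDI Baton or Interactor code.

```python
class ScoreFollower:
    """Minimal cue-matching score follower.  Each cue is the MIDI note
    number the live performer is expected to play next; when it arrives,
    the corresponding stored sequence is triggered.  A small look-ahead
    window lets the follower recover if a cue is missed, loosely echoing
    the manual 'babysitting' the early program required."""
    def __init__(self, cues, sequences):
        self.cues = cues            # expected note numbers, in score order
        self.sequences = sequences  # sequence names to trigger, one per cue
        self.position = 0

    def note_on(self, note):
        # Look a short distance ahead so a missed cue does not stall the piece.
        for offset in range(3):
            i = self.position + offset
            if i < len(self.cues) and self.cues[i] == note:
                self.position = i + 1
                return self.sequences[i]   # in practice: start this sequence
        return None                        # unexpected note: ignore it

follower = ScoreFollower(cues=[60, 67, 72], sequences=["seq1", "seq2", "seq3"])
print(follower.note_on(60))   # seq1
print(follower.note_on(72))   # seq3 (recovered after the missed 67)
```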

Since Hungers is largely a visual work, much of the emphasis is on the video portions of the piece. An interface between audio and video was created using a Yamaha FB01 to create a 1 kHz control signal. Reading from a special track on the QX, the FB01 sends the control signal to a reader/converter unit built by Dale McBeath. The output of this unit goes to an Amiga computer, which, in turn, controls a video switcher. Thus is born a clock-controlled video switcher.

 

According to Subotnick, Hungers deals with different levels of musical and gestural activity. Ideas are separated as to their origin from either studio or performance composition practices, what Subotnick calls "tape music gestures" as opposed to "stage performance" ideas. Performance ideas are further delineated on the basis of the natures of the controllers used. Subotnick tried to devise sounds that would go best with the specific controllers. This was equally but differently the case for the air drums, which were intended as an extension of "theatrical movement". The KX88 and the KAT trigger events on the TX816s, and the air drums control the initiation of events from the sampler. It is not entirely clear to me which way the relational information in this idea is supposed to flow. Were the controllers influencing the nature of the prerecorded electronic and sampled sounds, or did this material dictate the specific controllers used? This is a chicken-and-egg situation to which we may never know the answer.

When I asked Subotnick what was the connection between the music of Hungers and the technology used to create it, he said that there was no conscious relation or influence as far as he was concerned. But he did add that the score following aspects of the controller-computer interface allowed additional freedom for the performers since they did not have to worry about many details. This is true because the performers using controllers are not triggering individual events but rather sequences of events and control changes. So their function is not to literally create or modify sound, nor is it to literally control the precise temporal displacement of sound events; rather they initiate pre-composed and prerecorded phrases and sequences.

More interactive and in some ways more philosophically complex than Hungers is Los Angeles composer Greg Fish's A Little Light Music. The work was composed for percussionist Amy Knoles and is written for a KAT controller, Macintosh computer, 2 Yamaha TX81Zs, a TX7, and a variety of percussion instruments: 3 triangles, tin-tin-sags (miniature Tibetan cymbals), aluminum and brass pipes, 2 large cymbals, a log drum, and a Crasher (spring steel strips overlaid on a central spindle). Fish deals with two kinds of interaction in Light Music. The first is timbral: acoustic sounds, once performed by Knoles, are heard in electronic simulations via the TX81Zs and TX7. The second kind of interaction is temporal, somewhat similar to Subotnick's temporal interactions in Hungers, but created on multiple levels.

The MIDI output of the KAT is used in the traditional fashion to trigger both sound events and sequences through the computer controlling the Yamaha hardware. Beyond this are two types of sensors that are used to interact with the motion of the performer. Using ideas developed in his earlier work with dancers, Fish provides light-sensing devices to create data. The first device involves a 5-milliwatt red helium-neon laser split five ways by mirror diffraction. These beams strike photosensitive transistors, completing a circuit. The circuit is broken by the stick in the percussionist's hand moving through the beam, and this is used to create a MIDI status byte (note-on and note-off information). The second group of devices is actually two sets of photoresistor circuits which sense ambient light and provide MIDI data bytes (note numbers) as well as velocity information. The velocity data is then converted by the Interactor program to MIDI control 7 information, which controls volume. The elaborate nature of these interfaces and their interaction with the performer requires choreography of the performance in order to achieve the desired control signals.
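
As a rough illustration of the kind of conversion involved, the sketch below shows how sensor readings might be turned into the MIDI bytes just described: a broken laser beam becomes a note-on status byte, and an ambient-light reading becomes a value rescaled as controller 7 (volume) data. The thresholds, scaling, and function names are my own inventions; Fish's hardware and the Interactor mapping were certainly more refined.

```python
def beam_event(beam_index, beam_unbroken, base_note=60):
    """A stick crossing laser beam N produces a note-on message; restoring
    the beam produces a note-off.  Returns a 3-byte MIDI message."""
    status = 0x80 if beam_unbroken else 0x90   # note-off / note-on, channel 1
    return bytes([status, base_note + beam_index, 100])

def light_to_volume(ambient_level, max_level=1023):
    """Scale a raw photoresistor reading (0..max_level) to MIDI controller 7
    (channel volume): status 0xB0, controller number 7, value 0..127."""
    value = min(127, ambient_level * 128 // (max_level + 1))
    return bytes([0xB0, 7, value])

print(beam_event(2, beam_unbroken=False).hex())  # 903e64: note-on, D4
print(light_to_volume(512).hex())                # b00740: half volume
```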

 

While both Hungers and A Little Light Music are technically fascinating and musically interesting works, they raise several questions about the role of their associated performers. Suppose there is a continuum of possible performance roles. At one extreme is the traditional situation of a performer playing an acoustic instrument and creating an easily and clearly understood series of action/response mechanisms; at the other end of this continuum is the Zappa Mime, where the performer simply pretends to be performing the music by simulating actions that seem appropriate to the supposed aural responses. Where along this continuum does Subotnick's, Fish's, and many other composers' use of controllers lie? Is a new sort of performance practice based on the use of MIDI controllers now developing, or are composers and performers merely aping the past? I am raising these questions not merely because they fascinate me, but because I think they lead to other, more difficult questions about the nature of live/electro-acoustic music performance and about music performance in general. But the answers to these questions, some of which were raised in the beginning of this discussion, must remain for another article at another time.

I have used isolated examples of recent live/electro-acoustic music works from California to illustrate the nature of the medium at the present time. While these works are but a small fraction of the current activity in California, they show the kind of live/electro-acoustic music compositions that are being done here and, by extension, around the world. If there is something special about the live/electro-acoustic music being done here, it would be the result of a continuing willingness to employ new ideas and question their results. If nothing else, California is a multifaceted culture, a polyglot state of mind and being in which tradition is elusive and experimentation seems natural.

 

 


[i] This is, of course, not true in the same sense for blind as for sighted people.

 

[ii] Even with non-traditional electro-acoustic music instruments, such as the Theremin and Croix Sonore, there was an easily assimilable action/response association.

 

[iii] There are, of course, several instances of real-time works having been composed in the classical studio. Due to the difficulty of repeating these "performances", these works were almost always presented in a recorded format.

 

[iv] This becomes somewhat less true as electro-acoustic timbres approach traditional acoustical instrumental sounds, and as the sounding music more closely relates to the stylistic experience of the listener.

 

[v] HMSL is a music composition and performance programming language written in Forth for the Macintosh and Amiga computers. It was developed by Phil Burk, Larry Polansky, and David Rosenboom at the Center for Contemporary Music at Mills College.

 

[vi] David Rosenboom in program notes to Systems of Judgement.

 

[vii] Tim Perkis, in program notes to HubMusic, a cassette available from HUB, 1048 Neilson Street, Albany, CA 94706.

 

[viii] SCREAM (Southern California Resource for Electro-Acoustic Music) is an organization that I started in 1986. SCREAM is a consortium of Los Angeles area colleges including California Institute of the Arts, California State University at Northridge, Harbor College, University of California at Los Angeles, and University of Southern California. SCREAM produces a yearly festival of electro-acoustic music, featuring at least one concert of live/electro-acoustic music.

 

[ix] Since Subotnick has resided in New Mexico for several years, he is no longer considered by most reporting sources (Los Angeles Times, L.A. Weekly, et al.) to be a Los Angeles or California composer. However, since Subotnick continues to teach on a regular basis at CalArts in Los Angeles, he may at least partially be considered a local composer.

 

[x] Early versions of Hungers called for some additional forces, which were subsequently deleted.

 

[xi] Subotnick has used, at different times, a QX1, QX3, and QX5.