Sound editor (filmmaking)

This article is about sound editors in the film or television industry. For the general term, see Audio engineering.

A sound editor is a creative professional responsible for selecting and assembling sound recordings in preparation for the final sound mixing or mastering of a television program, motion picture, video game, or any production involving recorded or synthetic sound. Sound editing developed out of the need to fix the incomplete, undramatic, or technically inferior sound recordings of early talkies, and over the decades has become a respected filmmaking craft, with sound editors implementing the aesthetic goals of motion picture sound design.

The Academy of Motion Picture Arts and Sciences recognizes the artistic contribution of exceptional sound editing with the Academy Award for Best Sound Editing.

There are primarily three divisions of sound that are combined to create a final mix: dialogue, effects, and music. In larger markets such as New York and Los Angeles, sound editors often specialize in only one of these areas, so a show will have separate dialogue, effects, and music editors. In smaller markets, sound editors are expected to handle all of them, often crossing over into mixing as well. Effects editing has been likened to creating the sonic world from scratch, while dialogue editing has been likened to taking the existing sonic world and fixing it. Dialogue editing is more accurately thought of as "production sound editing": the editor takes the original sound recorded on the set and, using a variety of techniques, makes the dialogue more intelligible and smoother, so that the listener does not hear the transitions from shot to shot (the background sound underneath the words often changes dramatically from take to take). Among the challenges effects editors face are creatively combining various elements into believable sounds for everything seen on screen, as well as maintaining a detailed knowledge of their sound effects library.
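
One common way to hide such a shot-to-shot transition is a short crossfade between takes, often combined with a fill of matching room tone. The following is a minimal, illustrative sketch of an equal-power crossfade using NumPy; the take lengths, sample rate, and fade length are assumptions chosen for the example, not values from any particular workflow.

    import numpy as np

    def equal_power_crossfade(outgoing: np.ndarray, incoming: np.ndarray,
                              fade_samples: int) -> np.ndarray:
        """Join two mono takes with an equal-power crossfade so the change in
        background tone at the cut is harder to hear."""
        t = np.linspace(0.0, np.pi / 2, fade_samples)
        fade_out = np.cos(t)   # outgoing take ramps down
        fade_in = np.sin(t)    # incoming take ramps up
        overlap = outgoing[-fade_samples:] * fade_out + incoming[:fade_samples] * fade_in
        return np.concatenate([outgoing[:-fade_samples], overlap, incoming[fade_samples:]])

    # Example: two one-second takes at 48 kHz, joined with a 10 ms crossfade.
    # The random arrays are stand-ins for recorded dialogue takes.
    sr = 48_000
    take_a = np.random.randn(sr) * 0.01
    take_b = np.random.randn(sr) * 0.01
    joined = equal_power_crossfade(take_a, take_b, fade_samples=sr // 100)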

Equipment

The essential piece of equipment used in modern sound editing is the digital audio workstation, or DAW. A DAW allows sounds, stored as computer files on a host computer, to be placed in timed synchronization with a motion picture, mixed, manipulated, and documented. The standard DAW system in use by the American film industry, as of 2012, is Avid's Pro Tools, with the majority of systems running on Macs. Another system in current use is Nuendo, a cross-platform DAW from Steinberg (a Yamaha subsidiary) that runs on Mac OS X as well as Windows. Other systems historically used for sound editing include the WaveFrame, the Fairlight, and the AudioFile.

The WaveFrame, Fairlight, and AudioFile were of the "integrated" variety of DAW and required the purchase of expensive proprietary hardware and specialized computers (not standard PCs or Macs). Of the two surviving systems, Pro Tools still requires some proprietary hardware (either a low-cost portable device such as the "Mbox" or, for high-end professional applications, more expensive multichannel A/D and D/A converters), while Nuendo (a professional counterpart to Steinberg's Cubase) is of the "host-based" variety.
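
At its core, the session a DAW stores is a timeline of audio clips locked to picture time. The sketch below illustrates that idea with a toy session model in Python; the class names, fields, and frame rate are illustrative assumptions and do not reflect any particular DAW's file format.

    from dataclasses import dataclass, field

    FRAME_RATE = 24  # assumed picture frame rate

    @dataclass
    class Clip:
        source_file: str      # path to the audio file on the host computer
        start_frame: int      # position on the timeline, in picture frames
        duration_frames: int
        gain_db: float = 0.0  # per-clip level adjustment applied at the mix

    @dataclass
    class Track:
        name: str                      # e.g. "Dialogue A", "FX 3"
        clips: list[Clip] = field(default_factory=list)

        def place(self, clip: Clip) -> None:
            """Spot a clip onto this track at its timecode position."""
            self.clips.append(clip)

    # Example: spotting a door slam 10 seconds into the picture.
    fx = Track(name="FX 1")
    fx.place(Clip("library/door_slam_03.wav", start_frame=10 * FRAME_RATE,
                  duration_frames=2 * FRAME_RATE, gain_db=-6.0))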

Sound Effects Library

Sound effects editors typically use an organized catalog of sound recordings from which sound effects can be easily accessed and used in film soundtracks. There are several commercially distributed sound effects libraries available, the two most well-known publishers being Sound Ideas and The Hollywood Edge. There are also online search engines, such as Sounddogs, A Sound Effect and Sonniss, which allow users to purchase sound effects libraries from a large online database.
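
Conceptually, such a library is simply a set of recordings paired with searchable metadata. The following toy Python sketch illustrates a keyword-searchable catalog; the file paths, categories, and field names are made up for the example and do not represent any actual commercial library.

    from dataclasses import dataclass

    @dataclass
    class Effect:
        path: str            # location of the recording
        description: str     # free-text description used for searching
        category: str        # e.g. "impacts", "ambiences", "vehicles"

    class EffectsLibrary:
        def __init__(self) -> None:
            self._effects: list[Effect] = []

        def add(self, effect: Effect) -> None:
            self._effects.append(effect)

        def search(self, query: str) -> list[Effect]:
            """Return effects whose description or category contains the query."""
            q = query.lower()
            return [e for e in self._effects
                    if q in e.description.lower() or q in e.category.lower()]

    # Example usage with made-up entries.
    library = EffectsLibrary()
    library.add(Effect("fx/door_slam_03.wav", "heavy wooden door slam, close", "impacts"))
    library.add(Effect("fx/city_night_01.wav", "distant traffic, city night", "ambiences"))
    print([e.path for e in library.search("door")])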

Many sound effects editors make their own customized sound recordings which are accumulated into highly prized personal sound effects libraries. Often, sound effects used in films will be saved and reused in subsequent films. One particular case in point is a recording known as the "Wilhelm Scream" which has become known for its repeated use in many famous films such as The Charge at Feather River (1953), Pierre Marette Story (1957), The Empire Strikes Back (1980), Raiders of the Lost Ark (1981), and Reservoir Dogs (1992). Sound designer Ben Burtt is credited with naming and popularizing the "Wilhelm Scream".

History

Early Talkies

The first sound process to substantially displace silent films in the moviegoing market was the Vitaphone process. Under Vitaphone, a microphone recorded the sound performed on set directly to a phonograph master, which meant the recordings could not be cut or resynchronized as later processes would allow. This limited Vitaphone to capturing musical acts or one-take action scenes, such as vaudeville routines and other re-creations of stage performances; essentially, scenes that required no editing at all. Even so, Warner Bros., as early as The Jazz Singer, began experimenting with mixing multiple phonograph recordings and intercutting between the "master" sync take and coverage from other angles. The original mixing console used to make the master recording of The Jazz Singer, still viewable in the Warner Bros. Studio Museum, has no more than four or five knobs, but each is still visibly labeled with the basic "groups" that a modern sound designer would recognize: "music", "crowd", and so on.

Warner Bros. developed increasingly sophisticated technology to sequence greater numbers of phonograph sound effects to picture using the Vitaphone system, but these were rendered obsolete with the widespread adoption of sound-on-film processes in the early 1930s.

Mechanical Editing

In a sound-on-film process, a microphone captures sound and converts it into a signal that can be photographed onto film. Because the recording is laid down linearly on a medium that can easily be cut and spliced, recorded sounds can be re-sequenced and separated onto individual tracks, allowing more control in mixing. Options expanded further when optical sound recording was replaced by magnetic recording in the 1950s. Magnetic recording offered a better signal-to-noise ratio, allowing more tracks to be played simultaneously without adding objectionable noise to the full mix.

The greater number of options available to editors led to more complex and creative sound tracks, and it was in this period that a set of standard practices became established. These practices continued until the digital era, and many of the underlying concepts remain at the core of sound design, computerized or not.

Historically, the Dubbing Mixer (UK) or Re-Recording Mixer (US) was the specialist who mixed all the audio tracks supplied by the Dubbing Editor, together with "live" sounds such as Foley, in a dedicated dubbing suite. As well as balancing levels, the mixer would apply equalization, compression, and filtering while seated at a large console. Often two or three mixers sat alongside one another, each controlling a section of the audio, e.g., dialogue, music, or effects.

In the era of optical sound tracks, it was difficult to mix more than eight tracks at once without accumulating excessive noise. At the height of magnetic recording, 200 tracks or more could be mixed together, aided by Dolby noise reduction. In the digital era there is effectively no limit: a single predub can exceed a hundred tracks, and the final dub can be the sum of a thousand tracks.
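
The effect of track count on noise can be estimated with a simple rule of thumb: if every track carries uncorrelated noise of roughly equal level, the noise powers add, so the summed noise floor rises by about 10·log10(N) decibels for N tracks. The short Python sketch below works through that arithmetic; the equal-level, uncorrelated-noise assumption is a simplification for illustration only.

    import math

    def noise_floor_rise_db(num_tracks: int) -> float:
        """Rise in the summed noise floor, in dB, when mixing num_tracks tracks
        whose noise is uncorrelated and of equal level (noise powers add)."""
        return 10 * math.log10(num_tracks)

    for n in (8, 200, 1000):
        print(f"{n:4d} tracks -> noise floor up ~{noise_floor_rise_db(n):.1f} dB")
    # 8 tracks raise the floor by ~9 dB, 200 by ~23 dB, 1000 by ~30 dB,
    # which is why low-noise recording (or noise reduction) matters as
    # track counts grow.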

Digital Sound

The mechanical system of sound editing remained essentially unchanged until the early 1990s, when digital audio workstations acquired features sufficient for use in film production: chiefly, the ability to synchronize with picture and to play back many tracks at once with CD-quality fidelity. The quality of 16-bit audio at a 48 kHz sampling rate allowed hundreds of tracks to be mixed together with negligible noise.

The physical manifestation of the work became computerized: sound recordings, and the decisions the editors made in assembling them, were now digital files that could be versioned, undone, redone, and archived instantly and compactly. In the magnetic recording era, sound editors owned trucks to ship their tracks to a mixing stage, and transfers to magnetic film were measured in hundreds of thousands of feet. Once the materials arrived at the stage, a dozen recordists and mix technicians needed half an hour to load the three or four dozen tracks a predub might require. In the digital era, 250 hours of stereo sound, edited and ready to mix, can be transported on a single 160 GB hard drive. Moreover, those 250 hours of material can be copied in four hours or less, whereas the old system, copying in real time, would take 250 hours.
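
The 250-hour figure can be sanity-checked with simple arithmetic. Assuming uncompressed 16-bit, 44.1 kHz stereo PCM (CD quality), the total comes to roughly 159 GB, which fits on a 160 GB drive; at the 48 kHz rate mentioned above it would be closer to 173 GB, so the figure should be read as approximate. The Python sketch below works through the CD-quality case.

    # Back-of-the-envelope check of the storage figure above, assuming
    # uncompressed 16-bit, 44.1 kHz stereo PCM (CD quality).
    BYTES_PER_SAMPLE = 2          # 16-bit
    CHANNELS = 2                  # stereo
    SAMPLE_RATE = 44_100          # samples per second per channel

    bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # 176,400 B/s
    seconds = 250 * 3600                                           # 250 hours
    total_gb = bytes_per_second * seconds / 1e9

    print(f"{total_gb:.0f} GB")   # ~159 GB, which fits on a 160 GB drive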

Because of these innovations, sound editors, as of 2005, face the same issues as other computerized, "knowledge-based" professionals, including the loss of work due to outsourcing to cheaper labor markets, and the loss of royalties due to ineffective enforcement of intellectual property rights.

Animation Sound Editing

In the field of animation, sound editors have traditionally been given the more prestigious title of "film editor" in screen credits. Because animated films are more often than not planned to the frame, the traditional functions of a film editor are often unnecessary. Treg Brown is known to cartoon fans as the sound effects genius of Warner Bros. Animation. Other greats of the field have included Jimmy MacDonald of the Walt Disney Studios, Greg Watson and Don Douglas at Hanna-Barbera, and Joe Siracusa of UPA and various TV cartoon studios.

Other fields

In the production of radio programs and music, persons who manipulate sound recordings are known simply as "editors," in cases where the producers themselves do not perform the task.

