Last Updated on October 8, 2016 by Andrew Culture
The manipulation of audio samples is an integral part of the music process. We describe the essential functions you’ll find in most audio editors and explain how and when to use them…
In this Sample Edit feature we explain the following sample processing and manipulation functions:
Cutting & pasting, Snap To, Zero crossing points, DC offset, Crossfading, Time stretching, Pitch shifting, Pitch correction, Formants, Pitch bending, Channel conversion, Phase and phase inversion, Pseudo stereo, Reverse, Normalisation, Resample/Sample conversion
Whether you want to edit an audio recording or tweak a sample, you need suitable tools for the job. And those tools are audio editors such as WaveLab, Sound Forge, Adobe Audition (formerly Cool Edit) or one of the many other audio editors bundled with hardware or available on the web. The main differences between the software are their editing and processing features. Here we’ll look at common sample edit and manipulation functions and explain the part they play in sample editing.
The processes described here can be applied to short samples for use in a sampler, sample loop files, and to larger audio files such as instrumental recordings. The editing principles are exactly the same although some types of file lend themselves to certain functions more than others and we’ll discuss those when we get to them.
The most basic edit functions are cutting, copying and pasting. In a sequencer, cutting is often done with a scissors tool. In most audio editors you highlight a section of audio and select Cut from the edit menu.
There are two important considerations here. If you are cutting part of a song file, try to cut at beat lines to keep cut sections in beat or rhythmic ‘blocks’. Most sequencers have a Snap To function which restricts edit functions to certain divisions of the beat, so use this. Audio editors may not have such a function, so you need to be precise with your manual cuts. Some editors display material in a range of formats such as samples, minutes and even beats, so zoom in close in order to cut or place a marker exactly on the required division.
Whatever editor you’re using, check to see if it has any zero crossing points options. When you look at a waveform, the places it crosses the horizontal zero line are called, naturally, zero crossing points. At these points the waveform is at zero amplitude and, therefore, if you’re going to cut and paste a sample these are the best places to do it to minimise potential glitches.
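To see what a ‘snap to zero crossing’ option is actually doing under the bonnet, here’s a rough sketch in Python (the function names are our own, not any editor’s): a zero crossing is simply a point where consecutive samples change sign.

```python
def zero_crossings(samples):
    """Return indices where the waveform crosses the zero line."""
    crossings = []
    for i in range(1, len(samples)):
        # A crossing occurs when consecutive samples change sign
        # (or when a sample lands exactly on zero).
        if samples[i - 1] * samples[i] < 0 or samples[i] == 0:
            crossings.append(i)
    return crossings

def snap_to_crossing(samples, position):
    """Snap an edit position to the nearest zero crossing."""
    crossings = zero_crossings(samples)
    if not crossings:
        return position
    return min(crossings, key=lambda c: abs(c - position))
```

A real editor works on much longer buffers, of course, but the principle is the same: move your cut point to the nearest index this kind of scan returns.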
A DC Offset occurs when there is too large a DC (Direct Current) component in the signal. It shows up in a waveform when the wave is not centred above and below the zero line. However, a DC Offset may be problematic even if you can’t physically see it (if you really want to see it, zoom in close on the sample). A DC Offset is most commonly caused by mismatched equipment such as a microphone or soundcard.
DC Offset causes two problems. Firstly, it affects zero crossing points because any routine that scans for these points will be looking close to the zero line for the crossings. Glitch-free looping, cutting and pasting will be difficult at best and impossible at worst.
Secondly, many processing functions may not perform optimally due to the offset in the waveform.
In a nutshell, you really ought to fix a DC offset problem before editing or processing a file. This is very easily done with most audio editors via a Remove DC Offset function. Most editors simply remove the offset completely which is usually what you want, although some may allow you to adjust the offset value by a specific amount. This may be necessary if the offset is not consistent throughout the sample.
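For a constant offset, removing it completely is as simple as it sounds: the DC component is just the average of all the samples, so subtract it. A minimal sketch:

```python
def remove_dc_offset(samples):
    """Re-centre a waveform on the zero line by subtracting its mean."""
    offset = sum(samples) / len(samples)   # the DC component
    return [s - offset for s in samples]
```

After this the samples average out to zero, which is exactly what zero-crossing scans and most processing functions expect.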
If you cut at another point, above or below the zero line, and join this to another sample which starts at a different level, the change will produce a click. Selecting good zero crossing points is an essential part of the art of making sample loops. Zero crossing point options may simply be a ‘snap to zero crossing points’ setting or, as with Adobe Audition, for example, it may have options to move the boundaries of the selected area to crossing points to the left and right.
However, simply cutting at a zero crossing point is not always enough. If you try to join two contrasting timbres, for example, then there will inevitably be a click, so you should also ensure that the two parts sound similar. There are ways to join parts that are not similar or to loop a sample that is proving difficult (creating and manipulating drum loops) but for more amenable parts, most editors have some looping tools to help. And you should take advantage of any help that’s offered!
Sound Forge’s Loop Tuner and WaveLab’s Crossfade Looper both butt the end of the loop up against the start so you can see and hear exactly how the join performs. You can easily adjust the start and end points and quickly hear the results. Both have a Crossfade Looper function that can crossfade across the join, too, which helps with difficult loops. WaveLab also has a Loop Tone Equalizer that goes even further by evening out differences in timbre and level.
Zero-X’s Seamless Looper has a host of functions dedicated to finding and creating good loops. It is particularly adept at creating loops with a sustain section such as instrument samples designed to be played by a sampler.
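The idea behind a crossfade across a loop join can be sketched like this (this is a generic illustration of the technique, not the algorithm any of the tools above actually use): the tail of the loop is faded out while the material just before the loop start is faded in, so the jump from loop end back to loop start is smoothed over.

```python
def crossfade_loop(samples, loop_start, loop_end, fade_len):
    """Crossfade the last fade_len samples of a loop with the
    fade_len samples preceding loop_start (loop_start must be
    at least fade_len samples into the file)."""
    out = list(samples)
    for i in range(fade_len):
        t = i / fade_len                          # fade position, 0.0 -> 1.0
        tail = out[loop_end - fade_len + i]       # loop tail, fading out
        pre = out[loop_start - fade_len + i]      # pre-loop audio, fading in
        out[loop_end - fade_len + i] = tail * (1 - t) + pre * t
    return out
```

Because the end of the loop now blends into a copy of what precedes the start, the wrap-around sounds continuous rather than abrupt.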
Two related and particularly useful processes for both audio tracks and samples are time stretching and pitch shifting. If you speed up a tape recording you’ll hear that the time or duration gets shorter while the pitch gets higher, and if you slow down a tape, the duration increases and pitch gets lower. The two are inextricably linked.
However, with digital editing we can now do one without the other. Changing the duration is particularly useful if you need to make a sample fit into a section of a certain length. The term ‘time stretching’ may be a little misleading as we can ‘time shrink’ just as easily.
Different editors offer different ranges of functions. You may be able to change the duration by tempo, by time, or by a percentage. Many have an option to change the pitch along with the time, like a tape recording, which may be more useful for creating effects than for song production.
Possibly the most common requirement when working with audio is to change the duration of a sample. Let’s say you have found or created the perfect loop but it’s too fast or too slow. No problem – time stretch it to the correct duration. You can also use this function with an audio track if you decide you want to change the tempo of a song after recording. This is not recommended but it can be done.
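The arithmetic behind fitting a loop to a song tempo is worth spelling out: the stretch ratio is simply the original tempo divided by the target tempo. A quick sketch (hypothetical helper names):

```python
def stretch_ratio(original_bpm, target_bpm):
    """Factor to multiply a loop's duration by so it sits at target_bpm."""
    return original_bpm / target_bpm

def stretched_length(duration_sec, original_bpm, target_bpm):
    """New duration of a loop after stretching it to the target tempo."""
    return duration_sec * stretch_ratio(original_bpm, target_bpm)
```

So a two-second loop recorded at 120bpm needs to be shrunk to 1.5 seconds to sit at 160bpm; feed that target duration (or the ratio) into your editor’s time stretch function.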
If you try to stretch (or shrink) a sample by too much the result will sound unnatural although the exact degree of acceptable change will depend on the source material. For small ‘corrective’ adjustments there should be few problems. However, extreme changes can be used to good purpose to create special effects and to create your own sound samples. Again, you need to be aware that large changes will sound unnatural and will introduce artefacts into the audio which will be particularly noticeable if you slow down the audio. This in itself, of course, can be used as an effect. Severe duration changes applied to vocals, for example, can generate interesting sections of pitched and resonant audio that could be used for tunes. The more experimental musician can time stretch specific sections of a sample rather than the entire file to create even more extreme effects.
Pitch shifting is the sister of time stretching but no longer are they joined at the hip. One of the most common uses of pitch shifting is, perhaps, more endearingly referred to as pitch correction, and that’s to pull recordings of poor singers back into pitch. Perhaps the technicians on Pop Idol and the X Factor couldn’t find the button. While small pitch adjustments can be made to vocals with built-in audio editor functions, you have more options and will get better results with a dedicated processor such as AutoTune.
Formants are a collection of harmonics that occur naturally in many sounds such as bells and reed instruments but more especially in vocal sounds. They play a major part in giving the human voice its specific tonal characteristics.
If you pitch-shift a vocal, the formants are shifted, too, and you lose the distinctive character of the voice. The ‘chipmunk’ effect is created simply by increasing the pitch of a voice. To maintain vocal characteristics you need to preserve the formants during a pitch change.
Processors in many sequencers and editors, and particularly effects designed for processing vocals, have this as an option.
Pitch shifting can be used to fit the pitch of a sample to a recording. This is particularly useful if you have a sampled melodic riff in a different key to the song but, again, beware of shifting a sample too far from its original pitch. You may also come across samples that are out of pitch, perhaps by a few cents, and these can be pulled back into line, too. There may also be drum loops and samples that you think would sound better at another pitch – all good candidates for pitch shifting.
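Under the hood, a pitch shift expressed in semitones and cents boils down to a frequency ratio: each semitone is a factor of the twelfth root of two, and a cent is a hundredth of a semitone. A small sketch of the conversion:

```python
import math

def pitch_ratio(semitones, cents=0):
    """Frequency ratio for a pitch shift in semitones (plus fine cents).
    +12 semitones doubles the frequency (up an octave)."""
    total_semitones = semitones + cents / 100.0
    return 2 ** (total_semitones / 12.0)
```

So pulling a sample that’s 30 cents flat back into line means applying a ratio of `pitch_ratio(0, 30)`, a shift of well under one percent.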
As with time stretching, pitch shifting changes the characteristics of the audio, particularly vocals, and many editors have an option to preserve formants through a pitch change. This can also be used when shifting instrument and drum samples and it’s worth experimenting with it to see whether you prefer the tones produced with the ‘with’ or ‘without’ formants setting.
And, like time stretching, pitch shifting is a good way to marmalise samples during those days when you turn into a sound sculptor.
Bend it like Beckham
An extension of pitch shifting a sample is pitch bending it. This is a wonderful effect, not available in all audio editors, but it is in Sound Forge, WaveLab and Adobe Audition, where you can gradually change the pitch of a sample during playback. So you could, for example, increase or decrease the sample pitch as it plays back. But more than that, these three editors let you draw pitch envelopes onto the sample so the pitch can be varied any which way as it plays.
While we struggle to think of a musical use for this, it can be used creatively for special effects, creating fast or slow pitch changes within a sample. On a more global level it can be used to create the effect of a record (remember them?) slowing down. Use it at the end of a song (of a suitable genre, of course), to slow it down and stop as an alternative to a fade-out or sudden stop ending.
Just a phase
If you look at a waveform, say a sine wave, in an editor, you’ll see half the wave is above the zero line and half is below. If you copy the waveform and move it half a cycle to the right, you’ll see that when one is positive, the other is negative. They are the exact opposite of each other and are said to be 180 degrees out of phase. The phase has no effect on the tone and the two signals sound absolutely identical. However, if you sum these signals together they will cancel each other out.
In a stereo recording, if the phase of one side of the signal has been inverted some parts of the sound may fade in and out. The Phase Invert function in an editor allows you to correct a stereo signal in which one channel has been inverted. Not something you’ll need every day but useful to have in your editing arsenal.
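Phase inversion itself is about the simplest process in the book: negate every sample. Summing a signal with its inverted copy then cancels to silence, which is easy to demonstrate in a sketch:

```python
def invert_phase(samples):
    """Flip the waveform 180 degrees by negating every sample."""
    return [-s for s in samples]

def mix(a, b):
    """Sum two equal-length signals sample by sample."""
    return [x + y for x, y in zip(a, b)]
```

Mixing any signal with its phase-inverted twin gives a string of zeros, which is exactly why an accidentally inverted channel causes parts of a stereo image to vanish when the mix is summed to mono.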
If you need to convert a stereo file to a mono file or vice versa, you need a Channel Converter function. The stereo to mono conversion ought to be straightforward. Theoretically, all you need do is mix the two channels into one but some editors let you determine how much of each channel goes into the final mix.
Normally you will want 50% of each but if you’re into karaoke (spit!) you can try the trick of converting a stereo recording to mono with a left mix of +100% and a right mix of -100%. Most vocals are mixed smack bang in the middle of the stereo image and this setting will invert the phase (see side panel) before mixing, the idea being that the common signal content – that is, the vocal – will be removed or severely reduced.
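Both the straight conversion and the karaoke trick can be sketched in a few lines (hypothetical function names; real editors expose the mix amounts as percentages):

```python
def stereo_to_mono(left, right, left_mix=0.5, right_mix=0.5):
    """Mix two channels to mono with adjustable per-channel amounts."""
    return [l * left_mix + r * right_mix for l, r in zip(left, right)]

def karaoke(left, right):
    """The +100%/-100% trick: subtracting the channels cancels
    anything panned dead centre, which is usually the vocal."""
    return stereo_to_mono(left, right, left_mix=1.0, right_mix=-1.0)
```

Anything identical in both channels cancels; whatever differs between them (the panned instruments) survives. The catch, of course, is that centre-panned bass and kick drum get thinned out along with the vocal.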
Converting mono to stereo is just as easy – you simply copy the recording into both left and right channels of a stereo file. Some editors have a specific function to help with this but it should not be difficult with any editor. However, all you have now are two identical mono channels and not a recording with a sense of stereo placement. But, of course, there are effects to help. Adobe Audition’s Echo Chamber, for example, can give a stereo effect to mono material by adding ambience effects. It lets you specify positions for ‘left’ and ‘right’ microphones and by increasing the distance between the mics, a pseudo stereo effect is produced. Audition also has a Pan/Expand function that can expand (or narrow) the stereo separation of the left and right channels.
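Ambience processing aside, one simple DIY pseudo-stereo trick (not what Audition’s Echo Chamber does, just a different technique) is to delay one channel of the copied mono signal by a few milliseconds. The ear interprets the short delay as width rather than an echo – the Haas effect:

```python
def mono_to_pseudo_stereo(samples, delay_samples=200):
    """Copy mono to two channels, delaying the right channel slightly.
    delay_samples should be much shorter than the sample itself;
    200 samples is roughly 4.5ms at 44.1kHz."""
    left = list(samples)
    right = [0.0] * delay_samples + list(samples[:len(samples) - delay_samples])
    return left, right
```

Keep the delay under around 30ms or so; beyond that the brain starts to hear a distinct slapback echo instead of a wider image.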
Put it another way…
Reverse is a very old effect but deserves a mention because we can bring it bang up to date. The process itself simply reverses a waveform so the sample plays backwards. It’s an interesting effect, atmospheric with vocals (and Black Sabbath fans) and useful for creating special sample effects. It’s also the core of the reverse cymbal effect so widely used, still, in Dance music to lead into the start of a section. If you like reverse cymbals it’s worth running some of your favourite cymbal samples through this.
We can give the reverse effect a twist by reversing a sample, applying an effect and reversing it again. What this does is to give us a reverse effect that leads up to the main sound that created it. Don’t think about it, try it! Heavy reverb, for example, creates a sort of backwards effect so we start to think the sound is being played backwards but then we hear the sound correctly so we think it sounds like it’s coming from down a tunnel but we don’t really know what the heck we’re hearing! Such effects are commonly used in the movies to create a disorienting ambience.
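The reverse-effect-reverse trick is easy to sketch. Here a toy single-tap echo stands in for whatever effect you’d really use (reverb, delay, and so on); the point is that after the second reversal, the effect’s tail arrives before the dry sound instead of after it:

```python
def reverse(samples):
    """Play the sample backwards."""
    return samples[::-1]

def echo(samples, delay=1, gain=0.5):
    """A toy single-tap echo, standing in for any real effect."""
    return [s + (samples[i - delay] * gain if i >= delay else 0.0)
            for i, s in enumerate(samples)]

def reverse_effect_reverse(samples, effect):
    """Reverse, apply the effect, reverse again: the effect tail
    now leads up to the original sound instead of trailing it."""
    return reverse(effect(reverse(samples)))
```

Run a single transient through it and you can see the pre-echo building up to the hit rather than decaying away from it.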
Normalisation increases the amplitude of a signal to make it as loud as possible without distorting. This is useful to help balance different tracks when assembling them for a CD or simply to increase the loudness of a quiet track. However, you need to be aware that normalisation will increase any noise as well as the signal, which highlights the importance of making good recordings in the first place.
Normalisation works by scanning the file for the highest peak in the signal, subtracting this from a level of 0dB and then increasing the total waveform by that amount. Many editors let you vary the scale of the increase so you can decide not to do a full normalisation, say if the noise level is too high. You can normalise to greater than 0dB, which will result in clipping, although this may be acceptable if the peaks are only a transient pulse or two, or if you particularly want a soft clipping/distortion sound.
When normalising a stereo file you should ensure that the two channels retain their relative amplitudes. WaveLab’s Stereo Link function scans both channels to find the maximum peak but applies the same level of gain to both.
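The basic mono case boils down to: find the peak, work out the gain that brings it to the target level, apply that gain to everything. A minimal sketch (a target of 1.0 represents full scale, i.e. 0dBFS; pass something lower for a partial normalisation):

```python
def normalise(samples, target_peak=1.0):
    """Scale a signal so its highest peak hits target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)            # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]
```

Applying one gain value to the whole file is also why the noise floor comes up along with the signal, and the stereo-link idea above is just this same gain computed from both channels’ combined peak and applied to both.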
Finally, there’s the Resample or Sample Conversion function that converts one sample rate into another. If you want to use a collection of samples of different rates in a project, it may be necessary to use this to convert them to the same rate. Note that increasing the sample rate will not increase the fidelity of a low-quality sample – essentially you’re just using more disk space to store the same crap sound – but some sequencers and editors need all the audio files to be the same format so this may sometimes be necessary.
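To show the principle, here is a deliberately naive resampler using linear interpolation between neighbouring source samples. Real converters use much better filtering (sinc interpolation with anti-aliasing), so treat this strictly as an illustration:

```python
def resample(samples, from_rate, to_rate):
    """Crude sample-rate conversion by linear interpolation."""
    ratio = from_rate / to_rate
    out_len = int(len(samples) / ratio)
    out = []
    for i in range(out_len):
        pos = i * ratio                   # fractional position in the source
        j = int(pos)
        frac = pos - j
        # Interpolate between sample j and the next one (hold at the end).
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out
```

Upsampling just invents in-between values from the data already there, which is the code-level reason it cannot add fidelity a low-quality sample never had.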
If you work at 24-bit you have to reduce the bit depth to 16-bit for burning to CD. This is best done with a dithering algorithm such as those found in WaveLab and Cubase; Sound Forge and Adobe Audition also have several dithering options. There are no optimum choices here – the best settings will depend on the material, so you need to experiment.
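A rough sketch of what a basic dither stage does (this is generic TPDF dither, not the specific algorithm in any of the editors mentioned): a tiny amount of triangular noise is added before rounding to 16-bit, so the quantisation error becomes benign hiss rather than correlated distortion.

```python
import random

def to_16bit(sample_float, dither=True):
    """Convert a float sample in -1.0..1.0 to a 16-bit integer,
    optionally applying TPDF dither before quantising."""
    scaled = sample_float * 32767
    if dither:
        # TPDF noise: sum of two uniform values, +/-1 LSB peak
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    value = int(round(scaled))
    return max(-32768, min(32767, value))    # clamp to the 16-bit range
```

Without the noise, quiet passages quantise to stair-steps that track the signal; with it, the rounding errors are decorrelated and average out.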
The basic edit functions of modern audio editors are essential tools for anyone working with audio, particularly samples. Get to know how they work in your editor of choice and make your samples better fit your music.