Most streaming audio tools work well enough, but you can take certain extra steps with your source audio to help an encoder do a better job. These extra steps, sometimes called audio optimizations, can be helpful for all streaming audio scenarios, but they're most useful when authoring streams at 64Kbps and below. Using various forms of optimization, you can shape your source audio in numerous ways to improve the chances of delivering the stream to the listener with reasonable sonic legibility.

Naturally, encoding to a low bit rate is going to do some strange things to your audio. With only a small amount of space to represent your source audio, the choice of what to emphasize or throw away is critical. To put it visually, think of making a postage stamp from a painting. What must be lost to make a large image translate to a tiny one? Is the goal to keep the original colors true or to be merely recognizable? You've got to give up something to make it work. The question is: What? While you're thinking about that, consider this: You know your audio source best, so if you're going to remove something, it's better to choose those cuts yourself instead of letting the encoder do it.

Although somewhat heavy-handed audio optimization has obvious sonic benefits for low bit rate encoding, applying the same optimization to high bit rate encoded audio carries some risk. Overall, the subjective audio quality of high bit rate streams is much closer to the original source audio; therefore, the positive or negative effects of your careful optimization will be much more noticeable.

The whole point of optimizing your source audio prior to encoding is to ensure a good quality experience for the listener. Of course, quality is a subjective term. True, you have no control over a listener's environment, but you can make generalizations about whether your listeners are using decent speakers at home or are at work in a cubicle surrounded by ambient noise. To make effective use of the audio optimization process, it helps to clearly outline your goals before you begin. What, exactly, are you trying to gain? When you're encoding for low bit rates, the primary criterion should be basic sonic legibility. At higher bit rates, however, it's useful to have some sense of your projected end user's listening environment.

This book cannot begin to cover all the myriad ways to apply the numerous kinds of audio processing. Many shelves of books have already been written, and many more exist inside the heads of experienced audio engineers. This book can only cover a handful of audio optimization processes, and it's safe to say you could make a lifelong study of compression alone. An expert audio engineer can make something sound better with poor tools than a poor engineer can with great ones. The point is that you shouldn't be afraid to experiment, but also feel free to use the engineered presets as starting points. Let your ears be your guide. Don't expect to master this stuff overnight; many people dedicate their entire careers to these subjects.

This chapter focuses on applying audio optimization for the best sonic legibility in low bit rate encoding. You'll learn about equalization, normalization, compression, and a handful of quick and easy modifications to perform on your audio prior to encoding. To use these processes when encoding to high bit rate streams, simply scale way back on the processing values. Less is more.
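To give you a taste of what one of these processes looks like in software, here is a minimal sketch of peak normalization, bringing the loudest sample in a file up to a target level before encoding. It assumes Python with the numpy and soundfile libraries and a hypothetical file named source.wav; it illustrates the idea only and is not tied to any particular tool discussed in this chapter.

    # Minimal sketch: peak-normalize a source file before handing it to an encoder.
    # Assumes the numpy and soundfile libraries and a hypothetical "source.wav".
    import numpy as np
    import soundfile as sf

    TARGET_PEAK_DBFS = -1.0  # leave a little headroom below full scale

    # Read the source audio as floating-point samples in the range [-1.0, 1.0].
    samples, sample_rate = sf.read("source.wav", dtype="float64")

    # Find the loudest sample and compute the gain needed to bring it
    # up (or down) to the target peak level.
    current_peak = np.max(np.abs(samples))
    target_peak = 10 ** (TARGET_PEAK_DBFS / 20)  # convert dBFS to a linear value
    if current_peak > 0:
        samples = samples * (target_peak / current_peak)

    # Write the optimized file; this is what you would feed to the encoder.
    sf.write("source_normalized.wav", samples, sample_rate)

Note that normalization only adjusts the overall level; it doesn't change the relationship between loud and quiet passages the way compression does, which is why the two are treated as separate steps later in this chapter.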
Here are a few standard approaches (and obvious reminders) for optimizing your source audio prior to low bit rate encoding. It's assumed (since you're in an "advanced" chapter) that you've already taken appropriate steps such as using high-quality cabling whenever possible, eliminating noisy ground loops, and ensuring that all the toys in your audio chain are plugged into the same power source. (A power conditioner can help here, too.)
If you're optimizing digitized audio files on your computer, it's easiest to use software optimization tools. Most dedicated audio hardware requires sending the audio out in analog form to the unit and back, often going through the digital and analog conversion process two more times. High-end hardware and computer sound cards often use digital interfaces to avoid the signal degradation that typically occurs through repeated digital to analog (D/A) and analog to digital (A/D) conversion, but not everyone has access to them. If you're encoding a live stream from an external source, it's typically easier to route your audio signal through dedicated hardware before it enters the computer. Take the time necessary to properly configure your encoding system, choosing the most appropriate hardware and software audio optimization tools for your environment. The audio optimization processes detailed in this chapter are starting points. A groovy-looking piece of audio hardware with an anodized faceplate hooked up with oxygen-free gold cabling is not enough. Patience and your good ears will make the difference.