8.1. Importing Sounds
There are two primary ways to use sound in Flash. The first is by importing a sound into Flash so that it is distributed as part of the compiled .swf file. The second is by loading an external sound at runtime using ActionScript. In most cases, your project will be better served by loading external sounds during playback. This helps keep the size of your main .swf down and makes it easier to update your project.
However, there are some cases where internal sound is preferred. Perhaps the most common need for internal sound is when you want to try to synchronize audio and visual assets. Another example is when you don't know enough ActionScript to load the sounds yourself and you are not satisfied with using a Flash component. Finally, you may need to confine your project to a single .swf for use in content management systems that don't play nice with external assets.
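For context on the external route mentioned above, the basic runtime-loading ActionScript is brief. This is a minimal sketch, assuming ActionScript 3.0 and a hypothetical MP3 file name (music.mp3) sitting next to the .swf:

```actionscript
// Load and play an external MP3 at runtime (ActionScript 3.0).
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

var snd:Sound = new Sound();
snd.load(new URLRequest("music.mp3")); // hypothetical file name

// play() returns a SoundChannel you can later stop or transform.
var channel:SoundChannel = snd.play();
```

Because MP3 loading is progressive, the sound can begin playing before the download finishes, much like the Stream sync type discussed later in this section.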
8.1.1. Preparing Sound for Import into Flash
Flash can import a variety of standard file formats, including AIFF (Mac only), WAV (Windows only), and MP3 (both platforms). If you have QuickTime installed, it can act as a conduit to allow AIFF and WAV to become cross-platform file formats, importable into Flash on both Macintosh and Windows machines.
Many developers say that importing uncompressed AIFF or WAV files will give you the best sound quality. This is because Flash will compress the files when publishing to .swf. When uncompressed sounds are used, Flash will not be compressing an already compressed sound and, therefore, will not contribute to further sound quality degradation.
Other developers argue that using uncompressed sound makes the .fla file too large and unwieldy and dramatically increases publishing time. Testing your movie during development becomes time-consuming, and each file becomes harder to open and store.
As is often the case, you will need to do what you find to be best for you. Try working with your preferred higher-quality format, WAV or AIFF, and then contrast that same working experience with using MP3 files. For the import examples in this book, all three file formats have been provided.
8.1.2. Sync Type
For imported sound files, Flash offers a few basic ways of dealing with the audio. To see them in action, first import two sounds to work with and place one in your timeline:
Create a new file and save it as stream_sound.fla in the 08 folder of your working directory.
Import the Nero file from the 08 folder, selecting the compression format of your choice.
For consistency, open the Library panel and rename the file to Nero, removing the file extension from the original name if necessary. You won't have to do this in your day-to-day work, but keeping the name consistent with this text will make the projects easier to follow.
To examine basic timeline sound options, add a sound to frame 1. To do this, select frame 1 of Layer 1 and, in the Properties panel, pull down Nero from the Sound menu. Finally, choose Stream from the Sync menu. Your Properties panel should now look like Figure 8-1.
Save your file.
Figure 8-1. The Properties panel showing the first sound added to your file
The Sync menu, as seen in Figure 8-1, is where you choose how Flash handles your timeline sound. Choosing the appropriate sync type can significantly improve the user's experience when viewing your file. Here is a brief overview of each type:
Event
Plays sounds beginning when the playhead first enters the starting keyframe and lasting until the sound is finished, independently of the timeline. This is an important factor to consider. It means that, for example, the sound will continue to play even if the .swf stops playing, and if the sound is triggered again, another instance of the sound will play even if the previous instance is not finished. Also, event sounds must be fully downloaded before they can play. For these reasons, event sounds should be limited to very short sounds placed in the timeline, such as a sound meant to simulate a user's mouse click. In these cases, it is unlikely that multiple occurrences of the sound will overlap or linger beyond their intended playback duration.
Start
As with the Event sync type, plays sounds until they are finished, independently of the timeline. However, if the sound is already playing, no new instance of the sound plays.
Stop
Takes the Start sync type one step further, preventing multiple instances of the sound from playing, but also stopping the sound when the last keyframe of the frame span in which it resides is reached.
Stream
Attempts to synchronize the sound with the timeline in which it resides. The Stream sync type does its best to force the timeline animation to keep pace with the stream sound. As with most video technologies, if the animation cannot keep pace with the sound, Flash will skip frames in the timeline to keep them in sync. Stream sounds stop when their frame spans elapse or when the .swf stops playing. Most importantly, stream sounds, like the timeline itself, can begin to play even while the remainder of the .swf is downloading. For all of these reasons, stream sounds are best for long audio files, such as soundtracks and ambient sounds.
Sounds can be repeated or looped, and the number of times this occurs can be specified in the field provided. It's not a good idea to loop streaming sounds, though, because Flash will expand the file size by the number of loops. If looping is desired, Event is typically the recommended sync type.
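If you prefer to loop with ActionScript rather than the timeline's repeat field, the play() method accepts a loop count with no file size penalty. This is a hedged sketch, assuming ActionScript 3.0 and a sound exported from the Library with a hypothetical linkage class name, Nero:

```actionscript
import flash.media.SoundChannel;

// Nero is a hypothetical linkage class for a Library sound,
// which Flash generates as a subclass of Sound.
// play(startTime, loops): start at 0 ms and play three times.
var channel:SoundChannel = new Nero().play(0, 3);
```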
To get a feel for how this works in practice, try working with a few examples of the Stream and Event sync types.
8.1.3. Using stream sounds
The Nero sound you placed in frame 1 is a long music file suitable as a soundtrack for an animation. In fact, you will see this file in use as a video soundtrack in Chapter 9. In your current setup, however, the sound will play for only one-twelfth of a second (assuming Flash's default frame rate of 12 frames per second), because the sound currently spans only one frame.
Add the required frames and listen to the sound:
To allow the sound to play through to its end, add 1300 frames to Layer 1. You may need to add them in two or three steps, as Flash optimizes its interface by displaying only a minimal number of frames until you need more.
Note: You may notice the waveform visible in the layer as you add frames. This can be handy when trying to match animation events to portions of the audio. You can increase the size of timeline layers to better see the waveform by using the menu at the end of the frame number bar, located directly above the timeline's vertical scrollbar. If Large isn't big enough, you can select Preview. This adds the benefit of placing thumbnail previews of each layer's content in the timeline. It can take a while to generate the thumbnails, though, so use this view only when needed.
Save and test your movie. It will take a little more time than usual because the sound is being compressed.
You will hear the sound play until the end (approximately 1 minute and 48 seconds), and then it will loop. This looping is not part of the sound itself; it occurs because the end of the file was reached and, without an ActionScript command to the contrary, Flash loops the .swf by default.
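If you don't want the movie to loop, a single frame action on the last frame is enough to override that default. A minimal sketch (the same syntax works in ActionScript 2.0 and 3.0):

```actionscript
// Place on the last frame of the timeline to halt playback,
// preventing the default loop back to frame 1.
stop();
```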
8.1.4. Using event sounds
As described earlier, for small, short sounds, one of the other sync types is preferable to Stream. Take a look at how the Event sync type can be used to play a short sound effect when you click a button:
Open the alert_example.fla file in your 08 folder and save it as button_sound.fla in the same folder.
This file has been set up with two keyframes with simple stop actions, and a button script in each keyframe that sends the playhead to the other keyframe. It's a simple simulation of an alert dialog. In frame 1, pressing the button to continue your way through the program causes a warning to be displayed. You then continue past the warning; in this case, you cycle through the process again for tutorial purposes.
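The scripts in the two keyframes are simple. Here is a hedged sketch of what frame 1 might contain, assuming ActionScript 3.0 and a hypothetical button instance name, continue_btn (your source file's names may differ):

```actionscript
import flash.events.MouseEvent;

// Halt the playhead so the movie waits for user input.
stop();

// When the button is clicked, jump to the other keyframe.
continue_btn.addEventListener(MouseEvent.CLICK, onContinue);
function onContinue(evt:MouseEvent):void {
    gotoAndStop(2);
}
```

Frame 2 would contain the same idea in reverse, sending the playhead back to frame 1.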
Test the movie to try it out. There's no sound, but you will add that next. When you understand the simple file structure, continue on.
Import the bip audio file. Again, use the file format of your choice and rename the sound to bip, removing the file extension if necessary, just for consistency.
Double-click the button to edit it. Add a new layer called sound and put keyframes in the Down and Hit state frames.
Select the Down state frame and add the bip sound via the Sound menu in the Properties panel. Select Event from the Sync menu. Your edited button should now look like Figure 8-2.
Figure 8-2. The edited button with the sound in the Down state frame
Save your work and test the movie. You should now hear a sound each time the button is pressed. If you want to compare your file with the source file provided, it should now resemble button_sound_01.fla.
In this example, notice that the sound occurs both when the alert is displayed and when you continue past the alert. This is because the sound is in the button, and the button is used in both places.
However, you can reconfigure the file structure slightly so the sound is heard only when the alert is displayed:
Open the alert_example.fla file in the 08 folder again. This time, save it as frame_sound.fla in the same folder. In this example, instead of placing the sound inside the button, you will place it in the frame in which the alert is displayed.
Create a new layer called sounds in the main timeline, and create a keyframe in frame 2.
Using the same procedure you've used the last couple of times, use the Properties panel to add the bip sound to frame 2 in the sounds layer. Your edited timeline should now look like Figure 8-3.
Figure 8-3. The edited timeline with the new sound in the new sounds layer
Save your work and test the movie. You should now hear the sound when the alert is displayed, but not when you continue on afterward. Placing the sound in the frame with the alert, and not within the button, localizes it to the alert. This allows you to reuse the continue button when it suits you. If you want to compare your file with the source provided, it should now resemble frame_sound_01.fla.
8.1.5. Compression Settings
Regardless of which sync type suits your specific situation, you can apply one of many compression algorithms to the sound when publishing to .swf. Whether you like it or not, compression is a battle between quality and file size. The trick is to determine which setting gives you the lowest possible file size without ruining fidelity.
Detailing the various compression options available is outside the scope of this text, but here are some basic guidelines to get you started:
When delivering your content via a network (either the Internet or a local network such as an intranet), it is usually a good idea to choose MP3 as your compression scheme. This is Flash's default option.
Whenever possible, use 16-bit sounds. 8-bit sounds are too hissy and degraded, and thanks to advancements in compression technology, they yield few benefits over 16-bit files.
Use the lowest sample rate you can without compromising quality beyond acceptable standards. Typically, you will use 22.050 kHz for network delivery. You may use 11.025 kHz for speech only, and you may use 44.100 kHz for disc-based delivery (as in kiosk use or CD-ROM delivery).
Taking the previous tip further, when working with Flash, always make sure you use a sample rate that is evenly derived from 44.1 kHz (44.1, 22.05, or 11.025 kHz). Otherwise, Flash will resample the audio, altering its pitch. This is really important in the world of MP3 compression, because many software packages offer sample rates of 32, 24, and 16 kHz, among others. This is a common problem, so watch for it.
Use mono when you can, to keep file size smaller. You can still pan and fade mono sounds using in-Flash editing techniques and ActionScript, both of which you will learn about in upcoming sections of this chapter.
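Panning a mono sound with ActionScript takes only a few lines. This is a hedged sketch, assuming ActionScript 3.0 and a hypothetical external mono file, voice.mp3:

```actionscript
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.media.SoundTransform;
import flash.net.URLRequest;

var snd:Sound = new Sound();
snd.load(new URLRequest("voice.mp3")); // hypothetical mono file
var channel:SoundChannel = snd.play();

// SoundTransform(volume, pan): pan ranges from -1 (full left)
// to 1 (full right). Here, half volume, panned hard left.
channel.soundTransform = new SoundTransform(0.5, -1);
```

Because a mono sound has only one channel of data, it can be panned anywhere in the stereo field at playback time, which is why mono rarely costs you flexibility.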
When your chosen compression option offers a quality setting (such as the MP3 option), you can save development time by using the fastest option during development and switching to the best option for final output.
There are many more subtleties you may wish to consider, as these brief tips cover only the simple settings that you have direct control over in the Flash interface. Also, these tips do not cover basic optimization efforts, such as using loops instead of longer files when possible and trimming any unwanted leading or trailing segments of your sound. The Flash Help system, as well as numerous online resources, can provide additional information.
In Flash, you can apply these settings in two ways. In File→Publish Settings, in the Flash section, you can select global compression settings that will apply to the entire file by default. However, as you saw with graphics compression in Chapter 5, you can also set the compression settings on a case-by-case basis in the Library. Right/Ctrl-click on any sound in the Library, and the Sound Properties dialog will appear. Here, you can change the settings for each individual sound.
8.1.6. Simple Edits
In addition to basic playback options, Flash provides simple sound editing capabilities. With a sound in a frame selected in the Properties panel, clicking the Edit button will open the Edit Envelope dialog, seen in Figure 8-4. In this rudimentary editing environment, you can use volume markers to create fades, pans, and segments marked by in and out points.
Figure 8-4. The Edit Envelope dialog, with the Effect menu displayed
The Effect menu in the upper-left corner of the dialog allows you to choose from preset effects, but you can also create your own:
Reopen the stream_sound.fla file you created earlier.
Select frame 1 of Layer 1, where you placed the Nero sound clip. In the Properties panel, click the Edit button next to the Effect drop-down list.
For a stereo sound such as this one, you will see two sound waves. (A mono sound will display only one sound wave.) They represent the sound data in the left (top sound wave) and right (bottom sound wave) stereo channels. The horizontal bar in the center of the panel displays the time of the sound clip. Using the bottom-right buttons, this time can be displayed in seconds or frames, and the magnifying glass can be used to show more or less of the sound wave at once.
Click the Zoom Out magnifying glass three times, until your sound wave resembles Figure 8-4.
In the upper-left corner of each channel, you'll see a small square resting on a horizontal line that spans the length of the sound. This is a volume control handle. Using as many handles as necessary, you can change the volume of a sound over time, allowing for fades and pans.
At the 8-second mark, click the volume line in the left (top) channel. A handle will be added to both channels at this point.
Repeat this process at the 4-second mark.
In the left (top) channel, drag the handle you just created (at the 4-second mark) down to the bottom of the channel. This causes the volume to drop to zero in this channel only, over four seconds.
Reduce the volume to zero in the right (bottom) channel, using the first handle. This causes the volume to start at zero in this channel only, increasing to full volume over four seconds. The handle placement should look like Figure 8-5. The resulting effect is a pan from left to right over the first four seconds, and then, as the right channel stays at full volume, an increase in the left channel over the next four seconds, centering the sound.
Figure 8-5. Using volume control handles to create a pan over the first four seconds and a center over the next four seconds
Use the play button in the lower-left corner of the dialog to make sure you're happy with the results of your edits.
Click OK to close the dialog box, then save your work and close your file.
This technique not only lets you make changes to the sound, but it does so on an instance basis. This means that you can create an envelope for a sound in frame 1, and then a different envelope for the same sound used in another part of the timeline. By positioning two volume control handles atop each other in one channel, you can get an immediate muting of a sound, allowing for in and out points without the need for fades.
Note: If you do not already have a favorite sound editor, the free, open source, cross-platform Audacity (http://audacity.sourceforge.net) is a good place to start.
While handy for quick edits, these tools are understandably crude. If you need more exacting edits or audio processing effects, or if you do not need the entire audio clip, it is usually best to use an external sound editor. You'll look at that next.