[ LiB ]  

FCS supports true streaming media. This differs from the progressive streaming that plain Flash movies support. Like so many audio and video formats, Flash movies will play the first part of a movie while the rest of the file downloads, the idea being that the rest of the movie will finish downloading by the time those end frames need to display. The two big advantages of true streaming are that the quality of the video can be adjusted as it plays (for example, skipping frames as needed) and, after the movie or sound plays, the native media file is not left on the user's machine (which media producers desire because it protects their content).

Of course, there are less esoteric features to media streams. Clients can play recorded videos or sounds, and they can publish audio (from their microphone) or video (from their webcam) to share with other users live or, optionally, record for later playback. That's pretty much everything. Naturally, there are lots of details. But for as cool as it is, there aren't too many.

A Channel Inside Your NetConnection

Just like how an RSO is attached to a particular NetConnection, so are NetStreams. Each NetStream instance handles one one-way avenue over which audio, video, or both travel. If you want to both send audio from your microphone and, at the same time, hear audio from another user, you need two NetStream instances. It's sort of like how a telephone has both a speaker next to your ear and a mic by your mouth. Of course, a telephone doesn't literally work that way, but remember that each stream is a single channel. FCS can handle thousands of simultaneous streams (although, obviously, bandwidth quickly becomes an issue).

There's a second step after you create a NetStream instance. Namely, you have to say whether you want to play or publish. That is, if you want to play a particular stream that was either recorded earlier or is currently being published live, you just have to identify it. In addition, when you play a video stream, you'll want to attach it to a video symbol onstage (so that you can see the video). Alternatively, when publishing, you first need to identify the user's camera and microphone and attach them to the stream. Finally, you need to give this published stream a name (so that others can subscribe to it) and decide whether you want to record it permanently. It may sound like a lot of details, but it's not that bad. Figure 8.7 shows an overview of the possibilities.

Figure 8.7. This visualization of NetConnections and NetStreams should help sort things out.


It all starts with a NetConnection, and then a NetStream. From there, you can go on to either play or publish. Making a NetStream is pretty easy, as Listing 8.8 shows.

Listing 8.8. Skeleton NetStream Creation
 1 my_nc = new NetConnection();
 2 my_nc.onStatus = function(info) {
 3     if (info.code == "NetConnection.Connect.Success") {
 4         initNS();
 5     }
 6 };
 7 my_nc.connect("rtmp:/stream_app/r1");
 8
 9 function initNS() {
10     my_ns = new NetStream(my_nc);
11 }

The first 7 lines contain standard NetConnection stuff. It turns out that you don't really have to wait for a successful connection before creating a NetStream, but it makes the most sense to do so. Anyway, line 10 is all you need. In this case, we'll have the my_ns instance onto which we can play or publish a stream. That instance will also have an onStatus event that you can use to trap such events as when a stream stops playing (as you'll see in the next section). If you need more streams, just duplicate line 10, but change my_ns to some other unique variable name.

Take a little time to plan out your use of NetStreams. It's fine to have lots of streams; you just have to keep track of them all. If you're building a video chat application for two people, for example, the SWF will need to create two streams (one for publishing and one for playing the other person's published stream; call the two instances out_ns and in_ns if you want). (By the way, one user can publish both audio and video over a single NetStream.) From FCS's perspective, however, there will be a total of four streams (two for each user). This means that while the variable names for the two streams inside the SWFs can be the same, user A and user B can't both publish something they call "me" and subscribe to "him". Rather, user A can publish "a" and play "b", whereas user B would play "a" and publish "b". Okay, that sounds like two stream names that the server has to track, but in fact, the server will set up four stream channels. It just so happens each of the two named streams will be used in two channels. I think visualizing this is easier than explaining it (see Figure 8.8).
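The two-person setup above can be sketched in a few lines. This is a minimal sketch of user A's side only; the application name chat_app and the video instance name him_video are just assumptions for illustration (user B would run the same code with the "a" and "b" names swapped):

```actionscript
// User A's side of a two-person video chat (hypothetical names).
my_nc = new NetConnection();
my_nc.onStatus = function(info) {
    if (info.code == "NetConnection.Connect.Success") {
        // Outgoing channel: publish my camera and mic as "a"
        out_ns = new NetStream(my_nc);
        out_ns.attachAudio(Microphone.get());
        out_ns.attachVideo(Camera.get());
        out_ns.publish("a");
        // Incoming channel: play user B's published stream "b"
        in_ns = new NetStream(my_nc);
        him_video.attachVideo(in_ns);"b");
    }
};
my_nc.connect("rtmp:/chat_app/r1");
```

Notice both channels hang off the single NetConnection; only the NetStreams multiply.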

Figure 8.8. Although there are only two named videos ("a" and "b"), the server needs to maintain four streams.


Playing FLV Videos and MP3s

Although you may think you have to first record or publish a stream before someone can play it, that's not necessarily true with FCS. You can place MP3 audio or FLV videos in the appropriate folder and connected clients can stream them into their Flash movie.


What's an FLV Video?

The only video format that Flash can play dynamically at runtime is FLV. This proprietary format can be produced in three ways.

  1. FCS produces FLV files when a user publishes a video and audio stream coming from their camera and microphone (and the Record option is specified).

  2. If you embed a conventional video (say, a QuickTime) inside Flash, you can export an FLV (by selecting the Export option when you double-click a video item in the library).

  3. Third-party video editors that support Flash Video Export (which comes with Flash Pro), as well as products such as Sorenson Squeeze (sold separately), can produce high-quality FLVs either manually or, in the case of Squeeze, automatically when raw videos appear in a "watch" folder.

Note that although Flash Player 7 can natively play FLVs dynamically, you need FCS to stream them. Without FCS, you're only able to do a progressive download (which means you can see the first part of the video while the last part downloads).

Playing FLV Videos

The script for playing a prerecorded FLV is identical to playing a live video (see Listing 8.9). In fact, playing an MP3 (always prerecorded) is nearly identical; you just include an additional "mp3:" in the script. In the case of video, there's an extra step of attaching the stream to a specific video instance onstage so that you can see it. To make the following code work, you need a video instance onstage named my_video. (Create the video instance by first selecting New Video from the Library's menu as shown in Figure 8.9.) You also need an FLV in the right place. In this case, call the app my_app and the instance my_instance; the FLV (called "somevideo.flv") must then reside in the following location inside FCS's applications folder: my_app/streams/my_instance/somevideo.flv.

Listing 8.9. Playing a Video Stream
 1 my_nc = new NetConnection();
 2 my_nc.onStatus = function(info) {
 3     initNS();
 4 };
 5 my_nc.connect("rtmp:/my_app/my_instance");
 6 function initNS() {
 7     my_ns = new NetStream(my_nc);
 8     my_video.attachVideo(my_ns);
 9"somevideo");
10 };

Line 7 creates the NetStream instance (and notice it's associated with my_nc). Then line 8 attaches the video portion of this stream to the my_video instance onstage. (If it's pure audio, you don't need this step.) Finally, the play() method is issued in line 9. Incidentally, you leave off video's .flv extension. (It's the default format.) FCS actually looks for a live stream first, and then looks for a saved FLV.

By the way, you can quickly modify this code to work inside Flash (with no FCS). Just change line 5 to my_nc.connect(null); and line 9 to"somevideo.flv");. Finally, you need to store the FLV adjacent to your SWF.

Figure 8.9. You can create a video symbol (to later drag onstage) via the Library Options menu.


Playing MP3s

It's possible to create audio-only FLVs, but it's not exactly a standard format. Version 1.5 of FCS now supports true MP3 streaming (via the NetStream object). It's pretty much the same as the preceding code listing, except you don't need the attachVideo() method, and you need to specify MP3. Just take an MP3 file named somesong.mp3 and put it in the same folder where you placed the FLV above. Then comment out line 8 and change line 9 to read as follows:

"mp3:somesong");

Alright, having the format prefix in front of the filename is weird. I suspect it's odd because Macromedia added MP3 support after FLV had already been established as the default format. (It will make more sense later when you see how to play live streams.)

Virtual Directories for Sharing Streams

Not only are the streams played so far hidden deep inside a set of subfolders, they're tied to a particular app instance. If you want several applications (and different instances of those apps) to have access to the same library of songs, you don't have to fill up your server's hard drive with duplicate copies of each song. Instead, you just have to set up a common folder called a virtual directory. Virtual directories have two parts: the actual folder location, and an alias that you use from inside Flash to point to this folder.

The way you identify such an alias is by editing the Vhost.xml configuration file (found deep inside the conf folder adjacent to your installed version of FCS). In Listing 8.10, you'll see how the alias music will point to the folder C:\My Music where the actual FLVs and MP3s reside.

Listing 8.10. Defining and Using a Virtual Directory

Here's the pertinent portion of the Vhost.xml file:

 <VirtualDirectory>
     <Streams>music;C:\My Music</Streams>
 </VirtualDirectory>

When you restart the server, you'll be able to play MP3s stored in C:\My Music from any SWF by just referring to the folder music (as shown next).

"mp3:music/other");

Basically, you just squeeze in alias_name/ (or, in this case, music/) in front of the MP3 filename. The same technique works for FLVs (just leave off the mp3:). Now, any application can stream MP3s or FLVs stored in the My Music folder.

There's certainly more to streams than just getting them started. For example, you may want to stop the stream with script:

Notice you don't stop(); you just use play() and pass false rather than a filename.

Next you'll see more advanced controls than just start and stop (or, I should say "play a file" and "play false").

Playback Controls

The advanced controls covered in this discussion include the following:

  • Pausing (and resuming) a stream

  • Displaying the current position with a progress bar

  • Seeking to a particular position

  • Ascertaining when a stream has finished playing

  • Creating a play list that automatically plays one song after another

Let's go through the scripts for each of these tasks (see Listing 8.11). Because the first four tasks build on each other, you can keep adding to the same file. Start with three homemade button instances (play_btn, pause_btn, stop_btn) and one rectangle clip instance (bar) for the progress bar (using the center-left registration option). You also can draw an outline around the bar. Finally, to let the user click right on the progress bar and seek, place an invisible button (the same size as the progress bar). This button should also have the center-left registration option (see Figure 8.10).

Listing 8.11. Pausing and Resuming a Stream
 1 my_nc = new NetConnection();
 2 my_nc.onStatus = function(info) {
 3     if (info.code == "NetConnection.Connect.Success") {
 4         initNS();
 5     }
 6 };
 7 my_nc.connect("rtmp:/jukebox/ap1");
 8 function initNS() {
 9     my_ns = new NetStream(my_nc);
10     my_ns.onStatus = function(info) {
11         for (i in info) {
12             trace(i + ": " + info[i]);
13         }
14     };
15     paused = false;
16 };
17 play_btn.onPress = function() {
18     if (paused) {
19         my_ns.pause();
20     } else {
21         //set up progress bar later
22"mp3:rock");
23     }
24     paused = false;
25 };
26 pause_btn.onPress = function() {
27     paused = true;
28     my_ns.pause();
29 };

I hope most of this looks familiar. Note that I have an application named jukebox and instance ap1. (The MP3 rock.mp3 needs to be in a folder ap1 inside streams inside jukebox.) The onStatus event (starting on line 10) is used later to ascertain the end of the song (although you'll see interesting information in the Output window until then). The default use of pause() will pause when playing and resume when paused; hence, the simplicity of the pause_btn's onPress callback. However, because play() always starts from the beginning, I came up with the homemade variable paused to track whether it was really time to play() or just (un)pause(). That is, in play_btn's onPress callback, it either plays or pauses and always sets paused back to false.

Figure 8.10. The center-left registration option is used for the progress bar (and the invisible button later).


A NetStream instance has a time property from which you can determine the current position. To show this as a percentage (such as with a progress bar), however, you also need to know the entire duration of the stream. The length property is available only in server-side ActionScript. Instead of making you wait until the next chapter, I've included client-side code that asks the server-side code to get and return the length of a particular stream. The server-side code is just a text file named main.asc and resides in the application folder. Listing 8.12 shows how to make the progress bar work.

Listing 8.12. Displaying Current Position with a Progress Bar

First, the contents of your main.asc should contain this code:

 Client.prototype.getLength = function(filename) {
     return Stream.length("mp3:" + filename);
 };

Second, you need to insert the following code in place of line 21 in the preceding listing (that is, in the else part of the if statement):

21 returnObj = new Object();
22 returnObj.onResult = function(result) {
23     totalLength = result;
24     _root.onEnterFrame = function() {
25         bar._xscale = my_ns.time / totalLength * 100;
26     };
27 };
28"getLength", returnObj, "rock");

The server-side code returns the length of whatever filename was passed. Then the client-side code calls the getLength function in line 28. You'll see call() in Chapter 9, but for now, notice that the second parameter is the object returnObj that I previously defined to handle an onResult event. Inside that onResult callback, I set totalLength to the value returned, and then set up the onEnterFrame code that will repeatedly set bar's _xscale to the appropriate percentage (that's time/totalLength).

For the seek code in Listing 8.13, it turns out that actually seeking is very easy (just issue and replace seconds with a number). Calculating to which point in the song you want to jump is a bit more work. The invisible button instance seek_btn must match exactly the shape of the bar instance (and be right on top of it).

Listing 8.13. Seeking to a Particular Position
 seek_btn.onPress = function() {
     var percent = (_xmouse - seek_btn._x) / bar._width; * totalLength);
 };

Because the registration point for the button is at its far left, you can calculate how far over the mouse click was by subtracting the seek_btn's _x from the _xmouse . I first tried using seek_btn._width , but invisible buttons have no width! Notice the totalLength variable was set in the previous code listing.

Finally, use the NetStream object's onStatus event to figure out when the song has finished playing. (I suppose you could just keep checking to see whether time was equal to the totalLength , but Listing 8.14 enables you to explore the onStatus event.)

Listing 8.14. Ascertaining When a Stream Finishes

This code goes within the onStatus event (after line 13 in Listing 8.11).

14 if (info.code == "NetStream.Play.Stop") {
15     this.waitingForEmpty = true;
16 } else if ((info.code == "NetStream.Buffer.Empty") &&
17         this.waitingForEmpty) {
18     trace("song is over");
19     this.waitingForEmpty = false;
20 } else {
21     this.waitingForEmpty = false;
22 }

You'd think you could just wait for the code property to equal "NetStream.Play.Stop". However, this information is sent to the onStatus before the song really ends; you stop hearing it when the buffer is empty (code "NetStream.Buffer.Empty"). The preceding script first sets the homemade variable waitingForEmpty to true; then when the buffer is empty and waitingForEmpty is true, you hit line 18 where the trace() executes.

You'll probably want to replace that trace() with some code that hides or disables the stop_btn and pause_btn . If you do that, just remember to re-enable those buttons, say, when the user clicks the play_btn again.
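One way to sketch that swap (the enabled property and the handler structure here are just one approach, not the book's online version):

```actionscript
// Hypothetical replacement for the trace() in line 18 of Listing 8.14:
// disable the transport buttons when the song ends...
pause_btn.enabled = false;
stop_btn.enabled = false;

// ...and re-enable them whenever playback starts again.
play_btn.onPress = function() {
    pause_btn.enabled = true;
    stop_btn.enabled = true;
    // (existing play/pause logic from Listing 8.11 continues here)
};
```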

Play lists are a lot cooler than you might think at first glance. Basically, you just keep issuing play() commands (but with optional parameters) and one song will play after the next. (Listing 8.15 effectively breaks the progress bar and seek features, although you'll find fixes in the online files for this chapter.)

Listing 8.15. Creating a Play List

"mp3:song1", -2, -1, false);"mp3:song2", -2, -1, false);

You can put this code in place of your existing play() command in the earlier examples. Of course, you need to create the files song1.mp3 and song2.mp3. Translated, the preceding code will play song1.mp3 and, when it finishes, automatically begin playing song2.mp3. The optional second and third parameters are a sort of code system. For example, changing the second parameter to -1 is the code for "play a live stream only." I suggest you look at the Reference panel information inside Flash for this method because there are lots of variations. The fourth parameter (false here) means that existing play lists (or just old play() commands) should not be flushed. That is, continuing to issue play() will just queue up the requests so that they play in sequence.

Not only does my online version of this code include a working progress bar and seek feature (for play lists), it also includes a nice interface for users to create their own play lists. It's just way more code than will fit here.

Now you've seen there's more to playing back audio and video than just the play() command. Virtual directories for sharing media, seeking to specific places in a song or video, and defining play lists are all variations on the main idea behind playing audio and video. It turns out that's only the first half! Now you'll see how to record live media through what's called publishing.


This section shows you how to take the image and sound from one user's camera and microphone, publish it to other connected users, and optionally record it. Here's the process in a nutshell:

  1. Make a variable instance that points to the user's camera using Camera.get() .

  2. Make another variable instance that points to the user's microphone using Microphone.get() .

  3. On a free NetStream instance, use both attachAudio() and attachVideo() to connect the preceding two instances.

  4. Issue the publish() method on that NetStream instance and specify a stream name (that other clients identify when they play()) as well as other options such as whether it will be recorded or just go out live.

Accessing the User's Camera and Microphone

You'll see that grabbing the camera and microphone really is pretty easy. It's probably easiest to see with a simple example.

 my_cam = Camera.get(); 
The preceding code finds the first available camera (in case the user has more than one connected to his machine) and stores a reference to it in the my_cam variable. To access a specific camera, you can pass an index inside the get() method. To best handle users who do have multiple cameras, however, let the user select from, say, a ComboBox (named camera_cb), as this code shows:

 my_cam = Camera.get();
 if (Camera.names.length > 1) {
     camera_cb._visible = true;
     camera_cb.removeAll();
     camera_cb.addItem("Change Cameras", null);
     for (var i = 0; i < Camera.names.length; i++) {
         camera_cb.addItem(Camera.names[i], i);
     }
     camera_cb.addEventListener("change", pickCamera);
 } else {
     camera_cb._visible = false;
 }
 function pickCamera() {
     my_cam = Camera.get(;
 }

You'll get an array of all attached cameras using Camera.names (not to be confused with, which returns just the string name of the camera in the instance my_cam). The preceding code uses that array to populate the ComboBox. The ComboBox triggers the pickCamera() function, which then selects a different camera.

You can implement nearly the same code to grab a microphone:

 my_mic = Microphone.get();
 if (Microphone.names.length > 1) {
     microphone_cb._visible = true;
     microphone_cb.removeAll();
     microphone_cb.addItem("Change Microphones", null);
     for (var i = 0; i < Microphone.names.length; i++) {
         microphone_cb.addItem(Microphone.names[i], i);
     }
     microphone_cb.addEventListener("change", pickMicrophone);
 } else {
     microphone_cb._visible = false;
 }
 function pickMicrophone() {
     my_mic = Microphone.get(;
 }

Publishing a Live or Recorded Stream

Now that you've got handles on both the camera and microphone, you can move on to publishing. The code in Listing 8.16 is complete (although it does forgo a few error checks such as checking whether the connection is really made):

Listing 8.16. Publishing Audio and Video
 1 my_cam = Camera.get();
 2 my_mic = Microphone.get();
 3 my_nc = new NetConnection();
 4 my_nc.connect("rtmp:/video_app/r1");
 5 my_ns = new NetStream(my_nc);
 6 my_ns.attachAudio(my_mic);
 7 my_ns.attachVideo(my_cam);
 8 my_ns.publish("live_signal");

(Don't forget to make an application folder called video_app.) After you get the camera and microphone, set up a NetConnection (lines 3 and 4) and a NetStream (line 5). Then lines 6 and 7 attach my_mic and my_cam to the NetStream. Finally, line 8 begins publishing this stream, called "live_signal". Incidentally, the publish() method has an optional second parameter for which you can supply "record", "append", or "live". By leaving it off, you're using "live", meaning there won't be an FLV left on disk when it's over. Line 6 trips the Security dialog box for users (see Figure 8.11).

Figure 8.11. Before you can successfully tie a camera or microphone to a stream (or video instance), the user must "allow" access.


The creepy thing about this example is that it just begins to broadcast the camera and microphone signal, but because no one is watching, you don't see anything! That is, the stream called "live_signal" is out there for anyone to play. (You'll see that code next.)

Once you're broadcasting the live stream, you can do a few things with it. You may want to let the publisher see himself. (That is, whoever is running the movie with the code from the preceding listing may want a picture mirrored back.) To show the user what he really looks like (that is, over the Internet), you need to set up another stream, because you're already using my_ns for the signal going out. Remember you have to have a separate stream for each direction. Although you can squeeze both the microphone and camera into the outgoing stream, you need another stream to play (think "download") the published stream. The thing is, you don't really want to let the publisher hear himself with the delay from the Internet, so the code that follows (although it works) isn't ideal. Just place a video instance named my_video onstage and add this code to the preceding listing:

 in_ns = new NetStream(my_nc);
 my_video.attachVideo(in_ns);"live_signal");

Basically, you just set up a whole new NetStream instance (in_ns) to handle playing the live stream. This code should look pretty familiar. It's just like Listing 8.9, which played a recorded video stream; this one just happens to be live. The problem with this is that it also sends the publisher's audio back through the publisher's speakers (where it can get picked up by the microphone to make an echo).

A much easier solution avoids this problem, although it gives the publisher a high-quality image of himself (which may not be accurate). This code should be used instead of the last block shown:

 my_video.attachVideo(my_cam); 
To turn all this code into a practical application, consider the idea that you might want to broadcast video to a bunch of students. For the teacher, you could use the preceding listing (with the additional line that displays the video). For the students, create a different movie with a video instance onstage named my_video and the following code:

 my_nc = new NetConnection();
 my_nc.connect("rtmp:/video_app/r1");
 in_ns = new NetStream(my_nc);
 my_video.attachVideo(in_ns);"live_signal");

This code is identical to the old "play a video" code from much earlier, but it does show you how you can have two separate SWFs connected to the same application (but effectively with different privileges or responsibilities). That is, although there are both a teacher.swf and a student.swf, you don't let the students have access to the teacher.swf movie. Really, the only hassle here is testing such an app. It's really easy to get mixed up; I recommend using the App Inspector to monitor what's going on with the streams (see Figure 8.12).

Figure 8.12. The App Inspector's Streams tab enables you to monitor all network streams.


There really isn't much more to say about publishing and playing. There are definitely some neat tricks and performance enhancements that I'll go over next. As I was learning FCS, a few areas of confusion kept popping up. For one thing, you have to remember to use attachVideo() to connect your camera to an outgoing stream as well as to attach an incoming stream to a video instance! In addition, unlike most objects that must be instantiated using the keyword new , the SharedObject, Camera, and Microphone objects all use the get() method instead. On top of these syntax issues, you really have to take on a whole new mind-set when building multiuser applications. I guess the point of this interlude is that you'll want to carefully map out your FCS apps.

Miscellaneous Tips When Publishing

I said there's not much more to say about publishing but, in fact, you can do quite a lot of tweaking with the signal uploaded from a camera or microphone. (Just check out all the properties for both the Camera and Microphone objects, in addition to the NetStream object.) Regarding published streams, FCS pretty much handles that automatically. The big concept to understand is that a video signal can adapt to low bandwidth by dropping frames and lowering picture quality, whereas a sound really can't. Besides lowering the capturing microphone's frequency rate, the best thing you can do for slower connections regarding audio is to increase the buffer time (that is, the amount that must download before it plays). The following list highlights some of the more important properties and methods (which you should look up in Flash's Reference panel) for microphones, cameras, and streams.

  • setRate() Sets the frequency rate at which the microphone captures. A mic instance's rate property has a direct impact on quality as well as bandwidth requirements. (The lower the number, the lower the quality and the bandwidth need.)

  • setSilenceLevel() Affects the threshold at which a microphone stops transmitting. That is, when it gets really quiet, there's no point in using bandwidth. You can control this breakpoint.

  • setUseEchoSuppression(true) Effectively presses the Reduce Echo option seen in the Settings panel, but it's really such a no-brainer that I'd say always set this to true.

  • activityLevel Continuously updates to show how much signal is going through the microphone. It's neat because you can use it to make your own VU meter.

  • muted (which also applies to Microphone objects) Is true when the user has chosen Deny in the Security dialog box (shown earlier in Figure 8.11).

  • setMode() Has a huge effect on how well the image survives squeezing through the Internet. Use setMode() to set the resolution and frame rate for how much data is captured by the Camera object. The thing that messes me up is that this is different from the _height and _width for a particular video instance onstage.

  • setQuality() Confusingly similar to setMode() , but controls the quality of each frame in the video by ensuring it doesn't exceed a specified bandwidth (such as controlling the quality with JPG compression).

  • setBufferTime() Enables you to control how much of the stream must preload before beginning to play.

  • liveDelay A read-only property that helps you monitor what sort of delay the user is experiencing.

When it comes to balancing quality and bandwidth, the general approach is to try to keep your original streams at their highest practical quality. That is, if you think some people will be able to handle a 300Kbps download, there's no need to make the video any higher quality than that. Remember that audio is a different story: Everyone will experience it at the same quality level (although, perhaps, delayed differently for buffering). The sad part is that people tend to notice audio quality suffering before they notice picture quality. (This is why FCS puts a priority on audio quality.) In any event, a high-quality image can be sent at different quality levels to clients with differing connection speeds.
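A few of the properties and methods from the list above can be sketched together. The specific numbers here are illustrative assumptions, not recommendations; tune them against your own bandwidth budget:

```actionscript
// Hedged example of tuning capture quality before publishing.
my_cam = Camera.get();
my_cam.setMode(320, 240, 15);    // capture 320x240 at 15 fps
// setQuality(bandwidth, quality): bandwidth is in bytes per second;
// 300Kbps is roughly 38400 bytes/sec. A quality of 0 tells Flash to
// vary picture quality as needed to stay within that bandwidth.
my_cam.setQuality(38400, 0);

my_mic = Microphone.get();
my_mic.setRate(11);              // capture at 11 kHz (lower = less bandwidth)
my_mic.setSilenceLevel(10);      // stop transmitting below this activity level
my_mic.setUseEchoSuppression(true);
```

Because the audio settings affect everyone equally (audio can't drop frames the way video can), err toward conservative microphone settings and let the camera's quality float.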

There's one last tip that's really worthy of a whole chapter, but I'll just mention it here and give you a simple example. The NetStream object's send() method is a way to embed event triggers right into a stream. If you record a video stream, for example, you'll be recording images, sounds, and any event triggers sent during the recording. Subscribers of these streams have the opportunity to be notified of such events while the video plays (and in perfect synchronization). This holds true even if they're playing a stream recorded earlier.

You just have to set up a callback to handle these expected triggers. Here's a quick example. Just make a clip with a stop() in the first frame and a visual change in its second framegive the clip an instance name clip. Use the code in Listing 8.17 for the recording version.

Listing 8.17. Embedding Events While Recording a Stream
 1 my_mic = Microphone.get();
 2 my_nc = new NetConnection();
 3 my_nc.connect("rtmp:/video_app/r1");
 4 my_ns = new NetStream(my_nc);
 5 my_ns.attachAudio(my_mic);
 6 my_ns.publish("recorded_message", "record");
 7 _root.onMouseDown = function() {
 8;
 9     my_ns.send("clickTime");
10 }

Nothing really new here except that line 6 includes the "record" parameter (to store the FLV stream permanently) and line 9 is issuing send("clickTime") . For the code that plays this stream, clickTime needs to be defined as a callback.

 1 my_nc = new NetConnection();
 2 my_nc.connect("rtmp:/video_app/r1");
 3 my_ns = new NetStream(my_nc);
 4 my_ns.clickTime = function() {
 5;
 6 }
 7"recorded_message");

Notice that we set up the clickTime callback on my_ns. This gets triggered while the stream plays. I just think this is the coolest feature. I built a simple guided tour on my main web site,, that uses this technique. I just launch a separate SWF that connects to the same app; it takes just a minute to rerecord a stream along with my mouse movements and button clicks, and it took less than 2 hours to program!

By the way, events embedded this way trigger even if you play back the FLV without FCS. That is, you'll need FCS to record as shown in the first block of code. But the second part will work without FCS when using Flash Player 7 for playback. Just replace the RTMP string in line 2 with null and add ".flv" after the filename in line 7.

I could go on and on about little techniques and "gotchas" when doing real FCS apps. Remember, however, this is a book about RIAs. It's enough if you get nothing else from this chapter except a clear idea of what's possible in FCS. Ideally, given the basic concepts you've learned, you will now have a clear place to start when you have an idea for an application.


Macromedia Flash MX 2004 for Rich Internet Applications
ISBN: 0735713669
Year: 2002
Pages: 120