Server-Side Stream Object


There's a special version of the Stream object just for the server side. As you recall, on the client side a separate stream is created for each audio/video signal traveling in a particular direction to or from a particular client, so you may end up with a ton of client-side streams. The server-side Stream object is different. It's not as though your main.asc file wants to watch a stream, nor can it publish a camera signal of its own. However, the server-side Stream can perform the following interesting tasks:

  • Begin publishing a prerecorded stream at a specific time so that users who tap in will hear or see it already in progress

  • Grab an existing stream (being published by a user) and record part or all of it independent of what the client script is doing

  • Record a stream with no sound or video, but just for embedding events as a form of logging or archiving

To understand the server-side Stream object, you almost have to forget how things work on the client side; not quite, but the syntax is a little different, as you'll see in the following examples.

Publishing Existing Streams from the Server Side

For most situations, you'll probably want to give each client the capability to decide when to begin streaming a prerecorded video or MP3. Suppose, however, that you want something like a radio DJ who selects songs to play while people tune in and out as they want. Although the DJ could connect and begin publishing a live stream, he can't really get each client to begin streaming the same song at the same time. I suppose you could send a message over an RSO and tell everyone to commence streaming, but it's not the same. Listing 9.16 creates a server-side Stream and immediately begins publishing a recorded stream. This is sort of like an individual client publishing a live stream: once it starts, anyone can receive it by just issuing a play() command.

Listing 9.16. Publishing a Server-Side Stream
 1a application.onAppStart = function(){
 2a   application.s = Stream.get("theStream");
 3a"mp3:some_song");
 4a }

Then, on the client side, issue the following:

 1 begin_btn.onPress = function(){
 2   my_ns = new NetStream(my_nc);
 3"theStream");
 4 }

You can think of line 2a as the server-side equivalent to new NetStream() . It creates a variable called s (kept inside the Application object, because I figure you might want to access it later, for instance, to stop it). This variable ( s ) contains a server-side instance of the Stream object. When you issue get() , you specify a name that gets used by those who want to play the stream. (That name is also the name of the saved FLV if you decide to record it.) Line 3a just starts playing some_song.mp3 even though no one may be listening. (The actual MP3 needs to reside in the appropriate streams folder, details of which you learned in Chapter 8.) The client-side code pretty much looks as if you're playing a stream called "theStream" that you expect someone else is currently publishing. In this case, that "someone else" is the server side. Of course, it won't begin until the begin_btn is pressed. And then when it does begin, you'll hear the song already in progress. (So, maybe pick a longish song for this example.)

Most of the same options found on the client side are available for playing a server-side stream. For example, that play() command could include the options for making play lists (covered in Chapter 8). You could set up hours of music to stream for whenever anyone happens to log in. It turns out that if you're really planning on creating an Internet radio station, FCS is probably not the ideal tool. What's cool, however, is that you've got Flash, so you can customize the interface and really do whatever you want.
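As a sketch of that play list idea, the server-side play() accepts the same start, length, and reset parameters covered in Chapter 8; passing false for reset queues a stream after the current one instead of replacing it. (The song filenames here are hypothetical; the MP3s would need to exist in the streams folder.)

```actionscript
application.onAppStart = function(){
  application.s = Stream.get("theStream");
  // Start the first song immediately (from position 0, play the whole file)"mp3:song_one", 0, -1);
  // Queue two more songs; reset=false appends them to the play list"mp3:song_two", 0, -1, false);"mp3:song_three", 0, -1, false);
}
```

Clients still just play "theStream" as before; the server works through the queue for everyone.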

Republishing Portions of Client Streams

You just saw how a server-side stream can play a prerecorded stream. Alternatively, when you identify what stream to play, you can specify a live stream being published by any user. Of course, any client can also identify that same stream and start playing it. For example, I could create an admin application that enables me to type in any stream name that I see in the App Inspector and start listening in (see Figure 9.3).

Figure 9.3. You can tap into any active stream when you know its name.


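A minimal client for that admin idea might look like the following sketch. The stream_txt input field, listen_btn button, and my_video instance names are hypothetical, and my_nc is assumed to be an already connected NetConnection.

```actionscript
listen_btn.onPress = function(){
  // Close any stream we were already monitoring
  if (admin_ns != undefined) {
  admin_ns = new NetStream(my_nc);
  // Attach to a video object in case the stream carries video
  // Play whatever stream name was typed in (as seen in the App Inspector);
};
```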
You don't need the server-side stream for this, just the stream name. Anyway, it might be neat to automate the process. Suppose, for instance, there are three users, each publishing a stream from their microphone. Listing 9.17 sets up a special server-side stream that, one by one, rebroadcasts (that is, publishes) each user's stream, sort of like a security camera application, but this one cycles through each audio stream. Perhaps this might even be useful for monitoring a large chat application. Keep in mind that you'll hear only one user's microphone at a time (not two people talking to each other). You would just need an extra server-side stream to listen to two people simultaneously.

Listing 9.17. Cycling Through Three Published Streams
 1 application.onAppStart = function(){
 2   application.s = Stream.get("theStream");
 3   application.n = 0;
 4   myInt = setInterval(application.switchNow, 5000);
 5 }
 6
 7 Application.prototype.switchNow = function(){
 8   application.n++;
 9   if(application.n > 3){
10     application.n = 1;
11   }
12 ;
13"user_" + application.n);
14   trace("now republishing user " + application.n);
15 }

In addition to exploiting the setInterval() feature, notice that all I do in lines 12 and 13 is stop playing the current stream and then start playing a new one (based on the application.n variable I made up). Notice, too, that I only needed to get() the stream once in line 2 (effectively just to give it a name).

This code assumes that someone is publishing each of the following stream names: user_1, user_2, user_3 . Then, someone is playing theStream to hear the script cycle through them. When I tested it, I created two NetStream instances in my FLA, attached my microphone to one instance, and published "user_1" . Then I used the other NetStream instance to play "theStream" . If I heard myself after seeing that trace() in line 14, I knew it was working. Here's the portion of my client-side code that I put inside the my_nc.onStatus callback:

 in_ns = new NetStream(my_nc);"theStream");

 out_ns = new NetStream(my_nc);
 my_mic = Microphone.get();
 out_ns.attachAudio(my_mic);
 out_ns.publish("user_1");

While republishing different users' streams may seem sinister, I'm sure you can think of some practical examples. The point is that you can use a single server-side stream to selectively "play" different streams. If anyone plays the server-side stream, that user will hear whatever the server-side stream is playing at the time.
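As noted earlier, monitoring two users at once just takes a second server-side stream. A minimal sketch (the monitor and user stream names are hypothetical):

```actionscript
application.onAppStart = function(){
  // Two independent server-side streams, each republishing one user
  application.s1 = Stream.get("monitorA");
  application.s2 = Stream.get("monitorB");"user_1");"user_2");
}
```

A client that plays both "monitorA" and "monitorB" (on two NetStream instances) hears both microphones at once.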

Server-side streams are definitely an interesting concept that allows for some pretty neat applications. For example, you'll see next that you also can specify that a server-side stream gets recorded. I've used that feature to archive online meetings. Because a stream can't mix multiple streams together, in my app I made the stream switch around like the preceding example (although not randomly). This way the archive included only one person at a time, but it switched to the person presenting at the time.
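A sketch of that meeting-archive idea: the cycling code from Listing 9.17 needs only a record() call added when the stream is created, so everything the stream plays gets written to an FLV. (This assumes the same hypothetical user_1 through user_3 stream names; the switchNow function would be the same as in Listing 9.17.)

```actionscript
application.onAppStart = function(){
  application.s = Stream.get("meeting_archive");
  // Everything this stream plays from here on is saved to meeting_archive.flv
  application.s.record();
  application.n = 0;
  myInt = setInterval(application.switchNow, 5000);
}
```

Calling application.s.record(false) later closes the file and ends the archive.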

By the way, everything in the preceding two listings also applies to video streams (live or recorded). I say that because the next section focuses on using server-side streams to archive or log events. In such cases, you may not want to include the video or audio portion.

Using Server-Side Streams to Log Events

You're about to see how you can use recorded streams for something other than audio and video: specifically, for logging. For example, you may want to keep an archive of all text sent through a chat application. Although you could probably just store everything inside an RSO, this isn't ideal because the archive code would have to work alongside your existing RSO code. In addition, RSO files are associated with application instances, whereas you can save log streams with unique names, perhaps including the date. What's really wild about streams full of saved events is that you can output all contained events in real time or in one burst. Consider the idea of saving a chat meeting: You could play it back in real time or just output a transcript of everything.

You also can record server-side streams that do contain audio and/or video, but you'll probably want to keep the task of logging separate from saved media streams.

The following listing shows a simple example of a saved server-side stream that can store a record of events triggered while the app runs. For this example, suppose that you have a simple chat application already complete, but you want to keep a record of everything that was said and when it was said. The existing app can have a message_txt input text field instance and a button instance, send_btn , and this code:

 my_nc = new NetConnection();
 my_nc.onStatus = function(info) {
   if (info.code == "NetConnection.Connect.Success") {
     my_so = SharedObject.getRemote("chat_so", my_nc.uri, false);
     my_so.onSync = function() {
       message_txt.text =;
     };
     my_so.connect(my_nc);
   }
 };
 my_nc.connect("rtmp:/chat_app/r1");

 send_btn.onPress = function() { = message_txt.text;
 };

This code works as is. Basically, each user can change the text in their input text field, press send, and everyone else will see what they entered. The shared object file ("chat_so") contains a single property called archive , and that's the value each user changes.

To extend this super-simple chat application so that archives can be recorded, we're going to write some server-side code (to record a stream) and then some client-side code (to initiate the recording and to play it back later). That is, we need a client interface to both start the stream and to extract data from it later. At first I thought we would automatically start recording when the app started, but then we would have to work out a strategy to uniquely name the streams, and we would need some way to stop the recording. That solution might be okay down the road, but during testing it was just way too hard to constantly restart the app and so on. Figure 9.4 shows what the prototype interface might look like.

Figure 9.4. The buttons, text field, and list on the bottom portion enable you to record all text sent through the chat interface (on top).


In the extended chat app, I have buttons to start and stop the recording ( start_btn and stop_btn ), and a button to extract the events (really, play the stream) called play_btn . Finally, I've added an input text field instance, filename_txt (to specify the stream name), and a List component instance called _lb (to view the extracted data, as shown in Figure 9.5).

Figure 9.5. The list will display the entire log recorded in a data-only FLV file.


The following solution in Listing 9.18 uses the send() feature first explained in Chapter 8, but this time on the server side.

Listing 9.18. Logging Events in a Stream

Here are the contents of the main.asc file:

 1a Client.prototype.onStartRecording = function(filename){
 2a   application.s = Stream.get(filename);
 3a   application.s.record();
 4a   application.s.send("newText", "start time", application.getTD());
 5a   application.the_so = SharedObject.get("chat_so", false);
 6a   application.the_so.onSync = function(list){
 7a     for (var i = 0; i < list.length; i++) {
 8a       if(list[i].code == "change" && list[i].name == "archive") {
 9a         application.s.send("newText",
10a           application.the_so.getProperty("archive"),
11a           application.getTD());
12a       }
13a     }
14a   }
15a }
16a Client.prototype.onStopRecording = function(){
17a   application.s.record(false);
18a   application.the_so.onSync = null;
19a }

For the client-side code, you can add this to the existing chat code (or, at least, make sure you also have the connection code):

 1 start_btn.onPress = function() {
 2"onStartRecording", null, filename_txt.text);
 3 };
 4 stop_btn.onPress = function() {
 5"onStopRecording", null);
 6 };
 7 play_btn.onPress = function() {
 8   in_ns = new NetStream(my_nc);
 9   _lb.removeAll();
10   in_ns.newText = function(message, timeStamp) {
11     _lb.addItem(timeStamp + "--> " + message);
12   };
13 ;
14 };

The server-side code has two functions: one to start and one to stop the recording. Line 2a creates a new stream with the filename passed in, and then line 3a begins recording. Incidentally, if you also added something such as"some_stream") , the media from that stream would be included in the archive, but we're just going to include events. Line 4a goes ahead and sends the first event into the stream. Basically, newText is the event name, "start time" is just text for the first parameter (that will get extracted later), and then we use getTD() (from earlier) to create a nicely formatted date string (this is just another parameter to extract). Finally, we connect to the RSO the clients are using ("chat_so"). The onSync callback executes lines 9a through 11a only when the archive property has changed. (Who knows, there might be other properties.) Then, you see that lines 9a through 11a are similar to 4a, but after the event name, we stuff in the new value of archive (line 10a). Finally, the onStopRecording code closes the stream file and clears the onSync we were monitoring.

For the client side, realize there are two features: starting/stopping a recording and then playing one back. Line 2 shows how to trigger the onStartRecording method on the server (via call()), and line 5 shows how onStopRecording is triggered. The idea is that you first type in a name for the recorded stream and then click the start_btn . Do some chatting (typing into the message_txt field and pressing the send_btn ), and press the stop_btn when you're done. Then, to extract the data from a stream, make sure to fill in filename_txt with the name of the stream you want to "play." The play_btn creates a stream (line 8), clears the list (line 9), and sets up the newText callback (lines 10 through 12). (Remember, newText was the first parameter in the server-side send() on line 4a.)

Figure 9.5 shows what a typical log might produce when "played."

Note that when you play a stream containing events that were recorded on the server side, the events all trigger at once when the stream plays back. This is useful for logging. In this example, for instance, you'll see the whole chat history immediately after you click play. If you want the events to retrigger during playback at the same times they were recorded, however, you need to use client-side code like you saw in Chapter 8.

The technique of using streams for logging events is actually pretty easy considering the power it gives you. What's really nice is that you can build a rich Flash interface to view the recorded data. Alternatively, you could connect to an application server and write log data to a database; but then to view that data, you either use the database's viewing tools or have to write another Flash app to extract the data. Events go into streams (via send() ) almost as easily as they come out (via callbacks).


Macromedia Flash MX 2004 for Rich Internet Applications
ISBN: 0735713669
Year: 2002
Pages: 120