
5.3. Publishing Streams in Detail

The NetStream.publish( ) method can be used to publish live or recorded streams and to append data onto the end of an already recorded stream. To control how a stream is published, pass the string "live", "record", or "append" as the second parameter. For example, to append onto an already recorded stream:

out_ns = new NetStream(nc);
out_ns.publish("private/brian/greeting", "append");

When "live" or "record" is used, any previously recorded stream with the same stream URI is deleted. When "append" is used, data is appended onto the end of a preexisting stream. If a recorded stream at the same URI does not already exist, "append" creates the stream as though "record" was passed.

The stream will be saved as an .flv file within a subdirectory of the streams folder associated with the application instance. For example, assuming a Flash movie is connected to the instance algebra101 of an application named courseChat and that the relative URI of a published stream is:

private/brian/greeting

a file named greeting.flv will be created within this directory:

.../applications/courseChat/streams/algebra101/private/brian

The full path will vary depending on the server's operating system and on how FlashCom was installed. See Chapter 4 for more information on resource URIs. In this example, the stream URI can be broken down into two parts. The path information:

private/brian/

and the actual stream name:

greeting
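The breakdown can be expressed as a small helper. This is a hypothetical utility for illustration, not part of the FlashCom API; the syntax is plain ActionScript:

```javascript
// Hypothetical helper: split a stream URI into its path information
// and its actual stream name at the last slash.
function splitStreamUri (uri) {
  var slash = uri.lastIndexOf("/");
  if (slash == -1) return {path: "", name: uri};
  return {path: uri.substring(0, slash + 1), name: uri.substring(slash + 1)};
}

// splitStreamUri("private/brian/greeting")
//   -> path: "private/brian/", name: "greeting"
```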

You do not have to create the application's streams directory or any of its subdirectories when recording a stream. They are created automatically by the server as needed when the stream is published.

5.3.1. onStatus( ) handlers

During development, it is a good idea to define an onStatus( ) event handler that simply writes out all the properties of the information object it is passed. An easy way to do this is to define a method on the NetStream.prototype object, as in Example 5-1. An onStatus( ) handler defined this way will work for both publishing and subscribing streams. However, publishing and subscribing streams are sent different sets of messages and generally need to behave differently, so a single generic onStatus( ) handler is unlikely to serve both types of streams well. Another way to define an onStatus( ) handler is to define it as the method of an individual stream:

out_ns = new NetStream(nc);
out_ns.onStatus = function (info) {
  if (info.code == "NetStream.Publish.BadName") {
    writeln("Invalid stream name. Please enter a different one.");
  }
};
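For comparison, the prototype-based technique of Example 5-1 might be sketched as follows. This is an assumption about its shape; the example itself appears earlier in the chapter and is not reproduced here:

```actionscript
// Sketch: dump every property of the information object so that all
// publish and subscribe messages are visible during development.
NetStream.prototype.onStatus = function (info) {
  for (var prop in info) {
    writeln("info." + prop + ": " + info[prop]);
  }
};
```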

Yet another approach is to create a NetStream subclass with an onStatus( ) handler designed just for publishing or just for subscribing. Creating a NetStream subclass is described later in this chapter and is strongly recommended for applications that must always know the exact state of a stream at any given time.

If no onStatus( ) handler is defined for a NetStream object (or NetStream.prototype ), the onStatus( ) method of the NetConnection object to which the stream is attached will be passed any NetStream information objects with a level property of "error". If the NetConnection.onStatus( ) method is not defined either, then the System.onStatus( ) method, if any, is called.


5.3.2. Attaching and Detaching Audio and Video Sources

When a video or audio source is not needed, it is a good idea to detach it from a stream instead of leaving it to consume bandwidth. For example, during an online seminar, participants may need to see only the seminar leader and the person who is currently speaking. If there are 20 participants in the seminar, there is no point in sending the audio and video data for 20 people to the server. To temporarily detach audio or video data from a stream, pass null to either NetStream.attachAudio( ) or NetStream.attachVideo( ) . You can always call these same methods later to attach the sources again. For example:

out_ns = new NetStream(nc);
mic = Microphone.get( );
cam = Camera.get( );
out_ns.publish('public/' + userName);

function detachSources ( ) {
  out_ns.attachAudio(null);
  out_ns.attachVideo(null);
}

function attachSources ( ) {
  out_ns.attachAudio(mic);
  out_ns.attachVideo(cam);
}

Streams flow in one direction, and each point within a stream corresponds to a unique time measured in seconds. While the stream is being published, the time property of a NetStream object can be used to retrieve the number of seconds a stream has been publishing, provided a data source is attached to the stream. Time values are floating-point numbers that can provide values down to the millisecond. For example:

 var millisecondTime = out_ns.time * 1000; 

If no data is being sent in a stream, the time property will contain the last time data was sent. When a data source is attached to a publishing stream, the time value will restart at the last time value plus the time elapsed during which no data was sent. So even though the time value may pause when no data is sent, actual stream time always progresses and is reflected by the time property when data is again sent on the stream.

To stop publishing a stream without closing it, pass false to the publish( ) method in place of the stream URI:

 out_ns.publish(false); 

To explicitly close a stream use NetStream.close( ) :

 out_ns.close(  ); 

Use publish(false) when you plan to republish using the same NetStream object, and use close( ) when you are done publishing or to reuse the same NetStream object to subscribe to a stream. When either method is called, any audio, video, and ActionScript data that has not been sent already will be lost.
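A minimal sketch of both patterns, reusing the stream names from the earlier examples (someOtherUser is a hypothetical variable):

```actionscript
// Pause publishing but keep the stream open for republishing later.
out_ns.publish(false);
// ...later, republish on the same NetStream object:
out_ns.publish("public/" + userName);

// When done publishing, close the stream; the same object can then
// be reused to subscribe to another stream.
out_ns.close( );
out_ns.play("public/" + someOtherUser);
```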

5.3.3. Bandwidth and Performance Problems

Network bandwidth can vary dramatically depending on the type of connection (LAN, DSL, modem) and on the quality of all the network segments between the publishing movie and the FlashCom Server. The bandwidth may also vary significantly during the life of the connection. Each stream travels within a network connection, and multiple publishing and subscribing streams cannot send or receive more data than the connection is capable of carrying at any moment. For example, a connection via modem may allow sending a total of 56 Kbps of data while receiving 56 Kbps of data. Sending a single video stream at 16 KB/s (roughly 131 Kbps, since a kilobyte is 1,024 bytes of 8 bits each) will therefore cause a problem. Even worse, receiving three 16 KB/s streams is not possible: most of the data will have to be dropped to reduce the total from 48 KB/s to 56 Kbps, and the arrival of data will have to be delayed. Furthermore, the ideal speed of 56 Kbps is rarely achieved on a dial-up modem; throughput of 33 Kbps is more realistic. On the other hand, someone with a good DSL connection will have no problems sending one video stream or receiving the same three video streams.
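The arithmetic above can be checked directly (assuming 1 KB = 1,024 bytes and 1 Kbps = 1,000 bits per second); the syntax is plain ActionScript:

```javascript
// Convert a data rate in kilobytes per second to kilobits per second,
// to compare stream rates against modem line rates.
function kBpsToKbps (kBps) {
  return kBps * 1024 * 8 / 1000;
}

kBpsToKbps(16);      // 131.072 -- more than a 56 Kbps modem can send
3 * kBpsToKbps(16);  // 393.216 -- far beyond what it can receive
```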

The amount of data that a movie attempts to publish within a stream largely depends on how the video and audio sources attached to the stream are configured. The resolution and frame rate of the video source and the sampling rate of the audio source are just some of the adjustments that can be made to control data rates. Using the Camera and Microphone classes to control the amount of data sent in a stream is described in detail in Chapter 6. Before video data can be carried by the stream, it has to be captured from a device and compressed. The publishing system's CPU may not be able to process all the video data in real time. At high video resolutions, the system may have to drop video frames in order to avoid falling too far behind the video source. Even when the system can capture all the audio and video data, the stream may not have sufficient bandwidth available to carry all of it. Sending large amounts of ActionScript data can also adversely impact performance (later, we'll look at ways to minimize the bandwidth required for ActionScript data).

The RTMP protocol runs over a reliable TCP connection, which means that unlike UDP streaming protocols, any lost data packets are resent in order to guarantee they are delivered. When a stream contains more data than can be sent because of bandwidth limitations, data must be either held in a buffer until it can be sent or dropped at the source.

For live streams, when users want to communicate in real time, buffering is not a practical option. Therefore, for live streams, RTMP uses a scheme that does two things:

  • Prioritizes data into three classes so that audio receives the highest priority and is therefore delivered as soon as possible. Data is given second priority, and video the lowest priority.

  • Decides what data to drop if the client and network cannot keep up. Video data requires more bandwidth and is generally rated as less important than audio to most observers. Video data is therefore dropped before audio. Under extreme circumstances, if dropping video data does not reduce the amount of data to what the network can handle, audio data will also be dropped. ActionScript data is never dropped.
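The drop policy can be modeled conceptually like this. This is an illustrative model only, not the actual RTMP implementation; the syntax is plain ActionScript:

```javascript
// Conceptual model of RTMP's drop policy for live streams:
// video is dropped first, audio only under extreme overload,
// and ActionScript data is never dropped.
function classesToSend (overloaded, severelyOverloaded) {
  if (severelyOverloaded) return ["data"];
  if (overloaded)         return ["data", "audio"];
  return ["data", "audio", "video"];
}
```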

RTMP does not provide on-the-fly adjustments of video compression in order to dynamically adapt to bandwidth limitations. However, the NetStream.currentFps property returns the actual video frame rate being published, and the quality settings of the Camera object can be adjusted to change the level of video compression or even the video resolution.
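For example, a movie might poll currentFps and lower the Camera bandwidth cap when the published frame rate falls behind the capture rate. This is only a sketch, assuming out_ns and cam are set up as in the earlier examples:

```actionscript
// Sketch: every 5 seconds, compare the published frame rate with the
// capture rate set via cam.setMode(), and halve the bandwidth cap if
// publishing has fallen well behind.
function checkRate ( ) {
  if (out_ns.currentFps < cam.fps / 2) {
    // Passing 0 as the second argument lets frame quality vary
    // as needed to meet the bandwidth limit.
    cam.setQuality(cam.bandwidth / 2, 0);
  }
}
monitorID = setInterval(checkRate, 5000);
```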

As RTMP adapts to the difference between the amount of data it is trying to send and the capacity of the network, video may be delayed more than the audio, causing them to get out of sync. RTMP gives preference to delivering audio because it is important to hear a conversation with as little latency as possible. Audio latency and video synchronization were improved in FlashCom 1.5.

RTMP has to adapt to bandwidth twice during the publishing and playing of a stream. The Flash Player publishing the stream must adapt to the network bandwidth available between the Player and the FlashCom Server. When the server sends the stream on to each subscribing movie, it must also adapt to the bandwidth available to each subscriber. It is important to remember this when designing applications. An incoming high-bandwidth stream may arrive at the server without problems but lose most of its video frames on its way out to viewers. Stream bandwidth therefore must often be optimized for the slowest viewing connection and not for live publishing.

The amount of bandwidth any client is allocated can be capped on the server using Client.setBandwidthLimit( ) . See "Capping client bandwidth usage" later in this chapter for more details.

5.3.4. Buffering When Publishing

When recording streams, buffering is a good idea to minimize dropped video frames. Instead of dropping frames that cannot be sent immediately, they can be stored in a buffer until they can be sent. If the buffer overflows, video and audio are eventually dropped, but buffering helps to improve quality, especially when video data rates fluctuate depending on changes in the scene being captured. By default, a NetStream object being used for publishing has a buffer time of 0 seconds (no buffering is performed). To change it, use NetStream.setBufferTime( ) . For example, to set the buffer time to 2 seconds:

 out_ns.setBufferTime(2); 

During publishing, bufferLength returns the amount of data, in seconds, currently in the buffer. The bufferTime property returns the total buffer time. In practice, the buffer is always somewhat larger than the time set via setBufferTime( ); this allows the amount of data in the buffer to exceed the buffer time without any frames being immediately dropped. When the buffer time is exceeded, NetStream.onStatus( ) is passed an information object with a code value of "NetStream.Buffer.Full". This provides an opportunity to increase the buffer time, reduce the amount of data being sent into the stream, or stop recording altogether. If the buffer length reaches one and a half times the buffer time, all video messages are dropped. If it reaches two times the buffer time, all audio and video frames are dropped. When the buffer empties, a "NetStream.Buffer.Empty" message is sent to the onStatus( ) handler. See Example 5-2 and Example 5-5 for examples of responding to "NetStream.Buffer" messages.
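The thresholds just described can be modeled like this. This is an illustrative model, not FlashCom internals; the syntax is plain ActionScript:

```javascript
// Model of the publishing-buffer behavior: video is dropped once the
// buffer length reaches 1.5 times the buffer time, and audio as well
// once it reaches 2 times the buffer time.
function bufferAction (bufferLength, bufferTime) {
  if (bufferLength >= 2 * bufferTime)   return "drop audio and video";
  if (bufferLength >= 1.5 * bufferTime) return "drop video";
  return "send normally";
}
```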

5.3.5. Snapshots, Stop Motion, and Time-Lapse Video

FlashCom makes it possible to capture a single video frame, or snapshot, from a camera or video source, instead of capturing all the frames as they arrive from the source. The NetStream.attachVideo( ) method has an optional second parameter named snapShotMilliseconds. When the parameter is omitted, each frame from the source is sent into the stream according to the frame rate set using Camera.setMode( ). If a value of 0 is passed in, a single frame of video is placed in the stream. For example:

 out_ns.attachVideo(cam, 0); 

To be certain the frame is not dropped and has been sent to the server before the stream is closed, two things must be done. The NetStream.setBufferTime( ) method must be used to make sure the frame is buffered, and an onStatus( ) handler must be used to make sure the buffer is empty before closing the stream. Example 5-2 lists the essential code that sets up a stream to publish a snapshot and then takes one when the Record button is clicked. The complete listing is available on the book's web site.

Example 5-2. Taking buffered snapshots
// Called after a network connection is made.
function initProgram( ) {
  if (out_ns) out_ns.close( );
  out_ns = new NetStream(nc);
  out_ns.setBufferTime(2);
  out_ns.onStatus = function (info) {
    writeln(info.code);
    if (info.code == "NetStream.Buffer.Empty") {
      this.close( );
      in_ns.play("mugshot");
    }
  };
  if (in_ns) in_ns.close( );
  in_ns = new NetStream(nc);
  snapshot_video.attachVideo(in_ns);
  cam = Camera.get( );
  preview_video.attachVideo(cam);
  System.showSettings(0);
  record_btn.setEnabled(true);
  connect_btn.setLabel("Disconnect");
  in_ns.play("mugshot");
}

// Called when the user clicks the Record button.
function doRecord( ) {
  if (out_ns) out_ns.close( );
  out_ns.attachVideo(cam, 0);
  out_ns.publish("mugshot", "record");
}

In Example 5-2, whenever a network connection is created, the code creates a NetStream object, gives it a buffer time of 2 seconds, and creates an onStatus( ) event handler:

out_ns = new NetStream(nc);
out_ns.setBufferTime(2);
out_ns.onStatus = function (info) {
  writeln(info.code);
  if (info.code == "NetStream.Buffer.Empty") {
    this.close( );
    in_ns.play("mugshot");
  }
};

When the snapshot is taken, the buffer briefly holds some video information until it is successfully sent. When all the video data has been sent, the onStatus( ) handler receives an info.code value of "NetStream.Buffer.Empty". At that point, it can safely close the stream without the snapshot frame being lost. To take the actual snapshot, the code calls doRecord( ) to attach the video source to the stream and publish the stream:

out_ns.attachVideo(cam, 0);
out_ns.publish("mugshot", "record");

Shortly afterward, the buffer empties and the stream closes.

The snapShotMilliseconds parameter can also be used to create time-lapse videos, in which snapshots are taken at regular (usually long) intervals and then played back quickly. This technique can be used to speed up an otherwise slow process, such as watching a plant grow.

To make time-lapse video work, we cannot just publish a stream, leave it open, and send snapshots at regular intervals. The stream time progresses as long as the stream is being published, so when such a stream is played back, the frames change at the same intervals at which they were captured. For example, if a frame is taken every hour for 24 hours, the stream takes 24 hours to play back the 24 frames. To solve this problem, we repeatedly open the stream, capture a single frame, which is appended onto the end of the stream, and then close the stream:

out_ns.attachVideo(cam, 0);
out_ns.publish(streamName, "append");

When we play it back, the frames go by so quickly that we can't see most of them. To solve the problem, set the number of milliseconds each frame will last within the stream using the snapShotMilliseconds parameter, capture a frame, and then close the stream. For example, to play back at 10 frames per second, each frame must last 100 milliseconds:

out_ns.attachVideo(cam, 100);
out_ns.publish(streamName, "append");

The snapShotMilliseconds parameter therefore does two things: it provides a single frame from the video source and adds trailer information indicating how long the frame should be played back. Example 5-3 is a simplified version of a longer working example available on the book's web site. When the user clicks the Record button, the script captures a frame every 5000 milliseconds (5 seconds). When played back, each frame plays for 100 milliseconds (one-tenth of a second).

Example 5-3. Simplified time-lapse video example
out_ns = new NetStream(nc);
out_ns.setBufferTime(2);
out_ns.onStatus = function (info) {
  if (info.code == "NetStream.Buffer.Empty") {
    this.close( );
  }
};

function captureFrame ( ) {
  out_ns.attachVideo(cam, 100);
  out_ns.publish(streamName, "append");
}

function doRecord (btn) {
  if (btn.getLabel( ) == "Record") {
    if (intervalID) clearInterval(intervalID);
    out_ns.attachVideo(cam, 100);
    out_ns.publish(streamName, "append");
    intervalID = setInterval(captureFrame, 5000);
  }
  else if (intervalID) {
    clearInterval(intervalID);
  }
}

Stop-motion video can be done in a similar way. Instead of using setInterval( ) to capture a frame at regular intervals, a single frame can be captured and appended to the recorded stream whenever a button is clicked. Unlike normal video capture, every frame in a time-lapse or stop-motion video stream is a complete frame (a video keyframe, not a so-called difference frame that contains only the delta between two successive frames). Time-lapse and stop-motion videos will normally require much more bandwidth than regular video and should be buffered accordingly. Each application is likely to be different, but you should try experimenting with a buffer 10 times larger than you might otherwise have used.
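A stop-motion capture handler might look like this sketch, where capture_btn is a hypothetical button and out_ns, cam, and streamName are set up as in Example 5-3:

```actionscript
// Sketch: append one complete 100 ms frame each time the user
// clicks a Capture button.
capture_btn.onRelease = function ( ) {
  out_ns.attachVideo(cam, 100);
  out_ns.publish(streamName, "append");
};
```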



Programming Flash Communication Server
ISBN: 0596005040
Year: 2003
Pages: 203
