Configure Adobe Flash Media Server for Live HTTP Dynamic Streaming

How to set up Live HTTP Dynamic Streaming

So you want to stream a live event using HTTP Dynamic Streaming (HDS) and HTTP Live Streaming (HLS)? No problem. Adobe Media Server (AMS) provides an out-of-the-box solution for you. To do so, you’ll need to:

  1. Download and install Flash Media Live Encoder (FMLE)
  2. Make a small configuration change to the encoder
  3. Set up your live event
  4. Begin streaming
  5. Set up a player

Installing and configuring Flash Media Live Encoder

  1. Download FMLE from http://www.adobe.com/products/flash-media-encoder.html
  2. Once it is installed, open the config.xml file from:
    1. Windows: C:\Program Files\Adobe\Flash Media Live Encoder 3.2\conf
    2. Mac: /Applications/Adobe/Flash Media Live Encoder 3.2/conf/
  3. Locate the “streamsynchronization” tag under flashmedialiveencoder_config -> mbrconfig -> streamsynchronization and set the value for “enable” to “true”. The streamsynchronization node should look similar to the following:
    <flashmedialiveencoder_config>
     <mbrconfig>
       <streamsynchronization>
         <enable>true</enable>
       </streamsynchronization>
    ...
  4. Save and close the file.

Setting up the live event

Streaming a live event involves using the “livepkgr” application that comes installed with AMS. The livepkgr application comes with a preconfigured event named “liveevent”. We’ll use this as a template for our live event.

  1. On your server navigate to the {AMS_INSTALL}/applications/livepkgr/events/_definst_ directory.
  2. We’re going to call our event “myliveevent”. Create a new directory and name it “myliveevent”.
  3. Open the newly created myliveevent directory and create a new XML file named “Event.xml”. This file is used to configure the just-in-time (JIT) packaging settings for your HDS content. Add the following XML to the file. Note: You can also copy the Event.xml file from the liveevent directory that is set up by default. Just update the EventID to match the folder name.
    <Event> 
      <EventID>myliveevent</EventID> 
      <Recording> 
        <FragmentDuration>4000</FragmentDuration> 
        <SegmentDuration>16000</SegmentDuration> 
        <DiskManagementDuration>3</DiskManagementDuration> 
      </Recording> 
    </Event>

    For more information about the values in the Event.xml file, review Adobe’s documentation – link in the resources section below. In short, FragmentDuration and SegmentDuration are in milliseconds (with these values each 16-second segment holds four 4-second fragments), and DiskManagementDuration tells AMS how many hours of content to keep on disk.

  4. Save and close the file.
  5. Your event is now set up. You can reuse this event as often as you like, or create another directory for a different event name.

Begin streaming

Now we can start up FMLE and set it up to connect to our livepkgr application and begin streaming.

  1. In the left panel of FMLE make sure the “Video” and “Audio” sections are both checked.
  2. Video
    1. In the video section, set the format to be “H.264” and then click the button with the wrench icon.
    2. In the resulting pop-up window, make sure the settings match the following:
      1. Profile: Main
      2. Level: 3.1
      3. Keyframe Frequency: 4 seconds
        Live HTTP Dynamic Streaming H.264 Settings
    3. Click “OK” to close the pop-up window.
    4. In the “Bit Rate” section make sure you only have one of the bit rates selected. We’re only creating a single stream for now.
      Live HTTP Dynamic Streaming Video Encoder Settings
  3. Audio
    1. In the Audio section, set the format to “AAC”
      Live HTTP Dynamic Streaming Audio Encoder Settings
  4. In the right panel set “FMS URL” to point to your server and the livepkgr application:
    1. Example: rtmp://192.168.1.113/livepkgr
  5. Set the “Stream” value to be mylivestream?adbe-live-event=myliveevent
    1. “mylivestream” is the name of the stream and can be anything you’d like. The actual files that AMS creates will be stored in the livepkgr/streams/_definst_/mylivestream directory.
    2. “?adbe-live-event=myliveevent” tells the livepkgr application to use the Event.xml in the livepkgr/events/_definst_/myliveevent directory that we created.
      Live HTTP Dynamic Streaming RTMP Server Settings
  6. Click the “Connect” button. If all goes well, you’ll connect to your server. If not, check for typos in the “FMS URL” and “Stream” values and verify that your server is running and reachable.
  7. Click the big green “Start” button to begin streaming.
    Live HTTP Dynamic Streaming Big Green Start Button
  8. You now have a stream. Let’s see if we can get a player to play it back.

Setting up the player

Getting to the HDS content for your new stream involves requesting a URL that lets Apache (installed with AMS) know what we are looking for. The path will consist of the following parts:

  1. The protocol: http://
  2. The server location: 192.168.1.113/ (in my case, yours will be different)
  3. The Location that is configured to deliver live streams. By default these are:
    1. HDS: hds-live/
    2. HLS: hls-live/
  4. The application name: livepkgr/
  5. The instance name (we’ll use the default): _definst_
  6. The event name: myliveevent
  7. The stream name: mylivestream
  8. The file extension: .f4m for HDS or .m3u8 for HLS.

So if we put all of that together we’ll get a URL that looks like:

  • HDS: http://192.168.1.113/hds-live/livepkgr/_definst_/myliveevent/mylivestream.f4m
  • HLS: http://192.168.1.113/hls-live/livepkgr/_definst_/myliveevent/mylivestream.m3u8

Note: You may need to add port 8134 to the URL if you didn’t install AMS on port 80: http://192.168.1.113:8134/hds-live/livepkgr/_definst_/myliveevent/mylivestream.f4m
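
For reference, the manifest AMS returns for the HDS URL looks roughly like the following. This is a trimmed, illustrative sample – the element values and bootstrap IDs are generated by the server and will differ on your machine:

<manifest xmlns="http://ns.adobe.com/f4m/1.0">
    <id>livepkgr/events/_definst_/myliveevent/mylivestream</id>
    <streamType>live</streamType>
    <bootstrapInfo profile="named" url="mylivestream.bootstrap" id="bootstrap1"/>
    <media streamId="mylivestream" url="mylivestream" bootstrapInfoId="bootstrap1"/>
</manifest>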

  1. Open a browser window and navigate to that URL; you should see the F4M’s XML content, like the sample above.
    Live HTTP Streaming F4M XML
  2. Open the following URL: http://www.osmf.org/configurator/fmp/#
  3. Set your F4M url as the value for “Video Source”
  4. Select the “Yes” radio button for “Are you using HTTP Streaming or Flash Access 2.0?”
  5. Set “Autoplay Content” to “Yes”
    Live HTTP Dynamic Streaming Player Settings
  6. Click the Preview button at the bottom of the page.
  7. Congratulations. You are now streaming live media over HTTP.

To verify the HTTP streaming, open a tool that lets you inspect the HTTP traffic (something like Developer Tools or Firebug). You should see requests for resources like “mylivestreamSeg1-Frag52” and “mylivestream.bootstrap”. This is the player requesting HDS fragments, and Apache and AMS working together to package them just-in-time for the player.
Live HTTP Dynamic Streaming HTTP Traffic

Hopefully this provides you with some good information about Live HTTP Dynamic Streaming and clarifies some of the setup and configuration details. Please, if you have any questions, let me know in the comments or contact me.

Resources

Cleaning Up After Adobe Media Server Live HDS Events

After broadcasting a live event using Adobe Media Server (AMS 5) you’re left with a bunch of files (.control, .meta, .bootstrap, .f4f, .f4x). When you start the live event back up, things can get wonky and you won’t be able to use that live event again. This can be problematic if you have a regular live event that you want to broadcast and don’t feel like setting up a new live event each time, or don’t want a bunch of stale live events hanging around on your server.

Telling AMS to Clean Up

There is an easy solution – you can tell AMS to clean up after itself when the application is terminated, meaning that the files deposited in the /streams directory are removed and the event is ready for another broadcast. This saves space on your server and it saves you the headache of having to clean up the files manually.

The Clean-up Command

To get AMS to clean up the live event files, add the following directive to the application’s Application.xml file:

<ScriptEngine>
    <ApplicationObject>
        <config>
            <clearOnAppStop>true</clearOnAppStop>
        </config>
    </ApplicationObject>
</ScriptEngine>

After you restart the application, broadcast a live event, stop it, and unload the application, you should see that the files have been cleared from the streams directory.

NOTE: You can add this directive to the server level Application.xml, but I would suggest keeping it on the application level.

Configuring AMS for Real

For a default install of AMS using the “livepkgr” application, open the Application.xml file in the {AMS_INSTALL}/applications/livepkgr directory.

Add the following XML as a child of the <Application> node:

<ScriptEngine>
    <ApplicationObject>
        <config>
            <clearOnAppStop>true</clearOnAppStop>
        </config>
    </ApplicationObject>
</ScriptEngine>
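
In context, the saved Application.xml ends up shaped roughly like this (all other children of the <Application> node omitted):

<Application>
    <ScriptEngine>
        <ApplicationObject>
            <config>
                <clearOnAppStop>true</clearOnAppStop>
            </config>
        </ApplicationObject>
    </ScriptEngine>
</Application>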

Save and close the file.

Do a little live streaming and notice the files being created in the {AMS_INSTALL}/applications/livepkgr/streams/_definst_/livestream directory.

Before Clean Up

Stop broadcasting your live stream and either wait for the livepkgr application to unload or use the admin console to unload the application. As soon as the application is unloaded, the livestream directory should be removed from {AMS_INSTALL}/applications/livepkgr/streams/_definst_, effectively cleaning up your server for you.

Clean Up After

I thought this was a nice little find, what do you think?

Let me know in the comments.

OSMF Fix: “Local HDS” Playback

OSMF & Local F4M Files

The Open Source Media Framework (OSMF) doesn’t respond well when you pass it a local F4M file path for playback. Playing back local HDS fragments is an edge case, but enough people have encountered this problem that I think it deserves a little attention.

Fixing the IndexHandler

The basics involve extending HTTPStreamingF4FIndexHandler, overriding the constructFragmentRequest() method & removing the requirement that HTTP be part of the URI.

So, after you’ve created your new class that extends org.osmf.net.httpstreaming.f4f.HTTPStreamingF4FIndexHandler, add the following override:

[actionscript3]override protected function constructFragmentRequest(serverBaseURL:String, streamName:String, segmentId:uint, fragmentId:uint):String
{
    var requestUrl:String = "";
    requestUrl += streamName + "Seg" + segmentId + "-Frag" + fragmentId;
    return requestUrl;
}[/actionscript3]

The original constructFragmentRequest() method looks like:

[actionscript3]protected function constructFragmentRequest(serverBaseURL:String, streamName:String, segmentId:uint, fragmentId:uint):String
{
    var requestUrl:String = "";
    if (streamName.indexOf("http") != 0)
    {
        requestUrl = serverBaseURL + "/";
    }
    requestUrl += streamName + "Seg" + segmentId + "-Frag" + fragmentId;
    return requestUrl;
}[/actionscript3]

So you can see we just removed the “http” requirement.

Now you need to get OSMF to use your adjusted IndexHandler. This means you need to create a custom NetLoader that creates a custom streaming Factory, and the Factory in turn creates your IndexHandler. Once you have all of this done, you can create a custom MediaFactory (I usually just extend DefaultMediaFactory) and add a MediaFactoryItem that can handle your local F4M file resource, specifying your new NetLoader for the element that is returned.

Convoluted eh? Let’s break it down.

Creating the Factory

Since we are dealing with HDS content, extend org.osmf.net.httpstreaming.f4f.HTTPStreamingF4FFactory & override the createIndexHandler() method. In this method, set the indexHandler property to an instance of your IndexHandler (mine is named “LocalFileStreamingF4FIndexHandler”).

Example:
[actionscript3]override public function createIndexHandler(resource:MediaResourceBase, fileHandler:HTTPStreamingFileHandlerBase):HTTPStreamingIndexHandlerBase
{
    indexHandler = new LocalFileStreamingF4FIndexHandler(fileHandler);
    return indexHandler;
}[/actionscript3]

Creating a NetLoader

Now we need our NetLoader – this is where the Factory is created. I extended org.osmf.net.httpstreaming.HTTPStreamingNetLoader & created an override for the createHTTPStreamingFactory() method.

Example:
[actionscript3]override protected function createHTTPStreamingFactory():HTTPStreamingFactory
{
    return new LocalFileStreamingF4FFactory();
}[/actionscript3]

Creating a custom MediaFactory

Now we need to be able to inject this NetLoader into the system. So extend org.osmf.media.DefaultMediaFactory & add a MediaFactoryItem that returns a new VideoElement. Pass an instance of your NetLoader to the VideoElement constructor.

Example:
[actionscript3]localFileStreamingNetLoader = new LocalFileStreamingNetLoader();
localHDSFileNetLoader = new LocalHDSContentNetLoader();
addItem(new MediaFactoryItem("com.realeyes.osmf.elements.video.httpstreaming",
    localHDSFileNetLoader.canHandleResource,
    function():MediaElement
    {
        return new VideoElement(null, localHDSFileNetLoader);
    }
));[/actionscript3]

Specifying the canHandle and passing in the NetLoader

You can also see in the previous bit of code that the MediaFactoryItem asks for a canHandleResource function. This is where conditions are added to make sure you’re working with the right type of media. For this situation you can default to the super’s canHandleResource(), which checks the resource’s HTTP streaming metadata. If you do want an extra condition of your own, a rough sketch follows.
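
Here is a minimal sketch of what an extra condition in a custom NetLoader could look like. The file-URL test is an assumption for illustration – OSMF doesn’t require this particular check:

[actionscript3]// Hypothetical override in your custom NetLoader (illustration only)
override public function canHandleResource(resource:MediaResourceBase):Boolean
{
    // Let HTTPStreamingNetLoader check the HTTP streaming metadata first
    if (!super.canHandleResource(resource))
    {
        return false;
    }
    // Assumed extra condition: only claim resources on the local file system
    var urlResource:URLResource = resource as URLResource;
    return urlResource != null && urlResource.url.indexOf("file://") == 0;
}[/actionscript3]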

Now you have a setup that can handle a local F4M file and the downloaded fragments. There are a couple of considerations you’ll need to keep in mind:

  1. The fragments and F4M file must reside in the same directory. OSMF looks for the fragments in the directory the F4M is in.
  2. Multi-bitrate won’t work, unless you can download the fragments for all the bitrates and work out storage for them.

The provided example is the least obtrusive way to get local HDS content to play. Alternatively, you can edit HTTPStreamingIndexHandler directly and compile your own OSMF SWC for a quick and dirty fix.

HTTP Dynamic Streaming Content Download and Playback

Recently I’ve been working on a system to play back HTTP Dynamic Streaming (HDS) content locally, from a single file. If you have seen my previous post on HTTP Dynamic Streaming or are already familiar with it, you know that the media item is packaged in such a way that multiple segments and fragments make up the entire media item, similar to the image below:

A sample segment and its fragments

The system involves a client, an AIR application, requesting some remote HDS content (basically an F4M file). The client downloads the fragment data for the media item and writes it to disk. Instead of writing each fragment to a separate file, the fragment data is written to a single file. This part alone is pretty straightforward. The tricky part is playing the content back.

A few problems needed to be overcome to get playback to work. First, to get the local fragments to play back, I needed to fix an issue in the OSMF framework that only accounts for requests for remote HDS fragments. This was accomplished by overriding the HTTPStreamingIndexHandler class and removing some code that only accounted for “HTTP” being part of the request. Second, and more importantly, I needed to intercept the request for the HDS fragment that OSMF generates when playing back HDS content, use the request to determine where the fragment’s byte data exists in the local file that was created when the client downloaded the content, and then return this byte data to the rest of the OSMF code, which parses it into FLV data to pass to the appendBytes() method on the NetStream. A sketch of that bookkeeping follows.
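
To make that concrete, here is one way the bookkeeping could work. This is a purely illustrative sketch with names of my own invention, not the actual implementation: record each fragment’s byte range as it is written to the single file, then look the range up when playback asks for a “SegX-FragY” resource:

[actionscript3]// Hypothetical fragment index – maps "Seg1-Frag52" style keys to byte
// ranges within the single local file (illustration only)
private var fragmentIndex:Object = {};

// Called as each downloaded fragment is appended to the local file
private function recordFragment(key:String, offset:Number, length:uint):void
{
    fragmentIndex[key] = { offset: offset, length: length };
}

// Called when playback requests a fragment, e.g. "mylivestreamSeg1-Frag52";
// the returned bytes go back to OSMF, which parses them into FLV data
// for NetStream.appendBytes()
private function readFragment(key:String, stream:FileStream):ByteArray
{
    var entry:Object = fragmentIndex[key];
    var bytes:ByteArray = new ByteArray();
    stream.position = entry.offset;
    stream.readBytes(bytes, 0, entry.length);
    return bytes;
}[/actionscript3]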

On top of that, we wanted to allow playback while the fragments were still downloading. On OS X this wasn’t a huge deal because AIR on OS X can have multiple FileStreams open the same file. On Windows, the file is locked by the first FileStream that opens it. This is a problem because I want to write the downloaded fragment data to the file and read fragment data for playback at the same time. The issue was solved with a little utility library that uses only one FileStream instance and manages read and write requests by queuing them and allowing them to happen only one at a time – sketched below.
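
The utility boils down to something like the following sketch. It’s a simplified, synchronous version with an API I made up for illustration – the real library queues asynchronous operations:

[actionscript3]package
{
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.utils.ByteArray;

    // Simplified sketch: one FileStream shared for reads and writes,
    // with all access serialized through a queue
    public class QueuedFileStream
    {
        private var _stream:FileStream = new FileStream();
        private var _queue:Array = [];
        private var _busy:Boolean = false;

        public function QueuedFileStream(file:File)
        {
            // FileMode.UPDATE allows reading and writing through one stream,
            // sidestepping the Windows lock a second FileStream would hit
            _stream.open(file, FileMode.UPDATE);
        }

        // Queue a write of fragment bytes at a position in the file
        public function writeAt(position:Number, bytes:ByteArray):void
        {
            _queue.push({ type: "write", position: position, bytes: bytes });
            processNext();
        }

        // Queue a read; done(bytes:ByteArray) is called with the result
        public function readAt(position:Number, length:uint, done:Function):void
        {
            _queue.push({ type: "read", position: position, length: length, done: done });
            processNext();
        }

        private function processNext():void
        {
            if (_busy || _queue.length == 0)
            {
                return;
            }
            _busy = true;
            var op:Object = _queue.shift();
            _stream.position = op.position;
            if (op.type == "write")
            {
                _stream.writeBytes(op.bytes);
            }
            else
            {
                var result:ByteArray = new ByteArray();
                _stream.readBytes(result, 0, op.length);
                op.done(result);
            }
            _busy = false;
            processNext();
        }
    }
}[/actionscript3]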

It was a huge headache and lots of time was spent in the OSMF rabbit hole, but I now have a great file I/O library for AIR and I’m able to download and play back HDS content locally.

Sencha’s EXTJS 4: Getting Started Quickly

Getting started with Sencha’s EXTJS

  1. Download the Library – EXTJS 4.0.7: http://www.sencha.com/products/extjs/download/ext-js-4.0.7/1234
  2. Unzip the contents of the download to a folder in the root of your web server. I used /extjs. It should look something like the following image, where “extjs” is at the root of your web server:

    Screen Shot of Unpacked EXTJS Files
  3. Make sure your web server is running and open http://localhost/extjs/index.html in your browser.
  4. Click the View the Examples button.
  5. This is a good place to start to see what EXTJS is capable of.

At this point you’re ready to go. The Getting Started Guide has plenty of information to get you rolling and provide you with a decent knowledge set for EXTJS development.

You want to build something though right? So, let’s build.

  1. Create a directory named “extplay” in the root of your web server.
  2. In the extplay directory create a MyApp.html file.
    Usually EXTJS applications are contained in a single HTML file.
  3. Open the MyApp.html file in your text editor.
  4. Add the following HTML template code to your application:
    <!DOCTYPE HTML>
    <html>
        <head>
            <title>EXTJS Getting Started: MyApp</title>
            <link rel="stylesheet" type="text/css" href="/extjs/resources/css/ext-all.css">
            <script type="text/javascript" src="/extjs/ext-all.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body></body>
    </html>
  5. You will notice the app.js file referenced in the HTML above.
  6. Create the app.js file in your extplay directory and open it in your text editor.
  7. Now we can write some code for our simple application.
  8. Add the following code to the app.js file:
    Ext.onReady(function() {
     Ext.Msg.alert('Status', '<h1>Hello World!</h1>');
    });
  9. Open MyApp.html (http://localhost/extplay/MyApp.html) in your browser. You should see something similar to the following image:

    Sencha EXTJS: Message Box
    The EXTJS Message Box
  10. This is not a full-on EXTJS application – this just illustrates how you can use pieces and parts of EXTJS when and how you’d like.
  11. Let’s create an EXTJS application.
  12. Open app.js.
  13. Replace the code in app.js with the following code:
    Ext.application({
        name: 'MyApp',
        launch: function() {
            Ext.create('Ext.container.Viewport', {
                layout: 'fit',
                items: [
                    {
                        title: 'EXTJS: MyApp',
                        html : 'Hello World!'
                    }
                ]
            });
        }
    });
  14. Open MyApp.html in a browser or refresh the browser if it is already open.
  15. You should see something similar to the following:

    Sencha EXTJS: EXTJS Barebones Application
  16. Here is what is going on in the code example:
    1. Ext.application: Defines the application named “MyApp”
    2. launch: The built-in launch function creates a Viewport and sets its config object with a layout and items.
    3. layout: Tells the Viewport to fit to the size of its parent (in this case the body of the HTML)
    4. items: Defines the children of the Viewport – in this case, a panel with a title and some HTML.

Now you’ve got an EXTJS 4 application up and running. What do you think?

I’ll follow this post up with some more information on what really goes on in the application, creating and managing the “items” or children of your application & how Sencha handles dependencies and deployment.

Aptana SDOCML & Code Hinting Support

Prompted by this post, I’ve been working on a tool that uses jsduck and Adobe AIR to create the SDOCML necessary for Aptana code hinting. Specifically I’ve been playing with EXTJS 4.0 because of some interest in Sencha.

In the near future I’ll post the SDOCML files that I’ve created for the code hinting and once I get the tool looking presentable and working efficiently, I’ll post the source of that to GitHub.

Here is the current version of the file. I’m not completely sure it has everything it needs, so it is a work in progress and I’ll post updates as I create them.
extjs4.0-v0.1.sdocml

If you’d like the SDOCML for EXTJS 4.1 Beta
extjs4.1.beta-v0.1.sdocml

Take the PIA out of load testing AMF services

Testing service APIs is a big pain. More so when you are using a format that isn’t widely supported by testing tools. Load testing AMF services…ugh.

Recently I was tasked with finding a way to load test some AMF services built using PHP. In the past I had used jMeter for load testing, so that is where I started. jMeter is a good tool, but not for AMF. A few extensions for jMeter have been built for AMF, but they are a PIA to set up.

I found a couple of tools that have made load testing AMF services a snap. Both of the tools are from SmartBear, and both are open source and free to use. Bonus!

  1. Tool one: soapUI
  2. Tool two: loadUI

The following are the basic steps to get a simple load test set up:

Setting up the project

  1. Download and install soapUI (you’ll need Java too).
  2. Start up soapUI.
    soapUI Startup
  3. Create a new soapUI project – File -> ‘New soapUI Project’
  4. Name your project and click ‘OK’.
    soapUI New Project
  5. Add a new TestSuite – right-click your new project -> ‘New TestSuite’
  6. Name your TestSuite and click ‘OK’
    soapUI New TestSuite
  7. Create a new TestCase – right-click the new TestSuite -> ‘New TestCase’
  8. Name your TestCase and click ‘OK’
    soapui New TestCase
  9. Expand the Test Case
  10. Right-click ‘Test Steps’, select ‘Add Step’ -> AMF Request.
    soapui New AMF Request
  11. Your project is set up and ready-to-roll. Save it.

Configuring the AMF Request

Now that we have a request set up, we need to specify the arguments for the call. This is where I had trouble with jMeter – setting up the data required proxies and additional tools, and there was nowhere to easily create & edit AMF objects to pass along in the AMF request.

Enter soapUI. Let’s say we have an API method called getMonkeys(). getMonkeys() requires an array of ids that specifies which monkeys we want in the list. The name of this parameter is ‘monkeyIds’.

  1. In soapUI, right-click the AMF Request object and select ‘Open Editor’. You should see a window similar to the following:
    AMF Request Editor
  2. In the text field for ‘Endpoint’ enter your service end point. For example: http://www.thekuroko.com/Monkey/amfphp/gateway.php
  3. Enter the name of your call in the text field for the AMF Call setting. For example: Monkey.getMonkeys
  4. Just under the entry for Endpoint, add a property to the property list for the call by clicking the ‘Add Property’ button.
    New Property Button
  5. Enter ‘monkeyIds’ as the name of the property. If the value were a simple type we could enter it into the value column, but we need an array.
  6. To set the value for the property we’ll use the script window just under the property list.
  7. In the script window, enter the following to create an Array that contains the id values 1, 2 & 3 and assign that Array to the monkeyIds parameter:
    parameters["monkeyIds"] = [1,2,3];
  8. That is it. The call for getMonkeys() is set up.
  9. To test the call click the little green arrow in the top left of the AMF Request editor.
  10. If your paths and data are set up correctly, you should see an XML formatted response in the window to the right of the Script editor.
    soapUI AMF Request Result

Creating and Running a Load Test

So now we have a test for a service, but we wanted to get some load testing done. If you’re looking for quick and simple load testing, you don’t have to go much further than soapUI itself. To create a load test in soapUI:

  1. Right-click the ‘Load Tests’ node (a sibling of ‘Test Steps’ in your TestCase) -> ‘New Load Test’
  2. Name the new load test and click ‘OK’
  3. 2 steps. That’s it – the load test is set up. You can run the test as-is.

Now, this is a very simple load test, and there are a ton of things you can add within soapUI to build more useful load tests.

Running Load Test with loadUI

The other tool I mentioned, loadUI, is built to integrate with soapUI and make load testing “easier and more efficient”.

Once loadUI is installed, you can execute the TestCase you set up in soapUI from loadUI.

  1. Right-click the test case, then select ‘Run with loadUI’.
  2. You will be prompted to save the project, do so.
  3. Select ‘Fixed Rate’ for the ‘Default Generator’ setting – this determines how “clients” are generated for the load test.
  4. Select ‘Statistics’ for the ‘Default Statistics’ selection – this will display a graph for the load test metrics.
    loadUI Test Settings
  5. Click ‘OK’.
  6. loadUI will launch.
    loadUI Startup
  7. Click the ‘Play/Stop’ button to start the load test.
    loadUI Play/Stop Button

You can play around with the Generator settings to change the rate at which clients are created to see changes in the load test results while the load test is running.
loadUI Generator

loadUI Stats

To view a report of the results you can click the ‘Summary Report’ button in the top right of the loadUI interface.
loadUI Summary Report Button

This is just a simple load test and there are plenty of additional settings, assertions and analysis tools that can be added, adjusted and tweaked to improve the validity of the load tests.

Next Steps

Our next step is to integrate the tests into our Continuous Integration (CI) system. We use Jenkins and I saw this post about Automating loadUI and Jenkins (formerly Hudson). So, in theory it can be done. I’ll let you know what we get worked out on that end when we get there.

So far, I’m pretty excited about the two tools. They are very useful, and free to boot. Hey SmartBear – you really are smart, thank you – you rule.

Multi-screen is Sexy, even at Adobe Max

There are too many devices to consider when developing video players nowadays.

Don’t you wish you didn’t have to think about each and every device that Bob & Sally were using to play video?

We’ve got it covered with our Adobe Max 2011 BYOD Lab

“Video Player Development for Multiple Devices” .
(boring name right? I know, but still…)

WTF is BYOD? Stands for Bring Your Own Device – it just means you’ll need to bring your own devices to follow along. Here is what we’re looking at for setup:

We’ll go over concepts and code with you to take the tediousness out of developing video players.

Android? Yep.

iOS? Sure-nuff

Blackberry? Um….yeah, this is Flash baby – we got it.

We’ll show you how to AUTO-MAGICALLY detect the platform and screen size and let the player do the work for you.

If you’re attending our session, AWESOME. Thanks!

I’m sure you’d like to get going early right…I thought you might.

Go download the BYOD Lab Files

Hey, even if you’re not going to the session or Max, you’ll get some good sample files right?

While you’re at it, check out Jun Heider’s “Multiscreen Project Best Practices” session too.