Configure Adobe Flash Media Server for Live HTTP Dynamic Streaming

How to set up Live HTTP Dynamic Streaming

So you want to stream a live event using HTTP Dynamic Streaming (HDS) and HTTP Live Streaming (HLS)? No problem. Adobe Media Server (AMS) provides an out-of-the-box solution for you. To do so, you’ll need to:

  1. Download and install Flash Media Live Encoder (FMLE)
  2. Make a small configuration change to the encoder
  3. Set up your live event
  4. Begin streaming
  5. Set up a player

Installing and configuring Flash Media Live Encoder

  1. Download FMLE from http://www.adobe.com/products/flash-media-encoder.html
  2. Once it is installed, open the config.xml file from:
    1. Windows: C:\Program Files\Adobe\Flash Media Live Encoder 3.2\conf
    2. Mac: /Applications/Adobe/Flash Media Live Encoder 3.2/conf/
  3. Locate the “streamsynchronization” tag under flashmedialiveencoder_config -> mbrconfig -> streamsynchronization and set the value for “enable” to “true”. The streamsynchronization node should look similar to the following:
    <flashmedialiveencoder_config>
     <mbrconfig>
       <streamsynchronization>
         <enable>true</enable>
       </streamsynchronization>
    ...
  4. Save and close the file.
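
If you are scripting this setup, the edit in step 3 can be sketched in a few lines of Python. This is an illustration, not an Adobe-provided tool: the `enable_stream_sync` helper and the sample document below are hypothetical, and in practice you would read and write the real config.xml from the path in step 2.

```python
# Hypothetical helper: flip <enable> under streamsynchronization to "true".
import xml.etree.ElementTree as ET

def enable_stream_sync(xml_text):
    root = ET.fromstring(xml_text)  # root is <flashmedialiveencoder_config>
    node = root.find("./mbrconfig/streamsynchronization/enable")
    node.text = "true"
    return ET.tostring(root, encoding="unicode")

# Trimmed sample of the config structure shown in step 3.
sample = """<flashmedialiveencoder_config>
  <mbrconfig>
    <streamsynchronization>
      <enable>false</enable>
    </streamsynchronization>
  </mbrconfig>
</flashmedialiveencoder_config>"""

print(enable_stream_sync(sample))
```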

Setting up the live event

Streaming a live event involves using the “livepkgr” application that comes installed with AMS. The livepkgr application comes with a preconfigured event named “liveevent”. We’ll use this as a template for our live event.

  1. On your server navigate to the {AMS_INSTALL}/applications/livepkgr/events/_definst_ directory.
  2. We’re going to call our event “myliveevent”. Create a new directory and name it “myliveevent”.
  3. Open the newly created myliveevent directory and create a new XML file named “Event.xml”. This file is used to configure the just-in-time (JIT) packaging settings for your HDS content. Add the following XML to the file. Note: You can also copy the Event.xml file from the liveevent directory that is set up by default. Just update the EventID to match the folder name.
    <Event> 
      <EventID>myliveevent</EventID> 
      <Recording> 
        <FragmentDuration>4000</FragmentDuration> 
        <SegmentDuration>16000</SegmentDuration> 
        <DiskManagementDuration>3</DiskManagementDuration> 
      </Recording> 
    </Event>

    For more information about the values in the Event.xml  file you can review Adobe’s documentation – link in the resources section below.

  4. Save and close the file.
  5. Your event is now set up. You can reuse this event as often as you like, or create another one under a different event name.
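
The directory-and-file setup above can also be sketched in a few lines of script. This is just an illustration; `AMS_INSTALL` below is a placeholder for your actual install path.

```python
# Sketch of the event setup steps; AMS_INSTALL is a placeholder path.
import os

AMS_INSTALL = "./ams-install"  # placeholder, e.g. your real AMS install dir

event_dir = os.path.join(
    AMS_INSTALL, "applications", "livepkgr", "events", "_definst_", "myliveevent"
)
os.makedirs(event_dir, exist_ok=True)

# Same Event.xml contents as shown above.
event_xml = """<Event>
  <EventID>myliveevent</EventID>
  <Recording>
    <FragmentDuration>4000</FragmentDuration>
    <SegmentDuration>16000</SegmentDuration>
    <DiskManagementDuration>3</DiskManagementDuration>
  </Recording>
</Event>
"""

with open(os.path.join(event_dir, "Event.xml"), "w") as f:
    f.write(event_xml)
```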

Begin streaming

Now we can start up FMLE and set it up to connect to our livepkgr application and begin streaming.

  1. In the left panel of FMLE make sure the “Video” and “Audio” sections are both checked.
  2. Video
    1. In the video section, set the format to be “H.264” and then click the button with the wrench icon.
    2. In the resulting pop-up window, make sure the settings match the following:
      1. Profile: Main
      2. Level: 3.1
      3. Keyframe Frequency: 4 seconds
        Live HTTP Dynamic Streaming H.264 Settings
    3. Click “OK” to close the pop-up window.
    4. In the “Bit Rate” section make sure you only have one of the bit rates selected. We’re only creating a single stream for now.
      Live HTTP Dynamic Streaming Video Encoder Settings
  3. Audio
    1. In the Audio section, set the format to “AAC”
      Live HTTP Dynamic Streaming Audio Encoder Settings
  4. In the right panel set “FMS URL” to point to your server and the livepkgr application:
    1. Example: rtmp://192.168.1.113/livepkgr
  5. Set the “Stream” value to be mylivestream?adbe-live-event=myliveevent
    1. “mylivestream” is the name of the stream and can be anything you’d like. The actual files that AMS creates will be stored in the livepkgr/streams/_definst_/mylivestream directory.
    2. “?adbe-live-event=myliveevent” tells the livepkgr application to use the Event.xml in the livepkgr/events/_definst_/myliveevent directory that we created.
      Live HTTP Dynamic Streaming RTMP Server Settings
  6. Click the “Connect” button. If all goes well, you’ll connect to your server. If not, check to make sure there aren’t any typos in the values for “FMS URL” and “Stream”, and that your server is running and reachable.
  7. Click the big green “Start” button to begin streaming.
    Live HTTP Dynamic Streaming Big Green Start Button
  8. You now have a stream. Let’s see if we can get a player to play it back.

Setting up the player

Getting to the HDS or HLS content for your new stream involves requesting a URL that lets Apache (installed with AMS) know what we are looking for. The path consists of the following parts:

  1. The protocol: http://
  2. The server location: 192.168.1.113/ (in my case, yours will be different)
  3. The Location that is configured to deliver live streams. By default these are:
    1. HDS: hds-live/
    2. HLS: hls-live/
  4. The application name: livepkgr/
  5. The instance name (we’ll use the default): _definst_
  6. The event name: myliveevent
  7. The stream name: mylivestream
  8. The file extension: .f4m for HDS or .m3u8 for HLS.

So if we put all of that together we’ll get a URL that looks like:

  • HDS: http://192.168.1.113/hds-live/livepkgr/_definst_/myliveevent/mylivestream.f4m
  • HLS: http://192.168.1.113/hls-live/livepkgr/_definst_/myliveevent/mylivestream.m3u8

Note: You may need to add port 8134 to the URL if you didn’t install AMS on port 80: http://192.168.1.113:8134/hds-live/livepkgr/_definst_/myliveevent/mylivestream.f4m
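
Putting those parts together is simple string assembly. The helper below is hypothetical (it is not part of AMS), but it mirrors the URL structure described above:

```python
# Hypothetical helper that assembles the live HDS/HLS URL from its parts.
def live_url(server, stream, event, fmt="hds", port=None):
    location = {"hds": "hds-live", "hls": "hls-live"}[fmt]
    ext = {"hds": "f4m", "hls": "m3u8"}[fmt]
    host = "%s:%d" % (server, port) if port else server
    return "http://%s/%s/livepkgr/_definst_/%s/%s.%s" % (
        host, location, event, stream, ext
    )

print(live_url("192.168.1.113", "mylivestream", "myliveevent"))
# -> http://192.168.1.113/hds-live/livepkgr/_definst_/myliveevent/mylivestream.f4m
```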

  1. Open a browser window and navigate to that URL; you should see the F4M’s XML content.
    Live HTTP Streaming F4M XML
  2. Open the following URL: http://www.osmf.org/configurator/fmp/#
  3. Set your F4M URL as the value for “Video Source”
  4. Select the “Yes” radio button for “Are you using HTTP Streaming or Flash Access 2.0?”
  5. Set “Autoplay Content” to “Yes”
    Live HTTP Dynamic Streaming Player Settings
  6. Click the Preview button at the bottom of the page.
  7. Congratulations. You are now streaming live media over HTTP.

To verify the HTTP streaming, open a tool that will let you inspect the HTTP traffic (something like Developer Tools or Firebug). You should see requests for resources like “mylivestreamSeg1-Frag52” and “mylivestream.bootstrap”. This is the player requesting HDS fragments, and Apache and AMS working together to package them just in time for the player.
Live HTTP Dynamic Streaming HTTP Traffic

Hopefully this provides you with some good information about Live HTTP Dynamic Streaming and clarifies some of the setup and configuration details. Please, if you have any questions, let me know in the comments or contact me.

Resources

OSMF Custom Media Elements

OSMF Video Sample

A good argument for using a framework is the ability to extend its built-in capabilities. For example, there was a comment on the ‘Getting Started with OSMF Plugins‘ post that asked about using embedded images in the WatermarkPlugin sample.

Here are the steps that I took to get an embedded asset (instead of a ‘loadable’ asset) to show as a watermark:

1. Create a new class that extends MediaElement (this is a simple element, but you could extend any existing element depending on your needs). I named mine StaticImageElement.

[actionscript3]
package com.realeyes.osmf.plugin.element
{
public class StaticImageElement extends MediaElement
{

}
}
[/actionscript3]

2. Add a private Bitmap property with a getter and setter to the class – I named mine _bitmap.

[actionscript3]
private var _bitmap:Bitmap;

public function get bitmap():Bitmap
{
return _bitmap;
}

public function set bitmap( value:Bitmap ):void
{
if( value != _bitmap )
{
_bitmap = value; // assign the backing variable (not the setter) to avoid infinite recursion
}
}
[/actionscript3]

3. In the setter for the bitmap property add the DisplayObjectTrait to the StaticImageElement

[actionscript3]
addTrait( MediaTraitType.DISPLAY_OBJECT, new DisplayObjectTrait( _bitmap as DisplayObject, bitmap.width, bitmap.height ) );
[/actionscript3]

4. The completed class is pretty simple because we get to use everything already created for OSMF.

[actionscript3]
package com.realeyes.osmf.plugin.element
{
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.DisplayObject;

import org.osmf.media.MediaElement;
import org.osmf.traits.DisplayObjectTrait;
import org.osmf.traits.MediaTraitType;

public class StaticImageElement extends MediaElement
{
private var _bitmap:Bitmap;

public function StaticImageElement()
{
super();
}

public function get bitmap():Bitmap
{
return _bitmap;
}

public function set bitmap( value:Bitmap ):void
{
if( value != _bitmap )
{
_bitmap = value;

addTrait( MediaTraitType.DISPLAY_OBJECT, new DisplayObjectTrait( _bitmap as DisplayObject, bitmap.width, bitmap.height ) );
}
}
}
}
[/actionscript3]

5. Create the embedded asset in the WatermarkPluginElement class
[actionscript3]
[Embed( "/assets/osmf_logo.png" )]
protected static const OSMF_LOGO:Class;
[/actionscript3]

6. Now all we need to do in the WatermarkProxyElement is set the bitmap property on a new instance of StaticImageElement instead of creating an ImageElement with the watermark URL and an ImageLoader.
Before:

[actionscript3]
var watermark:ImageElement = new ImageElement( new URLResource( watermarkURL ), new ImageLoader() );
[/actionscript3]

After:

[actionscript3]
var watermark:StaticImageElement = new StaticImageElement();
watermark.bitmap = new OSMF_LOGO();
[/actionscript3]

Bonus points for developing with a framework – more specifically OSMF! The embedded watermark shows up.

Download the original sample code:
[dm]10[/dm]

UPDATE: I’ve created an additional custom MediaElement called InteractiveImageElement.as. Thanks for the idea @cucu_adrian! The new element handles rollover and rollout by adjusting the image’s alpha property and setting the cursor to a button cursor. It also navigates to a URL specified in the class – this would be an easy thing to make configurable though.
[dm]11[/dm]

Getting started: OSMF Plugins

In the past, media players have had a problem: they all have custom implementations. When building a media player, the developer encounters multiple services to load content, track analytics & quality-of-service information, as well as handle advertising & custom user interactions, among other things. The Open Source Media Framework (OSMF) provides a solution to these issues by introducing a flexible plugin system that can integrate multiple plugins to solve these problems without the media player developer needing to write a bunch of extra code.

What is an OSMF plugin?

An OSMF plugin is an easily distributable extension for an OSMF-based player that can unobtrusively provide or adjust the functionality of that player. At its most basic, a plugin is a class or set of classes (static plugin) or a SWF file (dynamic plugin) that is loaded into the media player by the MediaFactory. The plugin is built to a specific API that adheres to a contract that OSMF has created, so the media player knows which plugins are interested in which MediaElements and can pass them the necessary data for the plugin to do its job.

Types of plugins

There are 3 types of plugins that you can create, and each one has a different purpose: Standard, Proxy & Reference.

  1. Standard plugins are responsible for creating either built-in or custom MediaElements.
  2. Proxy plugins are responsible for changing MediaElements that have been created by the MediaFactory.
  3. Reference plugins are responsible for adding functionality to the MediaElements created by the MediaFactory.

*NOTE: The MediaFactory must be used for plugins to work correctly. When a MediaElement is created by the MediaFactory, the MediaFactory references a list of MediaFactoryItems defined in the PluginInfo class to be notified if a plugin can handle a specific media resource.

Building an OSMF plugin

Initially I like to develop a plugin as a static or class-based plugin. This allows for easier debugging and keeps things simpler when testing and/or refactoring. And it is pretty easy to convert a static plugin to a dynamic plugin, so don’t be worried about that.

The PluginInfo class is the first class you will need to be familiar with when building plugins. This class provides the API and information for the MediaFactory to access and provide the necessary data for the plugin to function correctly.

To create your PluginInfo class, you will need to extend OSMF’s PluginInfo class. The definitions for what MediaElements a plugin can handle and how, as well as what type of plugin is being loaded, are specified using a collection of MediaFactoryItems. In the constructor of your PluginInfo class you will need to define a MediaFactoryItem for each type of MediaElement you plan to handle and/or create in your plugin and add it to a Vector.&lt;MediaFactoryItem&gt; that is passed to the super’s constructor.

[actionscript3]package com.realeyes.osmf.plugin
{
import com.realeyes.osmf.plugin.element.WatermarkElement;

import org.osmf.media.MediaElement;
import org.osmf.media.MediaFactoryItem;
import org.osmf.media.MediaResourceBase;
import org.osmf.media.PluginInfo;
import org.osmf.metadata.Metadata;
import org.osmf.net.NetLoader;

public class WatermarkPluginInfo extends PluginInfo
{
public function WatermarkPluginInfo()
{
// Add MediaFactoryItems
var items:Vector.<MediaFactoryItem> = new Vector.<MediaFactoryItem>();

var loader:NetLoader = new NetLoader();
items.push( new MediaFactoryItem(
"com.realeyes.osmf.plugins.WatermarkPlugin",
canHandleResource,
createWatermarkElement
) );

super( items );
}

public function canHandleResource( resource:MediaResourceBase ):Boolean
{
return true;
}

public function createWatermarkElement():WatermarkElement
{
// Create the watermark element
var newWatermarkElement:WatermarkElement = new WatermarkElement();
return newWatermarkElement;
}
}
}[/actionscript3]

The sample class above shows the basics of defining a Standard plugin that creates a WatermarkElement for any type of resource. The constructor defines a single MediaFactoryItem that creates a WatermarkElement via the createWatermarkElement() method. This plugin will be used for any type of MediaElement that the MediaFactory creates because the canHandleResource() method returns true. If we wanted this plugin to only handle RTMP streams we could adjust the canHandleResource() method to look something like the following:

[actionscript3]
public function canHandleResource( resource:MediaResourceBase ):Boolean
{
var canHandle:Boolean;
var urlResource:URLResource = resource as URLResource;
// indexOf() returns 0 for a match at the start of the string,
// so compare against -1 instead of relying on truthiness
if( urlResource && urlResource.url.indexOf( "rtmp" ) != -1 )
{
canHandle = true;
}

return canHandle;
}
[/actionscript3]

The WatermarkElement is then returned to the MediaFactory and passed on to be handled by the media player.

Standard Plugin details

Standard plugins are used to create MediaElements. This means that they should concentrate on the resource being passed in and create the appropriate MediaElement based on that resource.

A proxy plugin

If we wanted to create a Proxy plugin we can keep the same PluginInfo class, but we’ll need to add a 4th parameter to the MediaFactoryItem to let the MediaFactory know that it will be working with a proxy plugin and to call the proxiedElement setter on the WatermarkElement created via the createWatermarkElement() method. This would look like:

[actionscript3 highlight="11"]
public function WatermarkPluginInfo()
{
// Add MediaFactoryItems
var items:Vector.<MediaFactoryItem> = new Vector.<MediaFactoryItem>();

var loader:NetLoader = new NetLoader();
items.push( new MediaFactoryItem(
"com.realeyes.osmf.plugins.WatermarkPlugin",
canHandleResource,
createWatermarkElement,
MediaFactoryItemType.PROXY
) );

super( items );
}
[/actionscript3]

Proxy Plugin details

Proxy plugins allow a developer to non-invasively alter the MediaElement’s behavior. For example, you could use a proxy plugin to disable the seek or pause functionality in a media stream. You can also use proxy plugins to alter the type of MediaElement after the plugin is loaded. In this case you could create a SerialElement, add a pre-roll VideoElement to the serial element and then the original MediaElement for simplified pre-roll advertising.

A reference plugin

A reference plugin is created by adding a media element creation notification method to the plugin and then passing that method to the super() call along with the MediaFactoryItems collection. These additions would change the PluginInfo class to look like:

[actionscript3]
public function WatermarkPluginInfo()
{
// Add MediaFactoryItems
var items:Vector.<MediaFactoryItem> = new Vector.<MediaFactoryItem>();

var loader:NetLoader = new NetLoader();
items.push( new MediaFactoryItem(
"com.realeyes.osmf.plugins.WatermarkPlugin",
canHandleResource,
createWatermarkElement
) );

super( items, mediaElementCreated );
}

protected function mediaElementCreated( element:MediaElement ):void
{
trace( "MediaElement created!" );
}
[/actionscript3]

Reference Plugin Details

Reference plugins rely on having a reference to the MediaElements created by the MediaFactory. Unlike the proxy plugin, which receives the created MediaElements only after it has been loaded, a reference plugin receives a notification for each MediaElement that has been created, even those created before the plugin was loaded. Reference plugins are good containers for tracking as well as interaction control.

The Power of OSMF Plugins

The Open Source Media Framework & the plugin system provide a flexible and powerful solution to some common problems that media players have had in the past – the need to communicate with many disparate services and content providers, and the custom code implementations around those communications. OSMF makes it easy to integrate plugins that have been built by these service and content providers, as well as create plugins that extend the functionality of players built using OSMF.

Below is the presentation that I gave with David Hassoun to the Rocky Mountain Adobe Users Group on August 8th, 2010 about OSMF & Plugins.

Resources

Sample Code Download

[dm]10[/dm]

Quick fun with AIR & Dailymugshot.com

So I’ve been playing with DailyMugShot.com for the past couple of months. DailyMugShot is just that – you take one picture of your mug every day. Well, I wanted all my mugshots, and there wasn’t a direct way of downloading them from the site. They have an RSS feed for your shots, but it only shows the current picture.

They do have a little Flash-based badge that you can post to your site.

So with some hunting around in the Firebug output I found where the little Flash piece calls a service for the sequence of images. The data service is simple XML (Yay!), and I like ActionScript 3 and XML. So, I wrote an AIR app that downloads all my mugshot images. It is really basic – URLs and final file locations are all hard-coded – but it was a fun 45 minutes, it worked like a charm, and I have all my past mug shots.

Here are a few of my favorites:
02-03-2009, 02-18-2009, 02-24-2009, 03-14-2009, 03-17-2009, 04-06-2009

Update: If you’d like to play with dailymugshot.com, I’ve compiled an AIR application that will download all the images for a given user ID (you can get the user ID from the slide show page URL):

[dm]9[/dm]

Adobe AIR – Issues with Command Line Arguments

After working on a little automation tool for a video encoding process, we ran into an interesting issue with AIR applications and command line arguments. Here is the scenario:

  1. Encoding process ends.
  2. The encoding process passes a file path to the waiting AIR application via command line.
  3. If the AIR app is not running, it starts up.
  4. The AIR application then checks some data in a database, updates some tracking info, and possibly grabs the duration out of the file.
  5. The AIR app waits for some more input.

Here is the issue – when the application starts up via the command line call, subsequent calls to the AIR application fail. Our solution: the AIR app has to be running when the OS starts up – that way the initial command line call to start the application doesn’t hold the process.

The command line looks something like this on Windows:
[vb]
"C:/Program Files/ServerApplication/ServerApplication.exe" "D:/my/storagedir/vidfile.f4v"
[/vb]

The command line looks something like this on a Mac:
[vb]
/Applications/ServerApplication.app/Contents/MacOS/ServerApplication "D:/my/storagedir/vidfile.f4v"
[/vb]

There has to be some way to start the application via the command line without holding everything up right? What am I missing?

Here is what I’m missing:
The new command line looks something like this on Windows (added the /b option):
[vb]
"C:/Program Files/ServerApplication/ServerApplication.exe" /b "D:/my/storagedir/vidfile.f4v"
[/vb]

The new command line looks something like this on a Mac (added the ‘&’ at the end):
[vb]
/Applications/ServerApplication.app/Contents/MacOS/ServerApplication "D:/my/storagedir/vidfile.f4v" &
[/vb]

Now our little automation AIR tool doesn’t need to be running when the first call happens – it will actually start up – and it can stay open and successfully receive new command line arguments.
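
The same fire-and-forget idea applies when a script of your own is the thing invoking an application: launch the process and return immediately instead of waiting on it. A minimal Python sketch (not AIR-specific; the example child process here is just the Python interpreter):

```python
# Launch a process without blocking on it -- the same idea as the /b flag
# on Windows and the trailing '&' on a Mac.
import subprocess
import sys

def launch_detached(executable, *args):
    # Popen returns immediately; the child keeps running on its own.
    return subprocess.Popen(
        [executable] + list(args),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

# Example: start a short-lived child process and carry on.
proc = launch_detached(sys.executable, "-c", "pass")
```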

Yammer and SVN post-commit hooks

If you don’t know what Yammer is, it is a Twitter-like communication tool for your company:

Yammer is a tool for making companies and organizations more productive through the exchange of short frequent answers to one simple question: “What are you working on?”

What’s nice about Yammer is that it is an internal tool that lets you quickly communicate with everyone (well, anyone listening) in your company. Answers to questions come quickly and from the appropriate party without much effort on either end, and notifications to everyone are a snap.

The notifications are what got me thinking about Subversion and post-commit hooks. Subversion provides hooks that allow you to trigger scripts based on a repository event. So, I set up a script that retrieves information about the latest commit to the repository, formats an email, and sends it to Yammer, where it is published. Now, when someone commits to the repository, everyone is automatically notified without the developer having to write an email and send it to everyone that needs to know about it.

Another unforeseen benefit of this system is that everyone has gotten much better with their SVN commit comments. I would imagine this is because they get instant feedback about inadequate comments when everyone sees them in Yammer.

On to the resources – SVN hooks are pretty easy to implement and provided by default. They reside in each repository you create, in a ‘hooks’ directory: {SVN_ROOT}/{REPOSITORY}/hooks. There is a provided template for each type of hook that SVN supports. The script can be any type of script (shell scripts, Python scripts, etc.); it just needs to have the same name as the supplied template file. For our Yammer script I dusted off the .bat script skills to retrieve the commit details and send an email. To send the email I downloaded Blat. Finally, we created an email address for our SVN user and a Yammer account using that email address.

So here is the list of what we have so far:

  • SVN Repository and access to the hooks directory
  • Email address for the SVN user
  • Yammer account using the SVN user’s email address
  • Some way to send an email (Blat)
  • SVN post-commit hook script (post-commit.bat)

Now on to the contents of the script – the post-commit hook receives 2 arguments: the path of the repository and the revision number. The script uses svnlook with the repository path and revision to retrieve the details (message and author) of the commit. Then, using the commit details, the script creates a text file that Blat uses as the email body and sends the email to Yammer.

Here is the actual script (names have been changed to protect the innocent):
[code]
@echo off

:::::::::::::::::::::::::::::::::::::::::::::::::::::
::: ARGUMENTS :::::::::::::::::::::::::::::::::::::::
SET REPOS=%1
SET REV=%2

:::::::::::::::::::::::::::::::::::::::::::::::::::::
::: GENERAL INFO ::::::::::::::::::::::::::::::::::::
SET DIR=%REPOS%/hooks
SET MESSAGE_FILE=%DIR%/message.txt

:::::::::::::::::::::::::::::::::::::::::::::::::::::
::: SVN INFO ::::::::::::::::::::::::::::::::::::::::
SET DIR=%REPOS%/hooks
SET REPO_PATH=file:///%REPOS%

::: Get the author ::::::::::::::::::::::::::::::::::
For /F "Tokens=*" %%I in ('svnlook author %REPOS% -r %REV%') Do Set author=%%I

::: Get the log message ::::::::::::::::::::::::::::::::::
For /F "Tokens=*" %%I in ('svnlook log %REPOS% -r %REV%') Do Set log=%%I

::: Set the message body ::::::::::::::::::::::::::::::::::
ECHO Commit - rev %REV% (#%author%): '%log%' - %REPOS% > %MESSAGE_FILE%

:::::::::::::::::::::::::::::::::::::::::::::::::::::
::: EMAIL INFO ::::::::::::::::::::::::::::::::::::::

set to=-to yammer@yammer.com

set subj=-s "SVN Commit (Revision %REV%)"

set server=-server mail.domain.com

set debug=-debug -log blat.log -timestamp

set auth=-u email@domain.com -pw yourpasswordhere

set from=-f email@domain.com

:::::::::::::::::::::::::::::::::::::::::::::::::::::
::: SEND THE EMAIL ::::::::::::::::::::::::::::::::::
C:/pathtoyourrepos/_tools/blat/blat %MESSAGE_FILE% %server% %to% %from% %subj% %auth% %debug%
[/code]
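
If batch files aren’t your thing, the same hook logic can be sketched in Python. This is an illustration rather than the script we used; the svnlook calls mirror the batch script above, and the actual email step is left as a comment because server details vary.

```python
# Sketch of a post-commit hook in Python: gather commit details with
# svnlook, then format the one-line message the batch script echoes.
import subprocess

def commit_details(repos, rev):
    # svnlook must be on PATH; returns (author, log message)
    author = subprocess.check_output(
        ["svnlook", "author", repos, "-r", rev], text=True
    ).strip()
    log = subprocess.check_output(
        ["svnlook", "log", repos, "-r", rev], text=True
    ).strip()
    return author, log

def format_message(repos, rev, author, log):
    return "Commit - rev %s (#%s): '%s' - %s" % (rev, author, log, repos)

# body = format_message(repos, rev, *commit_details(repos, rev))
# ...then send body to your Yammer address, e.g. with smtplib.
```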

Or you can download the script:

[dm]4[/dm]