I recently acquired an inexpensive tool that’s proven invaluable for developing and testing dynamic streaming with a Flash media server (or Wowza, or whatever multi-rate streaming solution you might use). For a long time, I had a hard time simulating suboptimal network conditions so I could test how a media player designed to adapt to those conditions would behave. The best I could do was IM a video link to some friends with bad internet connections and see whether they could get it to play under different configurations. (Thanks especially go out to Lauren, whose apartment has ubiquitous Wi-Fi that ran at a crawl because some drunk butt-head smashed up all the antennas.)
I’d been looking into whether I could set up some kind of emulated machine (like VirtualBox or VMware) to test on rigorously, something that would let me dynamically re-allocate network resources to the emulator. It turns out there’s already a product that does precisely what I need and more, descriptively named NetLimiter.
NetLimiter is developed by a Czech company called Locktime Software, which sells a single-user license for the current Pro version for $30. The basic feature of the application is that it monitors how much bandwidth each application on your system is consuming. I think the free version does just that, while the Pro version also lets you set limits on how much bandwidth each process running on your machine can consume, and you can change those limits on the fly.
That’s useful for developing and testing dynamic streaming implementations on both sides of the RTMP pipe. You can load a player that’s playing a multi-rate stream, then throttle the available bandwidth down and up to observe how smoothly the player adapts, and at what connection quality the video fails to play entirely. On the other end (and I haven’t yet done this myself), you could set up an FMS development version on your workstation, publish a multi-rate stream, and connect to it from as many clients on other machines as the dev license allows. Then, by throttling down the bandwidth available to the FMS process publishing the stream, you could simulate what would happen if your production server hit its bandwidth ceiling: presumably, some connections would drop down to lower-bitrate streams to free up network resources.
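To make the client-side behavior concrete, here’s a minimal sketch (in Python, purely illustrative; this is not FMS’s or any player’s actual algorithm) of the kind of rendition-switching heuristic a multi-rate player applies as you throttle bandwidth up and down. The rendition list, the `pick_rendition` function, and the headroom factor are all my own assumptions for the example:

```python
# Hypothetical bitrate-switching heuristic for a multi-rate player.
# The two renditions match the encodes mentioned later in this post.
RENDITIONS_KBPS = [350, 1500]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the highest rendition that fits within the measured
    bandwidth, leaving some headroom so playback doesn't stall.
    Always falls back to the lowest rendition rather than failing."""
    usable = measured_kbps * headroom
    best = RENDITIONS_KBPS[0]  # lowest stream is the safety net
    for rate in RENDITIONS_KBPS:
        if rate <= usable:
            best = rate
    return best
```

With a limiter like NetLimiter you can walk the measured bandwidth through this logic in practice: `pick_rendition(2000)` selects the 1500kbps stream, while throttling down to `pick_rendition(600)` drops the player to 350kbps.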
That’s what we figured it would do in theory under extremely high-traffic situations, and we believe we saw it in practice during the Titan Arum stream. I’ll have more on the details of that in a few days once the pseudo-time-lapse is published, but for the purposes of this discussion: we were sending out two streams, one at 1500kbps (960x540, or half of full HD resolution) and one at 350kbps (640x360), and we were saving both streams to disk for the time-lapse. When the flower was opening, viewership hit around 1200 simultaneous connections, most on fast university or corporate networks that could support the 1.5mbps stream. I loaded the video on a few computers around the office, and all of them were picking up the 350kbps stream. And that is exactly how it was intended to work: reliably.