It’s that time of year when my domain name renewal comes around again. Over the past year, I’ve written a total of 3 posts. Not exactly getting my money’s worth.
I’ve had some ideas I’ve been itching to get out, though, and think I might take up the old writing hobby again, so thehotsign.com and liam-moran.com are mine to stay.
In the meantime, check out A Lion Eye, a blog I discovered the new-fashioned way: it was cited by a local newspaper columnist. There you will find excellent Illinois Basketball analysis… The best we’ve had since John Gasaway went pro. I cannot give a higher compliment than that.
I recently acquired an inexpensive tool that’s proven invaluable for development and testing of dynamic streaming using a Flash media server (or Wowza, or whatever multi-rate streaming solution you might use). For a long time, I had a hard time simulating suboptimal network conditions so I could test how a media player designed to adapt to those conditions would behave. The best I could do was to IM some friends with bad internet connections a link to a video and see if they could get it to play under different configurations. (Thanks especially go out to Lauren, whose apartment has ubiquitous wi-fi which ran at a crawl because some drunk butt-head smashed up all the antennas.)
I’d been looking into whether I could set up some kind of virtual machine (in VirtualBox or VMware) to test rigorously on, something whose network resources I could dynamically re-allocate. It turns out there’s already a product out there that does precisely what I need, and more directly: it’s descriptively called Net Limiter.
Net Limiter is developed by a Czech company called Locktime Software, which sells a single-user license for the current Pro version for $30. The basic feature of the application is that it monitors how much bandwidth each application on your system is consuming. I think the free version does just that, while the Pro version lets you set limits on how much bandwidth each process running on your machine can consume, and you can change those limits on the fly.
That’s useful for development and testing of dynamic streaming implementations on both sides of the RTMP pipe. You can load a player playing a multi-rate stream and throttle the available bandwidth down and up to observe how smoothly the player adapts and at what connection quality the video fails to play entirely. On the other end (and I haven’t yet done this myself), you could set up an FMS development version on your workstation, publish a multi-rate stream, then connect to it from as many other clients on different machines as the dev license allows. If you then throttle down the bandwidth to the FMS process that publishes the stream, you can simulate what would happen if your production server were to hit its bandwidth limit, presumably dropping some connections down to lower-bitrate streams to free up network resources.
That’s what we figured it would do, in theory, under extremely high-traffic situations, and we believe we saw it in practice during the Titan Arum stream. I’ll have more on the details of that in a few days once the pseudo-time-lapse is published, but for the purposes of this discussion, we were sending out two streams, one at 1500kbps (960x540, or half of full HD resolution in each dimension) and one at 350kbps (640x360), and we were saving both streams to disk for the time-lapse. When the flower was opening, viewership hit around 1200 simultaneous connections, most on fast University or corporate networks that would support the 1.5Mbps stream. I loaded the video on a few computers around the office and all were picking up the 350kbps stream. And that is exactly how it was intended to work: reliably.
I’ll let you in on my secret dry rub ingredient: the tomato bouillon powder you can find at your local Mexican grocer (caldo de tomate). When the fat renders out into the rub (I cake it on thick), it’ll turn into a very tasty bbq sauce. The rest of my rub is pretty much improvised: lots of garlic and onion powder, then enough brown sugar and various dried hot pepper powders or crumbles to balance the sweetness.
I’ve used Subtitle Workshop and recommended it to others as a good open-source tool for producing subtitles or captions for video. A problem with the software is that it doesn’t export the Timed Text DFXP XML caption files that are pretty standard for web video, both in Flash and for the jQuery-based caption handler I’ve seen used for HTML5 video.
Not being able to export TTML isn’t really a problem for me, since years ago I wrote a suite of scripts to translate captions and subtitles between all the different formats I run into. I took a few minutes today to make a Custom Format Profile for TTML that you can download here, if you’re so inclined.
Save that to the CustomFormats directory where Subtitle Workshop is installed (on a Windows machine, that’s in one of the Program Files directories, under URUSoft/Subtitle Workshop/). When you go to save the captions you made, click on Custom Formats, hit the Load Project button, and pick the DFXP_XML.cfp profile that you saved there. The file it writes should look roughly like the sketch below.
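For reference, here’s a minimal skeleton of the kind of DFXP/Timed Text file the profile is meant to produce. The caption text and timestamps are placeholders, and the exact namespace your player expects may differ (older Flash caption components used the 2006 ttaf1 namespace rather than the final TTML one shown here):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <!-- each p node is one caption: begin/end are timestamps, the node value is the text -->
      <p begin="00:00:01.000" end="00:00:04.000">First caption text goes here.</p>
      <p begin="00:00:04.500" end="00:00:08.000">Second caption text goes here.</p>
    </div>
  </body>
</tt>
```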
Maybe I’ll post again sometime this year. I’m a busy dude.
I reset my car’s trip computer when I left for Madison on Thursday evening and averaged 27.3 mpg on the drive up.
I reset it again when I left Madison to head back South to Champaign and averaged 27.1 mpg. There seemed to be less construction on the way back, so less time doing 55. At one point on the way up, I joked that the State of Illinois must store unused orange road construction barrels on the center stripes of I-39…
But my car wouldn’t lie to me. Going South doesn’t feel like walking downhill.
The following appears on a website that isn’t my personal blog here.
Below is a print version of a presentation I gave Friday, 10/22 at the Vision Midwest conference in Madison, Wisconsin. The timing was good for a number of reasons, not least of which was the 21st Century Communications and Video Accessibility Act of 2010 that the president signed into law on October 8th. Among other things, the law requires the FCC to create a committee within 60 days of 10/8 to advise on the technical challenges of adding audio description to online video and to present a report within 18 months. I’ve spent the last year studying this problem and coming up with my own solution (that deploys within two months, not two years) and hope that whoever ends up on the committee doesn’t settle on requiring something that’s unworkable or inadequate. Now’s a good time to throw some voices from the trenches out there.
My name is Liam Moran. I work for a digital media unit at the University of Illinois. I wear a lot of hats: I’m a videographer, a programmer, I run our audio studio, configure our streaming servers, and try to keep up with the latest technology to stay at the forefront of delivering the best quality of service possible on the budget we have to work with. As of August of this year, my unit exists as a partnership between the college of LAS and the Office of Continuing Education in order to devote more of our resources to developing video content for online and blended courses for LASOnline, one of the programs for providing online courses at the University of Illinois. I’ve been thinking about Audio Description for about a year and a half, feel confident that the plan I came up with for producing and delivering audio descriptions is a good solution, and was allocated time during my work week to implement it in the past few months. In this presentation, I walk through the different possible ways to produce and deliver audio descriptions and try to convince you that what we’re doing at the University of Illinois makes the most sense for the various stakeholders involved: the faculty members who provide the content, the media units that produce the content, the server administrators who host the content, and the students who must learn the content.
Video is being used in higher education at an increasing pace: in traditional classrooms, for blended learning, and in online courses. This is a good thing: video is an informationally dense source of curriculum material, and presenting learning material in different ways can’t be a bad thing. There are certain situations where video is necessary: where a demonstration is too dangerous or expensive to perform in a classroom with students present, or where an individual can’t schedule time to be present in a classroom but a camera crew and interviewer can meet with them. However, using video without taking care to make it accessible deprives students with various disabilities of the learning materials they are expected to master. When a video is presented that is not fully accessible, especially to the blind and those with low vision, it is usually not a decision made in malice; it’s that the infrastructure and standards to generate and deliver accessible video simply don’t exist.

Let’s take a step back and think a little bit about what accessibility is in the grand scheme. Accessibility has two major aspects: usability and equivalent content. Usability has to do with how easy it is to anticipate how to navigate and control whatever it is that is to be accessible. The provision of equivalent content means that information should be presented in a way that everyone can acquire it. Building codes require that structures be built accessibly with respect to usability by specifying how high off the ground and how far away from a doorway a light switch should be located, where handrails should be installed, and how a door can be opened by a visitor in a wheelchair. Buildings are accessible with respect to equivalent content if signs have braille equivalents for the text, elevator buttons have a tactile means of indicating which floor the button will take you to, etc. A webpage is accessible with respect to usability if it uses high-contrast colors and is structured in a way that screen-reader software can index it and provide a navigation method for its different content areas. A webpage is accessible with respect to equivalent content if, for example, the images have alt-tags describing what the image shows.
Jumping back to accessible media: the player controls have to be usable by blind and low-vision users–they have to be of appropriate size, use high-contrast colors, and be navigable and operable with a screen reader and a keyboard. The media also has to provide equivalent content for users with disabilities. Most of the time, when media professionals talk about accessibility, they’re really talking about captions. Captions are great, of course, and a mainstream part of life now: captions are usually turned on at bars and restaurants so patrons can watch different television programs on different televisions without interfering with one another or the music being played, for one example. Captions are only half of the game, though: they provide the equivalent content for viewers who can’t hear the audio portion of the video. The motion picture part of the video contains content that needs to be made accessible, too, otherwise we wouldn’t buy all the expensive cameras we have. So how do we provide equivalent content for that portion of the video material?
The solution came in the 1970s from Margaret Pfanstiehl, an avid fan of the theater who lost her eyesight in her early thirties. She cultivated a group of volunteers who would describe the visual aspect of the performance for her and other blind and low-vision patrons, eventually developing an infrastructure of radio transmitters and headset receivers for broadcasting to those patrons in the audience. That system has become the standard for providing equivalent content for theatrical performances and is the basis for all other forms of making visual media accessible to those with visual impairments.
The current best-practice method for including the equivalent content is to play a second audio track over the video, synchronized with it—exactly like a director’s commentary track on a DVD (the DVD specification actually includes a standard designed for audio description, but it hasn’t been widely exploited, unfortunately). Two resources for guidelines on producing the descriptions are: the Audio Description Coalition—with free registration, you’ll receive a PDF containing the standards used by the theatrical audio description community along with a useful set of ethical guidelines; and Joe Clark’s AD Principles, a concise and clear presentation.
My own distillation of the guidelines for educational content:
Describe what you observe
Keep interpretation to a minimum
Respect your audience
The objective is to be the user’s eyes, not their brains. Good descriptions are not annotations of the video. You have to keep your ego in check and try not to be too helpful. A good rule of thumb to keep you from being too helpful is to watch the video with your audio descriptions turned on and to verify that the descriptions add no content that isn’t readily apparent from the video. I acknowledge that this is impossible at times; that some human judgment is often needed to determine what information it is that students who can see the visual aid are likely getting from it. I suggest that it’s better to err on the side of completeness–to describe the apparent content as completely as possible for reasons that will be clear later. Finally, make sure to resolve ambiguous references that rely on visual cues: “the second figure on the left”, “this one right here”, “the green arrow.”
Here is a clip from the Ribbon of Sand sample made by Audio Description Solutions, chosen to demonstrate this method of audio description because it is both beautifully shot and beautifully described. Note that the production house that produced this video did something interesting: Meryl Streep’s narration is in one channel (left) and the description audio is in the other, so you could, in theory, turn the AD off if you wanted to and if the player provided a method to turn it off. QuickTime does not allow you to control the panning or balance of the audio.
The question is whether this technique would work for educational video. For the following clip, I made the audio track by watching the video, noting at what time a description should occur and what the description should say, recording the descriptions in my studio, editing them in a multi-track editor so they would start when I noted they should, and exporting the result as a mono file. Here’s the first demo using this method:
The standard method clearly isn’t going to work all the time for educational video: the lecturer has a finite amount of time with students and has to provide as much information as possible in that time—natural pauses are few and short. Visual aids are often dense with information and require extensive description in order to provide equivalent content. In video shot for entertainment, the camera does a lot of the exposition—it tells a good portion of the story. That’s not always the case for some types of educational video.
Since my first mission in making our media accessible was captioning, that was my only tool when I turned to audio description—it was a hammer and every problem looked like a nail. My first attempt to improve on the standard practice was to leverage the screen reader’s typically fast speech rate by essentially presenting the text of the descriptions to the screen reader to synthesize at the appropriate time. Captions in Flash standardly work by being provided in a W3C-standard dfxp.xml timed text file containing paragraph nodes whose values are the caption text, with begin and end attributes holding the timestamps at which the caption should appear and disappear, respectively. To get the screen reader to synthesize the text displayed, you need to use the Accessibility.updateProperties() method to force the screen reader to refresh its buffer of the accessible objects in the movie (including the new text), then force focus onto the text box where the description is printed with the FocusManager.setFocus() method. I can comprehend the synthesized speech in JAWS with a setting of 85, so if you have JAWS available, turn it on, set the JAWS Cursor Dialog speech rate there (or higher), and watch Demo #2.
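A minimal ActionScript 3 sketch of that refresh-and-focus trick, assuming a TextField named descriptionField on the stage and an fl.managers FocusManager instance named focusManager (both names are hypothetical, not taken from the actual player):

```actionscript
import flash.accessibility.Accessibility;
import flash.accessibility.AccessibilityProperties;
import flash.text.TextField;
import fl.managers.FocusManager;

// Called when a timed-text cue fires; descriptionField and focusManager are hypothetical names.
function speakDescription(text:String):void {
    descriptionField.text = text;

    // Expose the new text to MSAA so the screen reader can see it.
    var props:AccessibilityProperties = new AccessibilityProperties();
    props.name = text;
    descriptionField.accessibilityProperties = props;

    // Force the screen reader to refresh its buffer of accessible objects in the movie...
    if (Accessibility.active) {
        Accessibility.updateProperties();
    }

    // ...then seize keyboard focus so JAWS reads the freshly printed description.
    focusManager.setFocus(descriptionField);
}
```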
Demo #2 worked better than Demo #1, but is still not good enough. JAWS spoke over the audio native to the video; JAWS couldn’t keep up with the pace of the video, even when speaking at a high word rate; and seizing keyboard focus from the user would be problematic with a properly functional user interface.
WGBH noticed that it frequently happens that no natural pause is available into which to insert a description without interfering with the native audio, and so they suggest “extended descriptions,” where the video pauses as needed to allow time for the description.
Their example is: WGBH “All Systems Go” Extended Description demo. As much as it pains me to criticize the great work they do at WGBH, I have problems with this particular implementation. First, the descriptions in the video aren’t descriptions, but annotations. There’s information presented in the extended descriptions that is not readily apparent from the motion picture. Also, the player doesn’t actually pause, it merely displays the same frame repeatedly until the descriptive audio finishes.
One obvious workaround would be to render the pauses and descriptions into a second, longer version of each video. This is problematic for a number of reasons: we’d more than double our disk usage and costs in order to deliver accessible video in this manner, the media producers would have to make two different versions of each video, web designers would have to come up with a clever way to reliably route users to the correct version of the video for them (if that’s even knowable), and users would have no way to skip a description they didn’t feel it necessary to listen to (the objective is to provide equivalent content, not to extend the amount of time they are required to consume content) or to switch to the other version of the video if they were routed to the wrong one.
Flash does more than play video—we can push the technology to do what we need it to do, exactly how we need it to do it. This is one of the benefits of using Flash over an embedded commercial player. Extended descriptions are going to be necessary in educational video, so a mechanism is needed to pause the video stream when necessary and resume it either when the description has finished playing or the user chooses to skip the remainder of the description.
Not only because closed captioning is the hammer I already had: the audio descriptions are controlled via an XML file extended from the timed text standard, with two new attributes available for p nodes: pause, which takes a boolean value (defaulting to false), and href, which tells the player where to find the audio to be played at the time specified by the begin attribute. The player doesn’t interact much with the screen reader, since I intended it to behave the same everywhere, except to detect that a screen reader is in use and to turn on the audio descriptions. It only makes that check once, then stops checking in case the user doesn’t want them on. This has been a little confusing for beta testers so far, so I am working out under what conditions to let the screen reader handle the UI and when to let the built-in controls take over. The goal is for the thing to behave in an expected manner, and my beta testers know a lot better than I do what is expected.
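To make that concrete, here is a sketch of what a single description cue might look like in such an extended timed-text file. The attribute names come from the description above; the timestamp, audio path, and description text are invented for illustration:

```xml
<!-- Hypothetical cue from the extended timed-text (ad.xml) format described above -->
<p begin="00:03:12.0" pause="true" href="descriptions/slide04.mp3">
  A bar chart appears, comparing enrollment in 2005 and 2010; the 2010 bar is roughly twice as tall.
</p>
```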
Note that the buttons are large and use high-contrast colors; that the real estate is limited since the buttons are large; and that captions over the picture are only useful when their location indicates the speaker, so keeping the captions off the screen where they can’t block the motion picture content is preferable from my point of view (and others on campus). So when captions are turned on, the controls panel flips to reveal the captions and a button to flip back. Keyboard controls still work in the predictable manner with captions on. When you play the video, observe that it pauses when it has to and doesn’t when it doesn’t need to. It’s easy to skip a description if you don’t want to hear it by pressing the spacebar or the play button. I need to add forward and back buttons to return to the last description if you skipped it but then find that you missed something, and also for general navigation control.
When it comes down to it, getting audio descriptions to be used widely on campus will depend on a cost-benefit analysis. The benefits are fixed: audio descriptions have to be provided by law for government online materials in Illinois. The trick is to reduce costs enough that the decision isn’t made to just get rid of online video, which would be bad for me, since making it is my profession, and bad for everyone, because online video is a valuable type of learning material for students and the public at large. Costs can be measured either in cash or in how much people’s way of doing things needs to change.
Nothing much changes for server administrators: they don’t have to double their disk installations for media servers. There are a few mp3 files that are very small relative to video and another xml file that’s hosted on our video content management system.
Media producers need to make the xml files I intend to use as a standard method of delivering audio descriptions and to record the audio files. I estimate that all AD generation requires about 2X realtime to produce. If we were to adopt the standard workflow, where we produce an audio file to play in sync with the video, the way I’d produce them would be to watch the video and take notes of when a description would need to play and what the description would be, then watch the video a second time with a headset on and record the descriptions from my notes at the right time. For an hour-long video, that would take about two hours. I estimated that it took me a little under twice the length of the video to type up and time-sync the descriptions using SubtitleWorkshop, but quite a while to record the audio because I’m not a competent voice actor. That estimate does, of course, depend on the amount and complexity of visual aids presented.
Furthermore, since the format is Timed-Text, the infrastructure to generate them is already in place for captioning and can be re-purposed; also some of the skills people with expertise in captioning possess carry over as well, which altogether should make adoption of the process more acceptable. All that is needed is a post-processing script to convert the standard timed-text file to the proposed extension to the timed-text format. (More on that later).
Also, many of the videos my unit makes are storyboarded: we know from the early planning phase what each shot is meant to communicate and translating the storyboards to descriptions is straightforward. Since almost all television shows and feature films are storyboarded, the excuse for them not to provide audio descriptions once they have the infrastructure in place to deliver them is flimsy.
For faculty and other instructors, the routine is the same if they’re working with us. In my experience, faculty new to video often report that working with us to tighten up their presentation for scripted video forced them to re-think the way they present content in positive, even career-changing ways. It’s fair to assume that they already think hard about which visual aids to use and have a good idea of what they intend each one to communicate, so if they produce their own videos, producing their own audio descriptions shouldn’t be a stretch and might become just “part of the process”. I assume that when people do things in inaccessible ways, like failing to structure a PDF file so that a screen reader can index or even read it, they do so simply because they don’t know how to do it right. I only learned that (very minor) skill a few months ago and now it’s simply how I do things—doing it any other way would be doing it wrong and creating work for myself down the road.
There are secondary benefits to implementing audio description as we will be doing at the University of Illinois. The timed text files I suggest using include the text of the descriptions within the p node even though the player doesn’t directly use it at this time; it’s good to have it in there for future-proofing, in case Flash 11 includes a built-in speech synthesizer. It’s immediately good to have in the file for search: with a fully accessible video, you can search for a term and navigate to either when it’s spoken or when it’s described. Searching the motion picture part of the video simply isn’t possible with inaccessible video, so that’s a significant advantage. That’s another reason why I suggest erring on the side of too much descriptive audio instead of too little, especially if it’s easy to skip descriptions and to navigate back as needed.
Those secondary benefits are critical to mainstreaming audio description the way that closed captions are now mainstream and expected. Once students without vision impairments notice that descriptions are available, I’d hope they start using them to multi-task: playing the video with the descriptions on in the background while reading or typing their notes, cooking dinner, whatever… so that it becomes of value to the way they learn, too, and they come to expect it as well. Even though I’m a video producer, I don’t particularly watch much online video because I’m usually too busy to do just one thing at a time. Sometimes I’ll start a video, then switch to another tab to do something else and will get lost immediately, since I’m depriving myself of the motion picture content. If audio description becomes as mainstream as I’d like it to be, I would watch more online video.
The method I propose structurally reduces the costs about as much as I can see possible, with a few exceptions.
Recording the audio is time- and labor-intensive, so it would be preferable from a cost-savings perspective if we could synthesize the descriptions. The workflow I have in mind is to have the tool that translates the standard dfxp.xml file into my proposed ad.xml format also synthesize the audio, by piping the text through a synthesizer like Festival or whatever we have handy on campus. It’s been a few years since I worked on speech synthesis, but when I last did, the hot area was prosody and communicating emotional states, which matters for dramatic presentations. The simplest way to do it would be to generate a caption file in SubtitleWorkshop (which needs a patch to export timed text XML, in the current version at least) and, for each description, set the end time to be the same as the begin time if you don’t want it to pause, and to some later time (so it displays for a non-zero duration) if you do want it to pause. The translation script would then know the value of the pause attribute by comparing the times, and could assign the href values depending on where it writes the output from the synthesizer; a small example of that convention follows. I’ll be using a synthesizer in any case, since that workflow produces descriptions faster than waiting for studio time: it’ll just be a matter of replacing the synthesized speech with recorded human speech. If it can be experimentally demonstrated that the synthesized speech is just as good a presentation of the material, though, we can devote all of our resources to creating the descriptions instead of splitting them between typing descriptions up and then recording them, which would mean more audio-described video gets made per dollar. We know that audio description aids learning, but I need to know under what parameters that effectiveness is maximized so we can best position our resources.
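As an illustration of that begin/end convention (the timestamps, text, and output file name here are made up): a cue whose end time equals its begin time would translate to a non-pausing description, and the script would point href at wherever it wrote the synthesized audio.

```xml
<!-- Cue as exported from Subtitle Workshop (begin == end, so no pause is wanted) -->
<p begin="00:01:05.0" end="00:01:05.0">The enrollment graph rises sharply after 2005.</p>

<!-- Cue the translation script would emit in the ad.xml format (hypothetical output path) -->
<p begin="00:01:05.0" pause="false" href="ad_audio/cue_014.mp3">The enrollment graph rises sharply after 2005.</p>
```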
There’s a possibility that the descriptions themselves could be automatically generated. Since the objective is for the descriptions to be free from interpretation, there’s a good chance that some fancy image recognition and OCR could produce the descriptions without continuing human supervision, or at least with limited supervision. In the simplest case, where the visual aid is a PowerPoint deck and the professor provided alt-text in it, producing much of the descriptions would be trivial. A pair of ECE professors at the University of Illinois are working on a lecture-capture system that automatically performs the mix between the camera video source and the projected video source based on the professor’s gestures as identified by the camera system; it could be re-purposed to sync the descriptions if their system proves reliably successful.
The question of whether either of these would be acceptable isn’t a policy issue (not a decision that needs to be made by someone well credentialed) but an empirical one (and, in the latter case, an engineering one) that needs to be answered by learning-comprehension studies testing how well the different methods of generating descriptions present the equivalent content to students.
An outstanding issue is how to let students report when an audio description file is missing for a video they want one for. For captions, this is straightforward: if the Ensemble server that catalogs our media reports to the player that no captions are available, the player displays a message in the caption area explaining how to report that the captions aren’t available. It’s not clear how to provide the same information for reporting missing audio descriptions, but it’s not an unresolvable problem.
That’s the end of the presentation.
If I could go back and add anything, I’d probably talk about how relatively easy it would be to do audio description for live streaming video. To caption live video, our regular procedure is to hire a professional caption writer and a very bright WILL broadcast engineer named Matt Jones to add line-21 captions to the video feed (which would probably be aired live on UI7, the campus television station, anyways) then decode them with a PCD-88 before capturing the video for encoding and streaming up to the server.
For audio description, we’d just need to send an audio only stream to the server, make sure it’s synced up on the downstream side, and have the player use the same controls for both streams. An alternative method would be to send two video streams, one with the descriptive audio mixed in, and re-purpose the dynamic bitrate switching machinery to swap between the two on demand.
Since I think extended descriptions are of real benefit to users, I think it would be best if whatever standard the television broadcasters adopt would allow users with DVRs to let the video pause while the descriptions play, as needed and until the buffer runs out. I assume that alternative audio tracks are extra audio streams in an MP4 container, so the video stream and the description audio stream would have to drift increasingly out of sync as the program played, which might be possible with a DVR, but more thinking is needed.
Paul Klee, basketball beat writer for the News-Gazette, has written up a bullet-pointy season preview. For starters, let me praise Klee’s usage of tempo-neutral statistics in his evaluation of last year’s team defense. John Gasaway left Big Ten coverage in good hands. The high level of play we’ve gotten from the Illini football team this year, plus the excitement of how skilled, strong, and deep the basketball team is looking, will carry me through to Spring Training with no problem. Add in the fact that people in Champaign are watching hockey now that the Blackhawks hoisted the Cup last season, and this winter looks pretty damned tolerable.
Four Nittany Lions have at least 13 catches this season, led by Derek Moye’s 19. Illinois has just two players over 13. Part of the problem for Illinois has been an ankle injury suffered by Eddie McGee that has limited him to two catches. Neither team has used the tight end much in the early part of the season. Illini freshman Evan Wilson is capable of a big game.
It doesn’t make any sense to compare counting stats when Penn State has played 5 games and Illinois has played 4. Derek Moye has 19 catches in 5 games, Jarred Fayson has 16 in 4 games. His point’s correct—the Nittany Lions have done a better job moving the ball through the air than Illinois, but the argument is unconvincing.
It was formerly the case in the United States that only land-owning males held voting rights. This was a misguided system of representation for obvious reasons.
However, suppose that school districts were run entirely by local boards elected only from the top 70-percentile by income of that district’s alumni. How would that arrangement benefit or hinder present-day students in the district?
This is the population that would presumably have the best understanding of the district’s shortcomings at the time they attended, so one would expect such school boards to be maximally interested in what their region’s students need in order to succeed in life.
Assuming a reasonable rate of geographic turnover in the population, it would be difficult for any one group of people to manipulate the school system to benefit one local company or interest group, or to corrupt the students’ curriculum away from what they need to succeed.
Some of my friends recently bought Android phones, and I started writing an email to them recommending some apps and different widgets when it occurred to me that it might be more useful to put them up here.
I’ve had a Motorola Droid since the weekend they came out, so have gotten pretty comfortable with the device. And so away we go…
A wallpaper graphic on the Droid, at least, needs to be 960x854 pixels. One way to put a cool background in there is to search the web for images of that size, then hold your finger on the image to get the context menu asking whether you want to save the picture. Another way is to just take a picture with the camera (which is what I did). Any picture in your photo gallery can be made wallpaper by viewing the picture, then pressing “more”, choosing “set as”, and then “wallpaper”. That’s also how you replace a phone contact’s default Droid picture with a picture of them. (Or, if your friend leaves their Android phone lying around, replace the picture that shows up when her mom calls with a picture of something obscene.) If the picture you want to set as wallpaper isn’t the right size, it prompts you to crop it to the correct dimensions.
Here are the applications that I use the most, not including Facebook, camera, and other stuff that comes pre-installed. Just launch “Market” and you can search for them:
“Live Scores” by Sportacular —Good sports application
DroidLight by Motorola—a flashlight
Compass by Snaptic
Aldiko, an e-book reader
Advanced Task Killer Free—good for freeing up memory on occasion
Proxoid—use your phone’s 3g network on the laptop via USB, great in a pinch.
Connectbot—excellent ssh client and the main reason I bought the phone
AndFTP—excellent sftp client
MLB At Bat—worth every penny of the fifteen bucks or so
XKCD Viewer—quick, convenient laughs.
Here’s some stuff that I don’t use that often, but are very cool and worth having around:
Metal Detector—uses the magnets in the back that detect whether the phone is in a dock to see if there’s any iron nearby. I’ve used it to find little screws.
Google Sky Maps—just download it and be amazed.
Google Translate—it prompts you also to add…
TTS Service Extended—a speech synthesizer (your phone can now order beers and pick fights in thirty languages)
Google Voice—transcribes my voicemails
Google Earth—pretty world
OI File Manager—A good filesystem browser
And I also have some pretty fun games, in order of my favorites:
Phit Droid by mToy
SNesoid Lite—free SNES emulator. Awesome.
Cavedroid by Rob Everest
Blocked Stone by mToy
Shot 3 by mToy
Bebbled by Nikolay Ananiev
Labyrinth by Illusion Labs
I also have a silly lightsaber thing, just because some of the iPhone kids in the office have fake sword fights with theirs. I don’t think I’ve ever had to jump in and break up a war.
I have three widgets installed on my desktop or whatever you call the workspace on the phone. Widgets are things that look sort of like Application launch icons, but they have interactive behavior. On the center desktop panel, I have the Weather Channel’s large widget instead of having the Weather Channel app launcher. The widget shows the current temperature and conditions, which is much more useful than the static app icon. I also have the new BatteryTime Light widget, that shows what percentage of battery I have remaining. If you press the widget, it launches the application, which estimates how much talk-time, video watching time, etc. you have left before you’d need to charge.
On the left panel, I have a big ol’ Power Control widget, which lets you dim or brighten the screen, turn on and off wi-fi, bluetooth, gps, etc., in order to conserve battery or make the screen easier on the eyes. It takes up a whole row on the panel.
To add a widget, you just hold your finger down on a blank spot of the home screen until a menu pops up, asking you what you want to “add to home screen”, with Widgets as an option.
That’s pretty much what I have installed on my phone, plus WordPress (which I obviously never use) and Twitter (which I read quite a bit while contributing very little).
Later update: I’d also recommend setting your default alert sound to “None.” Some of the more poorly designed applications don’t allow you to customize the sound (I’m looking at you, weather channel) and so you get a bunch of ambiguous bleeps and bloops from the pocket.
I’ve got a post coming with some thoughts on the WebM project, mentioned in there. As a preview, I’ve got their VP8 encoder compiled on my research server and have been extremely impressed with the quality of the output, although haven’t yet dug through the source code enough to figure out how to map ffmpeg flags to some of the really useful features of the codec.
Writing an academic paper, though, and working on the dissertation—priorities are priorities.
If we install Adobe Production Suite CS3 on new Win7 machines (hopefully we’ll have CS5, which looks fantastic) and we start losing network connectivity due to an incorrectly set default gateway of 0.0.0.0, fix it with this solution (the first response).
I’m going to sort of live-blog my process of configuring my new office computer so that it’s a dual-boot Windows 7 and Linux machine. The first thing you need to do is select your preferred Linux distribution and download the installation media. You can learn about pretty much every distribution out there from DistroWatch.com. A distribution is the set of standard applications, package installers, and configuration tools that different development teams maintain and distribute, wrapped around the Linux kernel. At work, we use the commercially maintained SUSE. Many people I know use the community-maintained Ubuntu. I use Slackware, which is maintained primarily by Patrick Volkerding. If you’re interested in having a Linux system that’s very painless to use and customize, I’d probably recommend Ubuntu. If you want to learn a lot about how Linux specifically and operating systems in general work, you’ll have a lot of fun with Slackware, which works just fine out of the box, too.
On we go:
1. You need to have some unallocated space on an installed hard disk. You can either slot a new one into your box or resize an existing disk partition. In the past, I’d use Partition Magic, which you can get on Hiren’s boot CD. Windows 7 has a very welcome “Shrink volume” routine, accessible by right-clicking “Computer” in the start menu and choosing “Manage” in the context menu. Click the Disk Management submenu, then right-click on the system volume and choose “Shrink volume”. My computer came with a 1TB disk. I’m sacrificing 216GB for the Linux installation. 200 of that will be the Linux partition and the other 16 will be a swap partition. When an OS runs out of available memory, it stores some of the data that was to be kept in memory on the hard disk, in what’s called a page file. Windows stores page files on the system disk. Linux uses a dedicated swap partition to page excess data out of memory to. A sound rule of thumb is to allocate twice the amount of RAM for the swap partition, and you’ll likely never see your system crash for lack of available memory.
2. Put the Linux installation disk into your optical tray and restart the computer, booting off that disk. How to do that depends on your computer’s BIOS: some automatically boot from a CD when one is present; mine requires me to press F12 at boot time. I originally learned how to install Slackware (and a bunch of other stuff) from Grogan at BitBenderForums, although much has changed since then—notably, there’s no real point in partitioning your disks the way he did back when he wrote that. You just need one disk partition for the system and a swap partition. This is my first time installing Slackware since version 11, I think, and supposedly much has improved in the current release, which is 13. Grogan’s procedure is still a good guide: use fdisk to create your two partitions from the unallocated space, change the swap partition’s id to 82, then run the Slackware installer with setup; a rough sketch of that sequence is below. My computer came with 3 partitions installed, two of them rescue partitions and one for Windows 7. I created an extended partition with the two logical partitions inside.
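Roughly, the fdisk and setup steps look like this (run as root from the installer’s shell; /dev/sda just happens to be the right device on my machine):

```sh
# List the disks to confirm which one holds the unallocated space
fdisk -l

# Partition the disk interactively
fdisk /dev/sda
#   n  -> create the new Linux partition and the new swap partition
#   t  -> set the swap partition's type id to 82 (Linux swap)
#   w  -> write the partition table and exit

# Then launch the Slackware installer
setup
```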
3. A few things have changed in the installer already. The EXT4 disk format is now available. Surprisingly, ReiserFS still is, too, in spite of its author’s murderous ways. NTFS support is available now as well. The installer recognized the Windows partitions on this machine and asked whether I want to be able to see them when booted into Linux; I opted to allow users read-only access and to give root read-write privileges. I did the full distro installation and enabled a few of the network servers like Samba and NFS. After setup is done, you restart the computer and choose Linux in LILO’s boot menu. In the past, I’d had to edit LILO pretty extensively, but it appears to have installed nicely this time automatically. I created a non-root user for myself using the adduser script, then configured audio with alsaconf.
4. Everything works great out of the box. Slackware is configured to boot up to a bash shell. Since I’ll be using this as a desktop workstation, I’m changing that so it’ll boot up into the KDE graphical environment. To do that, you edit /etc/inittab using vim or emacs and change the default runlevel line (just below the comment “# Default runlevel. (Do not set to 0 or 6)”) from: id:3:initdefault:
to this: id:4:initdefault:
With that done, I issue the command: shutdown -r now
to restart the computer and boot it up to Slackware in KDE using the user I created. (And the current version of KDE is quite beautiful out of the box).
That’s it. I’m done. Took me about an hour start to finish.
Later: It turned out that the installation killed my ability to boot to Windows. Remember those utility partitions I mentioned? LILO automatically assumed the Windows system partition was sda1, which is actually a diagnostic partition. Editing /etc/lilo.conf to boot Windows from sda2 instead fixed that. I’ve also got the proprietary driver installed for my ATI graphics card, so I’m rolling along at a full 1920x1080. I also had a weird problem with the network that sorted itself out somehow after a bunch of poking at stuff.
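For anyone hitting the same thing, the fix in /etc/lilo.conf looks roughly like the excerpt below (the partition numbers are specific to my disk layout); re-run lilo as root afterward so the change is actually written to the boot loader.

```sh
# /etc/lilo.conf excerpt -- the Windows stanza now points at the real system
# partition (sda2) instead of the diagnostic partition (sda1)
other = /dev/sda2
  label = Windows
  table = /dev/sda
```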
Is the first he’s given up to a lefty since Kosuke Fukudome pulled one last summer. Glad both of the lefties have gotten their annual non-platoon home run out of the way early and in low leverage situations.
The objective, of course, is to win every series. Since the majority of series are three games long, the objective then is to finish the season with around a .666 winning percentage (rounding down for mild comedic purposes)—or two wins for every loss.
The Cardinals hovered around .666, or 2X+.500 if you cringe to invoke Beelzebaseball, throughout April and have the chance to finish May 1 there with a win this afternoon. Unfortunately, Kyle Lohse is pitching, so this one will be up to the offense. In their favor, the opposing pitcher is Homer Bailey, whose struggles continue into this young season. His BABIP right now is .420, which is ridiculously unfortunate, and he’s still striking out batters at a very healthy clip. If the Cardinals take their free passes today, they should be able to score a bounty of runs.
After this series is a tough four-game stretch against the Phillies before settling into a pretty weak looking May with a bunch of games against the Pirates, Padres, Astros, Reds, and Cubs. If they go about their business, we should be looking at .666 to start June, too. And that’s a very happy thing.
Eric S. Raymond wrote a compelling pair of essays a short while ago about how Smartphones could replace desktop computers and how the competing smartphone markets spell good news for the open-source movement:
The first essay describes a near-future scenario where your home computer setup is basically a good monitor, a full-size keyboard, mouse, and a docking station for your smartphone to interface with those devices. Your work setup would be the same, and plugging your phone in at the office provides you with the same computing environment you have at home, and, in a more restricted mode, while on your way to the office.
The second is how Apple’s strategy to lock their customers into using only software approved by the company (and deemed non-threatening to opportunities for in-house profit) is doomed to failure, making a loose analogy to how IBM’s hardware designs came from behind to win out over Apple’s, back in the day. Their walled-garden model, I think they like to call it.
I find the main argument of both essays to be completely persuasive and have a bit to add about how I see computing going in the near future.
First, a bit of introduction: I have no dog in the Apple vs. Microsoft hunt. I think they’re both pretty crappy companies that I wouldn’t want to work for. I tend to prefer Windows to Mac, for the sole reason that everyone knows Windows is garbage, but some people seem to think Macs are significantly better. Mac OS X is no better than a severely broken Linux distribution (with an extremely hands-off, generally successful package handler) as far as I have investigated, and the shell environment needs almost as much augmentation as a Windows build in order to function usefully. My personal computers are all dual-boot Windows XP and Slackware. At work, my workstations run Windows XP, except one Mac that I use for audio (since Bias makes some nice software for that platform)…
Over the past few years, I’ve worked hard to move as much heavy-lifting computing work as possible onto dedicated Linux servers, to free up resources on my and my colleagues’ workstations for creative work. That’s the key to where I see computing going. We’re going back to a terminal-mainframe system, in which ESR’s idea of evolving smartphones works great.
An obvious example of this is my own smartphone, the Droid. I’ve got an application called ConnectBot installed on it that lets me run secure shells on any server I have access to in the world. I have access to enormous computing power at all times from a pocket-sized, ubiquitously networked device.
Another, slightly further-off example of the return to terminal-mainframe computing is thin clients. I could easily see cable companies and other ISPs offering thin clients in the near future, where the company maintains a small server cloud and rents thin-client boxes and peripherals to customers who access it. They’d no doubt offer subscription tiers that give customers access to different software packages. If they were to adopt an Apple-like model, where customers would only be allowed to install “signed” software that wouldn’t infringe on their tiered subscription business model, it would be an unpopular service. If the tiers worked more like the standard cable subscription model, where customers who pay more get access to bundles of services they’d have to pay for anyway (like ESPN360 access and other services like that), it would make sense for a lot of people, who’d free themselves from problems like keeping their hardware up to date and maintaining a secure computing environment with redundant data storage, while gaining access to a routinely scaling amount of computing power and storage with very little trade-off: they’d just be giving their money to Comcast instead of Best Buy.
As a quick aside before getting to the point of this essay, I don’t have much faith in the future of iPads, the Android tablets coming out, or the existing netbooks. They strike me as half-measures: I want a portable, unobtrusive computing machine and I want it to be integrated seamlessly with my desktop workflow. The screen should be in my hand or on a big screen, not perched on my knees, girlishly pinched together.
To sum up, a vision of the very near future: ESR argues that smartphones can displace desktop workstation boxes and that closed software markets are likely to fail in competition with freer alternatives; I observe that thin-client type devices might fill the void more quickly than smartphone computing power can keep up (and satisfy marketplace demand given telecom contracts and what-not), thus moving smartphones into that sort of a terminal-server constellation.
I don’t want to get into the Flash vs. Apple war, which I find to be overheated, to put it mildly. I do want to make some observations and predictions on the future of Flash, however.
While Flash is largely closed and proprietary, it does allow content developers to make applets that work on any platform that has the Flash plug-in—and that’s a very good thing. I hope that Flash 10.1 works well on phones. Almost certainly it’ll work better on Android than on PalmOS or Windows Mobile 7, for the simple reason that Android developers uniquely have no profitability motive to cut the eventual Flash plug-in off from the hardware video decoders available on the device.
And somewhat counter-intuitively, I believe it to be an advantage that Flash is largely closed and proprietary. You can do things with Flash that you can’t do with any HTML5 video player. Most importantly, you can play streaming video from an RTMP server like FMS, Wowza, or Red5, and you can serve up copyrighted materials in a way that makes it as difficult as possible for people to steal the content and save it to their own computers. You need a proprietary plug-in if you want to do that. (Which I need to do.) Hulu and many universities will continue to depend on Flash because there is no viable alternative. Without a closed plug-in (and some other things), there’s really no way to make copyrighted materials available and protected.
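For a sense of how little client-side code RTMP playback takes, here’s a minimal ActionScript 3 sketch; the server URL and stream name are placeholders, and a real player would add error handling, buffering logic, and the dynamic-stream switching discussed earlier.

```actionscript
import flash.events.NetStatusEvent;
import flash.media.Video;
import flash.net.NetConnection;
import flash.net.NetStream;

// Minimal RTMP playback sketch (frame script); rtmp://example.edu/vod and "sample" are placeholders.
var video:Video = new Video(640, 360);
addChild(video);

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.client = { onMetaData: function(info:Object):void {} }; // swallow metadata callbacks
        video.attachNetStream(ns);
        ns.play("sample"); // stream name as registered on the media server
    }
});
nc.connect("rtmp://example.edu/vod");
```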
So my prediction is that Flash sticks around and that eventually, Apple and Adobe will compromise by allowing a stripped down Flash plug-in that only includes the features needed to decode and render video and that requires HTML5 style controls to manipulate. That’s assuming, of course, that Flash 10.1 works as well as it needs to on mobile devices.