Monday, November 19, 2007

Openmoko xmms2 client



Finally I got around to publishing the source for the little XMMS2 client I've been working on (when not preparing the DrKosmos release). It is designed for use on the FIC Neo1973 phone, turning it into a remote control for your XMMS2 server. Most of the time was wasted fighting with bitbake, or rather getting bitbake and waf to get along.

Avahi is used to detect XMMS2 servers on your network. x2r starts with a server browser listing the XMMS2 servers found on the network (plus some hardcoded addresses; a later version will allow adding to this list). After a server has been selected and connected to, the "now playing" screen is shown (see the screenshot to the left). There is also a screencast, in very crappy quality, below.
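
For the curious, this kind of discovery can be sketched in a few lines with today's python-zeroconf library. This is a rough illustration, not x2r's actual code, and the "_xmms2._tcp.local." service type is an assumption; check what xmms2d actually announces on your network.

# Minimal sketch of mDNS/Avahi-style server discovery with python-zeroconf.
# The service type is an assumption, not taken from xmms2d's source.
from zeroconf import Zeroconf, ServiceBrowser

SERVICE_TYPE = "_xmms2._tcp.local."  # assumed service type

class XMMS2Listener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            for addr in info.parsed_addresses():
                print("found XMMS2 server: %s:%d (%s)" % (addr, info.port, name))

    def remove_service(self, zc, type_, name):
        print("server gone:", name)

    def update_service(self, zc, type_, name):
        pass  # required by newer python-zeroconf versions

zc = Zeroconf()
browser = ServiceBrowser(zc, SERVICE_TYPE, XMMS2Listener())
input("browsing for XMMS2 servers, press enter to exit\n")
zc.close()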

I wrote some simple custom GTK widgets using gob2 for use in x2r. They are available in the awidgets repository on git.0x63.nu.

The bitbake files for xmms2 and x2r are available in the omoko-apps repo at git.0x63.nu.

Update: ipkgs are available at http://0x63.nu/files/ipkgs/

Btw, I have some plans for a playlist screen as well; I just need to find the time to implement it.

[Screencast video]


Saturday, September 22, 2007

Batteries included

I used to read a few news feeds in Thunderbird, and it worked pretty nicely (except that things got badly corrupted if the disk filled up). However, I started working from different computers to a larger extent, so I wanted to maintain state across machines, so that articles I have read are marked as such on all computers I use. I did consider using an online service such as Google Reader, but I realized that I want more control, and it would be nice to be able to read stuff offline.

So I decided I wanted to store the feeds in maildirs accessible over IMAP. I found quite a few tools for converting feeds into email in various formats. Newspipe looked the most promising, so I set it up to pipe into procmail using a special procmailrc file. But things didn't really behave the way I wanted, and it insisted on running in daemon mode where I would have preferred to run it from cron.

Well, how hard can it be to write a Python script that grabs feeds and stores them into maildirs, I thought. I had heard about feedparser, and it was supposed to be good. apt-get install python-feedparser. Firing up the interactive Python console and poking around a bit with feedparser, grabbing RSS/Atom feeds was indeed a walk in the park. I recalled having seen something about maildir support in Python somewhere. Yes, Google tells me the mailbox module shipped with Python handles maildirs. Batteries included. But at first the documentation didn't seem correct; it mentioned stuff that wasn't available in the module on my system. Oh, it had been updated for Python 2.5, so I finally got to write my first real Python 2.5 script (yet I didn't manage to squeeze in a with statement or any of the new generator features). I also had to maintain state across invocations, so throw in another standard module, pickle, and feeddir was born.

So now I have feeddir.py update running from cron, and I can do feeddir.py add feeds.xmms2planet http://planet.xmms.se/rss20.xml to add a new feed, which will then be available for subscription in my IMAP clients (under feeds/xmms2planet).
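
The core of such a script is small enough to sketch here. This is a rough illustration of the approach (feedparser plus the standard mailbox and pickle modules), not the actual feeddir code; the state file path and maildir layout are made up:

# Minimal sketch of a feeds-to-maildir updater. Not the real feeddir code.
import os
import pickle
import mailbox
import feedparser
from email.message import EmailMessage

STATE_FILE = os.path.expanduser("~/.feeddir.state")  # hypothetical location

def update_feed(name, url, mail_root="~/Mail"):
    try:
        with open(STATE_FILE, "rb") as f:
            seen = pickle.load(f)        # ids of entries already delivered
    except (OSError, EOFError):
        seen = set()

    md = mailbox.Maildir(os.path.join(os.path.expanduser(mail_root), name),
                         create=True)
    feed = feedparser.parse(url)
    for entry in feed.entries:
        uid = entry.get("id") or entry.get("link")
        if uid is None or uid in seen:
            continue
        msg = EmailMessage()
        msg["Subject"] = entry.get("title", "(no title)")
        msg["From"] = feed.feed.get("title", name)
        msg.set_content(entry.get("summary", entry.get("link", "")))
        md.add(msg)                      # deliver as a new message in the maildir
        seen.add(uid)

    with open(STATE_FILE, "wb") as f:
        pickle.dump(seen, f)             # remember what has been delivered

update_feed("feeds.xmms2planet", "http://planet.xmms.se/rss20.xml")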

Code can be found here.


Friday, September 07, 2007

Playlist language

This code has been sitting in my homedir for quite a while; I finally got around to cleaning it up enough to publish it. There are still way too many rough edges, but the implementation isn't very interesting anyway.

I wanted a readable language for custom playlist formatting, instead of some hard-to-read string like: $if(%album artist%,%album%,%artist%)|$if(%album artist%,$num(%tracknumber%,2) - %title%,%album%)|$if($not(%album artist%),$num(%tracknumber%,2) - %title%)

Playlists are pretty much rows of data that should be processed one by one, doing different things depending on what they contain. So let's have a language of the form list(predicate + list(actions)), a bit like awk. There is no need for multiple levels of nesting, so making use of indentation should be simple, a bit like Python.

So, without further delay, here is an example in the playlist language, where awk meets Python:

:title && :artist == lastartist
    pad(lastartist)
    emit(" + ")
    emit(:title)
    emit(CR)
    done()

:title && :artist
    emit()
    emit(:artist)
    emit(" - ")
    emit(:title)
    set(lastartist, :artist)
    emit(CR)
    done()

"always"
    set(lastartist, "")
    emit(:url)
    emit(CR)


...and here is the (crappy) implementation. There are no loops and such things, to steer clear of the halting problem.
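
For readers who just want the semantics, here is a tiny sketch in Python of the evaluation model, with the three rules above transcribed by hand. It only illustrates the predicate/actions/done() behaviour and is not the actual implementation; pad() is assumed to emit whitespace as wide as its argument, and the bare emit() in the second rule is skipped.

# Each rule is a (predicate, action) pair applied to every row in order;
# an action returning True plays the role of done().
def run(rules, rows):
    out = []
    env = {"lastartist": ""}
    for row in rows:
        for pred, action in rules:
            if pred(row, env) and action(row, env, out):
                break                                  # done(): next row
    return "".join(out)

def same_artist(row, env):
    return row.get("title") and row.get("artist") == env["lastartist"]

def emit_title(row, env, out):
    out.append(" " * len(env["lastartist"]))           # pad(lastartist)
    out.append(" + %s\n" % row["title"])
    return True                                        # done()

def has_both(row, env):
    return row.get("title") and row.get("artist")

def emit_artist_title(row, env, out):
    out.append("%s - %s\n" % (row["artist"], row["title"]))
    env["lastartist"] = row["artist"]                  # set(lastartist, :artist)
    return True                                        # done()

def always(row, env):
    return True

def emit_url(row, env, out):
    env["lastartist"] = ""                             # set(lastartist, "")
    out.append("%s\n" % row.get("url", ""))
    return False                                       # no done(): fall through

rules = [(same_artist, emit_title),
         (has_both, emit_artist_title),
         (always, emit_url)]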



BTW, I put the cunit-wrapper in git as well, and added an example that is a 1:1 conversion of the CUnit example.


Wednesday, August 29, 2007

C unit testing framework

There were some complaints about how much boilerplate code CUnit requires, and how much cleaner C++ tests are. I found it a bit absurd to use a C++ unit test framework to test C code just because of the boilerplate, so I whipped up a wrapper around CUnit that avoids it, and poured in some GCC extensions and linker tricks to make it work nicely. It is available for download here. So now you have the choice between portable tests and lots of boilerplate code.

A test suite looks like this:


INIT("Stupid testsuite") {
return 0;
}

CLEANUP() {
return 0;
}

TEST(test_1, "Test case 1") {
CU_ASSERT(0 == 10 - 10);
}

TEST(test_2, "Test case 2") {
int i = 10 + 20;
CU_ASSERT(i == 30);
}



Oh, did I mention that each test suite is put in a separate shared object, which the runner loads using libdl?


Monday, June 05, 2006

I've been rebooted

On Friday I was at reboot 8. There were a few presentations worth mentioning...

Jesse James Garrett - Beyond tagging: User generated information structure. The really interesting part was when he showed how Amazon instruments every link to be able to completely track how the user surfed: not only which pages, but also exactly where in the page (s)he clicked the links. This got me thinking about applying the idea to desktop applications, doing a variant of the "test coverage feedback" stuff for GUI design: instead of gathering coverage data, the GUI would be instrumented to track every action and give feedback on how users interact with the application. The problem is of course analyzing the data; a tool that just gives a list of the most important things to optimize in the GUI would probably be a good start. Using every user as a test subject to improve the usability can't be bad, can it? Luckily I'm not writing any GUI application currently, so I won't try to implement it :)

Jeremy Keith - In Praise Of The Hyperlink. A great presentation; a wonderful love letter to the hyperlink. I really can't do the presentation justice in this blog entry, but there are two things that I just have to relay. #1: "Permalink" should be a tautology; links want to be permanent. #2: Googlebot is a quantum device: when I visit a web page with a number of links, the universe spawns a number of parallel universes, one for each link, and I will remain in the one corresponding to the link I click on. Googlebot, however, follows all of them, and exists in several universes.

Chris Heathcote - A mobile Internet manifesto. This guy from Nokia gave a good presentation. I totally agree that the real blocker for good mobile apps is that operators charge per amount of data transferred. He did a pretty cool demo with his phone, basically running a port of Apache and mod_python and controlling the phone over the web. It wouldn't be that useful in itself, but it really makes it possible to build applications that push data into the phone. Nokia is definitely doing the right thing opening up the platform with Python. My next phone might even be a Nokia.

Categories: reboot8 web

Sunday, April 02, 2006

-multicast +caching followup

(OK, this post has been in draft state for almost a month, so I guess I'll just publish it before it is totally outdated.)

I got quite a few reactions to my previous post, most saying something like: "Pretty nice, but content providers won't allow it to happen". Unfortunately I have to agree, to a large extent. It would be better if they understood that it is in their interest: people really want time shifting, and they will buy recording devices instead if it is not supported directly in the service. So the content providers would actually have better control with the caching scheme.

It is also interesting to relate this to the "network neutrality"/"broadband discrimination" discussion. The caching should decrease the amount of traffic through the carriers. Instead of starting to pay the carriers lots of money, the money could be spent deploying caches at the ISPs.

There are even content providers that have realized that the situation is not sustainable. http://www.svd.se/dynamiskt/noje/did_12262248.asp (A short summary in English: SVT, the public service television broadcaster in Sweden, is starting to look for alternative methods of distributing their on-demand "internet TV". The huge increase in popularity has caused higher traffic costs, which is the main problem with the current method of distribution: "The more people watching, the higher the cost".)

Categories: multicast IpTV VOD

Monday, March 06, 2006

How caching supersedes multicast

After some discussion with Alexander I've spent some time during the last week thinking about and discussing multicast and distribution of television. I decided that it would probably be a good idea to have it written down.

Multicast is an effective method for delivering the same data to several nodes at the same time, but what are its applications? There are two basic criteria that must be fulfilled for an application to benefit from multicast:

  • There must be more than one node requesting the same data at the same time. If only one node wants the data at that specific moment, the multicast becomes a unicast.
  • There can't be any strict requirements that the data is delivered correctly. It is a one-way communication channel, and resending data that one node has lost would deliver it to all nodes. Yes, there are schemes of varying complexity to make multicast "reliable", but they are at best an ugly hack.


This makes the group of applications where multicast is useful very limited. There is actually only one application that clearly involves many nodes requesting the same data at the same time: television and radio. So does triple play, putting every service into IP packets, really mean we need multicast-enabled networks?

The way television and radio work today is just a consequence of the technology used to deliver the data to the viewers: the data is broadcast (either via radio waves or via shared media in CATV networks) to many receivers at the same time. Broadcasting by definition forces the viewers to watch the programmes at the same time (or use a time-shifting device such as a VHS recorder). But why would I want to watch the latest episode of "Lost" on Wednesdays at 21:00 just because it is scheduled for broadcast at that time? Almost everyone can name a time that would suit them better. Why should this misfeature of TV and radio be reimplemented when the distribution technology changes?

All TV and radio should be on demand, giving the viewer the power to decide when he or she wants to watch the programme. This is already happening to a large extent, although in a way that is not blessed by the creators of the content: people download with file-sharing applications such as BitTorrent. The two main reasons people download TV programmes are that the commercial breaks are removed and, probably more important, that they can watch them at any time. Another data point is how podcasting has changed how people listen to radio: people download the shows they are interested in and listen to them when it suits. The huge popularity of podcasting is a strong indication that people want to consume media when it suits them, and not at a time scheduled by the broadcasting company because of a limitation in the technology.

Such a paradigm shift in how media is delivered would make the need for multicast to distribute television and radio void. There would be very few cases where many people watch the same thing at the same time, namely live broadcasts, which in principle are limited to large sports events such as the Olympic Games, world championships or the final of some league. But that would be just a very, very small share of all media that benefits from multicast.

But can the network cope with unicasting all that data to the viewers? Let's say that 5 million people (a big city or a small country) each watch about one hour of TV every evening between 18:00 and 24:00, and that it is a 4 Mbps stream. Then we have 5M * 4Mbps * 1h / 6h ≈ 3.3 Tbps. That's quite some bandwidth required. So maybe fully deployed on-demand TV is impossible?
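
Spelled out, the same back-of-the-envelope numbers in a few lines of Python:

# 5 million viewers, one hour of 4 Mbps video each, spread over a
# six-hour evening window.
viewers = 5_000_000
stream_mbps = 4
hours_watched = 1
window_hours = 6

aggregate_tbps = viewers * stream_mbps * hours_watched / window_hours / 1_000_000
print(aggregate_tbps)  # ~3.33 Tbps on average during the evening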

It can be assumed that there will be temporal locality: people will watch the same things at approximately the same time. The evening a new episode of "Lost" is released, many people will watch it, or at least during the next few days. There is also a high probability of some geographical locality: some programmes have an audience based on the region they live in; regional news is one example. Both temporal and geographical locality are always strong cases for caching. The caches would work like a kind of relay station. The placement of the caches is crucial: they can't be too far from the end users, serving too many of them, because then their load will affect large parts of the network; but they can't be too close either, because then there is less benefit from the caching. It should be easy to create a model to find the optimal placement, though.

The caches would grab their content from the content provider when they don't have the data available. The content provider could be far away, but there should at least be plenty of peak bandwidth available, even though QoS can't be guaranteed. That shouldn't cause any problems, because the cache would easily be able to download the full media file much faster than the end user views it. The path from the cache to the end user is much shorter and is a much more controllable environment where QoS applies. The caches are actually doing a kind of multicast, but at layer 7 instead of the IP layer, and they also implement time shifting.
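
As a toy illustration of that idea (all names made up for the sketch), a pull-through cache is just a few lines:

# Toy sketch of the layer-7 "relay station": the first request for a
# programme triggers one bulk transfer from the (possibly distant) content
# provider, after which every later request is served locally, over the
# short, QoS-controllable path between cache and end user.
class ProgrammeCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin    # callable: programme id -> bytes
        self._store = {}

    def get(self, programme_id):
        if programme_id not in self._store:
            # Cache miss: one transfer from the content provider, which can
            # run much faster than the viewing rate.
            self._store[programme_id] = self._fetch(programme_id)
        return self._store[programme_id]   # served locally from here on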

The cache operators would be the new television networks, buying content and distributing it to the customers. It could be the ISP that owns and operates the caches. It could also be a separate entity, maybe even a separate AS, but the caches need to be closer to the end users than the normal peering points allow for, so it would require some new scheme for exchanging traffic between ASes much closer to the end users. Most likely it would be easier to just put the caches in the ISP's network.

To conclude: media wants to be consumed at the most convenient time; therefore it will not be consumed simultaneously, and multicast is not beneficial.

Categories: multicast IpTV VOD