And just like that, I’m a GNOME user

When I first started using Linux (more than a decade ago), I did my share of playing around with various desktop environments: the classic FVWM, GNOME, KDE, Enlightenment and so on. I settled on KDE. Over the years, I kept coming back to GNOME to check it out, but somehow KDE always felt like home to me.

Well, guess what: not any more. As of a few days ago, I’m (mostly) a GNOME user.

I still love KDE (the desktop) and KDE-based applications (KMail, Amarok and the like). It is still infinitely more configurable than anything comparable in GNOME (Evolution and Thunderbird are fairly limited in comparison), and over the years I’ve tweaked it to just the way I like it. But GNOME has something the KDE project does not: Canonical.

That’s right, I switched to GNOME because of Canonical, the company that drives Ubuntu development. Sure, there is a lot of effort behind the various Ubuntu variants such as Kubuntu, Xubuntu and so on. But make no mistake: none of these variants are first-class citizens in the Ubuntu ecosystem.

The switch was a result of my recent experience setting up Ubuntu on my home theater PC. The effort Canonical has put into making the Ubuntu experience seamless and pleasant is clearly visible. Pretty much everything works out of the box: folders that I share show up on other computers in my home network; Bluetooth, the webcam and the rest all work just fine; setting up remote desktop is a breeze; Avahi/Bonjour works like a charm; I can set up a DAAP server to share my music and it shows up in iTunes just like that.

None of these things are limited to Ubuntu, of course. But the user experience in Ubuntu is unparalleled in comparison with Kubuntu and the other variants. Subtle niceties like the notifications (the Ayatana project), the Me menu, the messaging menu and the “light” themes come together in a very cohesive way to deliver an experience that rivals that of Mac OS. But beyond the subtleties, Canonical is shaping the future of Linux on the desktop, laptop and mobile devices: the Unity interface, multi-touch support for mobile devices and more. Bottom line: having a company put its weight behind a desktop has ramifications.

So as much as I love thee, KDE, for now we shall part ways. I’m still using some KDE apps (like digiKam), but until Canonical decides to officially adopt Kubuntu, GNOME it is.

The San Francisco Taiko Dojo

My wife and I had been thinking about learning Taiko, so after some quick Googling, one fine Tuesday we dropped in at the San Francisco Taiko Dojo to “observe” the adult beginners class. We only stayed the first hour or so, and it was interesting to say the least. First, there was the intimidating workout: everyone was counting in Japanese, and the workout included sets of 60 push-ups, sit-ups, scissor kicks and tricep dips! And then there was the class itself — there seemed to be no “orientation” for beginners or a structured way to learn the ropes; everyone there just seemed to know what they were doing, and there was a lot of unspoken etiquette — an expected way of doing pretty much everything. Suffice it to say that we decided to start classes the following week.

BTW, if you don’t know what Taiko is or have never heard it, I refer you to the mighty Wikipedia and the mightier YouTube.

I’m on a temporary hiatus from Taiko right now, but I had an amazing experience during the few months I spent with the SF Taiko Dojo.

Yes, there are rules and etiquette. But in a society where anything goes, freedom rules and any kind of “discipline” is often frowned upon, SFTD was almost refreshing. In many ways, it was reminiscent of the Gurukul system of ancient India.

Taiko itself is a wonderful art form. There is something powerful about a Taiko performance. A single drum is an excellent percussion instrument, but in a group, Taiko takes on a life of its own. Like most art forms, you can pick up the basics real quick. But to go deep into Taiko, you need time, patience and a lot of hard work. The veterans at SFTD have been playing for 10-15 years and are still learning.

Needless to add, Taiko is also a fantastic full-body workout. It is a combination of dance, drumming, music and more. The classes are fun, but you do need serious commitment if you want to become an advanced Taiko player. The folks in the adult beginners class are a merry bunch. Before our first class, I was extremely anxious, trying to memorize Japanese numerals from Wikipedia and worrying whether I’d be able to keep up with everyone. There was help every step of the way. The class won’t stop for you, but it will not leave you behind either :)

But the best part of SFTD is the opportunity to learn from Sensei Tanaka. His accomplishments in the world of Taiko are well known, so I won’t list them here. What surprised me was the humility, generosity and energy he brings with him, even after doing this for more than four decades. He could easily delegate the adult beginners class to one of his many advanced students; yet he still routinely teaches it himself, ever so patient and understanding. Better yet, his expertise in Taiko is matched only by his wistful humor.

So if you are in the Bay Area and are looking for some inspiration, do check out the San Francisco Taiko Dojo.

Toying with node.js

A commenter rightly complained that despite my claims of “playing around” with node.js, all I could come up with was the example from the man page. I replied saying that I did intend to post something written from scratch, and as promised, here is my first toy node.js program:

var sys = require('sys');
var http = require('http');
var url = require('url');
var path = require('path');

function search() {
  var stdin = process.openStdin();
  stdin.setEncoding('utf8');
  stdin.on('data', function(term) {
    term = term.substring(0, term.length - 1);
    var google = http.createClient(80, 'ajax.googleapis.com');
    var search_url = "/ajax/services/search/web?v=1.0&q=" + term;
    var request = google.request('GET', search_url, {
      'host': 'ajax.googleapis.com',
      'Referer': 'http://floatingsun.net',
      'User-Agent': 'NodeJS HTTP client',
      'Accept': '*/*'});
    request.on('response', function(response) {
      response.setEncoding('utf8');
      var body = "";
      response.on('data', function(chunk) {
        body += chunk;
      });
      response.on('end', function() {
        var searchResults = JSON.parse(body);
        var results = searchResults["responseData"]["results"];
        for (var i = 0; i < results.length; i++) {
          console.log(results[i]["url"]);
        }
      });
    });
    request.end();
  });
}

search();

This program (also available as a gist) reads search terms from standard input, runs a Google search for each term, and prints the URLs of the search results.

I was quite surprised (and a bit embarrassed) at how long it took me to get this simple program working. For instance, it took me the better part of an hour to realize that when I read something from stdin, it includes the trailing newline (as the user hits ‘Enter’). Earlier, I was using the input as-is for the search term, and that was leading to a 404 error, because the resulting URL was malformed.
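
In hindsight, chopping off the last character is a bit fragile. Something along these lines would probably be more robust — this is just a sketch of the idea, not what the program above actually does:

stdin.on('data', function(term) {
  // trim() strips the trailing newline (and any stray whitespace),
  // and encodeURIComponent() keeps multi-word terms from producing
  // a malformed URL.
  term = encodeURIComponent(term.trim());
  // ... build search_url and issue the request as before
});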

Debugging was also harder, as expected. Syntax errors are easily caught by V8, but everything else is still obscure. I’m sure some of the difficulty is due to my lack of expertise with JavaScript. But at one point, I got this error:

events:12
        throw arguments[1];
                       ^
Error: Parse Error
    at Client.ondata (http:881:22)
    at IOWatcher.callback (net:517:29)
    at node.js:270:9

I still haven’t figured out exactly where that error was coming from. Nonetheless, it was an interesting exercise. I’m looking forward to writing some non-trivial code with node.js now.

Some thoughts on dbShards

I heard about dbShards via two recent blog posts — one by Curt Monash and the other by Todd Hoff. It seemed like an interesting product, so I spent some time digging around on their website.

As the name suggests, dbShards is all about sharding. Sharding, also known as partitioning, is the process of distributing a given dataset into smaller chunks based on some policy. AFAIK, the term “shard” was popularized recently by Google, even though the concept of partitioning is at least a few decades old. Most distributed data management systems implement some form of sharding by necessity, since the entire data set will not fit in memory on a single node (if it would, you should not be using a distributed system). And therein lies the USP of dbShards — it brings sharding (and with it, performance and scalability) to commodity, single-node databases such as MySQL and Postgres.

So how does it work? dbShards acts as a transparent layer sitting in front of multiple nodes running, let’s say, MySQL. Transparent, because they want to work with legacy code, meaning no or minimal client-side modifications. Inserting new data is pretty simple: dbShards uses a “sharding key” to route each incoming tuple to the appropriate shard (see the sketch below). Queries are a bit more complex, and here the website is skimpy on details. Monash’s post mentions that join performance is good when the sharding keys are the same — this is not a surprise. I’d be interested to know what other kinds of query optimizations are in place. When data is partitioned, you really need a sophisticated query planner and optimizer that can minimize data movement and aggregation, and push down as much computation as possible to individual nodes.
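
To make the routing idea concrete, here is a rough sketch of hash-based routing on a sharding key. This is my own illustration in node.js (the shard names and the hashing scheme are made up), not dbShards’ actual algorithm:

var crypto = require('crypto');

// Hypothetical set of shards; in dbShards' case these would be
// individual MySQL or Postgres nodes.
var shards = ['mysql-0', 'mysql-1', 'mysql-2', 'mysql-3'];

function shardFor(shardingKey) {
  // Hash the sharding key and map it onto one of the shards.
  var digest = crypto.createHash('md5')
                     .update(String(shardingKey))
                     .digest('hex');
  var bucket = parseInt(digest.slice(0, 8), 16) % shards.length;
  return shards[bucket];
}

// Every tuple with customer_id 42 lands on the same node, so
// lookups and joins on that key never cross shard boundaries.
console.log(shardFor(42));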

I found the page on replication intriguing. I’m guessing that when they say “reliable replication”, they mean “consistent replication” in more common parlance (alternatively, that dbShards supports strong consistency, as opposed to eventual or lazy consistency). This particular bit in the first paragraph caught my eye: “deliver high performance yet reliable multi-threaded replication for High Availability (HA)”. I’m not sure how to read this. Are they implying that multi-threaded replication is typically not performant? And usually you do NOT want to rely on threading for high availability, because a failure in a single thread can still take the entire process down. The actual mechanism for replication seems like a straightforward application of the replicated state machine approach.
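
For readers who haven’t run into it, the replicated state machine idea boils down to this: feed every replica the same ordered log of writes, apply them deterministically, and all replicas converge to the same state. A toy sketch of the concept (my own illustration, not how dbShards implements it):

// Each replica applies log entries deterministically, in commit order.
function Replica(name) {
  this.name = name;
  this.state = {};
}

Replica.prototype.apply = function(entry) {
  this.state[entry.key] = entry.value;
};

var log = [];  // the replication log, in commit order
var replicas = [new Replica('primary'), new Replica('standby')];

function commit(entry) {
  // Append to the log, then apply to every replica in the same order.
  log.push(entry);
  replicas.forEach(function(replica) { replica.apply(entry); });
}

commit({ key: 'user:1', value: 'Alice' });
// Both replicas now hold identical state.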

But making a replicated state machine based system scale requires very careful engineering; otherwise it is easy to hit performance bottlenecks. I’d be very interested to know a bit more about the transaction model in dbShards and how it performs on larger systems (tens to hundreds of nodes).

The pricing model is also quite interesting. I think dbShards is the first vendor I know of that prices on compute rather than storage (they charge $5,000 per year per server). This is indicative of the target customer segment as well — I would imagine dbShards works well with a few TBs of data on machines with a lot of CPU and memory.

Udaan and Whitespace

There are movies, and then there are movies.

Udaan Poster

Udaan is one of those rare movies where it seems like the crew had an intense clarity about the movie they wanted to make, and that is exactly what they did. They did not make it for the money, they did not make it to please a broad audience, they did not make it to please the critics — they made it because that was what they wanted to show.

I’m not going to talk about the story or the characters much, just Google those things if you are interested. Instead, I want to talk about an analogy.

Any good designer knows the importance of whitespace, whether in layout or typography. Architects have long understood that negative or empty space is just as important as (or perhaps more important than) filled space. Watching Udaan was a good reminder that the good moments in a movie need their space as well.

I didn’t feel rushed as I watched the movie; it felt a bit slow at times, but there was no hurry to get to the end. There are several scenes that are made poignant by the lack of dialog. The same goes for the music. Amit Trivedi has done an outstanding job with the background score as well as the soundtrack. The lyrics (by Amitabh Bhattacharya) are fabulous and are fittingly given their space in the songs — Amit makes sure that the music recedes and does not overwhelm, so you can pay attention to the words. But when the voices take a break, the music that fills in the gaps is just as good.

As my wife observed, “this movie has craft.”