The Eagle is Down

by admin on February 21, 2013

Yahoo! FireEagle is finally dead. One of those little-loved Yahoo! products that was way ahead of its time (Pipes being the obvious other example). I loved the way it integrated with several location-based services like Dopplr and Brightkite, and I even had a small widget on my blog powered by it. I used to track all my trips on Dopplr, which would in turn update FireEagle. Another service called EagleFeed consumed this location from FireEagle and exposed it as an RSS feed. A simple widget then consumed that feed and populated the “Currently At” box on my blog, which would, of course, automatically update whenever I actually made a trip I had recorded on Dopplr. The whole chain was surprisingly stable and worked flawlessly all these years without a single change. The RSS feed endpoint actually still appears to work, possibly serving a cached copy of the data. I guess it’s finally time to kill the widget.

PS: Yahoo! “Paranoids” probably know why the domain was hosted off yahoo.net instead of yahoo.com :)

Real Time Voting App with Node.js, Socket.io and mongoDB

by admin on February 5, 2013

Tried my hand at building a real-time voting app this weekend using Node.js, Socket.io and Mongoose. I had used Node.js and Socket.io before, so I was fairly familiar with them, but this was my first experience with Mongoose (and with MongoDB itself). Some lessons learnt:

  • Things move fast in Node.js land. And things break fast as well. There were several instances where code I had written against older versions of Node.js/Express was simply not valid anymore. Even the socket.io documentation has different steps for Express v3 and v2.
  • Redis makes things like this trivial to implement. Redis has pub/sub baked in: you simply publish events to it, and your socket.io code can react to them (and push them to the browser). Your application code need not even be written in Node.js; you could have a Rails/Django/PHP website and use Node.js only to push events to the browser, with Redis pub/sub as the intermediate queue. I had already tried Redis (and absolutely *love* it), but I wanted to stick with a Mongo-only approach, since this was just for a hackday.
  • Mongo has a tailable cursor (the analogy they provide is the Unix tail -f command) that seems ideal for use cases like this. However, it works only on capped collections. Capped means you need to specify the maximum size up front and, more importantly, that documents are retrieved in insertion order. I believe you can convert a non-capped collection to a capped one, but I read about that after I had deleted and recreated mine (I didn’t have any data, in any case). I’m not sure about the performance impact of a tailable cursor, though. You could also skip data, or read the same data multiple times (the cursor goes back to the start when it gets reset, I believe).
  • The Mongoose documentation makes almost no mention of tailable cursors. In fact, I couldn’t find many examples of people using tailable cursors to implement a real-time app like this. The one example I did find (mongolab/tractorpush-server) used the native Mongo driver instead of Mongoose, but it was good enough to get me started (it also convinced me that something like this was possible).
  • The Mongoose 2.7.x documentation mentions tailable cursors (for some reason the 3.5.x one does not). Fortunately, the API listed there still seemed to work. This is the code I used to react to insertions into my collection:
    var mongoose = require('mongoose');
    var Vote = mongoose.model('Vote');

    // Tailable cursors require Vote to be backed by a capped collection
    var stream = Vote.find().tailable().stream();
    stream.on('data', function(vote) {
        // do stuff like socket.emit
    });
    

    You can add filters to the find() query as well.

  • socket.io is simply awesome. One caveat: Heroku doesn’t actually support WebSockets, so you need to configure socket.io to use AJAX long polling instead. Fortunately, this *is* documented here. We found it to be near real-time and a pretty acceptable option.
  • Node.js is great for API servers/exposing JSON etc., but for some reason, I had a tough time finding a templating engine that was easy to use. Jade just didn’t work for me, while the other popular ones were all no-logic, no-HTML type stuff. I just wanted to write some simple HTML and embed a few dynamic values, goddamnit! I finally went with Swig, which was fairly similar to what I’m used to (i.e., ERB/Django templates, or even PHP for that matter). More importantly, it had clear instructions for use with Express, and the instructions actually worked!
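Since tailable cursors only work on capped collections, the collection itself has to be declared capped. One way to do that is via Mongoose schema options; this is a hedged sketch rather than the actual code from the app, and the field names and size/max values are made up.

```javascript
var mongoose = require('mongoose');

// Declaring the collection as capped so that tailable() works on it.
// Field names and the size (bytes) / max (documents) values are made up.
var voteSchema = new mongoose.Schema(
    { option: String, castAt: Date },
    { capped: { size: 1024 * 1024, max: 10000 } }
);

var Vote = mongoose.model('Vote', voteSchema);
```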
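For reference, the Heroku workaround amounted to a few lines of configuration. This is a sketch from memory, assuming the socket.io 0.9.x API, where `io` is the object returned by `require('socket.io').listen(server)`:

```javascript
// Force AJAX long polling instead of WebSockets (which Heroku
// didn't support at the time). Assumes the socket.io 0.9.x API.
io.configure(function() {
    io.set('transports', ['xhr-polling']);
    io.set('polling duration', 10);
});
```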
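The Redis pub/sub alternative mentioned above is easy to sketch. This is a hypothetical sketch, not code from the app: the channel name ('votes'), the event name ('new-vote') and the bridgeVotes helper are all made up for illustration.

```javascript
// Hypothetical sketch of the Redis pub/sub bridge described above.
// Any backend (Rails/Django/PHP) can PUBLISH vote events to a Redis
// channel; this helper relays each one to connected browsers.
// Names ('votes', 'new-vote', bridgeVotes) are made up for illustration.
function bridgeVotes(io, sub, channel) {
    sub.subscribe(channel);
    sub.on('message', function(ch, message) {
        if (ch === channel) {
            // Push the event straight out over socket.io
            io.sockets.emit('new-vote', JSON.parse(message));
        }
    });
}
```

Here `io` would be the result of `require('socket.io').listen(server)` and `sub` a subscriber client from `require('redis').createClient()`.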

Overall, it was a good learning experience and I eventually got it all to work. However, Node.js wouldn’t be my default choice next time I have to quickly mock up some web pages (it might be for a REST API server). The code is at rohitnair/sportshackday if anyone’s interested, although most of it might be useless.

Dear Logitech

by admin on November 8, 2012

Dear Logitech/Apple/makers of mice and keyboards,
As a software engineer, I realized the importance of using good, comfortable input devices pretty early in my career. I’ve decided that I’m ready to spend whatever it takes to buy such devices before RSI forces me to. Unfortunately, the current selection of keyboards and mice doesn’t quite make the cut. (All wireless, of course. Who needs wires?) Here’s my wishlist for the holiday season:

  1. A full sized Bluetooth mouse. Please, for god’s sake! Bluetooth does not imply portability, or at least it doesn’t always need to. It does however mean one less USB receiver to worry about.
  2. A truly “unified” product range. Right now, the Performance MX and the G700 use different receivers, different charging cables and more importantly, different software.
  3. Better Macbook support. The Performance MX has software for the Mac but the G700 does not. At the same time, the G700 can work as a wired mouse while the Performance MX cannot. Why is this relevant to Macs? Because they have only two USB ports. I need to use both ports while charging the Performance MX, one for the charger and one for the receiver. WTF?
  4. Keyboards/mice that can pair with multiple devices at once. Most people have multiple devices (including tablets), if not multiple computers. The Logitech K810 is a step in the right direction (and has great reviews on Amazon so far), so how about a mouse that can do the same?
  5. (Update) Had one more item on the wishlist: replaceable batteries instead of charging over a wire (and better battery life to go with it)! The Apple Wireless Keyboard has outstanding battery life (lasting several months!) and is a truly wireless experience. Compare that to a Logitech wireless keyboard/mouse that has to be charged every few days and ends up being permanently wired in the process :|

Adding Spree to an Existing Rails app (without messing up authentication)

by admin on June 11, 2012

tl;dr – Read Custom authentication (#1512) on GitHub and follow the instructions mentioned there.

The Problem – You have an existing Rails app. You’ve implemented a user authentication layer for it (perhaps using something like Devise). Now you want your app to have some e-commerce capabilities and decide to use Spree, an excellent and extremely comprehensive open source solution. You follow their instructions, do a spree install etc., and things break (especially authentication). A lot of people have posted this problem on Stack Overflow and the Spree mailing list, but no one actually provides a clear solution.

The current version of Spree expects developers to build their apps around it, not the other way around. It completely takes over authentication and, worse, renames your existing users table to spree_users and completely changes its schema as well. It also takes over your routing: your landing page now points to the Spree landing page, and if you had your own ‘/admin’ page (this could be via something like rails_admin), that now points to the Spree admin page.

Fortunately, there’s a branch in active development that aims to rip out the whole spree_auth component and fix the authentication problem. You can follow the entire thread on GitHub – Custom authentication (#1512). In particular, look at this comment and follow the instructions there. Essentially, these changes make sure that your users table is untouched, and you can tell Spree to just use whatever authentication your existing Rails app already has. The guide mentioned there lists all the steps required, but there were a couple of other changes I had to make. I’ve listed them here.

Installing PostgreSQL on OS X Lion (using homebrew)

by admin on April 15, 2012

So I decided to switch from MySQL to PostgreSQL for my Rails app (primarily because Heroku supports PostgreSQL). OS X Lion ships with psql (the client), but you still need to install the server manually. The easiest way is to just install it via homebrew, but unfortunately that wasn’t a smooth process. The fact that I’d never used PostgreSQL before didn’t help matters either. Here’s the list of commands and hacks that I had to do to finally make it work.

  1. Install homebrew if you haven’t yet (worth installing even if you don’t need PostgreSQL). Then do:
     brew install postgres 
  2. Ideally, that should be it, but in my case, it wasn’t. The brew install failed for me because a source mirror (ftp.kaist.ac.kr, to be specific) was down. I worked around this by adding an entry to my hosts file pointing the hostname at a working mirror. Add the following line to your /etc/hosts file if you face the same issue:
    198.82.183.70 ftp.kaist.ac.kr

Rails + Backbone.js + Backbone-relational

by admin on March 29, 2012

Playing around with Rails and Backbone.js for the last few weeks has been nothing but fun. Both frameworks have features and functionality that almost seem like magic. At the same time, they have their limitations as well. For instance, I’ve really felt the need for a way to express relations within Backbone. This would enable a one-to-one mapping between Rails and Backbone models and would, in general, make things a lot simpler. The Backbone docs quite clearly state that direct support for nested models/relations will not be added, and that one has to look elsewhere for it. Searching around will lead you to Backbone-relational, a well-maintained GitHub project. The documentation, however, is a bit lacking and there are no demos/tutorials either (the lack of a tutorial is an open bug against the project). And thus, I decided to build a demo app using Rails, Backbone.js and Backbone-relational.
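To give a flavour of what Backbone-relational provides, here is a hypothetical sketch; the Post/Comment models and key names are made up for illustration, not taken from the demo app.

```javascript
// Hypothetical sketch of a one-to-many relation with Backbone-relational.
// Model names (Post, Comment) and keys are made up for illustration.
var Post = Backbone.RelationalModel.extend({
    relations: [{
        type: Backbone.HasMany,
        key: 'comments',
        relatedModel: 'Comment',
        // Each Comment gets a 'post' attribute pointing back to its Post
        reverseRelation: {
            key: 'post'
        }
    }]
});

var Comment = Backbone.RelationalModel.extend({});
```

With something like this in place, `post.get('comments')` behaves as a real Backbone collection, and nested JSON from Rails can be parsed into related models automatically.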


Mining Bitcoins for Fun (and very little profitability)

by admin on February 29, 2012

So what do you do with $60 worth of Amazon EC2 credits that expire in a few days? Why, yes, you think of ways to waste them (and give the evil, big corporation the least amount of satisfaction)! I could have donated the computing resources to science, or even used them for my own research project. But none of that comes close to mining a few Bitcoins, in the hope that one day the world’s economy will crumble and you’ll be a billionaire thanks to this virtual currency.

I decided to go for the meanest and most expensive EC2 instance type (the cg1.4xlarge Cluster GPU instance that costs $2.10 per hour). Mining bitcoins is compute intensive, and you almost definitely need some GPU power, although it appears that Nvidia GPUs are not the best for this particular kind of computation. Next, I created an account on deepbit and followed the instructions mentioned here to install all the necessary software/libraries. As of now, you might need to do a few things differently; the following additional steps were enough to get things working for me.


# Install source for current package before trying to run the nVidia installers
yum -y install kernel-devel-$(uname -r)
# Download and run latest installer, and point installer to source directory which was installed above
bash devdriver_3.2_linux_64_260.19.26.run --update --kernel-source-path=/usr/src/kernels/2.6.35.14-97.44.amzn1.x86_64
# use updated syntax for poclbm.py
screen -d -m python poclbm.py username:password@deepbit.net:8332 -v -w 256 --device 0

And finally, some numbers. I got 28 hours worth of EC2 compute time, averaging around 200 MH/s on the instance. What does that translate to? According to this profitability calculator, 200 MH/s can earn you around 0.1462 BTC. In 24 hours. In other words, $60 of EC2 compute power got me less than 0.2 BTC (which is barely worth a glorious $1). Oh well, it was a fun experiment, and IMHO, totally worth it! :D
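For the curious, the back-of-the-envelope arithmetic behind that conclusion, using only the numbers quoted above:

```javascript
// Back-of-the-envelope check using the numbers quoted above
var btcPerDay = 0.1462;   // estimated yield at 200 MH/s, per the profitability calculator
var hoursUsed = 28;       // EC2 compute time actually burned

var btcEarned = btcPerDay * (hoursUsed / 24);
console.log(btcEarned.toFixed(4));   // prints "0.1706", i.e. less than 0.2 BTC
```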

Now served by nginx!

by admin on January 19, 2012

A few months back, my blog’s web server (Apache) randomly stopped responding to HTTP requests. (My EC2 instance once stopped responding to pings as well, but that was an entirely different issue.) The error log said that Apache had hit MaxClients, which was surprising considering that my server/blog gets virtually zero traffic. A quick restart and things seemed to be back to normal, but the issue kept repeating, and I had to come up with a long-term fix. I could have spent some time debugging the issue and tinkering with Apache’s default settings (which might not be the best for an EC2 Micro instance). But I decided to go the easier route and just switched to nginx. I had absolutely no experience with nginx, but the term “lightweight” had always been associated with it in my mind, and it seemed an ideal choice for the 613MB of memory that an EC2 Micro instance provides. I read up a bit on setting it up, tried a few things with the configuration, and things were up and running in a few minutes. The blog has been fairly stable since, and I’ve hardly noticed any downtime. The memory usage, as expected, is pretty low, and it’s noticeably faster as well!

Would you use airbnb?

by admin on May 31, 2011

Airbnb is certainly getting a lot of attention these days. And also a billion dollar valuation. For those who don’t know, it’s a service which allows you to “book” rooms/houses/couches as if they were hotel rooms. The difference being that you pay a real person, not a business. And sometimes, this person is living in that same apartment and acts as a host. I personally used it because hotels in San Francisco are mostly too expensive, and living with a host would have given me access to valuable local information.

As someone who’s used airbnb, I’ve tried to explain the concept to quite a few people. Most were surprised at the idea that someone would rent out their room to a complete stranger. Which brings us to the topic of reviews on airbnb. Now that I look back, I realize that I’d seen only positive reviews (I must mention that hosts can review their guests too, and yes, I got reviewed as well). Doesn’t this boil down to the fact that you’re reviewing real people and their homes, and not evil, private businesses? Aren’t most people nice? At least when you’re paying them? For example, I saw this in one of the reviews on airbnb:
“He had two tiny problems we had to work with, the shower head was awful and one of the windows wouldn’t fully close”
“Look past the window and shower and everything is amazing and perfect.”

Would people have “looked past” such things had it been a hotel? You can in fact see hundreds of negative reviews of hotels on sites like Tripadvisor for much less. The Wi-Fi in my room was broken for a couple of days, but I still ended up giving a mostly positive review. Had it been a hotel, I probably would have whined and cursed them all day. And written bad things about them on Twitter/Yelp/Tripadvisor/every goddamn site I could find.

People are just too nice to say bad things about a person and his home, especially in public. For this reason, I feel airbnb’s review system is flawed. Or rather, the idea is right, but people are just not straightforward enough to use the system right. At the same time, you need to be considerably open-minded to use a service like this, both as a host and a guest. Maybe that translates to more flexibility and lower expectations, and in turn more positive reviews.

Something better than nothing?

by admin on July 20, 2009

It’s been more than eight months since I posted something here. Eight bloody months. This in spite of several jobless periods in those eight months, including one week of doing absolutely nothing at home in Ernakulam. Clearly, blogging is not my cup of tea. Or maybe it’s just Twitter soaking up all those lines popping up in my head. Add to that my extreme laziness, and this state of the blog is quite expected. Well, I’ve decided not to be too fancy or creative and to at least post things that come easily to me, namely reviews and football. I had started putting my restaurant reviews on Burrp! (one great site), as I’m pretty sure they get more readers there than on my blog :) I might duplicate some of them here, but I guess having some updates is better than having none at all. I’m also hoping to come up with a post on Real Madrid’s almost unreal transfer season. The Galacticos are back, baby! :)

PS: I contribute on Burrp! with the username “rohitnair”

