Saturday, June 28, 2008

Designing Caches for Highly Scalable Web 2.0 Applications

Unix/Linux file systems are designed so that reads are heavily cached and sometimes pre-fetched. There are various techniques, algorithms, and methods for read caching, and each file system has its own somewhat unique approach, and therefore its own performance profile. Most file systems use the page cache for caching read I/O and the buffer cache for caching metadata.
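
A quick way to see the page cache at work, as a rough sketch in Python (the test file path is hypothetical; on Linux you can drop the caches first, as root, with echo 3 > /proc/sys/vm/drop_caches to get a truly cold first read):

    import time

    PATH = "/tmp/bigfile.bin"  # hypothetical test file, ideally a few hundred MB

    def timed_read(path):
        start = time.time()
        with open(path, "rb") as f:
            while f.read(1024 * 1024):  # stream through in 1 MB chunks
                pass
        return time.time() - start

    print("cold read: %.3fs" % timed_read(PATH))  # first pass is disk-bound
    print("warm read: %.3fs" % timed_read(PATH))  # second pass hits the page cache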

There has been an immense amount of research in this area on how to improve read performance using caching (see here and here).

Enter the highly scalable Web 2.0 era, enter Facebook. If you look at the Facebook I/O profile in my previous post, ~92% of reads (for photos) are served by the CDN. What that means is that a read goes to the backend storage (a NetApp filer in this case) only once; after that the file is cached in the CDN and the read never reaches the backend again. So all the file system caching probably goes to waste, since we are never going to read from the file system cache again. Facebook photos are cached in the CDN for 4.24 years (their HTTP Cache-Control max-age is 133,721,540 seconds), which means the CDN will not go back to the origin server for that period.
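
The 4.24-year figure is just arithmetic on the max-age value:

    # Sanity-checking the figure from Cache-Control: max-age=133721540
    max_age = 133721540                    # seconds
    seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds
    print("%.2f years" % (max_age / seconds_per_year))  # -> 4.24 years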

This raises interesting questions: do file systems really need to do any caching for such an application? What is its read/write ratio? How can the file system be better tuned for it?
Can the file system cache instead be used to pre-fetch the entire metadata, so that Facebook's NetApp filer has to do fewer than 3 reads to serve a photo?

Thoughts?


Friday, June 27, 2008

Facebook Photo Storage Architecture

Awesome presentation called "Facebook - Needle in a Haystack: Efficient Storage of Billions of Photos"; here is an excerpt. You should watch the full presentation (if you can get Flowgram to work).

Facebook uses MySQL, Memcache, Apache, PHP, and PHP extensions in its application stack.

Facebook uses NetApp filers for storing files.

Facebook scale of photos
  • ~6.5 billion total images, 4-5 sizes stored for each image => ~30 billion files => 540TB total storage capacity.
  • ~475,000 images served per second at peak, most through CDNs.
  • ~100 million photos uploaded per week.
Facebook uses a 4-tier architecture for serving profiles and photos:
the first tier is the CDN, then their proprietary "Cachr", then their photo servers, and then the NetApp filers.

Cachr
  • Protects the origin for profile pictures
  • Based on modified evhttp
  • Uses memcache as backing store
    • Microsecond response time on cache hit
    • Server can die or restart without losing cache
File handle cache
  • Based on lighttpd
  • Uses memcache as backing store
  • Reduces metadata workload on the NetApp filers
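
Both Cachr and the file handle cache lean on the same idea: put memcache in front of the expensive read, and since memcache runs out-of-process, a web server restart doesn't wipe the cache. Here is a minimal sketch of that pattern, assuming the python-memcached client; fetch_from_netapp() is a hypothetical stand-in for the origin read:

    import memcache  # assumes the python-memcached client is installed

    mc = memcache.Client(["127.0.0.1:11211"])

    def fetch_from_netapp(photo_id):
        # Hypothetical origin read; in Facebook's case this hits the
        # NetApp filer and costs ~3 disk reads per photo.
        return b"...photo bytes..."

    def get_photo(photo_id):
        key = "photo:%s" % photo_id
        data = mc.get(key)           # cache hit: microseconds, no disk I/O
        if data is None:             # cache miss: go to the filer once
            data = fetch_from_netapp(photo_id)
            mc.set(key, data)        # survives web server restarts, since
        return data                  # memcache runs out-of-process
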
NetApp storage architectural issues
  • NetApp Storage is overwhelmed with metadata
  • ~3 disk reads to read one photo
  • Totally bottlenecked on disk bandwidth
Thus, heavy reliance on expensive CDNs to serve reads:
  • 99.8% hit rate in CDN for profile images
  • ~92% hit rate for photos
  • Drastically reduces load on the storage


Thursday, June 26, 2008

All models are wrong, and increasingly you can succeed without them.

So says Peter Norvig, Google's research director, as an update to George Box's maxim.

This is an awesome article about petabyte-size datasets and why correlating data is enough, instead of finding a reason why the datasets are related and building a model around it. Read some excerpts here, and the full article via this link.

Excerpts from the original article:
Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.

Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required. That's why Google can translate languages without actually "knowing" them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

If the words "discover a new species" call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn't know what they look like, how they live, or much of anything else about their morphology. He doesn't even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

Petabytes allow us to say: "Correlation is enough."


Friday, June 20, 2008

3 awesome resources for Cross Browser Coding

I have been doing a bunch of reading; here are three of the best articles I have read on cross-browser coding. They cover: a) rendering, b) events compatibility, and c) cookie restrictions.

Rendering - How to get Cross Browser Compatibility Every Time.

Here is a quick summary for those of you who don't want to read the whole article:

  1. Always use a strict doctype and standards-compliant HTML/CSS
  2. Always use a reset at the start of your CSS
  3. Use opacity:0.99 on text elements to clean up rendering in Safari
  4. Never resize images in the CSS or HTML
  5. Check font rendering in every browser. Don't use Lucida
  6. Size text as a % in the body, and as ems throughout
  7. All layout divs that are floated should include display:inline and overflow:hidden
  8. Containers should have overflow:auto and trigger hasLayout via a width or height
  9. Don't use any fancy CSS3 selectors
  10. Don't use transparent PNGs unless you have loaded the alpha
Events compatibility table. This article compares event handling across IE 5.5, IE 6, IE 7, IE8b1, FF 2, FF 3b5, Saf 3.0 Win, Saf 3.1 Win, Opera 9.26, Opera 9.5b, and Konqueror 3.5.7. Very detailed article. A must-read for all JavaScript programmers.

Cookie Restrictions - Browser cookie restrictions very nicely documented here. An excerpt follows:
  • Microsoft indicated that Internet Explorer 8 increased the cookie limit per domain to 50 cookies but I’ve found that IE7 also allows 50 cookies per domain. Granted, this may have been increased with a system patch rather than having the browser’s first version ship like this, but it’s still more than the 20 that was commonly understood to be the limit.
  • Firefox has a per-domain cookie limit of 50 cookies.
  • Opera has a per-domain cookie limit of 30 cookies.
  • Safari/WebKit is the most interesting of all as it appears to have no perceivable limit through Safari 3.1. I tested setting up to 10,000 cookies and all of them were set and sent along in the Cookie header. The problem is that the header size exceeded the limit that the server could process, so an error occurred.
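
That last failure mode is easy to reproduce outside the browser. A rough Python sketch that hand-builds an oversized Cookie header (the host is just a placeholder):

    import http.client

    # A browser may happily store thousands of cookies, but the resulting
    # Cookie header grows past what most servers accept, so the request
    # fails (often with a 400-class "header too large" error).
    cookies = "; ".join("c%d=v%d" % (i, i) for i in range(10000))

    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/", headers={"Cookie": cookies})
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # expect a 4xx once the header is too big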


Friday, June 13, 2008

Selling a Used Car in India - eBay.in, CarWale.com, Kijiji.in compared

I put my car-ad on eBay.in (http://motors.ebay.in/), CarWale.com and Kijiji (http://www.kijiji.in/).

The good news is that I could sell my car in less than 2 weeks after putting the ad, so I am happy.

Here is a short review of the sites:

eBay.in has a very cool interface, is a trusted site, and many cars are sold on eBay.in every hour. I had a small issue with updating my classified ad, and their support emailed me back within a few hours. That's great support. They charge only Rs. 30 for placing a classified ad.

CarWale.com also has a great interface. The best thing I liked about CarWale.com is that whenever there was a new lead, CarWale.com would send me an SMS with the name and phone number of the interested person. That comes in extremely handy, since I could call the person immediately and didn't have to go online to look up their mobile number. The second best thing about CarWale.com is that when I started to fill in the expected price of the car, they showed a nice popup suggesting the resale value of that car for that year. That was really cool, though the suggested prices were approximately 20% inflated, so they initially raised my expectations about the resale value quite a bit. They charged me Rs. 500 for putting up the ad.

Kijiji.in is cool too. The interface is nice; they sent me 2 emails on new leads, but the offered prices were pathetically low, so I never visited their web page to follow up. Kijiji hasn't charged me any money so far for putting up the ad.

CarWale.com generated the most leads (6), followed by eBay.in (4) and Kijiji.in (1).

I sold my car last week. Overall, I had a great experience.

Thursday, June 12, 2008

Web Service for Content Distribution Network

Having used a CDN for a year now, I can say that the complexities of CDN deployment (origin server configuration, Apache configuration, pricing models, file placement, cache flushes) make it a pretty non-trivial undertaking.

Just wondering: why not have a "web service model" for CDNs?

Google recently introduced the Google AJAX Libraries API. Google places some of the popular AJAX libraries on their CDN servers, thereby allowing caching, gzip, etc., and making web pages load faster. Great idea! That could make some web pages really fast and quick to load.

Let's make CDN simple. Let's make hosting a file on a CDN a 2-step process. See the following mockup:

Step 1: Upload the file.
Step 2: Specify the content type, the expected bandwidth, and the expected requests per second.


The entire process of origin server configuration, Apache configuration, pricing, file placement, cache flushes, etc. can be made a 2-step process; a wrapper can be written on top of it all.
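
To make the idea concrete, here is what the client side of such a hypothetical two-step service could look like; the endpoint, headers, and response format below are all made up for illustration:

    import urllib.request

    CDN_API = "https://cdn.example.com/v1/files"  # hypothetical endpoint

    def publish(path, content_type, bandwidth_gb, reqs_per_sec):
        # Step 1: upload the file.  Step 2: declare the content type and
        # expected traffic so the service can pick placement and pricing.
        with open(path, "rb") as f:
            req = urllib.request.Request(
                CDN_API,
                data=f.read(),
                headers={
                    "Content-Type": content_type,
                    "X-Expected-Bandwidth-GB": str(bandwidth_gb),
                    "X-Expected-Requests-Per-Second": str(reqs_per_sec),
                },
                method="POST",
            )
        return urllib.request.urlopen(req).read()  # say, the public CDN URL

    # e.g. publish("logo.png", "image/png", bandwidth_gb=100, reqs_per_sec=50)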

The pricing model can easily cover the cost of the server and the cost of developing such a web service. This could be a service on top of some of the popular CDN providers such as Panther Express, Level3, Limelight Networks, Akamai, etc.

I am surprised Amazon doesn't provide this service in conjunction with EC2 and S3.

Thoughts?

Update: Just read an interesting article, "10 Easy Steps to use Google App Engine as your own CDN". I am going to try it out and see what the latencies look like from different cities around the world.
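
If I do try it, a crude probe like this, run from machines in different cities, should give rough numbers (the appspot URL is a made-up placeholder):

    import time
    import urllib.request

    URL = "http://mycdn.appspot.com/static/test.js"  # hypothetical hosted file

    samples = []
    for _ in range(10):
        start = time.time()
        urllib.request.urlopen(URL).read()
        samples.append(time.time() - start)
    print("median: %.0f ms" % (sorted(samples)[len(samples) // 2] * 1000))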


Sunday, June 01, 2008

Gartner states the obvious, yet again


Via TechMeme and 'Business of IT' ...

The good folks over at the Gartner Group have revealed the top 10 technologies that they believe will change the world over the next four years:
  1. Multicore and hybrid processors
  2. Virtualization and fabric computing
  3. Social networks and social software
  4. Cloud computing and cloud/Web platforms
  5. Web mashups
  6. User Interface
  7. Ubiquitous computing
  8. Contextual computing
  9. Augmented reality
  10. Semantics
Give me a break, guys. Isn't this obvious?