Ambisonia’s scalable architecture

The aim is to provide a fast website which can, if/when necessary, scale to serve hundreds or thousands of concurrent users.

Doing this is non-trivial. Scaling websites is something I do not know much about, except that it's a bit of a black art.

I’ve enlisted outside help… and the resultant architecture is shown in the image below.

The future Ambisonia site will sit on EC2, Amazon's cloud computing service.

You’ll notice the diagram shows two Amazon EC2 virtual machine instances. Only one (the one on the left) will be deployed to start with. The point is that I will be able to deploy as many concurrent virtual machines as needed to cope with whatever traffic demand arrives. The second virtual machine (on the right) can be deployed in multiple copies, which will share the load of rendering pages and executing requests to the database.
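
To give a feel for what "deploying another instance" amounts to, here is a minimal sketch using the boto library (the region, AMI ID and instance type are placeholders, not Ambisonia's actual values):

    import boto.ec2

    # Connect using AWS credentials from the environment / boto config.
    conn = boto.ec2.connect_to_region('us-east-1')

    # 'ami-12345678' is a placeholder for an image holding the Plone stack.
    reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
    for instance in reservation.instances:
        print("%s: %s" % (instance.id, instance.state))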

In the diagram, on the left, the majority of visits will be served by Varnish. Varnish is a cache server which (most of the time) won't even bother to go back to Zope/Plone to render a web page … it'll just spit back the same page that was requested 5 seconds ago. No requests to databases, no rendering of HTML, etc. Super fast.
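
One way to see Varnish at work is to request the same page twice and watch the response headers; this sketch assumes the standard Age and X-Varnish headers and a placeholder URL:

    import urllib2

    for attempt in range(2):
        response = urllib2.urlopen('http://example.com/')  # placeholder URL
        # On a cache hit, Age grows and X-Varnish carries two transaction IDs.
        print("Age: %s  X-Varnish: %s" % (
            response.info().getheader('Age'),
            response.info().getheader('X-Varnish')))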

When a web page is requested that has never been requested before (or the user is logged in, or it's time to fetch a fresh version of the page), then Pound (a load balancer) will decide which Plone instance (there could be as many of these as necessary) is the least busy … and get that instance to fulfill the request.
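
Pound's policy lives in its configuration rather than in code, but conceptually the least-busy choice amounts to something like this illustrative sketch (the backend names and request counts are made up):

    # Illustrative only: pick the backend with the fewest active requests.
    active_requests = {
        'plone-1': 3,  # hypothetical current load per backend
        'plone-2': 1,
        'plone-3': 5,
    }

    least_busy = min(active_requests, key=active_requests.get)
    print("route this request to %s" % least_busy)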

Here I am using a product called ZEO. ZEO essentially caches the Zope/Plone database (ZODB) on each Plone instance. This time it is not the web page rendering that is being cached, but exclusively accesses to the database: the contents of the database are cached in each Plone instance.
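
For the curious, wiring a client up to a shared ZEO server looks roughly like this (a minimal sketch using the classic ZEO/ZODB API with a placeholder address; Plone normally does this through zope.conf rather than in code):

    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # Each client keeps an in-process cache of objects fetched from the server.
    storage = ClientStorage(('zeo.example.internal', 8100))  # placeholder address
    db = DB(storage)
    connection = db.open()
    root = connection.root()  # the top of the object database
    print(root.keys())
    connection.close()
    db.close()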

This architecture is already implemented and sitting on EC2. My next tasks are to implement the new Ambisonia skin, migrate all data to Amazon S3, and implement Premium accounts and other e-commerce services. The Uploader already integrates with the EC2 instance and uploads to S3.
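
For illustration, a typical boto upload to S3 looks like this (the bucket and key names are placeholders, and this is a common pattern rather than necessarily what the Uploader does internally):

    from boto.s3.connection import S3Connection

    conn = S3Connection()  # reads AWS keys from the environment
    bucket = conn.get_bucket('ambisonia-media')     # placeholder bucket
    key = bucket.new_key('recordings/example.amb')  # placeholder key
    key.set_contents_from_filename('example.amb')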

Still a lot of work to do, but this is the _final_ architecture, no more looking back wishing I’d done it ‘properly’ … this is properly.


6 Responses to Ambisonia’s scalable architecture

  1. John Leonard says:

    This is looking really good and has got me thinking about a similar scheme for distributing my sound effects library commercially. I don’t know where you find the time and the energy to do all this stuff; you need to thank your wife and family from all of us for the effort that you put into this. I hope that it’ll prove to be a worthwhile venture.

    Great stuff.

    Thanks,

    John

  2. John Leonard says:

    it’s looking really good as well….

  3. Thanks John,

    You are right that all this work comes at an extra sacrifice on the part of my family. I can only hope that it will pay off … but I will explicitly thank Kath at your prompting.

  4. oli says:

    appengine

    but you shouldn’t think too much about scalability at the moment.

  5. oli says:

    btw, putting apache before varnish is useless, unless you want to slow down varnish. varnish can do load balancing, no need to use pound.

    i cannot comment much on zope. iirc plone is quite slow in comparison to plain zope. have you thought about using zope3?

  6. Hi oli,

    Apache is before Varnish for two reasons. First, Apache will manage routing requests off to different parts of Ambisonia, like the wiki and the forum.

    Second, I’ve been advised that Apache is considered safer than Varnish for fronting a web server, from a security standpoint. Varnish is relatively young code.
