Varnish Cache + Riak

A common problem when building applications with something as simple as Riak is that matching your canonical name for a resource to a suitable key/value object on the backend can be hard. Generally, exposing a Riak cluster directly to users is a bad idea(tm): anyone can PUT data and replace your entire site in a matter of seconds. To work around these limitations, Basho recommends hiding the cluster behind a caching proxy like Varnish.

Setting up Varnish is rather straightforward, but configuring it to power your application can be daunting. The VCL configuration language even allows embedding C code in your configuration; it is not so much a config file as an extension program. This, however, means you can easily build custom backend routers for your applications that map incoming requests to multiple backend services, as well as rewrite requests to map external URLs to internal resources.

Start by downloading the current Riak Search tarball and building it from source:


# wget http://downloads.basho.com/riak-search/CURRENT/riak_search-0.14.0-1.tar.gz
# tar zxvf riak_search-0.14.0-1.tar.gz
# cd riak_search-0.14.0
# make all
# make devrel


This builds the application along with a development release of 3 independent nodes in the dev directory. By default the nodes are configured to listen on ports 8091, 8092, and 8093 for dev1, dev2, and dev3 respectively. You can then build Varnish from source as well:


# wget http://repo.varnish-cache.org/source/varnish-2.1.5.tar.gz
# tar zxvf varnish-2.1.5.tar.gz
# cd varnish-2.1.5
# ./configure
# make
# sudo make install


At this point you'll have everything you need to build a cluster of servers, but for now we're just going to configure everything to run locally on our development box. What I personally like to do is move my development Riak servers and my Varnish config into a new directory:


# mkdir ~/Servers/
# cp -r ~/riak_search-0.14.0/dev/* ~/Servers
# cd ~/Servers


This works because Riak's builds are fully self-contained and can simply be tarred up or rsynced to another server and run pretty much as-is (with maybe a change to the app.config and vm.args files to set the correct IP address and instance name).
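If you do move nodes to other machines, those two files live under each node's etc directory. A quick way to locate the lines you'd be editing (a sketch of my own, using the devrel layout; the exact config keys vary a bit between Riak versions):


# the -name line in vm.args is the Erlang node name the cluster uses
grep "^-name" dev1/etc/vm.args
# app.config holds the HTTP bind address and port for the node (8091 for dev1)
grep -n 8091 dev1/etc/app.config
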
To make working with the servers easier, bringing them up and down, I generally write a little utility script:


#!/bin/bash
ulimit -n 1024
for I in 1 2 3; do
./dev$I/bin/riaksearch $1;
done
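# usage, assuming the loop above is saved as something like riak.sh and made executable:
#   ./riak.sh start   # bring all three nodes up
#   ./riak.sh ping    # check that each node responds
#   ./riak.sh stop    # shut them all down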


This just lets me run the same command on all 3 servers. The ulimit change is helpful if you're running on a system like Mac OS X, which has an abysmal open file limit by default (and an even worse concurrent process limit). The last administrative task is to join dev2 and dev3 to the ring in dev1:


for I in 2 3; do
./dev$I/bin/riaksearch-admin join dev1@127.0.0.1;
done
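# a quick sanity check of my own that the join took: each node's stats should list
# all three nodes under ring_members (assuming the HTTP stats endpoint is enabled):
# curl http://127.0.0.1:8091/stats | grep ring_members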


With the three nodes all in the same ring, we can put stuff into the cluster using curl. Riak keeps the Content-Type you give it, so be certain to set the right MIME type for each file. The next step is to configure Varnish to act as a proper proxy that can also purge its caches when we update some internal representation. The first helper we'll write is a little script to launch varnishd:


#!/bin/bash
sudo varnishd -f riak.vcl -T:9000
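# (an aside of my own) sudo is used here so varnishd can bind its default,
# privileged listen port; if you'd rather run it unprivileged, the -a flag
# sets an explicit listen address, e.g.:
# varnishd -f riak.vcl -T:9000 -a :8080
# the rest of this walkthrough assumes the default port 80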


Here we're running varnishd with the riak.vcl file as its config and the admin interface listening on port 9000 on localhost. We can write a reload script too, to make futzing with our config file easier, like so:


#!/bin/bash
CWD=`pwd`
UUID=`uuid`
varnishadm -T:9000 "vcl.load $UUID $CWD/riak.vcl"
varnishadm -T:9000 "vcl.use $UUID"


If your system doesn't have uuid, look to see if you have uuidgen, which is basically equivalent. This will be very useful as you make changes and test your configs; it is also helpful when you want to update your configs gracefully and have a way to revert. The first bit to add to your new riak.vcl is a directive telling varnish to talk to all 3 backends in a round-robin fashion:


director riak round-robin {
  { .backend = { .host = "127.0.0.1"; .port = "8091"; .probe = { .url = "/ping"; .threshold = 3; }}}
  { .backend = { .host = "127.0.0.1"; .port = "8092"; .probe = { .url = "/ping"; .threshold = 3; }}}
  { .backend = { .host = "127.0.0.1"; .port = "8093"; .probe = { .url = "/ping"; .threshold = 3; }}}
}
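# (a note of my own) the probe accepts more knobs than .url and .threshold if
# you want to tune the health checks, e.g.:
# .probe = { .url = "/ping"; .interval = 5s; .timeout = 2s; .window = 5; .threshold = 3; }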


This directive sets up varnish to talk to all 3 nodes, with a health check polling the "/ping" interface about once every 5 seconds. If any one node goes down, varnish will route around it until it comes back up healthy in the ping checks. If you run varnishlog you will see a ton of activity like:


0 Backend_health - riak[0] Still healthy 4--X-RH 4 3 4 0.003052 0.003220 HTTP/1.1 200 OK
0 Backend_health - riak[2] Still healthy 4--X-RH 4 3 4 0.003596 0.002407 HTTP/1.1 200 OK
0 Backend_health - riak[1] Still healthy 4--X-RH 4 3 4 0.001998 0.002764 HTTP/1.1 200 OK


which is exactly what you want to see. Once you have that working, the next step is to establish an ACL to hide the bits of the Riak interface we really don't want exposed to the public:


acl admin {
  "127.0.0.1";
  "192.168.1.0"/24;
}


This will restrict access to our local box and network. You can add additional ACLs to control access to particular bits of your Riak key/value store, such as removing the ability to perform map/reduce queries, access the Solr interface, or perform PUTs, POSTs, and DELETEs on specific buckets (there's a sketch of that after the vcl_recv walkthrough below). The place to set up your routing and access logic is the vcl_recv subroutine, like this:


sub vcl_recv {
  unset req.http.cookie;
  if (req.url ~ "^http://") {
    set req.url = regsub(req.url, "http://[^/]*", "");
  }
  if (req.request == "PUT" || req.request == "DELETE" || req.request == "POST") {
    purge("req.url ~ " req.url);
    return(pass);
  }
  if (req.url ~ "^/admin/") {
    if (client.ip !~ admin) {
      error 403 "Access Denied";
    }
    if (req.url ~ "^/admin/ping") {
      set req.url = regsub(req.url, "^/admin", "");
    }
    if (req.url ~ "^/admin/stats") {
      set req.url = regsub(req.url, "^/admin", "");
    }
  } elsif (req.url ~ "^/objects/") {
    set req.url = regsub(req.url, "^/objects/", "/riak/objects/");
  } elsif (req.url ~ "^/riak/") {
    // do nothing, we're cool!
  } elsif (req.url ~ "^/$") {
    set req.url = regsub(req.url, "^/$", "/riak/site/index.html");
  } else {
    set req.url = regsub(req.url, "^/", "/riak/site/");
  }
  return(lookup);
}


In this example, I've removed all cookies from the request; cookies break cacheability, and we aren't going to do anything with them on the backend anyway. We also normalize the URLs to account for buggy clients which incorrectly embed the http protocol and host in the path. Then we direct varnish to purge matching cache entries whenever we PUT, POST, or DELETE an entry, and pass the request directly on to Riak, bypassing further processing. Everything under the /admin/ path is restricted by the ACL and rewritten to the paths that expose the health and stats of whichever backend handles the request. Anything under a top-level /objects/ directory is mapped into the /riak/objects/ bucket, making it easy to push UUID-named objects into Riak and consume them on the client. The /riak/ path is passed through as-is, to make populating entries in new buckets straightforward. To serve a static site out of Riak, "/" is mapped to the site bucket, and specifically to the index.html file. All remaining paths map to entities in the site bucket, which lets us dump all our CSS, HTML, Javascript, images, videos, and other garbage directly into that bucket. For all of these, varnish explicitly looks the value up in its cache before going to Riak.
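As promised above, further access rules can be layered into vcl_recv with the same ACL machinery. Here is a rough sketch of my own (not part of the original config) that keeps Riak's /mapred map/reduce endpoint and the Riak Search /solr interface private, and only lets trusted networks modify objects; adjust the paths and ACLs to whatever your application actually exposes:


sub vcl_recv {
  # ... URL normalization and the rules shown above ...
  # keep map/reduce and the Solr-style search interface off the public internet
  if (req.url ~ "^/mapred" || req.url ~ "^/solr") {
    if (client.ip !~ admin) {
      error 403 "Access Denied";
    }
    return(pass);
  }
  # only trusted networks may create, update, or delete objects
  if (req.request == "PUT" || req.request == "POST" || req.request == "DELETE") {
    if (client.ip !~ admin) {
      error 405 "Not Allowed";
    }
  }
  # ... the bucket rewrites and return(lookup) from above ...
}
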

Since varnish is a caching proxy, it is handy to add some cache-control directives to the configuration, plus some debugging info to help us tell whether our caching is working. We can do this by modifying the vcl_fetch subroutine:


sub vcl_fetch {
  if (!beresp.cacheable) {
    set beresp.http.X-Cacheable = "NO:Not Cacheable";
  } elsif (beresp.http.Cache-Control ~ "private") {
    set beresp.http.X-Cacheable = "NO:Cache-Control=private";
    return(pass);
  } else {
    unset beresp.http.expires;
    set beresp.http.cache-control = "max-age = 900";
    set beresp.ttl = 1w;
    set beresp.http.magicmarker = "1";
    set beresp.http.X-Cacheable = "YES";
  }
}


In this section, we're setting the X-Cacheable header on the response to provide more information about how varnish is treating the results. Things that are not cacheable get passed right back to the user. Otherwise, we override the expiry times and set a new TTL of 1w to keep the item in varnish's cache for a week. We also add a cache-control header so that the client doesn't attempt to re-download the object for 15 minutes. The magicmarker is a flag so we can reset the document age to 0 in the vcl_deliver subroutine for content we are caching for a long time:


sub vcl_deliver {
  if (resp.http.magicmarker) {
    unset resp.http.magicmarker;
    set resp.http.age = "0";
  }
}


This keeps the browser from checking again for the full length of our cache-control header. And now we're done! We can load our initial homepage into the site using curl as follows:


# curl -X PUT http://127.0.0.1/riak/site/index.html -H "Content-type: text/html" --data-binary @index.html
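# fetching the page back through varnish (e.g. "curl -i http://127.0.0.1/")
# shows the response headers discussed below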


where index.html is a file that contains the HTML for our home page. Because Riak handles most of the important bits for us, and varnish handles the routing and caching, we can focus on just building the pieces of our application that make our site interesting to the user. Take, for example, trivial things like gzip-encoding content. When I ran the above curl command as part of my deployment script, it took an 8138-byte HTML file and loaded it into the key/value store, replacing the old value. The VCL config purged the cache, and reloading the home page returned these headers in the result:


Age:0
Cache-Control:max-age = 900
Connection:keep-alive
Content-Encoding:gzip
Content-Length:2316
Content-Type:text/html
Date:Sat, 29 Jan 2011 03:37:41 GMT
Etag:1obS1uJRzaR7IrjLL6m014
Last-Modified:Sat, 29 Jan 2011 03:37:31 GMT
Link:; rel="up"
Server:MochiWeb/1.1 WebMachine/1.7.2 (participate in the frantic)
Vary:Accept-Encoding
Via:1.1 varnish
X-Cacheable:YES
X-Riak-Vclock:a85hYGDgzmDKBVIsrN/5TmQwJTLmsTIw/tt+jA8izNacxCrONQUqEYOQYGHJtK/GFAaqZ1++qhUqkc6wA66e8epnRkxhoHrmLLPXUIktSOqZ+XPnYAoD1TM80iyCSqiwI9Szcp5pwyLMsOuWK1S4mh3ZGBblx87IElkA
X-Varnish:704000027


Notice that gzip encoding was provided, shrinking the payload to 2316 bytes, and that proper Last-Modified, Date, Cache-Control, and Etag entries were generated. The MIME type was also preserved from my initial PUT, meaning that I can ensure things such as HTML5 MANIFEST files are delivered with the proper MIME type:


# curl -X PUT http://127.0.0.1/riak/site/MANIFEST -H "Content-type: text/cache-manifest" --data-binary @MANIFEST


When combined with the caching proxy, the high performance of the Riak cluster itself, and proper caching settings on the client side, we have a powerful infrastructure on which to build highly scalable applications. Moreover, we haven't even touched the incredibly cool things one can do with search and map/reduce on the Riak cluster, or building complex mappings using links and key filters.

The primary reason I'm focusing on making Phos a much more usable tool for delivering reusable web widgets is that this stuff scales even better when your rendering and application logic is also shifted to the client side. Your users all have perfectly serviceable machines on which they will view and interact with your site or application. It is about time you started to rely on them as a natural extension of your cloud. This infrastructure simply helps make it easier to synchronize across multiple clients and versions of your application, with minimal investment in a cloud of your own.