Varnish Cache


Varnish Cache is a web application accelerator, also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast: it typically speeds up delivery by a factor of 300-1000x, depending on your architecture.

Varnish Cache also features:

  • Plugin support with Varnish Modules, also called VMODs
  • Support for Edge Side Includes including stitching together compressed ESI fragments
  • Gzip Compression and Decompression
  • DNS, Random, Hashing and Client IP based Directors
  • Technology preview for HTTP Streaming Pass & Fetch
  • Experimental support for Persistent Storage, without LRU eviction
  • Saint and Grace mode

Installation

The latest version available at the time of writing is 3.0.4, and you can get it from the official Varnish Cache download page. Installation is straightforward, and you can even add the project's repository to your package manager's software sources.
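For Debian-based systems, the project published its own package repository at the time. The sources.list entry looked roughly like this (the codename wheezy is an assumption; pick the one matching your release):

```
deb http://repo.varnish-cache.org/debian/ wheezy varnish-3.0
```

With that line in place, a regular `apt-get update && apt-get install varnish` pulls in the packaged build instead of compiling from source.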

$ cd /opt/
$ wget -c 'http://repo.varnish-cache.org/source/varnish-3.0.4.tar.gz'
$ mkdir -p varnish/source
$ tar xzf varnish-3.0.4.tar.gz -C varnish/source --strip-components=1 && rm -f varnish-3.0.4.tar.gz
$ cd varnish/source/
$ ./configure --prefix=/opt/varnish
$ make && make install

Configuration

The most basic configuration consists of a few rules specifying the host and port of the backend that Varnish should serve. There are many more options you can set, and you'll also find some good examples in the file /opt/varnish/etc/varnish/default.vcl:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
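Beyond the backend definition, VCL lets you hook into the request lifecycle. As a sketch (using the vcl_deliver subroutine and obj.hits variable available in Varnish 3.x; the X-Cache header name is my own choice), you could mark cache hits and misses to make debugging easier:

```vcl
sub vcl_deliver {
    # obj.hits counts how many times this object was served from cache
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}
```

With this in default.vcl, a quick `curl --head` against port 80 shows at a glance whether the response came from the cache or from the backend.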

We also need to change the port our web server listens on, so that Varnish can make internal requests to port 8080. In this specific case I'm using an nginx server, so I will change all the vhosts as this example shows:

http {
    server {
        listen      8080;
        server_name localhost;
    }
}

After that we need to change the port the Varnish daemon runs on, so that every request first passes through Varnish on localhost:80 and is then forwarded to nginx on localhost:8080.

There are two ways to do this. The first is to set the daemon options in /etc/default/varnish:

DAEMON_OPTS="-a :80 \
    -T localhost:6082 \
    -f /etc/varnish/default.vcl \
    -S /etc/varnish/secret \
    -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
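If you prefer memory-backed storage over a file, the -s option also accepts the malloc backend; the size below is an arbitrary example, not a recommendation:

```
-s malloc,256M
```

File-backed storage survives larger working sets than available RAM, while malloc avoids disk I/O entirely; which one wins depends on your cache size and traffic.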

The second option is to start varnishd manually, specifying the listen address (-a) and the backend it should forward to (-b), like this:

$ sudo /opt/varnish/sbin/varnishd -a 127.0.0.1:80 -b 127.0.0.1:8080 -d
  Platform: Linux,3.11-2-amd64,x86_64,-sfile,-smalloc,-hcritbit
  200 269
  -----------------------------
  Varnish Cache CLI 1.0
  -----------------------------
  Linux,3.11-2-amd64,x86_64,-sfile,-smalloc,-hcritbit
  varnish-3.0.4 revision 9f83e8f

  Type 'help' for command list.
  Type 'quit' to close CLI session.
  Type 'start' to launch worker process.

start
  child (27738) Started
  200 0
  Child (27738) said Child starts
  Child (27738) said SMF.s0 mmap'ed 104857600 bytes of 104857600

Now check that you can still reach the web server directly at localhost:8080, and then verify that the Varnish headers are added to the responses of requests sent to localhost:80.

$ curl --head http://127.0.0.1:8080/phpinfo.php
  HTTP/1.1 200 OK
  Server: nginx/1.4.4
  Date: Sun, 01 Dec 2013 05:37:57 GMT
  Content-Type: text/html
  Connection: keep-alive
  X-Powered-By: PHP/5.5.5-1

$ curl --head http://127.0.0.1/phpinfo.php
  HTTP/1.1 200 OK
  Server: nginx/1.4.4
  Content-Type: text/html
  X-Powered-By: PHP/5.5.5-1
  Content-Length: 82858
  Accept-Ranges: bytes
  Date: Sun, 01 Dec 2013 05:37:50 GMT
  X-Varnish: 354088061 354088059
  Age: 92
  Via: 1.1 varnish
  Connection: keep-alive
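The header comparison above can be scripted. Here is a small helper (hypothetical, not part of Varnish) that reads HTTP response headers on stdin and reports whether the response passed through Varnish, based on the Via header shown in the output above:

```shell
#!/bin/sh
# via_varnish: succeed if the headers on stdin contain a
# "Via: ... varnish" line, i.e. the response passed through Varnish.
via_varnish() {
    grep -qi '^Via:.*varnish'
}

# Example with headers like those captured above with `curl --head`:
printf 'HTTP/1.1 200 OK\nVia: 1.1 varnish\n' | via_varnish \
    && echo "served by varnish" \
    || echo "direct from backend"
```

In practice you would pipe `curl --head http://127.0.0.1/ 2>/dev/null` into the helper from a monitoring script.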