Deploying and Configuring Varnish for Static-Dynamic Content Routing on CentOS

Building Varnish from Source

Begin by preparing the required system libraries. Compile and install PCRE2 from source:

tar -xzf pcre2-10.23.tar.gz
cd pcre2-10.23
./configure --prefix=/opt/pcre2
make && make install

Point pkg-config at the new PCRE2 installation, then install the remaining dependencies via YUM:

export PKG_CONFIG_PATH=/opt/pcre2/lib/pkgconfig
yum install pcre-devel ncurses-devel libedit-devel -y

Install the Docutils Python package, which the Varnish build requires for documentation generation:

tar -xzf docutils-0.13.1.tar.gz
cd docutils-0.13.1
python setup.py install

Create a dedicated service account and establish the necessary directories for cache storage and logging:

useradd -r cache_svc
mkdir -p /var/data/v_cache /var/data/v_log
chown -R cache_svc:cache_svc /var/data/v_cache /var/data/v_log

Extract the Varnish archive, configure the build with a custom prefix, and compile:

tar -xzf varnish-4.1.5.tar.gz
cd varnish-4.1.5
./configure --prefix=/opt/varnish_core --enable-dependency-tracking --enable-debugging-symbols
make && make install

Transfer the default VCL configuration template to the installation directory:

cp etc/example.vcl /opt/varnish_core/default.vcl

Daemon Execution Parameters

Launch the Varnish daemon using the command-line interface. The following example starts the server, allocating 2GB of RAM for caching, binding the administrative interface to localhost port 2000, and listening for client traffic on port 80:

/opt/varnish_core/sbin/varnishd -f /opt/varnish_core/default.vcl -s malloc,2G -T 127.0.0.1:2000 -a 0.0.0.0:80

Key runtime flags include:

  • -a: Defines the IP address and port for incoming client requests.
  • -b: Specifies a single backend server. Cannot be used concurrently with -f.
  • -d: Activates debugging mode, running the parent process in the foreground.
  • -f: Points to the VCL configuration file. Mutually exclusive with -b.
  • -h: Selects the hash algorithm with optional length parameters.
  • -s: Configures the cache storage mechanism. Common options are malloc (memory-based, fast) and file (disk-based).
  • -T: Sets the IP and port for the management CLI interface.
  • -w: Determines the min, max, and timeout values for worker threads.
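
For a cache larger than available RAM, the -s flag accepts a file-backed store instead of malloc. A hypothetical variant of the launch command above (the storage path and the 10G size are assumptions, not values from this setup):

```
/opt/varnish_core/sbin/varnishd \
    -f /opt/varnish_core/default.vcl \
    -s file,/var/data/v_cache/storage.bin,10G \
    -T 127.0.0.1:2000 \
    -a 0.0.0.0:80
```

The file store persists nothing across restarts; it simply lets the cache spill beyond physical memory at the cost of disk I/O.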

To halt the service, either terminate the process directly using pkill varnishd or gracefully shut it down via the admin interface:

/opt/varnish_core/bin/varnishadm -T 127.0.0.1:2000 stop

Architecture and Request Processing

Varnish operates through two primary processes: the Management process and the Child (Cache) process. The Management process handles VCL compilation, system monitoring, and CLI provisioning, periodically polling the Child process to ensure availability. If the Child process becomes unresponsive, the Management process initiates a restart.

Client requests traverse a sequence of VCL state engines:

  1. vcl_recv: Evaluates the incoming request. Unintelligible requests are piped directly to the backend, cacheable requests proceed to a cache lookup via hash, and non-cacheable requests bypass the cache via pass.
  2. vcl_hash: Calculates the hash key to search the cache index, resulting in either a hit or a miss.
  3. vcl_hit: When cached data is found, it is scheduled for delivery.
  4. vcl_miss: Triggers a fetch operation to retrieve data from the origin backend.
  5. vcl_backend_response: Processes the fetched data before handing it off for delivery.
  6. vcl_deliver: Finalizes the response payload and transmits it back to the client.

Routing Static and Dynamic Content

A typical deployment scenario involves routing static assets (HTML, CSS, images) to one backend and dynamic scripts (PHP) to another, while enforcing caching rules. Consider an environment with a Varnish proxy server and two distinct backend nodes: static_node (IP 192.168.10.20) and dynamic_node (IP 192.168.10.30).

Define the backend servers and access controls within the VCL file (/etc/varnish/routing.vcl):

backend static_node {
    .host = "192.168.10.20";
    .port = "80";
}

backend dynamic_node {
    .host = "192.168.10.30";
    .port = "80";
}

acl authorized_purgers {
    "127.0.0.1";
    "192.168.10.0"/24;
}
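
The /24 entry admits any client whose address shares its first three octets with 192.168.10.0. A rough shell illustration of that membership test (a sketch of the comparison only, not how Varnish implements ACLs):

```shell
# Compare the first three octets of an address against a /24 network.
in_24() { [ "${1%.*}" = "${2%.*}" ]; }

in_24 192.168.10.55 192.168.10.0 && echo allowed || echo denied   # allowed
in_24 10.0.0.5      192.168.10.0 && echo allowed || echo denied   # denied
```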

Implement request routing and cache invalidation logic in vcl_recv:

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ authorized_purgers) {
            return (synth(405, "Method not allowed"));
        }
        return (purge);
    }

    if (req.url ~ "\.(html|css|js|jpg|png|gif)$") {
        set req.backend_hint = static_node;
    } else if (req.url ~ "\.php$") {
        set req.backend_hint = dynamic_node;
        return (pass);
    }

    return (hash);
}
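
The routing branch above can be mirrored in shell to show how a given URL is classified; the backend names echo the VCL, and default_backend stands for whichever backend Varnish falls back to when neither pattern matches:

```shell
# Classify a URL path by extension, mirroring the vcl_recv routing rules.
classify() {
  case $1 in
    *.html|*.css|*.js|*.jpg|*.png|*.gif) echo static_node ;;
    *.php)                               echo dynamic_node ;;
    *)                                   echo default_backend ;;
  esac
}

classify /assets/logo.png   # static_node
classify /login.php         # dynamic_node
classify /api/v1/users      # default_backend
```

In the VCL itself, a request matching neither pattern keeps the first backend defined in the file (here static_node) as its implicit default.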

Handle the purge confirmation in vcl_purge. In Varnish 4, return (purge) from vcl_recv removes the object and all of its variants from the cache and then invokes the vcl_purge subroutine, so no PURGE handling is needed in vcl_hit, vcl_miss, or vcl_pass (and obj is read-only in vcl_hit, so the old set obj.ttl = 0s idiom no longer compiles). The synthetic confirmation can be customized here:

sub vcl_purge {
    return (synth(200, "Purged"));
}

A PURGE for an object that was never cached passes through the same path as a no-op and receives the same confirmation.

Specify TTL durations for static assets upon receiving the backend response. This is also the right place to record which backend served the object: beresp.backend.name (available since Varnish 4.1) is a string, whereas req.backend_hint is of type BACKEND and cannot be assigned to a header directly. Because the header is set here, it is stored with the cached object and delivered on hits as well:

sub vcl_backend_response {
    if (bereq.url ~ "\.(html|css|js|jpg|png|gif)$") {
        set beresp.ttl = 3600s;
    }
    set beresp.http.X-Backend-Server = beresp.backend.name;
}

Inject a custom header into the client response to verify cache status:

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache-Status = "HIT from " + server.ip;
    } else {
        set resp.http.X-Cache-Status = "MISS";
    }
}
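
If different asset classes warrant different lifetimes, vcl_backend_response can branch per extension. The durations below are illustrative assumptions, not values from this deployment:

```
sub vcl_backend_response {
    if (bereq.url ~ "\.(jpg|png|gif)$") {
        set beresp.ttl = 24h;      # images change rarely
    } else if (bereq.url ~ "\.(html|css|js)$") {
        set beresp.ttl = 1h;
    }
}
```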

Load the active configuration into the running Varnish instance via the administrative port:

varnishadm -T 127.0.0.1:2000 vcl.load routing /etc/varnish/routing.vcl
varnishadm -T 127.0.0.1:2000 vcl.use routing

Tags: Varnish, CentOS, reverse proxy, caching, VCL

Posted on Thu, 14 May 2026 23:38:54 +0000 by nscipione