How to use nginx as a caching server?
In this tutorial, we’ll explore the caching functionality of nginx by creating a simple caching server. By following the steps below, you can set up your own environment to cache responses using nginx.
For those who don't know what nginx is: it's a high-performance, open-source web server that's also frequently used as a reverse proxy, load balancer, and content cache. It was designed to handle large numbers of concurrent connections efficiently, making it a go-to choice for high-traffic websites.
First, install nginx if you haven’t already:
sudo apt install nginx
Then enable and start it via systemd:
sudo systemctl enable --now nginx
Next, create a dedicated directory for caching and secure it:
sudo mkdir -p /var/cache/nginx
sudo chmod 700 /var/cache/nginx
Open the main nginx configuration file /etc/nginx/nginx.conf in your favourite editor and replace the existing http { ... } block with the following. (Note that this drops distribution defaults such as the mime.types and sites-enabled includes; that's fine for this tutorial, since we only serve one proxied location.)
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        listen 80;
        server_name caching_server;

        location / {
            proxy_pass http://127.0.0.1:1234;
            proxy_cache my_cache;
            proxy_cache_valid 200 1h;
            proxy_cache_valid 404 30m;
            proxy_cache_use_stale error timeout invalid_header updating;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
It may look like a lot of lines, but it's not complicated at all. Here is a detailed description of what each option does:
proxy_cache_path configures the cache itself:

- /var/cache/nginx - specifies the directory where cached files will be stored
- levels=1:2 - defines the directory structure of the cache; 1:2 means the cache will have two levels of subdirectories, so a cached file ends up at a path like /var/cache/nginx/<1 char>/<2 chars>/<hash>
- keys_zone=my_cache:10m - allocates a shared memory zone named my_cache with 10 MB of storage for cache keys and metadata; this memory is used for fast lookup of cached responses
- max_size=10g - limits the total size of the cache to 10 GB; when the limit is reached, the least recently used files are removed
- inactive=60m - cached items are removed if not accessed within 60 minutes
- use_temp_path=off - ensures cached files are written directly to the cache directory instead of a temporary directory first
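To make the levels=1:2 layout concrete: nginx names each cached file with the MD5 hash of the cache key (by default $scheme$proxy_host$request_uri) and builds the subdirectory names from the tail of that hash. A small sketch, assuming the backend from this tutorial at 127.0.0.1:1234:

```shell
# Sketch: compute where nginx would store the cached entry for GET /
# through our proxy. The cache key for this setup is "http127.0.0.1:1234/".
key='http127.0.0.1:1234/'
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=${hash: -1}     # levels=1:2 -> first level: last hex character of the hash
l2=${hash: -3:2}   # second level: the two characters before it
echo "/var/cache/nginx/$l1/$l2/$hash"
```

The printed path is illustrative; the actual file only appears after nginx has cached a response.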
The server block defines settings for the virtual server:

- listen 80 - specifies the port to listen on
- server_name caching_server - specifies the server name
- location / - specifies settings for a specific URL path, in this case /
- proxy_pass http://127.0.0.1:1234 - directs requests to the backend server at the given address; in this example we direct requests to localhost on port 1234
- proxy_cache my_cache - enables caching for this location using the my_cache zone defined above
- proxy_cache_valid 200 1h - responses with status 200 (OK) are cached for 1 hour
- proxy_cache_valid 404 30m - responses with status 404 (Not Found) are cached for 30 minutes
- proxy_cache_use_stale error timeout invalid_header updating - allows serving stale cached content in specific backend scenarios: error (server unavailable), timeout (server takes too long to respond), invalid_header (response from the backend is invalid), updating (a fresh version of the cached entry is currently being fetched)
- add_header X-Cache-Status $upstream_cache_status - adds a custom header to responses reporting cache status: MISS (response not found in cache), HIT (response served from cache), EXPIRED (cached response expired and a new one was fetched)
If you keep server_name caching_server in the configuration rather than changing it to your own name, make sure your system can resolve the hostname caching_server. One way to do this is by adding an entry to your /etc/hosts file:
echo "127.0.0.1 caching_server" | sudo tee -a /etc/hosts
Check that the syntax of the edited config file is correct:
sudo nginx -t
If everything is OK, reload nginx via systemctl so the changes are applied:
sudo systemctl reload nginx
Now that you know what each option does, we can test the new configuration. Let's create a simple backend server by running netcat in a while loop, returning a fixed response each time we access localhost on port 1234.
# Serve one fixed HTTP response per connection. The -p flag is for
# GNU/traditional netcat; with BSD netcat use "nc -l 1234" instead.
while true; do
    echo -e "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 13\r\n\r\nHello, world!" | nc -l -p 1234
done
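With the loop above running, you can sanity-check the backend directly before involving nginx:

```shell
# Query the netcat backend directly, bypassing nginx and the cache.
curl -s http://127.0.0.1:1234/
# Should print: Hello, world!
```

Note that Content-Length: 13 matches the body "Hello, world!" exactly, so curl knows where the response ends.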
Now, from another terminal session, we can curl our caching server and see whether caching works:
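For example, requesting the page twice and showing only the cache header (this assumes the caching_server hosts entry from above):

```shell
# Request through nginx; -D - dumps response headers, -o /dev/null drops the body.
curl -s -D - -o /dev/null http://caching_server/ | grep -i 'x-cache-status'
# First run should print:  X-Cache-Status: MISS
# A second run should print:  X-Cache-Status: HIT
```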
The first request should return an X-Cache-Status: MISS, meaning it wasn’t in the cache initially. Subsequent requests should return X-Cache-Status: HIT, indicating the response has been cached.
It works! Let's check the contents of the /var/cache/nginx directory. You should see the cached response stored on disk.
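To inspect the cache on disk (sudo is needed since we made the directory mode 700); the exact file names are MD5 hashes of the cache keys and will differ on your machine:

```shell
# List every cached entry on disk.
sudo find /var/cache/nginx -type f
# Each file holds a small binary header, a "KEY: <cache key>" line, and the
# raw upstream response; grep -a treats the binary files as text.
sudo grep -ar '^KEY:' /var/cache/nginx
```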
That’s it! By following these steps, you’ve set up a simple nginx caching server and learned how to inspect whether responses are coming from cache. You can adapt these settings to fit your production environment or further explore advanced caching features like cache purging or conditional caching.