Some API responses can safely be cached for a couple of seconds, minutes, or even hours. If the speed and stability of your API take priority over the freshness of its data, this article is for you.
The API itself
Imagine an API that powers your dynamic search features, but also serves dumps of all your data in specific variations.
In our case, it's the Locations API — an API used to search, suggest, and resolve locations in various situations. For example, one dump could be all airports per language and country. The data is recalculated daily, and these dumps and popular searches are the most requested resources on the API, so why not cache them?
Thanks to the caching policy we are saving 10TB of bandwidth monthly as 85% of all Locations API requests are served directly from the Cloudflare cache.
To cache our APIs we are using Cloudflare’s “Cache Everything” Page Rule, for example:
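As a sketch (the URL pattern and TTL below are illustrative, not our actual configuration), such a Page Rule boils down to:

```
If the URL matches:  api.example.com/locations/*
Then the settings are:
  Cache Level:     Cache Everything
  Edge Cache TTL:  an hour
```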
With this rule, all repeated requests within an hour are served from the Cloudflare cache. Even a cache of just a couple of seconds can significantly lower the load on your origin.
Sounds good enough, where is the catch?
We were happy with this simple caching policy until our origin (or one of its dependencies) went down and we started seeing errors a couple of minutes later.
Any request that was not previously cached, or whose cache has expired on the CDN edge, will fail with an error. Of course, you could raise the Edge Cache TTL to lower the impact of an outage, as many more API requests would be cached, but that would mean sacrificing the freshness of your data, or implementing cache invalidation whenever something changes in your API.
Magic stale cache, but only when needed
Have you heard of the stale-while-revalidate and stale-if-error Cache-Control directives?
These directives let Cloudflare know that we don't mind it serving a stale cache while the cache is being revalidated, or when the origin fails. All we need to do is include the following directives in the API response:
Cache-Control: stale-while-revalidate=30, stale-if-error=60
This header tells Cloudflare that it's fine to serve a stale cache (for up to 30s past its expiry) while revalidating it in the background, regardless of whether your origin is up or down.
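As a minimal sketch, this is how an origin might compose such a header value before attaching it to a response (the helper name and the values are illustrative, not part of any Cloudflare API):

```python
def build_cache_control(max_age: int, stale_while_revalidate: int, stale_if_error: int) -> str:
    """Compose a Cache-Control header value with stale directives.

    max_age: seconds the edge may serve the cached response as fresh.
    stale_while_revalidate: seconds a stale copy may still be served
        while the edge refetches from the origin in the background.
    stale_if_error: seconds a stale copy may be served if the origin errors.
    """
    return (
        f"max-age={max_age}, "
        f"stale-while-revalidate={stale_while_revalidate}, "
        f"stale-if-error={stale_if_error}"
    )

# e.g. fresh for an hour, a 30s grace period while revalidating,
# and a full day of stale responses if the origin is down:
header = build_cache_control(3600, 30, 86400)
```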
In the case of our Locations API, thousands of responses a day are served immediately from the stale cache during revalidation.
stale-if-error is the real hero of this story. It instructs Cloudflare to serve a stale cache (for up to 60s in the example above) if there is an error on your origin. We use stale-if-error with a value of one day to make sure there is a stale cache on all recently used Cloudflare edges.
These settings ensure our API maintains availability on the most used edges around the globe.
To keep the explanation of how the different cache headers interact as simple as possible, here is a handy table showing how they affect a request made by the client.
- The first column shows a second request made by a client some time after the first request that cached the data.
- The second column shows whether the upstream is able to respond or not.
- The remaining columns represent the different headers and their effect. Read them from left to right: max-age (or Edge Cache TTL) is evaluated first, stale-while-revalidate second, and so on.
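To make that left-to-right evaluation concrete, here is a rough simulation of the decision an edge makes for a repeated request. This is a simplified model with hypothetical names, not real CDN code, and it assumes the example values from above:

```python
def edge_decision(age: int, origin_up: bool,
                  max_age: int = 3600,
                  stale_while_revalidate: int = 30,
                  stale_if_error: int = 86400) -> str:
    """Cache status for a request arriving `age` seconds after the
    response was cached (simplified left-to-right evaluation)."""
    if age <= max_age:
        return "HIT"    # still fresh, origin not contacted at all
    if age <= max_age + stale_while_revalidate:
        return "STALE"  # served stale while revalidating in the background
    if not origin_up and age <= max_age + stale_if_error:
        return "STALE"  # origin is down, the stale copy saves the day
    return "MISS" if origin_up else "ERROR"

# e.g. the cache expired ten minutes ago and the origin is down:
print(edge_decision(age=4200, origin_up=False))  # STALE
```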
Try it yourself
- Create and enable a Cloudflare Page Rule with Cache Everything (set the lowest possible Edge Cache TTL unless you like waiting)
- Make sure your API response headers include cache-control with stale-if-error
- Hit your API, check for cf-cache-status: HIT
- Shut down your origin and wait for the cache to expire
- Repeat the request; instead of a 503 error you should get a response from the stale cache. To verify it, check for cf-cache-status: STALE
That’s it! Do you know any other caching tricks? Let us know!
Are you interested in similar articles from Kiwi.com writers, or in working with us? Check our open positions and subscribe to the code.kiwi.com newsletter so we can let you know about new articles.