The safest way to start using the Stellate Edge Cache in production is to:
- Start using Stellate with caching disabled to get access to the GraphQL Analytics for your production traffic and understand which queries are particularly valuable to cache
- Set up edge caching, one query at a time once you know which queries you really want to cache
If you have questions at any point while setting up Stellate, do not hesitate to get in touch, either via the in-app messenger (included on this page as well) or by emailing [email protected].
We'd ❤️ to help you get started!
After signing up to Stellate and creating a service for your GraphQL API, disable caching by going to the Cache > General tab on your service’s dashboard and setting the Caching state to disabled:
With caching disabled, Stellate only stores analytics for your GraphQL requests and does not touch them at all, acting as a pass-through gateway. To start using it for production traffic, you'll need to take two steps:
- Update your client to send requests to your Stellate service. Instead of sending GraphQL requests to mydomain.com/api, have your GraphQL client send them to your service's domain (either name.stellate.sh or your custom domain). Since caching is disabled, your clients will keep working as before; however, we still recommend testing this in a development environment first.
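For a client using plain `fetch`, that change can look like the sketch below. The service domain is a placeholder for your own, and `graphqlRequest` is just an illustrative helper, not part of any Stellate SDK:

```typescript
// Before: requests went straight to your origin, e.g. https://mydomain.com/api.
// After: point the client at your Stellate service domain instead.
const GRAPHQL_ENDPOINT = "https://name.stellate.sh"; // placeholder domain

// Minimal GraphQL-over-HTTP request. Only the URL changes; the request
// shape stays the same, so existing clients keep working as before.
async function graphqlRequest(
  query: string,
  variables: Record<string, unknown> = {}
): Promise<unknown> {
  const res = await fetch(GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  return res.json();
}
```

Because Stellate sits in front of your existing API, this is typically a one-line change in your client's GraphQL endpoint configuration.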
Once you are passing your production traffic through Stellate, you’ll start seeing analytics for your GraphQL API! 🎉
Your dashboard will show information on the queries and mutations used and the errors returned from your backend, as well as some additional bits of information. Once you have sufficient analytics data to know which queries (or types) are the most valuable to cache, you're ready to take the next step and set up edge caching.
To get a better understanding of how the Stellate Edge Cache works under the hood, we recommend reading An Introduction to Stellate. This will help with the next steps.
The first step is to set up Scopes, which make sure that you do not inadvertently share cached information with somebody who doesn't have access to it. Scopes are explained in detail in the introduction to Scopes, so we recommend reading that documentation article before continuing.
With the required scopes configured, we can now look into cache rules and start by deleting the default cache rule, which is configured to cache all queries for 15 minutes.
Since we are taking a conservative approach in this guide, we also recommend thinking about any types (or fields) that you definitely do not want to cache at all. This could include information that changes rapidly or that needs to be accurate at all times.
Configure cache rules for those types with a maxAge and swr of 0 to make sure that no response including any of those types is ever cached. You can also target specific fields with cache rules if you need that additional specificity.
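Spelled out as values, such a "never cache" rule boils down to the following. The object shape here is illustrative only — you configure the actual rule through the dashboard's cache rules UI:

```typescript
// Sketch of a cache rule that disables caching for a type. The field names
// are an assumption for illustration; the dashboard is where this is set.
const neverCacheRoadster = {
  description: "Never cache Roadster data",
  types: ["Roadster"], // every response containing this type is affected
  maxAge: 0,           // never serve this from the cache
  swr: 0,              // no stale-while-revalidate window either
};
```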
With our sample SpaceX API, we do not ever want to cache information about the Roadster floating around in our star system, so we've disabled caching for it.
With scopes configured and cache rules for types (or fields) you don’t want to cache set up, we can now (finally) work on caching data. This will be an ongoing cycle of:
- Identify a query (or type) to cache
- Implement the required invalidations in your backend
- Configure the corresponding cache rules to enable caching for that query
The queries that are important for you to cache depend heavily on your specific use case. If there are any that have particularly slow response times or cause a lot of load on your server, those usually make good initial targets.
On top of that, data that is public, read-heavy, and/or doesn’t change frequently will have a higher cache hit rate and it could thus make sense to prioritize those queries.
Once you have identified a query (or type, or field) to cache, we recommend thinking about how you want that cached data to be invalidated. If you are fine with stale data for a certain time, you don't have to make changes to your application and can instead rely on the configured time-based expiration (the rule's maxAge and swr).
However, if you want to have fine-grained control over cache expiration, you can implement custom purging logic in your application based on the Purging API we make available for each service.
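As a rough sketch of what application-driven purging can look like: after a write, the backend sends a purge mutation to the service's purging endpoint. The endpoint URL, header name, and mutation name below are all placeholders — consult the Purging API documentation for the real shape for your service:

```typescript
// Hypothetical purge call. Endpoint, token header, and mutation name are
// placeholders; check the Purging API docs for your service's actual values.
const PURGE_ENDPOINT = "https://admin.stellate.co/my-service"; // placeholder
const PURGE_TOKEN = "<your-purge-token>"; // placeholder

// Build a GraphQL purge mutation for a set of entity IDs.
function buildPurgeMutation(typeName: string, ids: string[]): string {
  const idList = ids.map((id) => JSON.stringify(id)).join(", ");
  return `mutation { purge${typeName}(id: [${idList}]) }`;
}

// Example: purge cached results for specific launches after updating them.
async function purgeLaunches(ids: string[]): Promise<void> {
  await fetch(PURGE_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "stellate-token": PURGE_TOKEN, // placeholder header name
    },
    body: JSON.stringify({ query: buildPurgeMutation("Launch", ids) }),
  });
}
```

Calling this right after your database write keeps the cached data fresh without giving up caching for reads.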
If you pass your mutations through Stellate as well, we automatically take care of some invalidation. That behavior is documented on Automatic Purging based on Mutations; however, there are some edge cases, e.g. list invalidation, where we cannot automatically figure out which results to purge.
Once you are happy with the invalidation logic, whether handled automatically by Stellate, implemented in your application, or relying on time-based expiration, you can go ahead and add a new cache rule for your query.
You can either target a named query (as seen in the following screenshot), or target types and fields included in the response. If you do not target a named query, keep in mind that the cache rules will apply to every response returning that type or field.
In our example, we targeted the launchesPast query and cache it for a day (86400 seconds) with a one-hour (3600 seconds) stale-while-revalidate window. We also cache that information in the Public scope because it does not change depending on who requests it. If your data is specific to a user (or a set of users), make sure you select the corresponding scope when creating the cache rule.
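Written out as values, the rule from the screenshot amounts to the following. As before, the object shape is only illustrative — the rule is created in the dashboard's cache rules UI:

```typescript
// The launchesPast cache rule spelled out as values (shape is illustrative;
// the dashboard UI is where this is actually configured).
const cacheLaunchesPast = {
  queryName: "launchesPast", // the named query this rule targets
  maxAge: 86_400,            // cache for one day
  swr: 3_600,                // serve stale for up to an hour while revalidating
  scope: "PUBLIC",           // same cached data for every user
};
```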
With the cache rule in place, and invalidation taken care of, we can now enable edge caching on the Cache > General settings page.
You can test that the cache is working by sending a matching query to your Stellate service and looking at the gcdn-cache header included with the response. It will initially show a MISS, indicating that the query was not cached yet. However, if you send it again, you will see a cache HIT.
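If you want to script this smoke test, the check boils down to reading that one header. A small helper sketch — the HIT and MISS values come from this guide; anything else is treated as unknown here:

```typescript
// Classify a response by its gcdn-cache header. HIT and MISS are the values
// described in this guide; any other (or missing) value is reported as OTHER.
function edgeCacheStatus(headers: Record<string, string>): "HIT" | "MISS" | "OTHER" {
  const value = (headers["gcdn-cache"] ?? "").toUpperCase();
  if (value.startsWith("HIT")) return "HIT";
  if (value.startsWith("MISS")) return "MISS";
  return "OTHER";
}
```

Run your test query twice and pass each response's headers through this helper: the first call should report MISS and the second HIT.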
You’re now edge-caching your first GraphQL query! 🎉
If you do not see a cache HIT even after repeatedly sending the query, reach out to [email protected]. We'd be happy to help you look into this.
Once you're happy with the results of the initial caching attempts, repeat the steps: first think about the next best query (or type, or field) to cache, decide how to handle invalidation and whether that requires changes to your application, and then add the required cache rules.
You will be able to track your progress on your dashboard: cache hit rate, response times, and bandwidth saved. Your backend service will likely see reduced load and improved response times as well.