Modern systems are fast. Very fast. So why do they seem so slow? Controls take too long to populate, web pages take too long to display, documents take ages to retrieve, and applications feel like they have to “think” for a while before they do anything. All of this runs across ridiculously fast networks that somehow behave as if they were connected with pieces of damp string. What’s gone wrong?
There are lots of possible reasons for this, but most of them start with designs that didn’t take performance into account (see System Performance – Part 1 – Outrun The Bear). If you’ve inherited a system like this, you have a lot of work ahead of you. If you’re just designing or implementing a system, now is the time to consider performance.
It might sound obvious, but two of the main reasons for slow system performance, especially across networks, are that too much data is being sent around the system, or that the data takes too long to generate. How can this be prevented or fixed?
Increasing the power of the systems generating the data and using better network connections might seem a good first approach. It is a very expensive solution, though, especially as most data centres already have very powerful servers running either natively or as VMs, and gigabit networking is a bare minimum. You can add CPUs and RAM to VMs, but that will only get you so far, especially if the algorithms you use don’t lend themselves to parallel processing. Generally, unless you’re using under-powered or outdated hardware, this isn’t actually much of a solution.
Another way that often makes a large impact is one that’s used everywhere from CPUs, through hard drives and databases to the Internet: caching. The basic function of a cache is to sit between the data source and client. Instead of going directly to the data source, the client tries to retrieve the data it needs from the cache. If the data is in the cache (a “hit”), then it is returned immediately to the client. Because the data is held locally in memory, or at least very close by on the network, the turnaround time is much faster than making a query to the data source.
However, if the data is NOT in the cache then the system must go to the data source anyway. Before the data is returned to the client, it is placed in the cache. This cache “miss” takes slightly longer than a query without the cache, but once the data is in the cache, future hits will be faster and time will be saved overall.
All of this behaviour should be abstracted away in a repository or similar class.
Caching doesn’t speed up the initial production or transfer of data, but it does increase the overall speed of retrieval, based on the principle that the fastest results are the ones you don’t have to generate, and the fastest data transfer is the one you never make. Ideally, the next time the data is retrieved from the source will be either when it hasn’t been read for quite some time, or when its time-to-live (TTL) has expired and the cache has automatically dropped it.
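As a minimal sketch of that repository idea, assuming a single process with an in-memory cache and a fixed TTL (the class and parameter names here are illustrative, not from any particular library):

```python
import time
from typing import Any, Callable


class CachingRepository:
    """A repository that hides a TTL-based in-memory cache behind get()."""

    def __init__(self, fetch_from_source: Callable[[str], Any], ttl_seconds: float = 300.0):
        self._fetch = fetch_from_source   # the expensive query we want to avoid repeating
        self._ttl = ttl_seconds           # how long a cached entry stays valid
        self._store: dict[str, tuple[float, Any]] = {}  # key -> (expiry time, value)

    def get(self, key: str) -> Any:
        entry = self._store.get(key)
        if entry is not None:
            expires_at, value = entry
            if time.monotonic() < expires_at:
                return value              # hit: no trip to the data source
            del self._store[key]          # TTL expired: drop the stale entry

        value = self._fetch(key)          # miss: pay the full cost once
        self._store[key] = (time.monotonic() + self._ttl, value)
        return value
```

The client simply calls `repo.get("customer:42")` and never needs to know whether the result was a hit or a miss.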
The type of cache(s) you choose depends on many factors: is the cached data only needed by a single process, or is it to be shared across several? Is the data to be shared across several machines? If so, should the cache be installed on one of those machines, or on a separate one? What operating systems and languages are being used? Do you need the cache to run on the same OS as the rest of the processes? How many caches do you need?
You may decide to have more than one type of cache, each running in a different area of the system. That’s quite normal – each area has its own requirements. If you have plenty of memory and only need to access the data from a single machine, caching in local memory is probably the best, and fastest, method. If you need to access the cache from multiple machines you might have one cache on each machine, or you might use a single cache running on its own machine connected by the LAN – something like Redis, for example.
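For the shared case, a sketch along the same lines using the redis-py client (the host name, key format, TTL, and the `build_report()` function are all assumptions for illustration):

```python
import json
import redis  # redis-py client: pip install redis

# One Redis instance on its own machine, shared by every app server on the LAN.
r = redis.Redis(host="cache-01.internal", port=6379, decode_responses=True)

def get_report(report_id: str) -> dict:
    key = f"report:{report_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)            # hit: served from the shared cache

    report = build_report(report_id)         # miss: hypothetical expensive generation step
    r.set(key, json.dumps(report), ex=600)   # Redis drops the key itself after 600 s
    return report
```

Because the cache lives on its own machine, every app server sees the same entries, and a report generated once is served cheaply to all of them.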
This is all very well and good, but do you actually NEED caching in your system, or are your efforts best spent elsewhere? In my experience, caching is often the answer, sometimes it isn’t, and in general the answer is the same as for so many other engineering problems: it depends. To help you decide, I’ve created a questionnaire below that lets you answer questions about your environment and data and presents the result on a scale, because the other thing about engineering problems is that the answers aren’t always black and white; sometimes they’re shades of grey.
Should I Cache My Data?
Where is the data retrieved from?
How often does the data change?
Does the data have an expiry time or Time-To-Live (TTL)?
How complex is it to calculate the data?
How difficult is it to implement the cache?
How heavy is the network load?
How is your network connected?
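The interactive version of this questionnaire scores your answers and presents a result such as “Probably Not” or “Probably Yes”. As a rough, purely hypothetical sketch of how answers like these might combine into a shades-of-grey score (the question keys, weights, and thresholds below are illustrative, not the real scoring):

```python
# Hypothetical scoring for a few of the questions above; each answer maps to a
# score from 0 (caching won't help) to 2 (caching should help a lot).
QUESTION_SCORES = {
    "data_source": {"local memory": 0, "local disk": 1, "remote service": 2},
    "change_rate": {"constantly": 0, "hourly": 1, "rarely": 2},
    "generation_cost": {"trivial": 0, "moderate": 1, "expensive": 2},
}

def cache_recommendation(answers: dict[str, str]) -> str:
    total = sum(QUESTION_SCORES[q][a] for q, a in answers.items())
    maximum = sum(max(scores.values()) for scores in QUESTION_SCORES.values())
    ratio = total / maximum                  # 0.0 .. 1.0 on the grey scale
    if ratio < 0.35:
        return "Probably Not"
    if ratio < 0.65:
        return "It Depends"
    return "Probably Yes"

print(cache_recommendation({
    "data_source": "remote service",
    "change_rate": "rarely",
    "generation_cost": "expensive",
}))  # -> Probably Yes
```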