What is the latency WITHIN a data center? I ask this assuming there are orders of magnitude of difference

Solution 1:

There are several versions of the "latency charts everyone should know" such as:

  • https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html
  • https://gist.github.com/jboner/2841832
  • https://computers-are-fast.github.io/

The thing is, in reality, there is more than just latency. It's a combination of factors.

So, what's the network latency within a data center? I would say it's "always" below 1 ms. Is it faster than RAM? No. Is it close to RAM? I don't think so.
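
If you'd rather measure than trust a chart, you can sanity-check that sub-millisecond claim on your own network by timing TCP handshakes to a neighboring host. A minimal Python sketch; the hostname and port are placeholders, not anything from your setup:

    # Rough intra-DC latency probe: time TCP connects (one SYN/SYN-ACK
    # round trip each) to another host. HOST and PORT are placeholders.
    import socket
    import statistics
    import time

    HOST = "some-other-host.internal"  # hypothetical: a host in the same DC
    PORT = 22                          # any open TCP port will do

    samples = []
    for _ in range(20):
        start = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=1.0):
            pass  # connection established = one handshake round trip done
        samples.append((time.perf_counter() - start) * 1000)  # to ms

    print(f"median TCP connect: {statistics.median(samples):.3f} ms")

A connect time includes a bit of kernel overhead on top of the raw round trip, but it tells you which order of magnitude you're in.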

But the question remains: is it relevant? Is that the datum you need to know? Your question makes sense to me. Since everything has a cost, should you get more RAM so that all the data can stay in RAM, or is it OK to read from disk from time to time?

Your "assumption" is that if the network latency is higher (slower) than the speed of the SSD, you won't be gaining by having all the data in RAM as you will have the slow on the network.

And it would appear so. But you also have to take concurrency into account. If you receive 1,000 requests for the data at once, can the disk handle 1,000 concurrent requests? Of course not, so how long will it take to serve those 1,000 requests? Compared to RAM?
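
Here's a deliberately crude way to get a feel for that question, assuming Python on a Unix box. Note the page-cache caveat in the comments, which in itself makes the point that fast reads come from RAM:

    # Toy benchmark (Unix-only because of os.pread): 1,000 concurrent 4 KiB
    # reads from a dict in RAM vs. from a file. Big caveat: the file was just
    # written, so the OS page cache will serve the "disk" reads from RAM too;
    # for a real test, read a cold file (or drop caches first).
    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    BLOCK, COUNT = 4096, 1000
    path = "testfile.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(BLOCK * COUNT))     # 4 MiB test file
    in_ram = {i: os.urandom(BLOCK) for i in range(COUNT)}
    fd = os.open(path, os.O_RDONLY)

    def read_ram(i):
        return in_ram[i]

    def read_disk(i):
        return os.pread(fd, BLOCK, i * BLOCK)  # positional read, thread-safe

    for name, fn in (("RAM", read_ram), ("disk", read_disk)):
        with ThreadPoolExecutor(max_workers=100) as pool:
            start = time.perf_counter()
            list(pool.map(fn, range(COUNT)))
        print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")

    os.close(fd)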

It's hard to boil it down to a single factor such as heavy load. But yes, if you had a single operation going, the latency of the network is such that you would probably not notice the difference between SSD and RAM.

Similarly: until 12 Gbps disks showed up on the market, a 10 Gbps network link would not be overloaded by a single stream, since the disk was the bottleneck.

But remember that your disk is doing many other things, your process isn't the only process on the machine, your network may be carrying other traffic, etc.

Also, not all disk activity means network traffic. A database query coming from an application to the database server is only minimal network traffic. The response from the database server may be very small (a single number) or very large (thousands of rows with multiple fields). To perform the operation, a server (database server or not) may need to do multiple disk seeks, reads and writes, yet send only a very small bit back over the network. It's definitely not one-for-one network-disk-RAM.


So far I avoided some details of your question - specifically, the Redis part.

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. - https://redis.io/

OK, so that means everything is in memory. Sorry, that fast SSD drive won't help you here. Redis can persist data to disk so it can be loaded back into RAM after a restart, but that's only there so you don't "lose" data or have to repopulate a cold cache after a restart. So in this case, you'll have to use the RAM, no matter what: you'll need enough RAM to hold your whole data set. Without enough RAM, your OS will presumably use swap - probably not a good idea.
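
You can ask Redis directly where your data lives and how much RAM it's using. A minimal sketch, assuming a Redis server on localhost:6379 and the redis-py client (pip install redis):

    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("greeting", "hello")
    print(r.get("greeting"))                      # b'hello', served from RAM

    # How much RAM does the dataset use, and what is the configured ceiling?
    print(r.info("memory")["used_memory_human"])
    print(r.config_get("maxmemory"))              # '0' means no limit

    # Persistence settings: RDB snapshots and/or the append-only file. They
    # only control reload-after-restart; reads are always served from RAM.
    print(r.config_get("save"))
    print(r.config_get("appendonly"))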

Solution 2:

There are many layers of cache in computer systems. Inserting one at the application layer can be beneficial, caching API responses, database queries, and possibly temporary data like user sessions.

Data stores like Redis provide such a service over a network (fast) or a UNIX socket (even faster), much like you would use a database.
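
If you want to see the TCP-versus-UNIX-socket difference yourself, time PINGs over both transports. A sketch assuming redis-py; the socket path is a guess, so check the unixsocket line in your redis.conf:

    import time
    import redis

    tcp = redis.Redis(host="localhost", port=6379)
    sock = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")

    def avg_ping_us(client, n=1000):
        start = time.perf_counter()
        for _ in range(n):
            client.ping()                  # one full request/response each
        return (time.perf_counter() - start) / n * 1e6

    print(f"TCP:  {avg_ping_us(tcp):.0f} us/ping")
    print(f"UNIX: {avg_ping_us(sock):.0f} us/ping")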

You need to measure how your application actually performs, but let's make up an example. Say a common user request does 5 API queries that take 50 ms each: 250 ms of user-detectable latency. Contrast that with caching the results. Even if the cache is in a different availability zone across town (not optimal), hits are probably 10 ms at most, so the 5 queries take about 50 ms total: a 5x speedup.
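
Spelled out:

    # The back-of-the-envelope math from the made-up example above.
    queries, api_ms, cache_ms = 5, 50, 10
    print(queries * api_ms)                           # 250 ms uncached
    print(queries * cache_ms)                         # 50 ms from cache
    print((queries * api_ms) / (queries * cache_ms))  # 5.0x speedup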

In reality, the database and storage systems have their own caches as well. However, usually it is faster to get a pre-fetched result than to go through the database engine and storage system layers again. Also, the caching layer can take significant load off of the database behind it.
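
In application code this is usually the cache-aside pattern: check the cache first, fall back to the database on a miss, and store the result with a TTL. A sketch assuming redis-py; the key scheme and the database helper are made up for illustration:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def fetch_user_from_db(user_id):
        # stand-in for a real (slow) database query
        return {"id": user_id, "name": "example"}

    def get_user(user_id, ttl=300):
        key = f"user:{user_id}"
        hit = r.get(key)
        if hit is not None:
            return json.loads(hit)               # RAM hit, database untouched
        row = fetch_user_from_db(user_id)        # miss: pay the full cost once
        r.set(key, json.dumps(row), ex=ttl)      # TTL so stale data ages out
        return row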

For an example of such a cache in production, look no further than the Stack Overflow infrastructure blog on architecture. Hundreds of thousands of HTTP requests generating billions of Redis hits is quite significant.

Memory is expensive.

DRAM at ~100 ns access times is roughly 100x faster than solid-state permanent storage. It is relatively inexpensive for this performance. For many applications, a bit more RAM buys valuable speed and response time.
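
That ratio is just the two assumed access times divided:

    # Back-of-the-envelope, with assumed typical access times.
    dram_ns = 100       # one DRAM access
    nvme_ns = 10_000    # ~10 us for a fast NVMe read; SATA SSDs are slower
    print(f"DRAM is ~{nvme_ns // dram_ns}x faster per access")   # ~100x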

Tags:

Cache