FastCache.Cached

<p><img src="https://raw.githubusercontent.com/neon-sunset/fast-cache/main/img/cached-small-transparent.png" width="180" height="180" align="right" /></p>


The fastest cache library written in C# for items with a set expiration time. Easy to use, thread-safe and light on memory.

Optimized to scale from dozens to millions of items. Features lock-free reads and writes, allocation-free reads and automatic eviction.

Credit to Vladimir Sadov for his implementation of NonBlocking.ConcurrentDictionary, which is used as the underlying store.

When to use FastCache.Cached over Microsoft.Extensions.Caching.Memory.MemoryCache

Quick start

Install

`dotnet add package FastCache.Cached` or `Install-Package FastCache.Cached`

How to use

Get a cached value or save a new one with an expiration of 60 minutes

```csharp
public SalesReport GetReport(Guid companyId)
{
  if (Cached<SalesReport>.TryGet(companyId, out var cached))
  {
    return cached;
  }

  var report = // Expensive operation: retrieve and compute data

  return cached.Save(report, TimeSpan.FromMinutes(60));
}
```

Get cached value or call a method to compute and cache it

```csharp
var report = Cached.GetOrCompute(companyId, GetReport, TimeSpan.FromMinutes(60));
```

Async version (works with `Task<T>` and `ValueTask<T>`)

```csharp
var report = await Cached.GetOrCompute(companyId, GetReportAsync, TimeSpan.FromMinutes(60));
```
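The second argument is just an async delegate keyed by the first argument. As a minimal sketch of what such a delegate might look like (the `SalesReport` record and the simulated I/O below are placeholders for your own types and data access, not part of the library):

```csharp
using System;
using System.Threading.Tasks;
using FastCache;

public readonly record struct SalesReport(decimal Total);

public static class Reports
{
    // Placeholder for an expensive data-access call.
    public static async Task<SalesReport> GetReportAsync(Guid companyId)
    {
        await Task.Delay(100); // simulate I/O
        return new SalesReport(42m);
    }

    public static async Task<SalesReport> GetCachedReportAsync(Guid companyId)
    {
        // First call per key computes and caches; subsequent calls within
        // 60 minutes return the cached value without invoking the delegate.
        return await Cached.GetOrCompute(companyId, GetReportAsync, TimeSpan.FromMinutes(60));
    }
}
```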

Use multiple arguments as key (up to 7)

```csharp
public async Task<Picture> GetPictureOfTheDay(DateOnly date, FeedKind kind, bool compressed)
{
  if (Cached<Picture>.TryGet(date, kind, compressed, out var cached))
  {
    return cached;
  }

  var api = GetApiService(kind);
  var picture = await api.GetPictureOfTheDay(date, compressed);

  return cached.Save(picture, TimeSpan.FromHours(3));
}
```

Use multiple arguments with GetOrCompute

```csharp
var expiration = TimeSpan.FromHours(3);
var picture = await Cached.GetOrCompute(date, kind, compressed, GetPictureOfTheDay, expiration);
```

Save the value to cache (if it fits) and keep the cached item count below the specified limit

```csharp
public SalesReport GetReport(Guid companyId)
{
  if (Cached<SalesReport>.TryGet(companyId, out var cached))
  {
    return cached;
  }
  ...
  return cached.Save(report, TimeSpan.FromMinutes(60), limit: 500_000);
}

// GetOrCompute with maximum cache size limit.
// RAM is usually plenty but what if the user runs Chrome?
var report = Cached.GetOrCompute(companyId, GetReport, TimeSpan.FromMinutes(60), limit: 500_000);
```

Add new data without accessing cache item first

```csharp
Cached<SalesReport>.Save(companyId, report, TimeSpan.FromMinutes(60));

// Same as above but via an extension method, for more concise syntax
using FastCache.Extensions;
...
report.Cache(companyId, TimeSpan.FromMinutes(60));
```

Save an entire range of values in one call. Fast for `IEnumerable<T>`, extremely fast for lists, arrays and `ReadOnlyMemory<T>`/`Memory<T>`.

```csharp
using FastCache.Collections;
...
var reports = ReportsService
  .GetReports(11, 2022)
  .Select(report => (report.CompanyId, report));

CachedRange<SalesReport>.Save(reports, TimeSpan.FromMinutes(60));
```

Save range of cached values with multiple arguments as key

```csharp
var februaryReports = reports.Select(pair => ((pair.CompanyId, 02, 2022), pair.report));

CachedRange<SalesReport>.Save(februaryReports, TimeSpan.FromMinutes(60));

var companyId = februaryReports.First().Item1.CompanyId;
var reportFound = Cached<SalesReport>.TryGet(companyId, 02, 2022, out _);
Assert.True(reportFound);
```

Store a common type (string) in the shared cache store (all call sites that use the same `<K, V>` pair, in this case `<int, string>`, share that cache)

```csharp
// GetOrCompute<...V> where V is string.
// To save some other string for the same 'int' number simultaneously, see the option below :)
var userNote = Cached.GetOrCompute(userId, GetUserNoteString, TimeSpan.FromMinutes(5));
```

Or store it in a separate cache by wrapping it in a value object (recommended)

```csharp
readonly record struct UserNote(string Value);

// GetOrCompute<...V> where V is UserNote
var userNote = Cached.GetOrCompute(userId, GetUserNote, TimeSpan.FromMinutes(5));

// This is how it looks for TryGet
if (Cached<UserNote>.TryGet(userId, out var cached))
{
  return cached;
}
...
return cached.Save(userNote, TimeSpan.FromMinutes(5));
```

Features and design philosophy

Performance

```
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
AMD Ryzen 7 5800X, 1 CPU, 16 logical and 8 physical cores
.NET 6.0.5 (6.0.522.21309), X64 RyuJIT
```

TL;DR: FastCache.Cached vs Microsoft.Extensions.Caching.Memory.MemoryCache

| Library | Lowest read latency | Read throughput (M/1s) | Lowest write latency | Write throughput (M/1s) | Cost per item | Cost per 10M items |
|---|---|---|---|---|---|---|
| FastCache.Cached | 15.63 ns | 114-288M MT / 9-72M ST | 33.75 ns | 39-81M MT / 6-31M ST | 40 B | 381 MB |
| MemoryCache | 56.93 ns | 41-46M MT / 4-10M ST | 203.32 ns | 11-26M MT / 2-6M ST | 224 B | 2,136 MB |
| CacheManager | 87.54 ns | N/A | ~436.85 ns | N/A MT / 1.5-5M ST (+alloc) | 360 B | 1,602 MB |

\+ `CachedRange.Save(ReadOnlySpan<(K, V)>)` provides parallelized bulk writes out of the box
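For illustration, a plain array converts implicitly to `ReadOnlySpan<(K, V)>`, so bulk inserts like the following sketch bind to the parallelized overload (the key/value types and count here are arbitrary placeholders):

```csharp
using System;
using FastCache.Collections;

var pairs = new (Guid, decimal)[100_000];
for (var i = 0; i < pairs.Length; i++)
{
    pairs[i] = (Guid.NewGuid(), i * 1.5m);
}

// Array-backed input binds to the ReadOnlySpan<(K, V)> overload,
// which partitions the writes across available cores.
CachedRange<decimal>.Save(pairs, TimeSpan.FromMinutes(60));
```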

\+\+ CacheManager and LazyCache have no read throughput results because including them would make the test suite take too long to run. Given CacheManager's higher CPU usage and LazyCache's higher RAM usage, it is reasonable to assume both would score lower and scale worse due to their heavier use of locks.

Read/Write lowest achievable latency

| Method | Mean | Error | StdDev | Median | Ratio | Gen 0 | Allocated |
|---|---|---|---|---|---|---|---|
| Get: FastCache.Cached | 15.63 ns | 0.452 ns | 1.334 ns | 14.61 ns | 1.00 | - | - |
| Get: MemoryCache | 56.93 ns | 1.179 ns | 1.904 ns | 55.73 ns | 3.68 | - | - |
| Get: CacheManager* | 87.54 ns | 1.751 ns | 2.454 ns | 89.32 ns | 5.68 | - | - |
| Get: LazyCache | 73.43 ns | 1.216 ns | 1.138 ns | 73.25 ns | 4.71 | - | - |
| Set: FastCache.Cached | 33.75 ns | 0.861 ns | 2.539 ns | 31.92 ns | 2.18 | 0.0024 | 40 B |
| Set: MemoryCache | 203.32 ns | 4.033 ns | 6.956 ns | 199.77 ns | 13.23 | 0.0134 | 224 B |
| Set: CacheManager* | 436.85 ns | 8.729 ns | 19.160 ns | 433.97 ns | 28.10 | 0.0215 | 360 B |
| Set: LazyCache | 271.56 ns | 5.428 ns | 7.785 ns | 274.19 ns | 17.58 | 0.0286 | 480 B |

Read throughput detailed

| Method | Count | Reads/1s | Mean | Error | StdDev | Ratio |
|---|---|---|---|---|---|---|
| Read(MT): FastCache | 1,000 | 130.97M | 7.635 us | 0.1223 us | 0.1144 us | 1.00 |
| Read(ST): FastCache | 1,000 | 72.99M | 13.700 us | 0.2723 us | 0.5562 us | 1.78 |
| Read(MT): MemoryCache | 1,000 | 41.35M | 24.183 us | 1.2907 us | 3.7853 us | 2.68 |
| Read(ST): MemoryCache | 1,000 | 10.31M | 96.943 us | 0.9095 us | 0.8063 us | 12.71 |
| Read(MT): FastCache | 100,000 | 288.66M | 346.418 us | 5.2196 us | 6.6011 us | 1.00 |
| Read(ST): FastCache | 100,000 | 28.99M | 3,449.865 us | 66.4929 us | 81.6593 us | 9.96 |
| Read(MT): MemoryCache | 100,000 | 46.77M | 2,138.400 us | 175.2152 us | 516.6259 us | 6.32 |
| Read(ST): MemoryCache | 100,000 | 4.64M | 21,540.964 us | 394.9239 us | 499.4523 us | 62.20 |
| Read(MT): FastCache | 1,000,000 | 114.54M | 8,730.009 us | 173.8538 us | 170.7476 us | 1.00 |
| Read(ST): FastCache | 1,000,000 | 9.74M | 102,580.795 us | 926.3173 us | 866.4778 us | 11.76 |
| Read(MT): MemoryCache | 1,000,000 | 41.46M | 24,114.261 us | 369.3612 us | 308.4334 us | 2.76 |
| Read(ST): MemoryCache | 1,000,000 | 3.92M | 254,619.996 us | 2,585.3079 us | 2,291.8081 us | 29.17 |
| Read(MT): FastCache | 10,000,000 | 112.89M | 88,584.244 us | 1,709.9078 us | 1,599.4488 us | 1.00 |
| Read(ST): FastCache | 10,000,000 | 9.70M | 1,030,431.980 us | 9,874.4883 us | 9,236.6025 us | 11.64 |
| Read(MT): MemoryCache | 10,000,000 | 42.84M | 233,410.703 us | 2,945.8464 us | 2,299.9231 us | 2.63 |
| Read(ST): MemoryCache | 10,000,000 | 4.13M | 2,421,159.114 us | 35,280.8135 us | 31,275.5222 us | 27.33 |

Further reading

Notes

On benchmark data

Throughput saturation means that all necessary data structures are fully resident in the CPU cache and the branch predictor has learned the branch patterns of the executed code. This only happens in scenarios such as items being retrieved or added/updated in a tight loop, or very frequently on the same cores. Real-world workloads therefore will not saturate maximum throughput and will instead be bottlenecked by memory access latency and branch misprediction stalls. As a result, you can expect performance variance of 1-10x the minimum latency, depending on hardware and outside factors.


From 🇺🇦 Ukraine with ♥️