Solid Cache in Rails 8: Scaling, Security & Advanced Strategies (Part 2)
Ruby on Rails • Sunday, Apr 20, 2025
Dive into advanced Solid Cache topics like encryption, compression, capacity planning and multi‑server configurations to get the most out of disk‑backed caching.
In the first part of this series we explored the basics of Solid Cache and how disk‑backed caching can simplify your Rails deployment. Once you’ve embraced Solid Cache, there are additional levers you can pull to tailor it for your workload and ensure it scales with your application. In this post I’ll cover encryption, compression, capacity planning, eviction strategies and how to share caches across multiple servers.
Enabling encryption and compression
Solid Cache stores entries unencrypted by default. If you cache sensitive information—personal data, API tokens or internal reports—you should encrypt entries at rest. You can enable encryption globally in your cache store configuration:
config.cache_store = :solid_cache_store, encrypt: true, expires_in: 45.days
This instructs Solid Cache to encrypt every entry using Active Record Encryption, whose keys are managed via your Rails credentials. Encryption protects the cache database from unauthorized reads but adds computational overhead. In my benchmarks the overhead was modest (2–3% slower reads/writes), but test with your own workload.
Compression is another useful lever. If you cache large strings or objects, compressing them can save space and reduce I/O. Enable compression with the compress: true option:
config.cache_store = :solid_cache_store, compress: true, compress_threshold: 1.kilobyte
Here compress_threshold sets the minimum entry size (in bytes) before compression is attempted. Smaller entries may not benefit and can even grow larger because of compression overhead. Active Support compresses with Zlib, which is CPU intensive. Consider your server’s CPU capacity before enabling it globally.
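To get a feel for whether your payloads are worth compressing, you can measure the ratio directly with Ruby’s Zlib, the same library Active Support uses under the hood. The sample payloads below are illustrative; measure your own cached values:

```ruby
require "zlib"

# A large, repetitive payload (typical of cached HTML fragments)
# compresses well; a tiny payload actually grows, which is why
# compress_threshold exists.
large = "<li class=\"item\">row</li>\n" * 2_000
small = "ok"

large_compressed = Zlib::Deflate.deflate(large)
small_compressed = Zlib::Deflate.deflate(small)

puts "large: #{large.bytesize} -> #{large_compressed.bytesize} bytes"
puts "small: #{small.bytesize} -> #{small_compressed.bytesize} bytes"
```

Run against a representative sample of your own cache values to decide where to set the threshold.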
Retention, expiration and eviction
Solid Cache honors the expires_in option you pass at write time, or the default set on the cache store. Entries past their TTL are treated as misses on subsequent reads. However, expired rows still consume space until Solid Cache’s background expiry removes them. There is no cron task to schedule: after a configurable number of writes (expiry_batch_size), Solid Cache kicks off an expiry pass, either in a background thread (the default) or through an Active Job if you set expiry_method: :job. Each pass deletes a batch of entries that are older than max_age or over the max_size budget, keeping storage in check and lookups fast. The defaults suffice for most apps; tune the batch size if your cache grows quickly.
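Expiry behavior is tuned where the store is configured. A sketch follows; the option names come from the solid_cache gem, while the values are illustrative, not recommendations:

```ruby
# config/environments/production.rb
config.cache_store = :solid_cache_store,
  expires_in: 30.days,     # default TTL applied at write time
  max_age: 60.days.to_i,   # expiry passes delete anything older than this
  expiry_batch_size: 500,  # entries examined per expiry pass
  expiry_method: :job      # run expiry via Active Job instead of a thread
```

Using :job moves expiry work off your web processes and onto your queue workers, which is worth considering if writes are bursty.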
Capacity planning is critical. The max_size option caps the estimated size (in bytes) that the cache may occupy. When this limit is exceeded, Solid Cache’s expiry process deletes the oldest entries regardless of whether they have expired. Choose a size that fits comfortably on your server and leaves room for other processes. If you run multiple servers behind a load balancer and each one keeps its own cache database (the default with SQLite), you might allocate less per server because the combined cache footprint scales with the number of servers.
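As a back-of-the-envelope check before picking max_size, divide your disk budget by your average entry size. Both figures below are assumptions; measure your own payloads:

```ruby
budget_bytes   = 2 * 1024**3  # 2 GiB allocated to the cache (assumption)
avg_entry_size = 8 * 1024     # ~8 KiB per cached fragment (measure yours!)

# Roughly how many entries fit before eviction kicks in.
max_entries = budget_bytes / avg_entry_size
puts "Roughly #{max_entries} entries fit in the budget"  # => 262144
```

If that number is far below your working set of hot keys, expect a depressed hit rate and either raise the budget or cache smaller values.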
Multi‑server and shared caches
One caveat of Solid Cache is that, with its default per-machine SQLite database, the cache is per server. (Pointing every server at a shared MySQL or PostgreSQL cache database removes this limitation, but if you stay with local SQLite there is no built-in mechanism for sharing entries across machines, as there is with Redis.) In a multi-server deployment each cache warms separately. For some workloads this is acceptable; fragment caches, for example, populate gradually on each server. However, if you have computed results that take minutes to generate (reports, expensive API calls) and you want to share them, consider building a small synchronization layer:
- HTTP fetch on miss: When a cache miss occurs, your application can attempt to fetch the value from a peer server via a simple HTTP endpoint. If found, you can write it locally. This approach is easy to implement but may cause thundering herd issues.
- Shared storage: Store computed results in a database or object storage like S3 instead of the cache. Use Solid Cache only for local caching of these resources. This pattern allows multiple servers to read the result from a central location without recomputing it.
- Hybrid cache: Use Solid Cache for durable, disk‑based fragments and a small Redis instance for values that need to be shared instantly across machines. This hybrid approach balances persistence with distribution.
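The hybrid approach can be as simple as a helper that routes keys by purpose. Here is a minimal sketch; the HybridCache class and its prefix-based routing rule are hypothetical, not a library API, and the in-memory hashes stand in for a Redis client and Rails.cache so the example runs standalone:

```ruby
# Routes "shared" keys (sessions, counters) to one store and
# everything else to the local store.
class HybridCache
  SHARED_PREFIXES = %w[session counter].freeze  # hypothetical routing rule

  def initialize(shared:, local:)
    @shared = shared  # in a real app: a Redis client
    @local  = local   # in a real app: Rails.cache (Solid Cache)
  end

  def fetch(key)
    store = store_for(key)
    store[key] ||= yield  # compute and memoize on miss
  end

  private

  def store_for(key)
    SHARED_PREFIXES.any? { |p| key.start_with?("#{p}:") } ? @shared : @local
  end
end

shared = {}
local  = {}
cache  = HybridCache.new(shared: shared, local: local)

cache.fetch("session:42") { "alice" }      # routed to the shared store
cache.fetch("fragment:home") { "<html>" }  # routed to the local store
```

The useful property is that callers never decide which backend to hit; the routing rule lives in one place and can evolve as your sharing needs change.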
In my current project we chose the hybrid strategy. We keep session data and small counters in Redis (which is easy to deploy with Kamal) and rely on Solid Cache for larger fragment caches and heavy computations. This combination reduced our Redis usage by 80% while preserving the ability to share session data across the cluster.
Monitoring and maintenance
Monitoring the health of your cache is essential. Useful metrics include cache hit rate, average fetch time, disk usage and compaction frequency. You can instrument Rails’ caching API with Active Support notifications. For example:
ActiveSupport::Notifications.subscribe('cache_read.active_support') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  Rails.logger.info("Cache read: key=#{event.payload[:key]}, hit=#{event.payload[:hit]}")
end
This logs cache reads and whether they were hits or misses. Tools like Prometheus and Grafana can visualize these metrics. If you see low hit rates, you may be caching the wrong things or your TTLs may be too short. Disk usage alerts help prevent your server from filling up unexpectedly.
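Aggregating those events into an actual hit rate takes only a small counter. This sketch uses plain Ruby so it runs standalone; in an app you would call record from inside the notification subscriber and export rate to your metrics system:

```ruby
# Accumulates cache hits and misses and reports the hit rate.
class HitRateTracker
  attr_reader :hits, :misses

  def initialize
    @hits   = 0
    @misses = 0
  end

  def record(hit)
    hit ? @hits += 1 : @misses += 1
  end

  def rate
    total = @hits + @misses
    total.zero? ? 0.0 : @hits.to_f / total
  end
end

tracker = HitRateTracker.new
[true, true, true, false].each { |hit| tracker.record(hit) }
puts tracker.rate  # => 0.75
```

A gauge like this, sampled once a minute, is usually enough to spot TTLs that are too short or keys that never get re-read.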
Like any disk-backed system, Solid Cache benefits from periodic attention. Monitor the size of the cache database and the free space on its volume, and make sure expiry is keeping up with your write rate. Use RAID or snapshots if you need redundancy; while the cache can always be rebuilt, losing it may cause a temporary performance hit.
Lessons from the field
At a previous employer we used Solid Cache to store rendered HTML fragments for a heavily trafficked dashboard. The fragment templates were large and changed rarely. By caching them on disk with a 60‑day TTL and enabling compression, we cut page render times in half and slashed CPU usage. Because the cache persisted across deploys, we no longer saw slow “cold starts” after shipping new releases. We monitored the hit rate using a custom middleware and kept an eye on disk growth. When our application later needed to scale horizontally, we paired Solid Cache with a small Redis instance for session storage and shared counters. This architecture proved robust and cost‑effective.
Solid Cache is a powerful addition to Rails that fits naturally with the framework’s philosophy of convention over configuration. It won’t replace every use case for Redis or Memcached, but it simplifies caching for many apps and offers features like encryption and compression that you’d previously have to cobble together yourself. By understanding its advanced features and trade‑offs, you can build a caching strategy tailored to your application’s needs. In the next post we’ll step outside of caching and explore how to deploy a Rails 8 application using Kamal for zero‑downtime, containerized deploys.