Solid Cable in Rails 8: Tuning, Edge Cases & Real‑Time Scaling (Part 2)
Ruby on Rails • Thursday, Mar 20, 2025
Dive deeper into Solid Cable by learning how to configure polling, retention, and multi‑database setups and how to deal with large payloads and scaling challenges.
In our previous post we introduced Solid Cable, Rails 8’s built‑in WebSocket adapter that runs on your existing database. While the defaults work well for small to medium applications, you can squeeze more performance and reliability out of Solid Cable by tuning its settings and understanding its limitations. In this installment I’ll share techniques I’ve used in production to handle higher throughput, reduce latency and avoid edge cases.
Adjusting polling and retention
When your database supports LISTEN/NOTIFY (PostgreSQL), Solid Cable sends notifications as soon as you broadcast a message. For other databases the adapter falls back to polling the solid_cable_messages table. The polling_interval setting controls how often it checks for new messages. Decreasing this interval reduces latency but increases database load; increasing it does the opposite. In most applications I’ve found values between 50 ms and 100 ms strike a good balance:
production:
  adapter: solid_cable
  polling_interval: 0.05 # 50ms
  message_retention: 2.days
Message retention determines how long sent messages remain in the database before they’re purged by a background cleaner. Retention should be long enough to allow slower consumers to reconnect and catch up, but not so long that the table grows uncontrollably. For low‑traffic apps a day or two is plenty; for chat rooms where messages may be delivered to offline users, you might keep a week’s worth. Remember to adjust your database autovacuum settings for the solid_cable_messages table to prevent bloat.
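On PostgreSQL, one way to keep that table lean is per‑table autovacuum storage parameters. The migration below is a hypothetical sketch; the threshold values are illustrative starting points, not recommendations:

```ruby
# Hypothetical migration: make autovacuum visit the high-churn
# solid_cable_messages table more aggressively than the global defaults.
class TuneSolidCableAutovacuum < ActiveRecord::Migration[8.0]
  def up
    execute <<~SQL
      ALTER TABLE solid_cable_messages SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 1000
      );
    SQL
  end

  def down
    execute <<~SQL
      ALTER TABLE solid_cable_messages RESET (
        autovacuum_vacuum_scale_factor,
        autovacuum_vacuum_threshold
      );
    SQL
  end
end
```

Tune the numbers against your observed row churn rather than copying them blindly.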
If you experience high latency when using the polling mode, consider increasing the number of polling workers. Solid Cable spawns a single thread to poll for messages by default. You can configure additional pollers by setting poller_concurrency in your cable.yml:
production:
  adapter: solid_cable
  polling_interval: 0.05
  poller_concurrency: 4
This setting starts four threads polling the table, reducing the chance of missing messages and improving throughput on busy systems. Keep an eye on your database connections; each poller will open a connection to the database. Adjust your connection pool size accordingly.
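Assuming Puma with five threads alongside the four pollers above, a database.yml pool sized with some headroom might look like this (the numbers here are illustrative, not prescriptive):

```yaml
production:
  primary:
    adapter: postgresql
    database: myapp_production
    # 5 Puma threads + 4 pollers + Action Cable workers + headroom
    pool: 15
```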
Tuning for large payloads
LISTEN/NOTIFY in PostgreSQL imposes a payload size limit (around 8 KB). If you attempt to broadcast a message larger than this limit, Solid Cable falls back to storing the payload in the database and sending only a reference via NOTIFY. This adds an extra round trip and some latency. To mitigate the impact:
- Compress your payloads: Instead of sending raw JSON objects or large HTML fragments, consider sending identifiers and having clients fetch the data via HTTP. This is often faster and avoids hitting the NOTIFY limit.
- Use binary encoding: Rails’ built‑in ActionCable.server.broadcast accepts binary data. If your payload can be encoded as a binary blob you may stay under the limit.
- Chunk messages: For real‑time updates like collaborative editors you can break large diffs into smaller patches and broadcast them separately.
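As a sketch of the chunking idea, here is a hypothetical helper that splits a payload into slices before broadcasting; the 4 KB slice size and the helper names are assumptions, not part of Solid Cable’s API:

```ruby
CHUNK_SIZE = 4_000 # characters per slice; keeps each broadcast well under ~8 KB
                   # (for multibyte payloads, measure bytes instead of characters)

# Split a payload string into slices of at most `size` characters.
def chunk_payload(payload, size = CHUNK_SIZE)
  payload.scan(/.{1,#{size}}/m)
end

# Broadcast each slice with ordering metadata so clients can reassemble.
def broadcast_in_chunks(stream, payload)
  chunks = chunk_payload(payload)
  chunks.each_with_index do |chunk, index|
    ActionCable.server.broadcast(stream,
      { "part" => index, "total" => chunks.size, "data" => chunk })
  end
end
```

On the client side, buffer incoming parts until `part == total - 1` arrives, then join them in order.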
If you routinely exceed the NOTIFY limit and cannot refactor, you might revert to a dedicated broker like Redis for that specific channel. Solid Cable is flexible—you can configure different adapters per environment or per channel class if necessary.
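For example, cable.yml takes a different adapter per environment, so a hypothetical split might keep Redis for a high‑volume production deployment while development stays on Solid Cable:

```yaml
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: myapp_production

development:
  adapter: solid_cable
  polling_interval: 0.1
  message_retention: 1.day
```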
Multi‑database and sharded architectures
Large applications often split their data across multiple databases for scalability. How does Solid Cable behave in this situation? By default, Solid Cable uses Rails’ primary database connection. If your application runs on multiple databases, you can tell Solid Cable which connection to use by setting database in cable.yml:
production:
  adapter: solid_cable
  database: primary_readonly
  polling_interval: 0.05
The value corresponds to a named database connection defined in database.yml. You can also create separate cable.yml entries per environment. This allows you to run Solid Cable on a replica to offload pub/sub traffic from your primary database. Just ensure replica lag is low enough to deliver messages promptly.
In sharded environments where each tenant has its own database, you can mount multiple instances of Solid Cable or run separate processes per shard. Each instance uses its own solid_cable_messages table and polling loop. The key is to ensure that clients connect to the correct instance based on their tenant. Routing logic may be implemented in your load balancer or application code.
Monitoring and troubleshooting
As with any system, you should monitor Solid Cable’s performance and health. Useful metrics include:
- Message latency: Measure the time from broadcast to receipt in the client. You can instrument your application to log these durations or embed timestamps in messages and compute round‑trip times on the client.
- Poller lag: Track how long messages stay in the solid_cable_messages table before being processed. If lag grows, increase poller concurrency or reduce the polling interval.
- Database load: Monitor your database CPU, I/O and connections. Bursts of WebSocket traffic will increase activity on the solid_cable_messages table.
- Error rates: Log exceptions raised by your channels. Since Solid Cable stores messages in the database, failed broadcasts won’t automatically retry. Handle errors gracefully and notify your development team.
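To capture the message‑latency metric above, one approach is to stamp payloads at broadcast time and compute the lag wherever the message is consumed. This is a minimal sketch with hypothetical helper names; you would merge the stamp into the hash you pass to ActionCable.server.broadcast:

```ruby
# Stamp a payload with the wall-clock time it was broadcast.
def stamp(data)
  data.merge("sent_at" => Process.clock_gettime(Process::CLOCK_REALTIME))
end

# Compute delivery lag in milliseconds relative to `now` (defaults to
# the current time; injectable for testing).
def delivery_lag_ms(message, now = Process.clock_gettime(Process::CLOCK_REALTIME))
  ((now - message["sent_at"]) * 1000.0).round(2)
end
```

Log the computed lag (or ship it to your metrics backend) in the subscriber that handles the message.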
When debugging, remember that messages may linger in the table due to retention settings. If you’re seeing unexpected behavior, inspect the table with SQL and verify that your pollers are running. You can temporarily reduce retention for faster cleanup while testing.
Personal lessons learned
While deploying Solid Cable in a real‑world application, I encountered two memorable edge cases:
- Stale messages after deployment: During rolling deploys we noticed clients receiving duplicate messages. It turned out we weren’t pruning messages quickly enough, so new processes reprocessed messages that hadn’t yet been deleted. We fixed this by shortening message_retention and ensuring our cleanup job ran frequently during deployments.
- Connection exhaustion: Our initial configuration used a 10‑thread poller on a database connection pool sized at 20. Under heavy load the application starved other queries because all connections were consumed by Action Cable and pollers. Increasing the pool size and reducing poller threads solved the issue. This taught us to consider all parts of the system when tuning concurrency.
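The arithmetic behind that second incident is worth making explicit. The poller count and pool size below come from our configuration; the Puma and Action Cable worker numbers are assumed for illustration:

```ruby
puma_threads   = 5    # assumed web thread count
cable_workers  = 4    # assumed Action Cable worker pool
poller_threads = 10   # our original Solid Cable poller count
pool_size      = 20   # our original connection pool

# Worst case, every thread holds a database connection at once.
peak_connections = puma_threads + cable_workers + poller_threads
headroom = pool_size - peak_connections
# With these numbers only one spare connection remains at peak,
# so any extra query has to wait for the pool.
```

Running this accounting before tuning poller_concurrency would have flagged the problem immediately.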
Solid Cable brings real‑time WebSockets into the Rails ecosystem without external dependencies, but it’s not magic. Understanding its internals and tuning the knobs appropriately will help you deliver snappy, reliable experiences to your users. In the next posts we’ll shift gears and look at caching in Rails 8 with Solid Cache, and later explore zero‑downtime deployment using Kamal.