So I used to do social gaming years ago, and we kept a socket connection (via Flash, since this was pre-WebSocket) open for each of our users. At one point we had 500,000 users online at once, and our dispatch server (the one that kept the connections to the clients open) was barely pushing 10% CPU. The server was built with EventMachine on MRI. On Linux, EventMachine can use the epoll API, which handles readiness notification for all of the socket IO. It's the same mechanism nginx is built on, for example. Using that, you can scale to millions of connections on a single machine. (Google the C10k problem, or check out http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html for example.)

So long story short, running an evented server should prove no problem for keeping connections open and managing them, and actually pushing the IO is really easy (there's a minimal sketch of this kind of server at the end of this comment). The one issue you might run into is if too many people are subscribed to the same large set of data, you can max out the machine's network connection on big updates. We're working on a distributed data bus (similar to the one in vert.x) to let you easily distribute updates and connections across many machines.

In both Thin and the Goliath server you get a thread per Rack request, but all of the WebSocket connections can run in a single thread and use the evented IO to manage dispatch (see the second sketch below). I would also like to integrate with vert.x on the JRuby side to do similar things.

Hopefully that answers some questions. There are a lot of difficult problems for Volt to solve, but I don't believe that handling the persistent connections is one of them. For the most part that's a solved problem, thanks to epoll and the O(1) Linux scheduler.
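For anyone curious what the evented side looks like, here's a minimal sketch of that kind of EventMachine server. It is not Volt's (or our old dispatch server's) actual code; the module name, port, and broadcast helper are just illustrative. The point is that a single reactor thread, epoll-backed on Linux, can hold a huge number of mostly idle connections and push data to them without blocking:

```ruby
require 'eventmachine'

# Illustrative connection handler: one reactor thread owns every socket.
module DispatchConnection
  @clients = []
  class << self; attr_reader :clients; end

  def post_init
    DispatchConnection.clients << self        # new client connected
  end

  def receive_data(data)
    send_data(data)                           # echo; a real server would route messages
  end

  def unbind
    DispatchConnection.clients.delete(self)   # client disconnected
  end

  # Push an update to every connected client from inside the reactor.
  def self.broadcast(message)
    clients.each { |c| c.send_data(message) }
  end
end

EM.epoll                                      # use epoll on Linux (default in newer versions)
EM.run do
  EM.start_server('0.0.0.0', 8080, DispatchConnection)
end
```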
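And here's a second sketch of the Rack side, using the faye-websocket gem as one way to do it (Volt may wire this up differently): ordinary HTTP requests go through the normal Rack stack, while WebSocket upgrades are handed off to the EventMachine reactor, so they never tie up a thread each.

```ruby
# config.ru -- a sketch, not Volt's actual wiring; run with: rackup -s thin
require 'faye/websocket'

Faye::WebSocket.load_adapter('thin')   # let faye-websocket hook into Thin's reactor

App = lambda do |env|
  if Faye::WebSocket.websocket?(env)
    ws = Faye::WebSocket.new(env)

    ws.on :message do |event|
      ws.send(event.data)              # echo; a real app would dispatch channel updates here
    end

    ws.on :close do |event|
      ws = nil                         # connection is gone; drop the reference
    end

    ws.rack_response                   # async response; the reactor now owns the socket
  else
    [200, { 'Content-Type' => 'text/plain' }, ['regular Rack request']]
  end
end

run App
```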