
Ruby 1.9 Fibers + EventMachine for Big Ruby Webapp Performance Gains

By Peter Cooper / June 14, 2010

Developers hankering for more performance from their Rack and Rails applications are using Ruby 1.9 fibers and event-based, EventMachine-driven libraries to boost throughput, as opposed to scaling merely by running multiple processes or using threads.

It's no secret that thread-based development can be Hard™, even if you didn't have to deal with Ruby quirks like autoloading not working properly and the GIL (Global Interpreter Lock). Luckily, Ruby 1.9 provides fibers, light-weight "execution units" that are manually scheduled by their parent application.
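A minimal sketch of that scheduling model, in plain Ruby 1.9 with no libraries: a fiber runs until it calls Fiber.yield, then stays paused until its parent resumes it.

```ruby
# Fibers are cooperatively scheduled "execution units": a fiber runs until
# it calls Fiber.yield, then stays paused until the parent calls #resume.
fiber = Fiber.new do
  Fiber.yield "paused"   # this value is handed back to the first resume
  "finished"             # the block's last value is returned by the second
end

first  = fiber.resume   # => "paused"
second = fiber.resume   # => "finished"
puts first, second
```

Nothing runs concurrently here; control moves between the fiber and its caller only at the explicit yield/resume points, which is exactly what makes fibers easier to reason about than preemptive threads.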

Back in April, Mike Perham introduced Phat, an asynchronous Rails 2.3.5 app running on Ruby 1.9 and supporting "many concurrent requests in a single Ruby process." In his explanation, he referred back to Scalable Ruby Processing with EventMachine, a talk he gave at Austin On Rails that's worth checking out.

Event-based database drivers keep database queries asynchronous (not, in itself, a new concept) so that the rack-fiber_pool middleware can deftly switch execution between multiple fibers, each serving a separate Web request. Ilya Grigorik's EM-Synchrony can then be used to make a collection of common EventMachine clients fiber-aware (for using Memcached, MongoDB, Beanstalk, and more). All of this works with any Rack app, not just Rails.
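The core trick can be sketched without EventMachine at all: a driver records the waiting fiber, yields it, and the reactor later resumes it with the result. Everything below (async_query, the pending list) is an illustrative toy, not the real rack-fiber_pool or driver API:

```ruby
# Toy event loop: each "request" runs in its own fiber; a fake async
# driver parks the fiber, and the loop resumes it later with the result.
pending = []   # [fiber, result] pairs waiting to be resumed

# Looks like a blocking call to the app, but actually parks the fiber.
async_query = lambda do |sql|
  pending << [Fiber.current, "rows for: #{sql}"]
  Fiber.yield   # resumes (and returns) when the "I/O" completes below
end

app = lambda do |id|
  rows = async_query.call("SELECT * FROM users WHERE id=#{id}")
  "request #{id} got #{rows}"
end

responses = []
[1, 2].each { |id| Fiber.new { responses << app.call(id) }.resume }
# Both requests are now parked mid-"query"; complete the I/O for each.
pending.each { |fiber, result| fiber.resume(result) }
puts responses
```

The app code reads as straight-line, blocking-style Ruby, while the process as a whole keeps serving other fibers during each wait; that is the property the fiber_pool/EM-Synchrony stack provides for real drivers.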

This week, Aaron Gough has written an article called Improving application throughput 9x with asynchronous responses in Rails 3 that covers the concepts at a higher level and demonstrates how he ported an existing Rails app to use the concepts outlined above. In doing so, he increased his requests per second from 5.46 to 52.

Comments

  1. Stephen Eley says:

    Autoload does work properly in Ruby 1.9. See: http://stackoverflow.com/questions/2837912

  2. Peter Cooper says:

    Interesting, but something more authoritative than a casual example would be reassuring - I hope we can find a citation.

  3. raggi says:

    Um sorry to burst the bubble a little, but that app gained so significantly because it was halting the whole stack whilst waiting (a long time) for external requests.

    Apps which have already solved this problem (normally by pushing long running operations into background runners) won't see these kinds of gains for their main runtime.

    The most expensive part of the process at the moment, arbitrarily, is rendering, and no one's written stream rendering yet. DHH did mention 'flush' at RailsConf, which I think is a bad idea. Apps shouldn't (generally) be in control of IO; the flushing should be implicit wherever possible.

    What this requires is for someone to wire the renderer up with the server's scheduler (think env['async.schedule'].call(&work_for_next_chunk)) and then to work out how to deal with the content_for problem.

    Without doing this properly, you'll only see a very small gain in concurrency, which in reality (and I'm saying this from experience) will just eat up heap space, and eventually, the GC will kill you again.

    It is possible to make some gains in concurrency not waiting for overloaded / slow database servers, but again, you have to be careful not to shoot yourself in the foot, as balancing application memory overheads with concurrency levels in the servers is key to managing whether or not you actually get higher throughput. Especially under MRI.

  4. raggi says:

    autoload works fine in 1.9, that problem comes up on 1.8 too.

  5. Jean says:

    You wouldn't happen to have a link to some discussion of how thread-unsafe libraries fare on fibers? From all I can read/watch it sounds like fibers are a green-thread implementation. From the sound of it there are no deadlock risks, but I don't see how it protects against data corruption if a library stores data in a thread-unsafe way.
    Take the following sequence:
    - IO call (fake blocking using EM/Fiber)
    - call to unsafe library which stores request-specific information in memory
    - IO call (fake blocking ...)
    - call to unsafe library which reads the previously stored state.

    Now you get 2 requests at the same time, R1 and R2.
    - IO 1 from R1 goes to EM and waits
    - IO 1 from R2 goes to EM and waits
    - IO 1 from R1 calls back, R1 executes the misbehaving library call and IO 2 from R1 goes to EM and waits
    - IO 1 from R2 calls back, R2 executes the misbehaving library call and IO 2 from R2 goes to EM and waits

    At this point, whenever R1 comes back from waiting on IO 2, its context is corrupted.

    Is this discussed anywhere? I don't see how the execution flow with fibers and EM can prevent this.
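The interleaving Jean describes can indeed be reproduced with plain fibers and a shared global; the UnsafeLib module and manual resume calls below are illustrative stand-ins for a misbehaving library and the reactor, not any real code:

```ruby
# A library that caches "request-specific" state in shared storage, used
# from two fibers that yield (as on fake-blocking I/O) between the write
# and the read; the second fiber's write clobbers the first fiber's state.
module UnsafeLib
  def self.store(value); @state = value; end
  def self.fetch; @state; end
end

results = {}
request = lambda do |name|
  Fiber.new do
    Fiber.yield                # IO 1: park, as if waiting on the reactor
    UnsafeLib.store(name)      # stash this request's state
    Fiber.yield                # IO 2: park again
    results[name] = UnsafeLib.fetch
  end
end

r1, r2 = request.call("R1"), request.call("R2")
r1.resume; r2.resume   # both requests reach IO 1 and park
r1.resume; r2.resume   # IO 1 completes: each stores, then parks on IO 2
r1.resume; r2.resume   # IO 2 completes: each reads the *shared* state
puts results.inspect   # R1 sees R2's data: {"R1"=>"R2", "R2"=>"R2"}
```

So fibers remove preemption (no context switch can happen except at a yield point), but they do not protect per-request state that a library keeps in process-global storage; that state must be keyed by fiber or passed explicitly.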

  6. Aaron Gough says:

    @Raggi:

    You are right, in this case the gain is huge because the external HTTP calls were taking so long. But there are plenty of other cases where big gains can be seen. A warm Rails stack generally has a pretty small overhead for a simple action, but a database call might be slow and inflate the overall request time, particularly if you're accessing a DB on another server (for example on a different EC2 instance).

    An example taken from a common action on one of my production apps:

    View 11.4ms
    Controller 5.7ms
    ActiveRecord 25.9ms

    That 25.9 ms could definitely be better spent serving other requests...
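A back-of-the-envelope reading of those numbers (a rough sketch: it ignores GC and scheduling overhead, and assumes the database wait overlaps perfectly with other work):

```ruby
# Rough per-process capacity from Aaron's timings: if the ActiveRecord
# wait can be overlapped with other requests, the ceiling is set by the
# CPU-bound portion (view + controller) alone.
view, controller, db = 11.4, 5.7, 25.9            # ms, from the comment above
blocking   = 1000.0 / (view + controller + db)    # whole request serialized
overlapped = 1000.0 / (view + controller)         # DB wait spent elsewhere
printf("serialized: %.1f req/s; overlapped ceiling: %.1f req/s\n",
       blocking, overlapped)
```

Roughly 23 versus 58 requests per second per process, which is the kind of gap raggi's caveats about heap growth and GC pressure would then eat into.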

  7. Aaron Gough says:

    @ Jean:

    You're right, I'm sure this is addressed to an extent in libraries like EM-Synchrony, but right now I don't really have the technical knowledge to discuss it further. My reading/experimenting list for the next little while is going to focus heavily on EM, Fibers and Ruby concurrency in general...

  8. Stephen Eley says:

    Peter: you're right, and I'd have preferred a proper citation too describing the bug fix. But I couldn't find one after some earnest time spent searching. Not in ruby-forum.com, nor in any 1.9 changelogs, nor via Google. Repeating the reproducible experiment was the best I could do; and because the experiment correctly emulated the conditions described in Charles Nutter's original bug description, it convinced me.

    (Aside: I also couldn't find any evidence that the defect in 1.8 autoload had ever impacted anyone in a real-world application. It's an edge case involving multi-threading and bad timing, so a NON-trivial example would likely be difficult to demonstrate.)

  9. Peter Cooper says:

    @Stephen: Thanks for the clarifications. I am by no means against empirical evidence on things like this, but when it comes to stuff as notoriously complicated as threading, more authoritative words are certainly reassuring. I think it's proof of how tricky thread-related topics are that so few authoritative posts are made about them.

  10. roger says:

    I had a bit of a discussion about this at

    http://www.igvita.com/2010/06/07/rails-performance-needs-an-overhaul/

    My conclusion is that threading per se, though annoying to program, is not the bottleneck in Rails, and careful profiling has to be done to see *what is* before running off and declaring fibers the answer.

    If you want to be able to handle requests without blocking on the DB, use the mysqlplus or postgres drivers with threads. Should work like a champ.

    -r
