Nginx Passenger vs JRuby on Jetty
I am in the process of evaluating which option to choose for a new production deployment of a Sinatra application.
Pros and Cons of the implementations:
JRuby Stack:
Pros:
• Fast for serial execution once warmed up.
• Multi-threaded: handles traffic spikes within a single process and shares resources across threads.
Cons:
• Slow warm-up time; application restarts can really hurt.
• A single process is a single point of failure.
MRI Ruby Stack:
Pros:
• Starts fast, with no warm-up time.
• Scales via multiple processes, so there is no single point of failure.
Cons:
• Slower than JRuby in serial execution.
• Each worker is a separate process with no shared resources (possibly using more memory over time).
These tests were run against a real-world application that is soon to be released, not a dummy “hello world” application.
Application Background:
Sinatra / HAML templates (not compiled, rendered per request) / CouchDB / R18N Translation
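To give a feel for the code path being exercised, a minimal sketch of the request handling looks something like this (the route, view, and CouchDB names are hypothetical placeholders, not the real application's; the R18N setup is omitted):

require 'rubygems'
require 'sinatra'
require 'haml'
require 'couchrest'

# Hypothetical CouchDB connection; host and database name are placeholders.
DB = CouchRest.database("http://127.0.0.1:5984/example_db")

get '/' do
  # Pull rows from a CouchDB view, then render the HAML template.
  # The template is rendered on every request, matching the setup described above.
  @rows = DB.view('docs/all')['rows']
  haml :index
end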
Server Specifications:
OS: OpenSolaris - SNV98
Hardware: Quad-core Xeon X5355, 8 GB RAM
MRI Stack:
Ruby 1.8.7 (2008-08-11 patchlevel 72)
Nginx Passenger 2.2.4
Passenger Config: passenger_max_pool_size 8, passenger_use_global_queue on
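For reference, the relevant part of the nginx configuration looks roughly like this (paths and the server name are placeholders, not the real deployment values):

http {
    passenger_root /opt/ruby/lib/ruby/gems/1.8/gems/passenger-2.2.4;
    passenger_ruby /opt/ruby/bin/ruby;
    passenger_max_pool_size 8;
    passenger_use_global_queue on;

    server {
        listen 80;
        server_name service;
        # Passenger serves the Sinatra app from its public/ directory,
        # with a config.ru rack-up file one level above (not shown).
        root /var/www/app/public;
        passenger_enabled on;
    }
}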
Java Stack:
JRuby 1.3.1 (ruby 1.8.6p287) (2009-06-15 6586)
Jetty-6.1.15
JDK Flags: -server -Xverify:none -XX:MaxPermSize=96m -XX:+AggressiveOpts -Xss128k -Xms256m -Xmx384m -XX:+UseParallelGC -XX:+UseParallelOldGC
JDK 1.7.0 b67
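Putting that together, the Jetty launch looks roughly like this (assuming the Sinatra app is packaged as a WAR, e.g. with Warbler, and dropped into Jetty's webapps/ directory):

java -server -Xverify:none -XX:MaxPermSize=96m -XX:+AggressiveOpts \
     -Xss128k -Xms256m -Xmx384m -XX:+UseParallelGC -XX:+UseParallelOldGC \
     -jar start.jar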
Here are the results. I have taken the best time out of 10 runs, allowing enough time for the JVM to warm up and for Passenger to spawn all of its child processes. The results are clipped for brevity.
Benchmark command:
ab -n1000 -c10 http://service/
JRuby Results:
Requests per second: 85.97 [#/sec] (mean)
Time per request: 116.316 [ms] (mean)
Time taken for tests: 11.632 seconds
Memory Use After Test: 437M (RSS)
MRI Results:
Requests per second: 118.85 [#/sec] (mean)
Time per request: 84.142 [ms] (mean)
Time taken for tests: 8.414 seconds
Memory Use After Test: 264M (RSS)
Conclusions and final thoughts:
It seems MRI Ruby has roughly a 38% throughput advantage over JRuby when executing my application (118.85 vs. 85.97 requests per second). I am still skeptical that MRI Ruby would keep winning in production, where the deployment becomes a marathon of long-running processes under varied traffic patterns. At the end of the day the JVM currently has the edge in garbage collection over MRI Ruby, so in “theory” JRuby should be the better choice. This is all a guesstimate on my part; I will most likely end up trying both variants in production and seeing which works best.
Victor, your benchmark is possibly flawed because (IIRC) JRuby with a default config only warms up after 10,000 rounds. What happens if you run ab with -n100000?
10,000 iterations? That is quite a lot. Servlet container restarts would be *really* painful if that is the case.
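(Aside: if the default compile threshold really is the bottleneck, lowering it with something like -XX:CompileThreshold=1000 should pull the warm-up forward; I have not tested that here, so treat it as a guess.)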
There are a few tweaks to your JRuby opts that I would suggest.
First and foremost, I would try the latest JDK 1.6 release. JDK 1.7 is still an unknown quantity at this time; most people enjoying the performance benefits of JRuby are running it on 1.6 currently.
I would also suggest benchmarking with and without -XX:+AggressiveOpts. That may be hurting more than helping.
JRuby's memory requirements are definitely higher than MRI's. If you can spare more than 384m for the JVM heap, great; give JRuby all you can.
More specifically, JRuby is very hard on the new (or Eden) generation of the heap. I size the new generation for my JRuby apps at anywhere from 1/3 to 1/2 of the total heap. You can do that with: -Xmn192m.
Here are a few other options that I am currently setting (a combined example follows the list):
-XX:ParallelGCThreads=8 (where 8 is the number of CPUs/cores)
-XX:MaxTenuringThreshold=15 (lets objects sit in the new generation longer than they normally would; it fills up soon enough, causing a GC to kick in)
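Putting those suggestions together, a revised set of JVM options would look something like the following (the initial and maximum heap are pinned to the same 384m here, with the new generation at half of that per the sizing suggestion above; -XX:+AggressiveOpts is left out so you can compare runs with and without it):

java -server -Xverify:none -XX:MaxPermSize=96m \
     -Xss128k -Xms384m -Xmx384m -Xmn192m \
     -XX:+UseParallelGC -XX:+UseParallelOldGC \
     -XX:ParallelGCThreads=8 -XX:MaxTenuringThreshold=15 \
     -jar start.jar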