I’ve enjoyed seeing some writeups on requests/second and memory usage for upcoming versions of Plone. It’s great to see things trending in that direction. Hopefully, with some tough choices and deprecation, more gains can be made (just my personal opinion).
I thought I’d give a primitive try at the same numbers for KARL, the collaboration application atop BFG that we’ve been working on and deploying to customers.
Running ‘ab -n 100 -c 2’ on my first-gen MacBook (2 GHz, 2 GB of RAM), I leveled off at just over 134 requests per second. Memory usage was 31 MB.
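For reference, the invocation looked roughly like this. The host, port, and path below are placeholders, not our actual deployment:

```
# 100 requests, concurrency 2, against a locally running KARL instance
# (URL is illustrative; adjust port and path for your own setup)
ab -n 100 -c 2 http://localhost:6543/
```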
Obviously it’s not an apples-to-apples comparison. The feature set is smaller. Although we do have cataloging, text search, workflow, security, and the like, there’s a ton of stuff we don’t do. We’re an end-user application with specific features, versus a framework.
On the other hand, all requests in KARL are authenticated and fully dynamic. So the 134 rps above? That’s our slow number: authenticated, personalized, security-aware, fully dynamic.
For more fun, we recently built an ugly, cheap Core i5 box in the Agendaless office for $600, with 4 GB of RAM. In production we deploy under mod_wsgi, so we fired it up with 3 processes (for three of the four cores). We also have a script that lets us bulk-load 300 sample communities, each containing a bunch of content.
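For the curious, the mod_wsgi side of that is just a daemon-process setup along these lines. The names and paths here are illustrative, not our actual config:

```
# Illustrative mod_wsgi setup: 3 daemon processes, one per spare core.
WSGIDaemonProcess karl processes=3 threads=4
WSGIScriptAlias / /opt/karl/karl.wsgi

<Directory /opt/karl>
    WSGIProcessGroup karl
    WSGIApplicationGroup %{GLOBAL}
    Order allow,deny
    Allow from all
</Directory>
```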
That’s a somewhat more realistic test, since we start paying the price of having content in the catalog.
In that “with content” test, we got 349 requests/second.
Sometime soon we’re going to think a bit harder about a more realistic test. Pounding the same URL over and over as the same user just doesn’t mean squat. Well, it’s valuable insofar as it is a veto: if your numbers are pathetically low on the fastest-possible “test”, it’s only going to get worse. We are slowly building up some Funkload scripts that cover a scenario with different users, different activities, and some writes as well as reads, along the lines of the sketch below.
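A minimal sketch of what one of those Funkload test cases looks like. The user names, URLs, and form fields here are made up for illustration, not taken from our real scripts:

```
from funkload.FunkLoadTestCase import FunkLoadTestCase

class KarlScenario(FunkLoadTestCase):
    """Mixed read/write scenario; credentials and URLs are illustrative."""

    def setUp(self):
        # Base URL comes from the Funkload .conf file.
        self.server_url = self.conf_get('main', 'url')

    def test_community_activity(self):
        server = self.server_url
        # Each virtual user would authenticate as a different account.
        self.setBasicAuth('staff1', 'password')
        self.get(server + '/communities',
                 description='List communities')
        self.get(server + '/communities/default/blog',
                 description='Read a community blog')
        # A write mixed in with the reads.
        self.post(server + '/communities/default/blog/add_blogentry.html',
                  params=[['title', 'Load test entry'],
                          ['text', 'Posted by Funkload']],
                  description='Add a blog entry')
```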
We need this as we are evaluating various KARL ideas in 2010. First and foremost, we bought a solid-state disk for the test box. We had a query (a prefix match on text search, where only one letter was entered) which previously blew up our system. Think 60+ seconds. That time fell to 2 seconds with the SSD.
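To make that concrete, the pathological case is roughly a one-character glob against the catalog’s text index, something like the following sketch (the index name is illustrative). A single-letter prefix expands to an enormous number of terms, which is what made it so painful on spinning disks:

```
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.text import CatalogTextIndex

# Illustrative catalog with a text index keyed off a 'texts' attribute.
catalog = Catalog()
catalog['texts'] = CatalogTextIndex('texts')

# A one-letter prefix search: this is the query shape that took 60+ seconds.
numdocs, results = catalog.search(texts='a*')
```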
Next, we’d like to see some before/after numbers on RelStorage using some real-world scenarios. Finally, I’d like to see some before/after on repoze.pgtextindex, where we swap out just one of our catalog index types (the text one) for transactional text indexing in PostgreSQL.
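For the RelStorage part, the change is mostly a storage-configuration swap, roughly along these lines. The database name, user, and host are placeholders:

```
%import relstorage
<zodb main>
  <relstorage>
    <postgresql>
      # DSN is illustrative; point it at your own PostgreSQL instance.
      dsn dbname='karl' user='karl' host='localhost'
    </postgresql>
  </relstorage>
</zodb>
```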