Performance and memory usage for KARL

I’ve enjoyed seeing some writeups on requests/second and memory usage for upcoming versions of Plone.  It’s great to see things trending in that direction.  Hopefully, with some tough choices and deprecation, more gains can be made (just my personal opinion).

I thought I’d give a primitive try at the same numbers for KARL, the collaboration application atop BFG that we’ve been working on and deploying to customers.

Using ‘ab -n 100 -c 2’ on my first-gen MacBook (2 GHz, 2 GB of RAM), I leveled off at just over 137 requests per second.  Memory usage was 31 MB.
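For readers unfamiliar with ‘ab’ (ApacheBench): it fires a fixed number of HTTP requests at one URL with a given concurrency and reports requests per second.  A rough Python sketch of the same measurement, using a throwaway local server as a stand-in for the real application (all names here are illustrative, not part of KARL):

```python
# Rough analogue of `ab -n 100 -c 2`: N requests with C concurrent
# workers against a URL, reporting requests/second.  The local HTTP
# server is only a placeholder for the application under test.
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def benchmark(url, n=100, concurrency=2):
    def hit(_):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(hit, range(n)))
    elapsed = time.perf_counter() - start
    return n / elapsed  # requests per second

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
rps = benchmark("http://127.0.0.1:%d/" % server.server_address[1])
server.shutdown()
```

Pointing a loop like this (or ‘ab’ itself) at the standalone Paste server gives the kind of single-URL ceiling quoted above.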

Obviously it’s not an apples-to-apples comparison.  The feature set is smaller.  Although we do have cataloging, text search, workflow, security, and the like, there’s a ton of stuff we don’t do.  We’re an end-user application with specific features, versus a framework.

On the other hand, all requests in KARL are authenticated and fully-dynamic.  So the 137 rps above?  That’s our slow number: authenticated, personalized, security-aware, fully dynamic.

For more fun, we recently built an ugly, cheap Core i5 box in the Agendaless office for $600, with 4 GB of RAM.  In production we deploy under mod_wsgi, so we fired it up with 3 processes (one for each of three of the four cores).  We also have a script that lets us bulk-load 300 sample communities, each containing a bunch of content.

That’s a bit more realistic of a test, since we start paying the price of having content in the catalog.

In that “with content” test, we got 349 requests/second.

Sometime soon we’re going to think a bit harder about a more realistic test.  Pounding the same URL over and over as the same user just doesn’t mean squat.  Well, it’s valuable insofar as it is a veto: if your numbers are pathetically low on the fastest-possible “test”, it’s only going to get worse.  We are slowly building up some Funkload scripts that cover a scenario which includes different users, different activities, and some writes as well as reads.

We need this as we evaluate various KARL ideas in 2010.  First and foremost, we bought a solid-state disk for the test box.  We previously had a query (a prefix match on text search, where only one letter was entered) which blew up our system.  Think 60+ seconds.  That time fell to 2 seconds with the SSD.

Next, we’d like to see some before/after on RelStorage using some real-world scenarios.  Finally, I’d like to see some before/after on repoze.pgtextindex, where we swap out just one of our catalog index types (the text one) with transactional text indexing in PostgreSQL.


12 Responses to “Performance and memory usage for KARL”

  1. Martin Aspeli Says:

    Hi Paul,

    I probably should know the answer to this, but – is KARL open source? If so, where can it be found?


  2. Rd Hilman Hermarian Says:

    KARL is an open source web system for collaboration, organizational intranets, and knowledge management. We can find it here:


  3. Bas Roijen Says:

    It is open source and can be found here:
    Never heard of it, but I’ll have to look into it!


  4. Wichert Akkerman Says:

  5. Matt Hamilton Says:

    Great writeup, and good numbers. Interesting to see what you can get when you write something from scratch 😉 As a (not so useful, but interesting) data point, we get 40 req/sec on a plain ole unloved, untuned Zope 2 site from about 6 years ago, on Zope 2.7 running on a 2.6 GHz Xeon. Anonymous. No personalisation. So really interesting to see how far the world has come.

    For your prefix match, you might want to look at constructing a Patricia trie, which is good for this kind of thing.
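    The idea behind the trie suggestion above: walking the tree down to the prefix node bounds the work by the number of matching entries, instead of scanning every indexed word. A minimal sketch (a plain trie rather than a compressed Patricia trie, but the lookup principle is the same; all names are illustrative):

```python
# Minimal trie for prefix matching: insert words, then collect
# everything below the node reached by walking the prefix.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def with_prefix(self, prefix):
        """Return all indexed words starting with `prefix`."""
        node = self.root
        for ch in prefix:  # walk down to the prefix node
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []
        stack = [(node, prefix)]  # collect the subtree below it
        while stack:
            cur, text = stack.pop()
            if cur.is_word:
                results.append(text)
            for ch, child in cur.children.items():
                stack.append((child, text + ch))
        return results

t = Trie()
for w in ["performance", "perform", "plone", "python"]:
    t.insert(w)
print(sorted(t.with_prefix("per")))  # ['perform', 'performance']
```

    A Patricia trie compresses single-child chains into one edge, which cuts memory and pointer-chasing for sparse key sets; the query path is otherwise the same.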

    I’m hoping that we should be able to wring out another doubling of Plone speed in the next 12 months as we seem to be getting pretty good at it 😉


    • Paul Everitt Says:

      Matt, you’re exactly right about getting to write something from scratch. In our case, we were porting KARL2, a (franken)Plone application that was getting around 2 requests/second.

      But your point about rewriting stands. If Plone does want another doubling, the choices are going to get tougher: to double performance again, we have to rip stuff out. Especially if Plone wants to compete beyond static publishing and into beat-SharePoint territory. We need to avoid proclamations (by others) that there is no performance issue, based on a meaningless test. Do a meaningful test for the beat-SharePoint case and let’s confirm that we’re ready before blowing the trumpet.

  6. yurj Says:

    Well, that’s because you have Apache with mod_wsgi. Run Zope on mod_wsgi (repoze?) and you’ll get about the same number.

    • Paul Everitt Says:

      The reason we got 137 requests/sec wasn’t Apache, because I didn’t use Apache. I used the bundled, standalone Python server (Paste), just like using Zope. I pointed ‘ab’ directly at Paste.

  7. Roberto Allende Says:


    Great post! I’m happy to see Repoze and KARL are moving forward. Learning Repoze is certainly at the top of my todo list.

    Although it’s not completely related, I heard it is possible to use Repoze with Plone, and I wonder if there is any impact on performance and if there are people using that configuration in production.

    Kind Regards

    • Paul Everitt Says:

      Hi Roberto, nice to hear from you. Just to be clear: the Repoze project has lots of various efforts, including one that lets Plone/Zope run as WSGI. BFG, which is what KARL is based on, isn’t so much part of the Repoze effort for Zope/Plone.

      About performance, I doubt Plone-under-WSGI-using-Repoze would change the performance for Plone too much. Most of the time is spent in the app.

      It could decrease complexity by allowing you to run multiple processes inside Apache, eliminating the load balancer.

    • garbas Says:

      I hope something like repoze.zope2 will soon be merged into Zope 2, so there is a WSGI story out of the box.

      I was trying to deploy with repoze.zope2 and found weird issues with collective.indexing/solr. At that point I stopped, since I didn’t have time to investigate further. So repoze.zope2 is working; you just might find some issues with it which you would need to tackle on your own.

  8. Casey Duncan Says:

    To quote a once down-trodden, but now liberated Zope minion: “The problem is Dynamicism” ;^)
