The DB problem inherent to dynamic web pages

July 20, 2009

When building web sites, a common requirement is a maximum page load time.

For example, many require major pages to load in under 0.5 seconds. Of course, numerous factors affect page load time: network, caching, web servers, scripting language/code, database access and more.

Naturally, I want to discuss database access when creating web pages. I'll be referring to dynamic web pages, such as those created by common languages like PHP, Java/J2EE, Ruby, ASP(.NET) etc.

A very common programming style is using what the Java jargon calls "scriptlets", as in the following JSP page:

<html>
<body>
    Time now is <%= new java.util.Date() %>
</body>
</html>

The above replaces the "<%= new java.util.Date() %>" part with a text representation of the current time.

If I were to produce a dynamic content site, say, a WordPress blog like the one you're reading, I would need to generate several pieces of dynamic content: the latest posts, the popular tags, the comments for this post, etc. These are generated by calling upon the database and running some queries. I suppose there's nothing new in what I've explained so far.
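Rendered naively, each of those fragments is one blocking query, issued one after the other. Here is a minimal, self-contained sketch of that serial flow; the queries are stand-ins (stubs that just sleep for 50 ms), not real JDBC calls:

```java
public class SerialRenderSketch {
    // Stand-in for a real JDBC call; simulates a 50 ms query.
    static String runQuery(String sql) throws InterruptedException {
        Thread.sleep(50);
        return "rows for: " + sql;
    }

    // Renders the page fragments one after the other and reports elapsed time.
    static long renderSerially() throws InterruptedException {
        long start = System.currentTimeMillis();
        String latestPosts = runQuery("SELECT ... latest posts");
        String popularTags = runQuery("SELECT ... popular tags");
        String comments    = runQuery("SELECT ... comments for this post");
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        // Three 50 ms queries: the times add up, they never overlap.
        System.out.println(renderSerially() + " ms");
    }
}
```

With three 50 ms queries, elapsed time is at least 150 ms: serial queries sum, they never overlap.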

The problem

When generating a "heavyweight" page, like some online newspaper or bookstore, there may be many queries involved. Are you logged in? Do we have recommendations for you? What are the latest topics? What have you been interested in before? Do you have friends online? What content have you produced on the website?

I recently reviewed a site which generated > 500 queries for a single page. I personally thought that was a very high number, but it was a necessity. The problem was: the page took 2 seconds to load.

Some tuning, rewriting and indexing later, load time dropped to 0.6 seconds; but that was still not fast enough. It was then that we reached a major conclusion:

All database calls are serialized. They need to be parallelized.

Remember that MySQL can only utilize a single thread for the computation of a single query (though more threads can handle IO in the meantime). For a given web page, this leads to only one CPU being used on your standard Linux distribution.

Really, that sounds just too obvious! But it is not so easy to achieve when doing "scriptlets". The templating engine parses the scriptlets one by one, executing them in order. In fact, you assume it does so, so that you can rely on the outcome of the previous scriptlet in the next one. In Java, for example, it goes beyond that: a JSP page is rewritten as a normal Java Servlet class, where the "scriptlets" become the main code, and the HTML becomes just printing to standard output. So you get linearly executing code.
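To make this concrete, here is a hypothetical sketch of what the container-generated class for the JSP page above boils down to. The class and method names are simplified stand-ins (the real generated code is container-specific and implements the Servlet API); the point is that the HTML becomes print statements and the scriptlet becomes inline code, executed top to bottom:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Simplified stand-in for the servlet class a JSP container generates.
public class TimeJspServlet {
    public void service(PrintWriter out) {
        out.print("<html>\n<body>\n    Time now is ");
        out.print(new java.util.Date());   // the <%= ... %> scriptlet, inlined
        out.print("\n</body>\n</html>\n");
    }

    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        new TimeJspServlet().service(new PrintWriter(buffer, true));
        System.out.println(buffer);
    }
}
```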

Even with more sophisticated frameworks, the "normal" way of doing things is linear. For example, using the Spring framework, you have Java objects -- controllers -- which are responsible for web pages. You can avoid doing scripting within your dynamic web pages, and only ask for data provided by those controllers. So, for example, using Spring + Velocity, a web page could look like this:

<html>
<body>
    Login time as recorded in DB is: ${user.loginTime}
</body>
</html>

This (usually) translates to calling the getLoginTime() method on a pre-built user object. But just how does this method work?

  • Does it do lazy initialization, so that it calls upon the DB to get the answer?
  • Did the controller set up the value during some init() method?
  • Did the controller set up the value in response to the web page's request parameter, parsing them one by one?

All the above options lead to linear, or serial execution.

How to parallelize?

Parallelization within web pages is not so simple, and requires an understanding of multi-threaded programming. The programmer needs to be aware of race conditions, deadlocks, starvation issues, etc. (though, to be honest, in the context of dynamic web pages these do not usually become a real issue). Some programming languages provide good support for multi-threaded programming. Java is one such language.

Let's assume, then, that we need to spawn some 10 queries in response to a page request. In Java, we can write something like:

final CountDownLatch doneSignal = new CountDownLatch(10);

Runnable task1 = new Runnable() {
    public void run()
    {
        user.setLoginTime(jdbcTemplate.queryForInt("SELECT ... FROM ..."));
        doneSignal.countDown();
    }
};

Runnable task2 = new Runnable() {
    public void run()
    {
        headlines = getSimpleJdbcTemplate().query("SELECT * FROM headline WHERE...",
            new ParameterizedRowMapper<Headline>() {
                public Headline mapRow(ResultSet rs, int rowNum) throws SQLException
                {
                    Headline headline = new Headline();
                    headline.setTitle(rs.getString("title"));
                    headline.setUrl(rs.getString("url"));
                    ...
                    return headline;
                }
            });
        doneSignal.countDown();
    }
};

...

Runnable task10 = new Runnable() {
    public void run()
    {
        ...
        doneSignal.countDown();
    }
};

Executor executor = Executors.newFixedThreadPool(numberOfAvailableProcessors);
executor.execute(task1);
...
executor.execute(task10);

doneSignal.await();

// Now fill in the Model

The above code is simplified and presented in a way which is more readable. What it does is:

  • Let's create the 10 tasks, but not execute them: just lay out the commands.
  • Each task, upon completion, lets the CountDownLatch know it has completed (but remember we have not executed it yet).
  • We create or use a thread pool, using some n threads; n may relate to the number of processors we have.
  • We ask the pool to execute all tasks. At the discretion of the pool, it will either run them all concurrently, or some sequentially - depending on how many threads are available.
  • We ask the CountDownLatch -- a one-time barrier -- to block, until all 10 tasks have notified they're done.
  • We can now go on and do our stuff.

Spring has a built-in TaskExecutor mechanism that provides a solution similar to the thread pool above.

I'm mostly a C/C++/Java programmer; I don't know how this can be achieved in PHP, Ruby, ASP.NET or other languages. The above code is certainly not the most straightforward to use. I would like to see frameworks provide wrappers for this kind of solution, so as to support the common web developer with parallelization.

posted in MySQL by shlomi



6 Comments to "The DB problem inherent to dynamic web pages"

  1. jsled wrote:

    If you're going to block until all of the queries have returned anyways, you might as well dispense with the CountDownLatch … just submit the queries and use the Futures to either return immediately (if they're already finished) or block until they have finished. It's a bit more straightforward, and will have the same runtime behavior.
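For illustration, a minimal self-contained sketch of the Futures-based approach this comment describes; the query bodies are stand-ins (a Callable that just builds a string), not real DB calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Submit each query as a Callable, then iterate the Futures; get() blocks
// only for tasks that have not yet finished -- no CountDownLatch needed.
public class FuturesSketch {
    static List<String> runAll(int queryCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> futures = new ArrayList<Future<String>>();
        for (int i = 1; i <= queryCount; i++) {
            final int queryNumber = i;
            // Stand-in for a real DB query; a Callable returns its result
            // instead of writing to shared state.
            futures.add(pool.submit(new Callable<String>() {
                public String call() {
                    return "result of query " + queryNumber;
                }
            }));
        }
        List<String> results = new ArrayList<String>();
        for (Future<String> f : futures) {
            results.add(f.get());  // implicit barrier: blocks until this task is done
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(10));
    }
}
```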

  2. shlomi wrote:

    Jsled,

    You are absolutely right. I could have blocked on the collection of Futures.
    I chose the above code because I believe it is slightly easier to understand for non-Java programmers. At least that was my thinking.

    Thanks,
    Shlomi

  3. Roland Bouman wrote:

    Hi!

    "Parallelization with web pages is not so simple, and requires understanding of multi threading programming. "

    For many cases, it might be easier to use Ajax to do parallel HTTP requests. Of course, this requires client side javascript, and you can't parallelize a whole lot due to a limited number of simultaneous HTTP connections, but still, Ajax does get the job done many times without requiring multi-threaded server side programming.

    I do agree that a multi-threaded server side programming language would be nice - I have considered trying Java instead of PHP for that reason. But it'd be even nicer if there were some way to do it in PHP too.

    kind regards,

    Roland

  4. Robert Wultsch wrote:

    In a short time I think libdrizzle will provide the answer that we are looking for in terms of parallelization of queries.

    When I worked on a newspaper website I got the front page down to around 5 queries, and story pages to around 10. However, even these few fast queries are not fast enough for many applications. One way or another, cache management needs to be part of the discussion, as I think it is a powerful stop gap and part of an ideal complete solution.

  5. shlomi wrote:

    @Roland,

    True, with Ajax you can do nifty things. It does make for multiple HTTP connections, which, in itself, can slow down loading time; plus the page can get rendered awkwardly before all ajax calls have returned.

  6. Roland Bouman wrote:

    " in itself, can slow down loading time; plus the page can get rendered awkwardly before all ajax calls are returned"

    sure it can. but the trick is of course to make sure the most important things are rendered ASAP with the initial request. The remainder of the page can then trickle in through one or more ajax requests.
