The common way of solving the classic SQL problem of ranking involves a self join. I wish to present a different solution, one which iterates the table only once and provides the same output.
The ranking problem
Given a table with names and scores (e.g. students’ exam scores), add a rank to each row, such that the rank identifies its position among the other rows. Rows with identical scores should receive the same rank (e.g. both contenders got the silver medal).
Consider the following table (download score.sql):
mysql> select * from score;
+----------+--------------+-------+
| score_id | student_name | score |
+----------+--------------+-------+
|        1 | Wallace      |    95 |
|        2 | Gromit       |    97 |
|        3 | Shaun        |    85 |
|        4 | McGraw       |    92 |
|        5 | Preston      |    92 |
+----------+--------------+-------+
5 rows in set (0.00 sec)
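(If you don’t have score.sql handy, something along the following lines reproduces the table; the column types here are my assumption, not necessarily what the file declares.)

CREATE TABLE score (
  score_id INT UNSIGNED NOT NULL PRIMARY KEY,
  student_name VARCHAR(64) NOT NULL,
  score INT NOT NULL
);

INSERT INTO score (score_id, student_name, score) VALUES
  (1, 'Wallace', 95),
  (2, 'Gromit',  97),
  (3, 'Shaun',   85),
  (4, 'McGraw',  92),
  (5, 'Preston', 92);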
We wish to present ranks in a way similar to:
+----------+--------------+-------+------+
| score_id | student_name | score | rank |
+----------+--------------+-------+------+
|        2 | Gromit       |    97 |    1 |
|        1 | Wallace      |    95 |    2 |
|        4 | McGraw       |    92 |    3 |
|        5 | Preston      |    92 |    3 |
|        3 | Shaun        |    85 |    4 |
+----------+--------------+-------+------+
Note that McGraw and Preston got the same score, therefore both share the 3rd rank, whereas Shaun is ranked #4.
The common solution
Following is the SQL for the common solution:
SELECT
  s1.score_id, s1.student_name, s1.score,
  COUNT(DISTINCT s2.score) AS rank
FROM
  score s1
  JOIN score s2 ON (s1.score <= s2.score)
GROUP BY
  s1.score_id
;
+----------+--------------+-------+------+
| score_id | student_name | score | rank |
+----------+--------------+-------+------+
|        1 | Wallace      |    95 |    2 |
|        2 | Gromit       |    97 |    1 |
|        3 | Shaun        |    85 |    4 |
|        4 | McGraw       |    92 |    3 |
|        5 | Preston      |    92 |    3 |
+----------+--------------+-------+------+
(The above can be ORDERed at will; more on this later.)
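For instance, to present the output in rank order, one might write the following (note that this forces yet another sort on top of the GROUP BY’s temporary table):

SELECT
  s1.score_id, s1.student_name, s1.score,
  COUNT(DISTINCT s2.score) AS rank
FROM
  score s1
  JOIN score s2 ON (s1.score <= s2.score)
GROUP BY
  s1.score_id
ORDER BY rank ASC
;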
What I’m suggesting is that this self join is overkill. It recalculates over and over what we knew a second before: to get Preston’s rank, we count how many distinct scores are >= 92 (there are three: 97, 95, 92). But when we need to find Shaun’s rank, we iterate those same rows again, and in addition count those with scores 85..91. We’re reading, re-reading, then re-re-reading (you get the point) the same data all over again. It’s a waste of energy.
Offering a new solution
I propose a simpler solution: do a one-time sort of the rows by score (descending). The first row in the sorted set obviously gets the rank “1”. Now we iterate the rows one by one, keeping a rank variable. As long as the score remains the same, we just keep on iterating. Whenever the score changes (it can only change downwards, since we sorted by score descending), we increment the rank.
Actually, the above explanation makes it sound as if we do this in multiple steps. This is not so. We do it all in one step:
SELECT
  score_id, student_name, score,
  @prev := @curr,
  @curr := score,
  @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
FROM
  score,
  (SELECT @curr := null, @prev := null, @rank := 0) sel1
ORDER BY score DESC
;
+----------+--------------+-------+----------------+----------------+------+
| score_id | student_name | score | @prev := @curr | @curr := score | rank |
+----------+--------------+-------+----------------+----------------+------+
|        2 | Gromit       |    97 |           NULL |             97 |    1 |
|        1 | Wallace      |    95 |             97 |             95 |    2 |
|        4 | McGraw       |    92 |             95 |             92 |    3 |
|        5 | Preston      |    92 |             92 |             92 |    3 |
|        3 | Shaun        |    85 |             92 |             85 |    4 |
+----------+--------------+-------+----------------+----------------+------+
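The @prev and @curr columns are just scaffolding. If they bother you, one possible variation (my sketch, not part of the original solution) wraps the query in a derived table and projects only the interesting columns; whether the inner ORDER BY reliably survives the outer projection is an implementation detail, so take this as a sketch only:

SELECT score_id, student_name, score, rank
FROM (
  SELECT
    score_id, student_name, score,
    @prev := @curr,
    @curr := score,
    @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
  FROM
    score,
    (SELECT @curr := NULL, @prev := NULL, @rank := 0) sel1
  ORDER BY score DESC
) ranked;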
Execution plan comparison
Do we have an index on the score column?
If not, I clearly win. The self join (when there are more than a mere 5 rows, of course) makes for repeated full table scans, i.e. O(n²) work.
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
|  1 | SIMPLE      | s1    | ALL  | NULL          | NULL | NULL    | NULL |    5 | Using temporary; Using filesort |
|  1 | SIMPLE      | s2    | ALL  | NULL          | NULL | NULL    | NULL |    5 | Using where                     |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
Whereas the suggested solution requires one filesort. This still means table data may be re-read, but significantly less so: it only takes O(n*log(n)), where the log(n) part is usually very small (and it all depends on sort_buffer_size).
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
| id | select_type | table      | type   | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
|  1 | PRIMARY     | <derived2> | system | NULL          | NULL | NULL    | NULL |    1 | Using filesort |
|  1 | PRIMARY     | score      | ALL    | NULL          | NULL | NULL    | NULL |    5 |                |
|  2 | DERIVED     | NULL       | NULL   | NULL          | NULL | NULL    | NULL | NULL | No tables used |
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
Testing on the somewhat larger sakila.film table (only 1000 rows) on my laptop, the common query takes 47 seconds to complete, versus 0.01 seconds for the presented solution (repeatedly, so no caching issues).
What happens when we do have an index? Again, I win. Testing on sakila.film now, having added an index on the length column:
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra                                        |
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
|  1 | SIMPLE      | s2    | index | length        | length | 3       | NULL | 1140 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | s1    | ALL   | length        | NULL   | NULL    | NULL | 1140 | Using where                                  |
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
The above still needs to do an index scan. The GROUP BY requires either sorting or utilizing the PRIMARY KEY, and the execution plan with reversed table ordering does not improve matters.
The presented solution utilizes the index for a single pass, with O(n) complexity:
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
| id | select_type | table      | type   | possible_keys | key    | key_len | ref  | rows | Extra          |
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
|  1 | PRIMARY     | <derived2> | system | NULL          | NULL   | NULL    | NULL |    1 |                |
|  1 | PRIMARY     | film       | index  | NULL          | length | 3       | NULL | 1140 |                |
|  2 | DERIVED     | NULL       | NULL   | NULL          | NULL   | NULL    | NULL | NULL | No tables used |
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
This is done with a single index pass. Again, on my laptop, I get ~41 seconds for the first query, 0.01 seconds for the proposed solution.
I also argue that the results would often be required in order of rank anyway, in which case the common solution must eventually sort regardless.
Conclusion
To be honest, I’ve seen the self-join solution in so many places: textbooks, training material, online tutorials… Maybe it’s just a silly exercise, perhaps not your daily real-world task; but it’s one of those classic SQL problems. The so-often-repeated common solution is ANSI SQL, for sure, but at what cost?
Comments
This is very interesting, but I wonder if it’s technically correct. According to the MySQL 5.1 manual:
“The order of evaluation for user variables is undefined and may change based on the elements contained within a given query. In SELECT @a, @a := @a+1 …, you might think that MySQL will evaluate @a first and then do an assignment second, but changing the query (for example, by adding a GROUP BY, HAVING, or ORDER BY clause) may change the order of evaluation…The general rule is never to assign a value to a user variable in one part of a statement and use the same variable in some other part of the same statement. You might get the results you expect, but this is not guaranteed.”
Reference: http://dev.mysql.com/doc/refman/5.1/en/user-variables.html
Also, the cross join to set the initial values of the variables seems unnecessarily complex – does this really have a defined behavior? You could just precede the SELECT with a separate SET to initialize the variables. True, it’s no longer a single step, but in what context would this really matter?
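A sketch of that two-step variant, for illustration (this is my reading of the comment’s suggestion, using the same variables as the post’s query; it is not something the post tested):

SET @curr := NULL, @prev := NULL, @rank := 0;

SELECT
  score_id, student_name, score,
  @prev := @curr,
  @curr := score,
  @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
FROM score
ORDER BY score DESC;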
Many self joins are fundamentally quadratic, so these results should not be very surprising. What would be interesting would be an optimiser that could convert the self join expression into a less-than-quadratic plan.
🙂 Now try your trick when you need a GROUP BY 😉
Now if MySQL had windowing functions…
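(For reference: the ranking semantics used throughout this post are exactly what standard SQL calls DENSE_RANK(). On a database with window functions, which MySQL eventually gained in 8.0, the whole exercise reduces to something like the following sketch:)

SELECT
  score_id, student_name, score,
  -- RANK is a reserved word in MySQL 8.0, hence the backticks
  DENSE_RANK() OVER (ORDER BY score DESC) AS `rank`
FROM score
ORDER BY `rank`;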
@GSnyder
Thanks for the reference. A problem indeed.
As for the cross join: it actually does have a defined behavior: the evaluation of “tables” happens first, which means the assignment of @rank := 0, for example, happens before any row is iterated.