The common way of solving the classic SQL problem of ranking involves a self join. I wish to present a different solution, which iterates the table only once and provides the same output.
The ranking problem
Given a table with names and scores (e.g. students' exam scores), add a rank to each row, such that the rank identifies its position among the other rows. Rows with identical scores should receive the same rank (e.g. both contenders got the silver medal).
Consider the following table (download score.sql):
mysql> select * from score;
+----------+--------------+-------+
| score_id | student_name | score |
+----------+--------------+-------+
|        1 | Wallace      |    95 |
|        2 | Gromit       |    97 |
|        3 | Shaun        |    85 |
|        4 | McGraw       |    92 |
|        5 | Preston      |    92 |
+----------+--------------+-------+
5 rows in set (0.00 sec)
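For completeness, here is a minimal sketch of what score.sql might contain, reconstructed from the output above; the column types and keys are my assumption, not taken from the downloadable file:

CREATE TABLE score (
  score_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- assumed type and key
  student_name VARCHAR(64) NOT NULL,                          -- assumed type
  score INT NOT NULL                                          -- assumed type
);

INSERT INTO score (score_id, student_name, score) VALUES
  (1, 'Wallace', 95),
  (2, 'Gromit', 97),
  (3, 'Shaun', 85),
  (4, 'McGraw', 92),
  (5, 'Preston', 92);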
We wish to present ranks in some way similar to:
+----------+--------------+-------+------+
| score_id | student_name | score | rank |
+----------+--------------+-------+------+
|        2 | Gromit       |    97 |    1 |
|        1 | Wallace      |    95 |    2 |
|        4 | McGraw       |    92 |    3 |
|        5 | Preston      |    92 |    3 |
|        3 | Shaun        |    85 |    4 |
+----------+--------------+-------+------+
Note that McGraw and Preston got the same score, therefore both share the 3rd rank, whereas Shaun is ranked #4.
The common solution
Following is the SQL for the common solution:
SELECT
  s1.score_id, s1.student_name, s1.score,
  COUNT(DISTINCT s2.score) AS rank
FROM
  score s1
  JOIN score s2 ON (s1.score <= s2.score)
GROUP BY
  s1.score_id
;
+----------+--------------+-------+------+
| score_id | student_name | score | rank |
+----------+--------------+-------+------+
|        1 | Wallace      |    95 |    2 |
|        2 | Gromit       |    97 |    1 |
|        3 | Shaun        |    85 |    4 |
|        4 | McGraw       |    92 |    3 |
|        5 | Preston      |    92 |    3 |
+----------+--------------+-------+------+
(The above can be ORDERed at will; more on this later)
What I’m suggesting is that this self join is overkill. It recalculates over and over what we knew a moment before: to get Preston’s rank, we need to count how many students got score >= 92. But when we need to find Shaun’s rank, we re-iterate those rows, and in addition count those with scores 85..91. We’re reading, re-reading, then re-re-reading (you get the point) the same data all over again. It’s a waste of energy.
Offering a new solution
I propose a simpler solution: do a one-time sorting of the rows by score (descending). The first row in the sorted set should obviously get rank 1. Now we iterate the rows one by one, keeping a rank variable. As long as the score remains the same, we just keep on iterating. Whenever the score changes (it can only change downwards, since we sorted by score descending), we increment the rank.
Actually, the above explanation makes it sound as if we do this in multiple steps. This is not so. We do it all in one step:
SELECT
  score_id, student_name, score,
  @prev := @curr,
  @curr := score,
  @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
FROM
  score,
  (SELECT @curr := null, @prev := null, @rank := 0) sel1
ORDER BY
  score DESC
;
+----------+--------------+-------+----------------+----------------+------+
| score_id | student_name | score | @prev := @curr | @curr := score | rank |
+----------+--------------+-------+----------------+----------------+------+
|        2 | Gromit       |    97 |           NULL |             97 |    1 |
|        1 | Wallace      |    95 |             97 |             95 |    2 |
|        4 | McGraw       |    92 |             95 |             92 |    3 |
|        5 | Preston      |    92 |             92 |             92 |    3 |
|        3 | Shaun        |    85 |             92 |             85 |    4 |
+----------+--------------+-------+----------------+----------------+------+
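If the intermediate @prev/@curr columns are unwanted in the output, one possible way (a sketch, not taken from the original text) is to wrap the query in an outer SELECT and expose only the rank column:

SELECT score_id, student_name, score, rank
FROM (
  SELECT
    score_id, student_name, score,
    @prev := @curr,
    @curr := score,
    @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
  FROM
    score,
    (SELECT @curr := null, @prev := null, @rank := 0) sel1
  ORDER BY score DESC
) ranked;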
Execution plan comparison
Do we have an index on the score column?
If not, I clearly win. The self join (when there are more than a mere 5 rows, of course) requires repeated full table scans, which amounts to O(n²).
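The plan below is presumably obtained by prefixing the common query with EXPLAIN (a sketch; only the resulting plan is shown here):

EXPLAIN
SELECT s1.score_id, s1.student_name, s1.score,
  COUNT(DISTINCT s2.score) AS rank
FROM score s1
JOIN score s2 ON (s1.score <= s2.score)
GROUP BY s1.score_id;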
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
|  1 | SIMPLE      | s1    | ALL  | NULL          | NULL | NULL    | NULL |    5 | Using temporary; Using filesort |
|  1 | SIMPLE      | s2    | ALL  | NULL          | NULL | NULL    | NULL |    5 | Using where                     |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
Whereas the suggested solution requires a single filesort. This still means table data can be re-read, but significantly less so: it only takes O(n*log(n)), where the log(n) part is usually very small (and it all depends on sort_buffer_size).
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
| id | select_type | table      | type   | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
|  1 | PRIMARY     | <derived2> | system | NULL          | NULL | NULL    | NULL |    1 | Using filesort |
|  1 | PRIMARY     | score      | ALL    | NULL          | NULL | NULL    | NULL |    5 |                |
|  2 | DERIVED     | NULL       | NULL   | NULL          | NULL | NULL    | NULL | NULL | No tables used |
+----+-------------+------------+--------+---------------+------+---------+------+------+----------------+
Testing on the somewhat larger sakila.film table (only 1,000 rows) on my laptop, the common query takes 47 seconds to complete, versus 0.01 seconds for the presented solution (repeatedly, so no caching issues).
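The exact queries used for the sakila.film test are not reproduced here; assuming the ranking is done by the film’s length column, they presumably looked something like the following sketch:

-- Common (self join) solution, adapted to sakila.film (my assumption)
SELECT f1.film_id, f1.title, f1.length, COUNT(DISTINCT f2.length) AS rank
FROM sakila.film f1
JOIN sakila.film f2 ON (f1.length <= f2.length)
GROUP BY f1.film_id;

-- User-variables solution, adapted to sakila.film (my assumption)
SELECT film_id, title, length,
  @prev := @curr,
  @curr := length,
  @rank := IF(@prev = @curr, @rank, @rank+1) AS rank
FROM sakila.film, (SELECT @curr := null, @prev := null, @rank := 0) sel1
ORDER BY length DESC;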
What happens when we do have an index? Again, I win. Testing on sakila.film now, having added an index on the length column:
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra                                        |
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
|  1 | SIMPLE      | s2    | index | length        | length | 3       | NULL | 1140 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | s1    | ALL   | length        | NULL   | NULL    | NULL | 1140 | Using where                                  |
+----+-------------+-------+-------+---------------+--------+---------+------+------+----------------------------------------------+
The above still needs to do an index scan. The GROUP BY requires either sorting or utilizing the PRIMARY KEY, and the execution plan with the table order reversed does not improve matters.
The presented solution utilizes the index for a single pass, with O(n) complexity:
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
| id | select_type | table      | type   | possible_keys | key    | key_len | ref  | rows | Extra          |
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
|  1 | PRIMARY     | <derived2> | system | NULL          | NULL   | NULL    | NULL |    1 |                |
|  1 | PRIMARY     | film       | index  | NULL          | length | 3       | NULL | 1140 |                |
|  2 | DERIVED     | NULL       | NULL   | NULL          | NULL   | NULL    | NULL | NULL | No tables used |
+----+-------------+------------+--------+---------------+--------+---------+------+------+----------------+
This is done with a single index pass. Again, on my laptop, I get ~41 seconds for the first query, 0.01 seconds for the proposed solution.
I also argue that the results would often be required in order of rank anyway, in which case the common solution must eventually sort regardless.
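For instance, when the output is needed in rank order, the common solution has to append an explicit sort (a sketch below), while the user-variables query already emits rows ordered by rank:

SELECT s1.score_id, s1.student_name, s1.score,
  COUNT(DISTINCT s2.score) AS rank
FROM score s1
JOIN score s2 ON (s1.score <= s2.score)
GROUP BY s1.score_id
ORDER BY rank;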
Conclusion
To be honest, I’ve seen the self-join solution in so many places: textbooks, training material, online tutorials… Maybe it’s just a silly exercise, perhaps not your daily real-world task; but it’s one of those classic SQL problems. The so-often-repeated common solution is ANSI SQL, for sure, but at what cost?
@Jay,
I am prepared to abandon the trick altogether due to the undefined order of evaluation of user variables. Why, oh why?
Hi!
It is a nice trick, and it reminds me of what I did about a year ago on calculating percentiles:
http://rpbouman.blogspot.com/2008/07/calculating-percentiles-with-mysql.html
Unfortunately, it is indeed unreliable.
But there is another way, which is still better than the Cartesian self join. This alternative solution relies on GROUP_CONCAT and can be written using either a scalar subquery or a derived table (which is still a kind of self join in this case).
You can read more about it here:
http://rpbouman.blogspot.com/2009/09/mysql-another-ranking-trick.html
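A rough sketch of the scalar-subquery variant of that GROUP_CONCAT approach, applied to the score table above (see the linked post for the full treatment):

SELECT s.score_id, s.student_name, s.score,
  FIND_IN_SET(
    s.score,
    (SELECT GROUP_CONCAT(DISTINCT score ORDER BY score DESC) FROM score)
  ) AS rank
FROM score s;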
Shlomi,
Nice post. Jay wrote a nice article about a similar matter some time ago.
http://dev.mysql.com/tech-resources/articles/rolling_sums_in_mysql_followup.html
Cheers
Giuseppe
@Roland:
Thanks, and have commented in your post as well!
@Giuseppe,
Once again, thanks for the reference!