There’s a trick for using LAST_INSERT_ID() to generate sequences in MySQL. Quoting from the Manual:
- Create a table to hold the sequence counter and initialize it:
mysql> CREATE TABLE sequence (id INT NOT NULL);
mysql> INSERT INTO sequence VALUES (0);
- Use the table to generate sequence numbers like this:
mysql> UPDATE sequence SET id=LAST_INSERT_ID(id+1);
mysql> SELECT LAST_INSERT_ID();
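Wired into application code, it looks roughly like the following minimal JDBC sketch (my own illustration, not from the Manual; the connection details are placeholders). Note that both statements must run on the same connection, since LAST_INSERT_ID() is maintained per connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust to your environment.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement stmt = connection.createStatement()) {
            // Both statements go through this same connection:
            // LAST_INSERT_ID() is per-connection.
            stmt.executeUpdate("UPDATE sequence SET id=LAST_INSERT_ID(id+1)");
            try (ResultSet rs = stmt.executeQuery("SELECT LAST_INSERT_ID()")) {
                rs.next();
                long sequenceId = rs.getLong(1);
                System.out.println("Generated sequence value: " + sequenceId);
            }
        }
    }
}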
This trick is asking for trouble.
Contention
A customer was using this trick to generate unique session IDs for his JBoss sessions. These IDs would eventually be written back to the database in the form of log events. Business goes well, and one day the customer adds three new JBoss servers (doubling the number of webapps). All of a sudden, nothing works quite as it used to. All kinds of queries take long seconds to complete; load average becomes very high.
A short investigation reveals that a very slight load is enough to make for an accumulation of sequence-UPDATE queries. Dozens of them are active at any given time, waiting for long seconds.
InnoDB and MyISAM both make for poor response times. No wonder! Everyone’s contending for one lock.
Not just one
Other queries seem to hang as well. Why?
It is easy to forget, or let go unnoticed, that there are quite a few global locks involved with each query. If the query cache is activated, any query must pass through that cache, holding the query cache mutex. There’s a global mutex on MyISAM’s key cache. There’s one on InnoDB’s buffer pool (see multiple buffer pools in InnoDB 5.5), albeit less of an overhead. And there’s the table cache.
When the table cache is enabled, any completed query attempts to return file handles to the cache, and any new query attempts to retrieve handles from it. While the cache is being modified (extracting, adding), it is locked. When everyone’s busy doing the sequence-UPDATE, the table cache lock is being abused. Other queries are unable to find the time to acquire the lock and get on with their business.
What can be done?
One could try to increase the table_open_cache value. That may help to some extent, and for a limited time; but the more requests are made, the quicker the problem surfaces again. Reducing the table_open_cache to zero (well, the minimum value is 1) can, in fact, make for a great impact. If there’s nothing to fight for, everyone just gets by on their own.
I know the following is not a scientific explanation, but it hits me as a good comparison: when my daughter brings a friend over, and there’s a couple of toys, both are happy. A third friend makes for a fight: “I saw it first! She took it from me! I was holding it!”. Any parent knows the ultimate solution to this kind of fight: take away the toys, and have them find something else to enjoy doing. OK, sorry for this unscientific display, I had to share my daily stress.
When no table cache is available, a query will go on opening the table by itself, and will not attempt to return the file handle back to the cache. The file handle will simply be destroyed. Now, usually this is not desired. Caching is good. But in our customer’s case, the cost of not using a table cache was dwarfed by the cost of having everyone fight for the sequence table. Reducing the table cache made for an immediate relaxation of the database, with observably poorer responsiveness at peak times, yet way better than with a large table cache.
Other tools?
I don’t consider the above to be a good solution. It’s just a temporary hack.
I actually don’t like the LAST_INSERT_ID() trick. Moreover, I don’t see that it’s the database’s job to provide unique IDs. Let it do relational stuff. If generating IDs is too intensive, let someone else do it.
NoSQL solutions provide such a service. Memcached, redis, MongoDB (and probably more) all provide increment functions. Check them out.
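For example, Redis’s INCR command does an atomic increment and is very cheap. A minimal sketch using the Jedis client (my own illustration, not from the post; the choice of Jedis, the server address and the key name are all assumptions):

import redis.clients.jedis.Jedis;

public class RedisIdGenerator {
    public static void main(String[] args) {
        // Assumes a Redis server on localhost:6379 and a counter key of our own choosing.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            long sessionId = jedis.incr("session_id_sequence");  // atomic; returns the new value
            System.out.println("Allocated session ID: " + sessionId);
        }
    }
}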
Application level solutions
I actually use an application level solution to generate unique IDs. I mean, there’s always GUID(), but its result is just too long. Take a look at the following Java code:
public class Utils {
    private static long lastUniqueNumber = 0;

    public static synchronized long uniqueNumber() {
        long unique = System.currentTimeMillis();
        // If called more than once within the same millisecond (or if the clock
        // did not move forward), just bump the previous value by one.
        if (unique <= lastUniqueNumber)
            unique = lastUniqueNumber + 1;
        lastUniqueNumber = unique;
        return unique;
    }
}
Within a Java application, the above method returns unique IDs: up to 1000 per second on average, since values are millisecond-based (though it can be invoked way more than 1000 times per second in short bursts).
On consecutive executions of applications on the same machine, one would still expect unique values, due to the time-related nature of the values. However, computer time changes. It’s possible that System.currentTimeMillis() would return a value already used in the past.
And what about two processes running on the same machine at the same time? Or on different machines?
Which is why I use the following combination to generate my unique IDs:
- Server ID (much like MySQL’s server_id parameter). This could be the last byte in the server’s IP address, or just 4 or 5 bits if not too many players are expected.
- Process ID (plain old pid), which I pass to the Java runtime in the form of a system property. Any two processes running on the same machine at the same time are assured to have different IDs. Two consecutively spawned processes will have different IDs. The time it would take to cycle the process IDs is way more than would make for a “time glitch” problem as described above.
- Current time in milliseconds.
If you have to fit everything within 64 bits (BIGINT), you’ll have to do bit manipulation, and drop some of the most significant bits of the milliseconds value so as to overwrite them with the server & process IDs.
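To make the bit manipulation concrete, here is a minimal sketch of my own. The particular split (8 bits for the server ID, 16 for the process ID, 40 for the milliseconds) is an assumed layout, not a prescribed one:

public class CompositeIdGenerator {
    private final long serverId;   // e.g. the last byte of the server's IP address
    private final long processId;  // e.g. passed in via -Dprocess.id=<pid>

    private long lastMillis = 0;

    public CompositeIdGenerator(long serverId, long processId) {
        this.serverId = serverId & 0xFF;      // keep 8 bits
        this.processId = processId & 0xFFFF;  // keep 16 bits
    }

    public synchronized long nextId() {
        long millis = System.currentTimeMillis();
        // Same guard as in Utils.uniqueNumber(): never go backwards within this process.
        if (millis <= lastMillis)
            millis = lastMillis + 1;
        lastMillis = millis;
        // Drop the most significant bits of the milliseconds, keeping the low 40 bits,
        // and overwrite the freed bits with the server and process IDs.
        long truncatedMillis = millis & 0xFFFFFFFFFFL;
        return (serverId << 56) | (processId << 40) | truncatedMillis;
    }
}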
If you are willing to have your IDs unique in the bounds of a given time frame (so, for example, a month from now you wouldn’t mind reusing old IDs), then the problem is significantly easier. You may just use “day of month” and “millis since day start” and save those precious bits.
Still other?
Please share your solutions below!
How about caching the sequence in the application itself?
Get the application to generate N session IDs at a time, allocating as needed:
UPDATE sequence SET id=LAST_INSERT_ID(id+N);
When the application has used up all N session IDs, it will request more from the DB.
This should reduce the load on the sequence table by a factor of N.
If each application uses a random multiple of N, it should also prevent all applications trying to refresh their cache at the same time – which could happen in an evenly distributed load . . .
@Rami,
doing something like LAST_INSERT_ID(id+1000) sounds just fine to me. Then you can take the resulting number and the 999 below it, and have the webapp use them.
I would suggest that the likelihood of all webapps returning to get the next 1000 values – all at the same time – recurringly – is zero. That is, it will eventually evenly spread. Isn’t the anthropic law in place here 😉 ?
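For illustration, a minimal sketch of such an application-side block allocator (my own code, not from the discussion; it reuses the sequence table above, and the JDBC connection setup is assumed):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceBlockAllocator {
    private final Connection connection;  // an open connection to the MySQL server
    private final int blockSize;          // e.g. 1000, as discussed above
    private long nextId = 0;
    private long blockEnd = 0;            // exclusive upper bound of the current block

    public SequenceBlockAllocator(Connection connection, int blockSize) {
        this.connection = connection;
        this.blockSize = blockSize;
    }

    public synchronized long nextId() throws Exception {
        if (nextId >= blockEnd)
            fetchBlock();  // only one round trip to the sequence table per blockSize IDs
        return nextId++;
    }

    private void fetchBlock() throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("UPDATE sequence SET id=LAST_INSERT_ID(id+" + blockSize + ")");
            try (ResultSet rs = stmt.executeQuery("SELECT LAST_INSERT_ID()")) {
                rs.next();
                blockEnd = rs.getLong(1) + 1;   // LAST_INSERT_ID() is the top of the block
                nextId = blockEnd - blockSize;  // the bottom of the block
            }
        }
    }
}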
It is funny actually that you posted this at the same time as I posted http://bugs.mysql.com/bug.php?id=59885
Hi Peter,
Funny indeed. Karma, I would say!
Do you find the way you used sequence generation (with AUTO_INCREMENT) to work faster / with fewer locks than the original documentation suggests?
Sorry Shlomo, but I have no such benchmarks.
My concern is only that, after updating using a calculation on LAST_INSERT_ID, one of the two (as far as I can see) valid INSERT syntaxes fails (i.e. if NULL is not specified explicitly).