How many organizations use version control for development? Probably almost every single one.
How many store the database schema under version control? Alas, not as many.
Coupling one’s application with its table schema is essential. Organizations that actively support multiple versions of their product understand this well. Smaller organizations do not always have this realization.
How is it done?
Ideally
Ideally one would have:
- The schema, written by hand
- Essential data (INSERT INTO statements for those lookup tables without which you cannot have an application)
- Sample data: sufficient real-life data on which to act. This would include customer data, logs, etc.
If you can work this way, then creating a staging environment consists of re-creating the entire schema from our schema code, and filling in the essential & sample data.
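As a sketch, assuming the three pieces live in version control as plain SQL files (the file and database names below are my own, hypothetical), rebuilding a staging environment could be a small shell function:

```shell
# Sketch only -- file names (schema.sql etc.) and the database name
# are assumptions, not taken from any particular setup.
rebuild_staging() {
    local db="$1"
    mysql -e "DROP DATABASE IF EXISTS \`$db\`; CREATE DATABASE \`$db\`"
    mysql "$db" < schema.sql          # the hand-written schema
    mysql "$db" < essential_data.sql  # lookup-table INSERTs
    mysql "$db" < sample_data.sql     # representative sample data
}
```

Called as `rebuild_staging staging_db`, this drops and recreates the database from scratch, which is exactly why it fits staging but not production.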
The thing with this method is that one does not (usually?) apply it to one’s live system. Say we were to add a column. On our live servers we would issue an ALTER TABLE t ADD COLUMN.
But this means we’re using different methods on our staging server and on our production server.
Incremental
Another kind of solution would be to hold:
- The static schema, as before
- Essential data, as before
- Sample data, as before
- A migration script, which is the concatenation of all ALTERs, CREATEs etc. issued since the static schema was taken.
Once in a while one can do a “reset”, and update the static schema to reflect the current design.
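Under this scheme, building a database at the current design is just the snapshot plus a replay of the migration script. A minimal sketch, with file names that are my own assumptions:

```shell
# Sketch: schema.sql is the periodic static snapshot; migration.sql is the
# concatenation of every ALTER/CREATE issued since that snapshot was taken.
build_db() {
    local db="$1"
    mysql "$db" < schema.sql     # baseline: the last "reset" snapshot
    mysql "$db" < migration.sql  # replay incremental changes on top
}
```

At “reset” time, schema.sql is regenerated from the current design and migration.sql starts over empty.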
As you go along
This solution simply means “we apply the changes on staging; test + version them; then apply on production”.
A side effect of this solution is that the database generates the schema, rather than the schema generating the database as in the previous cases. This makes for an uglier solution: you first apply the changes to the database, and then, based on what the database reports, check the result into version control.
How to do that? Easiest would be to use mysqldump --routines --no-data. Some further parsing should be done to strip out the AUTO_INCREMENT values, which tend to change, as well as the surrounding variable settings (strip out the character set settings etc.).
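A minimal filter along those lines might look as follows; the sed patterns are illustrative assumptions and would need tuning against the actual output of your mysqldump version:

```shell
# Dump structure only (no rows), then strip the volatile parts that would
# otherwise produce noisy diffs under version control.
dump_schema() {
    mysqldump --routines --no-data "$1" \
        | sed -e 's/ AUTO_INCREMENT=[0-9]*//' \
              -e '/^\/\*![0-9]* SET /d'   # surrounding variable settings
}
```

Something like `dump_schema mydb > schema.sql` then produces a file stable enough to commit.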
Summary
However you do it, make sure you have some kind of version control on your schema. It pays off just as version control for your code does. You get to compare, understand the incremental changes, understand the change in design, etc.
This post reminds me that I have always been (almost) angry that the AUTO_INCREMENT = XXXX clause was added to SHOW CREATE TABLE output. It makes it harder to get importable, structure-only information for a table, and it has broken many tools that make use of SHOW CREATE. Integration with version control as described here is another example. Any idea why they did this? Some ugly workaround for an InnoDB slowness issue, maybe?
@Peter,
If my memory serves me right, one reason for this was to make certain that AUTO_INCREMENT values would be consistent in a replication setup built from the dump (when the latest InnoDB rows are removed, the AUTO_INCREMENT counter still reflects the highest removed row, but that would not carry over to a server rebuilt from the dump).
For version control, you might be interested in taking a look at the neXtep designer project.
It is a database development environment where you develop on a version control repository. You work on the model, and the system handles all generation of the SQL delta scripts for you. You can synchronize your current developments at any time with a target database of your choice, where you can integrate changes from the database or push developments from the repository.
This is a free, open-source GPL product whose aim is to solve many problems related to database version control and development. We are trying to gather people around the project to get more feedback on it.
Tell me what you think.
Here is the link to the project website:
http://www.nextep-softwares.com
And a link to the product’s wiki, containing more information on the concepts, tutorials, and technical details:
http://www.nextep-softwares.com/wiki
Kind regards,
Christophe