On free and open blog posts: this post is free

I’d like to voice some criticism and stress some opinions about free and open blog posts, including those appearing on Planet MySQL.

Some rants follow; skip if you’re only after technical stuff

1. On free reading

This post is completely free. You don’t need to pay or register to read it, nor will you be required to provide personal details in order to comment.

If you happen to see this post on some website which only shows you the first couple of sentences, then asks you to log in to view the complete text, you should know that no registration is needed to read it.


How to calculate a good InnoDB log file size – recap

Following Baron Schwartz’s post How to calculate a good InnoDB log file size, which shows how to estimate a good InnoDB log file size, and based on SQL: querying for status difference over time, I’ve written a query for MySQL 5.1 which samples 60 seconds of status and estimates the InnoDB transaction log bytes expected to be written over the course of 1 hour.

Recap: this information can be useful if you’re looking for a good innodb_log_file_size value, one that will not impose too much I/O (smaller values make for more frequent flushes), nor make for too long a recovery time (larger values mean more transactions to recover upon crash).

It is assumed that the 60-second period represents average system load, not some activity spike. Edit the sleep time and the factors as you wish, to sample longer or shorter periods.
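
Here is a minimal sketch of such a query (the full post has the complete version, so details may differ): it reads Innodb_os_log_written twice within a single statement, 60 seconds apart, using the trick from the status-difference post, and extrapolates to an hourly figure.

-- Sample Innodb_os_log_written twice, 60 seconds apart, within one query,
-- then extrapolate the one-minute difference to an hourly estimate.
SELECT
  SUM(val) * 60 AS estimated_innodb_os_log_written_per_hour,
  CONCAT(ROUND(SUM(val) * 60 / 1024 / 1024), 'MB') AS estimated_per_hour_mb
FROM (
  SELECT -VARIABLE_VALUE AS val            -- counter "before", negated
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_os_log_written'
  UNION ALL
  SELECT SLEEP(60)                         -- pause; contributes 0 to the sum
    FROM DUAL
  UNION ALL
  SELECT VARIABLE_VALUE                    -- counter "after"
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_os_log_written'
) AS samples;

The hourly estimate can then be weighed against the combined size of the log group (innodb_log_file_size times innodb_log_files_in_group).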

SQL: querying for status difference over time

The InnoDB plugin has a nice INFORMATION_SCHEMA concept: resetting tables. For example, the INNODB_CMP table lists information about compression operations. A similar table, INNODB_CMP_RESET, provides the same information, but resets the values. The latter can be used to measure, for example, the number of compression operations over time.

I wish to present a SQL trick which does the same, without the need for resetting tables. Suppose you have some status table, and you wish to measure the change in status per second, per minute, etc. The trick is to query for the value twice in the same query, with some pause in between, and calculate the difference.

For the sake of simplicity, I’ll demonstrate using 5.1’s INFORMATION_SCHEMA.GLOBAL_STATUS. Please refer to “INFORMATION_SCHEMA.GLOBAL_STATUS: watch out” for some discussion of this table.
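
Here is a minimal sketch of the trick (the variable and the pause are arbitrary, and the full post elaborates further): the counter is read and negated, the same statement sleeps, the counter is read again, and the three rows are summed.

-- Read the Questions counter, pause 10 seconds within the same statement,
-- read it again; SLEEP() returns 0, so SUM() yields "after" minus "before".
SELECT SUM(val) AS questions_in_10_seconds
FROM (
  SELECT -VARIABLE_VALUE AS val
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Questions'
  UNION ALL
  SELECT SLEEP(10)
    FROM DUAL
  UNION ALL
  SELECT VARIABLE_VALUE
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Questions'
) AS samples;

The trick relies on the UNION ALL parts being evaluated one after the other, so the two reads really are separated by the pause.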


How to export OpenOffice presentations to PDF

OpenOffice 3.x has an improved PDF export mechanism. With File->Export as PDF…, one can export a presentation file (*.odp) to PDF format.

[Edit: see updated notes at the end of this post]

But there is a “feature” which is very annoying. One can export the presentation:

  • Without notes, in which case the presentation frames are exported one per page. This is the standard export, and what you would normally use to present in front of an audience.
  • With notes, which would normally be used for printing. The problem is, this kind of export first produces the pages without notes, exactly as in the previous case, followed by the slides again, this time with notes. It was argued that this is a feature, not a bug; I fail to see the reasoning behind it.

If I want to create a PDF for printing, why would I want the bare slides? Anyway, for those who wonder like me, here’s how to overcome this using OpenOffice:

  1. Install the PDF import extension. It will be used in the following steps, but it’s a nice extension to have anyway.
  2. Export your presentation using File->Export as PDF…, make sure Export notes is checked. Say you export to “my_notes.pdf”.
  3. Open “my_notes.pdf” with OpenOffice (opens with Draw). The PDF extension allows for some PDF editing capabilities.
  4. Delete the pages starting at page #1 and up to the last slides-only page. The first page should now be the first slide+notes page.
  5. Yet again, export with File->Export as PDF…, make sure Export notes is checked. Save as “my_notes_final.pdf”.

And there you have it. The “my_notes_final.pdf” file contains a printable version of slides+notes, without the initial slides-only pages.

Of course, if you only want to produce slides in PDF, all you have to do is File->Export as PDF…, then make sure Export notes is unchecked.

Notes

On Ubuntu, the best way to install the PDF import extension is to:

sudo apt-get install openoffice.org-pdfimport

Edit:

More than two years have passed. The “feature” is still there, but a couple more solutions are available:

  1. Just use cups-PDF on your Linux machine. However, at the time of this writing, the pdftops module on Ubuntu hogs the CPU and doesn’t do the job. Argggh! I hate regressions.
  2. On the good side, you can just use pdfmod, a nice GUI application that allows you to remove or reorder PDF pages. So, just export to PDF with notes, then use pdfmod to remove the slides-only pages, or to remove the notes pages.

INFORMATION_SCHEMA.GLOBAL_STATUS: watch out

MySQL 5.1 boasts some new and useful INFORMATION_SCHEMA tables. Among them is the GLOBAL_STATUS table.

At last, it is possible to ask questions like:

node1> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'innodb_os_log_written';
+-----------------------+----------------+
| VARIABLE_NAME         | VARIABLE_VALUE |
+-----------------------+----------------+
| INNODB_OS_LOG_WRITTEN | 512            |
+-----------------------+----------------+
1 row in set (0.00 sec)

node1> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME = 'questions';
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| QUESTIONS     | 28             |
+---------------+----------------+
1 row in set (0.00 sec)

Watch out #1

As with all INFORMATION_SCHEMA tables, getting even a single row requires materializing the entire table. To answer the above two questions, the table is materialized twice. This means gathering all the information — twice. To get 20 values, we materialize the table 20 times. Not only does this take time, it also increments some of the status variables themselves, like questions, select_scan and created_tmp_tables. Ironically, when we used SHOW GLOBAL STATUS and had to parse the results in our application code, we only issued the query once. But with the convenience of INFORMATION_SCHEMA, it’s much easier (and makes more sense!) to query per variable.
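
If all the values are known up front, one mitigation (a sketch of my own, not from the original post) is to fetch them in a single pass, so the table only materializes once:

-- Materialize GLOBAL_STATUS once and pick out every variable of interest.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME IN ('Innodb_os_log_written', 'Questions', 'Select_scan', 'Created_tmp_tables');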


How NOT to test that mysqld is alive

I had a call from a new customer last week. They had issues with their MySQL server. Apparently, it was repeatedly crashing, every few hours. To keep their production system alive, they used a script that periodically checked whether MySQL was still alive, and if not, started it.

My first thoughts went in the direction of old installations and wrong memory allocations (too little memory for large content; too large an allocation for a weak machine). When I reached the customer’s premises, I quickly reviewed the general settings while explaining some basic configuration guidelines. There were peculiarities (e.g. query_cache_limit being larger than query_cache_size), but nothing to obviously suggest a reason for a crash.
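
As an aside, such peculiarities are quick to check on the spot. For the query cache, for example (a trivial illustration, not the customer’s actual setup):

-- List the query cache settings side by side; a query_cache_limit larger
-- than query_cache_size is pointless, since no result bigger than the
-- cache itself can be stored anyway.
SHOW GLOBAL VARIABLES LIKE 'query_cache%';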

I then looked at the error log. It took some time to find, since the log_error parameter appeared twice in the my.cnf file: the first occurrence at the very beginning, the second (overriding the first) far into the file.
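
When my.cnf is ambiguous like that, asking the running server which setting actually took effect is quicker (a small aside of mine, not from the original post):

-- The server reports the error log it actually uses; with duplicate
-- entries in my.cnf, the later one wins.
SHOW GLOBAL VARIABLES LIKE 'log_error';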

Sure enough, there were a lot of startup messages. And… shutdown messages. In fact, the log looked something like a repeating series of startups and shutdowns.