Linux – code.openark.org — blog by Shlomi Noach
http://shlomi-noach.github.io/blog/
Feed generated Sun, 29 Sep 2013 13:44:19 +0000

Trick: recovering from “no space left on device” issues with MySQL
https://shlomi-noach.github.io/blog/mysql/trick-recovering-from-no-space-left-on-device-issues-with-mysql
Fri, 23 Aug 2013 11:25:15 +0000

Just read Ronald Bradford’s post on an unnecessary 3am (emergency) call. I sympathize! Running out of disk space makes for some weird MySQL behaviour, and in fact whenever I encounter weird behaviour I verify disk space.

But here’s a trick I’ve been using for years to avoid such cases and to be able to recover quickly. It has helped me in events such as running out of disk space during ALTER TABLE statements, or avoiding the purging of binary logs when a slave is known to be under maintenance.

Ronald suggested it: just put a dummy file in your @@datadir! I like putting a 1GB dummy file: I typically copy+paste a 1GB binary log file and call it “placeholder.tmp”. Then I forget all about it. My disk space should not run out; if it does, it’s cause for emergency. I have monitoring, but sometimes I’m hoping to pull off an operation at 97%–99% utilization.

If I do run out of disk space: well, MySQL won’t let me connect, won’t complete an important statement, won’t sync a transaction to disk; a bad situation. Not a problem in our case: we can magically reclaim 1GB worth of space in the @@datadir, buying us enough time (maybe just minutes) to gracefully complete the necessary operations: connect, KILL, shutdown, abort, etc.
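
Creating the placeholder needs nothing beyond standard tools. A minimal sketch, using a hypothetical path and a small 4MB demo size (in practice, use your real @@datadir and ~1GB):

```shell
# Reserve space in the data directory ahead of time; in an emergency,
# deleting this file instantly frees the space. Path and size here are
# demo values -- use your real @@datadir and ~1GB in practice.
DATADIR=./demo-datadir
mkdir -p "$DATADIR"
dd if=/dev/zero of="$DATADIR/placeholder.tmp" bs=1M count=4 2>/dev/null
wc -c < "$DATADIR/placeholder.tmp"
# Emergency recovery is then just:
#   rm "$DATADIR/placeholder.tmp"
```

Copying an existing binary log, as described above, achieves the same thing; dd just makes the reserved size explicit.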

Installing MySQL from source/binary tarball as a Linux service
https://shlomi-noach.github.io/blog/mysql/installing-mysql-from-sourcebinary-tarball-as-a-linux-service
Tue, 01 May 2012 08:10:10 +0000

I’ve written before that I prefer a manual install of MySQL over a repository one. I still do: I typically install from a binary tarball or by compiling from source.

I’d like to share my setup procedure for Linux installation and service setup. I’ve done this dozens of times, on different Linux flavors, and it works well for me.

Installing from source

To get this straight: you sometimes have to compile the source files. I, for example, happen to use the Sphinx MySQLSE extension, which can only be used if compiled together with MySQL. Similarly, you had to compile a “vanilla” 5.1 version without the query cache in order to completely remove the cache’s mutex contention.

Anyway, I find the easiest way is to install onto a path associated with the server version. For example, I would install a 5.5 server onto /usr/local/mysql55.

This way, each version gets its own path, and there is no ambiguity.

To do that, use the --prefix option at the configuration step:

cd /path/to/extracted/source/tarball
sh BUILD/autorun.sh
./configure --prefix=/usr/local/mysql55
make
sudo make install

Once this is complete, you have everything under /usr/local/mysql55. This means binaries, libraries, scripts, etc.

To install the MySQL server as a service, copy the mysql.server script to /etc/init.d:

sudo cp /usr/local/mysql55/support-files/mysql.server /etc/init.d/mysql55

Again, I’m naming the script after the MySQL version. This avoids conflict with possible past or future installations of the MySQL server, which typically create a service named mysql or mysqld.

A thing to note about the mysql.server script is that it allows you (at around line #45) to set two variables:

  • basedir: the path to your installation directory. When compiling from source this is already set up with the path provided to the configure script; thus, in our example, you can expect this variable to read /usr/local/mysql55, so there is basically nothing to do here.
  • datadir: the path to your data directory. If you’re putting your my.cnf file in /etc or /etc/mysql, then setting datadir in my.cnf suffices. However, if you’re going to put my.cnf itself in the data directory (e.g. so as to avoid collisions), then make sure to set this variable in the mysql.server init script.

Depending on your $PATH configuration, it is also a good idea to specify the basedir variable in your my.cnf‘s [mysqld] section.
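
A minimal sketch of what such a my.cnf fragment might look like, assuming the example paths used in this post (the exact paths are placeholders; adjust to your layout):

```ini
# Hypothetical my.cnf fragment -- paths follow this post's /usr/local/mysql55 example
[mysqld]
basedir = /usr/local/mysql55
datadir = /usr/local/mysql55/data
```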

Which leads us to $PATH: your Linux system is still unaware of the many binaries you’ve got in there. I typically add the following line at the end of /etc/bash.bashrc:

export PATH=/usr/local/mysql55/bin:${PATH}

This is the most global PATH setting one can make. Alternatively, use /etc/profile, ~/.bashrc, etc. (you may have noticed by now that I work with bash).

Finally, we need to set up the init script to run at startup and stop at shutdown.

  • On Debian/Ubuntu and related distributions I use rcconf (I’m too lazy to remember the command-line setup).
  • On RedHat/CentOS and related distributions I use chkconfig --add mysql55, or linuxconf (since I’m lazy).

Installing from binary tarball

The only difference is that the mysql.server script is unaware of our deployment path, so the basedir variable must be set in that file. Other than that, follow the same steps as for a source installation (and of course there is no need to configure & make…).

Test-driven SQL development
https://shlomi-noach.github.io/blog/mysql/test-driven-sql-development
Thu, 20 Oct 2011 17:55:04 +0000

I’m having a lot of fun writing common_schema, an SQL project which includes views, tables and stored routines.

As the project grows (and it’s taking some interesting directions, in my opinion) more dependencies are being introduced, and a change to one routine or view may affect many others. This is why I’ve made development on common_schema test driven.

Now, just how do you test drive an SQL project?

Well, much like the way you test any other project in your favorite programming language. If it’s functions you’re testing, that’s all too familiar: functions get some input and provide some output (though they might also be changing SQL data along the way). With procedures it’s slightly more complex, since they do not directly return output, but rather result sets.

Here’s the testing scheme I use:

  • Tests are divided into families. For example, there is a family of tests for the eval() function.
  • Each test in a family is responsible for checking the simplest, most “atomic” issue. This means many small tests.
  • Each test can have a “pre-test” step, which prepares the ground (for example, creates a table and populates it).
  • Likewise, a test can have a “post-test” step, which is typically just cleanup code (since the test is already complete by the time the post step is invoked).
  • Each test is an SQL file: a set of commands to be executed.
  • A test may have an “expected output” file.
  • If no explicit expected output exists, the test is expected to return “1” (just as the most basic JUnit test assumes an “assert true”).
  • A test family may also have pre- and post- steps.
  • Any failure in any step fails the entire process. Failures may include:
    • Failure to prepare the ground for a test or family of tests
    • Failure in executing the test
    • Mismatch between the test’s output and the expected result
    • Failure in executing the post- step (which may indicate an invalid test result not intercepted by the test itself)

An example

The following image presents a single test family: the eval family, testing the eval() routine.

[Image: Test-driven SQL development – sample]

  • In this family, there are two tests.
  • In both tests, we have a pre-test step, and a test.
  • We have no post-test here.
  • Nor do we have an expected-output sample, which means the tests are expected to return “1”.

Implementation

But how are tests conducted? Via mysql, of course. While tests are plain SQL text files, they are executed against a running MySQL server using the mysql client: it is given the test files as input, and its output is directed to a file as well.

This makes it very easy to drive the tests using a simple shell script. It takes a small measure of file iteration, process invocation, exit-code checking, and diff execution.

For example, to run eval()‘s 01 test, we first execute mysql with 01/pre.sql as input. Assuming success, we execute mysql again, this time with 01/test.sql. We capture the output of this execution and compare it with expected-output, or with “1” when no expected-output is specified.
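
The per-test flow just described can be sketched as a shell function. This is an illustrative stand-in, not common_schema’s actual harness: the client function below fakes the mysql invocation (in real use you would run something like mysql --defaults-file=... < "$file"), and the directory layout mirrors the eval/01 example.

```shell
# Sketch of the per-test flow: pre step, test step, compare output against
# expected-output (defaulting to "1"), then post step. "client" is a
# stand-in for the mysql command-line client so the sketch runs anywhere.
client() { echo 1; }                       # fake client: every test "returns 1"

run_test() {
  dir="$1"
  if [ -f "$dir/pre.sql" ];  then client < "$dir/pre.sql"  > /dev/null; fi
  actual=$(client < "$dir/test.sql")
  if [ -f "$dir/expected_output" ]; then
    expected=$(cat "$dir/expected_output")
  else
    expected="1"                           # implicit "assert true"
  fi
  if [ -f "$dir/post.sql" ]; then client < "$dir/post.sql" > /dev/null; fi
  if [ "$actual" = "$expected" ]; then
    echo "PASS $dir"
  else
    echo "FAIL $dir"
  fi
}

mkdir -p eval/01 && touch eval/01/test.sql
run_test eval/01
# prints: PASS eval/01
```

The real script adds iteration over all families and tests, and aborts on the first failure, as described above.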

Tests pass, or no code!

Some 12 years ago, I worked with a lesser-known version control system called aegis. The thing I remember most about aegis is that it had a good test infrastructure. Long before “test-driven development” was coined, or even commonly practiced, aegis supported tests right in your version control: in the sense that you could not merge your code back to the baseline if it didn’t pass all of the tests.

I work with SVN for common_schema, and I do not know of such an option in SVN. But I also use ant to build this project, and the dependency is clear there: ant dist, the target which creates the distribution files, depends on ant test, the target which runs the tests.

That is, you cannot generate the distribution files when tests fail.

Further notes

Since I’m now retroactively patching tests onto already-existing functionality, calling it test-driven development is an overstatement; nevertheless, the new tests are already proving invaluable as I keep changing and improving existing code. Suddenly dependent functionality no longer works as expected. What fun!

The code for the testing suite is actually much shorter than this blog post.

Where’s my cnf file?
https://shlomi-noach.github.io/blog/mysql/wheres-my-cnf-file
Tue, 07 Dec 2010 10:24:44 +0000

So you have a running MySQL server; it’s working well and everyone’s happy. You want to make a minor change to the configuration file, so you edit the file and restart MySQL – but the change doesn’t take!

Or maybe you want to check that some global variable has not been dynamically changed without an update to the configuration file. But the configuration file doesn’t make any sense: it looks like nothing is common between the file and the server.

Wait, which my.cnf file does MySQL read? Rather, which my.cnf files?

Ever happened to you? If you’re well organized, and only keep a single /etc/my.cnf file, you know exactly where everything is. But some systems are messier, with lots of configuration files hanging around. Which ones apply?

Let’s find out which configuration files apply.

No direct information

It would all be easier if we could just SHOW GLOBAL VARIABLES LIKE 'configuration_files_that_this_server_has_read_list'. There is no such option.

The MySQL documentation explains the configuration file search path, and that’s one path you can follow. Also, you can detect another estimated search path by invoking:

root@myhost:~# mysqld --verbose --help | head -n 20
100927 19:53:06 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!

mysqld  Ver 5.1.41 for unknown-linux-gnu on x86_64 (MySQL Community Server (GPL))
Copyright (C) 2000-2008 MySQL AB, by Monty and others
Copyright (C) 2008 Sun Microsystems, Inc.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Starts the MySQL database server

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/local/mysql/etc/my.cnf ~/.my.cnf
The following groups are read: mysqld server mysqld-5.1
The following options may be given as the first argument:
...

Easy enough, right? Just walk through that search path and you’ve covered it all. Better yet, see which of these even exist!

root@myhost:~# ls -l /etc/my.cnf /etc/mysql/my.cnf /usr/local/mysql/etc/my.cnf ~/.my.cnf
ls: cannot access /etc/my.cnf: No such file or directory
ls: cannot access /etc/mysql/my.cnf: No such file or directory
ls: cannot access /usr/local/mysql/etc/my.cnf: No such file or directory
-rw-r--r-- 1 shlomi shlomi 32 2010-03-03 15:21 /home/shlomi/.my.cnf

Seems like we got it. The mysqld process only reads /home/shlomi/.my.cnf. Right?

Wrong!

There are two instances of MySQL running on my machine, and neither of the primary my.cnf files used by these instances is listed above.

root@myhost:~# ps aux | grep mysqld
shlomi   12092  0.0  0.0   4096   352 pts/1    S    Sep26   0:00 /bin/sh /home/shlomi/sandboxes/5.1/5.1.50/bin/mysqld_safe --defaults-file=/home/shlomi/sandboxes/msb_5_1_50/my.sandbox.cnf
shlomi   12167  0.0 14.5 765520 587924 pts/1   Sl   Sep26   1:12 /home/shlomi/sandboxes/5.1/5.1.50/bin/mysqld --defaults-file=/home/shlomi/sandboxes/msb_5_1_50/my.sandbox.cnf --basedir=/home/shlomi/sandboxes/5.1/5.1.50 --datadir=/home/shlomi/sandboxes/msb_5_1_50/data --log-error=/home/shlomi/sandboxes/msb_5_1_50/data/msandbox.err --pid-file=/home/shlomi/sandboxes/msb_5_1_50/data/mysql_sandbox5150.pid --socket=/tmp/mysql_sandbox5150.sock --port=5150
root     22827  0.0  0.0   4096   668 pts/3    S    16:50   0:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/usr/local/mysql/data --pid-file=/usr/local/mysql/data/eeyore.pid
mysql    22960  0.1  2.2 274584 90188 pts/3    Sl   16:50   0:18 /usr/local/mysql/bin/mysqld --defaults-extra-file=/usr/local/mysql/data/my.cnf --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --user=mysql --log-error=/usr/local/mysql/data/eeyore.err --pid-file=/usr/local/mysql/data/eeyore.pid --socket=/tmp/mysql.sock --port=3306

Can you see the mess above?

The first two lines refer to a MySQL instance running under mysqlsandbox. The mysqld_safe script is passed the --defaults-file parameter, and passes it on to the mysqld process.

The next couple of lines, however, refer to a MySQL server installed as a service; installed from a binary tarball, this instance reads its configuration from the datadir. This time mysqld_safe is passed nothing, but invokes mysqld with --defaults-extra-file.

To be fair, I wasn’t expecting the “mysqld --verbose --help” invocation to find the mysqlsandbox configuration files. I did expect it to find the /usr/local/mysql/data/my.cnf file which is eventually used.

That’s nice & ugly: I can see which my.cnf file is used by peeking at ps output. A bit of an overkill.
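
Scripting that “peek at ps” is straightforward. A sketch, where the command line below is pasted from the ps output above; in live use you would feed it from something like ps -o args= -C mysqld:

```shell
# Extract any --defaults-file / --defaults-extra-file option from a mysqld
# command line. The sample string stands in for live "ps" output.
cmdline='/usr/local/mysql/bin/mysqld --defaults-extra-file=/usr/local/mysql/data/my.cnf --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data'
cnf=$(echo "$cmdline" | tr ' ' '\n' | grep '^--defaults' | cut -d= -f2)
echo "$cnf"
# prints: /usr/local/mysql/data/my.cnf
```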

Not quite there yet…

Because there’s still my private configuration file (residing at /home/shlomi/.my.cnf on my account). Now, I do not expect this file to be read by my standard MySQL server, since the server does not run as user “shlomi”. However, my command-line clients do read this file, and so I am affected by its settings.

I can verify whether such files have been used on a file system which is configured to support the atime option:

root@myhost:~# ls -lt --time=atime $(locate "*my.cnf")

I usually keep the atime option enabled for my “/” and “/home” partitions, but disable it on data partitions.

mylvmbackup HOWTO: minimal privileges & filesystem copy
https://shlomi-noach.github.io/blog/mysql/mylvmbackup-howto-minimal-privileges-filesystem-copy
Tue, 17 Aug 2010 17:42:40 +0000

This HOWTO discusses two (unrelated) issues with mylvmbackup:

  • The minimal privileges required to take MySQL backups with mylvmbackup.
  • Making (non compressed) file system copy of one’s data files.

Minimal privileges

Some just give mylvmbackup the root account, which is far too permissive. We now consider what the minimal requirements of mylvmbackup are.

The queries mylvmbackup issues are:

  • FLUSH TABLES
  • FLUSH TABLES WITH READ LOCK
  • SHOW MASTER STATUS
  • SHOW SLAVE STATUS
  • UNLOCK TABLES

Both SHOW MASTER STATUS & SHOW SLAVE STATUS require either the SUPER or REPLICATION CLIENT privilege. Since SUPER is more powerful, we choose REPLICATION CLIENT.

The FLUSH TABLES variants and UNLOCK TABLES require the RELOAD privilege.

However, we are not done yet: mylvmbackup connects to the mysql database, which means we must have some privilege there as well. We choose the SELECT privilege.

Finally, here are the commands to create a mylvmbackup user with minimal privileges:

CREATE USER 'mylvmbackup'@'localhost' IDENTIFIED BY '12345';
GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'mylvmbackup'@'localhost';
GRANT SELECT ON mysql.* TO 'mylvmbackup'@'localhost';

In the mylvmbackup.conf file, the corresponding lines are:

[mysql]
user=mylvmbackup
password=12345
host=localhost

Filesystem copy

By default, mylvmbackup creates a .tar.gz compressed backup file of your data. This is good if the reason you’re running mylvmbackup is to, well, make a backup. However, as with all backups, one may be making the backup so as to set up a replication server. In this case you don’t really want compressed data: you want the data extracted on the replication server, just as it is on the original host.

mylvmbackup supports backing up the files using rsync.

To copy MySQL data to a remote host, configure the following in the mylvmbackup.conf file:

[fs]
backupdir=shlomi@backuphost:/data/backup/mysql
[misc]
backuptype=rsync

You may be prompted for a password, unless you have your public key stored on the remote host.

Normally rsync is thought of as a remote sync tool, but it also works on local file systems. If you have a remote directory mounted on your file system (e.g. with NFS), you can take advantage of that:

[fs]
backupdir=/mnt/backup/mysql
[misc]
backuptype=rsync

Voila! Your backup is complete.

Tips for taking MySQL backups using LVM
https://shlomi-noach.github.io/blog/mysql/tips-for-taking-mysql-backups-using-lvm
Tue, 03 Aug 2010 06:45:29 +0000

LVM uses copy-on-write to implement snapshots. Whenever you write data to some page, LVM copies the original page (the way it looked when the snapshot was taken) to the snapshot volume. The snapshot volume must be large enough to accommodate all pages written to during the snapshot’s lifetime. In other words, you must be able to copy the data somewhere outside (tape, NFS, rsync, etc.) in less time than it takes for the snapshot to fill up.
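
Back-of-envelope sizing follows directly from the above: if pages are dirtied at some rate for the duration of the copy, the snapshot must hold at least rate × duration. The numbers below are made up purely for illustration:

```shell
# Rough lower bound on snapshot size: dirty-page write rate times the time
# it takes to copy the data off the snapshot. Numbers are illustrative only.
write_mb_per_sec=20        # MB/s of pages being written during the backup
copy_seconds=1800          # 30 minutes to copy the data to NFS/tape/rsync
min_snapshot_mb=$((write_mb_per_sec * copy_seconds))
echo "snapshot volume should hold at least ${min_snapshot_mb} MB"
# prints: snapshot volume should hold at least 36000 MB
```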

While LVM allows for hot backups of MySQL, it still has an impact on the disks. An LVM snapshot backup may not go unnoticed by MySQL users.

Some general guidelines for making life easier with LVM backups follow.

Lighter, longer snapshots

If you’re confident that you have enough space on your snapshot volume, you may take the opportunity to allow for a longer backup time. Why? Because you can then reduce the stress on the file system. Use ionice when copying your data from the snapshot volume:

ionice -c 2 cp -R /mnt/mysql_snapshot /mnt/backup/daily/20100719/

[Update: this only applies with the cfq I/O scheduler; thanks, Vojtech]

Are you running out of space?

Monitor the snapshot’s allocated size. If there’s just one snapshot, do it like this:

lvdisplay | grep Allocated

 Allocated to snapshot  3.63%

Don’t let it reach 100%.
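
A sketch of automating that check; the sample line stands in for live lvdisplay output, and the 90% threshold is an arbitrary example:

```shell
# Extract the "Allocated to snapshot" percentage and compare it against a
# threshold. "sample" replaces a live "lvdisplay | grep Allocated" call.
sample='  Allocated to snapshot  3.63%'
pct=$(echo "$sample" | awk '{print $4}' | tr -d '%')
threshold=90
if awk "BEGIN { exit !($pct >= $threshold) }"; then
  echo "WARNING: snapshot ${pct}% full"
else
  echo "snapshot OK at ${pct}%"
fi
# prints: snapshot OK at 3.63%
```

Run from cron, this gives you an early warning before the snapshot invalidates itself at 100%.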

Avoid running out of space

To make sure you don’t run out of allocated snapshot space, stop all administrative scripts while the backup is running.

  • Are you running your weekly purging of old data? You will be writing a lot of pages, and all will have to fit in the snapshot.
  • Building your reports? You may be creating large temporary tables; make sure these are not on the snapshot volume.
  • Rebuilding your Sphinx fulltext index? Make sure it is not on the snapshot volume, or postpone till after backup.

You will gain not only snapshot space, but also faster backups.

Someone did the job before you

Use mylvmbackup: the MySQL LVM backup script by Lenz Grimmer. Or do it manually: follow this old-yet-relevant post by Peter Zaitsev.

mycheckpoint (rev. 170): improved custom queries; local charting; page/swap I/O monitoring; improved HTML reports
https://shlomi-noach.github.io/blog/mysql/mycheckpoint-rev-170-improved-custom-queries-local-charting-pageswap-io-monitoring-improved-html-reports
Fri, 16 Jul 2010 08:58:40 +0000

Revision 170 of mycheckpoint, a MySQL monitoring solution, has been released. New and updated in this revision:

  • Improved custom queries: lifting of limitations from previous, introductory revision; better HTML presentation
  • Local, inline charting: no rendering of Google Charts, unless explicitly requested. All charts are now rendered locally using JavaScript. No data is now sent over the network.
  • Page/Swap I/O monitoring: now monitoring for page ins and outs, swap ins and outs (Linux only).
  • Improved HTML reports: several improvements on presentation (see sample, more follow).

Improved custom queries

Some limitations, introduced in revision 132, are now lifted. New features are introduced.

  • There is now no limit to the number of custom queries (well, an INT limit).
  • In fact, the data tables adjust themselves to the existing custom queries in the form of auto-deploy: once a new custom query is added or an old one removed, mycheckpoint will add or remove the relevant columns from the data tables.
  • The chart_order column is now utilized: HTML reports which include custom query charts now order those charts according to chart_order values. This makes for nicer reports.
  • The standard HTML brief report (SELECT html FROM sv_report_html_brief) now automatically includes all custom charts. The HTML brief report is the report one usually wants to look at: it provides the latest 24 hours’ metrics for selected values. It has now become a centralized place for all that is interesting in the past 24 hours.
  • Custom queries are now allowed to return NULL, treated as a missing value. This is a bugfix from previous revisions.

Local charting

Motivation for local charting is clear: no one likes having their data sent over the network, and no one likes Google knowing about their DML values.

I’ve been playing around with quite a few charting solutions, and have gone into depth with two of them, adding and rewriting quite a lot of code. Eventually I settled on my very own rendering. Here’s what I’ve seen & tested:

  • dygraphs: a very nice time series charting library. I’ve presented a use case on a previous post.
    • Pros: slick, easy to work with.
    • Cons: uses HTML Canvas for rendering. This is fine on Firefox, Chrome, Safari, you name it. It isn’t fine on IE, which does not support Canvas. There’s ExplorerCanvas, a hack tool which converts canvas to IE’s VML, but it is far from satisfactory: it is sloooow. Very, very slow. It is slow with one chart; but loading 21 charts, as I do in some of mycheckpoint‘s reports, can take long minutes on Internet Explorer.
    • Cons: only provides a time-series chart. No scatter plots.
  • Because flot, jqPlot, etc. also use ExplorerCanvas for IE, they are all unacceptable as well.
  • gRaphael: very slick charts based on Raphael. The original line charts are very basic, and I have invested a lot of time rewriting a great deal (you can find it all here). Raphael uses VML on IE, and SVG for all other browsers.
    • Pros: very slick. Supports various chart types, including line (though not time-series) and scatter.
    • Cons: slooooooooow when instantiating multiple charts. Unbearably slow, both on Firefox and IE. Slow as in minutes of waiting.

In addition, all of the above solutions were quite heavyweight: at about 45KB to start with, plus ExplorerCanvas, jQuery or Raphael as supporting libraries, they became a real burden.

So, I had some time to spare (business is fine, thank you; I was a bit ill, but I’m feeling well now), and was upset about all the time I had invested in the above coding. And I decided to invest even more time, and build my own charts.

Enter openark-charts.



Currently, these line and scatter charts know how to parse a Google Image chart URL (only some features are supported: only those I’m actually using with mycheckpoint). These are not full-blown solutions: they exist to serve mycheckpoint. And they do so nicely, if I may say so. Using Canvas for most browsers, or VML for IE, these very small pieces of code (10K for the line chart, 6K for the scatter chart, minified) load fast, use very little memory, and do their work well.

Granted, neither provides interactive features: this is planned for the future.

Page/swap I/O monitoring

(Linux only) mycheckpoint now reads /proc/vmstat to get the page-ins, page-outs, swap-ins and swap-outs (cumulative since last reboot). I was actually looking at completely different places on the /proc file system to get swap info, and was frustrated with the complexity involved, till I bumped into /proc/vmstat… New tricks every day!
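
A sketch of pulling those counters out of /proc/vmstat. The field names pgpgin/pgpgout/pswpin/pswpout are the standard Linux names for page and swap I/O; a pasted sample stands in for the live file so the sketch runs anywhere (this is an illustration, not mycheckpoint’s actual code):

```shell
# Filter page-in/out and swap-in/out counters (cumulative since boot) from
# /proc/vmstat-style text. In live use, replace the sample with:
#   cat /proc/vmstat
vmstat_sample='nr_free_pages 81046
pgpgin 123456
pgpgout 654321
pswpin 42
pswpout 7'
echo "$vmstat_sample" | awk '$1 ~ /^(pgpgin|pgpgout|pswpin|pswpout)$/ { print $1, $2 }'
```

Since the counters are cumulative, a monitor samples them periodically and charts the deltas.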

Improved HTML reports

This is mostly HTML make-up: some minimal design, some more details thrown into the HTML pages (name of DB, MySQL version, mycheckpoint version), a little more verbosity; all sorts of stuff which had been neglected so far.

Here are some show off examples of the new HTML views: [full report], [brief report], [24/7 report], [custom full report], [custom brief report], [alert pending report].

All HTML views now utilize the new openark-charts, and none renders charts with Google Charts. This means that when you use your HTML view, your data is safe: no data is sent over the net. All charts are rendered using JavaScript, which is loaded and executed locally.

But if you like, there’s a [url] link next to each chart, which leads to an (online) Google chart image. Why? Because neither HTML Canvas nor VML allows for a complete rendering of the charts to an image, so this is a way to retrieve & store a chart’s image. Don’t use it if you see no reason to; it’s just there.

And I even threw in rounded corners (IE users: only as of Windows 7).

Future plans

Work is going on. These are the non-scheduled future tasks I see:

  • Monitoring InnoDB Plugin & XtraDB status.
  • Interactive charts. See my earlier post.
  • A proper man page.
  • Anything else that interests me.

Try it out

Try out mycheckpoint. It’s a different kind of monitoring solution. You will need basic SQL skills, and in return you’ll get a lot of power in your hands.

mycheckpoint is released under the New BSD License.

To not yum or to not apt-get?
https://shlomi-noach.github.io/blog/mysql/to-not-yum-or-to-not-apt-get
Tue, 16 Feb 2010 11:44:25 +0000

I’ve written briefly on this before. I like yum; I love apt-get; I prefer not to use them for MySQL installations. I consider a binary tarball to be the best MySQL installation format (source installations being a different case altogether).

Why?

I use yum and apt-get whenever I can and for almost all needs (sometimes preferring CPAN for Perl installations). But on a MySQL machine, I avoid doing so. The reason is either dependency hell or dependency mismatch.

Package managers are supposed to solve the dependency-hell issue. But package managers rarely have an up-to-date MySQL version.

I’ve had several experiences where a simple yum installation replaced the installed MySQL version. I’ve had customers calling me up when, having installed something with yum, MySQL would not work anymore.

yum install package-which-depends-on-mysql-server will install the MySQL server on your system if it hasn’t been installed with yum. Are you on CentOS 5.0? You’ll get MySQL 5.0.22. Oh, did you already have an RPM installation of MySQL 5.0.81? Sorry, it’s just been downgraded; plus, it won’t work anymore, since the error messages file has changed since then.

Don’t press ‘Y‘ too soon!

Things are slightly better with apt-get: I’ve encountered fewer situations where mysql-server was on the dependency list. Many times it’s just the libmysqlclient package or the mysql-common one.

But wait! Did you install mysql-common? Bonus! You get the elusive /etc/mysql/my.cnf file created, and there goes your server configuration. Future spawns of the MySQL server / clients will read from the wrong configuration file, and will probably fail to load.

Not to mention that neither will help you with multiple-instance installations.

My argument

A sys admin recently argued with me that it was wrong of me to have the entire machine set up with yum, but have MySQL installed from a binary tarball. He argued that it broke the entire setup. I expressed my opinion: on a MySQL-dedicated server, MySQL gets to be prioritized. It’s special; it is the reason for the machine’s existence. I would imagine the same would hold for Apache on an Apache-dedicated machine, for Sendmail on a Sendmail-dedicated machine, etc. As a DBA, I want to have the best control of the MySQL installation: I want to be able to upgrade minor versions quickly (I often find newer versions solve bugs I was concerned with); I want to be able to install multiple instances; and I want to be able to downgrade without having to remove and uninstall the previous version.

I want to have control. World domination aside, that is.

mycheckpoint rev. 76: OS monitoring, auto deploy, brief HTML and 24/7 reports
https://shlomi-noach.github.io/blog/mysql/mycheckpoint-rev-76-os-monitoring-auto-deploy-brief-html-and-247-reports
Tue, 05 Jan 2010 08:55:14 +0000

Revision 76 of mycheckpoint comes with quite a few improvements, including:

  • OS monitoring (CPU, load average, memory)
  • Auto-deploy
  • Improved charting
  • Brief HTML reports
  • 24/7 charts

OS Monitoring

When monitoring the local machine, mycheckpoint now monitors CPU utilization, load average, memory and swap space.

This only applies to the Linux operating system; there is currently no plan to work this out for other operating systems.

Examples:

mysql> SELECT os_cpu_utilization_percent FROM sv_report_chart_sample;

mycheckpoint-chart-cpu-sample
mysql> SELECT ts, os_loadavg FROM mycheckpoint.sv_report_sample;
+---------------------+------------+
| ts                  | os_loadavg |
+---------------------+------------+
| 2009-12-27 11:45:01 |       1.78 |
| 2009-12-27 11:50:01 |       2.48 |
| 2009-12-27 11:55:01 |       2.35 |
...
+---------------------+------------+
mysql> SELECT report FROM mycheckpoint.sv_report_human_sample ORDER BY id DESC LIMIT 1 \G
*************************** 1. row ***************************
report:
Report period: 2009-12-27 13:20:01 to 2009-12-27 13:25:01. Period is 5 minutes (0.08 hours)
Uptime: 100.0% (Up: 334 days, 06:37:28 hours)

OS:
 Load average: 1.67
 CPU utilization: 25.2%
 Memory: 7486.4MB used out of 7985.6484MB (Active: 6685.8906MB)
 Swap: 3835.2MB used out of 8189.3750MB
...

Auto-deploy

mycheckpoint now has a version recognition mechanism. There is no need to call mycheckpoint with the “deploy” argument on first install or after upgrade. mycheckpoint will recognize a change of version and will auto-deploy before moving on to monitoring your system.

It is still possible, though, to use “deploy”, in case you just want to make sure an upgrade takes place without issuing a monitoring action.

Improved charting

Further improvements and bug fixes were made to the Google charts, including the implementation of missing-values charting.

Brief HTML report

In contrast with the full-blown HTML report (see sample), which presents hourly/daily/weekly charts for the many metrics, the new brief report presents only a few hourly-based charts. These include InnoDB performance, DML, OS metrics, and replication status.

To get a brief HTML report, issue:

mysql> SELECT html FROM sv_report_html_brief;


See sample brief HTML report.

24/7 charts

24/7 charts present the various metrics on a 24×7 matrix, which allows for diagnosing usage patterns throughout the day and week. For example, it makes it easier to see how things slow down on Saturday/Sunday, or how load increases at 10:00am every day.

24/7 charts are provided by the sv_report_chart_24_7 view.

mysql> DESC sv_report_chart_24_7;
+---------------------------------------+----------+------+-----+---------+-------+
| Field                                 | Type     | Null | Key | Default | Extra |
+---------------------------------------+----------+------+-----+---------+-------+
| innodb_read_hit_percent               | longblob | YES  |     | NULL    |       |
| innodb_buffer_pool_reads_psec         | longblob | YES  |     | NULL    |       |
| innodb_buffer_pool_pages_flushed_psec | longblob | YES  |     | NULL    |       |
| innodb_os_log_written_psec            | longblob | YES  |     | NULL    |       |
| innodb_row_lock_waits_psec            | longblob | YES  |     | NULL    |       |
| mega_bytes_sent_psec                  | longblob | YES  |     | NULL    |       |
| mega_bytes_received_psec              | longblob | YES  |     | NULL    |       |
| key_read_hit_percent                  | longblob | YES  |     | NULL    |       |
| key_write_hit_percent                 | longblob | YES  |     | NULL    |       |
| com_select_psec                       | longblob | YES  |     | NULL    |       |
| com_insert_psec                       | longblob | YES  |     | NULL    |       |
| com_delete_psec                       | longblob | YES  |     | NULL    |       |
| com_update_psec                       | longblob | YES  |     | NULL    |       |
| com_replace_psec                      | longblob | YES  |     | NULL    |       |
| com_set_option_percent                | longblob | YES  |     | NULL    |       |
| com_commit_percent                    | longblob | YES  |     | NULL    |       |
| slow_queries_percent                  | longblob | YES  |     | NULL    |       |
| select_scan_psec                      | longblob | YES  |     | NULL    |       |
| select_full_join_psec                 | longblob | YES  |     | NULL    |       |
| select_range_psec                     | longblob | YES  |     | NULL    |       |
| table_locks_waited_psec               | longblob | YES  |     | NULL    |       |
| opened_tables_psec                    | longblob | YES  |     | NULL    |       |
| created_tmp_tables_psec               | longblob | YES  |     | NULL    |       |
| created_tmp_disk_tables_psec          | longblob | YES  |     | NULL    |       |
| connections_psec                      | longblob | YES  |     | NULL    |       |
| aborted_connects_psec                 | longblob | YES  |     | NULL    |       |
| threads_created_psec                  | longblob | YES  |     | NULL    |       |
| seconds_behind_master                 | longblob | YES  |     | NULL    |       |
| os_loadavg                            | longblob | YES  |     | NULL    |       |
| os_cpu_utilization_percent            | longblob | YES  |     | NULL    |       |
| os_mem_used_mb                        | longblob | YES  |     | NULL    |       |
| os_mem_active_mb                      | longblob | YES  |     | NULL    |       |
| os_swap_used_mb                       | longblob | YES  |     | NULL    |       |
+---------------------------------------+----------+------+-----+---------+-------+

Example:

mysql> SELECT com_select_psec, innodb_buffer_pool_pages_flushed_psec FROM mycheckpoint.sv_report_chart_24_7 \G
[chart image: 24/7 sample]

Trying mycheckpoint

Future plans

I haven’t got any major immediate issues; I’m planning user customization of charts and HTML reports, and considering thresholds and alerting for the future.

Manually installing multiple MySQL instances on Linux: HOWTO https://shlomi-noach.github.io/blog/mysql/manually-installing-multiple-mysql-instances-on-linux-howto https://shlomi-noach.github.io/blog/mysql/manually-installing-multiple-mysql-instances-on-linux-howto#comments Thu, 19 Feb 2009 06:02:00 +0000 https://shlomi-noach.github.io/blog/?p=531 Installing a single MySQL instance on a Linux machine is a very simple operation. It may be as simple as:

apt-get install mysql-server

But you cannot use this method to install another MySQL instance. Moreover, if you try to manually install another instance, you may find that some collisions occur.

For example, when trying to install two 5.0 servers, apt-get or yum will just tell you “package is already installed”.

When trying to install a new 5.1 server along with 5.0, an implicit upgrade is performed.

But even if we try forcing the installation, there are collisions:

  • A dpkg or rpm installation will place my.cnf under /etc. Two installations will override one another. With RPM you may get a .rpmsave backup file, but that doesn’t help much.
  • The daemon file: /etc/init.d/mysql(d) is overwritten.
  • The default data directory is used for both installations: /var/lib/mysql
  • The binaries are overwritten
  • Both installations will use port 3306.
  • In both installations, the same socket file (e.g. /var/run/mysql/mysql.sock) is used.
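The collision points above can be probed from the shell before attempting a second installation. A sketch, assuming the default paths from the list (they may differ on your distribution); `check_collisions` is a hypothetical helper name:

```shell
# Sketch: probe the default shared resources listed above. An existing
# path means a second instance would collide there. Paths are the
# distribution defaults from the list and may differ on your system.
check_collisions() {
  for path in "$@"; do
    if [ -e "$path" ]; then
      echo "COLLISION: $path"
    else
      echo "free: $path"
    fi
  done
}

check_collisions /etc/my.cnf /etc/init.d/mysql /var/lib/mysql /var/run/mysqld/mysqld.sock
```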

Interestingly, on Windows, multiple MySQL installations are far easier:

  • Binaries are under Program Files\MySQL\MySQL X.X. With two installations, you specify different directories.
  • Data files are by default directly under the installation paths (MySQL 5.0) or under “Documents And Settings…” (MySQL 5.1), with no collisions.
  • The my.ini files are located directly under the installation paths.
  • The installer asks you for a service name, and notifies you if that name is already in use.
  • The installer lets you know if port 3306 is already taken, and allows you to specify another one.
  • Of course, there’s no unix socket file.

I usually install MySQL on Linux using the binary tarball. When there’s only one instance expected, I go with the standards: my.cnf is in /etc, mysqld is under /etc/init.d, etc. (no pun intended)

Steps for multiple installation on Linux

When more than one installation is expected, here’s a safe way to ensure no collisions occur. We will assume a 5.0 and 5.1 installation (say we want to upgrade):

Install the MySQL binaries under /usr/local

Following the INSTALL document, we create symbolic links to the full installation paths, using short version-based names:

ln -s /usr/local/your-mysql-5.0-full-installation-path /usr/local/mysql50
ln -s /usr/local/your-mysql-5.1-full-installation-path /usr/local/mysql51

Do not put my.cnf under /etc

Instead, put them directly in the installation path:

touch /usr/local/mysql50/my.cnf
touch /usr/local/mysql51/my.cnf

Setup different port numbers in the my.cnf files

For example, in /usr/local/mysql50/my.cnf, use port 3350:

[mysql]
port=3350

[mysqld]
port=3350

Choose another port (e.g. 3351) for the 5.1 installation, then have it written as above in the 5.1 my.cnf file.

Choose distinct socket files

For example, in /usr/local/mysql50/my.cnf, add:

[mysql]
port=3350
socket=/tmp/mysql50.sock

[mysqld]
port=3350
socket=/tmp/mysql50.sock

Choose another socket and set it up in the second my.cnf file. You may also choose to put the socket files under the data paths or installation paths.
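The per-instance port and socket settings above lend themselves to a small shell loop. A sketch that writes both minimal my.cnf files; it uses a temporary directory for safety, where in practice BASE would be /usr/local:

```shell
# Sketch: generate the two minimal my.cnf files described above.
# BASE is a temp directory here for safety; in practice it would be /usr/local.
BASE=$(mktemp -d)
mkdir -p "$BASE/mysql50" "$BASE/mysql51"

for v in 50 51; do
  port=$((3300 + v))    # 3350 for the 5.0 instance, 3351 for 5.1
  cat > "$BASE/mysql$v/my.cnf" <<EOF
[mysql]
port=$port
socket=/tmp/mysql$v.sock

[mysqld]
port=$port
socket=/tmp/mysql$v.sock
EOF
done

cat "$BASE/mysql50/my.cnf"
```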

Choose distinct data paths

Either do not specify them at all, in which case they will reside under the installation path, or, if you want to enjoy another partition, use:

[mysql]
port=3350
socket=/tmp/mysql50.sock

[mysqld]
port=3350
socket=/tmp/mysql50.sock
datadir=/my_raid_path/mysql50/

Create distinct daemons

Manually copy support-files/mysql.server to /etc/init.d under distinct names. For example:

cp /usr/local/mysql50/support-files/mysql.server /etc/init.d/mysqld50
cp /usr/local/mysql51/support-files/mysql.server /etc/init.d/mysqld51
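The copied scripts still point at the default paths. The tarball's mysql.server keeps basedir/datadir variables near its top; here is a sketch of pointing a copy at its own installation. The demo edits a stand-in temp file rather than the real /etc/init.d script, and the exact variable layout is an assumption that may vary between MySQL versions:

```shell
# Sketch: point a copied init script at its own installation.
# mysql.server defines basedir=/datadir= variables near its top; we edit
# a stand-in temp file here instead of the real /etc/init.d/mysqld50.
script=$(mktemp)
printf 'basedir=\ndatadir=\n' > "$script"   # stand-in for the copied mysql.server

sed -i 's|^basedir=.*|basedir=/usr/local/mysql50|' "$script"
sed -i 's|^datadir=.*|datadir=/my_raid_path/mysql50/|' "$script"

grep '^basedir=' "$script"
```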

Other settings

You may wish to set up a soft link for the client binaries, for example:

ln -s /usr/local/mysql50/bin/mysql /usr/bin/mysql50

chkconfig (Red Hat and derivatives) can be used to register the daemon as a service:

chkconfig --add mysqld50

Conclusion

I would prefer MySQL to come bundled in a self-contained directory. The tarball is almost that, except that it expects the socket file to be under /tmp and uses port 3306 by default. I would further like to have a dpkg-reconfigure-style script to set up the above.

Till then, it’s manual configuration.
