Trick: recovering from “no space left on device” issues with MySQL

Just read Ronald Bradford’s post on an unnecessary 3am (emergency) call. I sympathize! Running out of disk space makes for some weird MySQL behaviour, and in fact whenever I encounter weird behaviour I verify disk space.

But here’s a trick I’ve been using for years to avoid such cases, and to recover quickly when they do happen. It has helped me in events such as running out of disk space during an ALTER TABLE, or when binary logs piled up because I was holding off purging them while a slave was known to be under maintenance.

Ronald suggested it: just put a dummy file in your @@datadir! I like putting a 1GB dummy file: I typically copy+paste a 1GB binary log file and call it “placeholder.tmp”. Then I forget all about it. My disk space should not run out; if it does, it’s cause for emergency. I have monitoring, but sometimes I’m hoping to run an operation at 97%-99% utilization.
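Creating the placeholder doesn’t have to involve copying a binary log; anything that actually reserves the space will do. A minimal sketch, assuming a Linux shell and a /var/lib/mysql datadir (verify yours with SELECT @@datadir):

    # Reserve 1GB in the datadir (path is an assumption; check SELECT @@datadir)
    dd if=/dev/zero of=/var/lib/mysql/placeholder.tmp bs=1M count=1024

    # Or, on filesystems that support it, reserve the space instantly:
    fallocate -l 1G /var/lib/mysql/placeholder.tmp

Either way, df should now report 1GB less free space, which is exactly the point.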

If I do run out of disk space: well, MySQL won’t let me connect; won’t complete an important statement; won’t sync transactions to disk. A bad situation. Not a problem in our case: we can magically free up 1GB worth of space in the @@datadir, buying us enough time (maybe just minutes) to gracefully complete some necessary operations: connect, KILL, shutdown, abort, etc.
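To make the drill concrete, a sketch under the same assumed paths (the connection id is hypothetical, and the exact commands depend on the situation):

    # Reclaim the reserved 1GB; df drops immediately
    rm /var/lib/mysql/placeholder.tmp

    # With room to breathe, connect and wind things down gracefully
    mysql -e "SHOW FULL PROCESSLIST"
    mysql -e "KILL 12345"      # 12345 is a hypothetical connection id
    mysqladmin shutdown        # if a clean shutdown is what's needed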

5 thoughts on “Trick: recovering from “no space left on device” issues with MySQL”

  1. Disk full problems vary depending on what type of data it is, and where it is.

    Having all data on the root (/) partition is bad. I have had two recent experiences where deleting files would not reclaim space (i.e. df stayed at 100% even hours after); the only option was a system restart.

    As you say, having a dummy file in the @@datadir is a tip I have used in the past.

    In my specific post this was a binary log space problem, where the binary logs were on a dedicated partition. This could also benefit from the dummy file solution.

    MySQL acts very differently depending on whether the @@tmpdir, @@datadir or @@log-bin partition is the one that fills up.

  2. I do not agree with this dummy file hack. The best way is to write a shell script, set it up in crontab, and have it send an email alert whenever free disk space is low (assuming you do not have Nagios or any other monitoring tool in place). A minimal sketch of such a check appears after this thread.

  3. @Shantanu,
    Quoting the above: “I have monitoring, but sometimes I’m hoping to run an operation at 97%-99% utilization.”

    Of course you must have monitoring on this, and you might want to be notified when disk usage goes over 95%. But what if I want to push the limits for a specific operation? OK, set it to 99%. Cool. This may still leave me with 6GB I could utilize. I may want to push the limits, and history shows that sometimes you can just get by at a close margin.

  4. I prefer not to “push the limits” like this on a production server. I make sure that there is always “more than enough” free disk space available. But this is a good tip and I will use it on slave / development machines.
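For completeness, here is a minimal sketch of the cron-based check suggested in comment 2; the 90% threshold, partition path, script name and e-mail address are all assumptions to adapt:

    #!/bin/sh
    # Alert when the MySQL partition crosses a usage threshold (values are assumptions)
    THRESHOLD=90
    PARTITION=/var/lib/mysql
    USAGE=$(df -P "$PARTITION" | awk 'NR==2 {print $5}' | tr -d '%')
    if [ "$USAGE" -ge "$THRESHOLD" ]; then
        echo "Disk usage on $PARTITION is at ${USAGE}%" \
          | mail -s "Disk space alert on $(hostname)" dba@example.com
    fi

    # Example crontab entry, checking every 10 minutes:
    # */10 * * * * /usr/local/bin/check_disk_space.sh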
