Hack – code.openark.org
Blog by Shlomi Noach
http://shlomi-noach.github.io/blog/

Making UUID() and RAND() replication safe
https://shlomi-noach.github.io/blog/mysql/making-uuid-and-rand-replication-safe
Mon, 20 Oct 2014

MySQL's UUID() and RAND() functions both produce nondeterministic (pseudo-random) results. UUID()'s result is moreover bound to the host on which it executes. For this reason, both are unsafe to replicate with STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));

master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The warning we got on the insert directly relates to the following inconsistency on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
+------+--------------------------------------+

The data on the slave is clearly inconsistent with the master’s. The slave, replicating via STATEMENT binlog format, re-executes the INSERT command and gets a different UUID value.

External

One solution to the above is to generate the UUID value from your application. By the time MySQL gets the INSERT statement, the UUID value is a constant string, as far as MySQL is concerned.
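For illustration only, this is roughly what the statement looks like by the time it reaches MySQL when the application generates the UUID itself (the literal value below is made up):

INSERT INTO test.uuid_test VALUES (3, 'f0e1d2c3-1234-11e4-b284-3c970ea31ea8');

Being a constant, it replicates safely under any binlog format.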

Internal

However, there's a way to do it from within MySQL, by decoupling the UUID() function from the INSERT statement. It uses a session variable. Consider:

master> set @safe_uuid := UUID();
Query OK, 0 rows affected (0.00 sec)

master> insert into test.uuid_test values (2, @safe_uuid);
Query OK, 1 row affected (0.02 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

And on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
|    2 | 29c51fb9-56ad-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The reason why this succeeds is that MySQL stores session variable values that are being used by DML queries in the binary log. It just so happened that @safe_uuid was assigned the UUID() value, but it could just as well have been assigned a constant or other computation. MySQL stored the resulting value into the binary log, where it is forced upon the slave. Check out this binary log snippet:

# at 14251
#141018 12:57:35 server id 1  end_log_pos 14319         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
SET @@session.sql_auto_is_null=0/*!*/;
BEGIN
/*!*/;
# at 14319
#141018 12:57:35 server id 1  end_log_pos 14397         User_var
SET @`safe_uuid`:=_utf8 0x32396335316662392D353661642D313165342D623238342D336339373065613331656138 COLLATE `utf8_general_ci`/*!*/;
# at 14397
#141018 12:57:35 server id 1  end_log_pos 14509         Query   thread_id=2     exec_time=0     error_code=0
SET TIMESTAMP=1413626255/*!*/;
insert into test.uuid_test values (2, @safe_uuid)
/*!*/;
# at 14509
#141018 12:57:35 server id 1  end_log_pos 14536         Xid = 145
COMMIT/*!*/;

The same can be applied to RAND(). The funny thing about RAND() is that it is already taken care of by the binary log via SET @@RAND_SEED1, SET @@RAND_SEED2 statements (i.e. it works), even though the documentation clearly states it is unsafe.
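If you'd rather not rely on that, the same session-variable decoupling shown above for UUID() works for RAND() as well; a minimal sketch (the table and variable names are illustrative, not from the original example):

master> set @safe_rand := RAND();
master> insert into test.rand_test values (1, @safe_rand);

The assigned value is written to the binary log as a User_var event, just like @safe_uuid above.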

With Row Based Replication (RBR) the problem never arises in the first place since the binlog contains the values of the new/updated rows.

Cheating mysqlsandbox to install MariaDB 10.0
https://shlomi-noach.github.io/blog/mysql/cheating-mysqlsandbox-to-install-mariadb-10-0
Sun, 17 Mar 2013

mysqlsandbox is version-aware. The new 5.6 version, for example, requires special care because of the system InnoDB tables or otherwise modified system tables.

At this moment, it will refuse to install MariaDB 10.0 (alpha):

bash$ make_sandbox /tmp/mariadb-10.0.1-linux-x86_64.tar.gz 
unpacking /tmp/mariadb-10.0.1-linux-x86_64.tar.gz
unsupported version 10.0

This is perfectly legitimate, and I have no quarrel with this fact. However, I did want to set up MariaDB 10.0 as a sandbox.

As it turns out, mysqlsandbox relies on MySQL package naming conventions to detect the version: the fact that a tgz distribution is named mariadb-10.0.1-linux-x86_64.tar.gz and extracts into mariadb-10.0.1-linux-x86_64 tells mysqlsandbox that this is version 10.0.1. Easy enough to cheat, then: pick a version that mysqlsandbox will work with, and which is compatible sandbox-wise with your own, and do the renames. For that matter, I picked 5.5, and actually made it 5.5.100:

bash:/tmp$ tar xzf mariadb-10.0.1-linux-x86_64.tar.gz
bash:/tmp$ mv mariadb-10.0.1-linux-x86_64 mariadb-5.5.100-linux-x86_64
bash:/tmp$ tar czf /tmp/mariadb-5.5.100-linux-x86_64.tar.gz ./mariadb-5.5.100-linux-x86_64
bash:/tmp$ make_sandbox mariadb-5.5.100-linux-x86_64.tar.gz 

unpacking /tmp/mariadb-5.5.100-linux-x86_64.tar.gz
Executing low_level_make_sandbox --basedir=/tmp/5.5.100 \
        --sandbox_directory=msb_5_5_100 \
        --install_version=5.5 \
        --sandbox_port=55100 \
        --no_ver_after_name \
        --my_clause=log-error=msandbox.err
>>/tmp
    The MySQL Sandbox,  version 3.0.29
    (C) 2006-2013 Giuseppe Maxia
installing with the following parameters:
upper_directory                = /home/shlomi/sandboxes
sandbox_directory              = msb_5_5_100
sandbox_port                   = 55100
check_port                     = 
no_check_port                  = 
datadir_from                   = script
install_version                = 5.5
basedir                        = /tmp/5.5.100
tmpdir                         = 
my_file                        = 
operating_system_user          = shlomi
db_user                        = msandbox
remote_access                  = 127.%
ro_user                        = msandbox_ro
rw_user                        = msandbox_rw
repl_user                      = rsandbox
db_password                    = msandbox
repl_password                  = rsandbox
my_clause                      = log-error=msandbox.err
master                         = 
slaveof                        = 
high_performance               = 
prompt_prefix                  = mysql
prompt_body                    =  [\h] {\u} (\d) > 
force                          = 
no_ver_after_name              = 1
verbose                        = 
load_grants                    = 1
no_load_grants                 = 
no_run                         = 
no_show                        = 
do you agree? ([Y],n)

It worked well for me; perhaps I'm missing something, so watch your own setup.

Looking for a hack: share data between MySQL sessions
https://shlomi-noach.github.io/blog/mysql/looking-for-a-hack-share-data-between-mysql-sessions
Mon, 28 Jan 2013

I'm looking for a way to share data between two MySQL connections/sessions. Obviously tables are the trivial answer, but for reasons of security (possibly insufficient privileges) I wish to avoid that.

The type of data to be passed can vary. Ideally I would be able to pass multiple pieces of information (dates, texts, integers, etc.) around. If impossible, I would do with texts only, and if impossible yet, I could do with a single text (but reasonably long).

There is a way to do so: by writing to the file system (SELECT INTO OUTFILE + LOAD_FILE()). However, I wish to avoid it, since writing to files from within MySQL requires the creation of a new file each time: there is no overwrite and no purging, so this litters the file system.
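For reference, the file-system approach looks roughly like this (the path is illustrative); note that SELECT … INTO OUTFILE refuses to overwrite an existing file, hence the littering:

-- session A writes:
SELECT 'some shared payload' INTO OUTFILE '/tmp/shared_between_sessions_0001.txt';
-- session B reads:
SELECT LOAD_FILE('/tmp/shared_between_sessions_0001.txt') AS shared_payload;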

So: any other tricks? Is there some way to pass data via GET_LOCK()/RELEASE_LOCK() (none that I can see other than Morse code)?

Is there some global variable that is unused, can be changed dynamically, and has enough space? (I couldn’t find one)

I appreciate any input.

SQL: selecting top N records per group, another solution
https://shlomi-noach.github.io/blog/mysql/sql-selecting-top-n-records-per-group-another-solution
Tue, 21 Aug 2012

A while back I presented SQL: selecting top N records per group, a "give me the top 5 countries in each continent" type of query, which used an external numbers table and a lot of tedious casting.

Here’s another solution I came up with (*). Still using GROUP_CONCAT (how else?), but no external table and no casting. The query outputs the largest 5 countries (by surface area) per continent.

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT 
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT 
          SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group
        FROM
          world.Country
        GROUP BY
          Continent
      ) s_top_codes_per_group
  ) s_top_codes
WHERE
  FIND_IN_SET(Code, top_codes)
ORDER BY
  Continent,
  SurfaceArea DESC
;

+---------------+----------------------------------------------+-------------+------------+
| Continent     | Name                                         | SurfaceArea | Population |
+---------------+----------------------------------------------+-------------+------------+
| Asia          | China                                        |  9572900.00 | 1277558000 |
| Asia          | India                                        |  3287263.00 | 1013662000 |
| Asia          | Kazakstan                                    |  2724900.00 |   16223000 |
| Asia          | Saudi Arabia                                 |  2149690.00 |   21607000 |
| Asia          | Indonesia                                    |  1904569.00 |  212107000 |
| Europe        | Russian Federation                           | 17075400.00 |  146934000 |
| Europe        | Ukraine                                      |   603700.00 |   50456000 |
| Europe        | France                                       |   551500.00 |   59225700 |
| Europe        | Spain                                        |   505992.00 |   39441700 |
| Europe        | Sweden                                       |   449964.00 |    8861400 |
| North America | Canada                                       |  9970610.00 |   31147000 |
| North America | United States                                |  9363520.00 |  278357000 |
| North America | Greenland                                    |  2166090.00 |      56000 |
| North America | Mexico                                       |  1958201.00 |   98881000 |
| North America | Nicaragua                                    |   130000.00 |    5074000 |
| Africa        | Sudan                                        |  2505813.00 |   29490000 |
| Africa        | Algeria                                      |  2381741.00 |   31471000 |
| Africa        | Congo, The Democratic Republic of the        |  2344858.00 |   51654000 |
| Africa        | Libyan Arab Jamahiriya                       |  1759540.00 |    5605000 |
| Africa        | Chad                                         |  1284000.00 |    7651000 |
| Oceania       | Australia                                    |  7741220.00 |   18886000 |
| Oceania       | Papua New Guinea                             |   462840.00 |    4807000 |
| Oceania       | New Zealand                                  |   270534.00 |    3862000 |
| Oceania       | Solomon Islands                              |    28896.00 |     444000 |
| Oceania       | New Caledonia                                |    18575.00 |     214000 |
| Antarctica    | Antarctica                                   | 13120000.00 |          0 |
| Antarctica    | French Southern territories                  |     7780.00 |          0 |
| Antarctica    | South Georgia and the South Sandwich Islands |     3903.00 |          0 |
| Antarctica    | Heard Island and McDonald Islands            |      359.00 |          0 |
| Antarctica    | Bouvet Island                                |       59.00 |          0 |
| South America | Brazil                                       |  8547403.00 |  170115000 |
| South America | Argentina                                    |  2780400.00 |   37032000 |
| South America | Peru                                         |  1285216.00 |   25662000 |
| South America | Colombia                                     |  1138914.00 |   42321000 |
| South America | Bolivia                                      |  1098581.00 |    8329000 |
+---------------+----------------------------------------------+-------------+------------+

The conditions by which we nominate our selected rows are the ORDER BY SurfaceArea DESC ordering and the count of 5 in SUBSTRING_INDEX: hence, the 5 largest countries per continent.

What’s going on here?

So the inner s_top_codes_per_group query produces the codes of the largest countries per continent:

+---------------------+
| top_codes_per_group |
+---------------------+
| CHN,IND,KAZ,SAU,IDN |
| RUS,UKR,FRA,ESP,SWE |
| CAN,USA,GRL,MEX,NIC |
| SDN,DZA,COD,LBY,TCD |
| AUS,PNG,NZL,SLB,NCL |
| ATA,ATF,SGS,HMD,BVT |
| BRA,ARG,PER,COL,BOL |
+---------------------+

The wrapping s_top_codes query concatenates all the above to one long text:

+---------------------------------------------------------------------------------------------------------------------------------------------+
| top_codes                                                                                                                                   |
+---------------------------------------------------------------------------------------------------------------------------------------------+
| CHN,IND,KAZ,SAU,IDN,RUS,UKR,FRA,ESP,SWE,CAN,USA,GRL,MEX,NIC,SDN,DZA,COD,LBY,TCD,AUS,PNG,NZL,SLB,NCL,ATA,ATF,SGS,HMD,BVT,BRA,ARG,PER,COL,BOL |
+---------------------------------------------------------------------------------------------------------------------------------------------+

And the final query simply demands that Code be found within this string, by calling FIND_IN_SET(Code, top_codes).

Notes

  • This solution works for PRIMARY KEYs or otherwise UNIQUE KEYs of all sorts (a CHAR(3) in our example, but same for integers etc.)
  • You still need a sufficiently large group_concat_max_len (see this post): the value must be large enough to fit the very long text you may be expecting in s_top_codes. A sample setting appears after these notes.
  • Performance-wise there are full scans here, as well as string searching.
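For example, one might bump the session limit before running the query (1MB here is an arbitrary, generous value):

SET SESSION group_concat_max_len = 1024 * 1024;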

* UPDATE

I should pay closer attention. This comment had it 5 years ago.

Getting rid of huge ibdata file, no dump required, part II
https://shlomi-noach.github.io/blog/mysql/getting-rid-of-huge-ibdata-file-no-dump-required-part-ii
Wed, 30 May 2012

This post continues Getting rid of huge ibdata file, no dump required, part I, where I describe a way of converting your single-tablespace InnoDB database into a file-per-table one, without the pain of exporting and importing everything at once.

In the previous part we put aside the issue of foreign keys. We address that issue now.

What if my InnoDB tables have foreign keys?

MyISAM does not support them, so you can’t just ALTER an InnoDB table to MyISAM and back into InnoDB, and expect everything to work.

Alas, this calls for additional steps (i.e. additional ALTER commands). However, these still fall well under the concept of “do it one table at a time, then take time to recover your breath and replication lag”.

Save, drop and restore your Foreign Keys setup

You can use common_schema‘s  sql_foreign_keys to get the full listing and create definition of your foreign keys. For example, assume we use the sakila database:

SELECT create_statement FROM common_schema.sql_foreign_keys WHERE TABLE_SCHEMA='sakila' INTO OUTFILE '/somewhere/safe/create_foreign_keys.sql'

(replace TABLE_SCHEMA=’sakila’ with whatever you want).

A sample output would be something like this (note: no semicolon on end of line):

ALTER TABLE `sakila`.`address` ADD CONSTRAINT `fk_address_city` FOREIGN KEY (`city_id`) REFERENCES `sakila`.`city` (`city_id`) ON DELETE RESTRICT ON UPDATE CASCADE
ALTER TABLE `sakila`.`city` ADD CONSTRAINT `fk_city_country` FOREIGN KEY (`country_id`) REFERENCES `sakila`.`country` (`country_id`) ON DELETE RESTRICT ON UPDATE CASCADE
ALTER TABLE `sakila`.`customer` ADD CONSTRAINT `fk_customer_address` FOREIGN KEY (`address_id`) REFERENCES `sakila`.`address` (`address_id`) ON DELETE RESTRICT ON UPDATE CASCADE
ALTER TABLE `sakila`.`customer` ADD CONSTRAINT `fk_customer_store` FOREIGN KEY (`store_id`) REFERENCES `sakila`.`store` (`store_id`) ON DELETE RESTRICT ON UPDATE CASCADE
ALTER TABLE `sakila`.`film` ADD CONSTRAINT `fk_film_language` FOREIGN KEY (`language_id`) REFERENCES `sakila`.`language` (`language_id`) ON DELETE RESTRICT ON UPDATE CASCADE
ALTER TABLE `sakila`.`film` ADD CONSTRAINT `fk_film_language_original` FOREIGN KEY (`original_language_id`) REFERENCES `sakila`.`language` (`language_id`) ON DELETE RESTRICT ON UPDATE CASCADE
...

Once the above is in a safe place, you will want to DROP all of your foreign keys. Again, using common_schema:

SELECT drop_statement FROM common_schema.sql_foreign_keys WHERE TABLE_SCHEMA='sakila';
+-----------------------------------------------------------------------------------+
| drop_statement                                                                    |
+-----------------------------------------------------------------------------------+
| ALTER TABLE `sakila`.`address` DROP FOREIGN KEY `fk_address_city`                 |
| ALTER TABLE `sakila`.`city` DROP FOREIGN KEY `fk_city_country`                    |
| ALTER TABLE `sakila`.`customer` DROP FOREIGN KEY `fk_customer_address`            |
| ALTER TABLE `sakila`.`customer` DROP FOREIGN KEY `fk_customer_store`              |
| ALTER TABLE `sakila`.`film` DROP FOREIGN KEY `fk_film_language`                   |
| ALTER TABLE `sakila`.`film` DROP FOREIGN KEY `fk_film_language_original`          |
| ...                                                                               |
+-----------------------------------------------------------------------------------+

You don’t want to issue all these at once: do them one at a time, and wait for your slave to catch up.

Once this is done, you can move on to the steps described in Part I of this post: converting tables to MyISAM, shutting down, removing InnoDB files, then converting back to InnoDB.

And then, taking a breath again, you must re-add the foreign keys. Use the ADD CONSTRAINT commands you saved earlier on. Again, one at a time, waiting for the slave to catch up.

To reiterate, for each table you would take the following steps:

  1. Make sure the FK definition is safely stored somewhere
  2. STOP SLAVE
  3. Drop all table’s foreign keys: ALTER TABLE … DROP FOREIGN KEY …, DROP FOREIGN KEY …
  4. START SLAVE
  5. Wait for slave to catch up
  6. STOP SLAVE
  7. ALTER TABLE … ENGINE=MyISAM (*)
  8. START SLAVE
  9. Wait for slave to catch up

(*) Altering to MyISAM drops FK constraints, so the above could actually be done in one step. I’m cautious and illustrate in two.
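To make step 3 concrete, dropping a table's foreign keys in a single statement looks like this, using the sakila constraints listed earlier:

ALTER TABLE `sakila`.`customer`
  DROP FOREIGN KEY `fk_customer_address`,
  DROP FOREIGN KEY `fk_customer_store`;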

Once all tables are altered, and InnoDB tablespace is removed, restoration is as follows: for each table,

  1. STOP SLAVE
  2. ALTER TABLE … ENGINE=InnoDB [create options]
  3. START SLAVE
  4. Wait for slave to catch up
  5. STOP SLAVE
  6. ALTER TABLE … ADD CONSTRAINT …, ADD CONSTRAINT …(+)
  7. START SLAVE
  8. Wait for slave to catch up

(+) Alas, you can’t convert to InnoDB and add constraints at the same time…
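To make step 6 concrete, here is the matching restoration for the same table, again using the definitions saved earlier:

ALTER TABLE `sakila`.`customer`
  ADD CONSTRAINT `fk_customer_address` FOREIGN KEY (`address_id`) REFERENCES `sakila`.`address` (`address_id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  ADD CONSTRAINT `fk_customer_store` FOREIGN KEY (`store_id`) REFERENCES `sakila`.`store` (`store_id`) ON DELETE RESTRICT ON UPDATE CASCADE;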

This is not entirely safe

A MyISAM slave to an InnoDB master with foreign keys is a tricky business. It really depends on the type of foreign keys you have and the use you make of them. See Impact of foreign keys absence on replicating slaves.

Getting rid of huge ibdata file, no dump required
https://shlomi-noach.github.io/blog/mysql/getting-rid-of-huge-ibdata-file-no-dump-required
Tue, 22 May 2012

You have been told (guilty as charged) that the only way to get rid of the huge InnoDB tablespace file (commonly named ibdata1), when moving to innodb_file_per_table, is to do a logical dump of your data, completely erase everything, then import the dump.

To quickly reiterate, you can only delete the ibdata1 file when no InnoDB tables exist. Delete this file with an existing InnoDB table, even a table in its own tablespace, and nothing ever works anymore.

The problem with the dump-based solution

The impact of doing a logical dump is often overwhelming. Well, the dump may be tolerable, but the restore is much longer. The real pain is that you can’t do this one table at a time: you have to destroy everything before dropping the ibdata1 file; you then have to import everything.

Perhaps the most common scenario is that we do the changes on a slave, so as not to completely shut down our database. This is nice; no one is aware of the shutdown process. However, Houston, we have a problem: we need to make sure we can keep the binary logs on the master for the duration of the entire process.

A semi-solution for binary logs

You may get by by keeping only the I/O thread (IO_THREAD) running on the slave while the dump is taken (the SQL thread is better turned off). If you're careful, you could do the same after restarting the database: you should still be able to acquire relay logs. With row based replication becoming more common, the problem of binary logs disk space returns: the log entries are just so much larger!
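In terms of commands, that means something like the following on the slave:

-- keep pulling relay logs from the master, but do not apply them yet
STOP SLAVE SQL_THREAD;
START SLAVE IO_THREAD;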

Either way, the process can take days, at the end of which your slave is up, but lags days behind.

Wishful thought: do it one table at a time

If we could do it one table at a time, and assuming our dataset is fairly split among several tables (i.e. not all of our 500GB of data is in one huge table), life would be easier: we could work on a single table, resume replication, let the slave catch up, then do the same for the next table.

How? Didn’t we just say one can only drop the ibdata1 file when no InnoDB tables exist?

Solution: do it one table at a time

I’m going to illustrate what seems like a longer procedure. I will later show why it is not, in fact, longer.

The idea is to first convert all your tables to MyISAM (Yay! A use for MyISAM!). That is, convert your tables one table at a time, using normal ALTER TABLE t ENGINE=MyISAM.

Please let go of the foreign keys issue right now. I will address it later, there’s a lot to be addressed.

So, on a slave (a sketch of a single iteration follows the list):

  1. STOP SLAVE
  2. One ALTER TABLE … ENGINE=MyISAM
  3. START SLAVE again
  4. Wait for the slave to catch up
  5. GOTO 1
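A sketch of a single iteration (the table name is illustrative):

STOP SLAVE;
ALTER TABLE sakila.rental ENGINE=MyISAM;
START SLAVE;
-- watch Seconds_Behind_Master in SHOW SLAVE STATUS until the slave catches up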

What do we end up with? A MyISAM only database. What do we do with it? Why, convert it back to InnoDB, of course!

But, before that, we:

  1. Shut MySQL down
  2. Delete ibdata1 file, ib_logfile[01] (i.e. delete all InnoDB files)
  3. Start MySQL

A new ibdata1 file, and new transaction log files will be created. Note: the new ibdata1 file is small. Mission almost accomplished.

We then:

  1. STOP SLAVE
  2. Do one ALTER TABLE … ENGINE=InnoDB [ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8 …]
  3. START SLAVE again
  4. Wait for the slave to catch up
  5. GOTO 1

What do we end up with? An InnoDB only database, with true file per table, and a small ibdata1 file. Space recovered!

The advantage of this method

The thing is, we resume replication after each table alteration. This means breaking the lag period into many smaller periods. While the total runtime does not reduce, we do reduce the maximum lag time. And this makes for easier recovery: no need to store multitudes of binary logs!

So what about the foreign keys?

Phew. Continued next post.

Auto caching INFORMATION_SCHEMA tables: seeking input
https://shlomi-noach.github.io/blog/mysql/auto-caching-information_schema-tables-seeking-input
Thu, 08 Mar 2012

The short version

I have it all working. It’s kind of magic. But there are issues, and I’m not sure it should even exist, and am looking for input.

The long version

In Auto caching tables I presented a hack which allows getting cached or fresh results via simple SELECT queries.

The drive for the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  1. Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database. Enough to crash your database in some cases.
  2. The data is always generated on-the-fly, as you request it. Query the COLUMNS table twice, and risk two lockdowns of your database.

The auto-cache mechanism solves issue #2. I have it working, time based. I have an auto-cache table for each of the INFORMATION_SCHEMA heavyweight tables. Say, every 30 minutes the cache is invalidated. Throughout those 30 minutes, you get a free pass!

The auto-cache mechanism also paves the road to solving issue #1: since it works by invoking a stored routine, I have better control over the way I read INFORMATION_SCHEMA. Thus, I can take advantage of INFORMATION_SCHEMA optimizations. It's tedious, but not complicated.

For example, if I want to cache the TABLES table, I don't necessarily read the entire TABLES data in one read. Instead, I can iterate the schemata, get a list of table names per schema, then read full row data for these, table by table. The result? Many, many more SELECTs, but more optimized, and no one-big-lock-it-all query.
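A sketch of that iteration as plain queries (this illustrates the idea; it is not the actual routine):

-- list the schemata
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA;
-- for each schema found (e.g. 'sakila'), list its table names only
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
  WHERE TABLE_SCHEMA = 'sakila' AND TABLE_TYPE = 'BASE TABLE';
-- then read the full row data one table at a time (e.g. 'rental')
SELECT * FROM INFORMATION_SCHEMA.TABLES
  WHERE TABLE_SCHEMA = 'sakila' AND TABLE_NAME = 'rental';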

And the problem is…

I have two burning problems.

  1. INFORMATION_SCHEMA optimization only goes so far. It sometimes does not work. In particular, I've noticed that if you have a view which relies on another view (possibly relying on yet another view), things get out of hand. I author a monitoring tool for MySQL called mycheckpoint. It uses some fancy techniques for generating aggregated data, HTML and charts, by means of nested views. There are a few views there I can never query for in COLUMNS. It just crashes my server. Repeatedly. And it's a good machine with good configuration. Make that 5 machines. They all crash, repeatedly. I just can't trust INFORMATION_SCHEMA!
  2. Replication: any caching table is bound to replicate. Does it make any sense to replicate a cache of internal metadata? Does it make sense to query the cached table on a slave, and have it answer with the master's data? With plain old INFORMATION_SCHEMA, every server is on its own. Caching kind of works against this. Or is it fair enough, since we would usually expect master and slaves to share the same schema structure?

I would feel much better if I could read SHOW statements with a SELECT query. Though I've found this nice hack, it can't work from a stored function, only via a stored procedure. So it can't be used from within a SELECT query. I've been banging my head for months now; I think I gave up on this one.

Any insights are welcome!

Auto caching tables
https://shlomi-noach.github.io/blog/mysql/auto-caching-tables
Tue, 06 Mar 2012

Is there a way to create a caching table, some sort of a materialized view, such that upon selecting from that table, its data is validated/invalidated?

Hint: yes.

But to elaborate the point: say I have some table data_table. Can I rewrite all my queries which access data_table to read from some autocache_data_table, but have nothing changed in the query itself? No caveats, no additional WHEREs, and still have that autocache_data_table provide the correct data, dynamically updated by some rule of our choice?

And: no crontab, no event scheduler, and no funny triggers on data_table? In such way that invalidation/revalidation occurs upon SELECT?

Well, yes.

This post is long, but I suggest you read it through to understand the mechanism; it will be worthwhile.

Background

The following derives from my long research on how to provide better, faster and safer access to INFORMATION_SCHEMA tables. It is, however, not limited to this exact scenario, and in this post I provide a simple, general-purpose example. I'll have more to share about INFORMATION_SCHEMA-specific solutions shortly.

I was looking for a server-side solution which would not require query changes, apart from directing the query to other tables. The solution has to be supported by all standard MySQL installs; so: no plugins, no special rebuilds.

Sample data

I’ll explain by walking through the solution. Let’s begin with some sample table:

CREATE TABLE sample_data (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  dt DATETIME,
  msg VARCHAR(128) CHARSET ascii
);

INSERT INTO sample_data VALUES (1, NOW(), 'sample txt');
INSERT INTO sample_data VALUES (2, NOW(), 'sample txt');
INSERT INTO sample_data VALUES (3, NOW(), 'sample txt');

SELECT * FROM sample_data;
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

In this simplistic example, I wish to create a construct which looks exactly like sample_data, but which caches data according to some heuristic. It will, in fact, cache the entire content of sample_data.

That much is not a problem: just create another table to cache the data:

CREATE TABLE cache_sample_data LIKE sample_data;

The big question is: how do you make the table invalidate itself while SELECTing from it?

Here’s the deal. I’ll ask for your patience while I draw the outline, and start with failed solutions. By the end, everything will work.

Failed attempt: purge rows from the table even while reading it

My idea is to create a stored function which purges the cache_sample_data table, then fills it with fresh data, according to some heuristic. Something like this:

DELIMITER $$

CREATE FUNCTION `revalidate_cache_sample_data`() RETURNS tinyint unsigned
    MODIFIES SQL DATA
    DETERMINISTIC
    SQL SECURITY INVOKER
BEGIN
  if(rand() > 0.1) then
    return 0; -- simplistic heuristic
  end if;

  DELETE FROM cache_sample_data;
  INSERT INTO cache_sample_data SELECT * FROM sample_data;
  RETURN 1;
END $$

DELIMITER ;

So the function uses some heuristic. It’s a funny RAND() in our case; you will want to check up on time stamps, or some flags, what have you. But this is not the important part here, and I want to keep the focus on the main logic.

Upon deciding the table needs refreshing, the function purges all rows, then copies everything from sample_data. Sounds fair enough?

Let’s try and invoke it. Just write some query by hand:

mysql> SELECT revalidate_cache_sample_data();
+--------------------------------+
| revalidate_cache_sample_data() |
+--------------------------------+
|                              0 |
+--------------------------------+

mysql> SELECT revalidate_cache_sample_data();
+--------------------------------+
| revalidate_cache_sample_data() |
+--------------------------------+
|                              0 |
+--------------------------------+

mysql> SELECT revalidate_cache_sample_data();
+--------------------------------+
| revalidate_cache_sample_data() |
+--------------------------------+
|                              1 |
+--------------------------------+

First two invocations – nothing. The third one indicated a revalidation of cache data. Let’s verify:

mysql> SELECT * FROM cache_sample_data;
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

OK, seems like the function works.

We now gather some courage, and try calling this function even while SELECTing from the cache table, like this:

SELECT
  cache_sample_data.*
FROM
  cache_sample_data,
  (SELECT revalidate_cache_sample_data()) AS select_revalidate
;
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

To explain what happens in the above query, consider its programmatic nature: we create a derived table, populated by the function’s result. That means the function is invoked in order to generate the derived table. The derived table itself must be materialized before the query begins execution, and so it is that we first invoke the function, then make the SELECT.

Don't open the champagne yet. While the above paragraph is correct, we are deceived: in this last invocation, the function did not attempt a revalidation. The RAND() function just didn't come up with the right value.

Let’s try again:

SELECT
  cache_sample_data.*
FROM
  cache_sample_data,
  (SELECT revalidate_cache_sample_data()) AS select_revalidate
;
ERROR 1442 (HY000): Can't update table 'cache_sample_data' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.

Aha! Bad news. The MySQL manual says on Restrictions on Stored Programs:

A stored function or trigger cannot modify a table that is already being used (for reading or writing) by the statement that invoked the function or trigger.

Anyone to the rescue?

I was quite upset. Can we not make this work? At sorrowful times like these, one reflects on the words of wiser people. What would Roland Bouman say about this?

Oh, yes; he would say: "we can use a FEDERATED table which connects onto itself, thus bypassing the above restriction".

Unfortunately, FEDERATED is by default disabled nowadays; I cannot rely on its existence. Besides, to use FEDERATED one has to fill in passwords and stuff. Definitely not an out-of-the-box solution in this case.

A few more days went by. I decided the problem could not be solved. And then it hit me.

MyISAM to the rescue

MyISAM? Really?

Yes, and not only MyISAM, but also its cousin: its long-abandoned cousin, forgotten once views and partitions came into MySQL. MERGE.

MERGE reflects the data contained within MyISAM tables. Perhaps the most common use for MERGE is to work out a partition-like set of records, with one MyISAM table per month, and an overseeing MERGE table dynamically adding and removing tables from its view.
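As a reminder of that classic pattern (table names are illustrative):

CREATE TABLE records_2011_10 (id INT, ts DATETIME) ENGINE=MyISAM;
CREATE TABLE records_2011_11 (id INT, ts DATETIME) ENGINE=MyISAM;
CREATE TABLE records_all (id INT, ts DATETIME)
  ENGINE=MERGE UNION=(records_2011_10, records_2011_11) INSERT_METHOD=LAST;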

But I intend for MERGE a different use: just be an identical reflection of cache_sample_data.

So we must work out the following:

ALTER TABLE cache_sample_data ENGINE=MyISAM;
CREATE TABLE cache_sample_data_wrapper LIKE cache_sample_data;
ALTER TABLE cache_sample_data_wrapper ENGINE=MERGE UNION=(cache_sample_data);

I just want to verify the new table is set up correctly:

mysql> SELECT * FROM cache_sample_data_wrapper;
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

Seems fine.

So the next step is what makes the difference: the two tables are not the same. One relies on the other, but they are distinct. Our function DELETEs from and INSERTs to cache_sample_data, but it does not affect, nor lock, cache_sample_data_wrapper.

We now rewrite our query to read:

SELECT
  cache_sample_data_wrapper.*
FROM
  cache_sample_data_wrapper,
  (SELECT revalidate_cache_sample_data()) AS select_revalidate
;

This query is perfectly valid. It works. To illustrate, I do:

-- Try this a few times till RAND() is lucky:

TRUNCATE cache_sample_data;

SELECT
  cache_sample_data_wrapper.*
FROM
  cache_sample_data_wrapper,
  (SELECT revalidate_cache_sample_data()) AS select_revalidate
;
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

Whoa! Where did all this data come from? Didn’t we just TRUNCATE the table?

The query worked. The function re-populated cache_sample_data.

The final touch

Isn’t the above query just beautiful? I suppose not many will share my opinion. What happened to my declaration that “the original query need not be changed, apart from querying a different table”?

Yes, indeed. It’s now time for the final touch. There’s nothing amazing in this step, but we all know the way it is packaged is what makes the sale. We will now use views. We use two of them since a view must not contain a subquery in the FROM clause. Here goes:

CREATE OR REPLACE VIEW revalidate_cache_sample_data_view AS
  SELECT revalidate_cache_sample_data()
;

CREATE OR REPLACE VIEW autocache_sample_data AS
  SELECT
    cache_sample_data_wrapper.*
  FROM
    cache_sample_data_wrapper,
    revalidate_cache_sample_data_view
;

And finally, we can make a very simple query like this:

SELECT * FROM autocache_sample_data;
--
-- Magic in work now!
--
+----+---------------------+------------+
| id | dt                  | msg        |
+----+---------------------+------------+
|  1 | 2011-11-24 11:01:30 | sample txt |
|  2 | 2011-11-24 11:01:30 | sample txt |
|  3 | 2011-11-24 11:01:30 | sample txt |
+----+---------------------+------------+

Much as we would query the original sample_data table.

Summary

So what have we got? A stored routine, a MyISAM table, a MERGE table and two views. Quite a lot of constructs just to cache a table! But a beautiful cache access: plain old SQL queries. The flow, in short: a SELECT on the view invokes the revalidation function, which refreshes the MyISAM cache table, which is then read through the MERGE wrapper.

Our cache table is a MyISAM table. It can get corrupted, which is bad. But not entirely bad: it's nothing more than a cache; we can throw away all its data and revalidate. We can actually ask the function to revalidate (say, by passing a parameter).
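A minimal sketch of that last idea, a force-revalidation variant of the earlier function (not part of the original solution; the function and parameter names are illustrative):

DELIMITER $$

CREATE FUNCTION `revalidate_cache_sample_data_force`(force_refresh TINYINT UNSIGNED) RETURNS tinyint unsigned
    MODIFIES SQL DATA
    DETERMINISTIC
    SQL SECURITY INVOKER
BEGIN
  -- skip the heuristic entirely when a refresh is explicitly requested
  if(not force_refresh and rand() > 0.1) then
    return 0;
  end if;

  DELETE FROM cache_sample_data;
  INSERT INTO cache_sample_data SELECT * FROM sample_data;
  RETURN 1;
END $$

DELIMITER ;

-- invoke directly to force a refresh:
SELECT revalidate_cache_sample_data_force(1);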

More MySQL foreach()
https://shlomi-noach.github.io/blog/mysql/more-mysql-foreach
Fri, 02 Dec 2011

In my previous post I've shown several generic use cases for foreach(), a new scripting functionality introduced in common_schema.

In this part I present DBA’s handy syntax for schema and table operations and maintenance.

Confession: while I love INFORMATION_SCHEMA‘s power, I just hate writing queries against it. It’s just so much typing! Just getting the list of tables in a schema makes for this heavy duty query:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='sakila' AND TABLE_TYPE='BASE TABLE';

When a join is involved this really becomes a nightmare. I think it's cumbersome, and as a result, many do not remember the names and meanings of columns, making for "oh, I need to read the manual all over again just to get that query right". Anyway, that's my opinion.

A SHOW TABLES statement is easier to type, but cannot be integrated into a SELECT query (though we have a partial solution for that, too), and besides, when filtering out the views, the SHOW statement becomes almost as cumbersome as the one on INFORMATION_SCHEMA.

Which is why foreach() offers handy shortcuts to common iterations on schemata and tables, as follows:

Use case: iterate all databases

call foreach('schema', 'CREATE TABLE ${schema}.event(event_id INT, msg VARCHAR(128))');

In the above we execute a query on each database. Hmmm, maybe not such a good idea to perform this operation on all databases? Let’s filter them:

Use case: iterate databases by name match

call foreach('schema like wordpress_%', 'ALTER TABLE ${schema}.wp_posts MODIFY COLUMN comment_author VARCHAR(96) NOT NULL');

The above will only iterate my WordPress databases (I have several of these), performing an ALTER on wp_posts for each of those databases.

I don’t have to quote the like expression, but I can, if I wish to.

I can also use a regular expression match:

call foreach('schema ~ /^wordpress_[0-9]+$/', 'ALTER TABLE ${schema}.wp_posts MODIFY COLUMN comment_author VARCHAR(96) NOT NULL');

Use case: iterate tables in a specific schema

Time to upgrade our sakila tables to InnoDB’s compressed format. We use $(), a synonym for foreach().

call $('table in sakila', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB ROW_FORMAT=COMPRESSED');

The above will iterate on tables in sakila. I say tables, since it will avoid iterating views (there is still no specific syntax for views iteration). This is done on purpose, as my experience shows there is very little in common between tables and views when it comes to maintenance and operations.

Use case: iterate tables by name match

Here's an interesting scenario: you wish to work on all tables matching some name. The naive approach would be to:

SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'wp_posts' AND TABLE_TYPE = 'BASE TABLE'

Wait! Are you aware this may bring your server down? This query will open all databases at once, opening all .frm files (though thankfully not data files, since we only check for name and type).

Here’s a better approach:

call foreach('table like wp_posts', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB');

(FULLTEXT support is coming to InnoDB, so the above can make sense in the near future!)

The good part is that foreach() will look for matching tables one database at a time. It will iterate the list of databases, then look for matching tables per database, thereby optimizing the query on INFORMATION_SCHEMA.

Here, too, I can use regular expressions:

call $('table ~ /^wp_.*$/', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB');

Conclusion

This is work in the making, but, as someone who maintains a few production servers, I've already put it to work.

I'm hoping the syntax is easy to comprehend. I know that, since I developed it, it must be far more intuitive to me than to others. I've tried to keep close to common syntax and concepts from various programming languages.

I would like to get as much feedback as possible. I have further ideas and thoughts on the direction common_schema is taking, but wish to take it in small steps. Your feedback is appreciated!

MySQL foreach()
https://shlomi-noach.github.io/blog/mysql/mysql-foreach
Fri, 02 Dec 2011

A new routine is now available in common_schema, which makes for an easier execution syntax for some operations:

foreach(collection_to_iterate_over, queries_to_execute_per_iteration_step);

To illustrate what it can do, consider:

call foreach('table in sakila', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB ROW_FORMAT=COMPACT');

call $('schema like shard_%', 'CREATE TABLE ${schema}.messages (id INT)');

call $('2000:2009', 'INSERT IGNORE INTO report (report_year) VALUES (${1})');

$() stands as a synonym to foreach(). I suspect it should look familiar to web programmers.

The idea for foreach() was introduced by Giuseppe Maxia during a correspondence. At first I was skeptical: this isn't jQuery; this is SQL. Why would I want to use foreach()?

Then Giuseppe provided some use cases, and as I started thinking about it, I found more and more cases where such a tool might considerably reduce scripting overhead and avoid requiring SQL-fu skills. In fact, I have been using it myself for the past few weeks.

I provide examples which I hope will convince the reader of the simplicity of such syntax, showing off the types of input foreach() accepts (query, table search, schema search, set of constants, single or double number ranges), and the types of queries it can execute (single, multiple, using placeholders).

I stress that this is not a replacement for common queries (i.e. it does not come to replace your common SELECT, UPDATE, DELETE), but is more for working out administrative tasks. Nevertheless, the last example in this post does provide an interesting insight into possible "normal" use.

Use case: using values from query

Let’s kill all queries running for over 20 seconds:

call foreach('SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST WHERE time > 20', 'KILL QUERY ${1}');

The thing I like most about foreach() is that it's self-explanatory. Nevertheless, I note:

  • The KILL command is executed for each process running for more than 20 seconds (I did cut corners, since I didn't check for sleeping processes, for simplicity).
  • I also use the ${1} placeholder: much like in awk, this will get the first column in the result set. In our case, it is the single column, id.
  • I chose to invoke a single query/command per iteration step.

Compare the above with another solution to the same problem, using eval():

call eval('SELECT CONCAT(\'KILL QUERY \',id) FROM INFORMATION_SCHEMA.PROCESSLIST WHERE TIME > 20');

They both get the same thing done. But foreach() is just a bit more friendly to write (and read).

Let’s move to a more complicated example.

Use case: using multiple values from a query, invoking multiple commands

Let’s kill some queries, as above, but also write down a log entry so that we know what happened:

call foreach(
  'SELECT id, user FROM INFORMATION_SCHEMA.PROCESSLIST WHERE time > 20',
  'KILL QUERY ${1}; INSERT INTO my_log VALUES (\'have just killed query ${1}, executed by ${2}\');')
;

In the above, for each long running process, we:

  • Kill the process’ query. id being the first column, is referenced by ${1}.
  • INSERT to my_log that said process has been killed. We note both id and user using placeholders ${1} and ${2}, respectively.

It's possible to invoke as many queries/commands as you like per iteration step. It is possible to use placeholders ${1} through ${9}, as well as ${NR}, which works as in awk: it is a row counter, 1-based.

This example can still be written with eval(), but in much uglier form. I can’t just first KILL the processes, then log about them, since by the time I want to log, the queries will not be running; the commands must be coupled. This is naturally done with foreach().

Use case: iterating constant values, invoking DDL

The commands invoked by foreach() can take the form of DML (INSERT/UPDATE/…), DDL (CREATE/ALTER/…) or other (KILL/SET/…). The placeholders can be used anywhere within the text.

Take an installation where different schemata have the same exact table structure. We want to refactor a table on all schemata:

call $('{USA, UK, Japan, NZ}', 'ALTER TABLE db_region_${1}.customer ADD COLUMN personal_notes VARCHAR(4096) CHARSET utf8');

The above translates to the following commands:

ALTER TABLE db_region_USA.customer ADD COLUMN personal_notes VARCHAR(4096) CHARSET utf8;
ALTER TABLE db_region_UK.customer ADD COLUMN personal_notes VARCHAR(4096) CHARSET utf8;
ALTER TABLE db_region_Japan.customer ADD COLUMN personal_notes VARCHAR(4096) CHARSET utf8;
ALTER TABLE db_region_NZ.customer ADD COLUMN personal_notes VARCHAR(4096) CHARSET utf8;

In the above, we:

  • Provide a list of constant values. These can be strings, numbers, whatever. They are space delimited.
  • Use the ${1} place holder. We can also use ${NR}.

Use case: loop through number sequence

We wish to populate a table with values:

call foreach('1970:2038', 'INSERT INTO test.test_dates (dt) VALUES (DATE(\'${1}-01-01\'))');

The above results with:

mysql> SELECT dt FROM test_dates;
+------------+
| dt         |
+------------+
| 1970-01-01 |
| 1971-01-01 |
| 1972-01-01 |
| 1973-01-01 |
| 1974-01-01 |
...
| 2036-01-01 |
| 2037-01-01 |
| 2038-01-01 |
+------------+

With numbers range:

  • Integers are assumed
  • Range is indicated by low and high values, both inclusive
  • Negatives are allowed (e.g. '-5:5', resulting in 11 steps)
  • Placeholders ${1} and ${NR} are allowed.

Use case: iterating through two dimensional numbers range:

We use 3 template tables; we create 15 schemata; in each we create 3 tables based on the template tables:

call foreach( '1:15,1:3',
  'CREATE DATABASE IF NOT EXISTS db_test_${1}; CREATE TABLE db_test_${1}.tbl_${2} LIKE db_template.my_table_${2};'
);

Notes:

  • Each of the number ranges has the same restrictions and properties as listed above (integers, inclusive, ascending)
  • We can now use ${1} and ${2} placeholders, noting the first and second numbers range, respectively.
  • We may also use ${NR}, which, in this case, will run 1 through 45 (15 times 3).
  • We use multiple queries per iteration step.

Use case: overcoming MySQL limitations

MySQL does not support ORDER BY & LIMIT in multi-table UPDATE and DELETE statements (as noted last year). So we cannot:

DELETE FROM t1 USING t1 JOIN t2 ON (...) JOIN t3 ON (..) WHERE x = 7 ORDER BY ts LIMIT 100;

However, we can:

call foreach(
  'SELECT t1.id FROM t1 JOIN t2 ON (...) JOIN t3 ON (..) WHERE x = 7 ORDER BY ts LIMIT 100',
  'DELETE FROM t1 WHERE id = ${1}'
);

Of course, it will do a lot of single-row DELETEs. There are further MySQL limitations which complicate things if I want to overcome this. Perhaps in a later blog post.

Acknowledgements

I hit a weird bug which prevented me from releasing this earlier on. Actually it’s a duplicate of this bug, which makes it 6 years old. Hurray.

To the rescue came Roland Bouman, who suggested an idea so crazy even I was skeptical: to parse and modify the original query so as to rename column names according to my scheme. And of course he made it happen, along with some additional very useful stuff. It's really super-ultra-meta-meta-SQL-fu magic he does there.

So, thanks, Roland, for joining the ride, and thanks, Giuseppe, for testing and helping out to shape this functionality. It’s great fun working with other people on open-source — a new experience for me.

Continued

In this post I’ve covered the general-purpose iterations. There are also more specific types of iterations with foreach(). Continued next.
