Verifying GROUP_CONCAT limit without using variables

I have a case where I must know whether group_concat_max_len is at its default value (1024), which would mean there are some operations I cannot carry out. I’ve ranted about this here.

Normally, I would simply:

SELECT @@group_concat_max_len

However, I am using views, where session variables are not allowed. Using a stored function could do the trick, but I wanted to avoid stored routines. So here’s a very simple test case: is the current group_concat_max_len long enough or not? I’ll present the long version and the short version.
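But first, to illustrate the view restriction (the exact error text may vary between versions):

-- Illustration only: a view definition cannot reference session variables.
CREATE VIEW v_group_concat_len AS
  SELECT @@group_concat_max_len AS group_concat_max_len;
-- Fails with ER_VIEW_SELECT_VARIABLE: View's SELECT contains a variable or parameter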

The long version

SELECT
  CHAR_LENGTH(
    GROUP_CONCAT(
      COLLATION_NAME SEPARATOR ''
    )
  )
FROM
  INFORMATION_SCHEMA.COLLATIONS;

If the result is 1024, we are in bad shape. I happen to know that the total length of collation names is above 1800 characters, so the result gets trimmed down. Another variant of the above query would be:

SELECT
  CHAR_LENGTH(
    GROUP_CONCAT(
      COLLATION_NAME SEPARATOR ''
    )
  ) = SUM(CHAR_LENGTH(COLLATION_NAME))
    AS group_concat_max_len_is_long_enough
FROM
  INFORMATION_SCHEMA.COLLATIONS;

+-------------------------------------+
| group_concat_max_len_is_long_enough |
+-------------------------------------+
|                                   0 |
+-------------------------------------+
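That premise about the total length of collation names is easy to check on your own server (the exact figure depends on which character sets your build includes):

SELECT SUM(CHAR_LENGTH(COLLATION_NAME)) AS total_collation_name_length
FROM INFORMATION_SCHEMA.COLLATIONS;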

The COLLATIONS, CHARACTER_SETS or COLLATION_CHARACTER_SET_APPLICABILITY tables provide values which are known to exist (assuming you did not compile MySQL with only particular charsets). It’s possible to CONCAT, UNION or JOIN columns and tables so as to detect a group_concat_max_len larger than 1800 characters; a rough sketch of this follows. I admit this is becoming ugly, so let’s move on.
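Something along these lines, joining two INFORMATION_SCHEMA tables to lengthen the concatenated input (my own sketch, not necessarily the form I would settle on):

SELECT
  CHAR_LENGTH(
    -- Concatenating collation name + character set name per row lengthens
    -- the total beyond the collation names alone, so this can detect
    -- limits somewhat above the ~1800-character mark.
    GROUP_CONCAT(CONCAT(co.COLLATION_NAME, cs.CHARACTER_SET_NAME) SEPARATOR '')
  ) = SUM(CHAR_LENGTH(co.COLLATION_NAME) + CHAR_LENGTH(cs.CHARACTER_SET_NAME))
    AS group_concat_max_len_is_long_enough
FROM
  INFORMATION_SCHEMA.COLLATIONS AS co
  JOIN INFORMATION_SCHEMA.CHARACTER_SETS AS cs
    ON cs.CHARACTER_SET_NAME = co.CHARACTER_SET_NAME;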

The short version

Don’t want to rely on existing tables? Not sure what values to expect? Look at this:

SELECT CHAR_LENGTH(GROUP_CONCAT(REPEAT('0', 1025))) FROM DUAL

GROUP_CONCAT doesn’t really care about the number of rows. In the above example I’m using a single row (retrieved from the DUAL virtual table), and making sure the repeated string is long enough. Type in any number in place of 1025, and you have a metric for your group_concat_max_len.

SELECT
  CHAR_LENGTH(GROUP_CONCAT(REPEAT('0', 32768))) >= 32768 AS group_concat_max_len_is_long_enough
FROM
  DUAL;

+-------------------------------------+
| group_concat_max_len_is_long_enough |
+-------------------------------------+
|                                   0 |
+-------------------------------------+

The above builds its input with REPEAT. One can replace this with a big string constant instead.
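And since the whole point was that views cannot reference variables, the check can itself be wrapped in a view. A minimal sketch, with a view name and threshold of my own choosing:

CREATE VIEW check_group_concat_max_len AS
  SELECT
    CHAR_LENGTH(GROUP_CONCAT(REPEAT('0', 32768))) >= 32768
      AS group_concat_max_len_is_long_enough
  FROM
    DUAL;

Each SELECT from the view re-evaluates the expression, so it should reflect the group_concat_max_len in effect for the querying session.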

11 thoughts on “Verifying GROUP_CONCAT limit without using variables”

  1. Hi Shlomi,

    why not simply use one of the %_VARIABLES views in the information_schema?

    mysql> select * from information_schema.global_variables where variable_name = 'group_concat_max_len';
    +----------------------+----------------+
    | VARIABLE_NAME        | VARIABLE_VALUE |
    +----------------------+----------------+
    | GROUP_CONCAT_MAX_LEN | 1024           |
    +----------------------+----------------+
    1 row in set (0.00 sec)

  2. Hi Roland,
    Right, didn’t even mention it… 😀
    Only available as of 5.1, while I have a requirement for 5.0 – 5.1.

    Thank you for noting this down.

  3. Hi Sheeri,
    Fair question. Actually, it all comes from issues I’m having while developing the openark-kit & mycheckpoint, two open source projects I’m working on.
    The problem is, I have no idea who will install these, what their MySQL expertise will be, whether they will have privileges to change these settings, and so on.
    I want the tools to handle these cases where the settings are insufficient.

    Who knows how the next installation will go? Since the defaults are so low, how can I be sure that the next machine anyone sets up will have a manual setting for group_concat_max_len?

    As long as the machines are mine, or under my control, yes – I simply set up the variables as I please.

  4. Shlomi — I figured it was in those tools. But if it’s just the tool checking a few things, then setting it *within* the tool will ensure that the tool always has it set appropriately. I’m talking about setting the *session* variable, not the *global* variable.
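
    For illustration, that suggestion amounts to the tool issuing something like:

    -- an arbitrarily chosen value; the tool would pick whatever it needs
    SET SESSION group_concat_max_len = 1024 * 1024;

    before running its GROUP_CONCAT queries, leaving the global setting untouched.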
