MySQL High Performance

Percona's MySQL & InnoDB performance and scalability blog

MySQL benchmarking: Know your baseline variance!

Wed, 2014-11-26 11:00

Often enough I see MySQL benchmark results where the difference between runs is 1% or even less, and conclusions are drawn from it. Not that 1% is unimportant – especially when you’re developing the product you should care about those 1% improvements or regressions, because they tend to add up. However, with such a small difference it is very important to understand whether it is real or just the natural variance of your baseline test.

Take a look at this graph:

This is the result of a simple in-memory, read-only “select by primary key” SysBench benchmark on dedicated physical hardware that is otherwise idle – a simple single-socket system. I tried to stabilize it as much as possible, for example by disabling CPU frequency scaling. Still, I saw about a 3% difference between “good” runs and “bad” runs.

What is the difference between those runs? Simply mysqld restarts.

Does this mean you can’t measure a smaller difference? You can – with an appropriate test plan. Often it makes sense to do several runs; in other cases you need to make sure the system warms up before taking measurements, or that the benchmark runs are long enough. Whatever method you use, it is a good idea to validate your test methodology by conducting several runs of your baseline to ensure the results are stable enough for your purpose. For example, if I decide to do five 30-minute runs and average the results, and they all land within 0.1% of each other, I will consider a 0.3% difference meaningful. A scripted version of this idea is sketched below.
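A minimal sketch of such a baseline-variance check (the test script path, duration, thread count and credentials are placeholders – adjust them for your environment):

#!/bin/sh
# Repeat the identical baseline five times and print the throughput of
# each run, so the spread can be eyeballed before trusting small
# differences between configurations.
for i in 1 2 3 4 5; do
    echo "baseline run $i:"
    sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
             --oltp-read-only=on --max-time=1800 --max-requests=0 \
             --num-threads=16 --mysql-user=root --mysql-db=test run \
        | grep "transactions:"
done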

Another practical trick that often helps me separate real differences from side effects is mixing the tests. Say I am testing configurations A and B: instead of running AAA BBB I would run ABABAB, which helps when there is some regression that accumulates over time, such as with Flash.

You should also note that in modern systems there is almost always something happening in the background that can change performance – the SSD is doing garbage collection, MySQL (or the kernel) is flushing dirty pages, or the CPU simply cools off and as a result can sustain Turbo Boost operation a little longer. So when you start running your benchmarks, make sure you keep the idle time between runs the same – scripting the benchmarks and iterating over scenarios helps here, as in the sketch below.
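A minimal sketch combining the interleaved ABAB ordering with a constant cool-down period (run_benchmark is a hypothetical wrapper around your actual sysbench invocation for a given config file):

#!/bin/sh
# Interleave configurations A and B (ABABAB rather than AAABBB) and keep
# the idle time between runs identical, so background effects hit both
# configurations the same way.
for i in 1 2 3; do
    for cfg in config_A.cnf config_B.cnf; do
        run_benchmark "$cfg"
        sleep 300   # the same cool-down between every run
    done
done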

Happy MySQL benchmarking!


Percona XtraDB Cluster 5.6.21-25.8 is now available

Tue, 2014-11-25 15:45

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on November 25th, 2014. Binaries are available from the downloads area or from our software repositories. We’re also happy to announce that Ubuntu 14.10 and CentOS 7 users can download, install, and upgrade Percona XtraDB Cluster 5.6 from Percona’s software repositories.

Based on Percona Server 5.6.21-70.1, including all the bug fixes in it, on Galera Replicator 3.8, and on Codership wsrep API 25.7, Percona XtraDB Cluster 5.6.21-25.8 is now the current General Availability release. All of Percona‘s software is open-source and free, and all the details of the release can be found in the 5.6.21-25.8 milestone at Launchpad.

New Features:

  • Galera 3.8 introduces auto-eviction for nodes in the cluster that experience network issues such as packet loss. It is off by default and is turned on with the evs.auto_evict option (see the configuration sketch after this list). This feature requires EVS protocol version (evs.version) 1. During the EVS protocol upgrade, all membership changes are communicated over EVS protocol version 0 to preserve backwards compatibility. The protocol is upgraded to the highest commonly supported version when forming a new group, so if even a single node with the older version exists in the group, the group protocol version remains 0 and auto-eviction is not functional. (#1274192).
  • Percona XtraDB Cluster now supports backup locks in XtraBackup SST (in the default xtrabackup-v2 wsrep_sst_method). Backup locks are used in lieu of FLUSH TABLES WITH READ LOCK on the donor during SST. This should allow for minimal disruption of existing and incoming queries, even under high load, and thus for faster SST and less time spent in the ‘donor/desynced’ state. This also introduces the following constraints: Percona XtraDB Cluster 5.6.21 requires Percona XtraBackup 2.2.5 or higher; an older (< 5.6.21) joiner cannot SST from a newer (>= 5.6.21) donor. This is enforced through SST versioning (sent from joiner to donor during SST) and logged explicitly to the error log. (#1390552).
  • Percona XtraDB Cluster is now shipped with Galera MTR test suite.
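As a hedged illustration of the auto-eviction option above (the threshold value here is hypothetical – consult the Galera documentation for its exact semantics), the setting would be passed through wsrep_provider_options in my.cnf:

[mysqld]
# evs.auto_evict sets the delayed-node count after which eviction is
# triggered; 0 (the default) disables the feature. evs.version=1 is
# required for auto-eviction to be functional.
wsrep_provider_options="evs.auto_evict=5;evs.version=1"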

Bugs Fixed:

  • Percona XtraDB Cluster now shows a warning if additional utilities, like pv (which may not affect the critical path of SST), are not installed. Bug fixed #1248688.
  • mysqldump SST can now use the username/password from wsrep_sst_auth under the [sst] group in my.cnf, so that the credentials are not displayed in the error log. Bug fixed #1293798.
  • Normal shutdown under load would cause the server to hang because the replayer failed to finish. Bug fixed #1358701.
  • wsrep_causal_reads variable was not honored when declared as global. Bug fixed #1361859.
  • garbd would not work when cluster address was specified without the port. Bug fixed #1365193.
  • If mysqld was killed during the SST it would leave an unclean data directory behind, which would cause Percona XtraDB Cluster to fail the next time the server was started because the data directory would be corrupted. This was fixed by resuming the startup in case wsrep-recover failed to recover due to a corrupted data directory. The old behavior is still achievable through the --exit-on-recover-fail command line parameter to mysqld_safe or exit-on-recover-fail under [mysqld_safe] in my.cnf. Bug fixed #1378578.
  • Percona XtraDB Cluster now reads environment variables for mysqld from the following files (if present): /etc/default/mysql in Debian/Ubuntu; /etc/sysconfig/mysql in CentOS 6 or lower; /etc/sysconfig/mysql in CentOS 7 with mysql.service; /etc/sysconfig/XYZ in CentOS 7 with mysql@XYZ.service (/etc/sysconfig/bootstrap is supplied by default). Bug fixed #1381492.
  • gvwstate.dat file was removed on joiner when Xtrabackup SST method was used. Bug fixed #1388059.
  • Percona XtraDB Cluster now detects older joiners which don’t have the backup lock support. Bug fixed #1390552.
  • Inserts to a table with an auto-increment primary key could result in a duplicate key error if another node joined or was dropped from the cluster during the insert processing. Bug fixed #1366997.
  • TOI now skips replication if all tables are temporary. Bugs fixed #11 and #13.
  • Bootstrapping a node tried to resolve gcomm address list specified in wsrep-cluster-address. Bug fixed #88.
  • If a stored procedure containing a non-InnoDB (MyISAM) statement performed an autocommit, that commit would be entered twice: at statement end and again at stored procedure end. Bug fixed #2.
  • Two appliers conflicting with a local transaction and later resulting in an (acceptable) BF-BF lock conflict would cause the cluster to hang when the other BF thread did not grant the lock back after its local transaction got aborted. Bug fixed #7.

Other bugs fixed #1378138, #1377226, #1376965, #1356859, #1026181, #1367173, #1390482, #1391634, and #1392369.

Release notes for Percona XtraDB Cluster 5.6.21-25.8 are available in our online documentation along with the installation and upgrade instructions.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!


(More) Secure local passwords in MySQL 5.6 and up

Tue, 2014-11-25 08:00

I log into a lot of different servers running MySQL and one of the first things I do is create a file in my home directory called ‘.my.cnf’ with my credentials to that local mysql instance:

[client]
user=root
password=secret

This means I don’t have to type my password in every time, nor am I tempted to include it on the command line with -p and get the dreaded (but completely accurate):

Warning: Using a password on the command line interface can be insecure.

MySQL 5.6 introduces a utility to make this easier and more secure. First, let’s start with a new mysqld instance with a blank root password and make it more secure:

[vagrant@localhost ~]$ mysqladmin -u root password
New password: secret
Confirm new password: secret
[vagrant@localhost ~]$ mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

Ok, so I’ve added a password, now I want to create my .my.cnf file:

[vagrant@localhost ~]$ mysql_config_editor set --user=root --password
Enter password: secret
[vagrant@localhost ~]$ mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.21-70.0 Percona Server (GPL), Release 70.0, Revision 688

Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

What did ‘mysql_config_editor set’ actually do? It creates a .mylogin.cnf file (which stands in for a .my.cnf) in my home directory that contains my credentials, just in encrypted form:

[vagrant@localhost ~]$ ls -alh .my*
-rw-------. 1 vagrant vagrant 100 Nov 19 16:20 .mylogin.cnf
-rw-------. 1 vagrant vagrant  29 Nov 19 16:20 .mysql_history
[vagrant@localhost ~]$ cat .mylogin.cnf
[binary data]
[vagrant@localhost ~]$ mysql_config_editor print
[client]
user = root
password = *****

The mysql client picks this up right away and will use it by default. This file has good default filesystem permissions, is local to my homedir, and is a whole lot better than specifying it on the command line or typing it in every time.

This utility also supports a feature called ‘login-path’ wherein you can add multiple mysql logins (perhaps to different servers) and refer to them with the --login-path option in the mysql client:

[vagrant@localhost ~]$ mysql_config_editor set --login-path=remote --host=remote --user=remote --password
Enter password: secure
[vagrant@localhost ~]$ mysql_config_editor print --all
[client]
user = root
password = *****
[remote]
user = remote
password = *****
host = remote
[vagrant@localhost ~]$ mysql --login-path=remote
ERROR 2005 (HY000): Unknown MySQL server host 'remote' (0)

The ‘remote’ host doesn’t exist here, but you get the idea. You can create as many login-paths as you want with varied hostnames, credentials and other login parameters and quickly access them with any client supporting login-path.
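Other standard MySQL clients accept the same option, so a defined login path can be reused beyond the mysql client itself; for example (the database name here is hypothetical):

[vagrant@localhost ~]$ mysqldump --login-path=remote mydb > mydb_backup.sql
[vagrant@localhost ~]$ mysqladmin --login-path=remote status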

Now, how secure is this really?  This isn’t secure from anyone who roots your DB server.  I would say the benefits are more about reducing careless password storage and tidier management of local credentials.


Percona Server 5.6.21-70.1 is now available

Mon, 2014-11-24 15:28

Percona is glad to announce the release of Percona Server 5.6.21-70.1 on November 24, 2014. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.21, including all the bug fixes in it, Percona Server 5.6.21-70.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free. Complete details of this release can be found in the 5.6.21-70.1 milestone on Launchpad.

Bugs Fixed:

  • A slave replicating in RBR mode would crash if the table definitions on master and slave differed by an allowed conversion and the binary log contained a table map event followed by two row log events. This bug is an upstream regression introduced by a fix for bug #72610. Bug fixed #1380010.
  • An incorrect source code function attribute would cause MySQL to crash on an InnoDB row write, if compiled with a recent GCC with certain compilation options. Bug fixed #1390695 (upstream #74842).
  • MTR tests for Response Time Distribution were not packaged in binary packages. Bug fixed #1387170.
  • The RPM packages provided for CentOS 5 were built using a debugging information format which is not supported in the gdb version included with CentOS 5.10. Bug fixed #1388972.
  • A session on a server in mixed-mode binlogging would switch to row-based binlogging whenever a temporary table was created and then queried. This switch would last until the session ended or until all temporary tables in the session were dropped. This was unnecessarily restrictive and has been fixed so that only the statements involving temporary tables are logged in the row-based format, whereas the rest of the statements continue to use statement-based logging. Bug fixed #1313901 (upstream #72475).

Other bugs fixed: #1387227, and #1388001.

Release notes for Percona Server 5.6.21-70.1 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.


Schema changes in MySQL for OpenStack Trove users

Mon, 2014-11-24 08:00

People using OpenStack Trove instances can hit a common issue in the MySQL world: how to perform schema change operations while minimizing the impact on the database server? Let’s explore the options that can allow online schema changes.

Summary

With MySQL 5.5, pt-online-schema-change from Percona Toolkit is your best option for large tables while regular ALTER TABLE statements are only acceptable for small tables. Also beware of metadata locks.

With MySQL 5.6, almost all types of schema changes can be done online. Metadata locks can also be an issue. pt-online-schema-change can still be worth using as it is also online on read replicas.

Regular ALTER TABLE with MySQL 5.5

If you are still using MySQL 5.5, almost all schema changes will require a table rebuild and MySQL will set a write lock. Therefore all writes to the table that gets modified will be blocked. As soon as the table gets large or if you cannot afford to have a maintenance window, using ALTER TABLE becomes tricky.

The only exception is that secondary indexes can be added or removed without rebuilding the whole table. The table is still write locked during the operation but it is much faster.

You can spot this ‘fast index creation’ process by looking at SHOW PROCESSLIST (see manage keys in the State field):

+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
| Id | User | Host      | db     | Command | Time | State       | Info                                    | Rows_sent | Rows_examined | Rows_read |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
|  1 | root | localhost | sbtest | Query   |    4 | manage keys | alter table sbtest2 add index idx_k (k) |         0 |             0 |         0 |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+

Fast index creation is helpful for small or even medium tables, but it is not fast enough for large tables.

A common workaround when you have a master-slave configuration is to perform the schema change on the slave first, promote the slave, and then perform the schema change on the old master. However, replication is not available if you are using Icehouse, and even with Juno replicas are read-only. So this workaround is not usable with Trove.

pt-online-schema-change

A good alternative is to use pt-online-schema-change. In a nutshell the tool creates a new table with the desired structure, creates triggers on the original table, and copies rows in chunks from the original table to the new table.

Nice features of the tool include being able to write to the original table while the schema change is being performed (hence the name of the tool), monitoring replication lag on slaves and automatic throttling to keep lag under a reasonable threshold.

Let’s assume you want to add a new field called ‘city’ in the ‘user’ table. Instead of running:

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';

you will run:

# pt-online-schema-change --progress=percentage,10 --alter="ADD city VARCHAR(50) NOT NULL DEFAULT ''" h=localhost,u=root,D=mydb,t=user --execute

All ALTER TABLE statements can be converted to an equivalent pt-online-schema-change command.
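For example, adding an index translates the same way (the index name here is illustrative):

mysql> ALTER TABLE mydb.user ADD INDEX idx_city (city);

becomes:

# pt-online-schema-change --alter="ADD INDEX idx_city (city)" h=localhost,u=root,D=mydb,t=user --execute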

Is there any tradeoff? The first one is that pt-online-schema-change will be slower than a plain ALTER TABLE. This is expected as the tool has extra code to throttle itself if it adds too much load.

How much slower is it? As always, it depends. On a lightly loaded server the difference will be minimal: for instance, I measured 3m24s for ALTER TABLE and 3m30s for pt-online-schema-change on a test table. However, on a heavily loaded server it can be around 5x slower.

The second tradeoff is that pt-online-schema-change adds triggers on the original table to capture data change. So if you already have triggers, you can’t use the tool (this will be lifted in MySQL 5.7).

Metadata Locks

Metadata locks were introduced in MySQL 5.5 for better transaction isolation.

But one side effect is: if you start a schema change operation on a table while another transaction is reading or writing on the same table, ALTER TABLE will be stuck in the Waiting for metadata lock state.

Another negative side effect of metadata locks is that in some situations ALTER TABLE will also block all queries to the table that is being altered, even reads. In the worst cases, this may create a query pileup situation that will freeze the whole server. For more on this topic, have a look at this post.
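This is easy to reproduce (a sketch, with a hypothetical table t): leave a transaction open in one session, then run the ALTER in another:

-- Session 1: open a transaction touching the table and leave it open
mysql> BEGIN; SELECT * FROM t LIMIT 1;

-- Session 2: this now blocks, and SHOW PROCESSLIST reports it in the
-- "Waiting for table metadata lock" state until session 1 commits or
-- rolls back
mysql> ALTER TABLE t ADD COLUMN c INT;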

What about pt-online-schema-change and metadata locks? It is of course hit by this issue/feature. However the good thing is that the timeout for metadata locks is set by the tool to 60s instead of the default 1 year. So after 60s, pt-online-schema-change will simply retry the operation that failed because of metadata locks.

MySQL 5.6: Online Schema Changes?

Metadata locks also exist with MySQL 5.6, the same issues as described above can then happen.

However the good news with MySQL 5.6 is that most schema changes can be done online. Now the question is: should you use pt-online-schema-change or a regular online ALTER TABLE statement?

Both have pros and cons:

  • ALTER TABLE is easy to use while being confident with pt-online-schema-change requires some time.
  • There is no way for ALTER TABLE to know if it’s overloading a replica while pt-online-schema-change monitors replication lag and throttles itself if needed.
  • ALTER TABLE is only online for the master, not for replicas while pt-online-schema-change is online for all servers.

The last point can be a bit confusing: why would an online ALTER TABLE on the master not be online on the slave as well? The truth is that unless you are using multi-threaded replication, writes on slaves are serialized. So while the ALTER TABLE is running, the slave will not process any other write, which is not much different from an offline ALTER TABLE.

So if you are running a single server (all Icehouse users for instance), ALTER TABLE is probably the right choice. But if you have read replicas and specifically if the application cannot tolerate replication lag, pt-online-schema-change is definitely a better choice.

Conclusion

Performing schema changes is becoming easier with newer versions of MySQL. However potential issues can be found with any version, so always be careful when you need to run ALTER TABLE.


Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

Fri, 2014-11-21 16:00

I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Cloud Compute nodes. This gives you the ability to rapidly spin up a cluster for experimentation, performance testing, or even production use on Google Compute Engine virtual machines.

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is a virtually synchronous cluster of MySQL InnoDB nodes. Unlike conventional MySQL asynchronous replication, which has a specific network topology of master and slaves, PXC’s nodes have no specific topology. Functionally, this means that there are no masters and slaves, so you can read and write on any node.

Further, any failure in the cluster does not require any re-arranging of the replication topology. Instead, clients just reconnect to another node and continue reading and writing.

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I’d recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud’s solutions page here: https://cloud.google.com/solutions/percona to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, disk storage type and volume, etc. Once you ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it’s all up to you what you want to do.

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:

jayj@~ [500]$ ssh [public ip]
Linux percona-mysql-niyr 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
jayj@percona-mysql-niyr:~$

Once there I installed sysbench 0.5 from the Percona apt repo:

jayj@percona-mysql-niyr:~$ sudo -i
root@percona-mysql-niyr:~# apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
root@percona-mysql-niyr:~# echo "deb http://repo.percona.com/apt wheezy main" > /etc/apt/sources.list.d/percona.list
root@percona-mysql-niyr:~# apt-get update; apt-get install sysbench -y
...
Setting up sysbench (0.5-3.wheezy) ...
root@percona-mysql-niyr:~# logout
jayj@percona-mysql-niyr:~$

Now we can load some data and run a quick test:

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-tables-count=16 --oltp-table-size=1000000 --oltp-auto-inc=off --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored

Threads started!
thread prepare4
Creating table 'sbtest5'...
thread prepare2
Creating table 'sbtest3'...
thread prepare6
Creating table 'sbtest7'...
thread prepare0
Creating table 'sbtest1'...
thread prepare7
Creating table 'sbtest8'...
thread prepare1
Creating table 'sbtest2'...
thread prepare5
Creating table 'sbtest6'...
thread prepare3
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest5'
Inserting 100000 records into 'sbtest3'
Inserting 100000 records into 'sbtest7'
Inserting 100000 records into 'sbtest1'
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest4'
Creating table 'sbtest13'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest13'
Inserting 100000 records into 'sbtest15'
Creating table 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest11'
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest12'
Creating table 'sbtest10'...
Creating table 'sbtest14'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest16'
Inserting 100000 records into 'sbtest9'
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest14'
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           608
        other:                           32
        total:                           640
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 608    (11.33 per sec.)
    other operations:                    32     (0.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          53.6613s
    total number of events:              10000
    total time taken by event execution: 0.0042s
    response time:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.02ms
         approx.  95 percentile:               0.00ms

Threads fairness:
    events (avg/stddev):           1250.0000/3307.19
    execution time (avg/stddev):   0.0005/0.00

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua --oltp-tables-count=16 --oltp-table-size=1000000 --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test --max-requests=0 --max-time=10 --report-interval=1 --oltp-auto-inc=off --rand-init=on --rand-type=uniform run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from timer.

Random number generator seed is 0 and will be ignored

Threads started!

[  1s] threads: 8, tps: 0.00, reads: 0.00, writes: 1396.43, response time: 10.85ms (95%), errors: 0.00, reconnects: 0.00
[  2s] threads: 8, tps: 0.00, reads: 0.00, writes: 1314.13, response time: 14.91ms (95%), errors: 0.00, reconnects: 0.00
[  3s] threads: 8, tps: 0.00, reads: 0.00, writes: 1382.87, response time: 12.34ms (95%), errors: 0.00, reconnects: 0.00
[  4s] threads: 8, tps: 0.00, reads: 0.00, writes: 949.09, response time: 12.88ms (95%), errors: 0.00, reconnects: 0.00
[  5s] threads: 8, tps: 0.00, reads: 0.00, writes: 1312.01, response time: 11.27ms (95%), errors: 0.00, reconnects: 0.00
[  6s] threads: 8, tps: 0.00, reads: 0.00, writes: 898.92, response time: 11.64ms (95%), errors: 0.00, reconnects: 0.00
[  7s] threads: 8, tps: 0.00, reads: 0.00, writes: 1541.71, response time: 10.59ms (95%), errors: 0.00, reconnects: 0.00
[  8s] threads: 8, tps: 0.00, reads: 0.00, writes: 1551.35, response time: 11.48ms (95%), errors: 0.00, reconnects: 0.00
[  9s] threads: 8, tps: 0.00, reads: 0.00, writes: 923.07, response time: 10.40ms (95%), errors: 0.00, reconnects: 0.00
[ 10s] threads: 8, tps: 0.00, reads: 0.00, writes: 1273.99, response time: 11.01ms (95%), errors: 0.00, reconnects: 0.00
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           12551
        other:                           0
        total:                           12551
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 12551  (1254.65 per sec.)
    other operations:                    0      (0.00 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          10.0036s
    total number of events:              12551
    total time taken by event execution: 79.9602s
    response time:
         min:                                  1.13ms
         avg:                                  6.37ms
         max:                                  389.52ms
         approx.  95 percentile:               11.68ms

Threads fairness:
    events (avg/stddev):           1568.8750/4.81
    execution time (avg/stddev):   9.9950/0.00

So, we see ~1200 tps on an update test in our little cluster, not too bad!

What it is not

This wizard is a handy way to get a cluster setup for some experimentation or testing. However, it is not a managed service:  after you launch it, you’re responsible for maintaining, tuning, etc. You could use it for production, but you may want some further fine tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.


Sys Schema for MySQL 5.6 and MySQL 5.7

Thu, 2014-11-20 14:31

Performance Schema (P_S) has been available since MySQL 5.5, more than 4 years ago. It is still difficult to find production servers with P_S enabled, even with MySQL 5.6 where the feature is enabled by default. There have been several complaints: the overhead, fixes that are still a work in progress, and the ease of use. Having 52 tables to query and 31 configuration variables is enough to scare people away.

There is a solution for the second problem, the usability. Its name is “sys schema”. It is a collection of views, functions and procedures that make P_S more user friendly.

Installation

If you are a MySQL Workbench user the installation is pretty easy because sys schema is already included. You just need to install it. Click on “Performance – Performance Reports” and there you will find the “Install Helper” button that will install sys schema.

If you don’t use MySQL Workbench you need to download sys_56.sql or sys_57.sql (depending on whether you use 5.6 or 5.7) from the GitHub repository. Then, just import the sql file as usual:

mysql -u root -p < ./sys_56.sql

Usage

After the import, you will have a new “sys” schema with some very descriptive table names. Let’s see an example. Do you want to know which tables are using most of the InnoDB buffer memory? Easy:

mysql> select * from sys.innodb_buffer_stats_by_table;
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+
| object_schema | object_name        | allocated | data      | pages | pages_hashed | pages_old | rows_cached |
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+
| test          | t                  | 63.61 MiB | 58.06 MiB |  4071 |         4071 |      4071 |     2101222 |
| InnoDB System | SYS_FOREIGN        | 32.00 KiB | 0 bytes   |     2 |            2 |         2 |           0 |
| InnoDB System | SYS_TABLES         | 32.00 KiB | 1.11 KiB  |     2 |            2 |         2 |          10 |
| InnoDB System | SYS_COLUMNS        | 16.00 KiB | 4.68 KiB  |     1 |            1 |         1 |          71 |
| InnoDB System | SYS_DATAFILES      | 16.00 KiB | 324 bytes |     1 |            1 |         1 |           6 |
| InnoDB System | SYS_FIELDS         | 16.00 KiB | 722 bytes |     1 |            1 |         1 |          17 |
| InnoDB System | SYS_INDEXES        | 16.00 KiB | 836 bytes |     1 |            1 |         1 |          12 |
| InnoDB System | SYS_TABLESPACES    | 16.00 KiB | 318 bytes |     1 |            1 |         1 |           6 |
| mysql         | innodb_index_stats | 16.00 KiB | 274 bytes |     1 |            1 |         1 |           3 |
| mysql         | innodb_table_stats | 16.00 KiB | 53 bytes  |     1 |            1 |         1 |           1 |
+---------------+--------------------+-----------+-----------+-------+--------------+-----------+-------------+

Pretty easy and useful, right? You can also find out which database is using the most memory in the buffer pool by querying innodb_buffer_stats_by_schema.
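For example (the output follows the same format as the per-table view above):

mysql> select * from sys.innodb_buffer_stats_by_schema;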

For each view there is a similar one whose name starts with x$. For example, you have user_summary_by_file_io and x$user_summary_by_file_io. The difference is that the first view has human-readable output values while the second one has the raw values. Let’s see an example:

mysql> select * from sys.user_summary_by_file_io;
+------------+-------+------------+
| user       | ios   | io_latency |
+------------+-------+------------+
| root       | 19514 | 2.87 s     |
| background |  5916 | 1.91 s     |
+------------+-------+------------+
2 rows in set (0.00 sec)

mysql> select * from sys.x$user_summary_by_file_io;
+------------+-------+---------------+
| user       | ios   | io_latency    |
+------------+-------+---------------+
| root       | 19514 | 2871847094292 |
| background |  5916 | 1905079715132 |
+------------+-------+---------------+

For humans – at least for me – it is easier to read seconds rather than picoseconds.

There are multiple tables with very descriptive names.

– io_by_thread_by_latency
– schema_unused_indexes
– statements_with_temp_tables
– statements_with_errors_or_warnings
– user_summary_by_statement_type
– waits_by_host_by_latency

There are many more, and they are explained with examples in the project’s README file.
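For instance, spotting indexes that have not been used since server start is a single query (the output of course depends on your workload):

mysql> select * from sys.schema_unused_indexes limit 5;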

Configuration

On the MySQL side nothing special is needed. Just enable performance_schema:

performance_schema="on"

sys schema also provides some procedures to enable/disable some features. For example:

– ps_setup_enable_background_threads
– ps_setup_enable_consumers
– ps_setup_enable_instrument
and so on…

We also have the same procedures with “disable”. After you have made changes you can save them by calling ps_setup_save() and reload them later by calling ps_setup_reload_saved(). If you want to reset the configuration to the default values just call ps_setup_reset_to_default().

For example, we can check that some consumers are disabled. It is possible to enable them and save the configuration:

mysql> CALL sys.ps_setup_show_disabled_consumers();
+--------------------------------+
| disabled_consumers             |
+--------------------------------+
| events_stages_current          |
| events_stages_history          |
| events_stages_history_long     |
| events_statements_history      |
| events_statements_history_long |
| events_waits_current           |
| events_waits_history           |
| events_waits_history_long      |
+--------------------------------+

mysql> CALL sys.ps_setup_enable_consumers('events');
+---------------------+
| summary             |
+---------------------+
| Enabled 8 consumers |
+---------------------+

mysql> CALL sys.ps_setup_show_disabled_consumers();
Empty set (0.00 sec)

mysql> CALL sys.ps_setup_save(TRUE);

Conclusion

Performance Schema is very useful when we want to know what is happening inside the database. It is getting more features with each new GA release and will probably become the single point of information in the near future. Now, thanks to sys schema, it is also easy to use.


How a set of queries can be killed in MySQL using Percona Toolkit’s pt-kill

Wed, 2014-11-19 15:18

You might have encountered situations where you had to kill some specific select queries that were running for long periods and choking the database. This post will go into more detail with an example of report query offloading.

Report query (select) offloading to a slave server is a common practice to reduce the workload of the master DB server. The long-running selects are executed on the slave for report generation. I have observed many cases where the slave gets delayed or slows down due to heavy, long-running, orphaned selects from badly written reports.

There are two main ways to kill queries in MySQL: 1. use a custom script that matches on a regular expression, or 2. use a tool written and supported by Percona that is designed to kill queries based on matching conditions. Below is one script that will help you kill those queries. The script takes the process list from MySQL and filters the long-running select queries considering the “User”, “Time” and “State” from the list. However, I suggest that you use the pt-kill tool from Percona Toolkit, which provides a more reliable choice and options based on your requirements.

Process list:

+-----+------+-----------+------+---------+------+------------------------------+-----------------------------+
| Id  | User | Host      | db   | Command | Time | State                        | Info                        |
+-----+------+-----------+------+---------+------+------------------------------+-----------------------------+
| 103 | root | localhost | test | Query   |    0 | init                         | show processlist            |
| 104 | root | localhost | test | Sleep   |  383 |                              | NULL                        |
| 106 | root | localhost | test | Query   |  377 | Waiting for table level lock | SELECT * FROM t FOR UPDATE  |
| 107 | root | localhost | test | Query   |  364 | Waiting for table level lock | insert into t value(5)      |
| 108 | rpt  | localhost | test | Query   |  345 | Waiting for table level lock | SELECT c1 FROM t FOR UPDATE |
+-----+------+-----------+------+---------+------+------------------------------+-----------------------------+

1. Shell script example:

List all queries from the rpt user having query time greater than 1 minute:

[root@test3 ~]# for id in $(mysql -t -e "SHOW FULL PROCESSLIST" | /bin/awk -F "|" '($6 ~ /Query/) && (60 < $7) && ($3 ~ /rpt/) { print $2}'); do echo "KILL QUERY $id;"; done
KILL QUERY 108;

Kill all queries from the rpt user having query time greater than 1 minute:

[root@test3 ~]# for id in $(mysql -t -e "SHOW FULL PROCESSLIST" | /bin/awk -F "|" '($6 ~ /Query/) && (60 < $7) && ($3 ~ /rpt/) { print $2}'); do mysql -e "KILL QUERY $id;"; done

2. Pt-kill example:

List all queries from the rpt user having query time greater than 1 minute (--print):

[root@test3 ~]# pt-kill --busy-time 60 --match-command Query --user root --print --match-user rpt --socket /tmp/mysql.sock
# 2014-11-12T03:51:01 KILL 108 (Query 485 sec) SELECT c1 FROM t FOR UPDATE
# 2014-11-12T03:51:31 KILL 108 (Query 515 sec) SELECT c1 FROM t FOR UPDATE
# 2014-11-12T03:52:01 KILL 108 (Query 545 sec) SELECT c1 FROM t FOR UPDATE

Kill all queries from the rpt user having query time greater than 1 minute (--kill-query):

[root@test3 ~]# pt-kill --busy-time 60 --match-command Query --user root --print --kill-query --match-user rpt --socket /tmp/mysql.sock
# 2014-11-12T03:53:26 KILL QUERY 108 (Query 630 sec) SELECT c1 FROM t FOR UPDATE

Process list:

+-----+------+-----------+------+---------+------+------------------------------+----------------------------+
| Id  | User | Host      | db   | Command | Time | State                        | Info                       |
+-----+------+-----------+------+---------+------+------------------------------+----------------------------+
| 103 | root | localhost | test | Query   |    0 | init                         | show processlist           |
| 104 | root | localhost | test | Sleep   |  843 |                              | NULL                       |
| 106 | root | localhost | test | Query   |  837 | Waiting for table level lock | SELECT * FROM t FOR UPDATE |
| 107 | root | localhost | test | Query   |  824 | Waiting for table level lock | insert into t value(5)     |
| 108 | rpt  | localhost | test | Sleep   |  805 |                              | NULL                       |
| 111 | root | localhost | NULL | Sleep   |   25 |                              | NULL                       |
+-----+------+-----------+------+---------+------+------------------------------+----------------------------+

Note: the --kill-query option makes pt-kill kill matching queries. This requires MySQL 5.0 or newer. Unlike --kill, which kills the connection for matching queries, this option only kills the query, not its connection.

pt-kill captures queries from SHOW PROCESSLIST, filters them, and then either kills or prints them. This is also known as a “slow query sniper” in some circles. The idea is to watch for queries that might be consuming too many resources, and kill them. For details please read the tool’s (pt-kill) documentation.
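pt-kill can also be left running continuously as a background watchdog; a minimal sketch (check the documentation of your pt-kill version for the exact options, and the log path here is illustrative):

[root@test3 ~]# pt-kill --busy-time 60 --match-command Query --match-user rpt \
    --interval 30 --kill-query --print --daemonize --log /var/log/pt-kill.log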


Avoiding MySQL ALTER table downtime

Tue, 2014-11-18 17:50

MySQL table alterations can interrupt production traffic, causing a bad customer experience or, in the worst cases, loss of revenue. Not all DBAs, developers and sysadmins know MySQL well enough to avoid this pitfall. DBAs usually encounter these kinds of production interruptions when working with upgrade scripts that touch both application and database, or when an inexperienced admin/dev engineer performs the schema change without knowing how MySQL operates internally.

Truths
* A direct MySQL ALTER TABLE locks the table for the duration of the change (pre-5.6)
* Online DDL in MySQL 5.6 is not always online and may incur locks
* Even with Percona Toolkit‘s pt-online-schema-change there are several workloads that can experience blocking

Here on the Percona MySQL Managed Services team we encourage our clients to work with us when planning and performing schema migrations. We aim to ensure that we are using the best method available in their given circumstance. Our intention in avoiding blocking when performing DDL on large tables is to ensure that business can continue as usual whilst we strive to improve response time or add application functionality. The bottom line is that a business relying on access to its data cannot afford to be down during core trading hours.

Many of the installations we manage are still below MySQL 5.6, which requires us to seek workarounds to minimize the amount of disruption a migration can cause. This may entail slave promotion or changing the schema with an ‘online schema change’ tool. MySQL version 5.6 looks to address this issue by reducing the number of scenarios where a table is rebuilt and locked, but it doesn’t yet cover all eventualities; for example, when changing the data type of a column a full table rebuild is necessary. The topic of 5.6 Online Schema Change was discussed in great detail last year in the post “Schema changes – what’s new in MySQL 5.6?” by Przemysław Malkowski.

With new functionality arriving in MySQL 5.7, we look forward to non-blocking DDL operations such as OPTIMIZE TABLE and RENAME INDEX. (More info)

The best advice for MySQL 5.6 users is to review the online DDL matrix to familiarize yourself with situations where it might be best to look outside of MySQL to perform schema changes. The good news is that we’re on the right path to solving this natively.

Truth be told, a blocking alter is usually going to go unnoticed on a 30MB table, and we tend to use a direct alter in this situation, but on a 30GB or 300GB table we have some planning to do. If there is a period of time when activity is low and locking the table is permissible, then sometimes it is better to execute within this window. Frequently, though, we are reacting to new SQL statements or a new performance issue, and an emergency index is required to reduce load on the master in order to improve the response time.

To pt-osc or not to pt-osc?

As mentioned, pt-online-schema-change is a fixture in our workflow. It’s usually the right way to go, but we still have occasions where pt-online-schema-change cannot be used, for example when a table already uses triggers. It is important to remind ourselves of the steps that pt-online-schema-change traverses to complete its job. Let’s look at the source code to identify these:

[moore@localhost]$ egrep 'Step' pt-online-schema-change
# Step 1: Create the new table.
# Step 2: Alter the new, empty table. This should be very quick,
# Step 3: Create the triggers to capture changes on the original table and <--(metadata lock)
# Step 4: Copy rows.
# Step 5: Rename tables: orig -> old, new -> orig <--(metadata lock)
# Step 6: Update foreign key constraints if there are child tables.
# Step 7: Drop the old table.

I pick out steps 3 and 5 from above to highlight a source of potential downtime due to locks, but step 6 is also an area for concern, since foreign keys can have nested actions and should be considered when planning these operations, to avoid related tables being rebuilt implicitly with a direct alter. There are several ways to approach a table with referential integrity constraints, and they are detailed within the pt-osc documentation. A good preparation step is to review the structure of your table, including the constraints, and how the ripples of the change can affect the tables around it.

Recently we were alerted to an incident after a client with a highly concurrent and highly transactional workload ran a standard pt-online-schema-change script over a large table. This appeared normal to them, but a few hours later our pager man was notified that this client had hit its max_connections limit. So what was going on? When pt-online-schema-change reached step 5 it tried to acquire a metadata lock to rename the original and the shadow table; however, this wasn’t immediately granted due to open transactions, and thus threads began to queue behind the RENAME command. The actual effect this had on the client’s application was downtime. No new connections could be made and all existing threads were waiting behind the RENAME command.

Metadata locks
Introduced in 5.5.3 at the server level. When a transaction starts it will acquire a metadata lock (independent of storage engine) on all tables it uses, and then release them when it has finished its work. This ensures that nothing can alter the table definition whilst a transaction is open.

With some foresight and planning we can avoid these situations with non-default pt-osc options, namely --no-drop-new-table and --no-swap-tables. This combination leaves both the shadow table and the triggers in place so that we can instigate an atomic RENAME when load permits.
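When that moment comes, the swap itself is a single atomic statement along these lines (table names here are illustrative; by convention pt-osc names the shadow table with a leading underscore and a _new suffix, so verify the actual name it created before running this):

mysql> RENAME TABLE mydb.tbl TO mydb._tbl_old, mydb._tbl_new TO mydb.tbl;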

EDIT: as of percona-toolkit version 2.2 we have a new variable, --tries, which in conjunction with --set-vars has been deployed to cover this scenario where various pt-osc operations could block waiting for a metadata lock. The default behaviour of pt-osc (--set-vars) is to set the following session variables when it connects to the server:

wait_timeout=10000
innodb_lock_wait_timeout=1
lock_wait_timeout=60

When using --tries we can granularly identify the operation, try count and the wait interval between tries. This combination will ensure that pt-osc kills its own waiting session in good time to avoid the thread pileup, and provides us with a loop to attempt to acquire our metadata lock for triggers/rename/fk management:

--tries swap_tables:5:0.5,drop_triggers:5:0.5

The documentation is here http://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html#cmdoption-pt-online-schema-change–tries

This illustrates that even with a tool like pt-online-schema-change it is important to understand the caveats of the solution you think is most adequate. To help decide the direction to take, use the flow chart to ensure you’re taking into account some of the caveats of a MySQL schema change. Be sure to read up on the recommended outcome, though, as there are uncharted areas such as disk space and IO load that are not featured on the diagram.

Choosing the right DDL option

Ensure you know what effect ALTER TABLE will have on your platform and pick the right method to suit your uptime requirements. Sometimes that means delaying the change until a period of lighter use, or utilising a tool that will avoid holding the table locked for the duration of the operation. A direct ALTER is sometimes the answer, such as when you have triggers installed on a table.

– In most cases pt-osc is exactly what we need
– In many cases pt-osc is needed but the way in which it’s used needs tweaking
– In a few cases pt-osc isn’t the right tool/method and we need to consider a native blocking ALTER or using failovers to juggle the change into place on all hosts in the replica cluster.

If you want to learn more about avoiding avoidable downtime please tune into my webinar Wednesday, November 19 at 10 a.m. PST. It’s titled “Tips from the Trenches: A Guide to Preventing Downtime for the Over-Extended DBA.” Register now! (If you miss it, don’t worry: Catch the recording and download the slides from that same registration page.)


MySQL’s INNODB_METRICS table: How much is the overhead?

Tue, 2014-11-18 13:00

Starting with MySQL 5.6 there is an INNODB_METRICS table available in INFORMATION_SCHEMA, which contains additional information beyond what is provided in the SHOW GLOBAL STATUS output – yet might be more lightweight than PERFORMANCE_SCHEMA.

Too bad INNODB_METRICS was designed during the Oracle-Sun split under MySQL leadership, and so it covers only InnoDB counters. I think this would be a great replacement for all the counters currently provided through SHOW STATUS – it captures more information, such as MIN/MAX counts for variables and the type of the counter (whether it is current or cumulative), as well as a human-readable comment describing what the counter means.

Examples of data you can get only from the INNODB_METRICS table include information about InnoDB page splits and merges (which can have quite an impact on database performance).

It also includes details of InnoDB purging performance, adaptive hash index activity, InnoDB flushing, and how index condition pushdown (ICP) is working for you.
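Since the table carries a SUBSYSTEM column, pulling out just one area of interest is a simple filter; for example, the index page split and merge counters mentioned above:

mysql> select name, count, status from information_schema.innodb_metrics where subsystem = 'index';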

The InnoDB Metrics are disabled by default as of MySQL 5.6, and very elaborate configuration commands are provided – you can enable/disable individual counters or counters for specific subsystems you’re interested in. I would expect most users, though, to need only the basic configuration:

set global innodb_monitor_disable = all;
set global innodb_monitor_enable = all;

…which disables and enables all InnoDB Metrics, respectively. Of course, if you would rather keep the changes permanent, you would want to set this in the MySQL configuration file. Small side note – some of the InnoDB metrics correspond to SHOW STATUS counters and those are permanently enabled.
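A minimal sketch of the permanent setting in my.cnf:

[mysqld]
# keep all InnoDB metrics counters enabled across restarts
innodb_monitor_enable = all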

As those metrics are disabled by default, I was wondering if they really have such a huge overhead that we can’t keep them enabled. In my tests I measured less than 1% overhead; in fact, the variance between benchmark runs was larger than the overhead caused by having the metrics enabled. On very large systems with heavy workloads the overhead might be higher – if you observe any significant overhead from using INNODB_METRICS please comment on this post so we know.

Finally, let me post a selection of metrics that were active while running a simple sysbench test. In total there are 214 metrics as of the current MySQL 5.6 release.

mysql> select name, count, round(avg_count,3) avg_cnt, type, comment from innodb_metrics where count > 0 or max_count > 0;
+------+-------+---------+------+---------+
| name | count | avg_cnt | type | comment |
+------+-------+---------+------+---------+
| metadata_table_handles_opened | 1 | 0.000 | counter | Number of table handles opened |
| metadata_table_reference_count | -6 | -0.000 | counter | Table reference counter |
| metadata_mem_pool_size | 8388608 | NULL | value | Size of a memory pool InnoDB uses to store data dictionary and internal data structures in bytes |
| lock_deadlocks | 521 | 0.002 | counter | Number of deadlocks |
| lock_timeouts | 6857 | 0.027 | counter | Number of lock timeouts |
| lock_rec_lock_waits | 1837441 | 16.150 | counter | Number of times enqueued into record lock wait queue |
| lock_rec_lock_requests | 65270862 | 573.694 | counter | Number of record locks requested |
| lock_rec_lock_created | 36872834 | 324.091 | counter | Number of record locks created |
| lock_rec_lock_removed | 36872830 | 324.091 | counter | Number of record locks removed from the lock queue |
| lock_rec_locks | 4 | 0.000 | counter | Current number of record locks on tables |
| lock_table_lock_created | 12239169 | 107.575 | counter | Number of table locks created |
| lock_table_lock_removed | 12239165 | 107.575 | counter | Number of table locks removed from the lock queue |
| lock_table_locks | 4 | 0.000 | counter | Current number of table locks on tables |
| lock_row_lock_current_waits | 55 | 0.000 | status_counter | Number of row locks currently being waited for (innodb_row_lock_current_waits) |
| lock_row_lock_time | 1936523774 | 7548.123 | status_counter | Time spent in acquiring row locks, in milliseconds (innodb_row_lock_time) |
| lock_row_lock_time_max | 51969 | NULL | value | The maximum time to acquire a row lock, in milliseconds (innodb_row_lock_time_max) |
| lock_row_lock_waits | 1901237 | 7.411 | status_counter | Number of times a row lock had to be waited for (innodb_row_lock_waits) |
| lock_row_lock_time_avg | 1018 | NULL | value | The average time to acquire a row lock, in milliseconds (innodb_row_lock_time_avg) |
| buffer_pool_size | 6442450944 | NULL | value | Server buffer pool size (all buffer pools) in bytes |
| buffer_pool_reads | 370556494 | 1444.344 | status_counter | Number of reads directly from disk (innodb_buffer_pool_reads) |
| buffer_pool_read_requests | 48016336694 | 187156.603 | status_counter | Number of logical read requests (innodb_buffer_pool_read_requests) |
| buffer_pool_write_requests | 4273025649 | 16655.268 | status_counter | Number of write requests (innodb_buffer_pool_write_requests) |
| buffer_pool_read_ahead | 6461109 | 25.184 | status_counter | Number of pages read as read ahead (innodb_buffer_pool_read_ahead) |
| buffer_pool_pages_total | 393208 | NULL | value | Total buffer pool size in pages (innodb_buffer_pool_pages_total) |
| buffer_pool_pages_misc | 13 | NULL | value | Buffer pages for misc use such as row locks or the adaptive hash index (innodb_buffer_pool_pages_misc) |
| buffer_pool_pages_data | 387844 | NULL | value | Buffer pages containing data (innodb_buffer_pool_pages_data) |
| buffer_pool_bytes_data | 6354436096 | NULL | value | Buffer bytes containing data (innodb_buffer_pool_bytes_data) |
| buffer_pool_pages_dirty | 2646 | NULL | value | Buffer pages currently dirty (innodb_buffer_pool_pages_dirty) |
| buffer_pool_bytes_dirty | 43352064 | NULL | value | Buffer bytes currently dirty (innodb_buffer_pool_bytes_dirty) |
| buffer_pool_pages_free | 5351 | NULL | value | Buffer pages currently free (innodb_buffer_pool_pages_free) |
| buffer_pages_created | 15712031 | 61.242 | status_counter | Number of pages created (innodb_pages_created) |
| buffer_pages_written | 88472873 | 344.847 | status_counter | Number of pages written (innodb_pages_written) |
| buffer_pages_read | 389195724 | 1516.995 | status_counter | Number of pages read (innodb_pages_read) |
| buffer_data_reads | 6370189115392 | 24829527.611 | status_counter | Amount of data read in bytes (innodb_data_reads) |
| buffer_data_written | 3238687916032 | 12623658.353 | status_counter | Amount of data written in bytes (innodb_data_written) |
| buffer_flush_batch_scanned | 303528 | 0.000 | set_owner | Total pages scanned as part of flush batch |
| buffer_flush_batch_num_scan | 1066763 | 9.376 | set_member | Number of times buffer flush list flush is called |
| buffer_flush_batch_scanned_per_call | 0 | 0.000 | set_member | Pages scanned per flush batch scan |
| buffer_flush_batch_rescan | 89 | 0.001 | counter | Number of times rescan of flush list forced |
| buffer_flush_batch_total_pages | 634250 | 3.000 | set_owner | Total pages flushed as part of flush batch |
| buffer_flush_batches | 197859 | 1.739 | set_member | Number of flush batches |
| buffer_flush_batch_pages | 1 | 0.000 | set_member | Pages queued as a flush batch |
| buffer_flush_neighbor_total_pages | 6051959 | 0.000 | set_owner | Total neighbors flushed as part of neighbor flush |
| buffer_flush_neighbor | 34246946 | 301.011 | set_member | Number of times neighbors flushing is invoked |
| buffer_flush_neighbor_pages | 0 | 0.000 | set_member | Pages queued as a neighbor batch |
| buffer_flush_n_to_flush_requested | 0 | 0.000 | counter | Number of pages requested for flushing. |
| buffer_flush_avg_page_rate | 0 | 0.000 | counter | Average number of pages at which flushing is happening |
| buffer_flush_lsn_avg_rate | 16570 | 0.146 | counter | Average redo generation rate |
| buffer_flush_pct_for_lsn | 0 | 0.000 | counter | Percent of IO capacity used to avoid reusable redo space limit |
| buffer_flush_adaptive_total_pages | 631237 | 11.000 | set_owner | Total pages flushed as part of adaptive flushing |
| buffer_flush_adaptive | 56061 | 0.493 | set_member | Number of adaptive batches |
| buffer_flush_adaptive_pages | 1 | 0.000 | set_member | Pages queued as an adaptive batch |
| buffer_flush_background_total_pages | 39668544 | 420.000 | set_owner | Total pages flushed as part of background batches |
| buffer_flush_background | 94284 | 0.829 | set_member | Number of background batches |
| buffer_flush_background_pages | 26 | 0.000 | set_member | Pages queued as a background batch |
| buffer_LRU_batch_scanned | 5664563354 | 3153.000 | set_owner | Total pages scanned as part of LRU batch |
| buffer_LRU_batch_num_scan | 1796523 | 15.790 | set_member | Number of times LRU batch is called |
| buffer_LRU_batch_scanned_per_call | 2748 | 0.024 | set_member | Pages scanned per LRU batch call |
| buffer_LRU_batch_total_pages | 39665531 | 420.000 | set_owner | Total pages flushed as part of LRU batches |
| buffer_LRU_batches | 94281 | 0.829 | set_member | Number of LRU batches |
| buffer_LRU_batch_pages | 26 | 0.000 | set_member | Pages queued as an LRU batch |
| buffer_LRU_get_free_search | 378687232 | 3328.446 | counter | Number of searches performed for a clean page |
| os_data_reads | 389095626 | 1516.605 | status_counter | Number of reads initiated (innodb_data_reads) |
| os_data_writes | 93493818 | 364.417 | status_counter | Number of writes initiated (innodb_data_writes) |
| os_data_fsyncs | 7023487 | 27.376 | status_counter | Number of fsync() calls (innodb_data_fsyncs) |
| os_log_bytes_written | 339601504256 | 1323688.320 | status_counter | Bytes of log written (innodb_os_log_written) |
| os_log_fsyncs | 2181429 | 8.503 | status_counter | Number of fsync log writes (innodb_os_log_fsyncs) |
| os_log_pending_fsyncs | 0 | 0.000 | status_counter | Number of pending fsync write (innodb_os_log_pending_fsyncs) |
| os_log_pending_writes | 0 | 0.000 | status_counter | Number of pending log file writes (innodb_os_log_pending_writes) |
| trx_rw_commits | 12239165 | 107.575 | counter | Number of read-write transactions committed |
| trx_commits_insert_update | 12223766 | 107.440 | counter | Number of transactions committed with inserts and updates |
| trx_rollbacks | 2073 | 0.018 | counter | Number of transactions rolled back |
| trx_rollbacks_savepoint | 6860 | 0.060 | counter | Number of transactions rolled back to savepoint |
| trx_active_transactions | -10769 | -0.095 | counter | Number of active transactions |
| trx_rseg_history_len | 9469305 | NULL | value | Length of the TRX_RSEG_HISTORY list |
| trx_undo_slots_used | 81 | 0.001 | counter | Number of undo slots used |
| trx_undo_slots_cached | -16487 | -0.145 | counter | Number of undo slots cached |
| trx_rseg_current_size | 498385 | NULL | value | Current rollback segment size in pages |
| purge_del_mark_records | 2900462 | 25.493 | counter | Number of delete-marked rows purged |
| purge_upd_exist_or_extern_records | 2900334 | 25.492 | counter | Number of purges on updates of existing records and updates on delete marked record with externally stored field |
| purge_invoked | 15036193 | 132.160 | counter | Number of times purge was invoked |
| purge_undo_log_pages | 2901506 | 25.503 | counter | Number of undo log pages handled by the purge |
| log_checkpoints | 7880 | 0.069 | counter | Number of checkpoints |
| log_lsn_last_flush | 338495604612 | NULL | value | LSN of Last flush |
| log_lsn_last_checkpoint | 337905680629 | NULL | value | LSN at last checkpoint |
| log_lsn_current | 338495612626 | NULL | value | Current LSN value |
| log_lsn_checkpoint_age | 589931997 | 5185.167 | counter | Current LSN value minus LSN at last checkpoint |
| log_lsn_buf_pool_oldest | 337905680629 | NULL | value | The oldest modified block LSN in the buffer pool |
| log_max_modified_age_async | 6087493925 | NULL | value | Maximum LSN difference; when exceeded, start asynchronous preflush |
| log_max_modified_age_sync | 6522314920 | NULL | value | Maximum LSN difference; when exceeded, start synchronous preflush |
| log_pending_checkpoint_writes | 0 | 0.000 | counter | Pending checkpoints |
| log_num_log_io | 1618018 | 14.221 | counter | Number of log I/Os |
| log_write_requests | 733043164 | 2857.233 | status_counter | Number of log write requests (innodb_log_write_requests) |
| log_writes | 2160056 | 8.419 | status_counter | Number of log writes (innodb_log_writes) |
| index_page_splits | 725862 | 6.380 | counter | Number of index page splits |
| index_page_merge_attempts | 6318656 | 55.537 | counter | Number of index page merge attempts |
| index_page_merge_successful | 600659 | 5.279 | counter | Number of successful index page merges |
| index_page_reorg_attempts | 324359 | 2.851 | counter | Number of index page reorganization attempts |
| index_page_reorg_successful | 324359 | 2.851 | counter | Number of successful index page reorganizations |
| index_page_discards | 850 | 0.007 | counter | Number of index pages discarded |
| adaptive_hash_searches | 996683138 | 3884.841 | status_counter | Number of successful searches using Adaptive Hash Index |
| adaptive_hash_pages_added | 6787699 | 59.660 | counter | Number of index pages on which the Adaptive Hash Index is built |
| adaptive_hash_pages_removed | 6165530 | 54.192 | counter | Number of index pages whose corresponding Adaptive Hash Index entries were removed |
| adaptive_hash_rows_added | 971809814 | 8541.656 | counter | Number of Adaptive Hash Index rows added |
| adaptive_hash_rows_removed | 450086501 | 3956.005 | counter | Number of Adaptive Hash Index rows removed |
| adaptive_hash_rows_deleted_no_hash_entry | 59 | 0.001 | counter | Number of rows deleted that did not have corresponding Adaptive Hash Index entries |
| file_num_open_files | 4 | NULL | value | Number of files currently open (innodb_num_open_files) |
| ibuf_merges_insert | 281620042 | 1097.690 | status_counter | Number of inserted records merged by change buffering |
| ibuf_merges_delete_mark | 21569667 | 84.074 | status_counter | Number of deleted records merged by change buffering |
| ibuf_merges_delete | 3332320 | 12.989 | status_counter | Number of purge records merged by change buffering |
| ibuf_merges | 13938295 | 54.328 | status_counter | Number of change buffer merges |
| ibuf_size | 17478 | 0.068 | status_counter | Change buffer size in pages |
| innodb_master_thread_sleeps | 82021 | 0.721 | counter | Number of times (seconds) master thread sleeps |
| innodb_activity_count | 167679087 | 653.574 | status_counter | Current server activity count |
| innodb_master_active_loops | 82021 | 0.721 | counter | Number of times master thread performs its tasks when server is active |
| innodb_background_drop_table_usec | 58451 | 0.514 | counter | Time (in microseconds) spent to process drop table list |
| innodb_ibuf_merge_usec | 2815277852 | 24744.692 | counter | Time (in microseconds) spent to process change buffer merge |
| innodb_log_flush_usec | 2843869697 | 24995.998 | counter | Time (in microseconds) spent to flush log records |
| innodb_dict_lru_usec | 25556359671 | 224625.875 | counter | Time (in microseconds) spent to process DICT LRU list |
| innodb_checkpoint_usec | 417383675 | 3668.565 | counter | Time (in microseconds) spent by master thread to do checkpoint |
| innodb_dblwr_writes | 1991948 | 7.764 | status_counter | Number of doublewrite operations that have been performed (innodb_dblwr_writes) |
| innodb_dblwr_pages_written | 88472873 | 344.847 | status_counter | Number of pages that have been written for doublewrite operations (innodb_dblwr_pages_written) |
| innodb_page_size | 16384 | NULL | value | InnoDB page size in bytes (innodb_page_size) |
| innodb_rwlock_s_spin_waits | 48060144 | 187.327 | status_counter | Number of rwlock spin waits due to shared latch request |
| innodb_rwlock_x_spin_waits | 5197488 | 20.259 | status_counter | Number of rwlock spin waits due to exclusive latch request |
| innodb_rwlock_s_spin_rounds | 1476389094 | 5754.624 | status_counter | Number of rwlock spin loop rounds due to shared latch request |
| innodb_rwlock_x_spin_rounds | 434074879 | 1691.924 | status_counter | Number of rwlock spin loop rounds due to exclusive latch request |
| innodb_rwlock_s_os_waits | 42616148 | 166.108 | status_counter | Number of OS waits due to shared latch request |
| innodb_rwlock_x_os_waits | 11139632 | 43.420 | status_counter | Number of OS waits due to exclusive latch request |
| dml_reads | 5382057652 | 20978.019 | status_counter | Number of rows read |
| dml_inserts | 1012583832 | 3946.818 | status_counter | Number of rows inserted |
| dml_deletes | 12583838 | 49.049 | status_counter | Number of rows deleted |
| dml_updates | 25177697 | 98.137 | status_counter | Number of rows updated |
+------+-------+---------+------+---------+
133 rows in set (0.00 sec)

To learn more about the INNODB_METRICS table, check out the MySQL Manual as well as a Getting Started blog post by Oracle.

The post MySQL’s INNODB_METRICS table: How much is the overhead? appeared first on MySQL Performance Blog.

Percona Toolkit 2.2.12 is now available

Mo, 2014-11-17 18:44

Percona is pleased to announce the availability of Percona Toolkit 2.2.12.  Released on November 14, Percona Toolkit is a collection of advanced command-line tools to perform a variety of MySQL server and system tasks that are too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release is the current GA (Generally Available) stable release in the 2.2 series. Downloads are available here and from the Percona Software Repositories.

New Features:

  • pt-stalk now gathers dmesg output from up to 60 seconds before the triggering event.

Bugs Fixed:

  • Fixed bug 1376561: pt-archiver was not able to archive all the rows when a table had a hash partition. Fixed by implementing support for tables which have primary or unique indexes.
  • Fixed bug 1217466: pt-table-checksum would refuse to run on Percona XtraDB Cluster if server_id was the same on all nodes. Fixed by using the wsrep_node_incoming_address as a unique identifier for cluster nodes, instead of relying on server_id.
  • Fixed bug 1269695: pt-online-schema-change documentation now contains more information about the limitation that prevents it from running ALTER TABLE on a table which has only a non-unique index.
  • Fixed bug 1328686: Running pt-heartbeat with the --check-read-only option would cause an error on a server with the read_only option enabled. The tool now waits for the server's read_only status to be disabled before starting to run.
  • Fixed bug 1373937: pt-table-checksum now supports none as a valid --recursion-method when used with Percona XtraDB Cluster.
  • Fixed bug 1377888: The documentation stated that pt-query-digest is able to parse a raw binary log file, while it can only parse files that have first been decoded with the mysqlbinlog tool. Fixed by improving the documentation, adding a check for binary files, and providing a relevant error message.

Details of the release can be found in the release notes and the 2.2.12 milestone at Launchpad. Bugs can be reported on the Percona Toolkit launchpad bug tracker.

The post Percona Toolkit 2.2.12 is now available appeared first on MySQL Performance Blog.

Typical misconceptions on Galera Replication for MySQL

Mo, 2014-11-17 15:43

Even if a Galera node looks like a regular MySQL server, the underlying replication mechanism is very different. This implies some changes in the way you have to configure the Galera nodes. Here are some of the most common misconceptions about Galera when using Percona XtraDB Cluster.

Q: Why should I enable binary logging as it is not needed by Galera replication?
Unlike for regular asynchronous MySQL replication, it is true that you don’t need to enable binary logging to use Galera replication. However what if someone runs an accidental DROP TABLE?

In this case, the statement will be replicated immediately on all nodes. Then your main option to recover lost data is to use a backup. But if binary logging is not enabled, all changes to the table after the backup will be lost.

What if you have an asynchronous slave that is intentionally delayed? This is another good option to quickly recover the lost data, but to be able to set up an async slave, you will also have to enable binary logging!
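
As a side note, MySQL 5.6 makes a delayed slave easy to set up with its built-in delayed replication; a minimal sketch, with a hypothetical host name:

CHANGE MASTER TO
    MASTER_HOST  = 'node1.example.com',  -- hypothetical cluster node to replicate from
    MASTER_DELAY = 3600;                 -- stay one hour behind the cluster
START SLAVE;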

So don’t forget to add on all your nodes:

log_bin
log_slave_updates
binlog_format = ROW

Q: If I set innodb_flush_log_at_trx_commit = 2, I may lose data in some cases, right?
For a regular MySQL master, it is recommended to set innodb_flush_log_at_trx_commit = 1 because it is the only way to make sure that every committed transaction is stored permanently on disk. The main drawback is that it can slow down writes a lot because it involves one fsync at each commit.

With Galera, the difference is that commit is synchronous: this means that when a transaction is committed on node #1, it has already been replicated to all the other nodes (though it has not necessarily been executed on the remote nodes yet).

As Galera implements cluster-wide durability, there is no need to have single server durability and you can safely use innodb_flush_log_at_trx_commit = 2 on all nodes.

Actually this is not totally accurate… For instance, if all nodes lose power at the same time, you may lose some transactions. The likelihood of such a failure is related to how the nodes are hosted: with each node in a separate datacenter, this is very unlikely. But with 3 nodes being VMs on a single physical host, it could well happen from time to time.
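
If your nodes do not share a single point of failure, the relaxed setting is a reasonable trade-off; a my.cnf sketch with the reasoning spelled out:

# Cluster-wide durability makes per-node durability less critical: a crashing
# node can lose up to ~1s of its own redo, but committed data survives on peers
innodb_flush_log_at_trx_commit = 2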

Q: The network RTT between my servers is 100ms. I know that each commit takes at least as much time as a network RTT so can I only hope to run 10 writes/s?

This needs some clarification: commit is synchronous because the full transaction is replicated to all nodes when it is committed. However, you pay the same price whether the transaction has a single statement or several statements.

So if all your writes are auto-committed transactions, each write will trigger a commit that will need at least one RTT to complete. If RTT is 100ms, that means 10 writes/s.

But if your transactions have 10 writes, you will only need a commit every 10 writes: with 10 commits/s, you can now run 100 writes/s.

And finally several threads can commit at the same time, increasing the write throughput. With 10 concurrent threads executing transactions with 10 statements, you get 1000 writes/s.

Of course, this is only theory. In the real world, you will probably not get so perfectly aligned numbers. You can look at this post (Comparing Percona XtraDB Cluster with Semi-Sync replication Cross-WAN) to see real numbers with 1 thread and 32 threads.

Conclusion: network latency is a limiting factor for write throughput, that’s true. But it may not be as bad as you might think. And remember Callaghan’s law: “In a Galera cluster a given row can’t be modified more than once per RTT”.

Q: I want to write on all nodes to get write scalability. Is it a good idea?

Let me start by stating that Galera cannot be a true solution to scale writes: the reason is simply that all writes need to be applied on all nodes.

But Galera brings a limited form of write scalability when writing concurrently to several nodes because:

  • Write sets can be applied in parallel on the remote nodes.
  • Galera is using row-based replication, so applying the replicated events can be faster than executing the original write.

There is a caveat however when writing on several nodes: because of optimistic locking, concurrent transactions on different nodes can lead to write conflicts. In this case Galera will roll back one of the transactions, and it is up to the application to retry executing the transaction.
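
For autocommit statements, Galera can retry on your behalf before returning the deadlock error to the application; a sketch using the wsrep_retry_autocommit setting (the value 4 is an arbitrary example):

# Retry an autocommit statement up to 4 times on a certification conflict
# before surfacing the deadlock error to the application
wsrep_retry_autocommit = 4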

Conclusion
Galera Replication is a nice technology that can help solve challenges around high availability. But a basic understanding of how it works is useful, as it can help you avoid frustration or unreasonable expectations.

The post Typical misconceptions on Galera Replication for MySQL appeared first on MySQL Performance Blog.

RDS for Aurora unveiled at AWS re:Invent

Fr, 2014-11-14 16:25

One of the big announcements at the Amazon Web Services re:Invent 2014 conference this week was the unveiling of Aurora. The result of years of internal work, Aurora, currently in preview, is a MySQL 5.6-compatible option that “combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases” on the AWS infrastructure. The Aurora database engine will be made available through the Amazon RDS for Aurora service. This new database option is another example of the vibrant innovation coming from the MySQL ecosystem and the key role that relational databases play in applications of today and tomorrow.

Integration with other AWS components

Since the Aurora relational database engine will only be made available as a service via RDS, Amazon Web Services can do some interesting new things including:

  • Fully leveraging available compute, memory and networking resources
  • Automatically growing the size of database volumes as storage needs grow
  • Tighter integration with AWS monitoring and restore capabilities
  • Moving from RDS for MySQL to RDS for Aurora will be enabled by a migration tool

Aurora performance

In the keynote, Amazon Web Services shared that Aurora delivers up to five times the performance of stock MySQL 5.6 without requiring any changes to most MySQL applications. Their FAQ reveals some insights into their testing methodology: “Amazon Aurora delivers over 500,000 SELECTs/sec and 100,000 updates/sec, five times higher than MySQL running the same benchmark on the same hardware.”

More RDS for Aurora details coming soon

We’re very excited about Amazon pushing the boundaries in relational databases for the enterprise and look forward to seeing what we, as a MySQL community, can learn from their new cloud-centric and cross-service approach.

Like many, Percona is looking forward to trying out Aurora. We’ll definitely do some benchmark testing to compare it with additional database options such as a tuned Percona Server 5.6. Additionally, we’ll share the expert advice and ideas for improvement that we uncover in our testing of RDS for Aurora with the team at AWS so the service becomes better for all.

The post RDS for Aurora unveiled at AWS re:Invent appeared first on MySQL Performance Blog.

Q&A: Percona XtraDB Cluster as a MySQL HA solution for OpenStack

Fr, 2014-11-14 14:00

Thanks to all who attended my Nov. 12 webinar titled, “Percona XtraDB Cluster as a MySQL HA Solution for OpenStack.” I had several questions which I covered at the end and a few that I didn’t. You can view the entire webinar and download the slides here.

Q: Is the read/write speed reduced in Galera compared to normal MySQL?

For reads, it’s the same (unless you use the wsrep_sync_wait feature, which used to be called causal reads).

For writes, the cost of replication (~1 RTT to the worst node), plus the cost of certification, will be added to each transaction commit.  This will have a non-trivial impact on your write performance, for sure.  However, I believe most OpenStack meta store use cases should not suffer much from this performance penalty.
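
If you do need read-your-writes semantics for a particular connection, it can be enabled per session; a sketch, assuming a PXC 5.6 node where the variable is named wsrep_sync_wait:

-- Make reads on this connection wait until all earlier cluster writes
-- have been applied locally (at the cost of extra latency)
SET SESSION wsrep_sync_wait = 1;  -- formerly wsrep_causal_reads = ON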

Q: Do state transfers affect ongoing transactions on the nodes?

Mostly, no.  The joining node will queue ongoing cluster replication while receiving its state transfer and use that to do its final ‘catchup’.  The node donating the state transfer may get locked briefly during a full SST, which could temporarily block writes on that node.  If you use the built-in clustercheck (or check the same things it checks), you can avoid this by diverting traffic away from a donor.
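
For example, with HAProxy in front of the cluster and clustercheck exposed over HTTP (typically via xinetd on port 9200), a health check along these lines keeps traffic away from a donor; the backend name and addresses below are hypothetical:

backend galera-nodes
    option httpchk
    # clustercheck returns HTTP 200 only when a node is Synced, so a donor
    # or desynced node is automatically pulled out of rotation
    server node1 10.0.0.1:3306 check port 9200
    server node2 10.0.0.2:3306 check port 9200
    server node3 10.0.0.3:3306 check port 9200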

Q: Perhaps not the correct webinar for this question, but I was also expecting to hear something about using PXC in combination with OpenStack Trove. If you’ve got time, could you tell something about that?

Trove has not supported the concept of more than a single SQL target.  My understanding is that a recent improvement here for MongoDB setups may pave the way for more robust Trove instances backed by Percona XtraDB Cluster.

Q: For load balancing using the Java MySQL driver, would you suggest HAProxy or the load-balancing connection in the Java driver? Also, how do things work in the case of persistent connections and connection pools like DBCP?

Each node in a PXC cluster is just a mysql server that can handle normal mysql connections.  Obviously if a node fails, it’s easy to detect that you should probably not keep using that node, but in Percona XtraDB Cluster you need to watch out for things like cluster partitions where nodes lose quorum and stop taking queries (but still allow connections).  I believe it’s possible to configure advanced connection pooling setups to properly health check the nodes (unless the necessary features are not in the pool implementation), but I don’t have a reference architecture to point you to.

Q: Are there any manual solutions to avoid deadlocks within a high write context to force a query to execute on all nodes?

Yes, write to a single node, at least for the dataset that has high write volume.

Remember what I said in the webinar:  you can only update a given row once per RTT, so there’s an upper cap of throughput on the smallest locking granularity in InnoDB (i.e., a single row).

This manifests itself in two possible ways:

1) In a multi-node writing cluster by triggering deadlocks on conflicts.  Only approximately 1 transaction modifying a given row per RTT would NOT receive a deadlock.

2) In a single-node writing cluster by experiencing longer lock waits.  Transaction times (and subsequently lock times) are extended by the replication and certification time, so other competing transactions will lock wait until the blocking transaction commits. There are no replication conflict deadlocks in this case, but the net effect is exactly the same:  only 1 transaction per row per RTT.

Galera offers us high data redundancy at the cost of some throughput.  If that doesn’t work for your workload, then asynchronous replication (or perhaps semi-sync in 5.7) will work better for you.

Note that there is wsrep_retry_autocommit.  However, this only works for autocommitted transactions.  If your write volume is so high that you need to increase this a lot to get the conflict rate down, you are likely sacrificing a lot of CPU power (rollbacks are expensive in InnoDB!) if a single transaction needs multiple retries to commit.  This still doesn’t get around the law:  1 trx per row per RTT at best.

That was all of the questions. Be sure to check out our next OpenStack webinar on December 10 by my colleague Peter Boros. It’s titled “MySQL and OpenStack Deep Dive” and you can register here (and like all of our webinars, it’s free). I also invite you to visit our OpenStack Live 2015 page and learn more about that conference in Santa Clara, California this coming April 13-14. The Call for Papers ends Nov. 16 and speakers get a full-access conference pass. I hope to see you there!

The post Q&A: Percona XtraDB Cluster as a MySQL HA solution for OpenStack appeared first on MySQL Performance Blog.

Optimizing MySQL for Zabbix

Fr, 2014-11-14 11:00


This blog post was inspired by my visit to the annual Zabbix Conference in Riga, Latvia this year, where I gave a couple of talks on MySQL and beyond.

It was a two-day single-track event with some 200 participants, a number of interesting talks on Zabbix (and related technologies) and really well-organized evening activities. I was amazed at how well organized the event was and hope to be invited to speak there next year as well. (Just in case you’re not sure what Zabbix is, it is an enterprise-class open source distributed monitoring solution for networks and applications.)

I must secretly confess, it was also the first conference where I honestly enjoyed being on stage and connecting with the audience – I was even looking forward to it rather than being scared as hell (which is what typically happens to me)! I guess it was all about the positive atmosphere, so big thanks to all the speakers and attendees for that. It meant a lot to me.

If I had to mention one negative vibe I heard from attendees, it would be that there was not enough deeply technical content. However, I think this is slightly biased, because the people I talked to most were the ones who enjoyed my technical talks, so naturally they were craving more.

And now, without further ado, let me get to the essence of this blog post.

Zabbix and MySQL

The very first thing I did when I arrived at the conference was to approach people who I knew use Zabbix on a large scale and try to figure out the biggest challenges they face. Apparently, in all of the cases, it was MySQL – and more specifically, MySQL disk IO.

With that in mind, I would like to suggest a few optimizations that will help your MySQL get the best out of your disks (and consequently will help your Zabbix get the best out of MySQL) and the available hardware resources in general.

SSD is a game changer

“Will MySQL run better on SSDs?” I’ve been hearing this question over and over again, both publicly and privately.

I can tell you without a shadow of a doubt: if IO is currently your bottleneck – either because some queries take a long time to run and you see diskstats reporting 100-250 reads per second until the query completes (latency), or because you are overloading the disks with requests and wait time suffers (throughput) – SSDs will definitely help, and not just by a little, but by a lot!

Consider this: the fastest spinning disk (15k rpm) can do 250 random IO operations per second tops (at this point it is limited by physics), and a single query will only ever read from one disk even if you have a RAID10 made of 16 disks. So if you need to read 15,000 data points to display a graph, reading those data points from disk will take 60s.

An enterprise-class SSD, on the other hand, can do 15,000 or even more 16k random reads per second with a single thread (16k is the size of an InnoDB block). And as you increase the concurrency, it only gets better! So that means that the query in the previous example would take 1s instead of 60s, which is a significant difference. Plus you can run more requests on the same SSD at the same time and the total number of IO operations will only increase, while a single spinning disk would have to share the available 250 IO operations between multiple requests.
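
The arithmetic behind those numbers, as a quick sanity check in the mysql client:

mysql> select 15000/250 as hdd_seconds, 15000/15000 as ssd_seconds;
-- 15,000 random reads at 250 IOPS take 60s; at 15,000 IOPS they take 1s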

The only area where SSDs don’t beat spinning disks (yet) is sequential operation, especially single-threaded sequential writes. If that is your typical workload (which might be the case if you’re mostly collecting data and rarely if ever reading it), then you may want to consider other strategies.

MySQL configuration

Besides improving your disk IO subsystem, there are ways to reduce the pressure on IO, and I’m going to cover a few my.cnf variables that will help you with that (and with other things such as internal contention).

Note, most of the tunables are common for any typical high-performance MySQL setup, though some are explicitly suited for Zabbix because you can relax a few parameters for great effect at the price of, in the worst case, losing up to 1s worth of collected data – which, from discussions during the conference, didn’t seem like a big deal to anyone.

- innodb_buffer_pool_size – if you have a dedicated MySQL server, set it as high as you can (the ceiling would be 75% of total available memory). Otherwise, you should balance it with other processes on the server, but if it’s only the Zabbix server, I would still leave it very high, close to 75% of total RAM.

- innodb_buffer_pool_instances – on MySQL 5.5, set it to 4, on MySQL 5.6 – 8 or even 16.

- innodb_flush_log_at_trx_commit = 0 – this is where you compromise durability for significantly improved write throughput, especially if you don’t own a disk subsystem with a non-volatile cache. Basically, the loss you may incur is up to 1s worth of writes during a MySQL or server crash. A lot of websites actually run with that (a lot of websites still run on MyISAM!!!), and I’m quite sure it’s not an issue for a Zabbix setup.

- innodb_flush_method = O_DIRECT – if you are running Linux, just leave it set to that.

- innodb_log_file_size – you want these transaction logs (there are two of them by default) to hold 1 to 2 hours worth of writes. To determine that, you can probably have a look at the Zabbix graphs for your MySQL server, but you can also run the following from the mysql command line:

mysql> pager grep seq; show engine innodb status\G select sleep(3600); show engine innodb status\G
PAGER set to 'grep seq'
Log sequence number 8373513970951
...
Log sequence number 8373683996767

The difference between the two numbers is how many bytes InnoDB has written during the last hour. So on the server above, I would set innodb_log_file_size=128M and would end up with 256M of log file space, allowing me to store more than 1h worth of writes in the transaction logs. (See this on changing the log file size if you run MySQL 5.5 or earlier.)
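
The subtraction from the output above, done right in the client:

mysql> select (8373683996767 - 8373513970951) / 1024 / 1024 as redo_mb_per_hour;
-- ~162MB of redo per hour, so two 128M log files (256M total) hold
-- well over an hour's worth of writes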

- innodb_read_io_threads, innodb_write_io_threads – don’t overthink these; they are not as important as they may seem, especially if you are using Async IO (you can check that by running “show global variables like 'innodb_use_native_aio'” in the mysql CLI). On MySQL 5.5 and 5.6 you generally want to be using Async IO (AIO), so check the mysql error log to understand why if you are not. That said, if you are not using AIO and are not going to, just set these values to 8 and leave them there.

- innodb_old_blocks_time = 1000 – this will help you prevent buffer pool pollution due to occasional scans. This is now default in MySQL 5.6 (On 5.5, it needs to be set explicitly).

- innodb_io_capacity – set this to as many write iops as your disk subsystem can handle. For SSDs this should be at least a few thousand (2000 could be a good start), while for rotating disks somewhat lower values – 500-800, depending on the number of underlying disks – will do. It's best to benchmark the disks or do the math for actual numbers, but the default of 200 is definitely too low for most systems nowadays.

- sync_binlog=0 – this is the default setting, but just in case it’s above 0, turn it off, unless you run something else besides Zabbix. The price of not synchronising binary logs is that in the case of a master crash, replicas can get out of sync. But if you are constantly hitting an IO bottleneck due to binary log synchronisation just because you want to avoid the hassle of re-syncing the slave once every five years when the master crashes, you should reconsider this option.

- query_cache_size=0, query_cache_type=0 – that will disable the query cache. Most of the time you don’t want the query cache. And if it’s not disabled by these settings, queries (especially small ones) will likely suffer due to query cache mutex contention.

- sort_buffer_size, join_buffer_size, read_rnd_buffer_size – if you ever configured these variables, cancel those changes (just remove them or comment them out). I find these are the top three mistuned variables on most customer servers, while in many cases it’s best if they are not touched at all. Just leave them at their defaults and you’re set.

- tmpdir – sometimes it’s a good idea to point tmpdir to /dev/shm so that on-disk temporary tables are actually written to memory, but there’s one important caveat starting with MySQL 5.5: if you do that, it disables AIO across the board, because tmpfs does not support AIO. So I would monitor the activity on the current tmpdir (usually /tmp) and only switch it to /dev/shm if I see it being an issue.
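
Pulling the suggestions above together, a my.cnf fragment might look like the sketch below; the sizes assume a hypothetical dedicated 32GB server with SSD storage and must be adapted to your hardware:

[mysqld]
innodb_buffer_pool_size        = 24G       # ~75% of RAM on a dedicated 32GB box (assumption)
innodb_buffer_pool_instances   = 8         # MySQL 5.6; use 4 on 5.5
innodb_flush_log_at_trx_commit = 0         # may lose up to ~1s of writes on a crash
innodb_flush_method            = O_DIRECT  # Linux
innodb_log_file_size           = 128M      # sized for 1-2h of writes, see above
innodb_old_blocks_time         = 1000      # default on 5.6; set explicitly on 5.5
innodb_io_capacity             = 2000      # SSD; use ~500-800 for rotating disks
sync_binlog                    = 0
query_cache_size               = 0
query_cache_type               = 0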

MySQL Partitioning

I know that Zabbix now supports partitions for the purpose of easier data pruning; however, I think there are some extra benefits you could get from partitions – well, actually subpartitions, if you are already partitioning by date.

The KPI for Zabbix that you could hear over and over again is the “new values per second” number that you can find in the Zabbix status. Basically, the higher the value (given you have enough values to monitor), the better the throughput of your Zabbix. And this is where a lot of people are hitting the Zabbix limits – MySQL can’t insert enough new values per second.

Besides the optimizations I have already mentioned above (they should greatly increase your write throughput!), I would encourage you to try out partitions (if you’re not using partitions already) or subpartitions (if you are) BY HASH as we found that partitioning in some cases can increase the throughput of InnoDB.

I did not test it with Zabbix specifically, and as it’s not supported by Zabbix out of the box, you would have to hack it to make it work. But if you’ve done all the changes above and you still can’t get enough new values per second (AND it is not the hardware that is limiting you), try partitioning or subpartitioning the key tables by hash.
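
As an illustration only – Zabbix does not support this out of the box, and the DDL below is a hypothetical sketch against a Zabbix-style history table keyed by itemid and clock – hash subpartitioning could look like this:

ALTER TABLE history
    PARTITION BY RANGE (clock)      -- range partitions by date for easy pruning
    SUBPARTITION BY HASH (itemid)   -- hash subpartitions to spread insert pressure
    SUBPARTITIONS 8 (
        PARTITION p2014_11 VALUES LESS THAN (1417392000),  -- 2014-12-01 00:00:00 UTC
        PARTITION p2014_12 VALUES LESS THAN (1420070400),  -- 2015-01-01 00:00:00 UTC
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );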

If this sounds interesting but you’re not sure where to start, feel free to contact us and we’ll work with you to make it work.

On MySQL High Availability

There are options to make MySQL highly available, even though many believe that’s not the case. We’ve been writing a lot about it on our blog, so I will not paraphrase or repeat; instead I would like to point you to a few valuable resources on that topic:

 

Percona Server, Percona XtraDB Cluster, Percona Toolkit – it’s all FREE!

I’m not really sure why, but many people I talked to at the conference thought that all of the Percona software either needs to be bought or has some enterprise features that are not available unless you buy a license.

The truth is that neither is true. All Percona software is completely free of charge. Feel free to download it from our website or through our repositories and use it as you please.

See you at the Zabbix conference next year!

The post Optimizing MySQL for Zabbix appeared first on MySQL Performance Blog.

Percona Live London 2014 Wrap Up

Do, 2014-11-13 14:16

The 2014 edition of Percona Live London brought together attendees from 30 countries to hear insightful talks from leaders in the MySQL community. The conference kicked off on Monday with a full day of tutorials followed by the very popular Community Dinner featuring a double decker bus shuttle from the conference to the event.

Tuesday started with keynote talks by representatives from MySQL, VMware, HGST, Codership, and Percona. I particularly enjoyed the talks by Tomas Ulin of MySQL (which highlighted the upcoming MySQL 5.7 release) and Robert Hodges of VMware (which focused on the evolution of MySQL). The remainder of the day was filled by six time slots of breakout sessions (30 sessions in all) broken into 10 tracks. The day wrapped up with the always popular Community Networking Reception. Attesting to the quality of the conference, 4 out of 5 respondents to our post-conference survey indicated they are likely to attend the conference again in 2015.

The session slides are available by visiting the Percona Live London 2014 conference website (look for the “Session Slides” button in the right hand column). Slides are added as they come in from the speakers so please check back if the slides are not yet available for a specific talk that interests you.

Special thanks goes out to the Percona Live London 2014 Conference Committee which put together such a great program:

  • Cedric Peintre of Dailymotion
  • David Busby of Percona
  • Colin Charles of MariaDB
  • Luis Motta Campos of the eBay Classifieds Group
  • Nicolai Plum of Booking.com
  • Morgan Tocker of Oracle
  • Art van Scheppingen of Spil Games

Percona Live London 2014 Attendee Survey

This year we collaborated with ComputerworldUK to run a short survey at the conference, which should appear in that publication in the near future. We had 64 responses, all of them from MySQL professionals who attended the conference. The results were interesting:

Do you agree with the statement that “Oracle has been a good steward of MySQL over the past twelve months”?
YES = 81%
NO = 19%

Are you currently running a production OpenStack environment in your organization?
YES = 17%
NO = 83%

Have you evaluated OpenStack within the past twelve months?
YES = 25%
NO = 75%

Do you plan to evaluate OpenStack in the next twelve months?
YES = 48%
NO = 52%

Are you currently using an AWS product to run MySQL in the Cloud?
YES = 28%
NO = 72%

Are you more likely to switch to a non-MySQL open source database now than you were twelve months ago?
YES = 35%
NO = 65%

The sentiment about Oracle’s stewardship of MySQL compares favorably with the comments by our own Peter Zaitsev in a recent ZDNet article titled “MySQL: Why the open source database is better off under Oracle“.

Percona Live MySQL Conference and OpenStack Live Conference

The ratings related to OpenStack mirror our experience with the strong growth in interest in that technology. In response, we are launching the inaugural OpenStack Live 2015 conference in Silicon Valley which will focus on making attendees more successful with OpenStack with a particular emphasis on the role of MySQL and Trove. The event will be April 13-14, 2015 at the Santa Clara Convention Center. The call for proposals closes on November 16, 2014.

Our next Percona Live MySQL Conference and Expo is April 13-16, 2015 in Silicon Valley. Join us for the largest MySQL conference in the world – last year’s event had over 1,100 registered attendees from 40 countries who enjoyed a full four days of tutorials, keynotes, breakout sessions, networking events, BOFs, and more. The call for speaking proposals closes November 16, 2014 and Early Bird registration rates are still available so check out the conference website now for full details.

Thanks to everyone who helped make Percona Live London 2014 a great success. I look forward to the Percona Live MySQL Conference and the OpenStack Live Conference next April in Silicon Valley!

The post Percona Live London 2014 Wrap Up appeared first on MySQL Performance Blog.

Log rotate and the (deleted) MySQL log file mystery

Mi, 2014-11-12 15:41
Did your logging stop working after you set up logrotate? Then this post might be for you.

Archive your log files!

Some time ago, Peter Boros wrote about Rotating MySQL Slow Logs safely, explaining the steps of a “best practice” log rotate/archive. This post will add more info about the topic.

When running logrotate for MySQL (after properly setting up the /etc/logrotate.d/mysql conf file) from anacron, there’s a situation that you might potentially face if the user and password used to execute the “flush logs” command are stored in, for example, the /root/.my.cnf file.

The situation:

You might find out that you have a new MySQL log file ready to receive data, but nothing is being written to it.

Why did this happen?

The logrotate script is executed, but the postrotate script fails to successfully flush the logs. If this happened to you, you might think, “I’ve lost my slow log file!” The good news: You didn’t lose it. What just happened is that your MySQL log file is no longer visible from the filesystem's perspective, but the file still exists and is still receiving data.

So where is it? How can I find it again?

Through the file descriptor. If your mysqld is still running, you can find your log under the /proc/[pid of mysqld process]/fd path:

[root@hostname]# cd /proc/$(/sbin/pidof mysqld)/fd
[root@hostname fd]# ls -lh | grep deleted
lrwx------ 1 root root 64 Oct 21 11:39 131 -> /tmp/MLQKbznR (deleted)
lrwx------ 1 root root 64 Oct 21 11:39 26 -> /tmp/ib95UPJ8 (deleted)
lrwx------ 1 root root 64 Oct 21 11:39 5 -> /tmp/ib9nYywT (deleted)
lrwx------ 1 root root 64 Oct 21 11:39 501 -> /var/log/mysql/log-slow-queries.log.1 (deleted)

And how big is it? lsof can give us the answer with the file descriptor number, which for this example is 501:

[root@hostname fd]# /usr/sbin/lsof -p $(/sbin/pidof mysqld) -ad 501
COMMAND  PID  USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
mysqld  2813 mysql  501u   REG  253,0 976746174 70516762 /var/log/mysql/log-slow-queries.log.1 (deleted)

The output of lsof tells us that this file's size is 976746174 bytes, which is approximately 931MB.
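
The conversion, for the record:

[root@hostname fd]# echo $((976746174 / 1024 / 1024))MB
931MB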

Can I recover the file contents?

Yes, you can. You just need to use the “cat” command and know the file descriptor number, which in this case is 501:

cat /proc/$(/sbin/pidof mysqld)/fd/501 > /path/to/new/logfile.log

Remember that once you execute a successful “flush logs” command on the MySQL client, the old contents will disappear, so do this prior to any further log rotation.

How did this happen?

Let’s examine the logrotate script:

/var/log/mysql/log-slow-queries.log {
    create 600 mysql mysql
    daily
    rotate 3
    missingok
    compress
    sharedscripts
    postrotate
        if test -x /usr/bin/mysqladmin &&
           /usr/bin/mysqladmin ping &>/dev/null
        then
            /usr/bin/mysql -e 'select @@global.long_query_time into @lqt_save; set global long_query_time=2000; set global slow_query_log = 0; select sleep(2); FLUSH LOGS; select sleep(2); set global long_query_time=@lqt_save; set global slow_query_log = 1;' > /var/log/mysqladmin.flush-logs 2>&1
        fi
    endscript
}

Everything seems okay, except for one thing: when executed from cron, the HOME environment variable will be blank. Meaning: /usr/bin/mysql won’t be able to find the file with the access credentials (user and password) and thus cannot execute the “flush logs” command.
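
You can reproduce this without waiting for cron; a quick check, assuming the credentials live in /root/.my.cnf:

# Simulate cron's empty environment: with no HOME, the client cannot find
# /root/.my.cnf, so the command fails with an access-denied error
env -i /usr/bin/mysql -e 'FLUSH LOGS'
# With HOME set, the credentials file is found and the flush succeeds
env HOME=/root /usr/bin/mysql -e 'FLUSH LOGS'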

What is the solution?

Add the HOME variable to the postrotate line: env HOME=/root/

env HOME=/root/ /usr/bin/mysql -e 'select @@global.long_query_time into @lqt_save; set global long_query_time=2000; set global slow_query_log = 0; select sleep(2); FLUSH LOGS; select sleep(2); set global long_query_time=@lqt_save; set global slow_query_log = 1;'  > /var/log/mysqladmin.flush-logs 2>&1

Can I get an alert if this happens to me?

Yes! With the Percona Nagios Plugin pmp-check-mysql-deleted-files. The Percona Nagios Plugin, which like all Percona software is free, looks at the files that the mysqld process has open and warns if any files that shouldn’t be deleted have been. For example: a slow MySQL log file that has been deleted by a poorly written logrotate script. (Download the Percona Nagios Plugin here)

In conclusion: Don’t fall into a situation where you suddenly realize, to your horror, that you’ve lost your slow MySQL log file. And if you do, relax: Recover your MySQL log file contents and add the proper monitoring alert. Problem solved!

The post Log rotate and the (deleted) MySQL log file mystery appeared first on MySQL Performance Blog.

Percona Live MySQL Conference Call for Proposals closes Nov. 16

Di, 2014-11-11 17:40

The Percona Live MySQL Conference in Silicon Valley (April 13-16, 2015) features a full day of tutorial sessions and three full days with keynote talks and breakout sessions from leading MySQL experts from around the world. Birds of a Feather sessions, two networking receptions, and a community dinner all offer great opportunities to network with the MySQL community. To make the Percona Live MySQL Conference 2015 even better than the great 2014 event, we need outstanding content from community members.

Submit your speaking proposals now for tutorials or breakout session on the Percona Live conference website. Accepted speakers receive a full conference pass and the sincere gratitude of attendees who will eagerly await your presentations. The Percona Live MySQL Conference call for proposals closes November 16.

We are currently seeking proposals for both breakout and tutorial sessions on the following topics:

  • High Availability
  • DevOps
  • Programming
  • Performance Optimization
  • Replication & Backup
  • MySQL in the Cloud
  • MySQL and NoSQL
  • MySQL Case Studies
  • Security
  • What’s New in MySQL

All submissions will be reviewed by the outstanding Conference Committee of community members:

  • Shlomi Noach from Outbrain
  • Tamar Bercovici from Box
  • Colin Charles from MariaDB
  • Sean Chighizola from Big Fish Games
  • Jeremy Cole from Google
  • Harrison Fisk from Facebook
  • Patrick Galbraith from HP
  • Jay Janssen from Percona
  • Chris Schneider from Ning.com
  • John Scott from Wellcentive
  • Gwen Shapira from Cloudera
  • Shivinder Singh from Verizon
  • Calvin Sun from Twitter
  • Morgan Tocker from Oracle
  • Peter Zaitsev from Percona

Submit your proposals now by visiting the Percona Live MySQL Conference 2015 website. The submission deadline is November 16.

If you plan to attend the conference but won’t be submitting, register now to get the lowest available rate. Visit the Percona Live MySQL Conference 2015 website for full details.

The post Percona Live MySQL Conference Call for Proposals closes Nov. 16 appeared first on MySQL Performance Blog.

OpenStack Live Call for Proposals closes November 16

Di, 2014-11-11 15:20

The OpenStack Live conference in Silicon Valley (April 13-14, 2015) will emphasize the essential elements of making OpenStack perform better with emphasis on the critical role of MySQL and Trove. If you use OpenStack and have a story to share or a skill to teach, we encourage you to submit a speaking proposal for a breakout or tutorial session. The OpenStack Live call for proposals is your chance to put your ideas, case studies, best practices and technical knowledge in front of an intelligent, engaged audience of OpenStack Users. If you are selected as a speaker, you will receive one complimentary full conference pass. November 16th is the last day to submit.

We are seeking submissions for both breakout and tutorial sessions on the following topics:

  • Performance Optimization of OpenStack
  • OpenStack Operations
  • OpenStack Trove
  • Replication and Backup for OpenStack
  • High Availability for OpenStack
  • OpenStack User Stories
  • Monitoring and Tools for OpenStack

All submissions will be reviewed by our highly qualified Conference Committee:

  • Mark Atwood from HP
  • Rich Bowen from Red Hat
  • Andrew Mitty from Comcast
  • Jason Rouault from Time Warner
  • Peter Boros from Percona

If you don’t plan to submit a speaking proposal, now is a great time to purchase your ticket at the low Super Saver rates. Visit the OpenStack Live 2015 conference website for full details.

The post OpenStack Live Call for Proposals closes November 16 appeared first on MySQL Performance Blog.

Impressions from MongoDB Day London 2014

Mo, 2014-11-10 15:36

I visited MongoDB Day in London on November 6. Here are a few observations:

App-Developer Centric. It is interesting to see how much MongoDB is about developers; the ops side is something of a necessary evil that developers have to deal with. The ops topics that were covered were fairly high level – in principle, there were no talks about choices of operating systems or hardware for MongoDB beyond flash and more memory.

Development Stacks. Being application centric, there was good coverage of the MongoDB-powered stacks – MEAN and METEOR specifically got attention. The METEOR presentation by Henrik Ingo was especially cool – real-time view synchronization between the Web browser (or mobile app) and the database, as well as the same language for server-side and client-side development, is a really great concept. Though Henrik did not go into detail on how well it scales, besides mentioning that it does not work with sharded MongoDB at this point.

Sharding Focused. Where in the MySQL world the approach is to see which applications can get by without sharding, MongoDB shards almost everything – often employing multiple instances of the MongoDB daemon running on the same OS image (on different ports). It is acknowledged that MongoDB does not scale up very well with database-level locking, though this is about to change.

Cluster aware connector. Where in the MySQL world the traditional API is to connect to a single node, in the MongoDB world you often connect to a “cluster” containing many replica sets with data sharded across them. This is really good, as it means you do not have to try to emulate a single server with a cluster (especially having a single highly available IP).

Pluggable Storage Engines. This was the big thing at this show, being a top topic of the keynote as well as of in-depth sessions. Unlike in MySQL, the MongoDB storage engine is chosen for the whole instance, not per collection/table. This is a transformational time for MongoDB, with both the core storage engine being worked on to get document-level locking and the “Wired Tiger” storage engine being added as a write-optimized storage engine option. Hopefully MongoDB is acting to protect itself from something akin to the InnoDB-Oracle fiasco in the MySQL space.

MMS. The MongoDB Monitoring Service, which now includes Backups and Deployments, was showcased a lot as an answer to all problems in the MongoDB space. There was a lot of work put into this product, including a really advanced configurator for Amazon Web Services deployments where you can configure many instance properties directly.

Not much hardcore detail. I will end basically where I started – there was not much detail for those longing for it. I’m not an expert in MongoDB (yet), so relatively basic-level detail about how exactly MongoDB operates would have sufficed, but it was not covered. The most helpful for me were some side conversations where I would hear about things like the challenges of adding elements to a large array that is part of a document, or whether any powerful optimizations of covering indexes exist for MongoDB as they do for MySQL. Perhaps this was because it was a smaller show, and maybe the next MongoDB World event will have more of such in-depth content.

The post Impressions from MongoDB Day London 2014 appeared first on MySQL Performance Blog.