MySQL High Performance


Installing MongoDB 3.2 and upgrading MongoDB replica set

Mi, 2016-06-15 20:55

In this post, we’ll examine a couple of ways to upgrade a MongoDB replica set.

With the release of MongoDB 3.2 comes a raft of new features and improvements. One of these enhancements is improved replica sets. From MongoDB: “A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments.”

Config servers are replica sets!

This is HUGE. It signals a significant advancement in backups, metadata stability and overall maturity. It is a very long-awaited feature that shows MongoDB is maturing. It means:

  • mongos processes can retry connections instead of returning an error
  • Unified and consistent backups!
  • Up to 50 secondaries
  • Fixes bugs seen when mongos instances are not near the config servers!

How do we activate all these new awesome features? Let’s do it!

Upgrading to 3.2
  • Replace binaries and restart one secondary at a time (a rolling-restart sketch for one secondary follows this list)
  • Then do the primaries as well
  • Restart configs in reverse order
    • If configdb=con1, con2, con3
      • Restart con3, con2, and then finally con1 with 3.2
      • Do con1 as FAST as possible, while the balancer is also disabled
    • You no longer need to restart a mongos with --upgrade (as of 3.2)
    • Restart all mongos processes; this will reset ALL connections at some point (whether you do it all at once or space it out).
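
For illustration, a rolling binary swap on one secondary might look like the sketch below (the ports match the example replica set shown later in this post; the binary location and config file name are assumptions):

mongo --port 17002 --eval 'db.getSiblingDB("admin").shutdownServer()'    # cleanly stop the secondary
cp /opt/mongodb-3.2.4/bin/mongod /usr/local/bin/mongod                   # swap in the 3.2 binary
mongod --config /etc/mongod-17002.conf                                   # start it back up on 3.2
mongo --port 17001 --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr); })'   # confirm it rejoins as SECONDARY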
Upgrading the replset to the new protocol

This is by far the easiest bit of the upgrade, but DON’T do it until you know you’re stable on 3.2. Log into each primary and run:

> cfg = rs.conf();
{
    "_id" : "r1",
    "version" : 2,
    "members" : [
        { "_id" : 0, "host" : "localhost:17001" },
        { "_id" : 1, "host" : "localhost:17002" },
        { "_id" : 2, "host" : "localhost:17003" }
    ]
}
> cfg.protocolVersion = 1;
> rs.reconfig(cfg);
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1464947003, 1),
        "electionId" : ObjectId("7fffffff0000000000000018")
    }
}

Or:

> db.getSiblingDB('config').shards.forEach(function(shard){
    x = new Mongo(shard.host); /* Assumes no auth needed */
    conf = x.getDB("local").system.replset.findOne();
    conf.protocolVersion = 1;
    conf.version++;
    x.getDB('admin').runCommand({ replSetReconfig: conf });
});

The quick upgrade scripts
  • Upgrade_all_to_3.2.4.sh

Does what it says: kills every process and launches them on 3.2 binaries with no other changes.

  • Upgrade_replica_proto_version.sh

Simply runs the quick rs.reconfig() on each primary and adds the new settings to enable the new replication features.

Let’s upgrade the configs the right way!

This is not included as part of a normal upgrade so only do this AFTER you’re stable and don’t do it before upgrading the protocolVersion we just talked about. (I mean it! Disregard this advice and your life will seriously not be awesome!)

Upgrading to a Config ReplicaSet (the official way)
  1. Run rs.initiate on the first config in the list (must be 3.2.4+)
    • Must be a fully configured document with configsvr: true defined.
  2. Restart the same config server, adding
    • configsvrMode = sccc
    • replSet = <name used in rs.initiate()>
    • storageEngine = wiredTiger
  3. Start the new config servers for the other two nodes (should be a new dbpath and port)
  4. Add those nodes to the replSet and check their status
  5. Remove the second original config server from the rotation
  6. Restart the 1st node you set “sccc” on to not have that setting
  7. At this point, the 1st node will transition to removed if using MMAP.
  8. Restart a mongos with a new configdb line
    • --configdb <replSetName>/node1:port,node2:port,…
    • Only replset members should be listed (see the sketch after this list)
  9. Verify you can work and query through mongos
  10. Repeat on all mongos
  11. Remove the 1st node with rs.remove
  12. Shutdown final original config and enable balancer
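
Putting steps 2 and 8 together, the restarts might look roughly like the sketch below. The replica set name, hostnames, ports and dbpath are assumptions, not values from the post:

# Step 2: restart the first config server with the transitional flags listed above
mongod --configsvr --configsvrMode=sccc --replSet csReplSet \
       --storageEngine wiredTiger --dbpath /data/configdb1 --port 27019

# Step 8: restart a mongos pointing at the new config server replica set
mongos --configdb csReplSet/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019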
There is also an easy way.

The easy way, with a small maintenance window, which lets you just restore a good backup and have a nice and simple rollback plan:

  1. Stop all mongos after backing up the config directory
  2. Run rs.initiate on first config server
  3. Stop the 2nd, then the 3rd, restarting them with an empty dbpath directory
  4. Check the rs.status now
  5. Stop the 1st config server and restart with an empty dbpath directory
  6. Check Status
  7. Restart all mongos, adding <replSetName>/ to the front of the configdb line.
  8. Done!

Oh look there is a quick script we have made for you:

  • Upgrade_config_to_replicaset.sh
    • Kill all config and mongos processes
    • Restart the first config server on non-standard port
    • Mongodump config database
    • Restart c1 as WiredTiger, clearing that data path
    • Import dump back into first config server
    • Restart on normal port
    • Initialize Replica Set
    • Restart second and third config server after clearing dbpath folder
    • After the initial sync, start all the mongos.
    • Done and script exits!

 

MongoDB and non-existent collections

Di, 2016-06-14 22:29

In this blog, I will discuss how I found that some of my basic SQL assumptions don’t hold true when dealing with MongoDB and non-existent collections.

Coming from a MySQL background, I have some assumptions about databases that don’t apply to MongoDB (or other kinds of databases that are neither SQL-based nor relationally-inspired).

An example of this is the assumption that data is organized in rows that are part of a table, with all tables having a strict schema (i.e., a single type of row). When working with MongoDB, this assumption must be transformed into the idea that data is organized in documents that are part of a collection and have a flexible schema so that different types of documents can reside in the same collection.

That’s an easy adjustment to make because a dynamic schema is one of the defining features of MongoDB. There are other less-obvious assumptions that need to be adjusted or redefined as you get familiar with a new product like MongoDB (for example, MySQL does not currently support built-in sharding, while MongoDB does).

There is a more fundamental kind of assumption, and by “fundamental” I mean an assumption that is deeply ingrained because you rely on it so often it’s automatic (i.e., unconscious). We’re usually hit by these when changing programming languages, especially in dynamic ones (“Will I be able to add a number to a string? If so, how will it behave?”). These can make it hard to adjust to a new database (or programming language, operating system, etc.) because we don’t consciously think about them and so we may forget to verify if they hold in the new system. This can happen in “real” life too: try going to a country where cars drive on the other side of the road from yours!

While working on a MongoDB benchmark recently, I was hit by one of these assumptions. I thought sharing my mistake may help others who are also coming to MongoDB from an SQL background.

One of my computing assumptions can be summarized as “reading from a non-existent source will fail with an error.”

Sure enough, it seems to be true for my operating system:

telecaster:~ fernandoipar$ cat notexists
cat: notexists: No such file or directory

And for MySQL:

mysql> select 1 from notexists;
ERROR 1146 (42S02): Table 'test.notexists' doesn't exist

But what happens in MongoDB?

> db.notexists.find()
> db.notexists.find().count()
0

No errors. You get no results. Interestingly, you can even count the number of documents in a cursor that is associated with a non-existent collection.

As I said, I hit this while working on a benchmark. How? I was comparing the throughput for different engines and various configurations, and after preparing the graphs they all showed the same behavior. While it’s not impossible for this to be accurate, it was very unlikely given what I was trying to measure. Some investigation led me to discover a mistake in the preparation phase of my benchmarks. To save time and to use the same data baseline for all tests, I was only running sysbench prepare once, backing up the data directory for each engine, and then restoring this backup before each experiment. The error was that I was restoring one subdirectory below MongoDB’s expected dbpath (i.e., to /data/db/db instead of /data/db), and so my scripts were reading from non-existent collections.

On a MySQL experiment, this would have immediately blown up in my face; with MongoDB, that is not the case.

On reflection, this behavior makes sense for MongoDB in that it is consistent with the write behavior. You don’t need to create a new collection, or even a new database. It’s enough that you write a document to it, and it gets created for you. If writing to a non-existent collection produces no errors, reading from one shouldn’t either.

Still, sometimes an application needs to know if a collection exists. How can you do this? There are multiple ways to do this, and I think the best approach is to verify their existence during the application initialization stage. Here are a couple of examples:

> db.notexists.stats()
{ "ok" : 0, "errmsg" : "Collection [test.notexists] not found." }
> db.getCollectionNames().indexOf("notexists") >= 0
false
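
If you would rather run the check from a deployment or init script than from the mongo shell, here is a minimal sketch of the same test (database and collection names as above):

mongo --quiet test --eval 'print(db.getCollectionNames().indexOf("notexists") >= 0)'
# prints "true" or "false"; the exit code is 0 either way, so test the printed value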

I hope you find this useful, and if you’ve been hit by similar problems (such as MongoDB and non-existent collections) due to broken assumptions when moving to MongoDB with an SQL background, I’d love to read about them in the comments!

Scaling Percona XtraDB Cluster with ProxySQL in Docker Swarm

Di, 2016-06-14 17:07

In this post, we’ll look at scaling Percona XtraDB Cluster with ProxySQL in Docker Swarm.

In my previous post, I showed how to employ Percona XtraDB Cluster on multiple nodes in a Docker network.

The intention is to be able to start/stop nodes and increase/decrease the cluster size dynamically. This means that we should track running nodes, but also have an easy way to connect to the cluster.

So there are two components we need: service discovery to register nodes and ProxySQL to handle incoming traffic.

Service discovery support is already bundled with the Percona XtraDB Cluster Docker images, and I have experimental images for ProxySQL: https://hub.docker.com/r/perconalab/proxysql/.

For multi-node management, we also need some orchestration tool, and a good start is Docker Swarm. Docker Swarm is simple and only provides basic functionality, but it works for a good start. (For more complicated setups, consider Kubernetes.)

I assume you have Docker Swarm running, but if not here is some good material on how to get it rolling. You also need to have service discovery running (see http://chunqi.li/2015/11/09/docker-multi-host-networking/ and my previous post).

To start a cluster with ProxySQL, we need a Docker Compose definition file, docker-compose.yml:

version: '2'

services:
  proxy:
    image: perconalab/proxysql
    networks:
      - front
      - Theistareykjarbunga
    ports:
      - "3306:3306"
      - "6032:6032"
    env_file: .env

  percona-xtradb-cluster:
    image: percona/percona-xtradb-cluster:5.6
    networks:
      - Theistareykjarbunga
    ports:
      - "3306"
    env_file: .env

networks:
  Theistareykjarbunga:
    driver: overlay
  front:
    driver: overlay

For convenience, both proxy and percona-xtradb-cluster share the same environment files (.env):

MYSQL_ROOT_PASSWORD=secret
DISCOVERY_SERVICE=10.20.2.4:2379
CLUSTER_NAME=cluster15
MYSQL_PROXY_USER=proxyuser
MYSQL_PROXY_PASSWORD=s3cret

You can also get both files from https://github.com/percona/percona-docker/tree/master/pxc-56/swarm.

To start both the cluster node and proxy:

docker-compose up -d

We can start as many Percona XtraDB Cluster nodes as we want:

docker-compose scale percona-xtradb-cluster=5

The command above will make sure that five nodes are running.

We can check it with docker ps:

docker ps
CONTAINER ID   IMAGE                                COMMAND            CREATED          STATUS          PORTS                                                  NAMES
725f5f2699cc   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10284->3306/tcp              smblade04/swarm_percona-xtradb-cluster_5
1c85ea1367e8   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10285->3306/tcp              smblade04/swarm_percona-xtradb-cluster_2
df87e9c1342e   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10283->3306/tcp              smblade04/swarm_percona-xtradb-cluster_4
cbb82f7a9789   perconalab/proxysql                  "/entrypoint.sh "  36 minutes ago   Up 40 minutes   10.20.2.66:3306->3306/tcp, 10.20.2.66:6032->6032/tcp   smblade04/swarm_proxy_1
59e049fe22a9   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  36 minutes ago   Up 40 minutes   4567-4568/tcp, 10.20.2.66:10282->3306/tcp              smblade04/swarm_percona-xtradb-cluster_1
0921a2611c3c   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  37 minutes ago   Up 42 minutes   4567-4568/tcp, 10.20.2.5:32774->3306/tcp               centos/swarm_percona-xtradb-cluster_3

We can see that Docker schedules containers on two different nodes; the ProxySQL container is smblade04/swarm_proxy_1, and the connection point is 10.20.2.66:6032.

To register Percona XtraDB Cluster in ProxySQL we can just execute the following:

docker exec -it smblade04/swarm_proxy_1 add_cluster_nodes.sh

The script will connect to the service discovery DISCOVERY_SERVICE (defined in .env file) and register nodes in ProxySQL.

To check that they are all running:

mysql -h10.20.2.66 -P6032 -uadmin -padmin
MySQL [(none)]> select * from stats.stats_mysql_connection_pool;
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host  | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 0         | 10.0.14.2 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 212        |
| 0         | 10.0.14.4 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 155        |
| 0         | 10.0.14.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 136        |
| 0         | 10.0.14.6 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 123        |
| 0         | 10.0.14.7 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 287        |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

We can connect to a cluster using a ProxySQL endpoint:

mysql -h10.20.2.66 -uproxyuser -psecret
mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 59e049fe22a9 |
+--------------+
mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 725f5f2699cc |
+--------------+

We can see that we connect to a different node every time.

Now if we want to get crazy and make sure we have ten Percona XtraDB Cluster nodes running, we can execute the following:

docker-compose scale percona-xtradb-cluster=10
Creating and starting swarm_percona-xtradb-cluster_6 ...
Creating and starting swarm_percona-xtradb-cluster_7 ...
Creating and starting swarm_percona-xtradb-cluster_8 ...
Creating and starting swarm_percona-xtradb-cluster_9 ...
Creating and starting swarm_percona-xtradb-cluster_10 ...

And Docker Swarm will make sure ten nodes are running.

I hope this demonstrates that you can easily start playing with multi-node Percona XtraDB Cluster setups. In the next post, I will show how to use Percona XtraDB Cluster with Kubernetes.

RocksDB from Docker containers

Mo, 2016-06-13 18:14

This post will discuss how to get RocksDB from Docker containers to use with Percona Server for MongoDB.

With our Percona Server for MongoDB 3.2 release, we made RocksDB a first class citizen. With this newly-available engine, we want to make it easy for everybody interested to try it. So it is now available in docker images from https://hub.docker.com/r/percona/percona-server-mongodb/.

If you have docker running, starting RocksDB is very easy:

docker run -d -p 27017:27017 percona/percona-server-mongodb --storageEngine=rocksdb

Then run:

mongo --eval "printjson(db.serverStatus())"

You should see this as part of the output:

"storageEngine" : { "name" : "rocksdb", "supportsCommittedReads" : true, "persistent" : true },

Give it a try, and let us know how RocksDB works for you!

Webinar Thursday, June 16: MongoDB Schema Design

Mo, 2016-06-13 15:49

Please join Jon Tobin, Director of Solutions Engineering at Percona on Thursday, June 16, 2016 10:00am PDT (UTC-7) for a webinar on “MongoDB® Schema Design.”

Jon will discuss the most common misconception when evaluating the use of MongoDB: that it is “schemaless.” THIS IS NOT TRUE. MongoDB has a document structure, and thus, a schema. While the structure is much more dynamic than that of most relational database models, choices that you make can and will pay themselves forward (or haunt you forever).

In this webinar, we’ll cover what a document is, how they can be structured, and what structures work (and don’t work) for a particular use case. We will also touch on design decisions and how they affect the ability of the cluster to scale in the future. Some of the topics that will be covered are:

  • Document Structure
  • Embedding vs Referencing
  • Normalization vs De-Normalization
  • Atomicity
  • MongoDB Sharding

Register here.

Jon Tobin, Director of Solution Engineering

When not saving kittens from sequoias or helping the elderly across busy intersections, Jon Tobin is Percona’s Director of Solutions Engineering. He has spent over 15 years in the IT industry. For the last 6 years, Jon has been helping innovative IT companies assess and address customer’s business needs through well-designed solutions.

 

Percona XtraDB Cluster 5.6.30-25.16 is now available

Fr, 2016-06-10 22:49

Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on June 10, 2016. Binaries are available from the downloads area or our software repositories.

Percona XtraDB Cluster 5.6.30-25.16 is now the current release, based on the following:

All of Percona’s software is open-source and free, and all the details of the release can be found in the 5.6.30-25.16 milestone on Launchpad.

For more information about relevant Codership releases, see this announcement.

New Features:

  • PXC now uses wsrep_desync_count introduced in Galera 3.16 by Codership, instead of the concept that was previously implemented by Percona. The following logic applies:
    • If a node is explicitly desynced, then implicitly desyncing a node using RSU/FTWRL is allowed.
    • If a node is implicitly desynced using RSU/FTWRL, then explicitly desyncing a node is blocked until implicit desync is complete.
    • If a node is explicitly desynced and then implicitly desynced using RSU/FTWRL, then any request for another implicit desync is blocked until the former implicit desync is complete (see the illustration below).
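
To illustrate the two kinds of desync, here is a minimal sketch (the table and schema used with RSU are hypothetical):

mysql -e "SET GLOBAL wsrep_desync = ON;"                                          # explicit desync
mysql -e "SET wsrep_OSU_method = 'RSU'; ALTER TABLE test.t1 ADD COLUMN c2 INT;"   # implicit desync via RSU
mysql -e "SET GLOBAL wsrep_desync = OFF;"                                         # explicit re-sync when done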

Bugs Fixed:

  • Changing wsrep_provider while the node is paused or desynced is not allowed.
  • TOI now checks that a node is ready to process DDL and DML before starting execution, to prevent a node from crashing if it becomes non-primary.
  • The wsrep_row_upd_check_foreign_constraints function now checks that fk-reference-table is open before marking it open.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Running Percona XtraDB Cluster in a multi-host Docker network

Fr, 2016-06-10 20:32

In this post, I’ll discuss how to run Percona XtraDB Cluster in a multi-host Docker network.

With our release of Percona XtraDB Cluster 5.7 beta, we’ve also decided to provide Docker images for both Percona XtraDB Cluster 5.6 and Percona XtraDB Cluster 5.7.

Starting one node is very easy, and not that different from starting a Percona Server image. The only extra requirement is to have the CLUSTER_NAME variable defined. The startup command might look like this:

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=Theistareyk -e CLUSTER_NAME=Theistareykjarbunga -e XTRABACKUP_PASSWORD=Theistare percona/percona-xtradb-cluster

You might also notice we can optionally define an XTRABACKUP_PASSWORD password, which the xtrabackup@localhost user will employ for the xtrabackup SST method.

Running Percona XtraDB Cluster in single-node mode kind of defeats the purpose of having a cluster. With our Docker images, we tried to address the following tasks:

  1. Run in multiple-host environment (followed by running in Docker Swarm and Kubernetes)
  2. Start as many nodes in the cluster as we want
  3. Register all nodes in the service discovery, so that the client can see how many nodes are running and their status
  4. Integrate with ProxySQL

Let’s review these points one by one.

A multi-host setup is where a Docker network becomes helpful. Recent Docker versions come with a network overlay driver, which we will use to run a virtual network over multiple boxes. Starting a Docker overlay network is out of scope for this post, but check out this great introductory material on how to get it working.

With the network running, we can create an overlay network for our cluster:

docker network create -d overlay cluster1_net

Then we can start containers:

docker run -d -p 3306 --net=cluster1_net -e MYSQL_ROOT_PASSWORD=Theistareyk -e CLUSTER_NAME=cluster1 ... -e XTRABACKUP_PASSWORD=Theistare percona/percona-xtradb-cluster

The cool bit is that we can start Percona XtraDB Cluster on any node in the network, and they will communicate over the virtual network.

If you want to stay within a single Docker host (for example during testing), you can still create a bridge network and use it in a single-host environment.
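
A minimal single-host sketch (the bridge network name is an assumption):

docker network create -d bridge cluster1_bridge_net
docker run -d -p 3306 --net=cluster1_bridge_net -e MYSQL_ROOT_PASSWORD=Theistareyk \
  -e CLUSTER_NAME=cluster1 -e XTRABACKUP_PASSWORD=Theistare percona/percona-xtradb-cluster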

The script above will run . . . almost. The problem is that every additional node needs to know the address of the running cluster.

To address this (if you prefer a manual process), we introduced the CLUSTER_JOIN variable, which should point to the IP address of one of the running nodes (or be empty to start a new cluster).

In this case, getting the command above to work might look like this:

docker run -d -p 3306 --net=cluster1_net -e MYSQL_ROOT_PASSWORD=Theistareyk -e CLUSTER_NAME=cluster1 -e CLUSTER_JOIN=10.0.5.5 -e XTRABACKUP_PASSWORD=Theistare percona/percona-xtradb-cluster

I think manually tracking IP addresses requires unnecessary extra work, especially if we want to start and stop nodes on the fly. So we also decided to use service discovery, especially since you need it to run the Docker overlay network anyway. Right now we support the etcd discovery service, but it isn’t a problem to add more (such as Consul).

Starting etcd is also out of the scope of this post, but you can read about the procedure in the manual.

When you run etcd service discovery (on the host 10.20.2.4:2379, for example) you can start the nodes:

docker run -d -p 3306 --net=cluster1_net -e MYSQL_ROOT_PASSWORD=Theistareyk -e CLUSTER_NAME=cluster1 -e DISCOVERY_SERVICE=10.20.2.4:2379 -e XTRABACKUP_PASSWORD=Theistare percona/percona-xtradb-cluster

The node will register itself in the service discovery and will join existing $CLUSTER_NAME.

There is a convenient way to check all registered nodes:

curl http://$ETCD_HOST/v2/keys/pxc-cluster/$CLUSTER_NAME/?recursive=true | jq
{
  "action": "get",
  "node": {
    "key": "/pxc-cluster/cluster4",
    "dir": true,
    "nodes": [
      {
        "key": "/pxc-cluster/cluster4/10.0.5.2",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/ipaddr",
            "value": "10.0.5.2",
            "modifiedIndex": 19600,
            "createdIndex": 19600
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/hostname",
            "value": "2af0a75ce0cb",
            "modifiedIndex": 19601,
            "createdIndex": 19601
          }
        ],
        "modifiedIndex": 19600,
        "createdIndex": 19600
      },
      {
        "key": "/pxc-cluster/cluster4/10.0.5.3",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/ipaddr",
            "value": "10.0.5.3",
            "modifiedIndex": 26420,
            "createdIndex": 26420
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/hostname",
            "value": "cfb29833f1d6",
            "modifiedIndex": 26421,
            "createdIndex": 26421
          }
        ],
        "modifiedIndex": 26420,
        "createdIndex": 26420
      }
    ],
    "modifiedIndex": 19600,
    "createdIndex": 19600
  }
}

With this, you can start as many cluster nodes as you want and on any host in Docker Network. Now it is convenient to use an SQL proxy in front of the cluster. In this case, we will use ProxySQL (I will show that in a follow-up post).

In later posts, we will also review how to run Percona XtraDB Cluster nodes in an orchestration environment (like Docker Swarm and Kubernetes).

Percona Monitoring and Management 1.0.1 Beta

Fr, 2016-06-10 20:05

Percona is glad to announce the release of Percona Monitoring and Management 1.0.1 Beta on 10 June, 2016.

Like prior versions, PMM is distributed through Docker Hub and is free to download. Full instructions for download and installation of the server and client are available in the documentation.

Notable changes to the tool include:

  • Grafana 3.0
  • Replaced custom web server with NGINX
  • Eliminated most of the ports for PMM server container (now only two – 9001 and configurable 80)
  • Updated to the latest versions of Prometheus, exporters, QAN agent
  • Added mongodb_exporter
  • Added MongoDB dashboards
  • Replaced prom-config-api with Consul
  • Improvements to pmm-admin and ability to set server address with the port
  • Added “Server Summary” with aggregated query metrics to QAN app
  • MySQL dashboard updates, added “MySQL InnoDB Metrics Advanced” dashboard
The new server summary in PMM Beta 1.0.1

Metric rates in Query Analytics

Available Dashboards in Metrics

Full documentation is available, and includes details on installation and architecture, and a demonstration of the tool has been set up at pmmdemo.percona.com.

We have also implemented forums for the discussion of PMM.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Percona XtraDB Cluster 5.7 beta is now available

Do, 2016-06-09 16:04

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.11-4beta-25.14.2 on June 9, 2016. Binaries are available from the downloads area or our software repositories.

NOTE: This beta release is only available from the testing repository. It is not meant for upgrade from Percona XtraDB Cluster 5.6 and earlier versions. Only a fresh installation is supported.

Percona XtraDB Cluster 5.7.11-4beta-25.14.2 is based on the following:

This is the first beta release in the Percona XtraDB Cluster 5.7 series. It includes all changes from upstream releases and the following changes:

  • Percona XtraDB Cluster 5.7 does not include wsrep_sst_xtrabackup. It has been replaced by wsrep_sst_xtrabackup_v2.
  • The wsrep_mysql_replication_bundle variable has been removed.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Using MySQL 5.7 Document Store with Internet of Things (IoT)

Mi, 2016-06-08 17:17

In this blog post, I’ll discuss how to use MySQL 5.7 Document Store to track data from Internet of Things (IoT) devices.

Using JSON in MySQL 5.7

In my previous blog post, I looked into MySQL 5.7.12 Document Store. This is a brand new feature in MySQL 5.7, and many people are asking: when do I need or want to use the JSON or Document Store interface?

Storing data in JSON may be quite useful in some cases, for example:

  • You already have JSON (e.g., from external feeds) and need to store it anyway. Using the JSON datatype will be more convenient and more efficient.
  • For the Internet of Things, specifically, when storing events from sensors: some sensors may send only temperature data, some may send temperature, humidity and light (but light information is only recorded during the day), etc. Storing it in JSON format may be more convenient in that you don’t have to declare all possible fields in advance, and do not have to run “alter table” if a new sensor starts sending new types of data.

Internet of Things

In this blog post, I will show an example of storing an event stream from Particle Photon. Last time I created a device to measure light and temperature and stored the results in MySQL. Particle.io provides the ability to use its own MQTT server and publish events with:

Spark.publish("temperature", String(temperature));
Spark.publish("humidity", String(humidity));
Spark.publish("light", String(light));

Then, I wanted to “subscribe” to my events and insert those into MySQL (for further analysis). As we have three different metrics for the same device, we have two basic options:

  1. Use a field per metric and create something like this: device_id int, temperature double, humidity double, light double
  2. Use a record per metric and have something like this: device_id int, event_name varchar(255), event_data text (please see this Internet of Things, Messaging and MySQL blog post for more details)

The first option above is not flexible. If my device starts measuring the soil temperature, I will have to “alter table add column”.

Option two is better in this regard, but it may significantly increase the table size, as I have to store the name as a string for each measurement. In addition, some devices may send more complex metrics (e.g., latitude and longitude).

In this case, using JSON for storing metrics can be a better option, and I also decided to try Document Store.

First, we will need to enable X Plugin and set up the NodeJS connector. Here are the steps required:

  1. Enable X Plugin in MySQL 5.7.12+, which uses a different port (33060 by default); a one-line way to enable it is sketched after this list
  2. Download and install NodeJS (>4.2) and mysql-connector-nodejs-1.0.2.tar.gz (follow the Getting Started with Connector/Node.JS guide).
    # node --version
    v4.4.4
    # wget https://dev.mysql.com/get/Downloads/Connector-Nodejs/mysql-connector-nodejs-1.0.2.tar.gz
    # npm install mysql-connector-nodejs-1.0.2.tar.gz
    Please note: on older systems you will probably need to upgrade the nodejs version (follow the Installing Node.js via package manager guide).
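
A one-line way to enable X Plugin on a running 5.7.12+ server, and to confirm the port it listens on (credentials are placeholders):

mysql -uroot -p -e "INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';"
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'mysqlx_port';"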

Storing Events from Sensors

Particle.io provides you with an API that allows you to subscribe to all public events (“events” are what sensors send). The API is for NodeJS, which is really convenient as we can use NodeJS for MySQL 5.7.12 Document Store as well.

To use the Particle API, install the particle-api-js module:

$ npm install particle-api-js

I’ve created the following NodeJS code to subscribe to all public events, and then add the data (in JSON format) to a document store:

var mysqlx = require('mysqlx');
var Particle = require('particle-api-js');
var particle = new Particle();
var token = '<place your token here>';

var mySession = mysqlx.getSession({
    host: 'localhost',
    port: 33060,
    dbUser: 'root',
    dbPassword: '<place your pass here>'
});

process.on('SIGINT', function() {
    console.log("Caught interrupt signal. Exiting...");
    process.exit();
});

particle.getEventStream({ auth: token }).then(function(stream) {
    stream.on('event', function(data) {
        console.log(data);
        mySession.then(session => {
            session.getSchema("iot").getCollection("event_stream")
                .add( data )
                .execute(function (row) {
                    // can log something here
                }).catch(err => {
                    console.log(err);
                })
                .then( function (notices) {
                    console.log("Wrote to MySQL: " + JSON.stringify(notices));
                });
        }).catch(function (err) {
            console.log(err);
            process.exit();
        });
    });
}).catch(function (err) {
    console.log(err.stack);
    process.exit();
});

How it works:

  • particle.getEventStream({ auth: token}) gives me the stream of events. From there I can subscribe to specific event names, or to all public events using the generic name “events”: stream.on(‘event’, function(data).
  • function(data) is a callback function fired when a new event is ready. The event arrives as a JSON object in “data”. From there I can simply insert it into a document store: .add( data ).execute() will insert the JSON data into the event_stream collection.

One of the reasons I use a document store here is that I do not have to know what is inside the event data. I do not have to parse it; I simply throw it at MySQL and analyze it later. If the format of the data changes in the future, my application will not break.

Inside the data stream

Here is the example of running the above code:

{ data: 'Humid: 49.40 Temp: 25.00 *C Dew: 13.66 *C HeatI: 25.88 *C',
  ttl: '60',
  published_at: '2016-05-20T19:30:51.433Z',
  coreid: '2b0034000947343337373738',
  name: 'log' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["a3058c16-15db-0dab-f349-99c91a00"]}}
{ data: 'null',
  ttl: '60',
  published_at: '2016-05-20T19:30:51.418Z',
  coreid: '50ff72...',
  name: 'registerdev' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["eff0de02-726e-34bd-c443-6ecbccdd"]}}
{ data: '24.900000',
  ttl: '60',
  published_at: '2016-05-20T19:30:51.480Z',
  coreid: '2d0024...',
  name: 'Humid 2' }
{ data: '[{"currentTemp":19.25},{"currentTemp":19.19},{"currentTemp":100.00}]',
  ttl: '60',
  published_at: '2016-05-20T19:30:52.896Z',
  coreid: '2d002c...',
  name: 'getTempData' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["5f1de278-05e0-6193-6e30-0ebd78f7"]}}
{ data: '{"pump":0,"salt":0}',
  ttl: '60',
  published_at: '2016-05-20T19:30:51.491Z',
  coreid: '55ff6...',
  name: 'status' }
Wrote to MySQL: {"_state":{"rows_affected":1,"doc_ids":["d6fcf85f-4cba-fd59-a5ec-2bd78d4e"]}}

(Please note: although the stream is public, I’ve tried to anonymize the results a little.)

As we can see the “data” is JSON and has that structure. I could have implemented it as a MySQL table structure (adding published_at, name, TTL and coreid as separate fields). However, I would have to depend on those specific fields and change my application if those fields changed. We also see examples of how the device sends the data back: it can be just a number, a string or another JSON.

Analyzing the results

Now I can go to MySQL and use SQL (which I’ve used for >15 years) to find out what I’ve collected. First, I want to know how many device names I have:

mysql -A iot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3289
Server version: 5.7.12 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select count(distinct json_unquote(doc->'$.name')) from event_stream;
+---------------------------------------------+
| count(distinct json_unquote(doc->'$.name')) |
+---------------------------------------------+
|                                        1887 |
+---------------------------------------------+
1 row in set (5.47 sec)

That is slow! As described in my previous post, I can create a virtual column and index for doc->’$.name’ to make it faster:

mysql> alter table event_stream add column name varchar(255)
    -> generated always as (json_unquote(doc->'$.name')) virtual;
Query OK, 0 rows affected (0.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> alter table event_stream add key (name);
Query OK, 0 rows affected (3.47 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table event_stream\G
*************************** 1. row ***************************
       Table: event_stream
Create Table: CREATE TABLE `event_stream` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  `name` varchar(255) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$.name'))) VIRTUAL,
  UNIQUE KEY `_id` (`_id`),
  KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

mysql> select count(distinct name) from event_stream;
+----------------------+
| count(distinct name) |
+----------------------+
|                 1820 |
+----------------------+
1 row in set (0.67 sec)

How many beers left?

Eric Joyce has published a Keg Inventory Counter that uses a Particle Photon device to measure the amount of beer in a keg by 12oz pours. I want to see the average and the lowest amount of beer left per day:

mysql> select date(json_unquote(doc->'$.published_at')) as day,
    ->        avg(json_unquote(doc->'$.data')) as avg_beer_left,
    ->        min(json_unquote(doc->'$.data')) as min_beer_left
    -> from event_stream
    -> where name = 'Beers_left'
    -> group by date(json_unquote(doc->'$.published_at'));
+------------+--------------------+---------------+
| day        | avg_beer_left      | min_beer_left |
+------------+--------------------+---------------+
| 2016-05-13 | 53.21008358996988  | 53.2          |
| 2016-05-18 | 52.89973045822105  | 52.8          |
| 2016-05-19 | 52.669233854792694 | 52.6          |
| 2016-05-20 | 52.60644257702987  | 52.6          |
+------------+--------------------+---------------+
4 rows in set (0.44 sec)

Conclusion

Document Store can be very beneficial if an application is working with JSON and does not know or does not care about its structure. In this post, I’ve used the “save to MySQL and analyze later” approach. We can then add virtual columns and indexes if needed.

Choosing MySQL High Availability Solutions

Di, 2016-06-07 20:25

In this blog post we’ll look at various MySQL high availability solutions, and examine their pluses and minuses.

High availability environments provide substantial benefit for databases that must remain available. A high availability database environment co-locates a database across multiple machines, any one of which can assume the functions of the database. In this way, a database doesn’t have a “single point of failure.”

There are many HA strategies and solutions, so how do you choose the best solution among a myriad of options? The first question to ask is “what is the problem you are trying to solve?” The answers boil down to redundancy versus scaling versus high availability. These are not necessarily all the same!

  • Need multiple copies of data in event of a disaster
  • Need to increase read and/or write throughput
  • Need to minimize outage duration

When you are planning your database environment, it’s important to remember the CAP Theorem applies. The CAP Theorem breaks problems into three categories: consistency, availability, and partition tolerance. You can pick any two from those three, at the expense of the third.

  • Consistency. All nodes see the same data at the same time
  • Availability. Every request receives a response about whether it succeeded or not
  • Partition Tolerance. The system continues to operate despite arbitrary partitioning due to network failures

Whatever solution you choose, it should maximize consistency. The problem is that although MySQL replication is great, it alone does not guarantee consistency across all nodes. There is always the potential that data is out of sync, since transactions can be lost during failover and other reasons. Galera-based clusters such as Percona XtraDB Cluster are certification-based to prevent this!

Data loss

The first question you should ask yourself is “Can I afford to lose data?”

This often depends on the application. Apps should check status codes on transactions to be sure they were committed. Many do not! It is also possible to lose transactions during a failover, and simple replication schemes are particularly prone to losing data in that situation.

Inconsistent nodes are another problem. Without conflict detection and resolution, inconsistent nodes are unavoidable. One solution is to run pt-table-checksum often to check for inconsistent data across replication nodes. Another option is using a Galera-based Distributed Cluster, such as Percona XtraDB Cluster, with a certification process.
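
A minimal pt-table-checksum sketch, run from the master (the user, password and database names are placeholders):

pt-table-checksum h=localhost,u=checksum_user,p=secret --databases=test
pt-table-checksum h=localhost,u=checksum_user,p=secret --replicate-check-only   # report only replicas that differ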

Avoiding a single point of failure

What is watching your system? Or is anything standing ready to intervene in a failure? For replication, take a look at MHA and MySQL Orchestrator.  Both are great tools to perform failover of a Replica.  There are others.

For Percona XtraDB Cluster, failover is typically much faster, but it is not the perfect solution in every case.

Can I afford lost transactions?

Many MySQL DBAs worry about setting innodb_flush_log_at_trx_commit to 1 for ACID compliance and sync_binlog, but then use replication with no consistency checks! Is this logically consistent? Percona XtraDB Cluster maintains consistency through certification.

Conflict detection and resolution

All solutions must have some means of conflict detection and resolution. Galera’s certification process follows this method:

  • Transaction continues on a node as normal until it reaches COMMIT stage
  • Changes are collected into a writeset
  • Writeset is sent to all nodes for certification
  • PKs are used to determine if the writeset can be applied
  • If certification fails, the writeset is dropped and the transaction is rolled back.
  • If it succeeds, the transaction commits and the writesets are applied to all of the nodes.
  • All nodes reach the same decision on every transaction; the process is thus deterministic.
Do I want Failover or a Distributed System?

Another important consideration is whether you should have a failover or a distributed system. A failover system runs one instance at a time, and “fails over” to a different instance when an issue occurs. A distributed system runs several instances at one time, all handling different data.

  • Failover pitfalls:
    • Failover systems have a monitor which detects failed nodes and moves services elsewhere if available
    • Failover takes time!
  • Distributed systems:
    • Distributed systems minimize failover time

Another question is should your failover be automatic or manual?

  • Advantage of Manual Failover
    • The primary advantage to failing over manually is that a human usually can make a better decision as to whether failover is necessary.
    • Systems rarely get it perfect, but they can be close!
  • Advantage of Automatic Failover
    • More Nines due to minimized outages
    • No need to wait on a DBA to perform

A further question is how fast does failover have to occur? Obviously, the faster it happens, the less time there is for potential data loss.

  • Replication / MHA / MMM
    • Depends on how long it takes for pending Replica transactions to complete before failover can occur
    • Typically around 30 seconds
  • DRBD
    • Typically between 15 and 30 seconds
  • Percona XtraDB Cluster / MySQL Cluster
    • VERY fast failover. Typically less than 1 second depending upon Load Balancer
How many 9’s do you really need?

The “9” measure is a standard for how available a system is. When it comes to “how many 9s,” each additional 9 is an order of magnitude less downtime. 99.99% is four nines, while 99.999% is five nines.

Every manager’s response to the question of how many nines is “As many as I can get.” That sounds great, but the reality is that tradeoffs are required! Many applications can tolerate a few minutes of downtime with minimal impact. The following table shows downtime as correlated to each “9”:
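
Availability              Approximate downtime per year
99%      (two nines)      ~3.65 days
99.9%    (three nines)    ~8.8 hours
99.99%   (four nines)     ~53 minutes
99.999%  (five nines)     ~5.3 minutes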

Do I need to scale reads and/or writes?

When looking at your environment, it’s important to understand your workload. Is your workload heavy on reads, writes, or both? Knowing whether you’re going to need to scale reads or writes is important when choosing your HA solution:

  • Scaling reads
    • Most solutions offer ability to read from multiple nodes or replicas
    • MHA, Percona XtraDB Cluster, MySQL Cluster, and others are well suited for this
  • Scaling writes
    • Many people wrongly try to scale writes by writing to multiple nodes in Percona XtraDB Cluster, leading to conflicts
    • Others try it with Master-Master Replication, which is also problematic
    • Possibly the best solution in this regard is MySQL Cluster

What about provisioning new nodes?

  • Replication
    • Largely, this is a manual process
    • MySQL Utilities makes this easier than ever
  • Distributed Clusters
    • A new node can largely be provisioned automatically; with Percona XtraDB Cluster, it joins and receives its data via state snapshot transfer (SST)
The rule of threes

With Percona XtraDB Cluster, try to have three of everything. If you span a data center, have three data centers. If your nodes are on a switch, try to have three switches.

Percona XtraDB Cluster needs at least three nodes in the cluster.  An odd number is preferred for voting reasons. Forget about trying to keep a cluster alive during failure with only two data centers.  You are better off making one a DR site. Forget about custom weighting to try to get by on two data centers.  The 51% rule will get you anyway!

How many data centers do I have?

Knowing how many data centers are involved in your environment is a critical factor. Running multiple data centers has implications for the HA solution you adopt.

What if I only have one data center? You can gain protection against a single failed node or more, depending on cluster size. If you have two data centers, you should probably be considering the second data center as a DR solution. Having three or more data centers is the most robust solution when using Galera-based clusters such as Percona XtraDB Cluster.

How do I plan for disaster recovery?

Planning for disaster recovery is crucial in your HA environment. Make sure the DR node(s) can handle the traffic, if even at a minimized performance level.

  • Replicating from a Percona XtraDB Cluster to a DR site
    • Asynchronous Replication from Percona XtraDB Cluster to a single node
    • Asynchronous Replication from Percona XtraDB Cluster to a replication topology
    • Asynchronous Replication from Percona XtraDB Cluster to another Percona XtraDB Cluster
What storage engine(s) do I need?

Nowadays especially, there is a multitude of storage engines available for a database environment. Which one should you use for your HA solution? Your solution will help determine which storage engine you can employ.

  • Replication. Not storage engine dependent; works with all storage engines
  • Percona XtraDB Cluster. Requires InnoDB. Support for MyISAM is experimental and should not be used in Production
  • MySQL Cluster. Requires NDB Storage Engine
Load balancer options

Load balancers provide a means to distribute your workload across your environment resources so as not to create a bottleneck at any one particular point. The following are some load balancing options:

  • HAProxy
    • Open-source software solution
    • Cannot split reads and writes. If that is a requirement, the app will need to do it!
  • F5 BigIP
    • Typical hardware solution
  • MaxScale
    • Can do read/write splitting
  • Elastic Load Balancer (ELB)
    • Amazon solution
What happens if the cluster reboots?

Some changes require that the cluster be rebooted for the changes to be applied. For example, changing a parameter value in a parameter group is only applied to the cluster after the cluster is rebooted. A cluster could also reboot due to power interruption or other technology failures.

  • A power outage in a single data center could lead to issues
    • Percona XtraDB cluster can be configured to auto bootstrap
    • May not always work when all nodes lose power simultaneously; while the server is running, the grastate.dat file shows -1 for seqno (see the check after this list)
  • Surviving a Reboot
    • Helpful if nodes are shutdown by a System Administrator for a reboot or other such process
    • Normal shutdown sets seqno properly
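
To see which node has the most advanced state before bootstrapping, you can inspect grastate.dat on each node (the path assumes the default datadir):

cat /var/lib/mysql/grastate.dat
# a cleanly shut down node records a real seqno; a crashed node typically shows seqno: -1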
Do I need to be able to read after writing?

Asynchronous Replication does not guarantee consistent views of data across nodes. Percona XtraDB Cluster offers causal reads: the replica will wait for the event to be applied before processing additional queries, guaranteeing a consistent read state across nodes.
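
A minimal sketch of forcing a causal (read-after-write) read in a session; the host and table are placeholders, and wsrep_sync_wait assumes a reasonably recent Percona XtraDB Cluster release:

mysql -h pxc-node2 -e "SET SESSION wsrep_sync_wait = 1; SELECT COUNT(*) FROM test.orders;"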

What if I do a lot of data loading?

In the recent past, it was conventional wisdom to prefer replication over Percona XtraDB Cluster in such scenarios. MTS does help if data is distributed over multiple schemas, but it is not a fit for all situations. Percona XtraDB Cluster is now a viable option, since a bug in Galera that prevented large transactions from being split properly has been identified and addressed.

Have I taken precautions against split brain?

Split brain occurs when a cluster has its nodes divided from one another, most often due to a network blip, and the nodes form two or more new and independent (and thus divergent) clusters. Percona XtraDB Cluster is configured to go into a non-primary state and refuse to take traffic. A newer setting in Percona XtraDB Cluster allows dirty reads on non-primary nodes.

Does my application require high concurrency?

Newer approaches to replication allow for parallel threads (Percona XtraDB Cluster has had this from the beginning), such as Multi-Threaded Slaves (MTS). MTS allows a replica to have multiple SQL threads, each with its own relay log. With MTS, enabling GTID makes backups via Percona XtraBackup safer, because SHOW SLAVE STATUS cannot be trusted to report the relay log position.

Am I limited on RAM?

Some Distributed solutions such as MySQL Cluster require a lot of RAM, even with file-based tables.  Be sure to plan appropriately. XtraDB Cluster works much more like a stand-alone node.

How stable is my network?

Networks are never really 100% reliable. Some “Network Problems” are due to outside factors such as system resource contention (especially on virtual machines). Network problems cause inappropriate failover issues. Use LAN segments with Percona XtraDB Cluster to minimize network traffic across the WAN.

Conclusion

Making the right choice depends on:

  • Knowing what you really need!
  • Knowing your options.
  • Knowing your constraints!
  • Understanding the pros/cons of each solution
  • Setting expectations properly!

For more information on how to plan your HA environment, and what tools are available, sign up today for my webinar Choosing a MySQL® High Availability Solution on June 23, 2016 at 10:00 am. You can also get some great insights by watching these videos on our high availability video playlist.

Severe performance regression in MySQL 5.7 crash recovery

Di, 2016-06-07 13:01

In this post, we’ll discuss some insight I’ve gained regarding severe performance regression in MySQL 5.7 crash recovery.

Working on different InnoDB log file sizes in my previous post:

What is a big innodb_log_file_size?

I tried to understand how we can make InnoDB crash recovery faster, but found a rather surprising 5.7 crash recovery regression.

Basically, crash recovery in MySQL 5.7 is two times slower, due to this issue: https://bugs.mysql.com/bug.php?id=80788. InnoDB now performs the log scan twice, compared to a single scan in MySQL 5.6 (no surprise that there is performance degradation).

Fortunately, there is a proposed patch for MySQL 5.7, so I hope it will be improved soon.

As for general crash recovery improvement, my opinion is that it would be much improved by making it multi-threaded. Right now it is significantly limited by the single thread that reads and processes log entries one-by-one. With the current hardware, consisting of tens of cores and fast SSD, we can improve crash recovery by utilizing all the resources we have.

One small improvement that can be made is to disable PERFORMANCE_SCHEMA during recovery (these stats are not needed anyway).
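
A hedged sketch of what that could look like for a one-off recovery run, either on the command line or in my.cnf:

mysqld --performance-schema=OFF --user=mysql
# or in my.cnf:
#   [mysqld]
#   performance_schema = OFF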

Webinar Thursday, June 9: Troubleshooting MySQL configuration issues

Mo, 2016-06-06 20:24

Please join us on Thursday June 9, at 10:00 am PDT (UTC-7) for the webinar Troubleshooting MySQL configuration issues.

MySQL Server is highly tunable. It has hundreds of configuration options which provide great tuning abilities and, at the same time, can be the source of various issues.

In this webinar, you will learn which types of options MySQL Server supports, when they take effect, and how to modify configuration safely. I will demonstrate best practices and tricks used by Support engineers when they work with bug reports and customer issues that depend heavily on configuration.

Register now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that help solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB, Sun, and Oracle. She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.

Percona Server 5.7.12-5 is now available

Mo, 2016-06-06 19:23

Percona is glad to announce the GA release of Percona Server 5.7.12-5 on June 6, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.7.12, including all the bug fixes in it, Percona Server 5.7.12-5 is the current GA release in the Percona Server 5.7 series. All of Percona’s software is open-source and free, all the details of the release can be found in the 5.7.12-5 milestone at Launchpad.

Bugs Fixed:

  • MEMORY storage engine did not support JSON columns. Bug fixed #1536469.
  • When Read Free Replication was enabled for TokuDB and there was no explicit primary key for the replicated TokuDB table there could be duplicated records in the table on update operation. The fix disables Read Free Replication for tables without explicit primary key and does rows lookup for UPDATE and DELETE binary log events and issues warning. Bug fixed #1536663 (#950).
  • Attempting to execute a non-existing prepared statement with Response Time Distribution plugin enabled could lead to a server crash. Bug fixed #1538019.
  • TokuDB was using different memory allocators; this was causing safemalloc warnings in debug builds and crashes because memory accounting didn’t add up. Bug fixed #1546538 (#962).
  • Adding an index to an InnoDB temporary table while expand_fast_index_creation was enabled could lead to server assertion. Bug fixed #1554622.
  • Percona Server was missing the innodb_numa_interleave server variable. Bug fixed #1561091 (upstream #80288).
  • Running SHOW STATUS in parallel to online buffer pool resizing could lead to server crash. Bug fixed #1577282.
  • InnoDB crash recovery might fail if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1529885.
  • Fixed heap allocator/deallocator mismatch in Metrics for scalability measurement. Bug fixed #1581051.
  • Percona Server is now built with system zlib library instead of the older bundled one. Bug fixed #1108016.
  • CMake would fail if TokuDB tests passed. Bug fixed #1521566.
  • Reduced the memory overhead per page in the InnoDB buffer pool. The fix was based on Facebook patch #91e979e. Bug fixed #1536693 (upstream #72466).
  • CREATE TABLE ... LIKE ... could create a system table with an unsupported enforced engine. Bug fixed #1540338.
  • Change buffer merge could throttle to 5% of I/O capacity on an idle server. Bug fixed #1547525.
  • Parallel doublewrite memory was not freed when innodb_fast_shutdown was set to 2. Bug fixed #1578139.
  • The server will now show a more descriptive error message when Percona Server fails with errno == 22 "Invalid argument", if innodb_flush_method was set to ALL_O_DIRECT. Bug fixed #1578604.
  • The error log warning Too many connections was only printed for connection attempts when max_connections + one SUPER have connected. If the extra SUPER is not connected, the warning was not printed for a non-SUPER connection attempt. Bug fixed #1583553.
  • apt-cache show command for percona-server-client was showing innotop included as part of the package. Bug fixed #1201074.
  • A replication slave would fail to connect to a master running 5.5. Bug fixed #1566642 (upstream #80962).
  • Upgrade logic for figuring if TokuDB upgrade can be performed from the version on disk to the current version was broken due to regression introduced when fixing bug #684 in Percona Server 5.7.11-4. Bug fixed #717.
  • Fixed jemalloc version parsing error. Bug fixed #528.
  • If ALTER TABLE was run while tokudb_auto_analyze variable was enabled it would trigger auto-analysis, which could lead to a server crash if ALTER TABLE DROP KEY was used because it would be operating on the old table/key meta-data. Bug fixed #945.
  • The tokudb_pk_insert_mode session variable has been deprecated and the behavior will be that of the former tokudb_pk_insert_mode set to 1. The optimization will be used where safe and not used where not safe. Bug fixed #952.
  • Bug in TokuDB Index Condition Pushdown was causing ORDER BY DESC to reverse the scan outside of the WHERE bounds. This would cause query to hang in a sending data state for several minutes in some environments with large amounts of data (3 billion records) if the ORDER BY DESC statement was used. Bugs fixed #988, #233, and #534.

Other bugs fixed: #1510564 (upstream #78981), #1533482 (upstream #79999), #1553166, #1496282 (#964), #1496786 (#956), #1566790, #718, #914, #937, #954, #955, #970, #971, #972, #976, #977, #981, #982, and #637.

NOTE: X Plugin is part of the MySQL Community Server, so it is part of the Percona Server 5.7.12-5 release as well. However, users will need to install MySQL Shell from the official MySQL repositories.

Release notes for Percona Server 5.7.12-5 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

EL5 and why we’ve had to enable TLSv1.0 again

Mo, 2016-06-06 13:52

We have had to revert back to TLSv1.0.

If you saw my previous post on TLSv1.0 (https://www.percona.com/blog/2016/05/23/percona-disabling-tlsv1-0-may-31st-2016/), you’ll know I wanted to deprecate TLSv1.0 well ahead of PCI’s changes. We made the change on May 31st.

Unfortunately, it has become apparent that EL 5, which is in the final phases of End Of Life, does not support TLSv1.1 or TLSv1.2. As such, I have had to re-enable TLSv1.0 support so that these users employing EL 5 can still receive updates from our repositories.
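If you want to check whether a given machine can even negotiate a newer protocol, a quick client-side test with openssl s_client is enough (a sketch; repo.percona.com is used here only as an example host, and the -tls1_2 option requires OpenSSL 1.0.1 or later):

# Try to force a TLSv1.2 handshake against the repository host
openssl s_client -connect repo.percona.com:443 -tls1_2 < /dev/null
# On EL 5 the bundled OpenSSL 0.9.8 does not even recognize the -tls1_2 option,
# which is exactly the limitation described above.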

If you are running EL 5 (RHEL 5 / CentOS 5 / Scientific Linux 5 / etc …), I encourage you to update as soon as possible. As of March 31st, 2017 there will be no more updates at all, and at present EL 5 is effectively receiving very few updates. It also has known vulnerabilities.

Removal of TLSv1.0 support will now take place March 31st, 2017. If there are any EL 5 backports that bring support for TLSv1.1 / TLSv1.2 in the interim, I will seek to remove support earlier.

 

Percona Server for MongoDB 3.2.6-1.0 is now available

Fr, 2016-06-03 19:49

Percona is pleased to announce the release of Percona Server for MongoDB 3.2.6-1.0 on June 3, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB 3.2.6-1.0 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. Based on MongoDB 3.2.6, it extends MongoDB with MongoRocks and PerconaFT storage engines, as well as enterprise-grade features like external authentication and audit logging at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: The PerconaFT storage engine has been deprecated and will not be available in future releases.

This release includes all changes from MongoDB 3.2.6 as well as the following:

  • PerconaFT has been deprecated. It is still available, but is no longer recommended for production use. PerconaFT will be removed in future releases.
  • MongoRocks is no longer considered experimental and is now recommended for production.

For more information, see the Embracing MongoRocks blog entry.
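If you want to try MongoRocks, here is a minimal startup sketch (the dbpath and port are placeholders; an existing dbpath created with another engine cannot simply be reopened with a different one, so plan a dump/restore or an initial sync from a replica set member):

# Start Percona Server for MongoDB with the MongoRocks storage engine
mongod --storageEngine=rocksdb --dbpath /var/lib/mongodb-rocks --port 27017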

The release notes are available in the official documentation.

 

MySQL 5.7 By Default 1/3rd Slower Than 5.6 When Using Binary Logs

Fr, 2016-06-03 16:11

Researching a performance issue, we came to a startling discovery:

MySQL 5.7 + binlogs is by default 37-45% slower than MySQL 5.6 + binlogs when otherwise using the default MySQL settings

Test server MySQL versions used:
i7, 8 threads, SSD, Centos 7.2.1511
mysql-5.6.30-linux-glibc2.5-x86_64
mysql-5.7.12-linux-glibc2.5-x86_64

mysqld --options:
--no-defaults --log-bin=mysql-bin --server-id=2

Run details:
Sysbench version 0.5, 4 threads, socket file connection

Sysbench Prepare: 

sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-auto-inc=off --mysql-engine-trx=yes --mysql-table-engine=innodb --oltp_table_size=1000000 --oltp_tables_count=1 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock prepare

Sysbench Run:

sysbench --report-interval=10 --oltp-auto-inc=off --max-time=50 --max-requests=0 --mysql-engine-trx=yes --test=/usr/share/doc/sysbench/tests/db/oltp.lua --init-rng=on --oltp_index_updates=10 --oltp_non_index_updates=10 --oltp_distinct_ranges=15 --oltp_order_ranges=15 --oltp_tables_count=1 --num-threads=4 --oltp_table_size=1000000 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock run

Results:

5.6.30: transactions: 7483 (149.60 per sec.)
5.7.12: transactions: 4689 (93.71 per sec.)  — That is a 37.36% decrease!

Note: on high-end systems with premium IO (think Fusion-IO, memory-only, high-end SSD with good caching throughput), the difference would be much smaller or negligible.

The reason?

A helpful comment from Shane Bester on a related bug report made me realize what was happening. Note the following in the MySQL Manual:

“Prior to MySQL 5.7.7, the default value of sync_binlog was 0, which configures no synchronizing to disk—in this case, the server relies on the operating system to flush the binary log’s contents from time to time as for any other file. MySQL 5.7.7 and later use a default value of 1, which is the safest choice, but as noted above can impact performance.” — https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_sync_binlog

The culprit is thus the --sync_binlog=1 change, which was made in 5.7.7 (in 5.6 it is 0 by default). While this may indeed be “the safest choice,” one has to wonder why Oracle chose to implement this default change in 5.7.7. After all, there are many other options to aid crash safety.
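To confirm which default you are actually running with, and to reproduce the comparison on your own hardware, something along these lines works (a sketch; the socket path is a placeholder, reused from the sysbench commands above):

# Check the effective value on each server (1 on a stock 5.7.7+, 0 on 5.6):
mysql --socket=/path_to_socket_file/your_socket_file.sock -e "SELECT @@GLOBAL.sync_binlog;"
# To compare apples to apples, start 5.7 with the old 5.6 behavior by adding
# --sync_binlog=0 to the options used above:
# mysqld --no-defaults --log-bin=mysql-bin --server-id=2 --sync_binlog=0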

A related blog post from the MySQL HA team states:

“Indeed, [with sync_binlog=1,] it increases the total number of fsyncs called, but since MySQL 5.6, the server groups transactions and fsync’s them together, which minimizes greatly a potential performance hit.” — http://mysqlhighavailability.com/replication-defaults-in-mysql-5-7-7/ (ref item #4)

This seems incorrect given our findings, unless perhaps it requires tuning some other option.

This raises some action points/questions for Oracle’s team: why change this now? Was 5.6 never crash-safe in terms of binary logging? How about other options that aid crash safety? Is anything [before 5.7.7] really ACID compliant by default?

In 2009 my colleague Peter Zaitsev had already posted on performance matters in connection with sync_binlog issues. More than seven years later, the questions asked in his post may still be valid today:

“May be opening binlog with O_DSYNC flag if sync_binlog=1 instead of using fsync will help? Or may be binlog pre-allocation would be good solution.” — PZ

Testing the same setup again, but this time with sync_binlog=0 and sync_binlog=1 configured identically on both servers, we see:

Results for sync_binlog=0:

5.6.30: transactions: 7472 (149.38 per sec.)
5.7.12: transactions: 6594 (131.86 per sec.)  — An 11.73% decrease

Results for sync_binlog=1:

5.6.30: transactions: 3854 (77.03 per sec.)
5.7.12: transactions: 4597 (91.89 per sec.)  — A 19.29% increase

Note: the increase here is somewhat negated by the fact that enabling sync_binlog still causes a significant overall performance drop (30% on 5.7 and 48% on 5.6). It is also interesting that this could be the effect of “tuning the defaults” in 5.7, and it makes one think about the possibility of further defaults tuning/optimization in this area.

Results for sync_binlog=100:

5.6.30: transactions: 7564 (151.12 per sec.)
5.7.12: transactions: 6515 (130.22 per sec.) — A 13.83% decrease

Thus, while 5.7.12 made some improvements when it comes to --sync_binlog=1, when --sync_binlog is turned off or set to 100 we still see a ~11% decrease in performance. This is the same when not using binary logging at all, as a test with only --no-defaults (i.e. 100% vanilla out-of-the-box MySQL 5.6.30 versus MySQL 5.7.12) shows:

Results without binlogs enabled:

5.6.30: transactions: 7891 (157.77 per sec.)
5.7.12: transactions: 6963 (139.22 per sec.)  — An 11.76% decrease

This raises another question for Oracle’s team: why is there a ~11% decrease in performance for 5.7.12 versus 5.6.30 (both vanilla) even with only four threads?

Discussing this internally, we were interested to see whether the arbitrary low number of four threads skewed the results and perhaps only showed a less realistic use case. However, testing with more threads, the numbers became worse still:

Results with 100 threads:

5.6.30: transactions: 20216 (398.89 per sec.)
5.7.12: transactions: 11097 (218.43 per sec.) — A 45.24% decrease

Results with 150 threads:

5.6.30: transactions: 11852 (233.01 per sec.)
5.7.12: transactions: 6606 (129.80 per sec.) — A 44.29% decrease

The findings in this article were compiled from a group effort.

Galera warning “last inactive check”

Do, 2016-06-02 18:51

In this post, we’ll discuss the Galera warning “last inactive check” and what it means.

Problem

I’ve been working with Percona XtraDB Cluster quite a bit recently, and have been investigating various warnings. I came across this one today:

[Warning] WSREP: last inactive check more than PT1.5S ago (PT1.51811S), skipping check

This warning is related to the evs.inactive_check_period option. This option controls the poll period for the group communication response time. If a node is delayed, it is added to a delay list and it can lead to the cluster evicting the node.
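To see what your node is currently running with, the value is embedded in the wsrep_provider_options string (a quick sketch; connection options are omitted):

# Print just the evs.inactive_* settings from the provider option string:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" | tr ';' '\n' | grep -i 'evs.inactive'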

Possible Cause

While some troubleshooting tips seem to associate the warning with VMWare snapshots, this isn’t the case here, as we see the warning on a physical machine.

I checked for backups or desynced nodes, and this also wasn’t the case. The warning was not accompanied by any errors or other information, so there was nothing critical happening.

In the troubleshooting link above, Galera developers said:

This can be seen on bare metal as well — with poorly configured mysqld, O/S, or simply being overloaded. All it means is that this thread could not get CPU time for 7.1 seconds. You can imagine that access to resources in virtual machines is even harder (especially I/O) than on bare metal, so you will see this in virtual machines more often.

This is not a Galera specific issue (it just reports being stuck, other mysqld threads are equally stuck) so there is no configuration options for that. You simply must make sure that your system and mysqld are properly configured, that there is enough RAM (buffer pool not over provisioned), that there is swap, that there are proper I/O drivers installed on guest and so on.

Basically, Galera runs in virtual machines as well as the virtual machines approximate bare metal.

It could also be an indication of unstable network or just higher average network latency than expected by the default configuration. In addition to checking network, do check I/O, swap and memory when you do see this warning.

Our graphs and counters otherwise looked healthy. If that is the case for you, this is most likely nothing to worry about.

It is also a good idea to ensure your nodes are desynced before taking backups, and to look for spikes in your workload. Also check that swappiness is set to 1 on modern kernels.
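Checking and pinning swappiness takes a few seconds (a sketch; the persistent setting below assumes the classic /etc/sysctl.conf layout):

# Show the current value, set it for the running kernel, and persist it:
sysctl vm.swappiness
sudo sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf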

If all of this looks good, ensure the servers are all talking to the same NTP server, have the same time zone and the times and dates are in sync. While this warning could be a sign of an overloaded system, if everything else looks good this warning isn’t something to worry about.

Source

The warning comes from evs_proto.cpp in the Galera code:

if (last_inactive_check_ + inactive_check_period_*3 < now)
{
    log_warn << "last inactive check more than " << inactive_check_period_*3
             << " ago (" << (now - last_inactive_check_)
             << "), skipping check";
    last_inactive_check_ = now;
    return;
}

Since the default for inactive_check_period is one second according to the Galera documentation, if it is now more than three seconds since the last check, the code logs the warning and skips the rest of the routine (the part that adds nodes to the delay list and does the other logic). The reason it does this is that it doesn’t want to rely on stale counters before making decisions. The message is really just letting you know that.

In Percona XtraDB Cluster, this setting defaults to 0.5s. The warning could simply mean that your inactive_check_period is too low, while the delay is not high enough to add the node to the delay list. You could therefore consider increasing evs.inactive_check_period to resolve the warnings. (Apparently the Galera default may also now be 0.5s, but the documentation is stale.)

Possible Solution

To find a sane value my colleague David Bennett came up with this command line, which gives you an idea of when your check warnings are happening:

$ cat mysqld.log | grep 'last inactive check more than' | perl -ne 'm/(PT(.*)S)/; print $2."\n"' | sort -n | uniq -c
1 1.55228
1 1.5523
1 1.55257
1 1.55345
1 1.55363
1 1.5543
1 1.55436
1 1.55483
1 1.5552
1 1.55582

Therefore, in this case, it may be a good idea to set evs.inactive_check_period to 1 or 1.5 seconds to make the warnings go away.
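A minimal sketch of raising it (the config file path is an assumption; if you already set other provider options, append this one to the existing semicolon-separated list instead, and do a rolling restart of each node afterwards):

# Append the provider option to the server configuration (path may differ):
cat >> /etc/my.cnf <<'EOF'
[mysqld]
wsrep_provider_options="evs.inactive_check_period=PT1S"
EOF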

Conclusion

Each node in the cluster keeps its own local copy of how it sees the topology of the entire cluster. check_inactive is a node event that is triggered every inactive_check_period seconds to help the node update its view of the whole cluster, and ensure it is accurate. Service messages can be broadcast to the cluster informing nodes of changes to the topology. For example, if a cluster node is going down it will broadcast a service message telling each node in the cluster to remove it. The action is queued but the actual view of the cluster is updated with check_inactive. This is why it adds nodes to its local copy of inactive, suspect and delayed nodes.

If a node thinks it might be looking at stale data, it doesn’t make these decisions and waits until the next time for a fresh queue. Unfortunately, if inactive_check_period is too low, it will keep giving you warnings.

Percona Live Europe call for papers is now open!

Do, 2016-06-02 18:13

The Percona Live Europe 2016 call for papers is now officially open! We’re looking forward to seeing you in Amsterdam this October 3-5, and hearing you speak! Ask yourself “Do I have…”:

  • Fresh ideas?
  • Enlightening case studies?
  • Insight on best practices?
  • In-depth technical knowledge?

If the answer to any of these is “YES,” then you need to submit your proposal for a chance to speak at Percona Live Europe 2016. Speaking is a great way to broaden not only the awareness of your company with an intelligent and engaged audience of software architects, senior engineers and developers, but also your own!

The deadline to submit is July 18th, 2016.

Tracks/Topics: 

Percona Live Europe 2016 is looking for topics for Breakout Sessions, Tutorial Sessions, and Lightning Talks:

  • Breakout Session. Submissions should be detailed and clearly indicate the topic and content of your proposal for the Conference Committee. Sessions should either be 25 minutes or 50 minutes in length, including Q&A.
  • Tutorial Session. Submissions should be detailed and include an agenda for review by the Conference Committee. Tutorial sessions should present immediate and practical applications of in-depth knowledge of MySQL, NoSQL, or Data in the Cloud technologies. They should be presented at a level between a training class and a conference breakout session. Attendees are expected to have their laptops to work through detailed and potentially hands-on presentations. Tutorials will be 3 hours in length including Q&A. If you would like to submit your proposal as a full day, six-hour tutorial, please indicate this in your submission.
  • Lightning Talks. Lightning talks are five-minute presentations focusing on one key point that will be of interest to the community. Talks can be technical, lighthearted, fun or otherwise entertaining submissions. These can include new ideas, a successful project, a cautionary story, quick tip or demonstration. This session is an opportunity for ideas to get the attention they deserve. The rules for this session are easy: five minutes and only five minutes. Use this time wisely to present the pertinent message of the subject matter and have fun doing so!

This year, the conference will feature a variety of formal tracks. We are particularly interested in proposals that fit into the areas of MySQL, MongoDB, NoSQL, ODBMS and Data in the Cloud. Our conference tracks are:

  • Analytics
  • Architecture/Design
  • Big Data
  • Case Stories
  • Development
  • New and Trending Topics
  • Operations and Management
  • Scalability/Performance
  • Security

Action Items:

With the call for papers ending July 18th, 2016, the time to submit is now! Here’s what you need to submit:

1) Pull together your proposal information:

  • Talk title
  • Abstract
  • Speaker bio, headshot, video

2) Submit your proposal here before July 18th: SUBMIT

3) After submitting, you will be given a direct link and badge to advertise to the community. Tweet, Facebook, blog – get the word out about your submission and get your followers to vote!

That’s all it takes! We’re looking forward to your submissions!

Sponsoring / Super Saver Tickets! 

Interested in sponsoring? Take advantage of early sponsorship opportunities to reach the influential Percona Live audience. Sponsorship Information

Want to attend at the cheapest rate possible? Super Saver tickets are on sale NOW! Register today.

Why use provisioned IOPS volumes for AWS databases?

Mi, 2016-06-01 22:05

In this blog, we’ll use some test results to look at the rationale for using provisioned IOPS volumes for AWS databases.

One piece of advice you often hear when running MySQL, MongoDB or other databases in the AWS EC2 environment is that you should use volumes with provisioned IOPS. This kind of makes sense on the “marketing” level, where provisioned IOPS (io1) volumes are designed for IO-intensive database workloads, while General Purpose (gp2) volumes are not. But if you go to the AWS volume type description, you will find that gp2s are shown to have pretty good IO performance. So where do all these supposed database performance problems for Amazon Elastic Block Store (EBS), with no provisioned IOPS, come from?

Here is what I found out running experiments with a beta of Percona Monitoring and Management.

I ran a typical database instance workload, where the OLTP workload uses around 20% of the system capacity, and periodically I have a single user IO intensive batch job hitting the same system. Even if you do not have batch jobs running, your backup is likely to show this same IO pattern.

What would happen in this case if you have conventional local storage? Some queueing happens on the storage level, but as there is only one user with intensive IO, the impact is typically not very significant. What do we see from the AWS gp2 volume?

At first, reads spike to more than 1.5K IOPS, and while latency increases from the normal 1-2ms, it remains below 10ms on average. However, after a couple of minutes IOPS drop to around 500, and read latency spikes to over 100ms (note the log scale on the graph).

What is happening here? The gp2 volumes behave differently than conventional storage: they allow IO bursts for short periods of time, after which the IOs are throttled (in this case to only 500/sec). How does the throttling work? By delaying IO completion so that only the allowed number of IOs complete per second – and the more concurrency we add to such a throttled device, the higher the average IO response latency becomes!

What does this mean from an application point of view? Let’s say you have a database transaction that requires 100 reads from the disk. If you have an average of 1ms latency, this transaction takes about 100ms reading from the disk, and will likely be seen as very good user experience. If you have an average IO latency of 100ms, the same transaction spends ten seconds reading from the disk – well above the tolerance for many users.   

As a DBA, you can see how putting an extra (small) load on the database system (such as running batch job or backup) can cause your boss to come screaming that the website is down ten minutes later.

There is another key difference between conventional local storage, such as RAID or SSD, and an EBS volume. Not all local storage IO is created equal, whereas an EBS General Purpose volume seems to inject latency into IO operations independently of what the IO is.

Transactional log flushes are one of the most latency critical IO operations databases perform. These are very small (often just 1 page) sequential writes. RAID controllers and SSDs can handle these very quickly by only writing in memory (battery or capacitor backed up), at a fraction of the costs of other operations. This is not the case for EBS gp2: log writes come with high latency.

We can see this latency in Performance Schema graphs, where such batch jobs correlate to a huge amount of time spent writing to the InnoDB Transactional Log file or Binary Log File.

We can also see the main InnoDB thread spending up to 30% of its time flushing the log – the number is drastically lower for a typical storage configuration.
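If you do not have PMM graphs at hand, a rough spot check is possible from the command line (a sketch; it assumes performance_schema is enabled with its default file instrumentation, and connection options are omitted):

# Cumulative time spent writing the InnoDB redo log and the binary log
# (timers are in picoseconds, so divide by 1e12 to get seconds):
mysql -e "SELECT EVENT_NAME, COUNT_WRITE, SUM_TIMER_WRITE/1e12 AS write_seconds
          FROM performance_schema.file_summary_by_event_name
          WHERE EVENT_NAME IN ('wait/io/file/innodb/innodb_log_file', 'wait/io/file/sql/binlog')"

Running it before and after a batch job shows how much of the wall-clock time went into log writes.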

Another way AWS EBS storage is different from typical local storage is that size directly buys you performance. GP2 volumes provide 3 IOPS/GB, up to 10,000 IOPS (a figure designed to be delivered 99% of the time), which means that larger volumes have higher performance – if anything, this means you’re getting better performance from your larger production volumes than from your smaller test ones.
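As a quick illustration of that scaling (the volume sizes below are arbitrary examples):

# gp2 baseline: 3 IOPS per GB, capped at 10,000 IOPS
for gb in 100 500 1000 3334; do
  iops=$(( gb * 3 ))
  [ "$iops" -gt 10000 ] && iops=10000
  echo "${gb} GB gp2 volume -> ~${iops} baseline IOPS"
done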

A final note: EBS storage is essentially network-attached, which means both slightly higher latencies and limited throughput. According to the documentation, there is a 160MiB/s throughput limit per volume, which is a lot less than even an inexpensive SATA SSD. SSDs can often provide 500MB/sec or more and are generally limited by SATA bus capacity.

My takeaways from these results:

  • EBS General Purpose volumes have decent performance for light-duty workloads – if you don’t demand a lot of IOPS from your storage for prolonged periods of time. If you do, storage with provisioned IOPS is a better choice.
  • Whenever you’re using Amazon or other environments with multi-tenant virtualized storage, I would highly suggest running some benchmark on how it behaves for the above scenarios (a minimal fio sketch follows this list). The assumptions you have about your conventional RAID or SSD storage might not apply.
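A sustained random-read run with fio against a scratch file makes the burst-then-throttle behavior easy to see (a sketch; the file name, size and runtime are placeholders, and direct I/O is used to bypass the page cache):

# Run random 16k reads for 20 minutes and watch IOPS/latency over time;
# on a gp2 volume you should see throughput drop once the burst credits run out.
fio --name=gp2-burst-test --filename=/data/fio-testfile --size=10G \
    --rw=randread --bs=16k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=1200 --time_based --group_reporting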

Want to play around with live graphs? Check out our PMM Demo, which is currently running the stated workload on Amazon EC2. You can also install the beta version to use with your own system.