MySQL High Performance
MyRocks Docker images
In this post, I’ll point you to MyRocks Docker images with binaries, allowing you to install and play with the software.
During the @Scale conference, Facebook announced that MyRocks is mature enough that it has been installed on 5% of Facebook’s MySQL slaves. This has saved 50% of the space on these slaves, which allows them to decrease the number of servers by half. Check out the announcement here: https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/
Those are pretty impressive numbers, so I decided to take a serious look at MyRocks. The biggest showstopper is usually binary availability, since Facebook only provides the source code: https://github.com/facebook/mysql-5.6.
You can get the image from https://hub.docker.com/r/perconalab/myrocks/.
To start MyRocks:
docker run -d --name myr -P perconalab/myrocks
To access it, use a regular MySQL client:
mysql -h127.0.0.1
From there you should see RocksDB installed:
show engines; +------------+---------+----------------------------------------------------------------+--------------+------+------------+ | Engine | Support | Comment | Transactions | XA | Savepoints | +------------+---------+----------------------------------------------------------------+--------------+------+------------+ | ROCKSDB | DEFAULT | RocksDB storage engine | YES | YES | YES |I hope it makes easier to start experimenting with MyRocks!
MongoDB at Percona Live Europe
This year, you will find a great deal about MongoDB at Percona Live Europe.
As we continue to work on growing the independent MongoDB ecosystem, this year’s Percona Live Europe in Amsterdam includes many talks about MongoDB. If your company uses MongoDB technologies, is focused exclusively on developing with MongoDB or MongoDB operations, or is just evaluating MongoDB, attending Percona Live Europe will prove a valuable experience.
As always with Percona Live conferences, the focus is squarely on the technical content — not sales pitches. We encourage our speakers to tell the truth: the good, the bad and the ugly. There is never a “silver bullet” when it comes to technology — only tradeoffs between different solution options.
As someone who has worked in database operations for more than 15 years, I recognize and respect the value of “negative information.” I like knowing what does not work, what you should not do and where trouble lies. Negative information often proves more valuable than knowing how great the features of a specific technology work — especially since the product’s marketing team tends to highlight those very well (and they seldom require independent coverage).
For MongoDB at this year’s Percona Live Europe:- We have talks about MongoRocks, a RocksDB powered storage engine for MongoDB — the one you absolutely need to know about if you’re looking to run the most efficient MongoDB deployment at scale!
- We will discuss development and operations best practices for MongoDB, including MongoDB Replication, MongoDB Design Patterns, Sharding, Scalability practices, and secondary reads.
- We will cover MongoDB Backups best practices, as well as several talks about MongoDB monitoring and management (1, 2, 3) — all of them with MongoDB Community Edition and Percona Server for MongoDB (so they don’t require a MongoDB Enterprise subscription).
- Looking for MongoDB analytics, reporting and dashboarding options? We have those covered too!
There will also be a number of talks about how MongoDB interfaces with other technologies. We show how ToroDB can use the MongoDB protocol while storing data in a relational database (and why that might be a good idea), we contrast and compare MySQL and MongoDB Geospatial features, and examine MongoDB from MySQL DBA point of view.
We also how to use Apache Spark to unify data from MongoDB, MySQL, and Redis, and what are generally the best practices for choosing databases for different application needs.
Finally, if you’re just starting with MongoDB and would like a jump start before attending more detailed MongoDB talks, we’ve got a full day MongoDB 101 tutorial for you.
Join us for the full conference, or register for just one day if that is all your schedule allows. But come to Percona Live Europe in Amsterdam on October 3-5 to get the best and latest MongoDB information.
InnoDB Troubleshooting: Q & A
In this blog, I will provide answers to the Q & A for the InnoDB Troubleshooting webinar.
First, I want to thank everybody for attending the August 11 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:
Q: What’s a good speed for buffer pool speed/size for maximum query performance?
A: I am sorry, I don’t quite understand the question. InnoDB buffer pool is an in-memory buffer. In an ideal case, your whole active dataset (rows that are accessed by application regularly) should be in the buffer pool. There is a good blog post by Peter Zaitsev describing how to find the best size for the buffer pool.
Q: Any maximum range for these InnoDB options?
A: I am again sorry, I only see questions after the webinar and don’t know which slide you were on when you asked about options. But generally speaking, the maximum ranges should be limited by hardware: the size of InnoDB buffer pool limited by the amount of physical memory you have, the size of innodb_io_capacity limited by the number of IOPS which your disk can handle, and the number of concurrent threads limited by the number of CPU cores.
Q: On a AWS r3.4xlarge, 16 CPU, 119GB of RAM, EBS volumes, what innodb_thread_concurrency, innodb_read_io_threads, innodb_write_io_threads would you recommend? and innodb_read_io_capacity?
A: innodb_thread_concurrency = 16, innodb_read_io_threads = 8, innodb_write_io_threads = 8, innodb_io_capacity — but it depends on the speed of your disks. As far as I know, AWS offers disks with different speeds. You should consult IOPS about what your disks can handle when setting innodb_io_capacity, and “Max IOPS” when setting innodb_io_capacity_max.
Q: About InnoDB structures and parallelism: Are there InnoDB settings that can prevent or reduce latching (causes semaphore locks and shutdown after 600s) that occur trying to add an index object to memory but only DML queries on the primary key are running?
A: Unfortunately, semaphore locks for the CREATE INDEX command are not avoidable. You only can affect other factors that speed up index creation. For example, how fast you write records to the disk or how many concurrent queries you run. Kill queries that are waiting for a lock too long. There is an old feature request asking to handle long semaphore waits gracefully. Consider clicking “Affects Me” button to bring it to the developers’ attention.
Q: How can we check these threads?
A: I assume you are asking about InnoDB threads? You can find information about running threads in SHOW ENGINE INNODB STATUS :
-------- FILE I/O -------- I/O thread 0 state: waiting for completed aio requests (insert buffer thread) I/O thread 1 state: waiting for completed aio requests (log thread) I/O thread 2 state: waiting for completed aio requests (read thread) I/O thread 3 state: waiting for completed aio requests (read thread) I/O thread 4 state: waiting for completed aio requests (read thread) I/O thread 5 state: waiting for completed aio requests (read thread) I/O thread 6 state: waiting for completed aio requests (write thread) I/O thread 7 state: waiting for completed aio requests (write thread) I/O thread 8 state: waiting for completed aio requests (write thread) I/O thread 9 state: waiting for completed aio requests (write thread) Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] , ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 1; buffer pool: 0 529 OS file reads, 252 OS file writes, 251 OS fsyncs 0.74 reads/s, 16384 avg bytes/read, 7.97 writes/s, 7.94 fsyncs/sAnd in the Performance Schema THREADS table:
mysql> select thread_id, name, type from performance_schema.threads where name like '%innodb%'; +-----------+----------------------------------------+------------+ | thread_id | name | type | +-----------+----------------------------------------+------------+ | 2 | thread/innodb/io_handler_thread | BACKGROUND | | 3 | thread/innodb/io_handler_thread | BACKGROUND | | 4 | thread/innodb/io_handler_thread | BACKGROUND | | 5 | thread/innodb/io_handler_thread | BACKGROUND | | 6 | thread/innodb/io_handler_thread | BACKGROUND | | 7 | thread/innodb/io_handler_thread | BACKGROUND | | 8 | thread/innodb/io_handler_thread | BACKGROUND | | 9 | thread/innodb/io_handler_thread | BACKGROUND | | 10 | thread/innodb/io_handler_thread | BACKGROUND | | 11 | thread/innodb/io_handler_thread | BACKGROUND | | 13 | thread/innodb/srv_lock_timeout_thread | BACKGROUND | | 14 | thread/innodb/srv_monitor_thread | BACKGROUND | | 15 | thread/innodb/srv_error_monitor_thread | BACKGROUND | | 16 | thread/innodb/srv_master_thread | BACKGROUND | | 17 | thread/innodb/srv_purge_thread | BACKGROUND | | 18 | thread/innodb/page_cleaner_thread | BACKGROUND | | 19 | thread/innodb/lru_manager_thread | BACKGROUND | +-----------+----------------------------------------+------------+ 17 rows in set (0.00 sec)Q: Give brief on InnoDB thread is not same as connection thread.
A: You create a MySQL connection thread each time the client connects to the server. Generally, the lifetime of this thread is the same as the connection (I won’t discuss the thread cache and thread pool plugin here to avoid unnecessary complexity). This way, if you have 100 connections you have 100 connection threads. But not all of these threads do something. Some are actively querying MySQL, but others are sleeping. You can find the number of threads actively doing something if you examine the status variable Threads_running. InnoDB doesn’t create as many threads as connections to perform its job effectively. It creates fewer threads (ideally, it is same as the number of CPU cores). So, for example just 16 InnoDB threads can handle100 and more connection threads effectively.
Q: How can we delete bulk data in Percona XtraDB Cluster? without affecting production? nearly 6 million records worth 40 GB size table
A: You can use the utility pt-archiver. It deletes rows in chunks. While your database will still have to handle all these writes, the option --max-flow-ctl pauses a purge job if the cluster spent too much time pausing for flow control.
Q: Why do we sometimes get “–tc-heuristic-recover” message in error logs? Especially when we recover after a crash? What does this indicate? And should we commit or rollback?
A: This means you used two transactional engines that support XA in the same transaction, and mysqld crashed in the middle of the transaction. Now mysqld cannot determine which strategy to use when recovering transactions: either COMMIT or ROLLBACK. Strangely, this option is documented as “not used”. It certainly is, however. Test case for bug #70860 proves it. I reported a documentation bug #82780.
Q: Which parameter controls the InnoDB thread count?
A: The main parameter is innodb_thread_concurrency. For fine tuning, use innodb_read_io_threads, innodb_write_io_threads, innodb_purge_threads, innodb_page_cleaners. Q:
Q: At what frequency will the InnoDB status be dumped in a file by using innodb-status-file?
A: Approximately every 15 seconds, but it can vary slightly depending on the server load.
Q: I faced an issue that once disk got detached from running server due to some issue on AWS ec2. MySQL went to default mode. After MySQL stopped and started, we observed slave skipped some around 15 mins data. We got it by foreign key relationship issue. Can you please explain why it was skipped data in slave?
A: Amazon Aurora supports two kinds of replication: physical as implemented by Amazon (this is the default for replicas in the same region), and the regular asynchronous replication for cross-region replication. If you use the former, I cannot help you because this is a closed-source Amazon feature. You need to report a bug to Amazon. If you used the latter, this looks buggy too. According to my experience, it should not happen. With regular replication you need to check which transactions were applied (best if you use GTIDs, or at least the log-slave-updates option) and which were not. If you find a gap, report a bug at bugs.mysql.com.
Q: Can you explain more about adaptive hash index?
A: InnoDB stores its indexes on disks as a B-Tree. While B-Tree indexes are effective in general, some queries can take advantage of using much simpler hash indexes. While your server is in use, InnoDB analyzes the queries it is currently processing and builds an in-memory hash index inside the buffer pool (using the prefix of the B-Tree key). While adaptive hash index generally works well, “with some workloads, the speedup from hash index lookups greatly outweighs the extra work to monitor index lookups and maintain the hash index structure” Another issue with adaptive hash index is that until version 5.7.8, it was protected by a single latch — which could be a contention point under heavy workloads. Since 5.7.8, adaptive hash index can be partitioned. The number of parts is controlled by option innodb_adaptive_hash_index_parts.
Save
MHA Quick Start Guide
MHA (Master High Availability Manager and tools for MySQL) is one of the most important pieces of our managed services. When properly set up, it can check replication health, move writer and reader virtual IPs, perform failovers, and have its output constantly monitored by Nagios. Is it easy to deploy and follows the KISS (Keep It Simple, Stupid) philosophy that I love so much.
This blog post is a quick start guide to try it out and play with it in your own testing environment. I assume that you already know how to install software, deal with SSH keys and setup replication in MySQL. The post just covers MHA configuration.
Testing environmentTaken from /etc/hosts
192.168.1.116 mysql-server1 192.168.1.117 mysql-server2 192.168.1.118 mysql-server3 192.168.1.119 mha-managermysql-server1: Our master MySQL server with 5.6
mysql-server2: Slave server
mysql-server3: Slave server
mha-manager: The server monitors the replication and from where we manage MHA. The installation is also required to meet some Perl dependencies.
We just introduced some new concepts, the MHA Node and MHA Manager:
MHA Node
It is installed and runs on each MySQL server. This is the piece of software that it is invoked by the manager every time we want to do something, like for example a failover or a check.
MHA Manager
As explained before, this is our operations center. The manager monitors the services, replication, and includes several administrative command lines.
Pre-requisites- Replication must already be running. MHA manages replication and monitors it, but it is not a tool to deploy it. So MySQL and replication need to be running already.
- All hosts should be able to connect to each other using public SSH keys.
- All nodes need to be able to connect to each other’s MySQL servers.
- All nodes should have the same replication user and password.
- In the case of multi-master setups, only one writable node is allowed. All others need to be configured with read_only.
- MySQL version has to be 5.0 or later.
- Candidates for master failover should have binary log enabled. The replication user must exist there too.
- Binary log filtering variables should be the same on all servers (replicate-wild%, binlog-do-db…).
- Disable automatic relay-log purge and do it regularly from a cron task. You can use an MHA-included script called “purge_relay_logs”.
While that is a large list of requisites, I think that they are pretty standard and logical.
MHA installationAs explained before, the MHA Node needs to be installed on all the nodes. You can download it from this Google Drive link.
This post shows you how to install it using the source code, but there are RPM packages available. Deb too, but only for older versions. Use the installation method you prefer. This is how to compile it:
tar -xzf mha4mysql-node-0.57.tar.gz perl Makefile.PL make make installThe commands included in the node package are save_binary_logs, filter_mysqlbinlog, purge_relay_logs, apply_diff_relay_logs. Mostly tools that the manager needs to call in order to perform a failover, while trying to minimize or avoid any data loss.
On the manager server, you need to install MHA Node plus MHA Manager. This is due to MHA Manager dependance on a Perl library that comes with MHA Node. The installation process is just the same.
ConfigurationWe only need one configuration file on the Manager node. The example below is a good starting point:
# cat /etc/app1.cnf [server default] # mysql user and password user=root password=supersecure ssh_user=root # working directory on the manager manager_workdir=/var/log/masterha/app1 # working directory on MySQL servers remote_workdir=/var/log/masterha/app1 [server1] hostname=mysql-server1 candidate_master=1 [server2] hostname=mysql-server2 candidate_master=1 [server3] hostname=mysql-server3 no_master=1So pretty straightforward. It specifies that there are three servers, two that can be master and one that can’t be promoted to master.
Let’s check if we meet some of the pre-requisites. We are going to test if replication is working, can be monitored, and also if SSH connectivity works.
# masterha_check_ssh --conf=/etc/app1.cnf [...] [info] All SSH connection tests passed successfully.It works. Now let’s check MySQL:
# masterha_check_repl --conf=/etc/app1.cnf [...] MySQL Replication Health is OK. Start the manager and operationsEverything is setup, we meet the pre-requisites. We can start our manager:
# masterha_manager --remove_dead_master_conf --conf=/etc/app1.cnf [...] [info] Starting ping health check on mysql-server1(192.168.1.116:3306).. [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..The manager found our master and it is now actively monitoring it using a SELECT command. –remove_dead_master_conf tells the manager that if the master goes down, it must edit the config file and remove the master’s configuration from it after a successful failover. This avoids the “there is a dead slave” error when you restart the manager. All servers listed in the conf should be part of the replication and in good health, or the manager will refuse to work.
Automatic and manual failoverGood, everything is running as expected. What happens if the MySQL master dies!?!
[...] [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away) [info] Executing SSH check script: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql,/var/log/mysql --output_file=/var/log/masterha/app1/save_binary_logs_test --manager_version=0.57 --binlog_prefix=mysql-bin Creating /var/log/masterha/app1 if not exists.. ok. Checking output directory is accessible or not.. ok. Binlog found at /var/log/mysql, up to mysql-bin.000002 [info] HealthCheck: SSH to mha-server1 is reachable. [...]First, it tries to connect by SSH to read the binary log and save it. MHA can apply the missing binary log events to the remaining slaves so they are up to date with all the before-failover info. Nice!
Theses different phases follow:
* Phase 1: Configuration Check Phase.. * Phase 2: Dead Master Shutdown Phase.. * Phase 3: Master Recovery Phase.. * Phase 3.1: Getting Latest Slaves Phase.. * Phase 3.2: Saving Dead Master's Binlog Phase.. * Phase 3.3: Determining New Master Phase.. [info] Finding the latest slave that has all relay logs for recovering other slaves.. [info] All slaves received relay logs to the same position. No need to resync each other. [info] Starting master failover.. [info] From: mysql-server1(192.168.1.116:3306) (current master) +--mysql-server2(192.168.1.117:3306) +--mysql-server3(192.168.1.118:3306) To: mysql-server2(192.168.1.117:3306) (new master) +--mysql-server3(192.168.1.118:3306) * Phase 3.3: New Master Diff Log Generation Phase.. * Phase 3.4: Master Log Apply Phase.. * Phase 4: Slaves Recovery Phase.. * Phase 4.1: Starting Parallel Slave Diff Log Generation Phase.. * Phase 4.2: Starting Parallel Slave Log Apply Phase.. * Phase 5: New master cleanup phase..The phases are pretty self-explanatory. MHA tries to get all the data possible from the master’s binary log and slave’s relay log (the one that is more advanced) to avoid losing any data or promote a slave that it was far behind the master. So it tries to promote a slave with the most current data as possible. We see that server2 has been promoted to master, because in our configuration we specified that server3 shouldn’t be promoted.
After the failover, the manager service stops itself. If we check the config file, the failed server is not there anymore. Now the recovery is up to you. You need to get the old master back in the replication chain, then add it again to the config file and start the manager.
It is also possible to perform a manual failover (if, for example, you need to do some maintenance on the master server). To do that you need to:
- Stop masterha_manager.
- Run masterha_master_switch –master_state=alive –conf=/etc/app1.cnf. The line says that you want to switch the master, but the actual master is still alive, so no need to mark it as dead or remove it from the conf file.
And that’s it. Here is part of the output. It shows the tool making the decision on the new topology and asking the user for confirmation:
[info] From: mysql-server1(192.168.1.116:3306) (current master) +--mysql-server2(192.168.1.117:3306) +--mysql-server3(192.168.1.118:3306) To: mysql-server2(192.168.1.117:3306) (new master) +--mysql-server3(192.168.1.118:3306) Starting master switch from mha-server1(192.168.1.116:3306) to mha-server2(192.168.1.117:3306)? (yes/NO): yes [...] [info] Switching master to mha-server2(192.168.1.117:3306) completed successfully.You can also employ some extra parameters that are really useful in some cases:
–orig_master_is_new_slave: if you want to make the old master a slave of the new one.
–running_updates_limit: if the current master executes write queries that take more than this parameter’s setting, or if any of the MySQL slaves behind master take more than this parameter, the master switch aborts. By default, it’s 1 (1 second). All these checks are for safety reasons.
–interactive=0: if you want to skip all the confirmation requests and questions masterha_master_switch could ask.
Check this link in case you use GTID and want to avoid problems with errant transactions during the failover:
https://www.percona.com/blog/2015/12/02/gtid-failover-with-mysqlslavetrx-fix-errant-transactions/
Custom scriptsSince this is a quick guide to start playing around with MHA, I won’t cover advanced topics in detail. But I will mention a few:
- Binary log servers. If you are using GTID, MHA can connect to binary log streaming servers. MHA can check them, and if they are ahead, apply the differences to the remaining servers after a failover.https://code.google.com/p/mysql-master-ha/wiki/Configuration#Binlog_server
- Custom scripts. MHA can move IPs around, shutdown a server and send you a report in case something happens. It needs a custom script, however. MHA comes with some example scripts, but you would need to write one that fits your environment.The directives are master_ip_failover_script, shutdown_script, report_script. With them configured, MHA will send you an email or a message to your mobile device in the case of a failover, shutdown the server and move IPs between servers. Pretty nice!
Hope you found this quickstart guide useful for your own tests. Remember, one of the most important things: don’t overdo automation!
Percona Live Europe featured talk with Manyi Lu — MySQL 8.0: what’s new in Optimizer
Welcome to a new Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!
In this Percona Live Europe featured talk, we’ll meet Manyi Lu, Director Software Development at Oracle. Her talk will be on MySQL 8.0: what’s new in Optimizer. There are substantial improvements in the optimizer in MySQL 5.7 and MySQL 8.0. Most noticeably, users can now combine relational data with NoSQL using the new JSON features. I had a chance to speak with Manyi and learn a bit more about the MySQL 8.0:
Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.
Manyi: Oh, my interest in database development goes way back to university almost twenty years ago. After graduation, I joined local startup Clustra and worked on the development of a highly available distributed database system for the telecom sector. Since then, I have worked on various aspects of the database, kernel, and replication. Lately I am heading the MySQL optimizer and GIS team.
What I love most about my work are the talented and dedicated people I am surrounded by, both within the team and in the MySQL community. We are passionate about building a database used by millions.
Percona: Your talk is called “MySQL 8.0: what’s new in Optimizer.” So, obvious question, what is new in the MySQL 8.0 Optimizer?
Manyi: There are a number of interesting features in 8.0. CTE or Common Table Expression has been one of the most demanded SQL features. MySQL 8.0 will support both the WITH and WITH RECURSIVE clauses. A recursive CTE is quite useful for reproducing reports based on hierarchical data. For DBAs, Invisible Index should make life easier. They can mark an index invisible to the optimizer, check the performance and then decide to either drop it or keep it. On the performance side, we have improved the performance of table scans, range scans and similar queries by batching up records read from the storage engine into the server. We have significant work happening in the cost model area. In order to produce more optimal query plans, we have started the work on adding support for histograms, and for taking into account whether data already is in memory or needs to be read from disk.
Besides the optimizer, my team is also putting a major effort into utf8mb4 support. We have added a large set of utf8mb4 collations based on the latest Unicode standard. These collations have better support for emojis and languages. Utf8 is the dominating character encoding for the web, and this move will make the life easier for the vast majority of MySQL users. We also plan to add support for accent and case sensitive collations.
Please keep in mind that 8.0.0 is the first milestone release. There are quite a few features in the pipeline down the road.
Percona: How are some of the bridges between relational and NoSQL environments (like JSON support) of benefit to database deployments?
Manyi: The JSON support that we introduced in 5.7 has been immensely popular because it solves some very basic day-to-day problems. Relational database forces you to have a fixed schema, and the JSON datatype gives you the flexibility to store data without a schema. In the past, people stored relational data in MySQL and had to install yet another datastore to handle unstructured or semi-structured data that are schema-less in nature. With JSON support, you can store both relational and non-relational data in the same database, which makes database deployment much simpler. And not only that, but you can also perform queries across the boundaries of relational and non-relational data.
Clients that communicate with a MySQL Server using the newly introduced X Protocol can use the X DevAPI to develop applications. Developers do not even need to understand SQL if they do not want to. There are a number of connectors that support the X protocol, so you can use X DevApi in your preferred programming language. We have made MySQL more appealing to a larger range of developers.
Percona: What is the latest on the refactoring of the MySQL Optimizer and Parser?
Manyi: The codebase of optimizer and parser used to be quite messy. The parsing, optimizing and execution stages were intermingled, and the code was hard to maintain. We have had a long-running effort to clean up the codebase. In 5.7, the optimization stage was separated from the execution stage. In 8.0, the focus is refactoring the prepare stage and complete parser rewrite.
We have already seen the benefits of the refactoring work. Development time on new features has been reduced. CTE is a good example. Without refactoring done previously, it would have taken much longer to implement CTE. With a cleaner codebase, we also managed to reduce the bug count, which means more development resources can be allocated to new features instead of maintenance.
Percona: Where do you see MySQL heading in order to deal with some of the database trends that keep you awake at night?
Manyi: One industry trend is cloud computing and Database as a Service becoming viable options to in-house databases. In particular, it speeds up technology deployments and reduces initial investments for smaller organizations. MySQL, being the most popular open source database, fits well into the cloud data management trend.
What we can do is make MySQL even better in the cloud setting. E.g., better support for horizontal scaling, fail-over, sharding, cross-shard queries and the like.
Percona: What are looking forward to the most at Percona Live Europe this year?
Manyi: I like to speak and get feedback from MySQL users. Their input has a big impact on our roadmap. I also look forward to learning more about innovations by web-scale players like Facebook, Alibaba and others. I always feel more energized after talking to people who are passionate about MySQL and databases in general.
You can learn more about Manyi and her thoughts on MySQL 8.0 here: http://mysqlserverteam.com/
Want to find out more about Manyi, MySQL and Oracle? Register for Percona Live Europe 2016, and see her talk MySQL 8.0: what’s new in Optimizer.
Use the code FeaturedTalk and receive €25 off the current registration price!
Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBA’s, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.
This conference has something for everyone!
Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.
Webinar Thursday, September 1 – MongoDB Security: A Practical Approach
Please join David Murphy as he presents a webinar Thursday, September 1 at 10 am PDT (UTC-7) on MongoDB Security: A Practical Approach. (Date changed*)
This webinar will discuss the many features and options available in the MongoDB community to help secure your database environment. First, we will cover how these features work and how to fill in known gaps. Next, we will look at the enterprise-type features shared by Percona Server for MongoDB and MongoDB Enterprise. Finally, we will examine some disk and network designs that can ensure your security out of the gate – you don’t want to read about how your MongoDB database leaked hundreds of gigs of data on someone’s security blog!
We will cover the following topics:
- Using SSL for all things
- Limiting network access
- Custom roles
- The missing true SUPER user
- Wildcarding databases and collections
- Adding specific actions to a normal role
- LDAP
- Auditing
- Disk encryption
- Network designs
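One of the topics above, custom roles, boils down to composing a role document that grants a precise set of actions. As a rough sketch (the role name, database, and privilege set below are invented for illustration, not taken from the webinar), this is what such a document could look like when built in Python for PyMongo's Database.command("createRole", ...):

```python
# Hypothetical custom-role document for MongoDB's createRole command.
# Role name, database name, and action list are illustrative only.
def make_backup_role(db_name):
    """Build a createRole command document granting read-only access
    plus listing actions on a single database."""
    return {
        "createRole": "appBackup",
        "privileges": [
            {
                # Empty collection string = all collections in db_name
                "resource": {"db": db_name, "collection": ""},
                "actions": ["find", "listCollections", "listIndexes"],
            }
        ],
        "roles": [],  # no inherited roles
    }

role_doc = make_backup_role("shop")
```

You would pass a document like this to `db.command(role_doc)` on the admin connection; the webinar covers the actual role design trade-offs.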
Register for the webinar here.
David Murphy, MongoDB Practice Manager

David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David was the Lead DBA on the ObjectRocket by Rackspace team. With the growth involved in any recently acquired startup, David's role covered a wide range of duties: evangelism, research, run book development, knowledge base design, consulting, technical account management, mentoring and much more. Before the world of MongoDB, he was a MySQL and NoSQL architect at Electronic Arts, responsible for tuning, design and technology choices on some of the largest titles in the world, such as FIFA, SimCity, and Battlefield. David maintains an active interest in speaking about databases and exploring new technologies.

MySQL Sharding with ProxySQL
This article demonstrates how MySQL sharding with ProxySQL works.
Recently a colleague of mine asked me to provide a simple example on how ProxySQL performs sharding.
In response, I’m writing this short tutorial in the hope it will illustrate ProxySQL’s sharding functionalities, and help people out there better understand how to use it.
ProxySQL is a very powerful platform that allows us to manipulate and manage our connections and queries in a simple but effective way. This article shows you how.
Before starting let’s clarify some basic concepts.
- ProxySQL organizes its internal set of servers in Host Groups (HG), and each HG can be associated with users and Query Rules (QR)
- Each QR can be final (apply = 1), or not final, letting ProxySQL continue to evaluate the remaining QRs
- A QR can be a rewrite action, be a simple match, have a specific target HG, or be generic
- QRs are defined using regex
You can see QRs as a sequence of filters and transformations that you can arrange as you like.
These simple basic rules give us enormous flexibility. They allow us to create very simple actions like a simple query re-write, or very complex chains that could see dozens of QR concatenated. Documentation can be found here.
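To make the filter-chain idea concrete, here is a toy model in Python (not ProxySQL's actual implementation; the patterns and hostgroup numbers are invented) showing how rules are tried in order, how a rule can set a destination hostgroup, and how apply = 1 stops the chain:

```python
import re

# Toy model of a ProxySQL Query Rule chain. Each rule is:
# (match_pattern, replace_pattern or None, destination_hg or None, apply)
RULES = [
    (r"^SELECT .* FOR UPDATE$", None, 10, 1),  # locking reads -> HG 10
    (r"^SELECT ",               None, 20, 1),  # other reads   -> HG 20
    (r".*",                     None, 10, 1),  # everything else -> HG 10
]

def route(query):
    """Walk the rule chain: optionally rewrite the query, record the
    destination hostgroup, and stop at the first rule with apply=1."""
    hostgroup = None
    for pattern, replace, dest_hg, apply_flag in RULES:
        if re.search(pattern, query):
            if replace is not None:
                query = re.sub(pattern, replace, query)
            if dest_hg is not None:
                hostgroup = dest_hg
            if apply_flag:
                break  # final rule: do not parse further QRs
    return query, hostgroup
```

In real ProxySQL the rules live in mysql_query_rules and are matched against incoming traffic the same way: top to bottom, first-final-match wins.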
The information related to HGs and QRs is easily accessible through the ProxySQL administrator interface, in the tables mysql_servers, mysql_query_rules and stats.stats_mysql_query_rules. The last one lets us evaluate whether and how each rule is used.
With regard to sharding, what can ProxySQL do to help us achieve what we need (in a relatively easy way)? Some people or companies implement the sharding logic in the application, use multiple connections to reach the different targets, or add logic to split the load across several schemas/tables. ProxySQL simplifies the way connectivity and query distribution work, either by reading data in the query itself or by accepting hints.
No matter what the requirements, the sharding exercise can be summarized in a few different categories.
- By splitting the data inside the same container (like having a shard by State where each State is a schema)
- By physical data location (this can have multiple MySQL servers in the same room, as well as having them geographically distributed)
- A combination of the two, where I split by State using a dedicated server per State, and then split again by schema/table on some other attribute (say, by gender)
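To make the third category concrete, here is a small sketch (outside ProxySQL, with invented server addresses and naming) of how a single shard key lookup could combine both splits, first choosing the physical server by State, then the schema by gender:

```python
# Hypothetical two-level shard map: State -> server, then
# (State, gender) -> schema inside that server.
SERVER_BY_STATE = {
    "CA": "192.168.1.5",
    "NY": "192.168.1.6",
}

def shard_target(state, gender):
    """Return (host, schema) for a row keyed by state and gender."""
    host = SERVER_BY_STATE[state]       # physical location split
    schema = "%s_%s" % (state, gender)  # schema split inside the server
    return host, schema
```

ProxySQL lets you express the same decision declaratively with query rules, as the examples below show.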
In the following examples, I show how to use ProxySQL to cover the three different scenarios defined above (and a bit more).
The example below will report text from the Admin ProxySQL interface and the MySQL console. I will mark each one as follows:
- Mc for MySQL console
- Pa for ProxySQL Admin
Please note that the MySQL console MUST use the -c flag to pass the comments in the query. This is because the default behavior in the MySQL console is to remove the comments.
I am going to illustrate procedures that you can replicate on your laptop, and when possible I will mention a real implementation. This is because I want you to test the ProxySQL functionality directly.
For the examples described below, I used ProxySQL v1.2.2, which is going to become the master branch in a few days. You can download it with:
git clone https://github.com/sysown/proxysql.git
git checkout v1.2.2

Then to compile:
cd <path to proxy source code>
make
make install

If you need full instructions on how to install and configure ProxySQL, read here and here.
Finally, you need to have the WORLD test DB loaded. WORLD test DB can be found here.
Shard inside the same MySQL server using three different schemas split by continent

Obviously, you can have any number of shards and relative schemas. What is relevant here is demonstrating how traffic gets redirected to different targets (schemas), maintaining the same structure (tables), by discriminating the target based on relevant information in the data or passed by the application.
OK, let us roll the ball.
[Mc]
+---------------+-------------+
| Continent     | count(Code) |
+---------------+-------------+
| Asia          |          51 | <--
| Europe        |          46 | <--
| North America |          37 |
| Africa        |          58 | <--
| Oceania       |          28 |
| Antarctica    |           5 |
| South America |          14 |
+---------------+-------------+

For this exercise, I will use three hosts in replica.
To summarize, I will need:
- Three hosts: 192.168.1.[5-6-7]
- Three schemas: Continent X + world schema
- One user : user_shardRW
- Three hostgroups: 10, 20, 30 (for future use)
First, we will create the schemas (Asia, Europe, Africa, North_America):
[Mc]
Create schema [Asia|Europe|North_America|Africa];
create table Asia.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Asia' ;
create table Europe.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Europe' ;
create table Africa.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Africa' ;
create table North_America.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='North America' ;
create table Asia.Country as select * from world.Country where Continent='Asia' ;
create table Europe.Country as select * from world.Country where Continent='Europe' ;
create table Africa.Country as select * from world.Country where Continent='Africa' ;
create table North_America.Country as select * from world.Country where Continent='North America' ;

Now, create the user:
grant all on *.* to user_shardRW@'%' identified by 'test';

Now let us start to configure ProxySQL:
[Pa]
insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('user_shardRW','test',1,10,'test_shard1');
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',10,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',20,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',30,3306,100);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

With this we have defined the user, the servers and the host groups.
Let us start to define the logic with the query rules:
[Pa]
delete from mysql_query_rules where rule_id > 30;
INSERT INTO mysql_query_rules (rule_id,active,username,match_pattern,replace_pattern,apply) VALUES (31,1,'user_shardRW',"^SELECT\s*(.*)\s*from\s*world.(\S*)\s(.*).*Continent='(\S*)'\s*(\s*.*)$","SELECT \1 from \4.\2 WHERE 1=1 \5",1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

I am now going to query the master (or a single node), but I expect ProxySQL to redirect the query to the right shard, catching the value of the continent:
[Mc]
SELECT name,population from world.City WHERE Continent='Europe' and CountryCode='ITA' order by population desc limit 1;
+------+------------+
| name | population |
+------+------------+
| Roma |    2643581 |
+------+------------+

You can say: “Hey! You are querying the schema World, of course you get back the correct data.”
This is not what really happened. ProxySQL did not query the schema World, but the schema Europe.
Let’s look at the details:
[Pa]
select * from stats_mysql_query_digest;

Original    : SELECT name,population from world.City WHERE Continent='Europe' and CountryCode='ITA' order by population desc limit 1;
Transformed : SELECT name,population from Europe.City WHERE ?=? and CountryCode=? order by population desc limit ?

Let me explain what happened.
Rule 31 tells ProxySQL to take all the fields we pass in the query, catch the continent value in the WHERE clause, keep any condition that follows, and reassemble the query, all using the regular expression.
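If you want to see exactly what rule 31's regex captures, you can replay the same match and replace patterns with Python's re module (this only demonstrates the pattern; ProxySQL applies it internally):

```python
import re

# Rule 31's match and replace patterns, replayed with Python's re
# purely to show what the groups capture.
match_pattern = r"^SELECT\s*(.*)\s*from\s*world.(\S*)\s(.*).*Continent='(\S*)'\s*(\s*.*)$"
replace_pattern = r"SELECT \1 from \4.\2 WHERE 1=1 \5"

query = ("SELECT name,population from world.City "
         "WHERE Continent='Europe' and CountryCode='ITA' "
         "order by population desc limit 1")

# Group 2 captures the table (City), group 4 the continent (Europe),
# group 5 the trailing conditions; the table is re-qualified with the
# continent schema and the Continent predicate becomes WHERE 1=1.
rewritten = re.sub(match_pattern, replace_pattern, query)
```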
Does this work for any table in the sharded schemas? Of course it does.
A query like: SELECT name,population from world.Country WHERE Continent='Asia' ;
Will be transformed into: SELECT name,population from Asia.Country WHERE ?=?
Another possible approach to instruct ProxySQL to shard is to pass a hint inside a comment. Let's see how.
First, let me disable the rule I just inserted. This is not really needed, but it will let you see how it's done.
Percona Live Europe Discounted Pricing and Community Dinner!
Get your Percona Live Europe discounted tickets now, and sign up for the community dinner.
The countdown is on for the annual Percona Live Europe Open Source Database Conference! This year the conference will be taking place in the great city of Amsterdam October 3-5. This three-day conference will focus on the latest trends, news and best practices in MySQL, MongoDB, PostgreSQL and other open source databases, while tackling subjects such as analytics, architecture and design, security, operations, scalability and performance. Percona Live provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs.
With breakout sessions, tutorial sessions and keynote speakers, there will certainly be no lack of content.
Advanced Rate Registration ENDS September 5, so make sure to register now to secure the best price possible.
As it is a Percona Live Europe conference, there will certainly be no lack of FUN either!!!!
As tradition holds, there will be a Community Dinner. Tuesday night, October 4, Percona Live Diamond Sponsor Booking.com hosts the Community Dinner at their very own headquarters located in historic Rembrandt Square in the heart of the city. After breakout sessions conclude, attendees are picked up right outside of the venue and taken to Booking.com’s headquarters by canal boats! This gives all attendees the opportunity to play “tourist” while viewing the beauty of Amsterdam from the water. Attendees are dropped off right next to Booking.com’s office (return trip isn’t included)! The Lightning Talks for this year’s conference will be featured at the dinner.
Come and show your support for the community while enjoying dinner and drinks! The first 50 people registered for the dinner get in the doors for €10 (after that the price goes up to €15). Space is limited, so make sure to sign up ASAP!
So don’t forget, register for the conference and sign up for the community dinner before space is gone! See you in Amsterdam!
Percona Live Europe featured talk with Alexander Krasheninnikov — Processing 11 billion events a day with Spark in Badoo
Welcome to a new Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!
In this Percona Live Europe featured talk, we’ll meet Alexander Krasheninnikov, Head of Data Team at Badoo. His talk will be on Processing 11 billion events a day with Spark in Badoo. Badoo is one of the world’s largest and fastest growing social networks for meeting new people. I had a chance to speak with Alexander and learn a bit more about the database environment at Badoo:
Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?
Alexander: Currently, I work at Badoo as Head of Data Team. Our team is responsible for providing internal API’s for statistics data collecting and processing.
I started as a developer at Badoo, but the project I am going to cover in my talk led to the creation of a separate department.
Percona: Your talk is called “Processing 11 billion events a day with Spark in Badoo.” What were the issues with your environment that led you to Spark? How did Spark solve these needs?
Alexander: When we designed the Unified Data Stream system at Badoo, we extracted several requirements: scalability, fault tolerance and reliability. Together, these requirements moved us towards using Hadoop as deep data storage and as a data processing framework. Our initial implementation was built on top of Scribe + WebHDFS + Hive. But we realized that the processing speed and the lag in data delivery were unacceptable (we need near-realtime data processing). One of our BI team members mentioned Spark as being significantly faster than Hive in some cases (especially ones similar to ours). When we investigated Spark’s API, we found the Streaming submodule, which was ideal for our needs. Additionally, this framework allowed us to use third-party libraries and write our own code. We actually created an aggregation framework that follows the “divide and conquer” principle. Without Spark, we would definitely have gone a long way re-inventing a lot of what it provides.
Percona: Why is tracking the event stream important for your business model? How are you using the data Spark is providing you to reach business goals?
Alexander: The event stream always represents some important business/technical metrics: votes, messages, likes and so on. All this, brought together, forms the “health” of our product. The primary goal of our Spark-based system is to process a heterogeneous event stream one way, and draw charts automatically. We achieved this goal, and now we have hundreds of charts and dozens of developers, analysts and product team members using them. The system has also evolved: we now perform automatic anomaly detection over the event stream and report strange data behavior to all the interested people.
Percona: What is changing in data use in your businesses model that keeps you awake at night? What tools or features are you looking for to address these issues?
Alexander: As I mentioned before, we have an anomaly detection process for our metrics. If a metric moves outside its expected bounds, it is treated as an anomaly and notifications are sent. We also have self-monitoring functionality for the whole system: a small rate of heartbeat events is generated and processed by two different systems. If those show a significant difference, that definitely keeps me awake at night!
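As a rough illustration of the bounds check Alexander describes (a simplified sketch, not Badoo's actual system), flagging a metric value that falls outside a few standard deviations of its recent history could look like this:

```python
import statistics

# Simplified anomaly check: a value is anomalous when it falls outside
# mean +/- n_sigma standard deviations of its recent history.
def is_anomaly(history, value, n_sigma=3.0):
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    return abs(value - mean) > n_sigma * sd

# A stable metric hovering around 100 events/sec
history = [100, 102, 98, 101, 99, 100, 103, 97]
```

A real pipeline would compute the bounds per metric over a sliding window and route the alert to the interested teams, but the core test is this simple.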
Percona Live Europe featured talk with Krzysztof Książek — MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx
Welcome to the first Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!
In this Percona Live Europe featured talk, we’ll meet Krzysztof Książek, Senior Support Engineer at Severalnines AB. His talk will be on MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look. Load balancing MySQL connections and queries using HAProxy has been popular in the past years. However, the recent arrival of MaxScale, MySQL Router, ProxySQL and now also Nginx as a reverse proxy have changed the game. Which use cases are best for which solution, and how well do they integrate into your environment?
I had a chance to speak with Krzysztof and learn a bit more about these questions:
Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?
Krzysztof: I was working as a system administrator in a hosting company in Poland. They had a need for a dedicated MySQL DBA. So I volunteered for the job. Later, I decided it was time to move on and joined Laine Campbell’s PalominoDB. I had a great time there, working with large MySQL deployments. At the beginning of 2015, I joined Severalnines as Senior Support Engineer. It was a no-brainer for me as I was always interested in building and managing scalable clusters based on MySQL — this is exactly what Severalnines helps its customers with.
Percona: Your talk is called “MySQL Load Balancers: MaxScale, ProxySQL, HAProxy, MySQL Router & nginx – a close up look.” Why are more load balancing solutions becoming available? What problems does load balancing solve for database environments?
Krzysztof: Load balancers are a must in highly scalable environments that are usually distributed across multiple servers or data centers. Large MySQL setups can quickly become very complex — many clusters, each containing numerous nodes and using different and interconnected technologies: MySQL replication, Galera Cluster. Load balancers not only help to maintain availability of the database tier by routing traffic to available nodes, but they also hide the complexity of the database tier from the application.
Percona: You call out three general groups of load balancers: application connectors, TCP reverse proxies, and SQL-aware load balancers. What workloads do these three groups generally address best?
Krzysztof: I wouldn’t say “workloads” — I’d say more like “use cases.” Each of those groups will handle all types of workloads but they do it differently. TCP reverse proxies like HAProxy or nginx will just route packets: fast and robust. They won’t understand the state of MySQL backends, though. For that you need to use external scripts like Percona’s clustercheck or Severalnines’ clustercheck-iptables.
On the other hand, should you want to build your application to be more database-aware, you can use mysqlnd and manage complex HA topologies from your application. Finally, SQL-aware load balancers like ProxySQL or MaxScale can be used to move complexity away from the application and, for example, perform read-write split in the proxy layer. They detect the MySQL state and can make necessary changes in routing — such as moving writes to a newly promoted master. They can also empower the DBA by allowing him to (for example) rewrite queries as they pass the proxy.
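The read-write split decision Krzysztof describes can be caricatured in a few lines (a toy sketch with invented hostgroup names; real SQL-aware proxies parse queries far more carefully and also track backend state):

```python
# Toy read-write split: writes go to the master hostgroup, everything
# else to the replica hostgroup. Hostgroup names are invented.
WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE", "REPLACE",
                  "CREATE", "ALTER", "DROP")

def pick_backend(query, master="hg-writers", replicas="hg-readers"):
    """Route a statement by its leading keyword. Note a production
    proxy must also handle cases like SELECT ... FOR UPDATE."""
    first_word = query.lstrip().split(None, 1)[0].upper()
    return master if first_word in WRITE_PREFIXES else replicas
```

The point of tools like ProxySQL and MaxScale is precisely that this logic, plus failover awareness, lives in the proxy instead of being re-implemented in every application.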
Percona: Where do you see load balancing technologies heading in order to deal with some of the database trends that keep you awake at night?
Krzysztof: Personally, I love seeing the “empowerment” of DBAs. For example, ProxySQL not only routes packets and helps to maintain high availability (although this is still the main role of a proxy), it is also a flexible tool that can help a DBA tackle many day-to-day problems. An offending query? You can cache it in the proxy, or you can rewrite it on the fly. Do you need to test your system before an upgrade, using real-world queries? You can configure ProxySQL to mirror the production traffic on a test system. You can use it to build a sharded environment. These things, in the past, typically weren’t possible for a DBA to do: the application had to be modified and new code had to be deployed. Activities like those take time, time that is very precious when the ops staff is dealing with databases on fire from a high load. Now I can do all that just by reconfiguring a proxy. Isn’t it great?
Percona: What are you looking forward to the most at Percona Live Europe this year?
Krzysztof: The Percona Live Europe agenda looks great and, as always, it’s a hard choice to decide which talks to attend. I’d love to learn more about the upcoming MySQL 8.0: there are quite a few talks covering both performance improvements and different features of 8.0. There’s also a new Galera version in the works with great features like non-blocking DDLs, so it would be great to see what’s happening there. We’re also excited to run the “Become a MySQL DBA” tutorial again (our blog series on the same topic has been very popular).
Additionally, I’ve been working within the MySQL community for a while and I have many friends who, unfortunately, I don’t see very often. Percona Live Europe is an event that brings us together and where we can catch up. I’m definitely looking forward to this.
You can read more of Krzysztof’s thoughts on load balancers at the Severalnines blog.
Want to find out more about Krzysztof, load balancers and Severalnines? Register for Percona Live Europe 2016, and come see his talk MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look.
Use the code FeaturedTalk and receive €25 off the current registration price!
PostgreSQL Day at Percona Live Amsterdam 2016
Introducing PostgreSQL Day at Percona Live Europe, Amsterdam 2016.
As modern open source database deployments change, often including more than just a single open source database, Percona Live has also changed. We changed our model from being a purely MySQL-focused conference (with variants) to include a significant amount of MongoDB content. We’ve also expanded our overview of the open source database landscape and included introductory talks on many other technologies. These include practices we commonly see used in the field, and new up-and-coming solutions we think show promise.
In getting Percona Live Europe 2016 ready, something unexpected happened: we noticed the PostgreSQL community come together and submit many interesting talks about this great open source database technology. This effort on their part pushed us to go further than we initially planned this year, and we’ve put together a full day of PostgreSQL talks. At Percona Live Europe this year, we will be running our first ever PostgreSQL Day on October 4th!
Some folks have been questioning this decision: do we really need so much PostgreSQL content? Isn’t there some tension between the MySQL and PostgreSQL communities? (Here is a link to a very recent example.)
While it might be true (and I think it is) that some contention exists between these groups, I don’t think isolation and indifference are the answers to improving cooperation. They certainly aren’t the best plan for the open source database community at large, because there is too much we can learn from each other — especially when it comes to promoting open source databases as a real alternative to commercial ones.
Every open source community has its own set of “zealots” (or maybe just “strict adherents”). But our dedication to one particular technology shouldn’t blind us to the value of others. The MySQL and PostgreSQL communities have both successfully obtained support through substantial large scale deployments. There are more and more engineers joining those communities, looking to find better solutions for the problems they face and learn from others’ technologies.
Through the years I have held very productive discussions with people like Josh Berkus, Bruce Momjian, Oleg Bartunov, Ilya Kosmodemiansky and Robert Treat (to name just a few) about how things are done in MySQL versus PostgreSQL — and what could be done better in both.
At PGDay this year, I was glad to see Alexey Kopytov speaking about what MySQL does better; it got some very constructive conversations going. I was also pleased that my keynote on Migration to the Open Source Databases at the same conference was well attended and also sparked some good conversations.
I want this trend to continue to grow. This is why I think running a PostgreSQL Day as part of Percona Live Europe, Amsterdam is an excellent development. It provides an outstanding opportunity for people interested in PostgreSQL to further their knowledge through exposure to MySQL, MongoDB and other open source technologies. This holds true for folks attending the conference mainly as MySQL and MongoDB users: they get exposed to the state of PostgreSQL in 2016.
Even more, I hope that this new track will spark productive conversations in the hallways, at lunches and other events between the speakers themselves. It’s really the best way to see what we can learn from each other. In the end, it benefits all technologies.
I believe the whole conference is worth attending, but for people who only wish to attend our new PostgreSQL Day on October 4th, you can register for a single day conference pass using the PostgreSQLRocks discount code (€200, plus VAT).
I’m looking forward to meeting and speaking with members of the PostgreSQL community at Percona Live!
How to stop offending queries with ProxySQL
This blog discusses how to find and address badly written queries using ProxySQL.
All of us are very good at writing good queries. We know this to always be true!
Percona Server 5.7.14-7 is now available
Percona announces the GA release of Percona Server 5.7.14-7 on August 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.
Based on MySQL 5.7.14, including all the bug fixes in it, Percona Server 5.7.14-7 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.14-7 milestone at Launchpad.
New Features:
- Percona Server Audit Log Plugin now supports filtering by user, database, and sql_command.
- Percona Server now supports tree map file block allocation strategy for TokuDB.
Bugs Fixed:
- Fixed potential cardinality 0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
- TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
- PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
- If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables for which the drop succeeded. A slave might fail to replicate such a DROP TABLE statement if foreign key relationships to any of the dropped tables exist and the slave has a different schema from the master. This is fixed by checking, on the master, whether any tables in the database being dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately. Bug fixed #1525407 (upstream #79610).
- PAM Authentication Plugin didn’t support spaces in the UNIX user group names. Bug fixed #1544443.
- For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
- In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
- SHOW GLOBAL STATUS was locking more than the upstream implementation which made it less suitable to be called with high frequency. Bug fixed #1592290.
- The included .gitignore in the percona-server source distribution had a line *.spec, which means someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
- Audit Log Plugin did not transcode queries. Bug fixed #1602986.
- If the changed page bitmap redo log tracking thread stops due to any reason, then shutdown will wait for a long time for the log tracker thread to quit, which it never does. Bug fixed #1606821.
- Changed page tracking was initialized too late by InnoDB. Bug fixed #1612574.
- Fixed stack buffer overflow if --ssl-cipher had more than 4000 characters. Bug fixed #1596845 (upstream #82026).
- Audit Log Plugin events did not report the default database. Bug fixed #1435099.
- Canceling the TokuDB Background ANALYZE TABLE job twice or while it was in the queue could lead to server assertion. Bug fixed #1004.
- Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
- Implemented set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).
Other bugs fixed: #1542874 (upstream #80296), #1610242, #1604462 (upstream #82283), #1604774 (upstream #82307), #1606782, #1607359, #1607606, #1607607, #1607671, #1609422, #1610858, #1612551, #1613663, #1613986, #1455430, #1455432, #1581195, #998, #1003, and #730.
The release notes for Percona Server 5.7.14-7 are available in the online documentation. Please report any bugs on the launchpad bug tracker .
Percona Server 5.6.32-78.0 is now available
Percona announces the release of Percona Server 5.6.32-78.0 on August 22nd, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.
Based on MySQL 5.6.32, including all the bug fixes in it, Percona Server 5.6.32-78.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.32-78.0 milestone on Launchpad.
New Features:
- Percona Server Audit Log Plugin now supports filtering by user and SQL command.
- Percona Server now supports tree map file block allocation strategy for TokuDB.
Bugs Fixed:
- Fixed potential cardinality 0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
- TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
- PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
- If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables for which the drop succeeded. A slave might fail to replicate such a DROP TABLE statement if foreign key relationships to any of the dropped tables exist and the slave has a different schema from the master. This is fixed by checking, on the master, whether any tables in the database being dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately. Bug fixed #1525407 (upstream #79610).
- PAM Authentication Plugin didn’t support spaces in the UNIX user group names. Bug fixed #1544443.
- For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
- Percona Server 5.6 could not be built with the -DMYSQL_MAINTAINER_MODE=ON option. Bug fixed #1590454.
- In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
- The included .gitignore in the percona-server source distribution had a line *.spec, which means someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
- Audit Log Plugin did not transcode queries. Bug fixed #1602986.
- LeakSanitizer-enabled build failed to bootstrap server for MTR. Bug fixed #1603978 (upstream #81674).
- Fixed MYSQL_SERVER_PUBLIC_KEY connection option memory leak. Bug fixed #1604419.
- The fix for bug #1341067 added a call to free some of the heap memory allocated by OpenSSL. This is not safe for repeated calls if OpenSSL is linked twice through different libraries and each is trying to free the same memory. Bug fixed #1604676.
- If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
- Audit Log Plugin events did not report the default database. Bug fixed #1435099.
- Canceling the TokuDB Background ANALYZE TABLE job twice or while it was in the queue could lead to server assertion. Bug fixed #1004.
- Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
- Implemented a set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).
Other bugs fixed: #1603073, #1604323, #1604364, #1604462, #1604774, #1606782, #1607224, #1607359, #1607606, #1607607, #1607671, #1608385, #1608437, #1608845, #1609422, #1610858, #1612084, #1612551, #1455430, #1455432, #1610242, #998, #1003, #729, and #730.
Release notes for Percona Server 5.6.32-78.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.
Query rewrite plugin: scalability fix in MySQL 5.7.14
In this post, we’ll look at a scalability fix in MySQL 5.7.14 for the performance issues the query rewrite plugin had.
Several months ago, Vadim blogged about the impact of the query rewrite plugin on performance. We decided to re-evaluate with the latest 5.7 release (5.7.14), which includes fixes for this issue.
I reran tests for MySQL 5.7.13 and 5.7.14 using the same setup and the same test: sysbench OLTP_RO without and with the query rewrite plugin enabled.
MySQL 5.7.14 performs much better, with almost no overhead. Let’s check PMP for these runs:
MySQL 5.7.13

206 __lll_lock_wait(libpthread.so.0),pthread_mutex_lock(libpthread.so.0),plugin_unlock_list,mysql_audit_release,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
152 __lll_lock_wait(libpthread.so.0),pthread_mutex_lock(libpthread.so.0),plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
97 __lll_lock_wait(libpthread.so.0),pthread_mutex_lock(libpthread.so.0),plugin_lock,acquire_plugins,plugin_foreach_with_mask,mysql_audit_acquire_plugins,mysql_audit_notify,invoke_pre_parse_rewrite_plugins,mysql_parse,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
34 __io_getevents_0_4(libaio.so.1),LinuxAIOHandler::collect,LinuxAIOHandler::poll,os_aio_handler,fil_aio_wait,io_handler_thread,start_thread(libpthread.so.0),clone(libc.so.6)
18 send(libpthread.so.0),vio_write,net_write_packet,net_flush,net_send_eof,THD::send_statement_status,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
9 recv(libpthread.so.0),vio_read,net_read_raw_loop,net_read_packet,my_net_read,Protocol_classic::read_packet,Protocol_classic::get_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
8 poll(libc.so.6),vio_io_wait,vio_socket_io_wait,vio_read,net_read_raw_loop,net_read_packet,my_net_read,Protocol_classic::read_packet,Protocol_classic::get_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)

MySQL 5.7.14

309 send(libpthread.so.0),vio_write,net_write_packet,net_flush,net_send_eof,THD::send_statement_status,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
43 recv(libpthread.so.0),vio_read,net_read_raw_loop,net_read_packet,my_net_read,Protocol_classic::read_packet,Protocol_classic::get_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
34 __io_getevents_0_4(libaio.so.1),LinuxAIOHandler::collect,LinuxAIOHandler::poll,os_aio_handler,fil_aio_wait,io_handler_thread,start_thread(libpthread.so.0),clone(libc.so.6)
15 poll(libc.so.6),vio_io_wait,vio_socket_io_wait,vio_read,net_read_raw_loop,net_read_packet,my_net_read,Protocol_classic::read_packet,Protocol_classic::get_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
7 send(libpthread.so.0),vio_write,net_write_packet,net_flush,net_send_ok,Protocol_classic::send_ok,THD::send_statement_status,dispatch_command,do_command,handle_connection,pfs_spawn_thread,start_thread(libpthread.so.0),clone(libc.so.6)
7 pthread_cond_wait,os_event::wait_low,srv_worker_thread,start_thread(libpthread.so.0),clone(libc.so.6)
7 pthread_cond_wait,os_event::wait_low,buf_flush_page_cleaner_worker,start_thread(libpthread.so.0),clone(libc.so.6)

No sign of extra locks with the plugin API in PMP for MySQL 5.7.14. Good job!
Also, note that the fix for this issue should help improve performance for any other audit-API-based plugins.
Top Most Overlooked MySQL Performance Optimizations: Q & A
Thank you for attending my 22nd July 2016 webinar titled “Top Most Overlooked MySQL Performance Optimizations“. In this blog, I will provide answers to the Q & A for that webinar.
For hardware, which disk raid level do you suggest? Is raid5 suggested performance-wise and data-integrity-wise?
RAID 5 comes with high overhead, as each write turns into a sequence of four physical I/O operations: two reads and two writes. RAID 5 carries this write penalty, and it can affect performance on spindle disks. In most cases, we advise using alternative RAID levels. Use RAID 5 when disk capacity is more important than performance (e.g., archive databases that aren’t used often). Since write performance isn’t a problem with SSDs but capacity is expensive, RAID 5 can help by wasting less disk space.
Regarding collecting table statistics, do you have any suggestions for analyzing large tables (over 300GB) since we had issues with MySQL detecting the wrong cardinality?
The MySQL optimizer makes decisions about the execution plan (shown by EXPLAIN) based on table statistics. Statistics are re-estimated automatically, and can be re-estimated explicitly by running the ANALYZE TABLE statement for the table, or the OPTIMIZE TABLE statement for InnoDB tables (which rebuilds the table and then performs an ANALYZE on it).
When the MySQL optimizer is not picking the right index in EXPLAIN, it could be caused by outdated or wrong statistics (optimizer bugs aside). So, when you optimize the table, you rebuild it so data is stored in a more compact way (assuming it changed a lot in the past), and then re-estimate statistics based on a random sample of pages checked in the table. As a result, you come up with statistics that better reflect the data you have at the moment, which allows the optimizer to get a better plan. When an explicit hint is added, you reduce the possible choices for the optimizer, and it can use a good enough plan even with wrong statistics.
If you use versions 5.6.x or 5.7.x with InnoDB tables, there is a way to store/fix statistics when the plans are good: Persistent Optimizer Statistics prevent them from changing automatically. It’s recommended that you run ANALYZE TABLE to calculate statistics (if really needed) during off-peak time, and make sure the table in question is not in use. Check this blogpost too.
Regarding the buffer pool, when do you think using multiple buffer pool instances makes sense?
Multiple InnoDB buffer pools were introduced in MySQL 5.5, with a default value of 1; in MySQL 5.6, the default is 8. Setting innodb_buffer_pool_instances is useful in highly concurrent workloads, as it may reduce contention on the global mutexes. innodb_buffer_pool_instances helps improve scalability on multi-core machines: with multiple buffer pools, access to the buffer pool is split across all instances, so no single mutex controls the access pattern.
innodb_buffer_pool_instances only takes effect when innodb_buffer_pool_size is set to 1GB or more, and the total specified buffer pool size is divided among all the instances. Furthermore, innodb_buffer_pool_instances is not a dynamic option, so changing it requires a server restart.
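As a sketch, a my.cnf fragment reflecting the above might look like this (the sizes are illustrative assumptions, not recommendations for your hardware):

```ini
# Hypothetical my.cnf fragment (values assumed for illustration).
# innodb_buffer_pool_instances only takes effect because the total
# pool size here is at least 1GB; changing it requires a restart.
[mysqld]
innodb_buffer_pool_size      = 16G
innodb_buffer_pool_instances = 8    ; each instance manages 16G / 8 = 2G
```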
What do you mean by “PK is appended to secondary index”?
In InnoDB, secondary indexes are stored along with their corresponding primary key values. InnoDB uses this primary key value to search for the row in the clustered index. So, primary keys are implicitly added with secondary keys.
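As a toy illustration (this is a conceptual Python model, not InnoDB's actual data structures), a secondary-index lookup first finds the primary key value stored in the secondary index, then uses it to fetch the full row from the clustered index:

```python
# Conceptual model: the clustered index maps PK -> full row, while a
# secondary index on `email` stores (email, PK) pairs. A lookup by
# email is an index probe followed by a PK lookup in the clustered index.
clustered = {
    10: {"id": 10, "email": "a@example.com"},
    11: {"id": 11, "email": "b@example.com"},
}
secondary = [("a@example.com", 10), ("b@example.com", 11)]  # sorted by email

def lookup_by_email(email):
    for key, pk in secondary:        # probe the secondary index
        if key == email:
            return clustered[pk]     # implicit PK lookup in the clustered index
    return None

print(lookup_by_email("b@example.com"))
```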
About Duplicate Keys, if I have a UNIQUE KEY on two columns, is it ok then to set a key for each of these columns also? Or should I only keep the unique key on the columns and get rid of regular key on each column also?
As I mentioned during the talk, for a composite index only the leftmost prefix is used. For example, if you have a UNIQUE INDEX on columns (A, B), that index is not used for a query that filters on B alone. For such a query, you need a separate index on column B.
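As a hedged illustration, here is the leftmost-prefix rule demonstrated with SQLite through Python's sqlite3 module (a stand-in: MySQL applies the same rule to composite indexes; the table and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT, c TEXT)")
con.execute("CREATE UNIQUE INDEX idx_ab ON t (a, b)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output holds the plan description.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the leftmost column A can use the composite index...
print(plan("SELECT * FROM t WHERE a = 1"))
# ...but filtering on B alone cannot, and falls back to a full scan.
print(plan("SELECT * FROM t WHERE b = 1"))
```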
Regarding MyISAM settings, don’t the MySQL and information_schema schemas require MyISAM? If so, do any settings need to be changed from the defaults?
The performance_schema database uses the PERFORMANCE_SCHEMA storage engine, so only the MySQL system database tables use the MyISAM engine. The MySQL system database is not used much, and the default MyISAM settings are usually fine for it.
Will functions make my app slower compared to plain queries?
I’m not sure how you’re comparing “queries” versus “stored functions.” Functions also need to be parsed and transformed, similar to building a query execution plan, and they might be slower than well-coded SQL, even with the overhead of copying the resulting data set back to the client. Typically, functions contain many SQL queries. The trade-off is that this increases the load on the database server, because more of the work is done on the server side and less on the client (application) side.
Will foreign keys make my fetches slower?
MySQL enforces referential integrity (which ensures data consistency between related tables) via foreign keys for the InnoDB storage engine. There is some overhead on INSERT/UPDATE/DELETE for a foreign key column, which has to check that the value exists in the related column of the other table. But again, it’s an index lookup, so the cost shouldn’t be high. However, locking overhead comes into play as well. This blogpost from our CEO is informative on this topic. Foreign keys especially affect writes, but I don’t think they make fetches slower for SELECT, as it’s an index lookup.
Can a large buffer pool size have a negative impact on performance? What about a pool size of roughly 62GB?
The InnoDB buffer pool is by far the most important option for InnoDB Performance, as it’s the main cache for data and indexes and it must be set correctly. Setting it large enough (i.e., larger than your dataset) shouldn’t cause any problems as long as you leave enough memory for OS needs and for MySQL buffers (e.g., sort buffer, join buffer, temporary tables, etc.).
62GB doesn’t necessarily mean a big InnoDB buffer pool. It depends on how much memory your MySQL server contains, and what the size of your total InnoDB dataset is. A good rule of thumb is to set the InnoDB buffer pool size as large as possible, while still leaving enough memory for MySQL buffers and for OS.
Can you find duplicate and redundant indexes by looking at information_schema.key_column_usage directly?
The key_column_usage view provides information about key column constraints. It doesn’t provide information about duplicate or redundant indexes; the pt-duplicate-key-checker tool from Percona Toolkit is designed for that.
Can you find candidate missing indexes by looking at the slow query log?
Yes, as I mentioned, you can find candidate missing indexes by enabling log_queries_not_using_indexes, which writes such queries to the slow query log. You can also enable the user_statistics feature, which adds several information_schema tables that help with index usage analysis. pt-index-usage is yet another tool from Percona Toolkit for this purpose. Also, check this blogpost on the topic.
How to find the unused indexes? They also have an impact on performance.
Unused indexes can be found with the help of the pt-index-usage tool from Percona Toolkit, as mentioned above. If you are using Percona Server, you can also use the User Statistics feature. Check this blogpost from my colleague, which shows another technique to find unused indexes.
As far as I understand, MIXED will automatically use ROW for non-deterministic and STATEMENT for deterministic queries. I’ve been using it for years now without any problems. So why this recommendation of ROW?
In MIXED mode, MySQL uses statement-based replication for most queries, switching to row-based replication only when statement-based replication would cause an inconsistency. We recommend ROW-based logging because it’s efficient and performs better, as it requires fewer row locks. However, RBR can generate more data if a DML query affects many rows and a significant amount of data needs to be written to the binary log (you can configure the binlog_row_image parameter to control the amount of logging). Also, make sure you have good network bandwidth between the master and slave(s) for RBR, as it needs to send more data to the slaves. Another important thing for getting the best performance with ROW-based replication is to make sure all your database tables contain a primary key or unique key (because of this bug: http://bugs.mysql.com/bug.php?id=53375).
Can you give a brief overview of sharding…The pros and cons also.
With sharding, database data is split into multiple databases, with each shard storing a subset of the data. Sharding is useful for scaling writes if you have a huge dataset and a single server can’t handle the amount of writes.
Performance and throughput can be better with sharding. On the other hand, it requires a lot of development and administration effort. The application needs to be aware of the shards and keep track of which data is stored in which shard. You can use the MySQL Fabric framework to manage farms of MySQL servers. Check the manual for details.
Why not mixed replication mode instead of row-based replication ?
As I mentioned above, MIXED uses the STATEMENT-based format by default and converts to the ROW-based replication format for non-deterministic queries. But the ROW-based format is recommended, as there could still be cases where MySQL fails to detect non-deterministic query behavior and replicates it in the STATEMENT-based format.
Can you specify a few variables which could reduce slave lag?
Because of the single-threaded nature of MySQL replication (until MySQL 5.6), there is always a chance that a MySQL slave can lag behind the master. I would suggest considering the parameters below to avoid slave lag:
- innodb_flush_log_at_trx_commit <> 1: set it to 2 or 0; however, this could cause up to one second of data loss in case of a crash.
- innodb_flush_method = O_DIRECT: on Unix-like operating systems, O_DIRECT is recommended to avoid double buffering. If your InnoDB data and log files are located on a SAN, then O_DIRECT is probably not a good choice.
- log_bin = 0: disable binary logging (if enabled) to minimize extra disk I/O.
- sync_binlog = 0: disable synchronous flushing of the binary log.
These parameters would definitely help to minimize slave lag. Along with that, make sure your slave hardware is as strong as the master’s, and that your read queries are fast enough. Don’t overload the slave too much, and distribute read traffic evenly between slaves. Also, you should have the same table definitions on the slaves as on the master (e.g., the master server's indexes must exist on the slave tables too). Last but not least, I wrote a blogpost on how to diagnose and cure replication lag; it might be useful for further reading.
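Putting the suggestions above together, a slave-side my.cnf sketch might look like this (values are assumptions to illustrate the trade-off: less durability in exchange for less lag):

```ini
# Hypothetical slave my.cnf fragment. These settings trade crash
# safety for replication speed; use them only where that is acceptable.
[mysqld]
innodb_flush_log_at_trx_commit = 2         ; or 0; risks ~1s of transactions on crash
innodb_flush_method            = O_DIRECT  ; avoid double buffering (not ideal on SAN)
sync_binlog                    = 0         ; no binary log fsync per commit
; Binary logging stays disabled on the slave simply by not setting log-bin.
```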
Percona Server 5.5.51-38.1 is now available
Percona announces the release of Percona Server 5.5.51-38.1 on August 19, 2016. Based on MySQL 5.5.51, including all the bug fixes in it, Percona Server 5.5.51-38.1 is now the current stable release in the 5.5 series.
Percona Server is open-source and free. You can find release details of the release in the 5.5.51-38.1 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.
Bugs Fixed:
- PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
- PAM Authentication Plugin didn’t support spaces in the UNIX user group names. Bug fixed #1544443.
- If DROP DATABASE would fail to delete some of the tables in the database, the partially-executed command is logged in the binlog as DROP TABLE t1, t2, ... for the tables for which the drop succeeded. A slave might fail to replicate such a DROP TABLE statement if there exist foreign key relationships to any of the dropped tables and the slave has a different schema from the master. Fixed by checking, on the master, whether any of the tables in the database being dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately. Bug fixed #1525407 (upstream #79610).
- Percona Server 5.5 could not be built with the -DMYSQL_MAINTAINER_MODE=ON option. Bug fixed #1590454.
- In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
- The included .gitignore in the percona-server source distribution had a line *.spec, which means someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPM packages. Bug fixed #1600051.
- The fix for bug #1341067 added a call to free some of the heap memory allocated by OpenSSL. This was not safe for repeated calls if OpenSSL is linked twice through different libraries and each is trying to free the same memory. Bug fixed #1604676.
- If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
- Performing slow InnoDB shutdown (innodb_fast_shutdown set to 0) could result in an incomplete purge, if a separate purge thread is running (which is a default in Percona Server). Bug fixed #1609364.
- For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
Other bugs fixed: #1515591 (upstream #79249), #1612551, #1609523, #756387, #1097870, #1603073, #1606478, #1606572, #1606782, #1607224, #1607359, #1607606, #1607607, #1607671, #1608385, #1608424, #1608437, #1608515, #1608845, #1609422, #1610858, #1612084, #1612118, and #1613641.
Find the release notes for Percona Server 5.5.51-38.1 in our online documentation. Report bugs on the launchpad bug tracker.
ProxySQL 1.2.1 GA Release
The GA release of ProxySQL 1.2.1 is available. You can get it from https://github.com/sysown/proxysql/releases. There are also Docker images for Release 1.2.1: https://hub.docker.com/r/percona/proxysql/.
ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server and MariaDB). It acts as an intermediary for client requests seeking resources from the database. ProxySQL was created for DBAs by René Cannaò, as a means of solving complex replication topology issues.
This post is published with René’s approval. René is busy implementing more new ProxySQL features, so I decided to make this announcement!
Release highlights:
- Support for backend SSL connections
- Support for encrypted passwords (the mysql_users table now supports both plain text and hashed passwords, in the same format as mysql.user.password)
- Improved monitoring module
- Better integration with Percona XtraDB Cluster
- New feature: the Scheduler, which allows extending ProxySQL with external scripts
The last point is especially important in conjunction with our recent Percona XtraDB Cluster 5.7 RC1 release. When we ship Percona XtraDB Cluster 5.7 GA, we plan to make ProxySQL the default proxy solution choice for Percona XtraDB Cluster. ProxySQL is aware of the cluster and node status, and can direct traffic appropriately.
ProxySQL 1.2.1 comes with additional scripts to support this:
- proxysql_galera_checker.sh: https://github.com/sysown/proxysql/blob/v1.2.1/tools/proxysql_galera_checker.sh
- A script that checks node status, using not only availability (ping) but also the node's cluster state (it should be SYNCED)
- proxysql_galera_writer.pl: https://github.com/sysown/proxysql/blob/v1.2.1/tools/proxysql_galera_writer.pl
- A script that chooses one writer out of N available nodes, in case we want to direct write traffic to a dedicated node.
ProxySQL 1.2.1 and these scripts are compatible with existing Percona XtraDB Cluster 5.6 GA releases.
ProxySQL 1.2.1 is a solid release, currently used by many demanding high-performance production workloads – it is already battle tested! Give it a try if you are looking for a proxy solution.
ProxySQL is available under the open-source GPLv3 license, which allows unlimited usage in production. There are no plans to change the license!
TokuDB/PerconaFT fragmented data file performance improvements
In this blog post, we’ll discuss how we’ve improved TokuDB and PerconaFT fragmented data file performance.
Through our internal benchmarking and some user reports, we have found that with long term heavy write use TokuDB/PerconaFT performance can degrade significantly on large data files. Using smaller node sizes makes the problem worse (which is one of our performance tuning recommendations when you have faster storage). The problem manifests as low CPU utilization, a drop in overall TPS and high client response times during prolonged checkpointing.
This post explains a little about how PerconaFT structures dictionary files and where the current implementation breaks down. Hopefully, it explains the nature of the issue and how our solution addresses it. It also provides some contrived benchmarks that demonstrate the solution.
PerconaFT map file disk format
NOTE: This post uses the terms index, data file, and dictionary somewhat interchangeably. We will use the PerconaFT term “dictionary” to refer specifically to a PerconaFT key/value data file.
PerconaFT stores every dictionary in its own data file on disk. TokuDB stores each index in a PerconaFT dictionary, plus one additional dictionary per table for some metadata. For example, if you have one TokuDB table with two secondary indices, you would have four data files or dictionaries: one small metadata dictionary for the table, one dictionary for the primary key/index, and one for each secondary index.
Each dictionary file has three major parts:
- Two headers (yes, two) made up of various bits of metadata: file versions, a checkpoint logical sequence number (CLSN), the offset of this header's block translation table, etc.
- Two (yes, two, one per header) block translation tables (BTT) that map block numbers (BNs) to the physical offsets and sizes of the data blocks within the file.
- Data blocks and holes (unused space). Unlike InnoDB, PerconaFT data blocks (nodes) are variable-sized: anything from a minimum of a few bytes for an empty internal node, all the way up to the block size defined when the tree was created (4MB by default if we don’t use compression), depending on the amount of data within that node.
Each dictionary file contains two versions of the header stored on disk, and only one is valid at any given point in time. Since the header structure has a fixed size, we always know their locations: the first is at offset zero, and the other immediately follows it. The header that is currently valid is the one with the later/larger CLSN.
We write the header and the BTT to disk only during a checkpoint or when a dictionary is closed. The new header overwrites the older header (the one with the older CLSN) on disk. From that moment onward, any disk space used by the previous version of the dictionary (the whole thing, not just the header) that is not also used by the latest version is considered immediately free.
There is much more magic to how the PerconaFT does checkpoint and consistency, but that is really out of the scope of this post. Maybe a later post that addresses the sharp checkpoint of the PerconaFT can dive into this.
The block allocator
The block allocator is the algorithm and container that manages the list of known used blocks and unused holes within an open dictionary file. When a node gets written, it is the block allocator's responsibility to find a suitable location in the file for the node's data. Data is always placed into a new block; an existing block is never overwritten (except for reclaimed block space from blocks that were removed or moved, as recorded during the last checkpoint). Conversely, when a node gets destroyed, it is the block allocator's responsibility to release the used space and turn the old block into a hole. That hole must then be merged with any adjacent holes, so we record one large hole rather than a series of consecutive smaller holes.
Fragmentation and large files
The current implementation of the PerconaFT block allocator maintains a simple in-memory array of used blocks for each open dictionary. The used blocks are ordered ascending by their offset in the file. The holes between the blocks are calculated from the offset and size of the two bounding blocks. For example, the hole between two adjacent blocks starts at b[n].offset + b[n].size and has size b[n+1].offset - (b[n].offset + b[n].size).
To find a suitable hole to place node data, the current block allocator starts at the first block in the array. It iterates through the blocks, looking for a hole between blocks that is large enough to hold the node's data. Once we find a hole, we cut the space needed for the node out of it, and the remainder is left as a hole for another block to use later.
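A toy Python model of this linear first-fit search (assumed for illustration; the real PerconaFT code differs) makes the O(n) behavior explicit:

```python
# Blocks are (offset, size) pairs kept sorted by offset, exactly as in
# the in-memory array described above. Holes are the gaps between
# adjacent blocks: from b[n].offset + b[n].size up to b[n+1].offset.
def first_fit(blocks, needed):
    for cur, nxt in zip(blocks, blocks[1:]):
        hole_offset = cur[0] + cur[1]
        hole_size = nxt[0] - hole_offset
        if hole_size >= needed:          # first hole big enough wins
            return hole_offset
    last = blocks[-1]
    return last[0] + last[1]             # otherwise append past the last block

blocks = [(0, 1), (3, 2), (6, 3), (10, 5), (20, 8)]
print(first_fit(blocks, 4))   # -> 15: every earlier block is scanned first
```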
Note: forcing alignment to 512-byte offsets for direct I/O has overhead, regardless of whether direct I/O is actually used.
This linear search severely degrades PerconaFT performance for very large and fragmented dictionary files. We have some solid evidence from the field that this does occur. We can see it via various profiling tools as a lot of time spent within block_allocator_strategy::first_fit. It is also quite easy to create such a case by using very small node (block) sizes and small fanouts (which force the existence of more nodes, and thus more small holes). This fragmentation can and does cause all sorts of side effects, as the search operation locks the entire structure in memory, blocking other operations from translating node/block IDs into file locations.
Let’s fix it…
In this block storage paradigm, fragmentation is inevitable. We can try to dance around it and propose different ways to prevent fragmentation (at the expense of higher CPU costs, online/offline operations, etc.). Or, we can look at the way the block allocator works and try to make it more efficient. Attacking the latter is the better strategy (not to say we aren’t still actively looking into the former).
Tree-based “Max Hole Size” (MHS) lookup
The linear-search block allocator has no idea where bigger and smaller holes might be located within the set (a core limitation). It must use brute force to find a hole big enough for the data it needs to store. To address this, we implemented a new in-memory, tree-based algorithm (a red-black tree). This replaces the current in-memory linear array and integrates the hole-size search into the tree structure itself.
In this new block allocator implementation, we store the set of known in-use blocks in the nodes of a binary tree instead of a linear array, ordered by the file offset of the blocks. We then add a little extra data to each node of this new tree structure: the maximum hole we can expect to find in each child subtree. So now, when searching for a hole, we can quickly drill down the tree to find an available hole of the correct size without performing a full linear scan. The trade-off is that merging holes and updating the parental max hole sizes is slightly more intricate and time-consuming than in a linear structure, but the huge improvement in search efficiency makes this extra overhead pure noise.
We can see in this overly simplified diagram, we have five blocks:
- offset 0 : 1 byte
- offset 3 : 2 bytes
- offset 6 : 3 bytes
- offset 10 : 5 bytes
- offset 20 : 8 bytes
We can calculate four holes in between those blocks:
- offset 1 : 2 bytes
- offset 5 : 1 byte
- offset 9 : 1 byte
- offset 15 : 5 bytes
We see that the search for a 4-byte hole traverses down the right side of the tree. It discovers a hole at offset 15, which is big enough for our 4 bytes, without needing to visit the nodes at offsets 0 and 3. For you algorithmic folks out there, we have gone from an O(n) search to an O(log n) search. This is tremendously more efficient when we get into severe fragmentation states. A side effect is that we tend to allocate blocks from holes closer to the needed size rather than from the first one big enough to fit. The small-hole fragmentation issue may actually increase over time, but that has yet to be seen in our testing.
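Here is a minimal Python sketch of the idea (an illustrative model only; PerconaFT's actual implementation uses a red-black tree and differs in detail). Each tree node stores its block, the hole that follows it, and the maximum hole size in its subtree, which lets the search skip whole subtrees:

```python
class Block:
    """A tree node: a block ordered by file offset, the hole following
    it, and the max hole size in this subtree (the MHS augmentation)."""
    def __init__(self, offset, size, hole, left=None, right=None):
        self.offset, self.size, self.hole = offset, size, hole
        self.left, self.right = left, right
        self.max_hole = max([hole] + [c.max_hole for c in (left, right) if c])

def find_hole(node, needed):
    """Return the offset of the leftmost hole >= needed, or None."""
    if node is None or node.max_hole < needed:
        return None                       # skip this entire subtree
    offset = find_hole(node.left, needed)
    if offset is not None:
        return offset
    if node.hole >= needed:
        return node.offset + node.size    # hole starts right after the block
    return find_hole(node.right, needed)

# The five blocks and four holes from the walkthrough above:
tree = Block(6, 3, 1,
             left=Block(3, 2, 1, left=Block(0, 1, 2)),
             right=Block(10, 5, 5, right=Block(20, 8, 0)))

print(find_hole(tree, 4))   # -> 15, found without visiting offsets 0 and 3
```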
Benchmarks
As our CTO Vadim Tkachenko asserts, there are “Lies, Damned Lies and Benchmarks.” We’re going to show a simple test case where we thought, “What is the worst possible scenario that I can come up with in a small-ish benchmark to show the differences?”. So, rather than try and convince you using some pseudo-real-world benchmark that uses sleight of hand, I’m telling you up front that this example is slightly absurd, but pushes the issue to the foreground.
That scenario is actually pretty simple. We shape the tree to have as many nodes as possible, and intentionally use settings that reduce concurrency. We will use a standard sysbench OLTP test, and run it for about three hours after the prepare stage has completed:
- Hardware:
- Intel i7, 4 core hyperthread (8 virtual cores) @ 2.8 GHz
- 16 GB of memory
- Samsung 850 Pro SSD
- Sysbench OLTP:
- 1 table of 160M rows or about 30GB of primary key data and 4GB secondary key data
- 24 threads
- We started each test server instance with no data. Then we ran the sysbench prepare, then the sysbench run with no shutdown in between the prepare and run.
- prepare command: /data/percona/sysbench/sysbench/sysbench --test=/data/percona/sysbench/sysbench/tests/db/parallel_prepare.lua --mysql-table-engine=tokudb --oltp-tables-count=1 --oltp-table-size=160000000 --mysql-socket=$(PWD)/var/mysql.sock --mysql-user=root --num_threads=1 run
- run command: /data/percona/sysbench/sysbench/sysbench --test=/data/percona/sysbench/sysbench/tests/db/oltp.lua --mysql-table-engine=tokudb --oltp-tables-count=1 --oltp-table-size=160000000 --rand-init=on --rand-type=uniform --num_threads=24 --report-interval=30 --max-requests=0 --max-time=10800 --percentile=99 --mysql-socket=$(PWD)/var/mysql.sock --mysql-user=root run
- mysqld/TokuDB configuration
- innodb_buffer_pool_size=5242880
- tokudb_directio=on
- tokudb_empty_scan=disabled
- tokudb_commit_sync=off
- tokudb_cache_size=8G
- tokudb_checkpointing_period=300
- tokudb_checkpoint_pool_threads=1
- tokudb_enable_partial_eviction=off
- tokudb_fsync_log_period=1000
- tokudb_fanout=8
- tokudb_block_size=8K
- tokudb_read_block_size=1K
- tokudb_row_format=tokudb_uncompressed
- tokudb_cleaner_period=1
- tokudb_cleaner_iterations=10000
So as you can see: amazing results, right? Sustained throughput, immensely better response time and better utilization of available CPU resources. Of course, this is all fake with a tree shape that no sane user would implement. It illustrates what happens when the linear list contains small holes: exactly what we set out to fix!
Closing
Look for this improvement to appear in Percona Server 5.6.32-78.0 and 5.7.14-7. It's a good one for you if you have huge TokuDB data files with lots and lots of nodes.
Credits!
Throughout this post, I referred to "we" numerous times. That "we" encompasses a great many people that have looked into this in the past and implemented the current solution. Some are current and former Percona and Tokutek employees that you may already know by name. Some are newer at Percona. I got to take their work and research, incorporate it into the current codebase, test and benchmark it, and report it here for all to see. Many thanks go out to Jun Yuan, Leif Walsh, John Esmet, Rich Prohaska, Bradley Kuszmaul, Alexey Stroganov, Laurynas Biveinis, Vlad Lesin, Christian Rober and others for all of the effort in diagnosing this issue, inventing a solution, and testing and reviewing this change to the PerconaFT library.
How Apache Spark makes your slow MySQL queries 10x faster (or more)
In this blog post, we’ll discuss how to improve the performance of slow MySQL queries using Apache Spark.
Introduction
In my previous blog post, I wrote about using Apache Spark with MySQL for data analysis and showed how to transform and analyze a large volume of data (text files) with Apache Spark. Vadim also performed a benchmark comparing the performance of MySQL and Spark with the Parquet columnar format (using air traffic performance data). That works great, but what if we don't want to move our data from MySQL to another storage format (i.e., columnar), and instead want to run "ad hoc" queries on top of an existing MySQL server? Apache Spark can help here as well.
TL;DR version
Using Apache Spark on top of the existing MySQL server(s) (without the need to export or even stream data to Spark or Hadoop), we can increase query performance more than ten times. Using multiple MySQL servers (replication or Percona XtraDB Cluster) gives us an additional performance increase for some queries. You can also use the Spark cache function to cache the whole MySQL query result set.
The idea is simple: Spark can read MySQL data via JDBC and can also execute SQL queries, so we can connect it directly to MySQL and run the queries. Why is this faster? For long-running (i.e., reporting or BI) queries, it can be much faster, as Spark is a massively parallel system. MySQL can only use one CPU core per query, whereas Spark can use all cores on all cluster nodes. In the examples below, the MySQL queries are executed inside Spark and run 5-10 times faster (on top of the same MySQL data).
In addition, Spark can add "cluster" level parallelism. In the case of MySQL replication or Percona XtraDB Cluster, Spark can split the query into a set of smaller queries (for a partitioned table, for example, it will run one query per partition) and run those in parallel across multiple slave servers or multiple Percona XtraDB Cluster nodes. Finally, it will use map/reduce-style processing to aggregate the results.
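To see how the split works, here is an illustrative Python sketch (not Spark's actual code) of how a JDBC source can turn one query into per-range queries from a numeric partition column, mirroring the lowerBound/upperBound/numPartitions options used later in this post. Like Spark, the first range also catches NULLs and the last range is open-ended:

```python
# Illustrative sketch of JDBC range partitioning (not Spark's code):
# split [lower, upper) on a numeric column into per-partition WHERE
# clauses, one parallel query per range.

def partition_predicates(column, lower, upper, num_partitions):
    # A stride below 1 makes no sense, so the partition count is capped
    # by the width of the range (Spark logs a similar "reduced" warning).
    num_partitions = min(num_partitions, max(upper - lower, 1))
    stride = (upper - lower) // num_partitions
    preds, bound = [], lower
    for i in range(num_partitions):
        if i == 0:
            preds.append(f"{column} < {bound + stride} or {column} is null")
        elif i == num_partitions - 1:
            preds.append(f"{column} >= {bound}")
        else:
            preds.append(f"{column} >= {bound} AND {column} < {bound + stride}")
        bound += stride
    return preds

for p in partition_predicates("yearD", 1988, 2014, 48):
    print(f"SELECT ... FROM ontime.ontime_part WHERE ... AND ({p})")
```

With lowerBound 1988, upperBound 2014 and 48 requested partitions, the sketch produces 26 ranges, which matches both the "number of partitions is reduced" warning and the per-year queries visible in the MySQL processlist later in the post.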
I’ve used the same “Airlines On-Time Performance” database as in previous posts. Vadim created some scripts to download data and upload it to MySQL. You can find the scripts here: https://github.com/Percona-Lab/ontime-airline-performance. I’ve also used Apache Spark 2.0, which was released July 26, 2016.
Apache Spark Setup
Starting Apache Spark in standalone mode is easy. To recap:
- Download Apache Spark 2.0 and place it somewhere.
- Start master
- Start slave (worker) and attach it to the master
- Start the app (in this case spark-shell or spark-sql)
Example:
root@thor:~/spark# ./sbin/start-master.sh
less ../logs/spark-root-org.apache.spark.deploy.master.Master-1-thor.out
15/08/25 11:21:21 INFO Master: Starting Spark master at spark://thor:7077
15/08/25 11:21:21 INFO Utils: Successfully started service 'MasterUI' on port 8080.
15/08/25 11:21:21 INFO MasterWebUI: Started MasterWebUI at http://10.60.23.188:8080
root@thor:~/spark# ./sbin/start-slave.sh spark://thor:7077

To connect to Spark we can use spark-shell (Scala), pyspark (Python) or spark-sql. Since spark-sql is similar to the MySQL cli, using it would be the easiest option (even "show tables" works). I also wanted to work with Scala in interactive mode, so I've used spark-shell as well. In all the examples I'm using the same SQL query in MySQL and Spark, so working with Spark is not that different.
To work with a MySQL server in Spark we need Connector/J for MySQL. Download the package and copy mysql-connector-java-5.1.39-bin.jar to the Spark directory, then add the class path to conf/spark-defaults.conf:
spark.driver.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
spark.executor.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar

Running MySQL queries via Apache Spark
For this test I used one physical server with 12 CPU cores (older Intel(R) Xeon(R) CPU L5639 @ 2.13GHz), 48G of RAM and SSD disks. I installed MySQL and started the Spark master and Spark slave on the same box.
Now we are ready to run MySQL queries inside Spark. First, start the shell (from the Spark directory, /usr/local/spark in my case):
$ ./bin/spark-shell --driver-memory 4G --master spark://server1:7077

Then we will need to connect to MySQL from Spark and register the temporary view:
val jdbcDF = spark.read.format("jdbc").options(
  Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=",
  "dbtable" -> "ontime.ontime_part",
  "fetchSize" -> "10000",
  "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2016", "numPartitions" -> "28"
  )).load()
jdbcDF.createOrReplaceTempView("ontime")

So we have created a "datasource" for Spark (or in other words, a "link" from Spark to MySQL). The Spark table name is "ontime" (linked to the MySQL ontime.ontime_part table) and we can run SQL queries in Spark, which in turn parses them and translates them into MySQL queries.
"partitionColumn" is very important here. It tells Spark to run multiple queries in parallel, one query per partition.
Now we can run the query:
val sqlDF = sql("select min(year), max(year) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin = 'RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 100000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10")
sqlDF.show()

MySQL Query Example
Let's go back to MySQL for a second and look at the query example. I've chosen the following query (from my older blog post):
select min(year), max(year) as max_year, Carrier, count(*) as cnt,
sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
FROM ontime
WHERE DayOfWeek not in (6,7)
and OriginState not in ('AK', 'HI', 'PR', 'VI')
and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by carrier
HAVING cnt > 100000 and max_year > '1990'
ORDER by rate DESC, cnt desc
LIMIT 10

The query finds the total number of delayed flights per airline. In addition, it calculates a smarter "ontime" rating that takes the number of flights into consideration (we do not want to compare smaller air carriers with the large ones, and we want to exclude older airlines that are not in business anymore).
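To make the rating metric concrete, the aggregates for a single carrier can be recomputed in plain Python (an illustrative sketch; the column names follow the ontime table, and a flight counts as "delayed" when it arrived more than 30 minutes late):

```python
# Illustrative recomputation of the query's per-carrier aggregates.
# Each row is (year, arr_delay_minutes).

def carrier_stats(rows):
    cnt = len(rows)
    flights_delayed = sum(1 for _, delay in rows if delay > 30)
    return {
        "min_year": min(year for year, _ in rows),
        "max_year": max(year for year, _ in rows),
        "cnt": cnt,
        "flights_delayed": flights_delayed,
        "rate": round(flights_delayed / cnt, 2),  # same as SQL round(..., 2)
    }

# Hypothetical sample rows for one carrier
rows = [(2003, 45), (2005, 0), (2010, 31), (2013, 10)]
print(carrier_stats(rows))  # 2 of 4 flights delayed -> rate 0.5
```

The HAVING clause then keeps only carriers with a large enough cnt and a recent enough max_year, and the final ORDER BY ranks carriers by this rate.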
The main reason I chose this query is that it is hard to optimize in MySQL: the conditions in the "where" clause still match ~70% of the rows, so the filter removes only ~30%. I've done a basic calculation:
mysql> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');
+-----------+
| count(*)  |
+-----------+
| 108776741 |
+-----------+
mysql> select count(*) FROM ontime;
+-----------+
| count(*)  |
+-----------+
| 152657276 |
+-----------+
mysql> select round((108776741/152657276)*100, 2);
+-------------------------------------+
| round((108776741/152657276)*100, 2) |
+-------------------------------------+
|                               71.26 |
+-------------------------------------+

Table structure:
CREATE TABLE `ontime_part` (
  `YearD` int(11) NOT NULL,
  `Quarter` tinyint(4) DEFAULT NULL,
  `MonthD` tinyint(4) DEFAULT NULL,
  `DayofMonth` tinyint(4) DEFAULT NULL,
  `DayOfWeek` tinyint(4) DEFAULT NULL,
  `FlightDate` date DEFAULT NULL,
  `UniqueCarrier` char(7) DEFAULT NULL,
  `AirlineID` int(11) DEFAULT NULL,
  `Carrier` char(2) DEFAULT NULL,
  `TailNum` varchar(50) DEFAULT NULL,
...
  `id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`,`YearD`),
  KEY `covered` (`DayOfWeek`,`OriginState`,`DestState`,`Carrier`,`YearD`,`ArrDelayMinutes`)
) ENGINE=InnoDB AUTO_INCREMENT=162668935 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (YearD)
(PARTITION p1987 VALUES LESS THAN (1988) ENGINE = InnoDB,
 PARTITION p1988 VALUES LESS THAN (1989) ENGINE = InnoDB,
 PARTITION p1989 VALUES LESS THAN (1990) ENGINE = InnoDB,
 PARTITION p1990 VALUES LESS THAN (1991) ENGINE = InnoDB,
 PARTITION p1991 VALUES LESS THAN (1992) ENGINE = InnoDB,
 PARTITION p1992 VALUES LESS THAN (1993) ENGINE = InnoDB,
 PARTITION p1993 VALUES LESS THAN (1994) ENGINE = InnoDB,
 PARTITION p1994 VALUES LESS THAN (1995) ENGINE = InnoDB,
 PARTITION p1995 VALUES LESS THAN (1996) ENGINE = InnoDB,
 PARTITION p1996 VALUES LESS THAN (1997) ENGINE = InnoDB,
 PARTITION p1997 VALUES LESS THAN (1998) ENGINE = InnoDB,
 PARTITION p1998 VALUES LESS THAN (1999) ENGINE = InnoDB,
 PARTITION p1999 VALUES LESS THAN (2000) ENGINE = InnoDB,
 PARTITION p2000 VALUES LESS THAN (2001) ENGINE = InnoDB,
 PARTITION p2001 VALUES LESS THAN (2002) ENGINE = InnoDB,
 PARTITION p2002 VALUES LESS THAN (2003) ENGINE = InnoDB,
 PARTITION p2003 VALUES LESS THAN (2004) ENGINE = InnoDB,
 PARTITION p2004 VALUES LESS THAN (2005) ENGINE = InnoDB,
 PARTITION p2005 VALUES LESS THAN (2006) ENGINE = InnoDB,
 PARTITION p2006 VALUES LESS THAN (2007) ENGINE = InnoDB,
 PARTITION p2007 VALUES LESS THAN (2008) ENGINE = InnoDB,
 PARTITION p2008 VALUES LESS THAN (2009) ENGINE = InnoDB,
 PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
 PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
 PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
 PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
 PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
 PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
 PARTITION p_new VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */

Even with a "covered" index, MySQL will have to scan ~70M-100M rows and create a temporary table:
mysql> explain select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_part
         type: range
possible_keys: covered
          key: covered
      key_len: 2
          ref: NULL
         rows: 70483364
        Extra: Using where; Using index; Using temporary; Using filesort
1 row in set (0.00 sec)

What is the query response time in MySQL?
mysql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
+------------+----------+---------+----------+-----------------+------+
| min(yearD) | max_year | Carrier | cnt      | flights_delayed | rate |
+------------+----------+---------+----------+-----------------+------+
|       2003 |     2013 | EV      |  2962008 |          464264 | 0.16 |
|       2003 |     2013 | B6      |  1237400 |          187863 | 0.15 |
|       2006 |     2011 | XE      |  1615266 |          230977 | 0.14 |
|       2003 |     2005 | DH      |   501056 |           69833 | 0.14 |
|       2001 |     2013 | MQ      |  4518106 |          605698 | 0.13 |
|       2003 |     2013 | FL      |  1692887 |          212069 | 0.13 |
|       2004 |     2010 | OH      |  1307404 |          175258 | 0.13 |
|       2006 |     2013 | YV      |  1121025 |          143597 | 0.13 |
|       2003 |     2006 | RU      |  1007248 |          126733 | 0.13 |
|       1988 |     2013 | UA      | 10717383 |         1327196 | 0.12 |
+------------+----------+---------+----------+-----------------+------+
10 rows in set (19 min 16.58 sec)

19 minutes is definitely not great.
SQL in Spark
Now we want to run the same query inside Spark and let Spark read data from MySQL. We will create a "datasource" and execute the query:
scala> val jdbcDF = spark.read.format("jdbc").options(
     |   Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
     |   "dbtable" -> "ontime.ontime_sm",
     |   "fetchSize" -> "10000",
     |   "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2015", "numPartitions" -> "48"
     |   )).load()
16/08/02 23:24:12 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 27; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2015.
jdbcDF: org.apache.spark.sql.DataFrame = [id: int, YearD: date ... 19 more fields]
scala> jdbcDF.createOrReplaceTempView("ontime")
scala> val sqlDF = sql("select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10")
sqlDF: org.apache.spark.sql.DataFrame = [min(yearD): date, max_year: date ... 4 more fields]
scala> sqlDF.show()
+----------+--------+-------+--------+---------------+----+
|min(yearD)|max_year|Carrier|     cnt|flights_delayed|rate|
+----------+--------+-------+--------+---------------+----+
|      2003|    2013|     EV| 2962008|         464264|0.16|
|      2003|    2013|     B6| 1237400|         187863|0.15|
|      2006|    2011|     XE| 1615266|         230977|0.14|
|      2003|    2005|     DH|  501056|          69833|0.14|
|      2001|    2013|     MQ| 4518106|         605698|0.13|
|      2003|    2013|     FL| 1692887|         212069|0.13|
|      2004|    2010|     OH| 1307404|         175258|0.13|
|      2006|    2013|     YV| 1121025|         143597|0.13|
|      2003|    2006|     RU| 1007248|         126733|0.13|
|      1988|    2013|     UA|10717383|        1327196|0.12|
+----------+--------+-------+--------+---------------+----+

spark-shell does not show the query time. This can be retrieved from the Web UI or from spark-sql.
I’ve re-run the same query in spark-sql:
./bin/spark-sql --driver-memory 4G --master spark://thor:7077
spark-sql> CREATE TEMPORARY VIEW ontime
         > USING org.apache.spark.sql.jdbc
         > OPTIONS (
         >   url "jdbc:mysql://localhost:3306/ontime?user=root&password=",
         >   dbtable "ontime.ontime_part",
         >   fetchSize "1000",
         >   partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "48"
         > );
16/08/04 01:44:27 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 26; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2014.
Time taken: 3.864 seconds
spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
16/08/04 01:45:13 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
2003	2013	EV	2962008	464264	0.16
2003	2013	B6	1237400	187863	0.15
2006	2011	XE	1615266	230977	0.14
2003	2005	DH	501056	69833	0.14
2001	2013	MQ	4518106	605698	0.13
2003	2013	FL	1692887	212069	0.13
2004	2010	OH	1307404	175258	0.13
2006	2013	YV	1121025	143597	0.13
2003	2006	RU	1007248	126733	0.13
1988	2013	UA	10717383	1327196	0.12
Time taken: 139.628 seconds, Fetched 10 row(s)

So the response time of the same query is almost 10x faster (on the same server, just one box). But how was this query translated into MySQL queries, and why is it so much faster? Here is what is happening inside MySQL:
Inside MySQL
Spark:
scala> sqlDF.show()
[Stage 4:>                                                  (0 + 26) / 26]

MySQL:
mysql> select id, info from information_schema.processlist where info is not NULL and info not like '%information_schema%';
+-------+-----------------------------------------------------------------------------------------------+
| id    | info                                                                                          |
+-------+-----------------------------------------------------------------------------------------------+
| 10948 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002) |
| 10965 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2007 AND yearD < 2008) |
| 10966 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1991 AND yearD < 1992) |
| 10967 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1994 AND yearD < 1995) |
| 10968 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1998 AND yearD < 1999) |
| 10969 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2010 AND yearD < 2011) |
| 10970 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2002 AND yearD < 2003) |
| 10971 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2006 AND yearD < 2007) |
| 10972 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1990 AND yearD < 1991) |
| 10953 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2009 AND yearD < 2010) |
| 10947 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1993 AND yearD < 1994) |
| 10956 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD < 1989 or yearD is null)  |
| 10951 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2005 AND yearD < 2006) |
| 10954 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1996 AND yearD < 1997) |
| 10955 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2008 AND yearD < 2009) |
| 10961 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1999 AND yearD < 2000) |
| 10962 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2011 AND yearD < 2012) |
| 10963 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2003 AND yearD < 2004) |
| 10964 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1995 AND yearD < 1996) |
| 10957 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2004 AND yearD < 2005) |
| 10949 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1989 AND yearD < 1990) |
| 10950 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1997 AND yearD < 1998) |
| 10952 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2013)                 |
| 10958 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1992 AND yearD < 1993) |
| 10960 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2000 AND yearD < 2001) |
| 10959 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2012 AND yearD < 2013) |
+-------+-----------------------------------------------------------------------------------------------+
26 rows in set (0.00 sec)

Spark is running 26 queries in parallel, which is great. As the table is partitioned, each query uses only one partition, but scans that whole partition:
mysql> explain partitions SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002)\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_part
   partitions: p2001
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 5814106
        Extra: Using where
1 row in set (0.00 sec)

In this case, as the box has 12 CPU cores / 24 threads, it efficiently executes 26 queries in parallel, and the partitioned table helps to avoid contention issues (I wish MySQL could scan partitions in parallel, but it can't at the time of writing).
Another interesting thing is that Spark can "push down" some of the conditions to MySQL, but only those inside the "where" clause. All group by/order by/aggregations are done inside Spark. Spark needs to retrieve the data from MySQL to perform those operations, and will not push down group by/order by/etc. to MySQL.
That also means that queries without "where" conditions (for example, "select count(*) as cnt, carrier from ontime group by carrier order by cnt desc limit 10") will have to retrieve all the data from MySQL and load it into Spark (whereas MySQL would otherwise do all the group by work internally). Running it in Spark might be slower or faster (depending on the amount of data and use of indexes), but it also requires more resources and potentially more memory dedicated to Spark. The above query is translated into 26 queries, each of the form "select carrier from ontime_part where (yearD >= N AND yearD < N+1)".
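To see why the per-partition split still yields correct global aggregates, here is an illustrative map/reduce-style sketch (not Spark's code): each "partition" stands for the row set one of the parallel MySQL queries would return, a partial aggregate is computed per partition, and the partials are merged into the final counts.

```python
# Illustrative map/reduce-style aggregation over per-partition results.
# Each row is a (carrier, delayed) pair as one parallel query would return.

def partial(rows):
    """Map step: partial (count, delayed_count) aggregate per carrier."""
    agg = {}
    for carrier, delayed in rows:
        cnt, d = agg.get(carrier, (0, 0))
        agg[carrier] = (cnt + 1, d + (1 if delayed else 0))
    return agg

def merge(partials):
    """Reduce step: combine partial aggregates into global counts."""
    total = {}
    for agg in partials:
        for carrier, (cnt, d) in agg.items():
            tcnt, td = total.get(carrier, (0, 0))
            total[carrier] = (tcnt + cnt, td + d)
    return total

# Hypothetical rows from two year-range partitions
p1 = partial([("UA", True), ("UA", False), ("EV", True)])
p2 = partial([("UA", False), ("EV", True), ("EV", False)])
print(merge([p1, p2]))  # {'UA': (3, 1), 'EV': (3, 2)}
```

Because count and sum are associative, the merge of partials equals the aggregate over all rows, which is why Spark can fan the scan out across 26 queries and still produce the same result as a single MySQL query.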
Pushing down the whole query into MySQL
If we want to avoid sending all data from MySQL to Spark, we have the option of creating a temporary table on top of a query (similar to MySQL's create temporary table as select ...). In Scala:
val tableQuery = "(select yeard, count(*) from ontime group by yeard) tmp"
val jdbcDFtmp = spark.read.format("jdbc").options(
  Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=",
  "dbtable" -> tableQuery,
  "fetchSize" -> "10000"
  )).load()
jdbcDFtmp.createOrReplaceTempView("ontime_tmp")

In Spark SQL:
CREATE TEMPORARY VIEW ontime_tmp
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
  dbtable "(select yeard, count(*) from ontime_part group by yeard) tmp",
  fetchSize "1000"
);
select * from ontime_tmp;

Please note:
- We do not want to use "partitionColumn" here, otherwise we will see 26 queries like this in MySQL: "SELECT yeard, count(*) FROM (select yeard, count(*) from ontime_part group by yeard) tmp where (yearD >= N AND yearD < N+1)" (obviously not optimal)
- This is not a good use of Spark; it is more of a "hack." The only good reason to do it is to be able to use the result of the query as the source of an additional query.
Another option is to cache the result of the query (or even the whole table) and then use .filter in Scala for faster processing. This requires sufficient memory dedicated to Spark. The good news is we can add additional nodes to Spark and get more memory for the Spark cluster.
Spark SQL example:
CREATE TEMPORARY VIEW ontime_latest
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://localhost:3306/ontime?user=root&password=",
  dbtable "ontime.ontime_part partition (p2013, p2014)",
  fetchSize "1000",
  partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "26"
);
cache table ontime_latest;

spark-sql> cache table ontime_latest;
Time taken: 465.076 seconds
spark-sql> select count(*) from ontime_latest;
5349447
Time taken: 0.526 seconds, Fetched 1 row(s)
spark-sql> select count(*), dayofweek from ontime_latest group by dayofweek;
790896	1
634664	6
795540	3
794667	5
808243	4
743282	7
782155	2
Time taken: 0.541 seconds, Fetched 7 row(s)
spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_latest WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin='RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
2013	2013	MQ	9339	1734	0.19
2013	2013	B6	3302	516	0.16
2013	2013	EV	9225	1331	0.14
2013	2013	UA	1317	177	0.13
2013	2013	AA	5354	620	0.12
2013	2013	9E	5520	593	0.11
2013	2013	WN	10968	1130	0.1
2013	2013	US	5722	549	0.1
2013	2013	DL	6313	478	0.08
2013	2013	FL	2433	205	0.08
Time taken: 2.036 seconds, Fetched 10 row(s)

Here we cache partitions p2013 and p2014 in Spark. This retrieves the data from MySQL and loads it into Spark. After that, all queries run on the cached data and will be much faster.
With Scala we can cache the result of a query and then use filters to only get the information we need:
val sqlDF = sql("SELECT flightdate, origin, dest, depdelayminutes, arrdelayminutes, carrier, TailNum, Cancelled, Diverted, Distance from ontime")
sqlDF.cache().show()
scala> sqlDF.filter("flightdate='1988-01-01'").count()
res5: Long = 862

Using Spark with Percona XtraDB Cluster
As Spark can be used in cluster mode and scales with more and more nodes, reading data from a single MySQL server is a bottleneck. We can use MySQL replication slave servers or Percona XtraDB Cluster (PXC) nodes as a Spark datasource. To test it out, I provisioned a Percona XtraDB Cluster with three nodes on AWS (I used m4.2xlarge Ubuntu instances) and also started Apache Spark on each node:
- Node1 (pxc1): Percona Server + Spark Master + Spark worker node + Spark SQL running
- Node2 (pxc2): Percona Server + Spark worker node
- Node3 (pxc3): Percona Server + Spark worker node
All the Spark worker nodes use the memory configuration option:
cat conf/spark-env.sh
export SPARK_WORKER_MEMORY=24g

Then I can start spark-sql (the Connector/J JAR file also needs to be copied to all nodes):
$ ./bin/spark-sql --driver-memory 4G --master spark://pxc1:7077When creating a table, I still use localhost to connect to MySQL (url “jdbc:mysql://localhost:3306/ontime?user=root&password=xxx”). As Spark worker nodes are running on the same instance as Percona Cluster nodes, it will use the local connection. Then running a Spark SQL will evenly distribute all 26 MySQL queries among the three MySQL nodes.
Alternatively, we can run the Spark cluster on a separate host and connect it to HAProxy, which in turn will load balance selects across multiple Percona XtraDB Cluster nodes.
Query Performance Benchmark
Finally, here is the query response time test on the three AWS Percona XtraDB Cluster nodes:
Query 1:

```sql
select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt,
  sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
  round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
FROM ontime_part
WHERE DayOfWeek not in (6,7)
  and OriginState not in ('AK', 'HI', 'PR', 'VI')
  and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by carrier
HAVING cnt > 1000 and max_year > '1990'
ORDER by rate DESC, cnt desc
LIMIT 10;
```
| Query / Index type | MySQL Time | Spark Time (3 nodes) | Times Improvement |
|---|---|---|---|
| No covered index (partitioned) | 19 min 16.58 sec | 192.17 sec | 6.02 |
| Covered index (partitioned) | 2 min 10.81 sec | 48.38 sec | 2.7 |
Query 2:

```sql
select dayofweek, count(*) from ontime_part group by dayofweek;
```
| Query / Index type | MySQL Time | Spark Time (3 nodes) | Times Improvement |
|---|---|---|---|
| No covered index (partitioned) | 19 min 15.21 sec | 195.058 sec | 5.92 |
| Covered index (partitioned) | 1 min 10.38 sec | 27.323 sec | 2.58 |
Now, this looks really good, but it can be better. With three m4.2xlarge nodes we have 8*3 = 24 cores total (although they are shared between Spark and MySQL), so we could expect closer to a 10x improvement, especially for the case without a covered index.
However, on m4.2xlarge the amount of RAM did not allow me to run MySQL fully in memory, so all reads came from EBS non-provisioned IOPS, which only gave me ~120MB/sec. I redid the test on a set of three dedicated servers:
- 28 cores E5-2683 v3 @ 2.00GHz
- 240GB of RAM
- Samsung 850 PRO
This time the test ran completely from RAM:
Query 1 (from the above)
| Query / Index type | MySQL Time | Spark Time (3 nodes) | Times Improvement |
|---|---|---|---|
| No covered index (partitioned) | 3 min 13.94 sec | 14.255 sec | 13.61 |
| Covered index (partitioned) | 2 min 2.11 sec | 9.035 sec | 13.52 |
Query 2:

```sql
select dayofweek, count(*) from ontime_part group by dayofweek;
```
| Query / Index type | MySQL Time | Spark Time (3 nodes) | Times Improvement |
|---|---|---|---|
| No covered index (partitioned) | 2 min 0.36 sec | 7.055 sec | 17.06 |
| Covered index (partitioned) | 1 min 6.85 sec | 4.514 sec | 14.81 |
With this many cores, and with the data in RAM, we actually do not have enough concurrency, as the table only has 26 partitions. So I also tried the non-partitioned table, using the ID primary key as the partition column with 128 Spark partitions.
Note about partitioning

I used a partitioned table (partitioned by year) in my tests to help reduce MySQL-level contention. At the same time, the "partitionColumn" option in Spark does not require that the MySQL table be partitioned. For example, if a table has a primary key, we can use this CREATE VIEW in Spark:
```sql
CREATE OR REPLACE TEMPORARY VIEW ontime
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://127.0.0.1:3306/ontime?user=root&password=",
  dbtable "ontime.ontime",
  fetchSize "1000",
  partitionColumn "id",
  lowerBound "1",
  upperBound "162668934",
  numPartitions "128"
);
```

Assuming we have enough MySQL servers (i.e., nodes or slaves), we can increase the number of partitions, which improves parallelism (compared to only 26 partitions when partitioning by year). In fact, this setup gives an even better response time: 6.44 seconds for query 1.
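The way Spark's JDBC source turns partitionColumn, lowerBound, upperBound and numPartitions into parallel queries can be sketched roughly like this. This is a simplified Python illustration of the stride logic, not Spark's actual implementation:

```python
# Simplified sketch (an assumption, not Spark source code): split the
# [lowerBound, upperBound] range of partitionColumn into numPartitions
# WHERE clauses, one per parallel JDBC query against MySQL.
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    stride = (upper - lower) // num_partitions  # rows per partition, roughly
    preds = []
    current = lower
    for i in range(num_partitions):
        if i == 0:
            # First partition also picks up NULLs and anything below lowerBound
            preds.append(f"{column} < {current + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # Last partition is open-ended so no rows above upperBound are lost
            preds.append(f"{column} >= {current}")
        else:
            preds.append(f"{column} >= {current} AND {column} < {current + stride}")
        current += stride
    return preds

# The bounds from the view above: 128 parallel range scans on the id column
preds = jdbc_partition_predicates("id", 1, 162668934, 128)
print(len(preds))   # 128
print(preds[0])
print(preds[-1])
```

Each predicate becomes the WHERE clause of a separate `SELECT ... FROM ontime.ontime` query, which is why more partitions (up to the available cores and MySQL nodes) means more parallelism.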
Where Spark doesn't work well

For faster queries (those that use indexes, or can use an index efficiently) it does not make sense to use Spark. Retrieving data from MySQL and loading it into Spark is not free, and this overhead can be significant for fast queries. For example, a query like

```sql
select count(*) from ontime_part
where YearD = 2013 and DayOfWeek = 7
  and OriginState = 'NC' and DestState = 'NC';
```

will only scan 1300 rows and returns almost instantly (0.00 seconds as reported by MySQL).
An even better example is select max(id) from ontime_part. In MySQL, this query uses the index and all calculations are done inside MySQL. Spark, on the other hand, has to retrieve all IDs (select id from ontime_part) from MySQL and calculate the maximum itself. That took 24.267 seconds.
Conclusion

Using Apache Spark as an additional engine level on top of MySQL can help speed up slow reporting queries and adds much-needed scalability for long-running SELECT queries. In addition, Spark can help with caching frequent queries.
PS: Visual explain plan with Spark

The Spark Web GUI provides many ways of monitoring Spark jobs. For example, it shows the "job" progress:
And SQL visual explain details: