Planet MySQL

  • Watch the New Webinar: An Introduction to Database Proxies (for MySQL)
    As hinted at earlier this month, we’re happy to announce our latest on-demand webinar: “An Introduction to Database Proxies (for MySQL)”. In this webinar, Gilles Rayrat, our VP of Engineering and database proxies guru, shares some of his knowledge on the world of database proxies: how they work, why they’re important and what to use them for. Starting with a simple database connectivity scenario, Gilles builds up the content by discussing clustered databases and what happens in the case of a failure, through to explaining the important role database proxies play, including a more in-depth look into some advanced database connectivity setups and proxy functionalities. More specifically, Gilles covers the following:
    - A simple database connectivity scenario
    - The concept of a clustered database
    - Failure in a clustered database: the nightmare scenario
    - The solution: use a proxy! Preferably a smart one …
    - Advanced database connectivity setups
    - Advanced proxy functionalities
    - Recap
    While this webinar discusses database proxy concepts in general, we do call ourselves the MySQL Availability Company for a reason … and hence the content of this webinar is geared predominantly at MySQL (or MariaDB and Percona Server). This topic is not entirely innocent, of course, as we’ve been involved with MySQL proxy technology for quite some time … namely, Tungsten Proxy, known to Continuent customers as Tungsten Connector. It is in fact a MySQL proxy; and at Continuent, we like to refer to it as an ‘Intelligent Proxy for MySQL’.
    Tungsten Proxy – The Intelligent MySQL Proxy – in a nutshell:
    - Provides intelligent traffic routing to valid MySQL master(s), locally and globally
    - Scales read queries to valid slaves via query inspection and other methods
    - Applications and active users do not disconnect during MySQL master failover events
    - Combined with another intelligent layer of Tungsten Clustering called Tungsten Manager, it provides automatic, rapid master failover for MySQL High Availability and automated cross-site failover for Disaster Recovery
    Tungsten Connector (Tungsten Proxy) has been an important part of the Continuent Tungsten Clustering solution since 2006, and we’re gearing up to make it more widely known in the coming months. Stay tuned! And watch our webinar on database proxies in the meantime!
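    The read-scaling idea above (routing read queries to slaves via query inspection) can be sketched as follows. This is a minimal illustration, not Tungsten Proxy's actual logic; the regex, function name and host arguments are all hypothetical.

```python
import re

# Hypothetical routing rule in the spirit of query-inspection read/write
# splitting: statements that only read data may go to a replica; writes,
# DDL and locking reads must go to the master.
READ_ONLY = re.compile(r"^\s*(SELECT|SHOW|EXPLAIN|DESCRIBE)\b", re.IGNORECASE)

def route(query: str, master: str, replicas: list[str]) -> str:
    """Return the host a query should be sent to."""
    if READ_ONLY.match(query) and "FOR UPDATE" not in query.upper():
        # A real proxy would load-balance; we just pick the first replica.
        return replicas[0] if replicas else master
    return master
```

    A real proxy also has to track replica health and replication lag before routing, which is where the "valid slaves" qualification above comes in.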

  • SQL EXISTS and NOT EXISTS
    Introduction
    In this article, we are going to see how the SQL EXISTS operator works and when you should use it. Although the EXISTS operator has been available since SQL:86, the very first edition of the SQL Standard, I have found that there are still many application developers who don’t realize how powerful SQL subquery expressions really are when it comes to filtering a given table based on a condition evaluated on a different table.
    Database table model
    Let’s assume we have the following two tables in our database, that form a one-to-many... The post SQL EXISTS and NOT EXISTS appeared first on Vlad Mihalcea.
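    A quick self-contained demonstration of EXISTS and NOT EXISTS, using SQLite for convenience (the operator is standard SQL, so the same queries run on MySQL). The one-to-many schema here (authors and books) is a hypothetical stand-in for the article's table model.

```python
import sqlite3

# In-memory database with a one-to-many relationship: one author, many books.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO book VALUES (1, 1, 'SQL Basics');
""")

# EXISTS: authors that have at least one book.
with_books = [r[0] for r in conn.execute("""
    SELECT a.name FROM author a
    WHERE EXISTS (SELECT 1 FROM book b WHERE b.author_id = a.id)
""")]

# NOT EXISTS: authors that have no books.
without_books = [r[0] for r in conn.execute("""
    SELECT a.name FROM author a
    WHERE NOT EXISTS (SELECT 1 FROM book b WHERE b.author_id = a.id)
""")]

print(with_books, without_books)  # ['Alice'] ['Bob']
```

    The subquery is correlated on `b.author_id = a.id`, which is exactly the "filter one table based on a condition evaluated on a different table" pattern the article describes.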

  • How to clone a MySQL test or development instance from InnoDB Cluster?
    Introduction to InnoDB Cluster
    If you have not heard about MySQL InnoDB Cluster: InnoDB Cluster is a built-in high-availability solution for MySQL. The key benefit over older high-availability solutions is that InnoDB Cluster is built into MySQL and supported on all platforms where MySQL is supported.
    The key components of MySQL InnoDB Cluster:
    - MySQL Group Replication
    - MySQL Shell
    - MySQL Router
    MySQL Group Replication is a plugin that makes sure that data is distributed to all nodes, conflicts are handled, and recovery is performed if needed. MySQL Shell makes it easy to configure and administer your InnoDB Cluster. MySQL Router is the last part of InnoDB Cluster; it's a lightweight middleware that provides transparent routing between the application and the back-end MySQL servers that are part of group replication.
    Creating a test/development environment from InnoDB Cluster
    Most users need some way to periodically refresh test and development environments from production. In the past many users solved this using traditional master-slave replication: the solution was to set up a MySQL slave that used LVM snapshots to create new test/development environments. For InnoDB Cluster we can use MySQL Clone (available in MySQL 8.0.17 or later) to create a test or development environment much more easily.
    Steps needed on the InnoDB Cluster (create a dedicated clone user for the donor):
    CREATE USER clone_user@'%' IDENTIFIED BY "clone_password";
    GRANT BACKUP_ADMIN ON *.* TO clone_user@'%';
    GRANT SELECT ON performance_schema.* TO clone_user@'%';
    GRANT EXECUTE ON *.* TO clone_user@'%';
    You might limit the clone user to only be able to connect from some sub-net and not from all hosts (%).
    The InnoDB Cluster must run MySQL 8.0.17 or later. Next we need to create the test or development server.
    1) Start a new MySQL instance (using MySQL 8.0.17 or later).
    2) Provision data using clone:
    INSTALL PLUGIN clone SONAME "";
    INSTALL PLUGIN group_replication SONAME '';
    SET GLOBAL clone_valid_donor_list = "";
    CREATE USER clone_user@localhost IDENTIFIED BY "clone_password";
    GRANT CLONE_ADMIN ON *.* TO clone_user@localhost;
    SET GLOBAL log_error_verbosity = 3;
    CLONE INSTANCE FROM clone_user@ IDENTIFIED BY "clone_password";
    3) After the restart, disable the GR plugin (it causes lots of errors in the error log):
    UNINSTALL PLUGIN group_replication;
    Some notes on the above: you do not have to create the clone user on the new MySQL instance, and you should set clone_valid_donor_list to the IP of the donor you have selected, but this is not mandatory either. The log level is set so you can monitor the progress of the clone in the log file. There are some controls in the MySQL Server that force us to load the group replication plugin prior to executing the clone command. If you do not load the group replication plugin you will get an error like:
    ERROR 3870 (HY000): Clone Donor plugin group replication is not active in Recipient
    Of course, the final step will be to clean/wash the data before handing over the MySQL instance for test or development. That step is not covered by this blog, but our masking and de-identification features can be used to solve it ;)
    I have tested the above procedures using MySQL 8.0.18.
    Happy Clustering!
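    Since ordering matters here (the plugins and the donor list must be in place before CLONE INSTANCE runs), a small helper that assembles the recipient-side statements in the required order can make the procedure repeatable. This is only a sketch: the function name is made up, and the donor address, SONAME values and password are left as parameters because the post elides them (they are platform- and site-specific).

```python
def recipient_provision_sql(donor_addr: str, clone_soname: str,
                            gr_soname: str, password: str) -> list[str]:
    """Assemble the recipient-side statements from the post, in order.
    donor_addr is the host:port of the chosen donor; the SONAME values
    differ per platform, so they are taken as arguments."""
    return [
        f"INSTALL PLUGIN clone SONAME '{clone_soname}';",
        f"INSTALL PLUGIN group_replication SONAME '{gr_soname}';",
        f"SET GLOBAL clone_valid_donor_list = '{donor_addr}';",
        f"CREATE USER clone_user@localhost IDENTIFIED BY '{password}';",
        "GRANT CLONE_ADMIN ON *.* TO clone_user@localhost;",
        "SET GLOBAL log_error_verbosity = 3;",
        f"CLONE INSTANCE FROM clone_user@{donor_addr} IDENTIFIED BY '{password}';",
    ]
```

    Feeding the resulting list to your favorite MySQL client one statement at a time reproduces step 2 above; the UNINSTALL of group_replication still happens separately, after the post-clone restart.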

  • Deletes are fast and slow in an LSM
    In an LSM deletes are fast for the deleter but can make queries that follow slower. The problem is that too many tombstones can get in the way of a query, especially a range query. A tombstone must remain in the LSM tree until any keys it deletes have been removed from the LSM. For example, if there is a tombstone for key "ABC" on level 1 of the LSM tree and a live value for that key on level 3, then that tombstone cannot be removed. It is hard to make the check (does a live key exist below me?) efficient.
    I haven't read much about optimizing for tombstones in an LSM not named RocksDB. Perhaps I have not tried hard to find such details. Maybe this is something that LSM engine developers should explain more in public.
    Confirming whether a tombstone can be dropped
    This is based on code that I read ~2 years ago. Maybe RocksDB has changed today. Tombstones are dropped during compaction. The question is how much work (CPU and IO) you are willing to spend to determine whether a tombstone can be dropped. LevelDB and RocksDB favor performance over exactness. By this I mean they spend less CPU and no IO on the "can I drop this tombstone" check. The check is simple today: if there is an LSM level (or sorted run) below (older) that has a key range which overlaps the tombstone key, then the tombstone won't be dropped. And in most workloads this means that tombstones aren't dropped until they reach the max LSM level -- because there usually is overlap.
    An LSM could spend a bit more CPU and check bloom filters for the levels that overlap with the tombstone. That might allow tombstones to be dropped earlier. An even more expensive check would be to use more CPU and possibly IO to confirm whether the level that overlaps with the tombstone really has that key. This can make compaction much slower.
    Fast path
    The SingleDelete API in RocksDB makes it easier to drop tombstones. If you respect what the API requires then tombstones can be dropped quickly -- without spending IO or CPU.
    SingleDelete makes it easy to drop tombstones for a given key when those tombstones meet during compaction. It doesn't do anything for the case above where the tombstone is on one level and the live key might be on a lower level.
    MyRocks magic
    MyRocks has some clever code that does extra work to remove tombstones from an SST when the SST has too many tombstones. Configuration options for this are mentioned in this post. Percona has some docs on rocksdb_compaction_sequential_deletes. And I am sure there is a slide deck from my favorite MyRocks expert. Maybe he will share that with me.
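    The cheap overlap check described above can be sketched in a few lines. This is a simplified model, not RocksDB code: each older level is reduced to a (min_key, max_key) range, mirroring the "no CPU, no IO" trade-off where range overlap alone decides whether the tombstone must be kept.

```python
# Sketch of the "can I drop this tombstone" check: a tombstone found
# during compaction may only be dropped if no older level could still
# hold a live value for its key. Favoring speed over exactness, only
# key-range overlap is tested -- no bloom filters, no point lookups.
def can_drop_tombstone(key: str, older_level_ranges: list[tuple[str, str]]) -> bool:
    for lo, hi in older_level_ranges:
        if lo <= key <= hi:
            # The range overlaps, so a live value *might* exist below;
            # the tombstone must be kept even if the key isn't really there.
            return False
    return True
```

    This also shows why tombstones tend to survive until the max level: with wide, mostly-overlapping key ranges per level, the overlap test almost always says "keep".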

  • A Comparison Between the MySQL Clone Plugin and Xtrabackup
    In one of our previous blogs we explained how the Clone Plugin, one of the new features that showed up in MySQL 8.0.17, can be used to rebuild a replication slave. Currently the go-to tool for that, as well as for backups, is Xtrabackup. We thought it would be interesting to compare how those tools work and behave.
    Comparing Performance
    The first thing we decided to test is how both perform when it comes to storing the copy of the data locally. We used AWS and an m5d.metal instance with two NVMe SSDs, and we ran the clone to a local copy:
    mysql> CLONE LOCAL DATA DIRECTORY='/mnt/clone/';
    Query OK, 0 rows affected (2 min 39.77 sec)
    Then we tested Xtrabackup and made the local copy:
    rm -rf /mnt/backup/ ; time xtrabackup --backup --target-dir=/mnt/backup/ --innodb-file-io-threads=8 --innodb-read-io-threads=8 --innodb-write-io-threads=8 --innodb-io-capacity=20000 --parallel=16
    200120 13:12:28 completed OK!
    real 2m38.407s
    user 0m45.181s
    sys 4m18.642s
    As you can see, the time required to copy the data was basically the same. In both cases the limitation was the hardware, not the software. Transferring data to another server will be the most common use case for both tools. It can be a slave you want to provision or rebuild. In the future it may be a backup; the Clone Plugin doesn’t have such functionality as of now, but we are pretty sure someone will make it possible to use it as a backup tool in the future. Given that hardware is the limitation for a local backup in both cases, hardware will also be a limitation for transferring the data across the network. Depending on your setup, it could be either the network, disk I/O or CPU. In I/O-intensive operations CPU is the least common bottleneck. This makes it quite common to trade some CPU utilization for a reduction in the data set size. You can accomplish that through compression. If it is done on the fly, you still have to read the same amount of data but you send less of it (as it is compressed) over the network.
    Then, you will have to decompress it and write it down. It is also possible that the files themselves are compressed. In that case you reduce the amount of data read, transferred and written to disk. The Clone Plugin doesn’t come with any sort of on-the-fly compression. It can clone compressed InnoDB tables, but this doesn’t help much when compared to Xtrabackup, as Xtrabackup will also copy the reduced data set. On the other hand, Xtrabackup can be used along with compression done on the fly, so it will complete faster if the network is the limiting factor. Other than that we would expect to see similar results in both cases.
    Comparing Usability
    Performance is just one thing to compare; there are many others, like how easy the tools are to use. In both cases there are several steps you have to perform. For the Clone Plugin they are:
    1. Install the plugin on all nodes
    2. Create users on both donor and receiver nodes
    3. Set up the donor list on the receiver
    Those three steps have to be performed once. When they are set, you can use the Clone Plugin to copy the data. Based on the init system you may need to start the MySQL node after the clone process has completed. This is not required if, as in the case of systemd, MySQL will be automatically restarted. Xtrabackup requires a couple more steps to get things done:
    1. Install the software on all nodes
    2. Create a user on the donor
    Those two steps have to be executed once. For every backup you have to execute the following steps:
    1. Configure network streaming. A simple and secure way would be to use SSH, something like:
    xtrabackup --backup --innodb-file-io-threads=8 --innodb-read-io-threads=8 --innodb-write-io-threads=8 --innodb-io-capacity=20000 --parallel=8 --stream=xbstream --target-dir=/mnt/backup/ | ssh root@ "xbstream -x -C /mnt/backup/"
    We found, though, that for fast hard drives, with single-threaded SSH, CPU becomes a bottleneck. Setting up netcat instead requires an additional step on the receiver, to ensure netcat is up, listening and redirecting the traffic to the proper software (xbstream).
    2. Stop MySQL on the receiver node
    3. Run Xtrabackup
    4. Apply InnoDB logs
    5. Copy back the data
    6. Start MySQL on the receiver node
    As you can see, Xtrabackup requires more steps to be taken.
    Security Considerations
    The Clone Plugin can be configured to use SSL for data transfer, even though by default it uses plain text. Cloning of encrypted tablespaces is possible, but there is no option to encrypt, for example, the local clone. The user would have to do it separately, after the clone process is completed. Xtrabackup itself doesn’t provide any security. Security is determined by how you stream the data. If you use SSH for streaming, data in transit will be encrypted. If you decide to use netcat, it will be sent as plain text. Of course, if the data is encrypted in the tablespaces, it is already secured, just as in the case of the Clone Plugin. Xtrabackup can also be used along with on-the-fly encryption to ensure your data is encrypted at rest as well.
    Plugin Features
    The Clone Plugin is a new product, still in an infant phase. Its primary task is to provide ways of provisioning nodes in InnoDB Cluster, and it does that just fine. For other tasks, like backups or provisioning of replication slaves, it can be used to some extent, but it suffers from several limitations. We covered some of them in our previous blog so we won’t repeat them here, but the most serious one, when talking about provisioning and backups, is that only InnoDB tables are cloned. If you happen to use any other storage engine, you cannot really use the Clone Plugin. On the other hand, Xtrabackup will happily back up and transfer the most commonly used storage engines: InnoDB, MyISAM (unfortunately, it’s still used in many places) and CSV.
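    The CPU-for-bandwidth trade-off the comparison keeps returning to is easy to see in miniature. This toy example (using Python's zlib rather than any backup tool) compresses a repetitive payload, standing in for the redundant pages of a database backup stream, and the payload contents are of course invented.

```python
import zlib

# Spend CPU compressing the stream so fewer bytes cross the network.
# Repetitive data -- like many database pages -- compresses well.
payload = b"mysql row data " * 10_000  # stand-in for a backup stream

compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)

print(f"{len(payload)} -> {len(compressed)} bytes (ratio {ratio:.3f})")
```

    Whether this trade pays off depends on which resource is saturated: if the network is the bottleneck, on-the-fly compression helps; if CPU already is (as with single-threaded SSH above), it hurts.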
    Xtrabackup also comes with a set of tools that are intended to help with streaming the data from node to node, or even streaming backups to S3 buckets. To sum it up, when it comes to backing up data and provisioning replication slaves, Xtrabackup is, and most likely will remain, the most popular pick. On the other hand, the Clone Plugin will most likely improve and evolve. We will see what the future holds and how things will look in a year’s time. Let us know if you have any thoughts on the Clone Plugin; we are very interested to hear your opinion on this new tool.
    Tags: MySQL, plugin, cloning, xtrabackup