
RSS News

Planet MySQL

Planet MySQL - https://planet.mysql.com
  • Updating/Deleting Rows From ClickHouse (Part 2)
    In this post, we'll look at updating and deleting rows with ClickHouse. It's the second of two parts. In the first part of this post, we described a high-level overview of implementing incremental refresh on a ClickHouse table as an alternative to native UPDATE/DELETE support. In this part, we will show you the actual steps and sample code.

    Prepare Changelog Table

    First, we create the changelog table below. It can be stored on any MySQL instance, separate from the source of our analytics table. When we run the change capture script, it will record data in this table that we can consume later with the incremental refresh script:

        CREATE TABLE `clickhouse_changelog` (
          `db` varchar(255) NOT NULL DEFAULT '',
          `tbl` varchar(255) NOT NULL DEFAULT '',
          `created_at` date NOT NULL,
          `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
          `log_file` varchar(255) NOT NULL,
          `log_pos` int(10) unsigned NOT NULL,
          PRIMARY KEY (`db`,`tbl`,`created_at`),
          KEY `log_file` (`log_file`,`log_pos`)
        ) ENGINE=InnoDB;

    Create ClickHouse Table

    Next, let's create the target ClickHouse table. Remember that the corresponding MySQL table is:

        CREATE TABLE `hits` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `type` varchar(100) DEFAULT NULL,
          `user_id` int(11) NOT NULL,
          `location_id` int(11) NOT NULL,
          `created_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB;

    Converted to ClickHouse, the table looks like the following, with the addition of a "created_day" column that serves as the partitioning key:

        CREATE TABLE hits (
          id Int32,
          created_day Date,
          type String,
          user_id Int32,
          location_id Int32,
          created_at Int32
        ) ENGINE = MergeTree
        PARTITION BY toMonday(created_day)
        ORDER BY (created_at, id)
        SETTINGS index_granularity = 8192;

    Run Changelog Capture

    Once the tables are ready, we can run the change capture script. An example script can be found in this gist; it is written in Python and uses the python-mysql-replication library. The library acts as a replication client: it continuously downloads the binary logs from the source and sifts through them to find any UPDATE/DELETE executed against our source table. A few configuration options in the script need to be customized:

      • LOG_DB_HOST: The MySQL host where we created the clickhouse_changelog table.
      • LOG_DB_NAME: The database name where the clickhouse_changelog table is created.
      • SRC_DB_HOST: The MySQL host we will be downloading binary logs from. This can be either a primary or a secondary/replica, as long as it is the same server where our raw table is also located.
      • MYSQL_USER: MySQL username.
      • MYSQL_PASS: MySQL password.
      • TABLE: The table we want to watch for changes.
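    The gist itself is only linked from the original post, but a minimal sketch of such a capture loop, built on the python-mysql-replication (pymysqlreplication) library the post names, could look like the code below. The host names, credentials, server_id, and the exact upsert statement are illustrative assumptions rather than the gist's actual code, and the source server must use row-based binary logging for these row events to be visible:

        # Sketch: watch the binlog for UPDATE/DELETE on `hits` and record the
        # touched week plus binlog coordinates in clickhouse_changelog.
        from datetime import timedelta

        import pymysql
        from pymysqlreplication import BinLogStreamReader
        from pymysqlreplication.row_event import DeleteRowsEvent, UpdateRowsEvent

        log_db = pymysql.connect(host="log_db_host", db="mydb", user="user",
                                 password="pass", autocommit=True)

        stream = BinLogStreamReader(
            connection_settings={"host": "source_db_host", "port": 3306,
                                 "user": "user", "passwd": "pass"},
            server_id=4242,                # any id unique among the source's replicas
            only_events=[UpdateRowsEvent, DeleteRowsEvent],
            only_tables=["hits"],
            blocking=True,                 # keep waiting for new binlog events
            resume_stream=True,
        )

        for event in stream:
            for row in event.rows:
                # Updates carry before/after row images; deletes carry the removed values.
                values = row.get("before_values") or row["values"]
                day = values["created_at"].date()
                monday = day - timedelta(days=day.weekday())  # week start, as toMonday() would give
                with log_db.cursor() as cur:
                    cur.execute(
                        "INSERT INTO clickhouse_changelog (db, tbl, created_at, log_file, log_pos) "
                        "VALUES (%s, %s, %s, %s, %s) "
                        "ON DUPLICATE KEY UPDATE log_file = VALUES(log_file), log_pos = VALUES(log_pos)",
                        (event.schema, event.table, monday, stream.log_file, stream.log_pos),
                    )

    One (db, tbl, week) row is kept per modified week; the auto-updating updated_at column then tells the refresh script which weeks changed since its last run.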
    When the script is successfully configured and running, the clickhouse_changelog table should start populating with data like below:

        mysql> select * from mydb.clickhouse_changelog;
        +------+------+------------+---------------------+------------------+----------+
        | db   | tbl  | created_at | updated_at          | log_file         | log_pos  |
        +------+------+------------+---------------------+------------------+----------+
        | mydb | hits | 2014-06-02 | 2017-12-23 17:19:33 | mysql-bin.016353 | 18876747 |
        | mydb | hits | 2014-06-09 | 2017-12-23 22:10:29 | mysql-bin.016414 |  1595314 |
        | mydb | hits | 2014-06-16 | 2017-12-23 02:59:37 | mysql-bin.016166 | 33999981 |
        | mydb | hits | 2014-06-23 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84498477 |
        | mydb | hits | 2014-06-30 | 2017-12-23 06:08:59 | mysql-bin.016204 | 23792406 |
        | mydb | hits | 2014-08-04 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84499495 |
        | mydb | hits | 2014-08-18 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84500523 |
        | mydb | hits | 2014-09-01 | 2017-12-23 06:09:19 | mysql-bin.016204 | 27120145 |
        +------+------+------------+---------------------+------------------+----------+

    Full Table Import

    So we have our changelog capture in place; the next step is to initially populate the ClickHouse table from MySQL. Normally we could easily do this with a mysqldump into tab-separated format, but remember that we have to transform the created_at column from MySQL into ClickHouse's Date format so it can be used as the partitioning key. A simple way to do this is with a small set of shell commands like below:

        SQL=$(cat <<EOF
        SELECT id, DATE_FORMAT(created_at, "%Y-%m-%d"), type, user_id,
          location_id, UNIX_TIMESTAMP(created_at)
        FROM hits
        EOF
        )
        mysql -h source_db_host mydb -BNe "$SQL" > hist.txt
        cat hist.txt | clickhouse-client -d mydb --query="INSERT INTO hits FORMAT TabSeparated"

    One thing to note about this process is that the MySQL client buffers the results for the whole query, which can eat up all the memory on the server you run this from if the table is really large. To avoid this, import the table in chunks of a few million rows at a time, for example by appending an id range such as WHERE id >= 1 AND id < 5000000 to the SELECT and looping. Since the changelog capture from the previous step is already running, you do not need to worry about any changes that happen between chunks: those changes are consolidated during the incremental refreshes.

    Incremental Refresh

    After initially populating the ClickHouse table, we set up a continuous incremental refresh using a separate script. A template for the script we use for the example table can be found in this gist. What this script does is twofold:

      • It determines the list of weeks recently modified based on clickhouse_changelog, dumps the rows for those weeks from MySQL, and re-imports them into ClickHouse.
      • If the current week is not in the list of modified weeks, it also checks for new rows based on the auto-incrementing primary key and appends them to the ClickHouse table.
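    For readers who want the moving parts in one place, here is a rough Python sketch of such a refresh pass. It is not the gist's bash script: the host names, credentials, the ten-minute changelog window, and the drop-partition-then-reinsert strategy for modified weeks are illustrative assumptions, and the MySQL dump is buffered in memory here (chunk it in production, as noted above):

        # Sketch: re-import recently modified weeks, then append new rows by id.
        import subprocess

        import pymysql

        # Columns in ClickHouse order: id, created_day, type, user_id, location_id, created_at.
        # '%%' escapes literal '%' because the driver uses %s placeholders.
        DUMP_SQL = ("SELECT id, DATE_FORMAT(created_at, '%%Y-%%m-%%d'), type, user_id, "
                    "location_id, UNIX_TIMESTAMP(created_at) FROM hits WHERE {}")

        def clickhouse(query, data=None):
            # Run a query through the clickhouse-client CLI, optionally feeding TSV rows.
            return subprocess.run(["clickhouse-client", "-d", "mydb", "--query", query],
                                  input=data, capture_output=True, check=True).stdout

        src = pymysql.connect(host="source_db_host", db="mydb", user="user", password="pass")
        log = pymysql.connect(host="log_db_host", db="mydb", user="user", password="pass")

        def dump_tsv(where, params):
            # Pull matching rows from MySQL as TSV lines.
            with src.cursor() as cur:
                cur.execute(DUMP_SQL.format(where), params)
                return "\n".join("\t".join(str(col) for col in row) for row in cur.fetchall())

        # 1) Re-import every week touched recently according to clickhouse_changelog.
        with log.cursor() as cur:
            cur.execute("SELECT created_at FROM clickhouse_changelog WHERE db = 'mydb' "
                        "AND tbl = 'hits' AND updated_at > NOW() - INTERVAL 10 MINUTE")
            weeks = [row[0] for row in cur.fetchall()]

        for monday in weeks:
            clickhouse("ALTER TABLE hits DROP PARTITION '%s'" % monday)  # drop the stale week
            rows = dump_tsv("created_at >= %s AND created_at < %s + INTERVAL 7 DAY",
                            (monday, monday))
            if rows:
                clickhouse("INSERT INTO hits FORMAT TabSeparated", rows.encode())

        # 2) Append rows newer than anything ClickHouse has, keyed on the auto-increment id.
        max_id = int(clickhouse("SELECT max(id) FROM hits").strip() or 0)
        new_rows = dump_tsv("id > %s", (max_id,))
        if new_rows:
            clickhouse("INSERT INTO hits FORMAT TabSeparated", new_rows.encode())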
    An example output of the gist's script would be:

        ubuntu@mysql:~/clickhouse$ bash clickhouse-incr-refresh.sh hits
        2017-12-24_00_20_19 incr-refresh Starting changelog processing for hits
        2017-12-24_00_20_19 incr-refresh Current week is: 2017-12-18
        2017-12-24_00_20_19 incr-refresh Processing week: 2017-12-18
        2017-12-24_00_20_20 incr-refresh Changelog import for hits complete
        ubuntu@mysql:~/clickhouse$ bash clickhouse-incr-refresh.sh hits
        2017-12-24_00_20_33 incr-refresh Starting changelog processing for hits
        2017-12-24_00_20_33 incr-refresh Current week is: 2017-12-18
        2017-12-24_00_20_33 incr-refresh Weeks is empty, nothing to do
        2017-12-24_00_20_33 incr-refresh Changelog import for hits complete
        2017-12-24_00_20_33 incr-refresh Inserting new records for hits > id: 5213438
        2017-12-24_00_20_33 incr-refresh No new rows found
        2017-12-24_00_20_33 incr-refresh Incremental import for hits complete
        ubuntu@mysql:~/clickhouse$

    Note that if you imported a really large table and the changelog has accumulated a large number of changes to refresh, the initial incremental execution might take some time. After that, though, it should be faster. The script can be run every minute, or at longer or shorter intervals, depending on how often you want the ClickHouse table to be refreshed.

    To wrap up, here is a query from MySQL on the same table, versus ClickHouse:

        mysql> SELECT COUNT(DISTINCT user_id) FROM hits WHERE created_at
            -> BETWEEN '2016-01-01 00:00:00' AND '2017-01-01 00:00:00';
        +-------------------------+
        | COUNT(DISTINCT user_id) |
        +-------------------------+
        |                 3023028 |
        +-------------------------+
        1 row in set (25.89 sec)

        :) SELECT COUNT(DISTINCT user_id) FROM hits WHERE created_at BETWEEN 1451606400 AND 1483228800;

        SELECT COUNTDistinct(user_id)
        FROM hits
        WHERE (created_at >= 1451606400) AND (created_at <= 1483228800)

        ┌─uniqExact(user_id)─┐
        │            3023028 │
        └────────────────────┘

        1 rows in set. Elapsed: 0.491 sec. Processed 35.82 million rows, 286.59 MB (73.01 million rows/s., 584.06 MB/s.)

    Enjoy!
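    A small footnote on the comparison above: the ClickHouse copy stores created_at as Int32 Unix seconds, so the numeric bounds in the second query cover the same range as the datetime literals in the MySQL query (assuming the timestamps were written with a UTC server time zone), which a quick check confirms:

        from datetime import datetime, timezone

        # The Int32 bounds in the ClickHouse query are the epoch-second
        # equivalents of the MySQL query's datetime literals:
        print(datetime.fromtimestamp(1451606400, tz=timezone.utc))  # 2016-01-01 00:00:00+00:00
        print(datetime.fromtimestamp(1483228800, tz=timezone.utc))  # 2017-01-01 00:00:00+00:00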

  • Fun with Bugs #60 - On Some Memory Leaks, Replication and Other Bugs Fixed in MySQL 5.7.21
    Oracle formally released MySQL 5.7.21 yesterday. I no longer bother to study MySQL release notes carefully and completely, but during a quick review today I noted several interesting items I'd like you to pay attention to.

    I am historically interested in InnoDB implementation details, so I could not miss Bug #87619 - "InnoDB partition table will lock into the near record as a condition in the use". This was a regression bug in 5.7+, probably caused by the new implementation of partitioning in InnoDB.

    Another interesting bug is Bug #86927 - "Renaming a partitioned table does not update mysql.innodb_table_stats.", by Jean-François Gagné. It was yet another bug in InnoDB's persistent statistics (which I truly hate). What makes it especially interesting to me, though, is that it is the first public bug report I've noted that mentions a MySQL 9.0.0 release as a target for the fix: "Fixed as of the upcoming 5.7.21, 8.0.4, 9.0.0 release". So it's clear that back in October 2017 Oracle already had a separate branch for an upcoming MySQL 9.0.x. It also probably means that the MySQL 8.0.x GA is coming really soon.

    There are bug reports that are worth reading for technical reasons, and others worth reading only if you want to have some fun. Bug #86607 - "InnoDB crashed when master thread evict dict_table_t object" is a good example that covers both cases. Good to know the crash is fixed, but please make sure to read all the comments there.

    In this release I noted fixes to several public bugs reported by Shane Bester. The first of them is Bug #86573 - "foreign key cascades use excessive memory". Check how he used memory instrumentation in Performance Schema to demonstrate the problem! In Bug #86482 - "innodb leaks memory, performance_schema file_instances #sql-ib3129987-252773.ibd", he used a similar approach to show a potential memory leak in the Performance Schema itself! That is yet another bug that mentions 9.0.0 as a target version for the fix, among others... Bug #78048 - "INNODB Full text Case sensitive not working" is here both because I recently started to notice problems related to InnoDB FULLTEXT indexing again (the first time was soon after it was introduced), and because it has an MTR test case contributed by Sveta Smirnova.

    Support for XA transactions has always been problematic in MySQL (the still "Verified" Bug #87526 by Sveta Smirnova is one of the recent examples of how incomplete or useless it can be; see also MDEV-14593). Check the following bugs fixed in MySQL 5.7.21 if you use XA transactions:

      • Bug #87393 - "xa rollback with wrong xid will be recorded into the binlog". It was reported by HongXiang Jiang, who had also contributed a patch.
      • Bug #83295 - "replication error occurs, use xa transaction(one phase)". Yet another XA transactions problem, reported by Hiroyuki Itoh and then confirmed by many affected users. Nice to see it fixed.

    There are many fixes in MySQL 5.7.21 related to memory leaks. Two bug reports of this kind came from Przemyslaw Malkowski:

      • Bug #85371 - "Memory leak in multi-source replication when binlog_rows_query_log_events=1". Again, memory instrumentation in Performance Schema was used to demonstrate the problem. Vlad Lesin, also from Percona, contributed the patch for this bug.
      • Bug #85251 - "Memory leak in master-master GTID replication with sync_relay_log_info". Here Vlad Lesin, who had contributed the patch, also used Massif for the detailed analysis.

    To summarize, I am starting to miss memory instrumentation in Performance Schema in MariaDB 10.x...
    This is a really useful feature. I usually care about optimizer bugs, and these two attracted my attention:

      • Bug #87207 - "select distinct with secondary key for 'Using index for group-by' bad results". This nice optimizer regression bug was found by Shane Bester. As a workaround, until you move to 5.7.21, you can try to set optimizer_switch='use_index_extensions=off'. I'd keep it that way by default...
      • Bug #72854 - "Extremely slow performance with outer joins and join buffer". I am happy to see this old optimizer bug, reported by Sergey Petrunya from MariaDB, finally fixed.

    You can find a lot more details, including the usual references to MySQL bug reports that are still private, in the Release Notes. Keep reading and consider upgrading :)

  • Webinar January 18, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2
    Join Percona's Product Manager Michael Coburn as he presents MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2 on Thursday, January 18, 2018, at 11:00 am PST / 2:00 pm EST (UTC-8). Register Now.

    Tags: Percona Monitoring and Management, PMM, Monitoring, MySQL, Performance, Optimization, DBA, SysAdmin, DevOps
    Experience Level: Expert

    Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications need to handle heavy traffic loads while remaining responsive and stable, so that you can deliver an excellent user experience. Furthermore, DBAs are also expected to find cost-efficient means of solving these issues.

    In this webinar, the second part of a two-part series, Michael discusses how you can optimize and troubleshoot MySQL performance, and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them. By the end of this webinar, you will have a better understanding of how you can troubleshoot MySQL problems in your database. Register for the webinar now.

    Michael Coburn, Product Manager: Michael joined Percona as a Consultant in 2012 and progressed through various roles, including Managing Consultant, Principal Architect, Technical Account Manager, and Technical Support Engineer. He is now the Product Manager of Percona Monitoring and Management.

  • MySQL Workbench on Slack
    Now that we have the new MySQL community workspace on Slack, we have opened a channel for our users there. If you have a question or a problem with Workbench, this is the place to be. This channel will soon replace our old #workbench IRC channel.

  • MySQL on Docker - How to Containerize Your Database - New Whitepaper
    Severalnines is happy to announce that our new whitepaper, "MySQL on Docker - How to Containerize Your Database", is now available to download for free!

    While the idea of containers has been around since the early days of Unix, Docker made waves in 2013 when it hit the market with its innovative solution. Docker allows you to add your stacks and applications to containers, where they share a common operating system kernel. This lets you have a lightweight virtualized system with almost zero overhead. Docker also lets you bring containers up or down in seconds, making for rapid deployment of your stack. Download the whitepaper.

    Severalnines has been experimenting with and writing about how to utilize Docker for MySQL in our "MySQL on Docker" blog series since 2014. This new whitepaper is the culmination of years of work by our team trying to understand how best to deploy and manage MySQL on Docker while utilizing the advanced monitoring and management features found in ClusterControl. The topics covered in this whitepaper are:

      • An Introduction to Docker
      • MySQL Docker Images
      • Networking in Docker
      • Understanding MySQL Containers & Volumes
      • Monitoring and Management of MySQL in Docker
      • Docker Security
      • Backup and Restores
      • Running ClusterControl on Docker

    If your organization is taking advantage, or plans to take advantage, of the latest in Docker container technology in conjunction with its open source MySQL databases, this whitepaper will help you better understand what you need to do to get started.

    ClusterControl provides advanced management and monitoring functionality to get your MySQL replication and clustered instances up and running, using proven methodologies that you can depend on. Used in conjunction with other orchestration tools for deployment to the containers, ClusterControl makes managing your open source databases easy, with point-and-click interfaces and no need for specialized knowledge about the technology. ClusterControl delivers an array of features to help manage and monitor your open source database environments:

      • Management & Monitoring: ClusterControl provides management features to repair and recover broken nodes, as well as test and automate MySQL upgrades.
      • Advanced Monitoring: ClusterControl provides a unified view of all MySQL nodes and clusters across all your data centers and lets you drill down into individual nodes for more detailed statistics.
      • Automatic Failure Detection and Handling: ClusterControl takes care of your replication cluster's health. If a master failure is detected, ClusterControl automatically promotes one of the available slaves to ensure your cluster is always up.

    Learn more about how ClusterControl can enhance performance, or pull the Docker image, on the Severalnines site.

    Tags: docker, MySQL, container, how to, intro, mysql on docker, database containers, containerisation