

Planet MySQL

Planet MySQL - https://planet.mysql.com
  • MySQL Lock information in MySQL Shell
    Last Tuesday was the very first session of DB AMA, where Morgan Tocker gave a nice presentation on MySQL Performance_Schema and illustrated it with some handy queries to get metadata and data locks. As those queries are not that simple to write, or at least to remember, I thought it would be a good idea to add them to MySQL Shell, the best MySQL DBA tool! I've added a new method to the check plugin: getLocks(). Let's see it in action. As you can see, this small extension offers you a better view of what is locked per transaction. You can find several MySQL Shell extensions directly on GitHub: https://github.com/lefred/mysqlshell-plugins
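    For a rough idea of what getLocks() surfaces, similar information can be pulled with plain Performance_Schema queries. This is an illustrative sketch only, not lefred's actual plugin code; the column selection and filtering are my own assumptions:

      -- Metadata locks, mapped back to the owning session
      SELECT ml.object_type, ml.object_schema, ml.object_name,
             ml.lock_type, ml.lock_status,
             t.processlist_id, t.processlist_user, t.processlist_info
        FROM performance_schema.metadata_locks ml
        JOIN performance_schema.threads t ON t.thread_id = ml.owner_thread_id
       WHERE ml.object_schema NOT IN ('performance_schema', 'mysql');

      -- InnoDB data locks held or waited on per transaction (MySQL 8.0)
      SELECT thread_id, object_schema, object_name, index_name,
             lock_type, lock_mode, lock_status, lock_data
        FROM performance_schema.data_locks;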

  • Finding the Unknowns in Database Audit Logs
    Then-Secretary of Defense Donald Rumsfeld popularized the terms "known knowns", "known unknowns", and "unknown unknowns." With the ever-increasing number of data breaches and vulnerabilities, database operations teams have to account for every possibility. Visualizing your audit data allows you to look for the "unknowns": those access patterns or connections that you'd otherwise overlook. Although enabling an audit log and shipping it off to a vault may meet security and regulatory requirements, stopping there means losing an important opportunity to protect your customer and employee information. The following dashboard demonstrates the type of information that audit logs can reveal:
      - Who is connecting to my database (IP address, location, username...)?
      - Who is trying to connect to my database but getting access errors?
      - Which tables are being accessed, and by whom?
      - Who is accessing sensitive data?

    With a quick glance, we can see that "User_Unknown" and Ops1_US are accessing the salaries table. "User_Unknown" has executed 2 updates and has an IP address registered in Barbados. You can also see that User_Unknown has logged several 1045 errors, likely a brute-force password attempt.

    The diagram below shows all the components needed to enable this type of real-time analytics: MySQL Enterprise Audit generates the logs, Filebeat pushes log changes to Logstash, and Logstash processes the JSON and sends it to the ELK stack (Elasticsearch, Logstash, Kibana). Although I used the ELK stack, there are many great SIEM (security information and event management) tools and log analyzers: Splunk, SolarWinds, Oracle Audit Vault, Sumo Logic, Micro Focus, Trustwave, Datadog, etc.

    For the audit log, I used Oracle's MySQL Enterprise Audit plugin. For the above dashboard, I enabled specific filters to log connects and disconnects, connection errors, and table access. These filters can be found in a prior blog post (MySQL Audit Logging — How to Avoid Data Overload). For auditing purposes, the general query log should be avoided: it lacks compression, automatic log rotation, and encryption, it puts significant overhead on your system, and it doesn't collect sufficient information to meet regulatory security requirements.

    If you're interested in setting up a similar environment, I used Oracle Cloud and the terraform scripts in this GitHub repo to deploy Elasticsearch and Kibana. The MySQL Enterprise Audit plugin can be downloaded with the enterprise server from Oracle e-delivery. For log processing, I changed the MySQL audit output to JSON (log file formats). I posted the logstash config file below for ingesting the file into Elasticsearch. I welcome any feedback on improvements, or your thoughts on alternative log analyzers. Also, please let us know if we can provide more filtering examples (a minimal filter sketch follows the config below).
    Filter definitions:

      # logstash config file
      input {
        beats { port => 5044 }
      }
      filter {
        # remove comma between json events
        mutate { gsub => ["message", "\} \}\,", "} }"] }
        # remove errors due to malformed json
        if "_jsonparsefailure" in [tags] { drop { } }
        grok {
          match => { "message" => [ "%{JSON:payload_raw}" ] }
          pattern_definitions => { "JSON" => "{.*$" }
        }
        if "_grokparsefailure" in [tags] { drop { } }
        json { source => "payload_raw" }
        # replace log timestamp with actual audit log timestamp
        date { match => [ "[timestamp]", "yyyy-MM-d H:m:s" ] }
        geoip {
          source => "[login][ip]"
          target => "geoip"
        }
        # Remove the temporary "payload_raw" field (and other fields)
        mutate {
          remove_field => [ "payload_raw", "message", "port", "host", "@version", "timestamp" ]
        }
      }
      output {
        elasticsearch { hosts => [ "[elasticsearch_server_IP]:9200" ] }
      }
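    For those asking about filtering examples: as a rough illustration only (the exact filters used for the dashboard are in the prior blog post mentioned above, and the filter name here is a placeholder), a rule-based MySQL Enterprise Audit filter that logs connection events and table access for all accounts could look like this:

      -- Define a filter covering the connection and table_access event classes
      SELECT audit_log_filter_set_filter('log_conn_table_access',
        '{ "filter": { "class": [ { "name": "connection" }, { "name": "table_access" } ] } }');
      -- Apply it as the default filter for all accounts
      SELECT audit_log_filter_set_user('%', 'log_conn_table_access');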

  • MySQL without the SQL
    MySQL can be used as a NoSQL JSON Document Store as well as a traditional relational database. Before the MySQL Document Store, you needed a DBA, or someone acting as one, to set up the schema to hold the tables and the tables according to how the data was arranged (and the indexes, constraints, and all the other stuff RDBMSs are known for). This was great when you knew exactly what the data looked like, there were no changes to the structure of that data, and once things were set they stayed that way forever. But what about projects where change is constant, or where you need to accommodate changes while keeping the old data in its existing format? That is a perfect case for the MySQL Document Store.

    MySQL w/o SQL
    You can connect to the server, create a schema, create a document collection, and store data without once using SQL. This video shows a simple example of doing just that! I connected to a MySQL server running 8.0.19 using the new MySQL Shell (mysqlsh) with an account I had set up earlier; for security reasons you still need someone with a semblance of DBA skills to set up an account with CREATE USER <user>@<host> IDENTIFIED BY <password> and GRANT ALL ON <schema>.* TO <user>@<host>, or the MySQL Workbench equivalent (a concrete sketch follows this summary). After the login, I typed session to see the details of the session. Next, I created a schema with the session.createSchema() operator, so the schema was created without Structured Query Language. Then I needed to point the db object to that new schema; you can use either the \use command or session.setCurrentSchema() to do just that. Now we need a document collection: to create one, use db.createCollection(), and from there it is easy to use add() to store data.

    Commands used:
      mysqlsh demo@localhost
      session
      session.getSchemas()
      session.createSchema('demo')
      \use demo   (or session.setCurrentSchema('demo'))
      db
      db.getCollections()
      db.createCollection('dave')
      db.dave.add( { name : "Dave", "demo" : "Working!" } )
      db.dave.find()

    All opinions expressed in this blog are those of Dave Stokes, who is actually amazed to find anyone else agreeing with him.
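    As a concrete sketch of the account setup described above (the user name, host, schema, and password are placeholders, not values from the original post):

      CREATE USER 'demo'@'localhost' IDENTIFIED BY 'demo_password';
      GRANT ALL ON demo.* TO 'demo'@'localhost';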

  • Still have binlog_error_action as IGNORE_ERROR?
    Recently, we were affected by an overlooked configuration option introduced in MySQL 5.6. The incident forced us to plan extended downtime and rebuild the slave. In this post, we'll discuss our encounter with binlog_error_action and its likely bad default. The incident started with an alert from our monitoring platform that a check had failed to verify a MySQL master's binary log coordinates. My colleague, working from the other side of the globe, observed the following:

      mysql> show master status;
      Empty set (0.00 sec)

      mysql> show binary logs;
      ERROR 1381 (HY000): You are not using binary logging

    Interestingly, the MySQL slave was still reporting that everything was okay!

      mysql> show slave status\G
      *************************** 1. row ***************************
      Slave_IO_State: Waiting for master to send event
      Master_Host: master
      Master_User: replicator
      Master_Port: 3306
      Connect_Retry: 60
      Master_Log_File: mysql-bin.000186
      Read_Master_Log_Pos: 1069066810
      Relay_Log_File: mysqld-relay-bin.000245
      Relay_Log_Pos: 1069067004
      Relay_Master_Log_File: mysql-bin.000186
      Slave_IO_Running: Yes
      Slave_SQL_Running: Yes
      Replicate_Do_DB:
      Replicate_Ignore_DB:
      Replicate_Do_Table:
      Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
      Last_Errno: 0
      Last_Error:
      Skip_Counter: 0
      Exec_Master_Log_Pos: 1069066810
      Relay_Log_Space: 1069067296
      Until_Condition: None
      Until_Log_File:
      Until_Log_Pos: 0
      Master_SSL_Allowed: No
      Master_SSL_CA_File:
      Master_SSL_CA_Path:
      Master_SSL_Cert:
      Master_SSL_Cipher:
      Master_SSL_Key:
      Seconds_Behind_Master: 0
      Master_SSL_Verify_Server_Cert: No
      Last_IO_Errno: 0
      Last_IO_Error:
      Last_SQL_Errno: 0
      Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
      Master_Server_Id: 23294781
      Master_UUID: 7e2e60eb-39fb-11ea-a248-005056b589fd
      Master_Info_File: mysql.slave_master_info
      SQL_Delay: 0
      SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
      Master_Retry_Count: 86400
      Master_Bind:
      Last_IO_Error_Timestamp:
      Last_SQL_Error_Timestamp:
      Master_SSL_Crl:
      Master_SSL_Crlpath:
      Retrieved_Gtid_Set: 7e2e60eb-39fb-11ea-a248-005056b589fd:50961813-81101493
      Executed_Gtid_Set: 2366bbb5-39fb-11ea-a246-005056b5f82f:1-818,
      7e2e60eb-39fb-11ea-a248-005056b589fd:1-81101493
      Auto_Position: 1
      1 row in set (0.00 sec)

    Guess what my colleague saw in the error log?

      2020-03-18 07:23:50 14694 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_3966_2.MYI'; try to repair it
      2020-03-18 07:23:50 14694 [ERROR] Got an error from unknown thread, /mnt/workspace/percona-server-5.6-redhat-binary-new/label_exp/min-centos-7-x64/test/rpmbuild/BUILD/percona-server-5.6.46-86.2/storage/myisam/mi_write.c:230
      2020-03-18 14:10:46 14694 [Warning] IP address '192.168.0.10' could not be resolved: Name or service not known
      2020-03-19 00:55:59 7f5589a9a700 InnoDB: Error: Write to file (merge) failed at offset 5678039040.
      InnoDB: 1048576 bytes should have been written, only 892928 were written.
      InnoDB: Operating system error number 28.
      InnoDB: Check that your OS and file system support files of this size.
      InnoDB: Check also that the disk is not full or a disk quota exceeded.
      InnoDB: Error number 28 means 'No space left on device'.
      InnoDB: Some operating system error numbers are described at
      InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
      2020-03-19 00:57:59 7f5589a9a700 InnoDB: Error: Write to file (merge) failed at offset 3308257280.
      InnoDB: 1048576 bytes should have been written, only 888832 were written.
      InnoDB: Operating system error number 28.
      InnoDB: Check that your OS and file system support files of this size.
      InnoDB: Check also that the disk is not full or a disk quota exceeded.
      InnoDB: Error number 28 means 'No space left on device'.
      InnoDB: Some operating system error numbers are described at
      InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
      2020-03-19 13:29:08 14694 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_3966_2.MYI'; try to repair it
      2020-03-19 13:29:08 14694 [ERROR] Got an error from unknown thread, /mnt/workspace/percona-server-5.6-redhat-binary-new/label_exp/min-centos-7-x64/test/rpmbuild/BUILD/percona-server-5.6.46-86.2/storage/myisam/mi_write.c:230
      2020-03-19 13:29:11 14694 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_3966_5.MYI'; try to repair it
      2020-03-19 13:29:11 14694 [ERROR] Got an error from unknown thread, /mnt/workspace/percona-server-5.6-redhat-binary-new/label_exp/min-centos-7-x64/test/rpmbuild/BUILD/percona-server-5.6.46-86.2/storage/myisam/mi_write.c:230
      2020-03-19 13:29:11 14694 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_3966_2.MYI'; try to repair it
      ...
      2020-03-19 15:27:33 14694 [ERROR] Got an error from unknown thread, /mnt/workspace/percona-server-5.6-redhat-binary-new/label_exp/min-centos-7-x64/test/rpmbuild/BUILD/percona-server-5.6.46-86.2/storage/myisam/mi_write.c:230
      2020-03-19 15:27:33 14694 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_3966_3.MYI'; try to repair it
      2020-03-19 15:27:33 14694 [ERROR] Got an error from unknown thread, /mnt/workspace/percona-server-5.6-redhat-binary-new/label_exp/min-centos-7-x64/test/rpmbuild/BUILD/percona-server-5.6.46-86.2/storage/myisam/mi_write.c:230
      2020-03-19 15:27:35 14694 [ERROR] An error occurred during flush stage of the commit. 'binlog_error_action' is set to 'IGNORE_ERROR'. Hence turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.

    Further debugging of the issue with PMM confirmed a spike on the /tmp disk.

    PMM Disk Check

    The environment here is active-passive replication with Orchestrator looking over it and doing its thing. It also has application traffic flowing through a ProxySQL cluster, directed to the current active database.

    So, What's the Situation?
    We need to restart the master to fix this, as mentioned in the error log. The master is no longer writing binary logs, and therefore the slave is broken. Because the slave is broken, it's no longer able to take over the active position. No backups and no HA. Here's a TL;DW (write) summary of the solution offered:

      # Disable recoveries on Orchestrator: orchestrator-client -c disable-global-recoveries
      # Change the database configuration, adding the following variable: binlog_error_action=ABORT_SERVER
      # Restart MySQL
      # Verify the error log & binary log
      # Enable global recoveries: orchestrator-client -c enable-global-recoveries
      # Take a fresh backup on the master
      # Restore the passive node (slave) & reconfigure replication
      # Raise the /tmp disk partition to more than the largest table size

    Bad Default?!
    This option was introduced in MySQL 5.6.22 as a solution to the bug report "mysqld continues operation without logging when binlogs cannot be written!" The default value was IGNORE_ERROR; the alternative is to crash the server with the value ABORT_SERVER. Had it been ABORT_SERVER, our SWAT team of Orchestrator and ProxySQL would have managed the HA.
    They would have performed the failover to the passive node. Then the only thing left to resolve the issue would be to verify the status of the aborted (crashed) MySQL server, which, if running at all, would already have been demoted to a passive position. Thankfully, this default was changed to ABORT_SERVER in MySQL 5.7.7 and later versions.

    A Task for You!
    Here is something for you to do: we still have almost a year to go until the EOL of MySQL 5.6; it's scheduled for February 2021. If you're using MySQL 5.6, go ahead and change the binlog_error_action variable. This is a dynamic change and won't affect anything live, but make sure you also persist it in the MySQL config file (see the sketch after this summary).

      mysql> SET GLOBAL binlog_error_action=ABORT_SERVER;

    Comment from a remote colleague: With everything going on in the world, we hope you're lying low and working from home. Pythian has been cultivating a remote-work culture for more than a decade. If you're struggling with remote teamwork and facing challenges (technical or not), it's possible we've already encountered them and have a solution. Pythian is here to help.

    References:
      https://bugs.mysql.com/bug.php?id=51014
      https://mysqlserverteam.com/new-option-to-stop-the-server-if-binlogging-fails-in-mysql-5-6/
      https://jira.percona.com/browse/PXC-2546

    P.S. Are you still using MySQL 5.6? Consider upgrading sooner rather than later.
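    For completeness, here is a minimal my.cnf sketch of the persistence step from the task above. The [mysqld] section name is standard; the exact config file path varies by distribution, and this is my illustration rather than the post's own configuration:

      # /etc/my.cnf (path varies by distribution)
      [mysqld]
      binlog_error_action = ABORT_SERVER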

  • Evaluating Group Replication with Multiple Writers in MySQL
    In this blog, I want to evaluate Group Replication scaling capabilities when handling several writers, that is, when read-write connections are established to multiple nodes (in this case, two). This setup is identical to my previous post, Evaluating Group Replication Scaling Capabilities in MySQL. For this test, I deploy multi-node bare metal servers, where each node and the client are dedicated to an individual server and connected to each other by a 10Gb network. I use a 3-node Group Replication setup.

    Hardware specifications:
      System       | Supermicro; SYS-F619P2-RTN; v0123456789 (Other)
      Service Tag  | S292592X0110239C
      Platform     | Linux
      Release      | Ubuntu 18.04.4 LTS (bionic)
      Kernel       | 5.3.0-42-generic
      Architecture | CPU = 64-bit, OS = 64-bit
      Threading    | NPTL 2.27
      SELinux      | No SELinux detected
      Virtualized  | No virtualization detected
      # Processor
      Processors   | physical = 2, cores = 40, virtual = 80, hyperthreading = yes
      Models       | 80xIntel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
      Caches       | 80x28160 KB
      # Memory
      Total        | 187.6G

    For the benchmark, I use a sysbench-tpcc 1000W database, prepared as:

      ./tpcc.lua --mysql-host=172.16.0.11 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --time=300 --threads=64 --report-interval=1 --tables=10 --scale=100 --db-driver=mysql --use_fk=0 --force_pk=1 --trx_level=RC prepare

    The configs, scripts, and raw results are available on our GitHub. The workload is "in-memory," that is, the data (about 100GB) should fit into innodb_buffer_pool (also 100GB). For the MySQL version, I use MySQL 8.0.19. I use the following command line:

      ./tpcc.lua --mysql-host=172.16.0.12,172.16.0.13 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --time=$time --threads=$i --report-interval=1 --tables=10 --scale=100 --trx_level=RR --db-driver=mysql --report_csv=yes --mysql-ignore-errors=3100,3101,1213,1180 run

    This establishes active connections to TWO separate nodes in Group Replication. To make sure that transactions do not work with stale data, I use group_replication_consistency='BEFORE' on all nodes (a sketch of this setting follows at the end of this summary). See more on this in my previous post, Making Sense of MySQL Group Replication Consistency Levels.

    Results
    Let's review the results I've got. First, let's take a look at how performance changes when we increase user threads from 1 to 256 for 3 nodes. It is interesting to see how the throughput becomes unstable as we increase the number of threads. To see this in more detail, let's draw the chart with an individual scale for each thread count: as we can see, there is a lot of variation starting from 2 threads. Let's check the 8- and 64-thread runs with 1-second resolution. There are multiple periods when throughput drops to 0. These are the results at a 1-second interval:

      393 610.99 603.01 3140.97 4822.97 4865.93 2454.05 1038 939.99 1340.02 1549.01 1561 626 0 0 66 0 0 63 0 0 69 0 0 268 369 367 331 385 356 258 0 0

    There is basically an 11-second-long stall during which the cluster could not handle transactions.

    Conclusion
    Unless I missed some tuning parameters that would improve the performance, it is hard to recommend a multi-writer setup based on Group Replication. Multi-primary mode is disabled by default in MySQL Group Replication, and you should not try to enable it.
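    For reference, here is a minimal sketch of the consistency setting mentioned above, applied on each Group Replication member. This is my own illustration, not a snippet from the post's configs; SET PERSIST requires MySQL 8.0, which matches the 8.0.19 servers used here.

      -- Run on every Group Replication member so reads wait for preceding transactions to be applied
      SET PERSIST group_replication_consistency = 'BEFORE';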