Planet MySQL

  • Percona XtraDB Cluster 5.7.26-31.37 Is Now Available
    Percona is glad to announce the release of Percona XtraDB Cluster 5.7.26-31.37 on June 26, 2019. Binaries are available from the downloads section or from our software repositories.

    Percona XtraDB Cluster 5.7.26-31.37 is now the current release, based on the following:

      • Percona Server for MySQL 5.7.26-29
      • Codership WSREP API release 5.7.25
      • Codership Galera library 3.26

    All Percona software is open-source and free.

    Bugs Fixed

      • PXC-2480: In some cases, Percona XtraDB Cluster could not replicate CURRENT_USER() used in an ALTER statement. USER() and CURRENT_USER() are no longer allowed in any ALTER statement, since they fail when replicated.
      • PXC-2487: A DDL or DML action in progress from one client while the provider was updated from another client could result in a race condition.
      • PXC-2490: Percona XtraDB Cluster could crash when binlog_space_limit was set to a value other than zero during wsrep_recover mode.
      • PXC-2491: SST could fail if the donor had encrypted undo logs.
      • PXC-2497: The user can set the preferred donor via the wsrep_sst_donor variable. An IP address is not a valid value for this variable, but if one was used anyway, the resulting error message did not provide sufficient information. The message has been improved to suggest checking the value of wsrep_sst_donor for an IP address.
      • PXC-2537: Nodes could crash after an attempt to set a password using mysqladmin.

    Other bugs fixed: PXC-2276, PXC-2292, PXC-2476, PXC-2560.

    Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
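    The PXC-2480 restriction above can be illustrated with a short sketch (the password is a placeholder; `ALTER USER ... IDENTIFIED BY` is standard MySQL 5.7 syntax). Such a statement is now rejected rather than replicated, because CURRENT_USER() resolves differently on each node:

```sql
-- Rejected by Percona XtraDB Cluster 5.7.26-31.37 and later:
ALTER USER CURRENT_USER() IDENTIFIED BY 'placeholder-password';
```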

  • What is the Best Way to Check the Health of a Tungsten Cluster Before a Switch?
    The Question

    Recently, a customer asked us: what would cause a node switch to fail in a Tungsten Cluster? For example, we saw the following during a recent session where a switch failed:

```
cctrl> switch to db3
SELECTED SLAVE: db3@alpha
SET POLICY: MAINTENANCE => MAINTENANCE
PURGE REMAINING ACTIVE SESSIONS ON CURRENT MASTER 'db1@alpha'
PURGED A TOTAL OF 0 ACTIVE SESSIONS ON MASTER 'db1@alpha'
FLUSH TRANSACTIONS ON CURRENT MASTER 'db1@alpha'
Exception encountered during SWITCH.
Failed while setting the replicator 'db1' role to 'slave'
ClusterManagerException: Exception while executing command 'replicatorStatus' on manager 'db1'
Exception=Failed to execute '/alpha/db1/manager/ClusterManagementHelper/replicatorStatus alpha db3'
Reason=
CLUSTER_MEMBER(true)
STATUS(FAIL)
+----------------------------------------------------------------------------+
|alpha |
+----------------------------------------------------------------------------+
|Handler Exception: SYSTEM |
|Cause:Exception |
| MANAGER |
|CLUSTER_MEMBER(true) |
|STATUS(FAIL) |
|Exception: ConnectionException |
|Message: getResponseQueue():No response queue found for id: 1552059204364 |
```

    The Answer

    The Tungsten Manager is unable to communicate with a remote resource, or it has insufficient memory. Here are some possibilities to consider:

      • Network blockage – if the Manager is unable to communicate with the target layer (i.e. the Replicator or another Manager), the above error will occur.
      • Manager tuning – if restarting the Manager on all nodes clears the issue, this indicates that the Manager is starved for resources.

    The Solution

    So what may be done to alleviate the problem?
    Manager tuning – earlier versions of Tungsten Clustering did not allocate sufficient resources to the Java JVM, so make the following configuration changes via tpm update:

```
mgr-heap-threshold=200
mgr-java-mem-size=250
```

    Network blockage – make sure the replicators are all online and caught up, and check the Manager's view of the cluster using the following commands on every node:

```
cctrl> ls
cctrl> members
cctrl> ping
cctrl> ls resources
cctrl> cluster validate
cctrl> show alarms
```

    Here are examples:

```
tungsten@db1:/home/tungsten # cctrl

[LOGICAL] /east > members
east/db1(ONLINE)/
east/db2(ONLINE)/
east/db3(ONLINE)/

[LOGICAL] /east > ping
NETWORK CONNECTIVITY: PING TIMEOUT=2
NETWORK CONNECTIVITY: CHECKING MY OWN ('db1') CONNECTIVITY
HOST db1/ ALIVE (ping) result: true, duration: 0.01s, notes: ping -c 1 -w 2
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db2'
HOST db2/ ALIVE (ping) result: true, duration: 0.00s, notes: ping -c 1 -w 2
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db3'
HOST db3/ ALIVE (ping) result: true, duration: 0.01s, notes: ping -c 1 -w 2

[LOGICAL] /east > cluster validate
========================================================================
CHECKING FOR QUORUM: MUST BE AT LEAST 2 DB MEMBERS
QUORUM SET MEMBERS ARE: db1, db3, db2
SIMPLE MAJORITY SIZE: 2
GC VIEW OF CURRENT DB MEMBERS IS: db1, db2, db3
VALIDATED DB MEMBERS ARE: db1, db3, db2
REACHABLE DB MEMBERS ARE: db1, db3, db2
========================================================================
MEMBERSHIP IS VALID BASED ON VIEW/VALIDATED CONSOLIDATED MEMBERS CONSISTENCY
CONCLUSION: I AM IN A PRIMARY PARTITION OF 3 DB MEMBERS OUT OF THE REQUIRED MAJORITY OF 2
VALIDATION STATUS=VALID
CLUSTER ACTION=NONE

[LOGICAL] /east > ls resources
+----------------------------------------------------------------------------+
|RESOURCES |
+----------------------------------------------------------------------------+
| db1:DATASERVER: ONLINE |
| db1:MANAGER: ONLINE |
| db1:MEMBER: ONLINE |
| db1:REPLICATOR: ONLINE |
| db2:DATASERVER: ONLINE |
| db2:MANAGER: ONLINE |
| db2:MEMBER: ONLINE |
| db2:REPLICATOR: ONLINE |
| db3:DATASERVER: ONLINE |
| db3:MEMBER: ONLINE |
| db3:REPLICATOR: ONLINE |
| west:CLUSTER: ONLINE |
+----------------------------------------------------------------------------+

[LOGICAL] /east > show alarms
+----------------------------------------------------------------------------+
|ALARMS |
+----------------------------------------------------------------------------+
| |
+----------------------------------------------------------------------------+
```

    Summary: The Wrap-Up

    In this blog post we discussed what would cause a node switch to fail in a Tungsten Cluster and what may be done about it.

    To learn about Continuent solutions in general, check out The Library – and please read the docs! Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business! Want to learn more or run a POC? Contact us.
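    As a sketch, the two Manager-tuning settings mentioned above might be placed in a tpm INI-style configuration file before running `tpm update`. The `[alpha]` service name is taken from the session above; your service name and file layout will differ, so treat this as an assumption, not the canonical format:

```ini
[alpha]
mgr-heap-threshold=200
mgr-java-mem-size=250
```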

  • Stored Functions and Temporary Tables are Not a Good Fit
    In this blog post, I am going to show why we have to be careful with stored functions in the select list, as a single query can cause thousands of queries in the background if we aren't cautious. For this example, I am only going to use the SLEEP function to demonstrate the issue, but you could use any other stored function. Here is the test schema and data:

```sql
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (
  id INT NOT NULL PRIMARY KEY auto_increment,
  gcol INT NOT NULL,
  fcol INT NOT NULL,
  INDEX (gcol)
);
INSERT INTO t1 VALUES (NULL,1,1),(NULL,2,1),(NULL,3,1),(NULL,4,2),(NULL,5,1);
```

    And the first query:

```sql
mysql [localhost] {msandbox} (test) > SELECT gcol, SLEEP(1) FROM t1 WHERE 1 GROUP BY gcol LIMIT 1;
+------+----------+
| gcol | SLEEP(1) |
+------+----------+
|    1 |        0 |
+------+----------+
1 row in set (1.00 sec)
```

    The query takes one second, which means the SLEEP was executed after/during the LIMIT step. The second query creates a temp table:

```sql
mysql [localhost] {msandbox} (test) > SELECT fcol, SLEEP(1) FROM t1 WHERE 1 GROUP BY fcol LIMIT 1;
+------+----------+
| fcol | SLEEP(1) |
+------+----------+
|    1 |        0 |
+------+----------+
1 row in set (5.02 sec)
```

    It takes 5 seconds, meaning the SLEEP was executed for every row before the LIMIT, and the only difference between these two queries is that the second one uses a temp table. (Yes, the schema and query could be improved to avoid the temp table, but this is only a test case, and you can't avoid temp tables all the time.) MySQL uses a different order of steps here and does the select-list evaluation before/during creating the temp table. So if there are functions in the select list and MySQL has to create a temp table, that could cause thousands of extra queries in the background. I recently had a case where running a single query with functions in the select list resulted in 333,906 queries in the background – and the query had a LIMIT 0,10.
    There is a very good blog post describing the order of operations in SQL, which should be the following:

      • From
      • Where
      • Group By
      • Aggregations
      • Having
      • Window
      • Select
      • Distinct
      • Union
      • Order by
      • Offset
      • Limit

    Based on this, if MySQL strictly followed the SQL order, the LIMIT would come last, and the first query should also take 5s, because the select-list evaluation would happen before it for every row. I guess this is a performance optimization in MySQL: it runs the evaluation only for the limited number of rows. When temp tables are involved, however, MySQL uses a different logic and does the evaluation before/during the GROUP BY, when it creates the temp table. When MySQL creates a temp table, it has to materialize every single row, and that includes the functions as well: if your query is going to create a huge temp table, MySQL will execute the function for all of those rows. MySQL does not have generated columns in internal temp tables, but what might do the trick here is to just store a reference to a function that is executed only when MySQL reads the data out of the temp table. I have created a ticket, but we are still debating if there is any room for improvement here.

    Using DML queries in these functions

    If you are using DML (insert/update/delete) queries in stored functions, and you are calling them in the select list, you have to be careful: if MySQL creates a temp table, it will call those queries for every row in the temp table as well. You might intend to insert only one row into another table, but end up with thousands.

    How can we avoid this?

    If you are facing this issue, you have to analyze your queries. You might only need an extra index to avoid the temp table, or you could rewrite the query.

    Where can I see if this is happening?

    Unfortunately, the slow query log cannot help us here, because these function calls and the queries from inside the functions are not logged. This could also be a feature request, because it would make debugging much easier. (I have already created a feature request in Percona Server to log queries from stored functions.) However, in the slow query log entry for the original query, Rows_examined will be very high if you are using stored functions; a number far higher than expected can flag the problem. The general log can also be helpful for investigating, because it shows all of these function calls.

    Conclusion

    If you are using stored functions in the select list, you have to make sure the query does not use temp tables; otherwise it can cause many extra queries in the background.

  • SQL EXISTS OPERATOR Tutorial With Example
    SQL EXISTS OPERATOR Tutorial With Example is today's topic. The SQL EXISTS operator is used for checking the existence of a record in a subquery. The EXISTS condition is used in combination with a subquery, and it is met if the subquery returns at least one row. The result of the EXISTS keyword is a Boolean value, i.e., True or False. It can be used with SELECT, INSERT, UPDATE, or DELETE queries.

    SQL EXISTS OPERATOR

    The EXISTS operator tests for the existence of any record in the subquery, returning true if the subquery returns one or more records. In other words, the EXISTS condition in SQL checks whether the result of a correlated nested query is empty (contains no tuples) or not. EXISTS is commonly used with a correlated subquery. See the following syntax of the SQL EXISTS statement.

    #SYNTAX

```sql
SELECT column(s) FROM table_name
WHERE EXISTS (SELECT column_name FROM table_name WHERE condition);
```

    #PARAMETERS

      • column(s): the name of the column(s) in the table.
      • table_name: the name of the table.
      • WHERE EXISTS: the condition used for checking the existence of a record in a subquery.

    There is another, similar way to write the syntax:

```sql
SELECT [Column Names] FROM [Source] WHERE EXISTS (Subquery to Check)
```

      • Columns: allows us to choose one or more columns from the tables.
      • Source: one or more SQL tables present in the database. SQL JOINs can be used to join multiple tables.
      • Subquery: if the subquery returns any rows, the outer query returns its matching rows; otherwise, it returns no records.
    Let's understand this with examples. Consider the tables CUSTOMERS and ORDERS.

    CUSTOMERS:

```
CUST_CODE  CUST_NAME  CITY       GRADE  AMOUNT
101        SHUBH      KOLKATA    1      10000
201        SOURAV     KOLKATA    1      10000
301        KISHAN     PATNA      2      20000
401        DILEEP     ALLAHABAD  2      30000
501        SHRUTI     PATNA      3      40000
601        SHOUVIK    JALANDHAR  4      20000
701        ROHIT      PATNA      2      40000
801        ROUNAK     NULL       5      10000
```

    ORDERS:

```
O_id  Cust_id  Order_date
1     201      2019-02-06
2     301      2019-03-06
3     302      2019-04-06
4     403      2019-05-06
```

    #Using the EXISTS condition with a SELECT statement

    #QUERY

```sql
select cust_name, city from customers
where exists (select * from orders where customers.cust_code = orders.cust_id);
```

    #EXPLANATION

    The above query fetches the name and city of the customers who have placed orders, implemented using the EXISTS condition.

    #Using the NOT EXISTS condition with a SELECT statement

    #QUERY

```sql
select cust_name, city from customers
where not exists (select * from orders where customers.cust_code = orders.cust_id);
```

    #EXPLANATION

    The above query fetches the name and city of the customers who have not placed orders, implemented using the NOT EXISTS condition.

    #Using the EXISTS condition with a DELETE statement

    #QUERY

```sql
Delete from orders
where exists (select * from customers
              where customers.cust_code = orders.cust_id AND customers.cust_name='Kishan');
```

    #EXPLANATION

    The above query deletes from the orders table the order of the customer whose name is Kishan in the customers table.

    #Using the EXISTS condition with an UPDATE statement

    #QUERY

```sql
Update customers set cust_name='Pranav'
where exists (select * from orders where customers.cust_code = orders.cust_id);
```

    #EXPLANATION

    The above query updates the name of every customer that has a matching row in the orders table.

    #NOTE

    SQL statements that use the EXISTS condition can be inefficient, since the subquery may be rerun for every row in the outer query's table.

    Finally, SQL EXISTS OPERATOR Tutorial With Example is over. The post SQL EXISTS OPERATOR Tutorial With Example appeared first on AppDividend.
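    Since the original output screenshots did not survive, here is a self-contained re-creation of the two SELECT examples using Python's sqlite3 module (SQLite's EXISTS semantics match the standard behaviour described above; table and column names follow the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cust_code INT, cust_name TEXT, city TEXT, grade INT, amount INT);
CREATE TABLE orders (o_id INT, cust_id INT, order_date TEXT);
INSERT INTO customers VALUES
  (101,'SHUBH','KOLKATA',1,10000),  (201,'SOURAV','KOLKATA',1,10000),
  (301,'KISHAN','PATNA',2,20000),   (401,'DILEEP','ALLAHABAD',2,30000),
  (501,'SHRUTI','PATNA',3,40000),   (601,'SHOUVIK','JALANDHAR',4,20000),
  (701,'ROHIT','PATNA',2,40000),    (801,'ROUNAK',NULL,5,10000);
INSERT INTO orders VALUES
  (1,201,'2019-02-06'), (2,301,'2019-03-06'),
  (3,302,'2019-04-06'), (4,403,'2019-05-06');
""")

# Customers who have placed orders (EXISTS):
with_orders = conn.execute("""
    SELECT cust_name, city FROM customers
    WHERE EXISTS (SELECT * FROM orders WHERE customers.cust_code = orders.cust_id)
""").fetchall()
print(sorted(with_orders))  # [('KISHAN', 'PATNA'), ('SOURAV', 'KOLKATA')]

# Customers who have not placed orders (NOT EXISTS):
without_orders = conn.execute("""
    SELECT cust_name, city FROM customers
    WHERE NOT EXISTS (SELECT * FROM orders WHERE customers.cust_code = orders.cust_id)
""").fetchall()
print(len(without_orders))  # 6
```

    Note that orders 3 and 4 reference cust_id values (302, 403) with no matching customer, so only SOURAV and KISHAN satisfy the EXISTS condition.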

  • Adaptive Hash Index on AWS Aurora
    Recently I had a case where queries against an Aurora Reader node were 2-3 times slower than on the Writer node. In this blog post, we are going to discuss why. I am not going to go into the details of how Aurora works, as there are other blog posts discussing that; here I am only going to focus on one part.

    The Problem

    My customer reported a huge performance difference between the Reader and the Writer node just from running selects. I was a bit surprised: the select queries should run locally on the Reader node, the dataset could easily fit in memory, there were no reads at the disk level, and everything looked fine. I was trying to rule out every option when one of my colleagues suggested I have a look at innodb_adaptive_hash_index. He was right – it was disabled on the Reader nodes; I could see it from the MySQL console.

    Let's enable the adaptive hash index

    I opened the AWS control panel and checked the parameter groups, but the adaptive hash index was already enabled there. I thought I might have made a mistake, but I double-checked many times and it was true: adaptive hash was disabled according to the MySQL console but enabled in the control panel. That means the AWS control panel is lying! I restarted the nodes multiple times and created new test clusters, but I was not able to enable the adaptive hash index on the Reader node. It was enabled on the Writer node, and it was working.

    Is this causing the performance difference?

    Because I was able to enable or disable the adaptive hash index on the Writer node, I continued my tests there, and I could confirm that when I disabled it the queries got slower – the same speed as on the Reader node. When I enabled AHI, the queries got faster. In general, with AHI on the Writer node, the customer's queries ran 2 times faster. AHI can help for many workloads but not all of them, so you should test your queries/workload both with and without AHI.

    Why is it disabled on the Reader?
    I have to be honest: I am not an AWS engineer and I do not know the Aurora code, so I am only guessing here and I might be wrong.

    Why can I change it in the parameter group?

    We can modify the adaptive hash setting in the parameter groups, but it has no impact on the Reader nodes at all. Many customers could think they have AHI enabled when actually they don't. I think this is bad practice: if it cannot be enabled on the Reader nodes, we should not be able to change it in the control panel.

    Is this causing any performance problems for me?

    If you are using the Reader node for select queries that are based on secondary keys, you are probably affected, but whether it impacts your performance depends on your workload. In my customer's case, the queries were 2 times slower without AHI.

    But I want fast queries!

    If your queries heavily benefit from AHI, you should run them on the Writer node or even on an async slave, or have a look at AWS RDS, which does not have this limitation, or use EC2 instances. You could also check the query cache in Aurora.

    Query Cache

    In Aurora, the Query Cache was reworked so that it does not have the limitation found in Community Edition or in Percona Server, where cacheable queries take out an "exclusive lock" on MySQL's query cache. In the real world, that means only one query can use the Query Cache at a time, and all the other queries have to wait for the mutex; in MySQL 8.0, the Query Cache was removed completely. In Aurora they redesigned it and removed this limitation – there is no single global mutex on the Query Cache anymore. I think one of the reasons for this could be that they knew Adaptive Hash would not work.

    Does AWS know about this?

    I created a ticket with the AWS engineers to get feedback on this, and they verified my findings and confirmed that the Adaptive Hash Index cannot be enabled on the Reader nodes. They are looking into why we can modify it in the control panel.
    Conclusion

    I would recommend checking your queries on your Reader nodes to make sure they perform well, and comparing the performance with the Writer node. At this moment we cannot enable AHI on Reader nodes, and I am not sure if that will change any time soon; in some cases this can certainly impact performance.
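    To see the value a node is actually using (as opposed to what the parameter group claims), the standard MySQL check is to query the variable directly on each endpoint; this is ordinary MySQL syntax, not Aurora-specific:

```sql
-- Run on both the Writer and each Reader endpoint and compare:
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';
-- Alternatively:
SELECT @@GLOBAL.innodb_adaptive_hash_index;
```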