tel. 883 59 39 89 | email: kontakt@najlepszerolety.pl | OPEN MON-FRI 9-17

NajlepszeRolety.PL - roller blinds - venetian blinds - mosquito nets

TEL. 883 59 39 89 / UL. MONIUSZKI 54 - MYSŁOWICE

RSS News

Planet MySQL

Planet MySQL - https://planet.mysql.com
  • Proxy MySQL :: HAproxy || ProxySQL & KeepAlived
    So when it comes to routing your MySQL traffic, several options exist:
    HAproxy, MariaDB MaxScale, NGINX, MySQL Router and ProxySQL. You can even
    still get MySQL Proxy if you want it, but it is EOL.

    I have seen HAproxy used more often with clients; it is pretty
    straightforward to set up. Percona has an example for those interested:
    https://www.percona.com/doc/percona-xtradb-cluster/LATEST/howtos/haproxy.html

    Personally I like ProxySQL. Percona has a few blogs on this as well:
    https://github.com/sysown/proxysql/wiki/ProxySQL-Configuration
    https://www.percona.com/blog/2017/01/19/setup-proxysql-for-high-availability-not-single-point-failure/
    https://www.percona.com/blog/2017/01/25/proxysql-admin-configuration/
    https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/

    Percona also has a ProxySQL build available:
    https://www.percona.com/downloads/proxysql/

    I was thinking I would write up some examples, but overall Percona has
    explained it all very well. I do not want to take anything away from those
    posts; instead I want to point out that a lot of good information is
    available via those URLs. So rather than rewriting what has already been
    written, I will collect that information here for those interested.

    First, compare and decide for yourself what you need and want. The
    following link is of course biased towards ProxySQL, but it gives you an
    overall scope to consider: http://www.proxysql.com/compare

    If you have a cluster or a master-to-master setup and you do not care which
    server the writes vs. reads go to, as long as you have a connection, then
    HAproxy is likely a simple, fast setup for you. The bonus with ProxySQL is
    the ability to sort traffic in a weighted fashion, easily: writes can go to
    node 1, while SELECTs pull from node 2 and node 3.
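    A read/write split like the one just described can be sketched with a few
    query rules on the ProxySQL admin interface. This is only a sketch: the
    hostgroup numbers, rule ids and the 'app' user are assumptions for
    illustration, not values from any of the posts linked here.

```sql
-- Run against the ProxySQL admin interface (port 6032 by default).
-- Writes (and anything not matched below) go to hostgroup 1 (node 1).
UPDATE mysql_users SET default_hostgroup = 1 WHERE username = 'app';
LOAD MYSQL USERS TO RUNTIME;

-- SELECT ... FOR UPDATE must stay on the writer; lower rule_id wins first.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (5, 1, '^SELECT.*FOR UPDATE$', 1, 1);

-- All other SELECTs go to the reader hostgroup 2 (nodes 2 and 3).
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT ', 2, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

    Rules are evaluated in rule_id order, which is why the FOR UPDATE rule is
    given a lower id than the generic SELECT rule.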
    Documentation on this can be found here:
    https://github.com/sysown/proxysql/wiki/ProxySQL-Read-Write-Split-(HOWTO)

    Yes, it can be done with HAproxy too, but you have to instruct the
    application accordingly:
    https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial
    In ProxySQL it is handled by your query rules:
    https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules
    https://github.com/sysown/proxysql/wiki/ProxySQL-Configuration#mysql-query-rules

    Now the obvious question: how do you keep ProxySQL from becoming the single
    point of failure? You can invest in a robust load balancer and toss
    hardware at it... or make it easy on yourself, support open source, and use
    KeepAlived. This is very easy to set up, and all of it is again well
    documented here:
    https://www.percona.com/blog/2017/01/19/setup-proxysql-for-high-availability-not-single-point-failure/
    http://www.keepalived.org/doc/

    To be fair, here is an example for KeepAlived and HAproxy as well:
    https://andyleonard.com/2011/02/01/haproxy-and-keepalived-example-configuration/

    If you ever dealt with Lua and mysql-proxy, ProxySQL and KeepAlived should
    be very simple for you. If you still want mysql-proxy for some reason:
    https://launchpad.net/mysql-proxy

    Regardless of whether you choose HAproxy, ProxySQL or another solution, you
    need to ensure you do not replace one single point of failure with another,
    and KeepAlived is great for that. So there is little reason not to do this
    if you are using a proxy.

    A few more things on ProxySQL. If you track the hosts that connect to your
    database via your reporting or monitoring, realize those IPs or hostnames
    are now going to be the proxy server's. What about all the users you
    already have in MySQL? Can you migrate them to ProxySQL? Yes you can. It
    takes a few steps, but it is doable.
    Here is an example of this:
    https://dba.stackexchange.com/questions/164705/how-to-easily-bring-80-mysql-users-into-proxysql

    Make sure you understand the multi-layer configuration system, and save
    your info to disk!
    https://github.com/sysown/proxysql/wiki/Main-(runtime)#runtime-tables
    https://github.com/sysown/proxysql/wiki/Main-(runtime)#disk-database

    Can ProxySQL run on the MySQL default port 3306? Yes: edit the
    mysql-interfaces variable.

    Keep your max_connections in mind. If you have max_connections in MySQL
    set to 500, then that is of course your limit for standard users. With
    ProxySQL you can now spread users across the system and set a maximum per
    node. To help ensure you do not hit 500 connections, set
    mysql-max_connections a little bit lower than the MySQL value.

    Take advantage of the Monitor Module and the stats tables: know what is
    going on with your proxy and traffic. Take advantage of query caching if
    applicable for your application.
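    The connection-cap and persistence advice above can be sketched on the
    admin interface like this (a sketch only; the value 450 is an assumed bit
    of headroom below a backend max_connections of 500):

```sql
-- Run against the ProxySQL admin interface.
-- Keep ProxySQL's frontend connection total below MySQL's max_connections.
UPDATE global_variables SET variable_value = '450'
 WHERE variable_name = 'mysql-max_connections';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

-- Users imported from MySQL live in the in-memory layer until persisted;
-- without this they are gone after a restart.
SAVE MYSQL USERS TO DISK;
```

    The LOAD ... TO RUNTIME / SAVE ... TO DISK pair is the multi-layer
    configuration system in action: memory, runtime and disk are separate
    layers, and changes must be promoted explicitly.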

  • A Look into MariaDB Auditing for GDPR Compliance
    A Look into MariaDB Auditing for GDPR Compliance maria-luisaraviol Wed, 05/23/2018 - 18:27

    When we talk about database auditing, what we are focused on is tracking
    the use of database records and monitoring each operation on the data. The
    goal of auditing is to provide a clear and reliable answer to the typical
    four W questions: Who accessed the database, When did it happen, What was
    touched, and Where did the access come from? Auditing should also help the
    security team answer the fifth W: Why did this happen?

    Auditing is also a very important task when we want to monitor database
    activity to collect information that can help increase database
    performance or debug the application. When we talk about security,
    accountability and regulatory compliance, database auditing plays an even
    more critical role. Auditing is key in achieving accountability, as it
    allows us to investigate malicious or suspicious database activities. It's
    used to help DBAs detect excessive user privileges or suspicious activity
    coming from specific connections.

    In particular, the new European Union General Data Protection Regulation
    (GDPR) says that it will be important to be able to provide details of
    changes to personal data, to demonstrate that data protection and security
    procedures are effective and are being followed. Furthermore, we must
    ensure that data is only accessed by appropriate parties. This means that
    we need to be able to say who changed an item of data and when they
    changed it.

    It's broader than GDPR. HIPAA (the Health Insurance Portability and
    Accountability Act) requires healthcare providers to deliver audit trails
    about anyone and everyone who touches any data in their records, down to
    the row and record level.
    Furthermore, if a data breach occurs, organizations must disclose full
    information on these events to their local data protection authority (DPA)
    and to all customers concerned by the breach within 72 hours, so they can
    respond accordingly.

    MariaDB Audit Plugin

    For all these reasons MariaDB has included the Audit Plugin since version
    10.0.10 of MariaDB Server. The purpose of the MariaDB Audit Plugin is to
    log the server's activity: for each client session, it records who
    connected to the server (i.e., user name and host), what queries were
    executed, which tables were accessed and which server variables were
    changed. Events logged by the MariaDB Audit Plugin are grouped into three
    different types: CONNECT, QUERY and TABLE events. There are actually more
    types of events to allow fine-tuning of the audit and to focus on just the
    events and statements relevant for a specific organisation. These are
    detailed on the Log Settings page. There are also several system variables
    to configure the MariaDB Audit Plugin; the Server Audit Status Variables
    page lists all variables relevant to reviewing the status of the auditing.
    Your overall monitoring should include an alert to verify that auditing is
    active.

    This information is stored in a rotating log file, or it may be sent to
    the local syslog. For security reasons, it's sometimes recommended to use
    the system logs instead of a local file: in this case the value of
    server_audit_output_type needs to be set to syslog. It is also possible to
    set up even more advanced and secure solutions, such as using a remote
    syslog service (read more about the MariaDB Audit Plugin and setting up
    rsyslog).

    What does the MariaDB audit log file look like? The audit log file is a
    set of rows in plain text format, written as a list of comma-separated
    fields to a file.
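    Before looking at the file format, here is how loading and enabling the
    plugin typically looks. A sketch only; the variable names are from the
    server_audit documentation, and the event list shown is one reasonable
    choice, not a recommendation from this post.

```sql
-- MariaDB >= 10.0.10: load the Audit Plugin and switch auditing on.
INSTALL SONAME 'server_audit';
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
SET GLOBAL server_audit_output_type = 'file';  -- or 'syslog'
SET GLOBAL server_audit_logging = ON;

-- Worth wiring into monitoring: confirm auditing is actually active.
SHOW GLOBAL STATUS LIKE 'server_audit%';
```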
    The general format for logging to the plugin's own file is:

    [timestamp],[serverhost],[username],[host],[connectionid],[queryid],[operation],[database],[object],[retcode]

    If the log file is sent to syslog, the format is slightly different, as
    syslog has its own standard format (refer to the MariaDB Audit Plugin Log
    Format page for the details). A typical MariaDB Audit Plugin log file
    looks like this:

    # tail mlr_Test_audit.log
    20180421 09:22:38,mlr_Test,root,localhost,22,0,CONNECT,,,0
    20180421 09:22:42,mlr_Test,root,localhost,22,35,QUERY,,'CREATE USER IF NOT EXISTS \'mlr\'@\'%\' IDENTIFIED WITH \'mysql_native_password\' AS \'*F44445443BB93ED07F5FAB7744B2FCE47021238F\'',0
    20180421 09:22:42,mlr_Test,root,localhost,22,36,QUERY,,'drop user if exists mlr',0
    20180421 09:22:45,mlr_Test,root,localhost,22,0,DISCONNECT,,,0
    20180421 09:25:29,mlr_Test,root,localhost,20,0,FAILED_CONNECT,,,1045
    20180421 09:25:44,mlr_Test,root,localhost,43,133,WRITE,employees,salaries,
    20180421 09:25:44,mlr_Test,root,localhost,43,133,QUERY,employees,'DELETE FROM salaries LIMIT 100',0

    Audit File Analysis

    Log files are a great source of information, but only if you have a system
    in place to consistently review the data. The way you shape your
    application and database environment also matters. To get useful auditing,
    for example, it's recommended that every human user has their own account.
    Furthermore, from the application's standpoint, if applications use
    application-based accounts rather than native DB accounts, each
    application accessing the same server should have its own
    "application user". As we said before, you have to use the information
    collected and analyse it on a regular basis, and when needed, take
    immediate action based on those logged events. However, even small
    environments can generate a lot of information to be analysed manually.
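    Since even a small environment produces more log than anyone can review by
    eye, a one-liner can start the analysis. A sketch, using the file name and
    field order from the example above:

```shell
# Count FAILED_CONNECT events per user in a MariaDB audit log.
# Fields: timestamp,serverhost,username,host,connectionid,queryid,operation,...
# (QUERY lines may contain commas inside the quoted SQL, but fields 1-7
# come before the query text, so $3 and $7 are safe to split on ','.)
awk -F',' '$7 == "FAILED_CONNECT" { fails[$3]++ }
           END { for (u in fails) print u, fails[u] }' mlr_Test_audit.log
```

    The same pattern works for WRITE events per table or QUERY counts per
    host; anything beyond that is where a tool that parses the log for you
    starts to pay off.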
    Starting with its most recent release, 8.5, Monyog, the monitoring tool
    included with the MariaDB TX and MariaDB AX subscriptions, added a very
    interesting feature for MariaDB: the Audit Log. This feature parses the
    audit log maintained by MariaDB Server and displays the content in a clean
    tabular format. Monyog accesses the audit log file the same way it does
    other MariaDB log files, including the Slow Query, General Query and Error
    logs. Through the Monyog interface you can select the server and the time
    frame for which you want to see the audit log. Then, clicking on "SHOW
    AUDIT LOG" fetches the contents of the log. The limit on the number of
    rows that can be fetched in one time frame is 10,000. The summary view
    gives you a quick percentage breakdown of the audit log: Failed Logins,
    Failed Events, Schema Changes, Data Changes and Stored Procedures. All
    these legends are clickable and show the corresponding audit log entries.
    Furthermore, you can use the filter option to fetch audit log entries
    based on Username, Host, Operation, Database and Table/Query.

  • Configuring MySQL in a Docker Container
    In recent weeks I've been focusing on Docker in order to get a much better
    understanding of the containerized world that is materializing in front of
    us. Containers aren't just for stateless applications anymore, and we're
    seeing more cases where MySQL and other databases are being launched in a
    containerized fashion, so it's important to know how to configure your
    MySQL container!

    On Docker Hub, you will see an option for this: doing a volume mount from
    the Docker host to the container on /etc/mysql/conf.d. But the problem is
    that the container image you're using may not have an !includedir
    referencing the conf.d directory, much like the latest version of MySQL
    Community, as you will see below.

    [root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
    [root@centos7-1 ~]# docker exec -it mysql1 cat /etc/my.cnf | grep -i include
    [root@centos7-1 ~]#

    This means that if you use the prescribed method of placing a config file
    in /etc/mysql/conf.d in the container, it's not going to be read and will
    have no impact on the configuration of the underlying MySQL instance. You
    might think that the next step would be to attach to the container, modify
    the my.cnf file (after installing a text editor) and add the !includedir
    to your my.cnf, but this goes against the Docker / containerization
    philosophy. You should be able to just launch a container with the
    appropriate arguments and be off to fight the universe's data battles. So
    in this case I would propose the following workaround: instead of using
    /etc/mysql/conf.d, we can look at the MySQL option file reference and
    realize there is more than one place we can put a config file. In fact,
    the next place MySQL looks for configuration is /etc/mysql/my.cnf, and if
    we check our recently deployed container, we'll see that /etc/mysql isn't
    used.
    [root@centos7-1 ~]# docker exec -it mysql1 ls /etc/mysql
    ls: cannot access /etc/mysql: No such file or directory

    We can mount a volume with a my.cnf file to this directory on the
    container, and it should pick up whatever configuration we supply, as
    demonstrated below.

    [root@centos7-1 ~]# docker stop mysql1
    mysql1
    [root@centos7-1 ~]# docker rm mysql1
    mysql1
    [root@centos7-1 ~]# cat /mysqlcnf/mysql1/my.cnf
    [mysqld]
    server-id=123
    [root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 -v /mysqlcnf/mysql1:/etc/mysql --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
    d5d980ee01d5b4707f3a7ef5dd30df1d780cdfa35b14ad22ff436fb02560be1b
    [root@centos7-1 ~]# docker exec -it mysql1 cat /etc/mysql/my.cnf
    [mysqld]
    server-id=123
    [root@centos7-1 ~]# docker exec -it mysql1 mysql -u root -ppassword -e "show global variables like 'server_id'"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 123   |
    +---------------+-------+
    [root@centos7-1 ~]#

    Another option is overriding the my.cnf file in /etc/ with our own
    version. You can do this with a mount, as noted in the MySQL reference for
    Persisting Data and Configuration Changes, but in that case you will be
    overwriting other items that might be included in the my.cnf as part of
    the Docker build. This may or may not be your intention, depending on how
    you want to deploy your containers.

    Conclusion

    Be aware of the container image you're using and what configuration
    options are available to you. Some forks will include an !includedir
    reference to /etc/mysql/conf.d; some won't. You may want to overwrite the
    entire my.cnf file by volume mounting to a copy of the my.cnf on the
    Docker host. Or you may just want to supplement the configuration with a
    second configuration file in /etc/mysql.
The important things are to test, to make sure your configuration is properly read by the mysql container, and to establish confidence in the configuration method used before deploying in your environment.

  • Happy Birthday MySQL 1995
    Happy Birthday MySQL! It turned 23 today!

  • When Your JSON Key is Numeric
    There was an interesting question on Stackoverflow.com on extracting
    values from a JSON data type column in a MySQL database. What caught my
    eye was that the keys of the key/value pairs were numeric. In particular,
    the author of the question only wanted values for the key named 74. The
    sample data was fairly simple:

    { "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }

    I thought SELECT JSON_EXTRACT(column, '$.74') FROM table; should work, but
    it did not: there was a complaint about an invalid path expression. It
    turns out that you need to make the second argument to the function
    '$."74"', i.e. SELECT JSON_EXTRACT(column, '$."74"') FROM table; File this
    under something to remember for later. :-)
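    To see both behaviors side by side, here is a sketch against a
    hypothetical table fruit(data JSON) holding the sample document; the table
    and column names are mine, not from the question.

```sql
-- Assumed setup, using the sample document from the post:
CREATE TABLE fruit (data JSON);
INSERT INTO fruit VALUES ('{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }');

-- Quoting the numeric key in the path works:
SELECT JSON_EXTRACT(data, '$."74"') FROM fruit;   -- returns "Kiwi"
SELECT data->>'$."74"' FROM fruit;                -- unquoting shorthand: Kiwi

-- The unquoted form fails with an "Invalid JSON path expression" error:
-- SELECT JSON_EXTRACT(data, '$.74') FROM fruit;
```

    The reason is that an unquoted path leg must be a valid identifier, and
    identifiers cannot start with a digit, so numeric keys always need the
    double quotes.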