2016 November 30

Feed Reader Bot in MySQL
Planet MySQL
Emea Webinar-Galera Cluster® Best Practices for DBAs and DevOps Part 2: Taking Full Advantage of Multi-Master
Description: This webinar will be the second in our series on best practices to follow when using Galera Cluster. In this part, we will discuss important topics related to multi-master setups:
Practical considerations when using Galera in a multi-master setup
Evaluating the characteristics of your database workload
Preparing your application for multi-master
Detecting and dealing with transaction conflicts
The webinar will conclude with a Q&A session where you can ask any questions you may have about Galera Cluster.
Time: 11-12 AM EEST (Eastern Europe Standard Time), 13th of December
Speakers: Philip Stoev, Quality and Release Manager, Codership; Sakari Keskitalo, COO, Codership
REGISTER TO THE EMEA TIMEZONE WEBINAR
Source

Feed Reader Bot in MySQL
Planet MySQL
USA Webinar-Galera Cluster® Best Practices for DBAs and DevOps Part 2: Taking Full Advantage of Multi-Master
Description: This webinar will be the second in our series on best practices to follow when using Galera Cluster. In this part, we will discuss important topics related to multi-master setups:
Practical considerations when using Galera in a multi-master setup
Evaluating the characteristics of your database workload
Preparing your application for multi-master
Detecting and dealing with transaction conflicts
The webinar will conclude with a Q&A session where you can ask any questions you may have about Galera Cluster.
Time: 9-10 AM PST (Pacific time zone), 13th of December
Speakers: Philip Stoev, Quality and Release Manager, Codership; Sakari Keskitalo, COO, Codership
REGISTER TO USA TIME ZONE WEBINAR
Source
2016 December 01

Feed Reader Bot in MySQL
Planet MySQL
Using the InnoDB Buffer Pool Pre-Load Feature in MySQL 5.7
In this blog post, I’ll discuss how to use the InnoDB buffer pool pre-load feature in MySQL 5.7.
Starting with MySQL 5.6, you can configure MySQL to save the contents of your InnoDB buffer pool and load it on startup. Starting in MySQL 5.7, this is the default behavior. Without any special effort, MySQL saves and restores a portion of the buffer pool in the default configuration. We made a similar feature available in Percona Server 5.5, so the concept has been around for quite a while.
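For reference, the two switches behind this behavior can be checked from the client (a quick sketch of mine, not from the original post; both default to ON in 5.7):
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_dump_at_shutdown';
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_load_at_startup';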
Frankly, time has reduced the need for this feature. Five years ago, we would typically store databases on spinning disks. These disks often took quite a long time to warm up with normal database workloads, which could lead to many hours of poor performance after a restart. With the rise of SSDs, warmup happens faster and the penalty for not having data in the buffer pool is smaller. Typically, a system reaches 90% of its fully warmed-up performance in 10 minutes or less. But since it takes virtually no effort to use, saving the contents of the InnoDB buffer pool is a great feature to enable by default.
This blog post looks into some issues with this feature that might not be totally obvious from its name or documentation.
#1 
By default, MySQL only saves 25% of the most actively accessed pages (by the LRU) in the InnoDB buffer pool (not the whole buffer pool).
This is a reasonable choice for many use cases: it saves the most valuable pages, which can then be loaded faster than if you tried to load every page in the buffer pool (many of which might not be relevant for the continuing workload).
You can change this number by setting the innodb_buffer_pool_dump_pct variable. If you’re using InnoDB essentially as an in-memory database, and want to ensure all data is memory resident and can be accessed without a disk read, set it to 100.
Note that this variable is based on the actual amount of data present in memory, not on the buffer pool size. For example, if you have a 100GB buffer pool but it only contains 10GB of data, by default only 25% of 10GB (2.5GB) gets saved. (As the manual explains, the dump takes far less space on disk than that, as only page identifiers are stored, not full page contents.)
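For illustration, a minimal sketch (mine, not from the post); the variable is dynamic, so no restart is needed, but add it to my.cnf to make it persistent:
mysql> SET GLOBAL innodb_buffer_pool_dump_pct = 100;  -- save page IDs for 100% of cached pages
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_dump_pct';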
#2
MySQL starts and becomes accessible through the network before the buffer pool load on startup is complete. Immediately after startup, a lot of resources are used to fetch the buffer pool contents from disk as quickly as possible, possibly affecting performance. If you have multiple MySQL nodes – like using MySQL Replication or running Percona XtraDB Cluster – you might consider bringing them back for production traffic only after the buffer pool load operation completes. You can monitor the buffer pool load progress by watching the GLOBAL STATUS variable:
Buffer pool load is in progress:
| Innodb_buffer_pool_load_status          | Loaded 403457/419487 pages         |
Buffer pool load is complete:
| Innodb_buffer_pool_load_status          | Buffer pool(s) load completed at 161123  9:18:57 |
As a side note, it would be great if MySQL provided a clearer concept of the "state" of a node: being UP and being READY to serve traffic in an optimal way are often not the same.
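If the load competes with production I/O, it can also be stopped; a small sketch (my addition, not from the post):
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';
mysql> SET GLOBAL innodb_buffer_pool_load_abort = ON;  -- aborts a load in progress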
#3
InnoDB’s buffer pool preload is not very efficient, at least with fast storage. In my test environment, with rather capable NVMe storage, I get a warmup rate of more than 400MB/sec when running a read-only sysbench workload, while InnoDB’s buffer pool preload warms up at only around 100MB/sec. I would guess the problem is that it doesn’t issue as many parallel IO requests as SSD storage needs to perform optimally, but I did not investigate further.
#4
InnoDB buffer pool save/restore only stores the buffer pool contents on a clean shutdown. If the server crashes, MySQL still does a buffer pool preload, but with the content information saved at the last clean shutdown (stored in the ib_buffer_pool file). This might end up wasting time loading data that is not relevant to the current workload. Periodically running the following ensures a fresh set of pages is available for a quick warmup, even if MySQL crashed:
SET GLOBAL innodb_buffer_pool_dump_now=ON;
This preserves the current list of buffer pool pages.
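One way to schedule that dump from inside the server is the Event Scheduler; a minimal sketch (my addition, assuming event_scheduler=ON and a definer with the SUPER privilege; a cron job works just as well):
mysql> CREATE EVENT buffer_pool_dump_hourly
    -> ON SCHEDULE EVERY 1 HOUR
    -> DO SET GLOBAL innodb_buffer_pool_dump_now = ON;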
Note that while you (hopefully) do not see your MySQL crash that often, the same issue exists with backups, MySQL slave cloning with Percona XtraBackup, or LVM snapshots, all of which become less efficient as a result.
I hope the observations in this blog help you put this feature to better use!
Source

Feed Reader Bot in MySQL
Planet MySQL
Galera Cache (gcache) is finally recoverable on restart
This post describes how to recover Galera Cache (or gcache) on restart.
Recently Codership introduced (with Galera 3.19) a very important and long-awaited feature: users can now recover the Galera cache on restart.
Need
If you gracefully shut down cluster nodes one after another, with some lag time between nodes, then the last node to shut down holds the latest data. The next time you restart the cluster, that last node should be the first one to boot. Any follow-up nodes that join the cluster after the first node will demand an SST.
Why SST, when these nodes already have the data and only a few write-sets are missing? The DONOR node caches missing write-sets in the Galera cache, but on restart this cache is wiped clean and started fresh. So the DONOR node doesn’t have a Galera cache to donate missing write-sets from.
This painful setup made it necessary for users to think and plan before gracefully taking down the cluster. With the introduction of this new feature, the user can retain the Galera cache across restarts.
How does this help ?
On restart, the node will revive the galera-cache. This means the node can act as a DONOR and service missing write-sets (facilitating IST instead of SST). This behavior is controlled by the option gcache.recover=yes/no. The default is no (the Galera cache is not retained across restarts). The user can set this option on all nodes, or on selected nodes, based on disk usage.
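Since gcache.recover is a Galera provider option, it is passed through wsrep_provider_options; a minimal my.cnf sketch (the gcache.size value is only an illustrative assumption):
[mysqld]
wsrep_provider_options="gcache.recover=yes;gcache.size=1G"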
gcache.recover in action
The example below demonstrates how to use this option:
Let’s say the user has a three node cluster (n1, n2, n3), with all in sync.
The user gracefully shuts down n2 and n3.
n1 is still up and running, and processes some workload, so now n1 has the latest data.
n1 is eventually shut down.
Now the user decides to restart the cluster. Obviously, the user needs to start n1 first, followed by n2/n3.
n1 boots up, forming a new cluster.
n2 boots up, joins the cluster, finds there are missing write-sets and demands IST, but given that n1 doesn’t have a gcache, it falls back to SST.
n2 (JOINER node log):
2016-11-18 13:11:06 3277 [Note] WSREP: State transfer required:
Group state: 839028c7-ad61-11e6-9055-fe766a1886c3:4680
Local state: 839028c7-ad61-11e6-9055-fe766a1886c3:3893
n1 (DONOR node log), gcache.recover=no:
2016-11-18 13:11:06 3245 [Note] WSREP: IST request: 839028c7-ad61-11e6-9055-fe766a1886c3:3893-4680|tcp://192.168.1.3:5031
2016-11-18 13:11:06 3245 [Note] WSREP: IST first seqno 3894 not found from cache, falling back to SST
Now let’s re-execute this scenario with gcache.recover=yes.
n2 (JOINER node log):
2016-11-18 13:24:38 4603 [Note] WSREP: State transfer required:
Group state: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:1495
Local state: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:769
....
2016-11-18 13:24:41 4603 [Note] WSREP: Receiving IST: 726 writesets, seqnos 769-1495
....
2016-11-18 13:24:49 4603 [Note] WSREP: IST received: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:1495
n1 (DONOR node log):
2016-11-18 13:24:38 4573 [Note] WSREP: IST request: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:769-1495|tcp://192.168.1.3:5031
2016-11-18 13:24:38 4573 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
You can also validate this by checking the lowest write-set available in the gcache on the DONOR node:
mysql> show status like 'wsrep_local_cached_downto';
+---------------------------+-------+
| Variable_name | Value |
+---------------------------+-------+
| wsrep_local_cached_downto | 1 |
+---------------------------+-------+
1 row in set (0.00 sec)
So as you can see, gcache.recover could restore the cache on restart and help service IST instead of SST. This is a major resource saver for most graceful shutdowns.
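For completeness, the joiner side of the comparison can be checked as well (my sketch; IST is possible when the donor's wsrep_local_cached_downto is no higher than the joiner's last applied seqno plus one):
mysql> show status like 'wsrep_last_committed';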
gcache revive doesn’t work if . . .
If gcache pages are involved. Gcache pages are still removed on shutdown, and the gcache write-set until that point also gets cleared.
Again, let’s look at an example:
Let’s assume the same configuration and workflow as mentioned above. We will just change the workload pattern.
n1, n2, n3 are in sync and an average-size workload is executed, such that the write-set fits in the gcache. (seqno=1-x)
n2 and n3 are shut down.
n1 continues to operate and executes some average-size workload, followed by a huge transaction that results in the creation of a gcache page. (1-x-a-b-c-h) [h represents the transaction seqno]
Now n1 is shut down. During shutdown, gcache pages are purged (irrespective of the keep_page_sizes setting).
The purge ensures that all the write-sets with seqnos smaller than the write-set residing in the gcache page are purged, too. This effectively means everything in (1-h) is removed, including (a,b,c).
On restart, even though n1 can revive the gcache, there is nothing to revive, as all the write-sets were purged.
When n2 boots up, it requests IST, but n1 can’t service the missing write-sets (a,b,c,h). This causes SST to take place.
Summing it up
Needless to say, gcache.recover is a much-needed feature, given that it saves the pain of SST. (Thanks, Codership.) It would be good to see whether the feature can be optimized to work with gcache pages.
And yes, Percona XtraDB Cluster inherits this feature in its upcoming release.
Source

Feed Reader Bot in MySQL
Planet MySQL
Pronouncing Database Terms
It is the business of educated people to speak so that no-one may be able to tell in what county their childhood was passed. -- A. Burrell, A Handbook for Teachers in Public Elementary School, 1891
The terms that reveal where a person (mis)spent a DBMS-related childhood are "char", "data", "GIF", "gigabyte", "GUI", "JSON", "query", "schema", "tuple", "_", "`", and "«".
CHAR
(1) Like "Care" because it's short for "Character" (so hard C and most folks say "Character" that way)?
(2) Like "Car" because it's short for "Character" (so hard C and a few folks in the British Isles say it that way and perhaps all other English words ending in consonant + "ar" are pronounced that way)?
(3) Like "Char" (the English word for a type of trout)?
C/C++ programmers say (3); for example, Bjarne Stroustrup of C++ fame says that’s illogical but usual. However, people who have not come to SQL from another programming language may be more likely to go with (2), leading one online voter to exclaim "I've known a lot of people who say 'car' though. (Generally SQL-y people; is this what they teach in DBA classes?)" And Tom Kyte of Oracle fame reportedly says "var-car".
DATA
The Oxford English Dictionary (OED) shows 4 (four!) variations:
"Brit. /ˈdeɪtə/,  /ˈdɑːtə/,  U.S. /ˈdædə/,  /ˈdeɪdə/".
It's only the first syllable that matters -- DAY or DA?
I haven't seen the Longman Pronunciation Dictionary, but a blog post says the results of Longman's preference poll were:
"BrE: deɪtə 92% ˈdɑːtə 6% ˈdætə 2% AmE: ˈdeɪțə 64%ˈdæțə 35% ˈdɑːțə 1%" (notice it's ț not t for our American friends). By the way OED says in a computing context it's "as a mass noun" so I guess "data is" is okay.
GIF
It's "jif",  says its creator.
GIGABYTE
That letter at the start is a hard G. The "Jigabyte" pronunciation is unknown to the Merriam-Webster, Cambridge, and Oxford dictionaries.
GUI
No question it's "gooey", for all the dictionaries I checked. So pronounce our product as "osselot-gooey".
GUID
The author of "Essential COM" says
The exact pronunciation of GUID is a subject of heated debate among COM developers. Although the COM specification says that GUID rhymes with fluid, the author [Don Box] believes that the COM specification is simply incorrect, citing the word languid as setting the precedent.
The COM specification is a standard and therefore cannot be incorrect, but I can't find it, and I like setting-a-precedent games, so let's use the exact word Guid, eh? It appears in Hugh MacDiarmid's masterpiece "A Drunk Man Looks At The Thistle":
But there are flegsome deeps
Where the soul o'Scotland sleeps
That I to bottom need
To wauk Guid kens what deid
.. which proves that Guid is a one-syllable word, though doubtless MacDiarmid pronounced it "Gweed".
JSON
Doug Crockford of Yahoo fame, seen on Youtube, says:
So I discovered JAYsun. Java Script Object Notation. There's a lot of argument about how you pronounce that. I strictly don't care. I think probably the correct pronunciation is [switching to French] "je sens".
The argument is mostly between people who say JAYsun and people who say JaySAWN. It's controversial, and in our non-JSON environment it's a foreign word, so spelling it out J S O N is safe and okay.
QUERY
In the 1600s the spelling was "quaery", so it must have rhymed with "very", and it still does, for some Americans. But the OED says that both American and British speakers say "QUEERie" nowadays.
SCHEMA
It's "Skema". The "Shema" pronunciation is unknown to Merriam-Webster, Cambridge, and Oxford dictionaries.
SQL
See the earlier post "How to pronounce SQL" which concluded:
In the end, then, it's "when in Rome do as the Romans do". In Microsoft or Oracle contexts one should, like Mr Ellison, respect Microsoft's or Oracle's way of speaking. But here in open-source-DBMS-land the preference is to follow the standard.
TUPLE
See the earlier post "Tuples". It's "Tuhple".
_
According to Swan's "Practical English Usage" the _ (Unicode code point 005F) character is more often called underline by Britons, more often called underscore by Americans. (The SQL-standard term is underscore.) (The Unicode term is LOW LINE; SPACING UNDERSCORE was the old Unicode-version-1.0 term.)
` `
This is a clue for telling if people have MySQL backgrounds -- they'll pronounce the ` (Unicode code point 0060) symbol as "backtick". Of course it also is found in other programming contexts nowadays, but there are lots of choices in the Jargon File:
Common: backquote; left quote; left single quote; open quote; ; grave. Rare: Backprime; [backspark]; unapostrophe; birk; blugle; back tick; back glitch; push; ; quasiquote.
By the way The Jargon File is a good source for such whimsical alternatives of ASCII names.
« »
You might be fooled by an Adobe error, as I was, into thinking that these French-quote-mark thingies are pronounced GEELmoes. Wrong. They are GEELmays. (The Unicode term is left-pointing or right-pointing double angle quotation mark.) This matter matters because, as Professor Higgins said, "The French don't care what they do actually, as long as they pronounce it properly."
Meanwhile ...
Enhancements made to the source code for the next version of ocelotgui, Ocelot's Graphical User Interface for MySQL and MariaDB, are: error messages are optionally in French, and grid output is optionally in HTML. As always, the description of the current version is on ocelot.ca and the downloadable source and releases are on github.
Source

Feed Reader Bot in MySQL
Planet MySQL
Debian and MariaDB Server
GNU/Linux distributions matter, and Debian is one of the most popular ones out there in terms of user base. It's an interesting time, as MariaDB Server becomes more divergent from upstream MySQL and people go about choosing the default provider of the database.
MariaDB Server's original goal was to be a drop-in replacement. In fact, this is how it is described ("It is an enhanced, drop-in replacement for MySQL"). We all know that it's becoming increasingly hard for that line to be used these days.
Anyhow, in March 2016 Debian's release team made the decision that going forward, MariaDB Server is what people using Debian Stretch get when they ask for MySQL (i.e., MariaDB Server is the default provider for applications that require port 3306 and a MySQL-like protocol).
All this has brought some interesting bug reports and discussions, so here’s a collection of links that interest me (with decisions that will affect Debian users going forward).
Connectors
MySQL ODBC in Stretch – do follow the thread
[debian-mysql] final decision about MySQL r-deps needed / cleaning up the MySQL mess – yes, the MySQL C++ connector is not the same as the MariaDB Connector/C. And let’s not forget the things that depend on the C++ connector, i.e. libreoffice-mysql-connector. Rene Engelhard started this excellent thread with questions that could do with answers.
MariaDB Server
Don’t include in stretch – bug#837615 – this is about how MariaDB Server 10.0 (note the version – this matters) should be included, but MySQL 5.6 shouldn’t be.
MariaDB 10.1? – note that Otto Kekäläinen, CEO of the MariaDB Foundation, says the plan is to skip MariaDB Server 10.1 and go straight to MariaDB Server 10.2. As of this writing, MariaDB Server 10.2 is in its first beta released 27 Sep 2016, so are we expecting a few more betas before the release candidate? History shows there were four betas for 10.1 and one release candidate, while there were three betas and two release candidates of 10.0. There is no response here as to what is gained from skipping MariaDB Server 10.1, but one can guess that this has to do with support cycles.
default-mysql-client forces removal of mysql-server* and mysql-client* – bug#842011 – bug reporter is a bit hostile towards the package team, but the gist is that “mariadb is NOT a drop-in replacement for mysql.” Users are bound to realise this once Debian Stretch gets more mainstream use.
[debian-mysql] Bug#840855: Bug#840855: mysql-server: MySQL 5.7? – questioning what happens to MySQL 5.7, and this is really a call to action – if you disagree, email the security and release teams now not after Stretch is released! Quoting Clint Byrum, “The release and security teams have decided that MySQL will live only in unstable for stretch due to the perceived complications with tracking security patches in MySQL.”
[debian-mysql] About packages that depend on mysql-* / mariadb  / virtual-mysql-* – in where we find the API-incompatible libmysqlclient, naming conventions, and more.
Source

Feed Reader Bot in MySQL
Planet MySQL
MySQL Connector/Python 2.2.2 m3 Development Release has been released
MySQL Connector/Python 2.2.2 M3 is the third development release of the MySQL Connector Python 2.2 series. This series adds support for the new X DevAPI. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.
To learn more about how to write applications using the X DevAPI, see http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more information about how the X DevAPI is implemented in MySQL Connector/Python, and its usage, see http://dev.mysql.com/doc/dev/connector-python.
Please note that the X DevAPI requires at least MySQL Server version 5.7.12 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see http://dev.mysql.com/doc/refman/5.7/en/document-store.html.
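For reference, enabling the X Plugin on an existing 5.7 server is a one-liner (the .so suffix assumes a Linux build; Windows builds use mysqlx.dll):
mysql> INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';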
To download MySQL Connector/Python 2.2.2 M3, see the “Development Releases” tab at http://dev.mysql.com/downloads/connector/python/
We are working on the GitHub upload; it should be available soon.
Enjoy!
Changes in MySQL Connector/Python 2.2.2 (2016-12-01, Milestone 3)
Functionality Added or Changed
* If the MySQL server is configured to support secure connections, Connector/Python now attempts to establish a secure (encrypted) connection by default, falling back to an unencrypted connection otherwise. This is behavior similar to the --ssl-mode=PREFERRED option supported by MySQL client programs.
The following TLS/SSL options have been implemented for the mysqlx URI schema. All require Python 2.7.9 or higher.
+ ssl-enable: This option enforces SSL connections. If given, a connection attempt must be able to establish a secure connection or the attempt fails.
+ ssl-ca: This option is used to verify the server certificate.
+ ssl-cert, ssl-key: These options are used to pass the client certificate and key, but the server currently does not validate the client using these.
The ssl-enable parameter can be specified in a parameter dictionary or URL, like this:
mysqlx.get_session({"user": "root", "host": "localhost", "port": 33060,
                    "password": "pass", "ssl-enable": True})
Or:
mysqlx.get_session("mysqlx://root:pass@localhost?ssl-enable")
The other parameters are used similarly. In a URL, path name values should be given within parentheses; for example, ssl-cert=(path_name). (Bug #24954646)
* There is now a standard API to create a table: Schema objects have a create_table function. It throws an error if the table exists.
* For any method that takes a value list of parameters for its argument, there is now more flexibility with how the parameters can be specified: Either as a value list or a list of individual parameters. For example, these method calls are the same:
Collection.add([{"a": 27}, {"a": 28}])
Collection.add({"a": 27}, {"a": 28})
* For Schema objects, get_view, create_view, alter_view, and drop_view functions were added to support retrieval, create, alter, and drop operations on View objects.
* Unix domain socket files are now supported as a connection transport. The socket file can be specified in a parameter dictionary or URL, like this:
mysqlx.get_session({"user": "root", "password": "pass",
                    "socket": "/path/to/socket"})
Or:
mysqlx.get_session("mysqlx://user:pass@(/path/to/sock)")
mysqlx.get_session("mysqlx://user:pass@%2Fto%2Fsock/schema")
mysqlx.get_session("mysqlx://user:pass@.%2Fpath%2Fto%2Fsock/schema")
mysqlx.get_session("mysqlx://user:pass@..%2Fpath%2Fto%2Fsock/schema")
Bugs Fixed
* For a user created with REQUIRE SSL, establishing an SSL connection by specifying --ssl-key but not --ssl-ca or --ssl-cert fails for standard MySQL client programs. The same connection configuration was (improperly) permitted in Connector/Python. (Bug #24953032)
* Connection failures due to an improper SSL CA resulted in an uninformative error message. (Bug #24948054)
* Using a schema object to alter a view failed if the view selected from a non-INFORMATION_SCHEMA table and it was altered to select from an INFORMATION_SCHEMA table. (Bug #24947078)
* schema.create_collection() with an empty collection name threw an improper error. (Bug #24520850)
Documentation
——————–
Online: http://dev.mysql.com/doc/connector-python/en/index.html. The source distribution includes the manual in various formats under the docs/ folder.
Reporting Bugs
——————–
We welcome and appreciate your feedback and bug reports: http://bugs.mysql.com/
On Behalf of the MySQL/ORACLE RE Team, Balasubramanian Kandasamy
Source

Feed Reader Bot in MySQL
Planet MySQL
Database Daily Ops Series: GTID Replication and Binary Logs Purge
This blog continues the ongoing series on daily operations and GTID replication.
In this blog, I’m going to investigate why the error below has been appearing in a special environment I’ve been working with over the last few days:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log:
'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the
master has purged binary logs containing GTIDs that the slave requires.'
The error provides the right message and explains what is going on. But sometimes it can be a bit tricky to solve this issue: you need additional information, discovered after some tests and reading. We try to keep Managed Services scripted, in the sense that our advice and best practices are repeatable and consistent. However, some additional features and practices can be added depending on the customer's situation.
Some time ago, one of our customer's database servers presented the above message. At that point, we could see the binary log files on the master in compressed (gzipped) form. Of course, MySQL can't identify a compressed file with a .gz extension as a binary log. We uncompressed the file, but replication presented the same problem – even after uncompressing the file and making sure the UUID of the current master and the TRX_ID were there. Obviously, I needed to investigate what was going on.
After some reading, I re-read the following passage:

When the server starts, the global value of gtid_purged (previously called gtid_lost) is initialized to the set of GTIDs contained by the Previous_gtid_log_event of the oldest binary log. When a binary log is purged, gtid_purged is re-read from the binary log that has now become the oldest one.
=> https://dev.mysql.com/doc/refman/5.6/en/replication-options-gtids.html#sysvar_gtid_purged
That makes me think: if something is compressing binlogs on the master without purging them as expected by the GTID mechanism, the server is not going to be able to re-read the GTIDs that exist on disk. When the slave replication threads restart, or the DBA issues commands like reset slave and reset master (to clean out the accumulated GTID sets in Executed_Gtid_Set from the SHOW SLAVE STATUS command, for example), this error can occur. But if I compress the file:
Will the slave get lost and not find all the needed GTIDs on the master after a reset slave/reset master?
If I purge the logs correctly, using PURGE BINARY LOGS, will the slave be OK when restarting replication threads? (A sketch of that purge follows below.)
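For reference, here is the GTID-friendly purge sketch referred to above (my example; the file name is illustrative). Purging this way lets the server update gtid_purged from the oldest remaining binary log, which compressing files behind MySQL's back does not:
tool01 [(none)]:> PURGE BINARY LOGS TO 'mysqld-bin.000002';
tool01 [(none)]:> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';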
Test 1: Compressing the oldest binary log file on master, restarting slave threads
I would like to test this very methodically. We’ll create one GTID per binary log, and then I will compress the oldest binary log file in order to make it unavailable for the slaves. I’m working with three virtual machines, one master and two slaves. On the second slave, I’m going to run the following sequence: stop slave, reset slave, reset master, start slave, and then, check the results. Let’s see what happens.
On master (tool01):
tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       341 |
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
3 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000001';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000001 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000001 | 120 | Previous_gtids |         1 |         151 |                                                                   |
| mysqld-bin.000001 | 151 | Gtid           |         1 |         199 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:1' |
| mysqld-bin.000001 | 199 | Query          |         1 |         293 | create database wb01                                              |
| mysqld-bin.000001 | 293 | Rotate         |         1 |         341 | mysqld-bin.000002;pos=4                                           |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000002';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000002 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000002 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1                            |
| mysqld-bin.000002 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:2' |
| mysqld-bin.000002 | 239 | Query          |         1 |         333 | create database wb02                                              |
| mysqld-bin.000002 | 333 | Rotate         |         1 |         381 | mysqld-bin.000003;pos=4                                           |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
5 rows in set (0.00 sec)
tool01 [(none)]:> show binlog events in 'mysqld-bin.000003';
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysqld-bin.000003 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             |
| mysqld-bin.000003 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1-2                          |
| mysqld-bin.000003 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:3' |
| mysqld-bin.000003 | 239 | Query          |         1 |         333 | create database wb03                                              |
+-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
4 rows in set (0.00 sec)
Here we see that each existing binary log file has just one transaction. That will make it easier to compress the oldest binary log and so make part of the existing GTIDs disappear. When the slave connects to a master, it first sends its Executed_Gtid_Set, and then the master sends all the missing IDs to the slave. As Stephane Combaudon said, we will force it to happen! Both slave database servers are currently in the same position:
tool02 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000003
          Read_Master_Log_Pos: 333
               Relay_Log_File: mysqld-relay-bin.000006
                Relay_Log_Pos: 545
        Relay_Master_Log_File: mysqld-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            ...
           Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000003
          Read_Master_Log_Pos: 333
               Relay_Log_File: mysqld-relay-bin.000008
                Relay_Log_Pos: 451
        Relay_Master_Log_File: mysqld-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
           Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3
            Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3

Now, we’ll compress the oldest binary log on the master:

[root@tool01 mysql]# ls -lh | grep mysqld-bin.
-rw-rw---- 1 mysql mysql  262 Nov 11 13:55 mysqld-bin.000001.gz #: this is the file containing 4fbe2d57-5843-11e6-9268-0800274fb806:1
-rw-rw---- 1 mysql mysql  381 Nov 11 13:55 mysqld-bin.000002
-rw-rw---- 1 mysql mysql  333 Nov 11 13:55 mysqld-bin.000003
-rw-rw---- 1 mysql mysql   60 Nov 11 13:55 mysqld-bin.index

On tool03, the database server we will use for this test, we reload replication from scratch:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.03 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161111 14:47:13
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 1
1 row in set (0.00 sec)

Bingo! We broke replication on the slave. Now we know that the GTID missing on the master was in the compressed file, and it could not be sent to the connecting slave during the negotiation. Additionally, @@GTID_PURGED was not updated the way the online manual says it should be. The test is done and the theory is confirmed (if you have additional comments, leave them at the end of the blog).
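One way to double-check that @@GTID_PURGED observation is to query it on the master after the file is gone; per the test above, the transaction hidden in the compressed file never shows up in it (a minimal check):

tool01 [(none)]:> select @@global.gtid_purged;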
Test 2: Purge the oldest file on master and reload replication on slave
Let’s make it as straightforward as possible. The purge can be done manually with the PURGE BINARY LOGS command, which does it the proper way: the binary log index file must be considered part of the purge operation as well, and it is edited to remove the index entry together with the log file on disk. I’m going to run the same test as before, but this time purge the file manually with that command.

tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       341 |
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
3 rows in set (0.00 sec)
tool01 [(none)]:> purge binary logs to 'mysqld-bin.000002';
Query OK, 0 rows affected (0.01 sec)
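Because PURGE BINARY LOGS also edits the index file, we can confirm the removal at the filesystem level; after the purge, the index should list only mysqld-bin.000002 and mysqld-bin.000003 (a minimal check; the path matches this post’s setup):

[root@tool01 mysql]# cat /var/lib/mysql/mysqld-bin.index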
tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
2 rows in set (0.00 sec)

Now, we’ll execute the same commands to check how it goes:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
              Slave_IO_State:
                 Master_Host: 192.168.0.10
                 Master_User: repl
                 Master_Port: 3306
               Connect_Retry: 60
             Master_Log_File:
         Read_Master_Log_Pos: 4
              Relay_Log_File: mysqld-relay-bin.000002
               Relay_Log_Pos: 4
       Relay_Master_Log_File:
            Slave_IO_Running: No
           Slave_SQL_Running: Yes
             Replicate_Do_DB:
         Replicate_Ignore_DB:
          Replicate_Do_Table:
      Replicate_Ignore_Table:
     Replicate_Wild_Do_Table:
 Replicate_Wild_Ignore_Table:
                  Last_Errno: 0
                  Last_Error:
                Skip_Counter: 0
         Exec_Master_Log_Pos: 0
             Relay_Log_Space: 151
             Until_Condition: None
              Until_Log_File:
               Until_Log_Pos: 0
          Master_SSL_Allowed: No
          Master_SSL_CA_File:
          Master_SSL_CA_Path:
             Master_SSL_Cert:
           Master_SSL_Cipher:
              Master_SSL_Key:
       Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
               Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
              Last_SQL_Errno: 0
              Last_SQL_Error:
 Replicate_Ignore_Server_Ids:
            Master_Server_Id: 1
                 Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
            Master_Info_File: /var/lib/mysql/master.info
                   SQL_Delay: 0
         SQL_Remaining_Delay: NULL
     Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
          Master_Retry_Count: 86400
                 Master_Bind:
     Last_IO_Error_Timestamp: 161111 16:35:02
    Last_SQL_Error_Timestamp:
              Master_SSL_Crl:
          Master_SSL_Crlpath:
          Retrieved_Gtid_Set:
           Executed_Gtid_Set:
               Auto_Position: 1
1 row in set (0.00 sec)

The GTID in the purged file is needed by the slave. In both cases, we can set @@GTID_PURGED, as below, to the transaction that we know was purged, and move replication forward:

tool03 [(none)]:> stop slave; set global gtid_purged='4fbe2d57-5843-11e6-9268-0800274fb806:1';
Query OK, 0 rows affected, 1 warning (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
tool03 [(none)]:> start slave;
Query OK, 0 rows affected (0.01 sec)

The above sets @@GTID_PURGED so that the slave only requests GTIDs that still exist on the master: marking everything up to the oldest surviving GTID minus one as purged makes the slave start replication from the oldest existing GTID. In our scenario, the replica restarts replication from 4fbe2d57-5843-11e6-9268-0800274fb806:2, which lives in binary log file mysqld-bin.000002. Replication is fixed, as its threads can resume processing the data stream coming from the master.
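As a sanity check once replication resumes, the built-in GTID functions can confirm that the slave is no longer missing anything the master has executed; an empty result means the slave is caught up (a minimal sketch, where the literal set comes from this post’s outputs):

tool03 [(none)]:> select gtid_subtract('4fbe2d57-5843-11e6-9268-0800274fb806:1-3', @@global.gtid_executed) as still_missing;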
You will need to take additional steps to checksum and sync the data for the transactions that were skipped over when setting the new value for @@GTID_PURGED. If replication continues to break after restarting, I advise you to rebuild the slave (possibly the subject of a future blog).
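For that checksum-and-sync step, the usual tools are pt-table-checksum and pt-table-sync from Percona Toolkit (a hedged sketch; the user, password and checksum table are placeholders, not taken from this post):

[root@tool01 ~]# pt-table-checksum --replicate=percona.checksums h=192.168.0.10,u=checksum_user,p=secret
[root@tool01 ~]# pt-table-sync --replicate=percona.checksums h=192.168.0.10,u=checksum_user,p=secret --print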
Good explanations of this can be found in the bugs below, reported by the Facebook team and by Laurynas Biveinis, the Percona Server lead (who clarified the issue):
MySQL Bugs: #72635: Data inconsistencies when master has truncated binary log with GTID after crash;
MySQL Bugs: #73032: Setting gtid_purged may break auto_position and thus slaves;
Conclusion
Be careful when purging binary logs, or doing anything with them manually, because @@GTID_PURGED needs to be updated automatically when binary logs are purged, and that appears to happen only when expire_logs_days does the purging. Even then, be careful about trusting this variable: it does not consider fractions of days, and depending on the write volume of a database server, disks can fill up in minutes. This blog showed that both a housekeeping script and the PURGE BINARY LOGS command can break replication this way.
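For reference, the automatic expiration that does keep @@GTID_PURGED up to date is configured in my.cnf like this (an illustrative sketch; the values are examples, not recommendations):

[mysqld]
# purge binary logs older than 7 days automatically (whole days only)
expire_logs_days = 7
# cap the size of each individual binary log file
max_binlog_size  = 1G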
source

F

Feed Reader Bot in MySQL
Planet MySQL
MySQL 8.0: Improving the Test Framework to avoid skipping tests
In MySQL 8.0 we have improved the MySQL Test Framework (MTR) by adding a new no-skip option that avoids skipping tests in regression test runs. Before describing this feature in more detail, let me start with an introduction.
Introduction
In MTR, we have an existing command called skip.…
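From the description above, the interplay looks roughly like this: a test that bails out via the skip command can be forced to run by invoking MTR with the new no-skip option (a sketch; the test name and condition are hypothetical):

# mysql-test/t/my_feature.test: bail out unless the condition holds
if (!$HAVE_FEATURE_X)
{
  --skip Test needs feature X
}

# MySQL 8.0: refuse such skips for the whole run
shell> ./mysql-test/mtr --no-skip my_feature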
source
2016 December 02

F

Feed Reader Bot in MySQL
Planet MySQL
MySQL 8.0: MTR Configurations to Be Set to Server Defaults Where Possible
MySQL Test Run, or MTR for short, is a MySQL test program. It was developed to ensure that the MySQL server’s operation is as expected, whether in terms of testing the functionality of new features or the integrity of the old.…
source

F

Feed Reader Bot in MySQL
Planet MySQL
Make MySQL 8.0 Better Through Better Benchmarking
This blog post discusses how better MySQL 8.0 benchmarks can improve MySQL in general.
Like many in the MySQL community, I’m very excited about what MySQL 8.0 offers. There are a lot of great features and architecture improvements. Also, like many in the MySQL community, I would like to see MySQL 8.0 perform better. Better performance is what we always want (and expect) from new database software releases.
Rarely do performance improvements happen by accident – they require running benchmarks, finding bottlenecks and eliminating them. This is the area where I think things could use improvement.
If you come to the MySQL Keynote at Oracle OpenWorld, or if you go to the MySQL Benchmarks Page, you find a very limited set of benchmarks. They mostly focus on sysbench performance, with large numbers of connections and large numbers of cores. I’m not convinced this effort is the best use of our time these days.
Don’t get me wrong: as one of the original designers of sysbench, I think it is a great and simple tool that helps spot many bottlenecks. I still use it to find performance issues. But it is only one tool, and it by no means provides full coverage of real-world MySQL workloads.
I agree with Mark Callaghan (see discussion here): we need to run more benchmarks using a wider set of circumstances, to ensure there are no regressions in new releases. This will help move MySQL performance forward for real users.      
Here are some specific ideas on how I think we could benchmark MySQL 8.0 better:
Focus on production-recommended settings. Way too often we see benchmarks run with the doublewrite buffer disabled, InnoDB checksums disabled and no binary log (like in this benchmark). While they produce some excitingly high numbers, they have little practical value for real workloads. At the very least, I would like to see separate numbers for the “street legal car,” versus one designed to set a record on the salt flats.
Go beyond sysbench. Sysbench focuses on PK-only based access for very simple tables, and does not even do JOINs as part of its workload. I would like to see more benchmarks that have tables with many indexes, using secondary key lookups and joins, involving rows with many fields, and medium and large size blobs that are common bottlenecks. We also need more database features covered. Are foreign keys or triggers getting faster or slower? What about stored procedure execution performance? I would love to see these get covered. Mark Callaghan suggests LinkBench, which I think is a fine benchmark to add, but it shouldn’t be the only one.
Do more with sysbench. Sysbench could get more done and cover more important workloads. Workloads with data fitting in memory and not fitting in memory should be shown separately. Testing performance with large numbers of tables is also very important – many MySQL installations for SaaS applications run tens of thousands of tables (sometimes going into millions). I would also suggest running more injection benchmarks with sysbench, as they are more representative of the real world (a concrete sketch follows at the end of this list).
Look at latency, not just throughput. The benchmark results we commonly see focus on throughput over a long period of time, without looking at latency and how performance changes over time. Stalls and performance dips are well known in the MySQL space – especially the famous InnoDB checkpointing woes (though this issue has gotten a lot better). There are other cases and circumstances where stalls and slowdowns can happen.
Measure resource consumption. I very much like how Mark Callaghan shows the CPU usage and IO usage per transaction/operation, so we can get a better idea of efficiency.
Concurrency. Recently, the focus has been on very high concurrency in terms of connections and active connections, typically on very big iron (using as many as 72 cores). And as much as this is important to “future-proofing” MySQL as we get more and more cores per socket every year, it should not be the only focus. In fact, it is extremely rare for me to see sustained loads of more than 20-40 “threads running” for well-configured systems. With modern solutions like ProxySQL, you can restrict concurrency to the most optimal levels for your server through multiplexing. Not to mention the thread pool, which is available in MySQL Enterprise, Percona Server and MariaDB. I would like to see a much more focused benchmark at medium-to-low concurrency. The fact that single-thread performance has gotten slower in every major MySQL version is not a good thing. As MySQL currently runs a single query in a single thread, it impacts query latencies in many real-world situations.
Virtualization. We need more benchmarks in virtualized environments, as virtualization and the cloud are where most workloads are these days (by number). Yes, big iron and bare metal are where you get the best performance, but it’s not where most users are running MySQL. Whenever you are looking at full blown virtualization or containers, the performance profile can be substantially different from bare metal. Virtualized instances often have smaller CPU cores – getting the best performance with 8-16 virtual cores might be a more relevant data set for many than the performance with 100+ cores.
SSL and encryption. MySQL 5.7 was all about security. We’re supposed to be able to enable SSL easily, but was any work done on making it cheap? The benchmark Ernie Souhrada did a few years back showed a pretty high overhead (in MySQL 5.6). We need more focus on SSL performance, and getting it would allow more people to run MySQL with SSL. I would also love to see more benchmarks with encryption enabled, to understand better how much it costs to have your data encrypted “at rest,” and in what cases.
Protocol X and MySQL Doc Store. These were added after MySQL 5.7 GA, so it would be unfair to complain about the lack of benchmarks comparing the performance of those versus previous versions. But if Protocol X is the future, some benchmarks are in order. It would be great to have official numbers on the amount of overhead using MySQL Doc Store has compared to SQL (especially since we know that queries are converted to SQL for execution).
Replication benchmarks. There are a lot of great replication features in newer MySQL versions: statement/row/mixed, GTID or no GTID, chose multiple formats for row events, enable various forms of semi-sync replication, two ways of parallel replication and multi-source replication. Additionally, MySQL group replication is on the way. There seems to be very little comprehensive benchmarks for these features, however. We really need to understand how they scale and perform under various workloads.
Mixed workloads.  Perhaps one of the biggest differences between benchmarks and real production environments is that in benchmarks the same workload often is used over and over, while in the real world there is a mix of “application workloads.” The real world also has additional tasks such as backups, reporting or running “online” ALTER TABLE operations. Practical performance is performance you can count on while also serving these types of background activities. Sometimes you can get a big surprise from the severity of impact from such background activities.
Compression benchmarks. There have been some InnoDB compression benchmarks (both for new and old methods), but they are a completely separate set of benchmarks that are hard to put in context with everything else. For example, do they scale well with high numbers of connections and large numbers of cores?
Long-running benchmarks. A lot of the benchmarks[...]
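To make the sysbench suggestions above concrete, here is the shape of run I have in mind: durability left at production settings on the server, many tables, moderate concurrency, and per-interval latency reporting (a sketch for sysbench 0.5; the Lua script path, credentials and sizes are placeholders):

# prepare a data set spread over many tables
shell> sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=secret \
    --oltp-tables-count=1000 --oltp-table-size=100000 prepare

# run at moderate concurrency and report throughput and latency every 10 seconds
shell> sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=secret \
    --oltp-tables-count=1000 --num-threads=32 --report-interval=10 \
    --max-time=3600 --max-requests=0 run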
source