Jan 17, 2010

What's in *Your* Binlog?

Over the last couple of months I have run into a number of replication problems where I needed to run reports on MySQL binlogs to understand what sort of updates servers were processing, as well as to compute peak and average throughput.  It seems that not even Maatkit has a simple tool to report on binlog contents, so I wrote a quick Perl script called binlog-analyze.pl to summarize the output of mysqlbinlog, the standard tool for dumping binlogs to text.

The script operates as a filter with the following syntax:
Usage: ./binlog-analyze.pl [-h] [-q] [-v]
Options:
  -h : Print help
  -q : Suppress excess output
  -v : Print verbosely for debugging
To get a report, you just run mysqlbinlog on a binlog file and pipe the results into binlog-analyze.pl.  Here is a typical invocation and its output; the -q option keeps the output as short as possible.
$ mysqlbinlog /var/lib/mysql/mysql-bin.001430 | ./binlog-analyze.pl -q
===================================
| SUMMARY INFORMATION             |
===================================
Server Version    : 5.0.89
Binlog Version    : 4
Duration          : 1:03:37 (3817s)

===================================
| SUMMARY STATISTICS              |
===================================
Lines Read        :        17212685
Events            :         3106006
Bytes             :      1073741952
Queries           :         2235077
Xacts             :          817575
Max. Events/Second:            5871.00
Max. Bytes/Second :         1990077.00
Max. Event Bytes  :          524339
Avg. Events/Second:             813.73
Avg. Bytes/Second :          281305.20
Avg. Queries/Sec. :             585.56
Avg. Xacts/Sec.   :             214.19
Max. Events Time  :         9:01:02

===================================
| EVENT COUNTS                    |
===================================
Execute_load_query   :           10
Intvar               :        53160
Query                :      2235077
Rotate               :            1
Start                :            1
User_var             :          182
Xid                  :       817575

===================================
| SQL STATEMENT COUNTS            |
===================================
begin                :       817585
create temp table    :            0
delete               :        31781
insert               :           20
insert into          :       411266
select into          :            0
update               :       633857

There are lots of things to see in the report, so here are a few examples.  For one thing, peak update rates hit 5871 events and close to 2MB of log output per second.  That's a loaded server but not enormously so--MySQL replication can easily dump over 10,000 events per second into the binlog on workhorse 4-core machines.  The application(s) connected to the database execute a large number of fast, short transactions--typical of data logging operations, for example storing session data.  We can also see from the Execute_load_query events that somebody executed MySQL LOAD DATA INFILE commands.  That's interesting to me because we are just adding support for them to Tungsten and need to look out for them in user databases.
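
(As a sanity check, the averages are just the counts divided by the 3817-second duration: 2235077 queries / 3817s ≈ 585.56 and 817575 transactions / 3817s ≈ 214.19, which match the Avg. Queries/Sec. and Avg. Xacts/Sec. lines above.)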

To interpret the binlog report most effectively, you need to understand MySQL binlog event types.  MySQL replication developers have kindly provided a very helpful description of the MySQL binlog format that is not hard to read.  You'll need to refer to it if you get very deeply into binlog analysis.  It certainly beats reading the MySQL replication code, which is a bit of a thicket.
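
If you want to see the mechanics, the heart of a filter like this is just a pattern match on the event header comments that mysqlbinlog prints before each event.  Here is a stripped-down sketch of that loop; the regex is simplified and illustrative, not copied from the script:

#!/usr/bin/perl
# Stripped-down filter core: count events by type and find the peak
# events/second in mysqlbinlog text output (illustrative sketch only).
use strict;
use warnings;

my %event_counts;
my %per_second;
my ($events, $max_per_sec, $max_time) = (0, 0, '');

while (my $line = <STDIN>) {
    # Event headers look roughly like:
    # #100117 10:30:05 server id 1  end_log_pos 371  Query  thread_id=5 ...
    next unless $line =~ /^#(\d{6})\s+(\d{1,2}:\d{2}:\d{2})\s+server id\s+\d+\s+end_log_pos\s+\d+\s+(\w+)/;
    my ($date, $time, $type) = ($1, $2, $3);
    $events++;
    $event_counts{$type}++;
    # Bump the per-second counter and track the busiest second seen so far.
    my $n = ++$per_second{"$date $time"};
    ($max_per_sec, $max_time) = ($n, $time) if $n > $max_per_sec;
}

printf "Events            : %15d\n", $events;
printf "Max. Events/Second: %15d (at %s)\n", $max_per_sec, $max_time;
printf "%-20s : %12d\n", $_, $event_counts{$_} for sort keys %event_counts;

Event sizes come almost for free as well: within a single binlog file, the difference between successive end_log_pos values is the size of each event.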

Anyway, I hope this script proves useful.  As you may have noted from the URL, the script is checked into the Tungsten project on SourceForge and will be part of future releases.  I plan to keep tweaking it regularly to add features and fix bugs.  Incidentally, if you see any bugs, let me know.  There are without doubt a couple left.

13 comments:

Shlomi Noach said...

Hi,

"...It seems that not even Maatkit has a simple tool to report on binlog contents..."

Try out:
mysqlbinlog mysql-bin.00001 | mk-query-digest --type=binlog

Regards

Robert Hodges said...

Mk-query-digest is a very useful tool, but I had a surprising number of problems with it. It was difficult to extract what I was really looking for, namely peak SQL update rates and peak rates of output to the binlog. I also had problems with Perl dependencies on systems I don't want to modify.

Perhaps most important, mk-query-digest seems really slow. Running head-to-head, my script was about 30 times faster. That's a big deal because one run with my script already took over an hour to crunch 50GB+ of binlog data. Perl can scan 1GB of mysqlbinlog output in a couple of seconds, but the execution time increases dramatically as you start to add regexp parsing logic.
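
If you want to see the effect yourself, here is a crude harness (illustrative only--this is not code from either tool). Run it once as-is and once with the match line commented out:

# Crude timing harness: bare line scan vs. scan plus a header regex.
use strict;
use warnings;
use Time::HiRes qw(time);

my ($lines, $headers) = (0, 0);
my $t0 = time();
while (my $line = <STDIN>) {
    $lines++;
    # Comment out the next line to time the bare scan alone.
    $headers++ if $line =~ /^#\d{6}\s+\d{1,2}:\d{2}:\d{2}\s+server id\s+\d+\s+end_log_pos\s+\d+/;
}
printf "%d lines (%d event headers) in %.2f seconds\n",
    $lines, $headers, time() - $t0;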

Baron said...

Peak rates aren't there. But they could be.

The slowness is due to the default analysis, which does a LOT of work. (I've spent large amounts of time profiling and optimizing that.) The default analysis can be disabled easily with the reporting options.

The right way to do this in mk-query-digest would be to write a plugin, which is something we're getting pretty close to supporting.

Robert Hodges said...

Hi Baron,

I'm not sure I would have used a mk-query-digest plugin, as it would probably have taken longer to learn how to do that than to write something from scratch. Anybody who has custom needs can just take the script I developed as a template and roll their own. Perl is pretty good that way--simple example and off you go. On the other hand, if I wanted detail, mk-query-digest is clearly the way to go.

Ronald Bradford said...

Not bad. I can use this for client analysis when the replication volume is high.

Robert Hodges said...

@Ronald
Thanks for the comment. I am starting to think about extending this a bit to collect what you might call "fast-but-half-hearted" stats on busy systems--our next problem is to figure out quickly whether it makes sense to add slaves for scaling purposes.

Ronald Bradford said...

Robert,

I'd add one thing to your report. A distribution including min/max/avg/95% of the event size.

I say this because a client is now getting a Nagios alert that they were not getting before my performance improvements. While we review why, their check in effect takes the master binlog position and the slave(s)' binlog position and raises an error when the difference exceeds 5,000.

I found, just by eyeballing the log, single events in the binlog > 5,000 bytes. It would have been great to have a tool give me a distribution analysis per file (or, in my case, per timeslice of a file) that showed more detail around size.

Robert Hodges said...

Hi Ronald,

That's a great idea. It also dovetails with an issue we are circling around for Tungsten replication. We can tolerate the occasional "fat" event, but as they grow bigger or more common we need to increase resources assigned to the replicator.

Robert

Baron said...

Robert, if you add 95% to your tool I'd be interested to see how you do it. We do that in Maatkit, and that's one of the primary reasons the default analysis was slow before we switched to a better but much more complex implementation. (Regex parsing is not the main culprit, by the way, contrary to your guess.) A naive implementation can easily run a machine out of memory in a matter of seconds and swap it into the ground, especially with Perl, which is a memory hog. So let me know if you find a good way to do that.

Robert Hodges said...

Hi Baron! I have been thinking about this all week and have come to the same conclusion--this one can't be done easily with single counters. Also, I misspoke on the regex comment. What I *meant* was that time goes up with code path length per event processed, something that increases very rapidly with more analysis.
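
For the record, the direction I'm leaning is an approximation: bucket event sizes on a log2 scale as they stream past, then read the percentile off the histogram afterwards. Memory stays fixed at a few dozen counters no matter how large the binlog is. A rough sketch follows--this is a thought experiment, not code from binlog-analyze.pl:

# Bounded-memory percentile approximation: log2 histogram of event sizes.
use strict;
use warnings;
use POSIX qw(ceil);

my @buckets;    # $buckets[$i] counts events with size in [2^$i, 2^($i+1))
my $total = 0;

sub record_size {
    my ($bytes) = @_;
    my $i = 0;
    $i++ while (1 << ($i + 1)) <= $bytes;    # find the log2 bucket
    $buckets[$i]++;
    $total++;
}

sub approx_percentile {
    my ($p) = @_;
    my $target = ceil($total * $p);
    my $seen = 0;
    for my $i (0 .. $#buckets) {
        $seen += $buckets[$i] || 0;
        return (1 << $i) if $seen >= $target;    # bucket lower bound
    }
    return 0;
}

record_size($_) for (120, 200, 350, 4096, 524339);
printf "Approx. 95th pctile: %d bytes\n", approx_percentile(0.95);

The price is precision--the answer is only good to within a power of two--but for spotting whether "fat" events are becoming common, that should be plenty.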

Unknown said...

@Robert Hodges

There seems to be a bug when I pass multiple binlog files to this script. The summary statistics are not correct.

Robert Hodges said...

@Anonymous What command did you run and what seemed to be incorrect about the results?

Unknown said...

I ran the script on 20 binlogs at a time

Here are the statistics


===================================
| SUMMARY STATISTICS              |
===================================
Events            : 107039739
Bytes             : 1073742164
Queries           : 71373098
Xacts             : 35666585
Max. Events/Sec.  : 3124 @16:03:59
Max. Bytes/Sec.   : 879538 @15:52:35
Max. Queries/Sec. : 2082 @16:03:59
Max. Xacts/Sec.   : 1042 @16:03:59
Max. Event Bytes  : 11416 @5:36:26
Avg. Events/Sec.  : 167773.89
Avg. Bytes/Sec.   : 1682981.45
Avg. Queries/Sec. : 111870.06
Avg. Xacts/Sec.   : 55903.74


All the data here seems to be incorrect. Bytes shows as 1GB even though I ran the script on 20 1GB files, and queries/sec is 10 times higher than the actual rate. This happens only when I pass multiple binlog files at a time.

When I pass only one log, everything seems fine.
