Fortunately, you can use a simple MySQL trick to get all the rows of a table to replicate through to slaves. The idea is to dump the table, delete its rows, then reload it. The delete and the subsequent reload replicate out to the slaves, after which everything is consistent again. Let's say we have a table called tpcb.history that needs to be fixed. Log in with mysql and run the following commands:
BEGIN;
SELECT * FROM tpcb.history 
  INTO OUTFILE '/tmp/tpcb.history.dmp' FOR UPDATE;
DELETE FROM tpcb.history;
LOAD DATA INFILE '/tmp/tpcb.history.dmp' REPLACE
  INTO TABLE tpcb.history FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';
COMMIT;

This seems useful enough that I put together a simple script called reload-table.sh with a README and checked them into the Tungsten Replicator codeline on SourceForge.net. You can refresh the same table shown above using the following command:
[sudo] ./reload-table.sh -u tungsten -p secret -t tpcb.history
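Once the change has replicated, it's worth a quick spot check that the master and slaves really do agree again. One simple way (a minimal check, not part of the script; note that checksums are only comparable between servers running the same MySQL version and row format) is to run the same statement on the master and on each slave and compare the results:

CHECKSUM TABLE tpcb.history;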
9 comments:
What about mk-table-sync or REPLACE INTO?
I agree. When we are dealing with millions of records per day, the approach below is the best option.
select * into outfile ....
load data infile ...
A similar scenario is discussed in my post:
http://krishna-mysql.blogspot.com/2011/01/building-report-server.html
@Daniel
REPLACE INTO won't help with statement replication. You'll just reload the same corrupt table on the slave. You actually have to put the rows somewhere and reload them.
Thanks for the reminder on mk-table-sync. One issue with Maatkit is that sometimes, if you know what's wrong, it's simpler to use the SQL directly so you can see what you are doing.
@Krishna, don't forget the DELETE between the dump and the reload. It appears that LOAD DATA INFILE does not change anything if you skip it. You can prove this by running SHOW MASTER STATUS to see whether anything went into the binlog.
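For example, something along these lines (the binlog file name is just a placeholder; use whatever SHOW MASTER STATUS reports on your server):

SHOW MASTER STATUS;
-- run the dump / DELETE / LOAD DATA sequence here
SHOW MASTER STATUS;
-- if the position moved, you can inspect what was actually logged:
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 20;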
I've done that myself once or twice. It works most of the time, but sometimes, if the outfile is too big and my.cnf is configured to limit the binlog file by size, it may cause problems.
Other than that, it's a pretty fast way to sync replication if you are completely lost about where you are.
This won't work with row-based replication if there are rows present on the slave that aren't present on the master: the DELETE only generates row events for the master's rows, so the extra slave rows are never removed.
Consider TRUNCATE TABLE instead of DELETE FROM.
There are still cases where TRUNCATE can fall back to delete (such as in the presence of FK with InnoDB) and this might still result in inconsistency with RBR.
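If you're not sure which logging format a given server is using before choosing an approach, a quick check:

SHOW VARIABLES LIKE 'binlog_format';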
You can also do it with regular replication:
On a good slave:
STOP SLAVE;
On the broken slave, a few seconds later:
STOP SLAVE;
SHOW SLAVE STATUS;
Record the Relay_Master_Log_File and Exec_Master_Log_Pos values from the broken slave.
On the good slave:
START SLAVE UNTIL MASTER_LOG_FILE='...', MASTER_LOG_POS=...;
Using the values that you recorded on the bad slave, both slaves are now at the same transactional position with respect to the master.
Dump the tables on the good slave and restore them on the bad slave (a plain-SQL sketch of this step follows below).
Start the slave on both servers.
Done.
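A plain-SQL sketch of the dump/restore step above, reusing the pattern from the post (the table name and path are examples, and you could just as well use mysqldump; setting sql_log_bin requires the SUPER privilege):

-- on the good slave, once it has stopped at the recorded position:
SELECT * FROM tpcb.history
  INTO OUTFILE '/tmp/tpcb.history.dmp';
-- copy the file to the bad slave, then on the bad slave:
SET SESSION sql_log_bin = 0;  -- optional: keep the repair out of the bad slave's own binlog, if it has one
DELETE FROM tpcb.history;
LOAD DATA INFILE '/tmp/tpcb.history.dmp'
  INTO TABLE tpcb.history FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';
SET SESSION sql_log_bin = 1;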
@swanhart Good point on RBR. Probably the whole thing should just be done with statement replication using 'set session binlog_format=statement' (sketched at the end of this comment).
Meanwhile, I wanted to use TRUNCATE but was scared off by the confusing description in http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html. The doc indicates at one point that TRUNCATE appears in the log as DROP TABLE/CREATE TABLE, but in fact as of 5.1.54 you'll see TRUNCATE in the log. Moreover, it auto-commits: there are no BEGIN markers for a TRUNCATE issued in a transaction. That's not good, as you could lose your table if the subsequent reload fails, instead of having the transaction protect it as in the original article above.
At this point it seems you have to look at the binlog to be sure of the semantics.
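Here is roughly how the statement-replication version mentioned above would look, reusing the statements from the post with the session forced to statement-based logging (a sketch only; SET SESSION binlog_format requires the SUPER privilege, and the session reverts to the server default when you disconnect):

SET SESSION binlog_format = 'STATEMENT';
BEGIN;
SELECT * FROM tpcb.history
  INTO OUTFILE '/tmp/tpcb.history.dmp' FOR UPDATE;
DELETE FROM tpcb.history;
LOAD DATA INFILE '/tmp/tpcb.history.dmp' REPLACE
  INTO TABLE tpcb.history FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';
COMMIT;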
@swanhart: TRUNCATE is, without a doubt, faster than DELETE FROM, but it isn't transaction safe, so it isn't appropriate for this kind of thing. See: http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html, specifically "Truncate operations cause an implicit commit."
The implicit commit would end the transaction and make the operation non-atomic.
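To make the hazard concrete (a sketch, using the table from the post):

BEGIN;
TRUNCATE TABLE tpcb.history;  -- implicit commit happens here
ROLLBACK;                     -- too late: the rows are already gone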