Thursday, October 14, 2010

HowTo: xtrabackup directly to target host, no additional space for archive file needed

xtrabackup is a great tool for taking backups/snapshots, but sometimes we have large amounts of data to deal with and not enough spare storage to mess around with.

The xtrabackup docs contain steps for streaming your backup over the network to another host, which is fine if the end result you want is a tar/gzip style archive. In some cases, though, you just want the raw files sitting on the other host, already extracted, so you can build a new slave DB from them.

In that case you just want to get the snapshot onto the new host as simply as possible -- and in many cases I don't have enough storage to first write a tar or tar.gz and then extract it.

To work around that, here is a way you can stream your backup over the network straight onto disk on the other side, while avoiding the need for an archive file as a stepping stone in the process.


Note: My bash-fu is probably not as advanced as some, so perhaps there is a more elegant way to make this fly, but it seems to work just fine for me.


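# Rough idea of the one-liner below (it assumes ssh access as root to
# target-host and that TCP port 9210 is free on both ends):
#   - on target-host: nc listens on 9210 and feeds the stream into
#     "tar xvif -"; the -i flag keeps tar reading past blocks that look
#     like end-of-archive, which the streamed backup contains
#   - locally: innobackupex streams the backup as tar into nc, which
#     ships it across to the listener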
ssh root@target-host "cd /data/target-dir; nc -l 9210 | tar xvif - " & sleep 1; \
innobackupex-1.5.1 --stream=tar /datadir/path --user=root --password=XXXXX\
   --slave-info | nc target-host 9210

Once you are done, remember that you still need to run --apply-log before the snapshot can be used.
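For the example above, the prepare step would look something like this (adjust the path and innobackupex version to match what you actually ran):

innobackupex-1.5.1 --apply-log /data/target-dir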

Hopefully this will save someone else a few minutes.

Lachlan

Thursday, June 3, 2010

ON DUPLICATE KEY UPDATE Gotcha!

I know it has been a long time between drinks/posts, but I've been busy -- I promise! :)

Today I spent a considerable amount of time trying to figure out why an INSERT SELECT ON DUPLICATE KEY UPDATE was not behaving as I would expect.

Here is an example to illustrate:

CREATE TABLE t1 (
  id INT AUTO_INCREMENT,
  num INT NOT NULL DEFAULT 0,
  PRIMARY KEY (id)
);

CREATE TABLE t2 (
  id INT NOT NULL,
  num INT
);

INSERT INTO t1 VALUES (1, 10);
INSERT INTO t2 VALUES (1, NULL);


INSERT INTO t1
  SELECT id, num
  FROM t2
ON DUPLICATE KEY UPDATE
  num=IFNULL(VALUES(num), t1.num);

To convert the above query into plain English: INSERT into table t1 the id and num fields from table t2. If a row already exists in the target table, t1, for any UNIQUE key value being inserted, then UPDATE that existing row instead, setting the num field to the result of whatever this evaluates to:

IFNULL( VALUES(num), t1.num)

To explain: VALUES(num) means "the value that the INSERT would have placed into the num field".

So we are saying: if a NULL is about to be put into the "num" field, then leave the value alone by setting it to t1.num, i.e. the value that is already there.

One might expect the result of my query to be as follows:


testDB:test> SELECT * FROM t1;
+----+-----+
| id | num |
+----+-----+
|  1 |  10 |
+----+-----+
1 row in set (0.00 sec)


However, that is not the case -- the actual result is:



testDB:test> SELECT * FROM t1;
+----+-----+
| id | num |
+----+-----+
|  1 |   0 |
+----+-----+
1 row in set (0.00 sec)


Why is this the case?

The hint lies in the result of the INSERT itself:


dbp16-int:test> INSERT INTO t1   SELECT id, num   FROM t2 ON DUPLICATE KEY UPDATE   num=IFNULL(VALUES(num), t1.num);
Query OK, 0 rows affected, 1 warning (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 1

Note: 1 warning...

testDB:test> SHOW WARNINGS;
+---------+------+-----------------------------+
| Level   | Code | Message                     |
+---------+------+-----------------------------+
| Warning | 1048 | Column 'num' cannot be null |
+---------+------+-----------------------------+
1 row in set (0.00 sec)

What is actually happening here is that because the column 'num' in the table t1 is defined as NOT NULL, MySQL is silently converting it to a valid value of 0.

So VALUES(num) actually evaluates to 0, not NULL, and that 0 is what gets written to the existing row.
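As an aside, this silent conversion only happens because the server is not running in strict mode. With strict mode enabled, the same statement should fail outright instead of warning (a sketch -- exact behaviour can vary between MySQL versions):

SET SESSION sql_mode = 'STRICT_ALL_TABLES';

INSERT INTO t1
  SELECT id, num
  FROM t2
ON DUPLICATE KEY UPDATE
  num=IFNULL(VALUES(num), t1.num);
-- expected to fail with: ERROR 1048: Column 'num' cannot be null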

This was not my intention, and the solution in this case was to allow NULLs on the "num" field of table t1. It may not always be possible to relax such restrictions, especially if other parts of your application rely on MySQL's automatic conversion to "valid" values.
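For the example tables above, that change boils down to something like this (a sketch -- dropping the NOT NULL while keeping the default):

ALTER TABLE t1 MODIFY num INT NULL DEFAULT 0;

With the column nullable, VALUES(num) stays NULL, the IFNULL() falls through to t1.num, and the existing value of 10 is left in place.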

Something to keep in mind: VALUES() will always evaluate to what would actually have ended up in the table, which is not necessarily the same thing as the value you attempted to INSERT.

Tuesday, February 2, 2010

replicate-do-db gotcha!

Last weekend we went live with a change where we split one of our central user databases off into a master-master replication pair, with HA and a virtual IP/hostname to point our apps to. It had been previously hosted on one of the shards, so it was time to give it a life of its own as well as some redundancy.

On Friday we did a test of the change in our staging environment.

After setting up the master-master pair, the first thing we verified was that writes, table creates, etc. were flowing in both directions for the database we planned to replicate between the two. Let's just call it "db1".

To enable this I had used the option: replicate-do-db=db1 in the my.cnf on both DB machines.

After we verified this, I was told that we should probably also replicate "db2", so I edited the config on both machines and changed the option to replicate-do-db=db1,db2.

We then proceeded to take a snapshot of our staging environment for these dbs so that we could test failover, app performance, etc.

We restored the snapshot, brought up the app and everything looked fine. We tested failovers. All looked good.

The change was pushed into production over the weekend, and on Sunday night a network glitch caused the HA to fail over, making the second DB in the pair active. Then late Monday evening someone noticed that changes didn't actually seem to be replicating between the two, even though SHOW SLAVE STATUS reported the IO and SQL threads running and everything caught up.

Enter the culprit - my replicate-do-db setting

The correct way to tell MySQL to enable replication for select DBs is to issue the parameter multiple times:

replicate-do-db=db1
replicate-do-db=db2

The configuration as I had it was actually telling MySQL to filter and replicate only a database called "db1,db2", which of course did not exist.

Despite the fact that we thought we'd been thorough in our testing, this managed to slip through.
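In hindsight, a quick end-to-end check on each replicated database would have caught it. Something along these lines (repl_check is just a made-up table name; note that replicate-do-db filters on the default database for statement-based replication, hence the USE):

-- On the active master:
USE db2;
CREATE TABLE repl_check (i INT);
INSERT INTO repl_check VALUES (1);

-- On the other master, a moment later -- the row should be there:
SELECT * FROM db2.repl_check;

-- Clean up (back on the active master, db2 still the default database):
DROP TABLE repl_check;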

I guess even someone who has been using MySQL for almost 10 years can make silly mistakes. Whoops!

Here's hoping that someone else out there can learn from my mistake!

Lachlan

Friday, January 15, 2010

How to fix Juniper Networks Network Connect on Mac OS 10.6

So I guess this isn't exactly MySQL related, but I can't help but post it in the hope that Google will index it and save somebody some time.

OS: Mac OS 10.6 (Snow Leopard)
Problem:
Connecting to your Juniper Networks device via web browser and clicking "Start" under the "Client Application Sessions" section to kick off a VPN session makes the browser appear to start installing the Network Connect client, but after a short time it simply returns you to the main menu page. No VPN session is established.

Solution:
It seems there is some compatibility issue with Mac OS 10.6 that is causing the problem.
Check whether you even have a "/Applications/Network Connect.app".

If not, download and install from:

https://your.juniper.device.address/dana-cached/nc/NetworkConnect.dmg

Then in a shell:

sudo chmod 755 /usr/local/juniper/nc/6*
sudo mkdir '/Applications/Network Connect.app/Contents/Frameworks'

Log out of the Juniper VPN web portal and log back in. The Network Connect app should start automagically; if not, try clicking "Start" again.

You should now have a VPN session!

Hope this helps someone out there.

Lachlan

Tuesday, January 5, 2010

What has become of me? .. and who am I anyway??...

OK, I acknowledge that I'm probably not one of the names that springs to anyone's mind when they think of well known MySQL community folks. So for the vast majority of you out there that have no idea who I am, I'll try to bring you up to speed.

I have been working with MySQL (the product) since the end of 2000. Originally I did LAMP development for various businesses in Australia up until September 2004, when I had the opportunity of a lifetime: the chance to join a little database company that could, called MySQL AB, as a MySQL Support Engineer -- the first one south of the equator, I might add!

I was nicknamed "the guy doing support upside down".

As the support organisation grew along with the company, I was later made manager of the Asia-Pacific MySQL Support Team at the beginning of 2007. I continued in that role right up until the end of 2009.

I'll spare you too much gushing, but I will tell you that my time with the MySQL folks was the most rewarding, in every way, of my professional career.

So anyway, the point here is that unless you worked in MySQL/Sun or were a MySQL Support customer over the last 5 or so years, you probably have not heard of me. Now that I have left MySQL/Sun, I aim to change that.

So where am I now? and why did I leave?

Well, to keep the story short, I had traveled to San Francisco from my home in Melbourne, Australia a few times and really grew to love the place. During a longer visit in 2009, I decided it was time for a change and began looking for a job in SF.

I found a smallish company (<100 ppl) in downtown San Francisco doing some pretty interesting stuff in the Search Engine Marketing (SEM) space called Marin Software Inc. You can follow the link to read some more about them if you're interested.

Marin were looking to build out their Operations team with someone with strong MySQL skills, and after a series of interviews it seemed that I fit the bill.

I'm now living in San Francisco and just on day 2 here at Marin, but as time progresses I plan to post various MySQL-related stuff and contribute back to the MySQL Community where I can.

Stay tuned!...