MySQL tables growing to 7 million rows cannot be optimized – here is another way

Optimizing these large tables DOES NOT WORK. It corrupts them, and then you have to call tech support to get a backup restored. That holds even when you SSH into the server and run it from the command line. I reminded myself of this yesterday when I optimized a table and forgot the better way.
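For the record, the statement I keep warning myself off of is nothing exotic – xxx here is just a placeholder for the table name, the same as in the statements further down:

[sql]
-- The textbook way to reclaim overhead, and the one that has corrupted tables this size for me
OPTIMIZE TABLE xxx;
[/sql]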

First, back up the sluggish table. Mine had lots of overhead – that was the hint. Another way to back it up is to copy it into a new table; don't do this from phpMyAdmin, because it will almost certainly time out. I'm on a Mac, so I use Sequel Pro and set the connection timeout to 1000 seconds or more in the preferences. If you can SSH into the machine, even better.
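(Not in my original notes, but if you want to see the overhead yourself before touching anything, SHOW TABLE STATUS reports it in the Data_free column – xxx stands in for your table name, just like in the statements below.)

[sql]
-- Data_free is the unused space in the table file, i.e. the overhead
SHOW TABLE STATUS LIKE 'xxx';
[/sql]

With the timeout raised, here is the copy itself, run one statement at a time: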

[sql]
-- 1. Make a straight copy of the table as a backup
CREATE TABLE xxx_backupddd_nnn LIKE xxx;
INSERT INTO xxx_backupddd_nnn SELECT * FROM xxx;

-- 2. Move the bloated original aside, recreate it empty, and reload it from the backup
RENAME TABLE xxx TO xxx_overheadddd_nnn;
CREATE TABLE xxx LIKE xxx_overheadddd_nnn;
INSERT INTO xxx SELECT * FROM xxx_backupddd_nnn;

[/sql]

In my case xxx is the table name and ddd_nnn is the date and attempt number, such as 20140303_1. I run each command one at a time and look carefully at the output. Sometimes the server still times out after only a second or two, so be watchful.
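To make the naming concrete, here is the same sequence with a made-up table name – wp_comments is purely an example, and the COUNT(*) comparison at the end is just cheap insurance I would add, not part of the procedure above:

[sql]
-- First attempt on 2014-03-03, on a hypothetical table called wp_comments
CREATE TABLE wp_comments_backup20140303_1 LIKE wp_comments;
INSERT INTO wp_comments_backup20140303_1 SELECT * FROM wp_comments;

RENAME TABLE wp_comments TO wp_comments_overhead20140303_1;
CREATE TABLE wp_comments LIKE wp_comments_overhead20140303_1;
INSERT INTO wp_comments SELECT * FROM wp_comments_backup20140303_1;

-- Sanity check: the rebuilt table should have the same row count as the backup
SELECT COUNT(*) FROM wp_comments;
SELECT COUNT(*) FROM wp_comments_backup20140303_1;
[/sql]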

No tech support phone call or $15 backup recovery fee required. Really – that is not bad for a recovery fee.