Our big server (holding the db for an active web application) is about the same size: ~140GB total db space, 200+ tables, several tables with 10-70M rows, and the largest at almost 200M rows. We run about 4:1 select vs insert/update on the database (more on that below). We only have two tables that get big push updates, 300K and 1.3M rows, and that is just once a day.
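
If you want to check your own ratio, the global status counters will give you a rough number. A quick sketch; the counter names are standard MySQL, the only assumption is that you count Com_select as reads and the insert/update/delete counters as writes:

```sql
-- Rough select vs. insert/update ratio since server start:
-- Com_select counts reads; Com_insert/Com_update/Com_delete count writes.
SHOW GLOBAL STATUS
WHERE Variable_name IN ('Com_select', 'Com_insert', 'Com_update', 'Com_delete');
```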

When we were performance testing two years ago we looked at MyISAM vs InnoDB, SSD, SSD RAID, standard disk RAID, and increasing memory. We saw the biggest performance gain by switching to InnoDB and getting row-level locking, mainly because we have PHP sessions in the db (~20K rows but constant read/write/update) and the 200M row table (an activity log that gets lots of inserts and large reads when reports are run). Neither SSD option gave us much performance gain over standard disk RAID (especially for the price), and the SSD RAID was frequently slower during our once-a-day update. Increasing the memory and then increasing innodb_buffer_pool_size was a much better option for the money than the SSDs. We have 96GB of RAM in the server and the buffer pool set to 68GB. Because of this large buffer almost half our reads are cache hits, saving a huge amount of db and disk load; before the memory change our db read vs write ratio was about 7:1. The disks are 4 x 7200rpm 2.5in 250GB in a RAID 10.
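
For reference, the cache-hit figure comes straight out of the InnoDB counters, so you can watch it yourself once you are on InnoDB. A sketch; these are the standard counter names:

```sql
-- Buffer pool hit rate: read_requests are logical reads,
-- reads are the ones that actually had to go to disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
-- hit rate = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
```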

I think your change to swapping the db in the web config instead of copying the tables should help keep the SSDs alive. I would still watch the disk stats to make sure the reads from the web queries are not also helping to kill the SSDs, since in your previous setup all the web queries were hitting the SSD that keeps dying.
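
Something like iostat is enough for that, assuming a Linux box with the sysstat package installed (your device names will differ):

```sh
# Extended per-device stats every 5 seconds: watch r/s and rMB/s on the
# SSD device to see whether the web queries are still reading from it.
iostat -dxm 5
```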

Next I would work on changing to InnoDB. Switching to InnoDB will give you some performance gains on the web queries and should give a lot on the updates (row- vs table-level locking). However, before making the change be sure to review the InnoDB configuration settings, especially those related to the various buffers; InnoDB has a whole separate set of config values, and if you leave them at the defaults you will kill the server. Set the buffer pool as large as you can; with your machine, 16GB is probably about your max.
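
The conversion itself is one ALTER per table, and the config side is a handful of my.cnf settings. A rough sketch, assuming a MySQL 5.x box; the table name is made up and the numbers are starting points to tune against your workload, not gospel:

```sql
-- Convert a table to InnoDB. This rewrites the whole table, so do the
-- big ones during a quiet window.
ALTER TABLE activity_log ENGINE=InnoDB;
```

```ini
# my.cnf sketch -- sizes assume roughly a 24GB machine, adjust to yours
[mysqld]
innodb_buffer_pool_size        = 16G   # the big one; as large as the box allows
innodb_log_file_size           = 512M  # bigger logs smooth out heavy write bursts
innodb_flush_method            = O_DIRECT  # skip the OS cache; the buffer pool is the cache
innodb_flush_log_at_trx_commit = 1     # full durability; 2 trades some safety for speed
```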

Long term, I think you would be better off getting more memory in the server and running larger buffers; you could then probably go back to standard disks in a RAID to save some money. It would be great if you had enough memory for a 48GB or 64GB buffer pool.

I use the temp table swap trick in a few places also. It is a nice way to update data without impacting the users: they see nothing while the load runs, and then suddenly all the updates are there. In my setup the temp table lives in the same db to start with, so there are no copies (the swap itself is sketched below).
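
For anyone following along, the swap is a single RENAME, which MySQL performs atomically even across multiple tables; the table names here are hypothetical:

```sql
-- Build report_data_new off to the side, then swap it in atomically.
-- Readers see either the old table or the new one, never a half-loaded table.
RENAME TABLE report_data     TO report_data_old,
             report_data_new TO report_data;
```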