The very first thing to do is to make a copy of whatever data files you still have, and to keep it and any backups safe until long after your recovery effort is complete. Please read this (short) Wiki page:
http://wiki.postgresql.org/wiki/Corruption
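For the file-level copy itself, a minimal sketch (the paths here are hypothetical, and the server should be stopped first so the copied files are consistent):

# with PostgreSQL stopped, copy the entire data directory,
# preserving permissions and ownership
cp -a /var/lib/pgsql/data /some/safe/location/pgdata-backup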
Once you have done that, you can attempt various recovery strategies without fear that you will be worse off for the attempt, beyond the time required to try it. In general I recommend carefully following one of the techniques described in the documentation -- attempts to cut corners or to be creative often lead to corruption. Only a seasoned expert with a good understanding of PostgreSQL internals should attempt to deviate from the documented steps.
You didn't describe your backup strategy; details of what is available there may suggest alternatives you would not otherwise have.
Ultimately, if you have data of value which is not backed up, you may need to hand-edit the system tables to eliminate references to the lost tablespace. This is not for the faint of heart. There are a number of companies with which you can contract for such services, many of which have experience with recovery from catastrophic hardware failure like this.
http://www.postgresql.org/support/professional_support/
I am not affiliated with any of these companies.
What you are asking for has never been documented or attempted.
However, I have answered two posts like this in the past.
I have an additional suggestion, going forward, for data you are going to delete. You will have to put some good developer effort into this one:
Before dropping a table, you should:
- go to every INT and FLOAT column and set the values to 0
- for every CHAR, VARCHAR, and TEXT column:
  - get the length of each value
  - overwrite the value with the same number of bytes, all Xs or all 0s
You will have to examine the entry in INFORMATION_SCHEMA.COLUMNS for each column to determine which columns are numeric and which are character. Once you have manually overwritten all the data in the old table, you are free to drop it, knowing you masked the data beforehand.
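As a sketch of how to generate those masking statements (the schema and table names db1.old_table are hypothetical, and the data type lists may need extending for your schema), you can have INFORMATION_SCHEMA.COLUMNS emit one UPDATE per column:

SELECT CONCAT(
    'UPDATE ', table_schema, '.', table_name,
    ' SET ', column_name, ' = ',
    CASE
        -- numeric columns: zero them out
        WHEN data_type IN ('tinyint','smallint','mediumint','int','bigint',
                           'decimal','float','double') THEN '0'
        -- character columns: overwrite with the same number of Xs
        ELSE CONCAT('REPEAT(''X'', CHAR_LENGTH(', column_name, '))')
    END,
    ';') AS masking_sql
FROM information_schema.columns
WHERE table_schema = 'db1'
  AND table_name = 'old_table'
  AND data_type IN ('tinyint','smallint','mediumint','int','bigint',
                    'decimal','float','double',
                    'char','varchar','text','mediumtext','longtext');

You can feed that query through mysql -ANe, just like the conversion scripts below, to capture the generated UPDATEs as a runnable .sql file; run them and verify the table contents before dropping it.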
CAVEAT
If none of your InnoDB tables have constraints/foreign keys, here is something you can do during some downtime:
- Convert all InnoDB tables to MyISAM
- Shut down MySQL
- Delete ibdata1
- Start up MySQL
- Convert those MyISAM tables back to InnoDB
I can't take credit for that idea. Shlomi Noach suggested it back on September 19, 2013.
If the DB is not too big, here is how you can script it.
MYSQL_USER=root
MYSQL_PASS=password
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
# Generate the script that converts every InnoDB table to MyISAM
SQL="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=MyISAM;')"
SQL="${SQL} FROM information_schema.tables WHERE engine='InnoDB'"
mysql ${MYSQL_CONN} -ANe"${SQL}" > Convert_To_MyISAM.sql
# Generate the script that converts the same tables back to InnoDB
# (run it now, while the tables are still InnoDB)
SQL="SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=InnoDB;')"
SQL="${SQL} FROM information_schema.tables WHERE engine='InnoDB'"
mysql ${MYSQL_CONN} -ANe"${SQL}" > Convert_To_InnoDB.sql
# Make the next shutdown do a full purge and change-buffer merge
mysql ${MYSQL_CONN} -ANe"SET GLOBAL innodb_fast_shutdown = 0"
At this point, look at the two scripts in vi or less.
If you are satisfied with the code, then you can do this:
mysql ${MYSQL_CONN} < Convert_To_MyISAM.sql
service mysql stop
# Remove ibdata1 together with the redo logs; all of them are recreated
# from scratch on the next startup (datadir assumed to be /var/lib/mysql)
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
service mysql start
mysql ${MYSQL_CONN} < Convert_To_InnoDB.sql
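Once the reconversion finishes, you can confirm the shrink by checking the file size; a freshly recreated ibdata1 starts out small and only grows with new metadata and undo information:

ls -lh /var/lib/mysql/ibdata1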
Give it a Try !!!
Best Answer
I would load the whole backup file onto a test server (is that possible for you?) and then dump just the one table:
mysqldump -u root -p test1 catalog_product_index_price > dump_catalog_product_index_price.sql
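If the single-table dump looks good on the test server, loading it back into the target database might be as simple as this (the schema name test1 is carried over from the dump command above; by default the dump file drops and recreates the table before reloading it):

mysql -u root -p test1 < dump_catalog_product_index_price.sql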