Check the value of the variable "log_error":
SELECT @@log_error;
or
SHOW VARIABLES LIKE 'log_error';
It defaults to /var/log/mysqld.log on a RHEL 6 server, though it can differ.
Then check that log file - it will generally tell you why the service is failing.
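You can combine the two steps into one command (a sketch; it assumes the mysql client can authenticate without an interactive password prompt):

```shell
# Ask the server where its error log is, then show the last lines of it.
# -N suppresses column headers, -B gives tab-separated "batch" output.
logfile="$(mysql -NBe 'SELECT @@log_error')"
sudo tail -n 50 "$logfile"
```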
Alternatively, if the service is being killed because the server is running out of memory, check the system log:
/var/log/messages
Look for the "Out Of Memory killer" doing its thing, or any other messages that might be relevant around the time that the server died.
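To search for OOM-killer traces specifically, a grep like the following may help (the exact message wording varies between kernel versions, so the patterns here are only the common ones):

```shell
# Search the syslog for typical OOM-killer messages.
sudo grep -iE 'out of memory|oom-killer|killed process' /var/log/messages
```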
There are basically three ways of upgrading PostgreSQL from different major versions (e.g. 9.1 to 9.3).
Upgrading with pg_dump
The first one, and recommended if possible, is to do a dump of the old (9.1) version using the binary of the newer (9.3) version and restore it on a new cluster created of the newer version.
This approach is generally the slower one, but also the most reliable. One tip to make it faster is to use concurrency. To dump with parallel jobs, you can do:
$ pg_dump --format=directory --jobs=4 --no-synchronized-snapshots --file=/path/to/mydump mydatabase
You'll have to do this for each database you have. Adjust the --jobs=4 value as needed (test values from 2 up to the number of cores and see which gives the best speed). Also, during this phase nobody should be connected to the database: any modification will result in a corrupted dump (because of the unsafe --no-synchronized-snapshots option).
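The per-database loop can be sketched like this (the new-version binary path and the dump directory are placeholders, not from the original answer):

```shell
# Dump every non-template database with the new (9.3) pg_dump binary.
# Both paths below are assumptions -- adjust them for your system.
NEWBIN=/usr/pgsql-9.3/bin
DUMPDIR=/path/to/dumps

for db in $(psql -At -c 'SELECT datname FROM pg_database WHERE NOT datistemplate'); do
    "$NEWBIN/pg_dump" --format=directory --jobs=4 \
        --no-synchronized-snapshots \
        --file="$DUMPDIR/$db" "$db"
done
```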
After that, you can restore your dump into the new instance using pg_restore:
$ createdb <options> -T template0 mydatabase
$ pg_restore --exit-on-error --jobs=4 --dbname=mydatabase /path/to/mydump
After that, it is recommended to run ANALYZE on your database:
$ vacuumdb --analyze-only mydatabase
(if you can afford the time, use --analyze instead of --analyze-only to also VACUUM the database and update the visibility maps)
Upgrading with pg_upgrade
Another option is to use the contrib module pg_upgrade. Using the --link method, it provides a really fast way of upgrading PostgreSQL.
Before using it, make a backup of the entire data directory, because in --link mode, if something goes wrong, you may lose both the new and the old data. Also, read the entire docs, especially the notes at the bottom (there are some limitations to pg_upgrade).
UPDATE:
Please use the --check option before running the definitive command. Also, for large databases it is advisable to run the command inside a screen session.
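Putting the flags together, a run might look like the following (the binary and data-directory paths are assumptions for a typical RHEL layout, not from the original answer):

```shell
# First, a dry run: --check only verifies compatibility, changing nothing.
pg_upgrade --check --link \
    -b /usr/pgsql-9.1/bin -B /usr/pgsql-9.3/bin \
    -d /var/lib/pgsql/9.1/data -D /var/lib/pgsql/9.3/data

# If the check passes, run the same command without --check to upgrade.
# With --link, the old cluster must not be started again afterwards.
pg_upgrade --link \
    -b /usr/pgsql-9.1/bin -B /usr/pgsql-9.3/bin \
    -d /var/lib/pgsql/9.1/data -D /var/lib/pgsql/9.3/data
```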
Upgrade using a trigger based replication tool
Another option for upgrading is to use a trigger-based replication tool, such as Slony, Bucardo, or Londiste.
This is the option that requires the least downtime, but it is the hardest one to set up.
To do that, you build a master-slave setup where the master is your current version (9.1) and the slave is the new version (9.3). You then wait for the first sync (with the system still in production); after that, you disconnect everyone from the database (the downtime starts here), wait for the slave to catch up, promote it to master, and redirect all clients/applications to the new version. And you're done.
The Slony documentation provides a step-by-step to upgrade PostgreSQL using Slony.
Which one to choose
Well, as always, it depends. To sum up:
- Dump+restore is the most reliable, but generally the slowest (though parallelism can give pretty good results)
- pg_upgrade is one of the best options for little downtime (if you can use it; see its limitations); it often takes only a few minutes, even for big databases
- Trigger-based replication is without a doubt the one with the least downtime (near zero), but it is really hard to set up, and I recommend it only for experienced people (with both PostgreSQL and the replication tool)
I hope I could help. Good luck.
Best Answer
Turns out the answer was on Stack Overflow, and Daniel commented on it right about the time I was installing it.
https://stackoverflow.com/a/22833812/147562