You have changed too many things at once. First run 9.2 on your new hardware and check the performance. Then upgrade to 9.6 once you get that sorted out.
How did you move the data? Logically, by using pg_dump and then a restore, or physically by using pg_basebackup (or a cold copy) and then pg_upgrade?
UPDATE finalal SET jsondone = 1 WHERE id = 1;
The old server runs this in an average of 0.2 ms over 72 iterations; the new server averages 1.43 ms.
72 iterations of the same id=? value, or consecutive ones, or random ones? And all in one transaction, or each a separate one?
The new server shows half the cost (but a higher startup cost, for some reason), so I assumed it would be faster, especially on faster hardware.
Those costs are not general time estimates; they are internal accounting used to make reasonable planner choices. Things for which there are no choices (there is only one way to update a row) are not costed at all, but obviously they take actual time to do. And even with that in mind, you have to use great caution comparing them across versions. They just aren't meant for that purpose.
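If you want to see where the time actually goes, EXPLAIN with ANALYZE and BUFFERS is more useful than the bare cost numbers. A minimal sketch against your statement (table and column names taken from your query):

-- The "cost=" figures on the plan line are unitless planner estimates;
-- the "actual time" and buffer counts are what is worth comparing between servers.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE finalal SET jsondone = 1 WHERE id = 1;
ROLLBACK;  -- EXPLAIN ANALYZE really executes the UPDATE, so undo it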
I did notice that shared_blks_read and shared_blks_dirtied were about twice as high on the new server; shared_blks_hit was only slightly higher. Maybe there is a clue in that, but I do not really know what it means.
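For context, those counter names look like the pg_stat_statements columns; assuming that is where you are reading them, a sketch of pulling them for this statement on each server:

-- Requires the pg_stat_statements extension to be installed and enabled.
SELECT calls,
       total_time,           -- renamed total_exec_time in PostgreSQL 13+
       shared_blks_hit,
       shared_blks_read,
       shared_blks_dirtied
FROM pg_stat_statements
WHERE query ILIKE 'update finalal%';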
I think what this means is that your data in the new server is packed more tightly than it was in the old server, probably because you used pg_dump to get the data over rather than a physical copy. The default fillfactor for tables is 100%, so as many rows as will fit are packed into each page.
So on the old server, when you do an update, there is room to put the new version of the updated row on the same page as the old version. On the new server there is no room on that page, so it has to seek out a different page for the new version of the row (and then both pages get dirtied). That also means it has to update the indexes so they know where to find the new version.
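If that is what is happening, one mitigation (a sketch only; the 90 is an illustrative value, not something prescribed above) is to leave free space on each page so new row versions can stay local, and then watch the HOT-update counters:

-- Reserve ~10% free space per page for future row versions (illustrative value).
ALTER TABLE finalal SET (fillfactor = 90);

-- Existing pages only pick up the new fillfactor when the table is rewritten,
-- e.g. by VACUUM FULL (which takes an exclusive lock, so schedule it carefully).
VACUUM FULL finalal;

-- If updates are staying on the same page, n_tup_hot_upd should track n_tup_upd.
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'finalal';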
Hire a professional.
If this is important production data that needs to be recovered quickly, your best strategy is to get an expert in as soon as possible to get the work done.
The work you should focus on is creating processes and procedures to make sure this doesn't happen in the future, and convincing management that, given those processes and procedures, the likelihood of it happening again is as low as possible.
Best Answer
On a CentOS server, you can use the commands below:
PostgreSQL 9.6:
PostgreSQL 10.x:
PostgreSQL 11.x: