Error when loading a DB dump (cross-platform)


After making a compressed dump of a database with Sybase (15.0.3) on the original platform – Solaris SPARC 64 – with all the required steps (even the quiesce step), I tried to load it on my Sybase (15.7) Solaris x64 server (running under VMware) [equal page sizes on both systems!], and I got this error:

1> load database wfcv2  from "compress::1::/20120412_wfcv2_zdump"
2> go
Backup Server session id is: 28. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::1::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error.  This DUMP or LOAD
session must exit.
Backup Server: 6.32.2.3: compress::1::/20120412_wfcv2_zdump::000: volume not
valid or not requested (server: , session id: 28.)
Backup Server: 1.14.2.4: Unrecoverable I/O or volume error.  This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 1:
Error encountered by Backup Server.  Please refer to Backup Server messages for
details.

Suggestions?

Question from PHILL: Can you post the output of: load database wfcv2
from "compress::1::/20120412_wfcv2_zdump" with headeronly and load
database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with
listonly=full – Phil

1> load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with headeronly
2> go
Backup Server session id is: 31. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::1::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error.  This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 1:
Error encountered by Backup Server.  Please refer to Backup Server messages for
details.
1>
2>
3> load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with listonly=full
4> go
Backup Server session id is: 33. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.22.1.1: Option LISTONLY is not valid for device
'compress::1::/20120412_wfcv2_zdump::000'.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 3:
Error encountered by Backup Server.  Please refer to Backup Server messages for
details.
1>
2>
3>

Backup Server messages
-------------------------------------------------------------------------------------------------------------------------
Apr 12 11:38:00 2012: Backup Server: 2.23.1.1: Connection from Server MYSERVER on Host MyMachine with HostProcid 3776.
Apr 12 11:38:00 2012: Backup Server: 4.132.1.1: Attempting to open byte stream device: 'compress::1::/20120412_wfcv2_zdump::000'
Apr 12 11:38:00 2012: Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE is different from the database
 page size of -1024 bytes read from the dump header. The LOAD session must exit.
Apr 12 11:38:00 2012: Backup Server: 1.14.2.2: Unrecoverable I/O or volume error.  This DUMP or LOAD session must exit.
Apr 12 11:38:18 2012: Backup Server: 2.23.1.1: Connection from Server MYSERVER on Host MyMachine with HostProcid 3776.
Apr 12 11:38:18 2012: Backup Server: 4.22.1.1: Option LISTONLY is not valid for device 'compress::1::/20120412_wfcv2_zdump::000'.

Question/comments from PHILL: Actually, I think it's your syntax. The
-1024 block size thing is a red herring. Try: load database wfcv2 from "compress::/20120412_wfcv2_zdump" – Which directory is
20120412_wfcv2_zdump in? Is it really in the root (/) directory on
your box? If not, alter the path. – Phil

1) I've tried your suggestion and got the same error!

2) Because the machine where I am trying to load the dump is my test machine (and I don't have any more space available anywhere …), I am using the / (root) directory to hold the dump file for the load. Yes, it is not the correct thing to do, but as I have stated: no free space available!

Question/comment from PHILL: The LOAD syntax is incorrect.

You're not supposed to specify the compression level between the pairs of "::" characters in the LOAD DATABASE command.

Assuming your dump file is on the local filesystem at
/20120412_wfcv2_zdump, your load command should be:

1> load database wfcv2 from "compress::/20120412_wfcv2_zdump"
2> go

Sybase recommends the native "compression = compress_level" option as preferred over the older "compress::compression_level" option. If you use the native option for dump database, you do not need to use "compress::compression_level" when loading your database.

in: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.commands/html/commands/commands64.htm

As stated, Sybase recommends it – it is a recommendation, not a requirement!

I know that the load syntax is correct and works – from my personal experience. Yesterday I was able to load other DBs from the same source server onto MyMachine. Only this DB, which is over 10 GB (+/- 2 GB compressed …), is causing problems…
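For reference, here is a sketch of the two dump/load styles the documentation contrasts (the file names and the compression level below are illustrative, not the exact commands from this thread):

Older "compress::" style – the level goes in the dump device string only, not in the load:

1> dump database wfcv2 to "compress::1::/20120412_wfcv2_zdump"
2> go
1> load database wfcv2 from "compress::/20120412_wfcv2_zdump"
2> go

Native option the documentation recommends – no "compress::" prefix is needed on load:

1> dump database wfcv2 to "/20120412_wfcv2_dump" with compression = 1
2> go
1> load database wfcv2 from "/20120412_wfcv2_dump"
2> go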

Question/comment from PHILL: Are you sure you got the same error?
Is the filename correct? What's the output of ls -al
/20120412_wfcv2_zdump ? You might need to chmod 777
/20120412_wfcv2_zdump – Phil

1) Yes, the name is correct!

2) It isn't a permissions issue. I am using the root user for everything (yes, it is not the correct thing to do, but as I have stated, this is my personal test machine!).

Question/comment from PHILL: Ok, I've read the manual again. The
load format is definitely load database wfcv2 from
"compress::/20120412_wfcv2_zdump" for compressed volumes and not
"compress::1::/ … – Please post the output of the error generated by
this so I can see (I know you stated you tried it, but I'd like to
have a look anyway). The documentation even states not to put the
compression level "1". One last thing – did you ftp the file in ASCII
mode by accident? – Phil

OK! Here it goes!
… And the answer to "did you ftp the file in ASCII" is NO! Thanks anyway!

1>
2>
3> load database wfcv2 from "compress::/20120412_wfcv2_zdump"
4> go
Backup Server session id is: 35. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error.  This DUMP or LOAD
session must exit.
Backup Server: 6.32.2.3: compress::/20120412_wfcv2_zdump::000: volume not valid
or not requested (server: , session id: 35.)
Backup Server: 1.14.2.4: Unrecoverable I/O or volume error.  This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYMACHINE', Line 3:
Error encountered by Backup Server.  Please refer to Backup Server messages for
details.

I believe that the answer to this whole issue may be the simplest one: data corruption…!

Just in case, I'll do another dump and try to load it again!

Phill, thanks for your time! 😉

Best Answer


OK! Let me describe how this issue was solved:


1) There was definitely a corruption issue, caused by a problem with the VMware Tools on Solaris 10. When the network interface was under heavy transfer/load (for example, copying a 2 GB DB dump), it just stopped working in the middle of the operation. To get the interface working again, I had to disconnect and reconnect it (in the VMware interface!). Basically, I had to uninstall the VMware Tools on the Solaris virtual machine. There was one catch: the highest transfer rate that could then be achieved was about 300 Kb, so a simple FTP transfer of a 2 GB database could take hours, but there wasn't any corruption at all. How to prove/test whether there is any corruption? I just packed the database dump into a tar file on the source machine (yes, an extra 20 kB), and after the download completed on the target server I untarred the file; if the untar operation was successful, that proved there wasn't any corruption in the file that had just been transferred.
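Roughly, the check looked like this (the file names are illustrative; a checksum such as md5 would do the same job):

On the source machine:
    tar cf 20120413_wfcv2_zdump.tar 20120413_wfcv2_zdump    # pack the dump into a tar archive
    # ... ftp the .tar file (binary mode!) to the target machine ...

On the target machine:
    tar tf 20120413_wfcv2_zdump.tar    # listing only succeeds if the archive is intact
    tar xf 20120413_wfcv2_zdump.tar    # extract the dump, ready for the load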

2) After being sure that the dump was OK, I got a different error:

Apr 17 14:24:20 2012: Backup Server: 4.188.1.1: Database wfcv2: 158936 kilobytes (1%) LOADED.
Apr 17 14:24:47 2012: Backup Server: 4.188.1.1: Database wfcv2: 303212 kilobytes (2%) LOADED.
Apr 17 14:25:16 2012: Backup Server: 4.188.1.1: Database wfcv2: 447104 kilobytes (3%) LOADED.
Apr 17 14:25:39 2012: Backup Server: 4.124.2.1: Archive API error for device='compress::1::/data4/20120413_wfcv2_zdump::000': Vendor application name=Compress API, Library version=1, API routine=syb_read(), Message=syb_read: gzread() error=0, msg=Error 0
Apr 17 14:25:39 2012: Backup Server: 4.124.2.1: Archive API error for device='compress::1::/data4/20120413_wfcv2_zdump::000': Vendor application name=Compress API, Library version=1, API routine=syb_close(), Message=syb_close: gzclose() error=-3 msg=Input/output buffer is corrupt
Apr 17 14:25:39 2012: Backup Server: 6.32.2.3: compress::1::/data4/20120413_wfcv2_zdump::000: volume not valid or not requested (server: , session id: 20.)
Apr 17 14:25:39 2012: Backup Server: 1.14.2.4: Unrecoverable I/O or volume error.  This DUMP or LOAD session must exit.

OK! Sybase configuration issues!

I had to configure some parameters related to load operations, such as:

number of large i/o buffers -> 32
max memory
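For anyone hitting the same errors, the adjustment was along these lines (the 'max memory' figure below is just an example value in 2 KB units – size it for your own machine):

1> sp_configure 'number of large i/o buffers', 32
2> go
1> sp_configure 'max memory', 262144
2> go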

And a Solaris issue!

I also had to adjust the operating system shared memory available to the Sybase engine …!
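On Solaris 10 the shared memory limit can be raised through a resource control, so the adjustment was something like this (the project name and the 4 GB value are illustrative – use the project your Sybase engine actually runs under):

# raise the shared memory limit for the project that runs the ASE engine
projmod -s -K "project.max-shm-memory=(privileged,4G,deny)" default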

And I was finally able to load the DB (size > 2.1 GB)!

;-) Cheers!