Gaius is right: use DBA_TABLES for NUM_ROWS and DBA_SEGMENTS for size:
select owner, table_name, num_rows, last_analyzed from dba_tables;
The NUM_ROWS count is as of the LAST_ANALYZED date, which should be close enough even without running DBMS_STATS.
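If the NUM_ROWS figure does look stale, the statistics can be refreshed; a minimal sketch, where SCOTT and EMP are placeholder owner/table names:

```sql
-- Refresh optimizer statistics so NUM_ROWS and LAST_ANALYZED are current.
-- SCOTT and EMP are placeholders; substitute your own schema and table.
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP')
```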
For sizing:
select owner,segment_type,sum(bytes)/1024/1024 size_mb
from dba_segments
group by owner,segment_type;
or (depending on what level of detail you need):
select owner,segment_name,sum(bytes)/1024/1024 size_mb
from dba_segments
group by owner,segment_name;
For LOBs you'll need to join it to DBA_LOBS, for indexes to DBA_INDEXES, and for tables to DBA_TABLES. A lot will depend on your specific requirements. Since you mentioned you are trying to figure out the used space in a tablespace, it might be as simple as:
select sum(bytes)/1024/1024 size_mb
from dba_segments
where tablespace_name='YOUR_TBLSP_NAME';
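As a sketch of the DBA_LOBS join mentioned above, attributing LOB segment size back to the owning table and column (segment sizes come from DBA_SEGMENTS, table/column names from DBA_LOBS):

```sql
-- Map each LOB segment's size back to its table and column.
select l.owner, l.table_name, l.column_name,
       sum(s.bytes)/1024/1024 lob_size_mb
from dba_segments s
join dba_lobs l
  on s.owner = l.owner
 and s.segment_name = l.segment_name
group by l.owner, l.table_name, l.column_name;
```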
Your solution would really work only with a small DB, as it would not be feasible to read all the data on a large DB.
I am hesitant to post how to address ORA-01555, as in this case it is NOT your primary issue, but just for completeness: you are on 10g using automatic undo management, so your DBA would have to increase undo_retention in your database (the Ixora link is relevant to a DB without automatic undo management).
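For completeness, the undo_retention change itself is a one-liner; the 10800 seconds below is an arbitrary example value, which your DBA would replace with something covering the longest-running query:

```sql
-- Requires ALTER SYSTEM privilege; value is in seconds (3 hours here,
-- as an arbitrary example).
alter system set undo_retention = 10800;

-- Confirm automatic undo management is in effect (SQL*Plus):
show parameter undo_management
```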
I had the same issue with a migration from 9i to 11g. I decided that I did not want to take a chance that Oracle's character set conversion would do something I did not expect. Managers don't care about character sets, they just want to know that the data was migrated without any problems. I kept the WE8MSWIN1252 character set and there were no problems.
However, if you had to have a UTF8 character set and did not want to take any chances, this is how I would do it.
Requirements: a virtual machine in a test environment, to make the install and testing painless
- start with the original 10g database in a test environment
- start a new instance of 11g on a virtual machine in a test environment with the old character set
- import your data using Data Pump or the old imp/exp
- test the web application to see what happens
- if successful take a snapshot of the machine so you can revert to this
- establish what columns need to be larger. Usually these are LONG or possibly large VARCHAR2 fields
- increase the size of the columns so conversion is not required
- if you are able, set up another Oracle 11g instance with the new character set and use Data Pump to bring the data over. If this is not feasible, then export your data and bring the virtual machine back to the state before the Oracle install. Then install Oracle 11g with the new character set and import the data. Ideally no conversion will occur if you have identified all the affected tables.
- test the web application
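For the column-widening step above, a hedged sketch (MY_TABLE and MY_COL are placeholder names, and the 3000-byte threshold is an arbitrary example): candidate columns can be listed from DBA_TAB_COLUMNS, then widened or switched to character-length semantics so the declared width survives the move to a multi-byte character set:

```sql
-- Find wide VARCHAR2 columns that might overflow after conversion
-- (3000 bytes is an arbitrary example threshold).
select owner, table_name, column_name, data_length
from dba_tab_columns
where data_type = 'VARCHAR2'
  and data_length > 3000;

-- Placeholder names; CHAR semantics make the declared length a
-- character count rather than a byte count.
alter table my_table modify (my_col varchar2(100 char));
```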
Why go to all this extra work? This solution breaks your project into two separate issues: the upgrade and the character set conversion. Upgrading databases is not something to be taken lightly. Many times it is not the upgrade itself that is the problem but peripheral issues such as interactions with applications or other databases. Custom PL/SQL code should have no problems, but you never know.
If there is a problem, people will be looking to you to explain how you were diligent and careful, not how you saved a day or two.
Best Answer
If your NLS_CHARACTERSET is AL32UTF8, any VARCHAR2 column will support Unicode (via the UTF-8 encoding). In 99.9% of cases, you'd never want to create an NVARCHAR2 column.
An NVARCHAR2 column uses the NLS_NCHAR_CHARACTERSET, which will almost always be AL16UTF16, and is really only useful in a couple of cases. The first, and by far the most common, is when you have a legacy application whose NLS_CHARACTERSET doesn't support Unicode and cannot be migrated to Unicode, but where you want to support Unicode data in a handful of columns. That's not the case here. The second would be when the database does support Unicode but you want to use the UTF-16 encoding for some reason, such as the ability to store primarily Japanese and Chinese data using mostly 2 bytes per character in UTF-16 rather than 3 bytes per character in UTF-8. But if you do that, you're still (barring 12.1 and the ability to have 32k VARCHAR2 columns) limited to 4000 bytes (2000 characters in a UTF-16 column). Hence the error when you try to define NVARCHAR2(4000), which would need at least 8000 bytes to store 4000 characters.
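To illustrate the limit (a sketch against a pre-12.1 database whose NLS_NCHAR_CHARACTERSET is AL16UTF16; the table names are made up):

```sql
-- NVARCHAR2 lengths are declared in characters; AL16UTF16 uses 2 bytes
-- per character, so 2000 characters is the ceiling under the 4000-byte limit.
create table t_ok  (c nvarchar2(2000));  -- 2000 * 2 = 4000 bytes: allowed
create table t_bad (c nvarchar2(4000));  -- would need 8000 bytes: raises an error
```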