The solution is the dbms_redefinition
package. Basically, redefinition copies the data online into a new table (called the interim table), then moves all dependent objects such as indexes, and finally exchanges the interim table with the original one.
What you have to do is:
- Create the interim table with the correct column definition
- Run redefinition
- Drop the old table
Use dbms_redefinition
in the following way:
-- DETERMINE IF THE ORIGINAL TABLE CAN BE REDEFINED ONLINE
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('SCHEMA','YOURTABLE', DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/
-- BEGIN THE REDEFINITION
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
UNAME => 'SCHEMA',
ORIG_TABLE => 'YOURTABLE',
INT_TABLE => 'INTERIM_YOURTABLE',
OPTIONS_FLAG => DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/
-- IF THE TABLE HAD DEPENDENCIES (INDEXES ... CONSTRAINTS ... TRIGGERS)
-- THIS WOULD BE THE POINT AT WHICH THEY WOULD HAVE BEEN COPIED
DECLARE
error_count pls_integer := 0;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCHEMA', 'YOURTABLE', 'INTERIM_YOURTABLE',
dbms_redefinition.cons_orig_params, TRUE, TRUE, TRUE, FALSE, error_count);
DBMS_OUTPUT.PUT_LINE('errors := ' || TO_CHAR(error_count));
END;
/
-- FINISH THE REDEFINITION
exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCHEMA','YOURTABLE','INTERIM_YOURTABLE');
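After FINISH_REDEF_TABLE the two tables are exchanged, so YOURTABLE now has the new structure and the interim table holds the old one. The "drop the old table" step from the list above therefore means dropping the interim table; a minimal sketch:

```sql
-- After the redefinition finishes, YOURTABLE has the new definition
-- and INTERIM_YOURTABLE holds the old one, so the interim table is
-- the one to drop.
DROP TABLE SCHEMA.INTERIM_YOURTABLE;
```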
If you run the redefinition as the SYSTEM user there is no problem. If you want to execute it as a less privileged user, you have to grant some extra privileges to get it working. The privileges required are:
- EXECUTE on the DBMS_REDEFINITION package
- Create any table
- Alter any table
- Drop any table
- Lock any table
- Select any table
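As a sketch, assuming the less privileged user is called APPUSER (a hypothetical name), the grants above would look like this when run as a DBA:

```sql
-- APPUSER is a placeholder; substitute the user that will run the redefinition.
GRANT EXECUTE ON DBMS_REDEFINITION TO appuser;
GRANT CREATE ANY TABLE TO appuser;
GRANT ALTER ANY TABLE  TO appuser;
GRANT DROP ANY TABLE   TO appuser;
GRANT LOCK ANY TABLE   TO appuser;
GRANT SELECT ANY TABLE TO appuser;
```

Note that the ANY privileges are powerful; if they are a concern, grant them only for the duration of the redefinition and revoke them afterwards.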
Tables with the following characteristics cannot be redefined online:
- [9.0.1] Tables with no primary keys
- Tables that have materialized view logs defined on them
- [9i] Tables that are materialized view container tables and AQ tables
- [10g] Tables that are replicated in an n-way master configuration can be redefined, but horizontal subsetting (subset of rows in the table), vertical subsetting (subset of columns in the table), and column transformations are not allowed.
- The overflow table of an IOT table
- Tables with fine-grained access control (row-level security)
- Tables with BFILE columns
- Tables with LONG columns can be redefined online, but those columns must be converted to CLOBs. Also, LONG RAW columns must be converted to BLOBs. Tables with LOB columns are acceptable.
- Tables in the SYS and SYSTEM schema
- Temporary tables
Other restrictions:
- Redefining only a subset of rows in the table (horizontal subsetting) is not supported
- Only simple deterministic expressions, sequences, and SYSDATE can be used when mapping the columns in the interim table to those of the original table. For example, subqueries are not allowed.
- If new columns are being added with no column mappings, then they must not be declared NOT NULL until the redefinition is complete.
- There cannot be any referential constraints between the table being redefined and the interim table.
- Table redefinition cannot be done NOLOGGING.
- [10g] For materialized view logs and queue tables, online redefinition is restricted to changes in physical properties.
- You cannot convert a nested table to a VARRAY.
You will want to load your data into a new table in small batches, then drop the existing table. I put together a quick example using the Sales.Currency table in AdventureWorks; something similar should work for you too.
First, create your new table, complete with the new datatype you want to use:
CREATE TABLE [Sales].[Currency_New](
[CurrencyCode] [nchar](4) NOT NULL,
[Name] [varchar](128) NOT NULL,
[ModifiedDate] [datetime] NOT NULL,
CONSTRAINT [PK_Currency_New_CurrencyCode] PRIMARY KEY CLUSTERED
(
[CurrencyCode] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
Then, insert your records in batches. I am using a batch size of 10 here, but you will likely want something larger, say 10,000 rows at a time. For 30MM rows I'd even suggest a batch size of 100k; that's the limit I typically used with larger tables:
DECLARE @RowsInserted INT, @InsertVolume INT
SET @RowsInserted = 1
SET @InsertVolume = 10 --Set to # of rows
WHILE @RowsInserted > 0
BEGIN
INSERT INTO [Sales].[Currency_New] ([CurrencyCode]
,[Name]
,[ModifiedDate])
SELECT TOP (@InsertVolume)
SC.[CurrencyCode]
,SC.[Name]
,SC.[ModifiedDate]
FROM [Sales].[Currency] AS SC
LEFT JOIN [Sales].[Currency_New] AS SCN
ON SC.[CurrencyCode] = SCN.[CurrencyCode]
WHERE SCN.[CurrencyCode] IS NULL
SET @RowsInserted = @@ROWCOUNT
END
I usually do a sanity check and verify the rowcounts are the same before cleaning up:
SELECT COUNT(*) FROM [Sales].[Currency]
SELECT COUNT(*) FROM [Sales].[Currency_New]
Once you are confident you have migrated your data, you can drop the original table:
DROP TABLE [Sales].[Currency]
Last step, rename the new table so that users don't have to change any code. Note that the second argument to sp_rename must be the bare table name, without schema or brackets, or they will be embedded literally in the new name:
EXEC sp_rename 'Sales.Currency_New', 'Currency';
GO
I don't know how long this will take. I'd suggest you try doing this when you have a clear maintenance window and users aren't connected.
HTH
As the error message states, you can't just modify the number column to reduce its precision and scale.
Here is one of the workarounds I would use for this situation.
Here is my table.
- Add another column of the NUMBER data type with the desired precision.
- Update the new column with the data from the old column.
- Set the old column to NULL so its specification can be changed.
- Modify the column.
- Update the column with the original data.
- Drop the temp column.
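The steps above can be sketched as follows, assuming a hypothetical table T whose column AMT is being reduced to NUMBER(5,2), with AMT_TMP as the temporary column; adjust names and precision to your case:

```sql
-- T, AMT and AMT_TMP are placeholder names for this example.
ALTER TABLE t ADD (amt_tmp NUMBER(5,2));  -- temp column with the new precision
UPDATE t SET amt_tmp = amt;               -- copy the data over
UPDATE t SET amt = NULL;                  -- empty the old column; ORA-01440
                                          -- requires it to be empty before
                                          -- precision or scale can decrease
ALTER TABLE t MODIFY (amt NUMBER(5,2));   -- now the modification succeeds
UPDATE t SET amt = amt_tmp;               -- copy the data back
ALTER TABLE t DROP COLUMN amt_tmp;        -- drop the temp column
COMMIT;
```

Keep in mind the copy back into AMT will fail with a value-too-large error if any existing values do not fit the new precision, so check the data first.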