It's not actually possible to specify a different tablespace when importing with the Oracle imp utility. As a workaround, however, you can pre-create the tables by doing a ROWS=N import into the USERS tablespace, run ALTER TABLE mytable MOVE TABLESPACE blog_data; for each table to move it to the new tablespace, then do the import again with the IGNORE=Y parameter to ignore the table-creation errors and import all of the data.
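The workaround above can be sketched as a command sequence (a sketch only; the scott schema, the blog_data.dmp file name, and the table/index names are placeholders for the example):

```sql
-- Step 1: structure-only import; tables land in the user's default tablespace
-- (run at the OS prompt)
--   imp scott/tiger file=blog_data.dmp rows=n

-- Step 2: move each pre-created table to the target tablespace
ALTER TABLE mytable MOVE TABLESPACE blog_data;
-- Moving a table invalidates its indexes, so rebuild them afterwards:
ALTER INDEX mytable_pk REBUILD TABLESPACE blog_data;

-- Step 3: full import, ignoring "table already exists" errors
-- (run at the OS prompt)
--   imp scott/tiger file=blog_data.dmp ignore=y
```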
If the data was exported using Data Pump (expdp), you can easily import into another tablespace using the REMAP_TABLESPACE parameter. (As an aside, everyone should be using Data Pump these days rather than the old legacy exp/imp utilities.) E.g.:
impdp scott/tiger@ZOMG file=blog_data.dmp directory=mydir remap_tablespace=USERS:BLOG_DATA
You've missed one place to get an overview of Oracle: the Concepts Guide. It covers all the major topics (including backup and recovery, which is quite important and doesn't appear in the list of links you've posted).
What's the next step? Create the Schema or Tablespace?
Both! They're orthogonal. Users are logical entities that access your database. Tablespaces are a storage concept. A user can have access to multiple tablespaces, and a tablespace can store data from multiple schemas. You need both, and you need to grant access to the appropriate tablespace to the users you create. (See e.g. here for the difference between user and schema.)
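To illustrate the two halves (a sketch; the user name, password, and tablespace names here are made up, and the tablespace is assumed to already exist):

```sql
-- A user (logical entity) with access to a storage tablespace:
CREATE USER blog IDENTIFIED BY changeme
  DEFAULT TABLESPACE blog_data      -- where the user's objects go by default
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON blog_data;     -- without a quota, the user can't store data there

GRANT CREATE SESSION, CREATE TABLE TO blog;
```

The QUOTA clause is the "grant access to the appropriate tablespace" step: creating the user alone doesn't let it consume space anywhere.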
Tablespace datafile(s) is where actual data from tables is stored?
Yes, all your database's data and indexes are stored in tablespaces. The main storage structures are:
- Ordinary tablespaces store normal, persistent data. That's going to be the largest part of your database, space-usage-wise.
- Temporary tablespaces store non-persistent data - global temporary tables that get purged at the end of sessions or transactions, temporary storage for things like on-disk sorts, etc.
- Undo tablespace(s) and redo log files: that's what Oracle uses to provide ACID guarantees.
- Control files: they describe your database (name, files, log sequence and checkpoint information, even some backup info).
(The system tablespace is an ordinary tablespace, except that you shouldn't store anything in it - consider it as Oracle internal and off-limits for ordinary use.)
In addition, you should take great care of your redo log files, the "most crucial structure for database recovery". They are "hot" (lots of writes) and should be on their own disks/LUNs.
How many [tablespaces/datafiles] are needed?
As many as you need. There's no general rule here. The number of datafiles will depend on how much data you need to store, operating system limits, Oracle datafile size limits, your storage (hard disks/volumes) constraints, backup/recovery considerations (e.g. having only one humongous bigfile datafile might not be the best idea), ...
How you structure your tablespaces is up to you too. Having a tablespace per "application" in your database can be a good approach to get started. You can always create more tablespaces later if needed (but keep in mind that moving an object from one tablespace to another can be time-consuming, and might require either downtime or pretty complex operations).
Default or Temporary?
Both! You need space to store your data persistently, and you also need some amount of temporary storage for your database's operation.
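Creating one of each might look like this (a sketch; the names, paths, and sizes are placeholders, and the datafile directory must already exist):

```sql
-- Persistent storage for application data
CREATE TABLESPACE blog_data
  DATAFILE '/u01/oradata/ORCL/blog_data01.dbf' SIZE 500M;

-- Temporary storage (sorts, global temporary tables, ...)
-- uses a TEMPFILE, not a DATAFILE
CREATE TEMPORARY TABLESPACE temp_blog
  TEMPFILE '/u01/oradata/ORCL/temp_blog01.dbf' SIZE 200M;
```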
How much space will I need for it?
Anywhere between a few megabytes and several terabytes – only you can know. To estimate the space you need for a table, create a table with the same structure, fill it up with some sample data (more or less statistically representative of what you'll be storing in it) and measure the space usage. Then extrapolate. Don't forget to include the space required for indexes (and materialized views)!
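One way to measure the sample table's footprint is to query the data dictionary (a sketch; MYTABLE is a placeholder name, and segment names are stored in upper case):

```sql
-- Space used by a table and its indexes
SELECT segment_name, segment_type,
       ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   user_segments
WHERE  segment_name = 'MYTABLE'
   OR  segment_name IN (SELECT index_name
                        FROM   user_indexes
                        WHERE  table_name = 'MYTABLE');
```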
Autoextend?
I'd say yes, use autoextend features, but set limits. You probably shouldn't let Oracle try to autoextend past the actual available space on your filesystems. And monitor space usage. (Keep in mind that datafile extension is relatively costly. Don't set the autoextend size too small.)
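Capping autoextension might look like this (a sketch; the datafile path and sizes are placeholders to adapt to your storage):

```sql
-- Grow in reasonably large increments, but never past a hard limit
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/blog_data01.dbf'
  AUTOEXTEND ON NEXT 256M MAXSIZE 20G;
```

A larger NEXT increment keeps extension events infrequent; MAXSIZE keeps a runaway query or load from filling the filesystem.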
For ZFS specifically, Oracle has a whitepaper you might be interested in: Configuring ZFS for an Oracle Database (270k PDF).
Best Answer
I wonder why you would use exp when you have expdp, a more efficient and performant tool, at hand?
Using IMPDP
Yes. The import job executes the DDL to create those tables, and that DDL includes a TABLESPACE clause that defines where each table is created, not the default tablespace of the user. So the objects will be created in their original tablespace regardless of the default tablespace of the new user. This can be confirmed by extracting the dump file's DDL to a SQL file and inspecting the CREATE TABLE statements, each of which carries an explicit TABLESPACE clause.
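Extracting that DDL without importing anything can be done with the SQLFILE parameter (a sketch; the directory object, dump file, and output file names are placeholders):

```sql
-- Run at the OS prompt: SQLFILE writes the dump's DDL to a file
-- instead of executing it, so you can inspect the TABLESPACE clauses
--   impdp system@orclpdb directory=mydir dumpfile=dumpfile.dmp sqlfile=ddl.sql
```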
You can use the REMAP_TABLESPACE parameter while importing the dump. If you generate the SQL file again from a run with REMAP_TABLESPACE, the DDL for the same table shows the remapped tablespace in its TABLESPACE clause.
Update:
If that is the case, I would import the objects into the same tablespace and later migrate the indexes to another tablespace using the ALTER INDEX ... REBUILD TABLESPACE <new_tbs> command.
Using IMP
If you have to use the original import tool, then use the INDEXFILE parameter along with the FROMUSER and TOUSER parameters (same schema name for FROMUSER and TOUSER). Example:
imp system@orclpdb file=dumpfile.dmp fromuser=jay touser=jay indexfile=DDL.sql
Note: it takes some effort to edit the generated SQL file, which contains REM keywords as well as output for the number of rows.
Then:
- Edit the SQL file generated by the import tool to change the tablespace for the objects.
- Run the SQL script using SQL*Plus.
- Disable all the constraints.
- Import the dump file with the IGNORE=Y parameter to ignore the warnings about already existing objects.
- Enable the constraints.