A table belongs to one database. Period. You can share a tablespace among databases of the same DB cluster, but you cannot share a table.
But you can use dblink for what you have in mind. That way you can query tables from a different database, even from a different DB cluster on a different server. Simple SELECT queries against the same server are very fast. Try this search for code examples.
Or look at CREATE FOREIGN TABLE for very similar functionality. For now, I prefer dblink because it is mature and stable. The new SQL/MED features (Management of External Data), FOREIGN DATA WRAPPER and FOREIGN TABLE, are still in the making.
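As a sketch of the dblink approach, assuming the extension is available (the database name, table, and columns here are hypothetical):

```sql
-- Install the extension once per database (PostgreSQL 9.1+).
CREATE EXTENSION IF NOT EXISTS dblink;

-- Query a table in another database of the same cluster.
-- "otherdb", "some_table", and the columns are assumptions for illustration;
-- the column definition list must match what the remote query returns.
SELECT *
FROM   dblink('dbname=otherdb', 'SELECT id, name FROM some_table')
       AS t(id integer, name text);
```

Note that dblink opens its own connection, so the remote query runs in a separate transaction from your local one.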
You've missed one place to get an overview of Oracle: the Concepts Guide. It covers all the major topics (including backup and recovery, which is quite important and doesn't appear in the list of links you've posted).
What's the next step? Create the Schema or Tablespace?
Both! They're orthogonal. Users are logical entities that access your database. Tablespaces are a storage concept. A user can have access to multiple tablespaces, and a tablespace can store data from multiple schemas. You need both, and you need to grant access to the appropriate tablespace to the users you create. (See e.g. here for the difference between user and schema.)
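To make the two concepts concrete, here is a minimal sketch in Oracle syntax (all names, paths, and sizes are assumptions, not recommendations):

```sql
-- A tablespace is storage: one or more datafiles on disk.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 2G;

-- A user (schema) is a logical entity that owns objects,
-- granted storage on the tablespace via a quota.
CREATE USER app_owner IDENTIFIED BY "change_me"
  DEFAULT TABLESPACE app_data
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON app_data;

GRANT CREATE SESSION, CREATE TABLE TO app_owner;
```

Tables created by app_owner then land in app_data by default, which shows the orthogonality: the user is who owns the data, the tablespace is where it lives.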
Tablespace datafile(s) is where actual data from tables is stored?
Yes, all your database's data and indexes are stored in tablespaces. The main storage structures are:
- Ordinary tablespaces store normal, persistent data. That's going to be the largest part of your database, space-usage wise.
- Temporary tablespaces store non-persistent data - global temporary tables that get purged at the end of sessions or transactions, temporary storage for things like on-disk sorts, etc.
- Undo tablespace(s) and redo log files: that's what Oracle uses to provide ACID guarantees.
- Control files: they describe your database (name, files, log sequence and checkpoint information, even some backup info).
(The system tablespace is an ordinary tablespace, except that you shouldn't store anything in it - consider it as Oracle internal and off-limits for ordinary use.)
In addition, you should take great care of your redo log files, the "most crucial structure for database recovery". They are "hot" (lots of writes) and should be on their own disks/LUNs.
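If you want to see how an existing database is laid out across these structures, the data dictionary exposes it. A sketch (requires DBA privileges):

```sql
-- List tablespaces and their type: PERMANENT, TEMPORARY, or UNDO.
SELECT tablespace_name, contents, status
FROM   dba_tablespaces;

-- List the datafiles backing each permanent tablespace.
SELECT tablespace_name, file_name, ROUND(bytes / 1024 / 1024) AS mb
FROM   dba_data_files
ORDER  BY tablespace_name;
```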
How many [tablespaces/datafiles] are needed?
As many as you need. There's no general rule here. The number of datafiles will depend on how much data you need to store, operating system limits, Oracle datafile size limits, your storage (hard disks/volumes) constraints, backup/recovery considerations (e.g. having only one humongous Bigfile datafile might not be the best idea), ...
How you structure your tablespaces is up to you too. Having a tablespace per "application" in your database can be a good approach to get started. You can always create more tablespaces later if needed (but keep in mind that moving an object from one tablespace to another can be time-consuming, and might require either downtime or pretty complex operations).
Default or Temporary?
Both! You need space to store your data persistently, and you also need some amount of temporary storage for your database's operation.
How much space will I need for it?
Anywhere between a few megabytes and several terabytes – only you can know. To estimate the space you need for a table, create a table with the same structure, fill it with sample data (more or less statistically representative of what you'll be storing in it), and measure the space usage. Then extrapolate. Don't forget to include the space required for indexes (and materialized views)!
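Once you've loaded your sample data, you can measure actual usage from the data dictionary. A sketch, where the segment names are hypothetical:

```sql
-- Space used by a sample table and its index, in megabytes.
-- 'SAMPLE_TABLE' and 'SAMPLE_TABLE_PK' are placeholder names.
SELECT segment_name, segment_type, ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   user_segments
WHERE  segment_name IN ('SAMPLE_TABLE', 'SAMPLE_TABLE_PK')
ORDER  BY bytes DESC;
```

If your sample is, say, 1% of the expected row count, multiply by 100 and add a comfortable margin for growth.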
Autoextend?
I'd say yes, use autoextend features, but set limits. You probably shouldn't let Oracle try to autoextend past the actual available space on your filesystems. And monitor space usage. (Keep in mind that datafile extension is relatively costly. Don't set the autoextend size too small.)
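A sketch of what setting such a limit looks like (the file name and sizes are assumptions):

```sql
-- Let the datafile grow automatically, but cap it well below
-- the actual free space on the filesystem.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/app_data01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```

A reasonably large NEXT increment avoids frequent, costly file extensions.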
For ZFS specifically, Oracle has a whitepaper you might be interested in: Configuring ZFS for an Oracle Database (270k PDF).
Unfortunately, an isolated backup of a tablespace cannot be used for any purpose whatsoever.
I think what you're trying to do is based on the premise that a tablespace could be more or less moved around, possibly plugged into a different database, or into the same database at a different point in time, that kind of thing. But none of this is possible. A tablespace holds live metadata, in rows and elsewhere, that becomes wrong the moment you isolate it from the rest of the instance.
The results of DDL statements go into the pg_catalog schema of each database. pg_catalog also contains references to objects that are shared across all databases and not stored in any particular database, such as pg_authid or pg_database.
Looking at Have postgres' pg_dump export an index, I can see where your idea is coming from. However, I don't see how @mustaccio's answer has any practical use. Say you have a FS-level image of a table plus its indexes, with matching TIDs. Then what? These files have no practical use outside of the rest of the cluster.
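You can see the shared catalogs directly; the same rows come back no matter which database of the cluster you are connected to:

```sql
-- pg_database and pg_authid are shared, cluster-wide catalogs.
SELECT datname FROM pg_database;   -- every database in the cluster
SELECT rolname FROM pg_authid;     -- every role (requires superuser)
```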