I'll outline the SGA and PGA, and then try to give you a couple of appropriate examples.
The SGA consists of many structures shared by many processes (both background and foreground). Just a few of them (a quick way to see their current sizes is sketched right after the list):
- shared pool
- buffer cache
- log buffer
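A minimal Oracle sketch, assuming you can query the V$ performance views, that lists the current sizes of those components:

```sql
-- Current sizes of the main SGA components, in MB (Oracle V$ view).
SELECT name, ROUND(bytes / 1024 / 1024) AS size_mb
FROM   v$sgainfo
WHERE  name IN ('Shared Pool Size', 'Buffer Cache Size', 'Redo Buffers');
```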
If you undersize, for example, the buffer cache, queries will perform more slowly. The buffer cache is the area of memory used to hold the blocks of data read from and written to data files. Whenever you query a table, data is read from the data files in units called blocks. The requested data is not returned straight to the user process; instead it is copied to the buffer cache, and subsequent queries that request the same data are served from those buffers. An undersized buffer cache leads to lots of physical I/O and thus adversely impacts performance.
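As a rough check (again an Oracle-specific sketch; what counts as "low" depends entirely on your workload), you can watch how often requested blocks are already in memory:

```sql
-- Buffer cache hit ratio: the fraction of block requests served from memory.
-- A persistently low value under a realistic workload can hint at an undersized cache.
SELECT name,
       1 - (physical_reads / (db_block_gets + consistent_gets)) AS cache_hit_ratio
FROM   v$buffer_pool_statistics;
```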
The PGA, on the other hand, is the area of memory private to one process. It cannot be shared between processes. It is used to maintain the process state. One of the parts of the PGA is the sort area, which is used to sort a set of rows. In the best case the sort is performed entirely in memory, but if the PGA (the sort area in particular) is undersized, the sorting spills to temporary tablespace segments residing on disk. This makes queries slower because disk is one of the main bottlenecks in a computer system.
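To see whether sorts and hash joins are fitting in memory or spilling to disk, a sketch like this against the V$ views can help. In Oracle's terminology, 'optimal' means the work area fit in memory, while 'onepass' and 'multipass' mean temp segments on disk were involved:

```sql
-- How work areas (sorts, hash joins, etc.) have been satisfied since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'workarea executions%';

-- Overall PGA figures, including the configured target and cache hit percentage.
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');
```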
For more info read "Overview of the Program Global Area" and "Overview of the System Global Area" in the Oracle Database Concepts guide.
> More than five tables in a query leads to performance degradation.
A join will increase the cost of running a query to some degree no matter what, but the right thing to do ultimately depends on what's ideal or acceptable for any given project.
For example, say I have a 1-column table that contains 1 million rows, whose only column is an ordered bigint clustered index. Write a series of queries ranging from no join to 10 joins, and compare the execution plans and execution times. You'll see cost and time grow pretty much linearly. In my personal environment, SELECT * with no join might take 5 seconds to complete while 6 joins might take 6 seconds. What if the table had only 10,000 records? It would run significantly faster even if the number of joins were doubled (in my environment, under 200 ms).
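If you want to try it yourself, here is a minimal T-SQL sketch of that experiment (the table and column names are made up, and the timings above are just what I saw in my environment):

```sql
-- Build a 1-column table of sequential bigints with a clustered index.
CREATE TABLE dbo.Numbers (n BIGINT NOT NULL PRIMARY KEY CLUSTERED);

INSERT INTO dbo.Numbers (n)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

SET STATISTICS TIME ON;

-- Baseline: no join.
SELECT * FROM dbo.Numbers;

-- One self-join; keep adding t3, t4, ... and compare elapsed times and plans.
SELECT t1.n
FROM dbo.Numbers t1
JOIN dbo.Numbers t2 ON t2.n = t1.n;
```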
Now, if the only way to get the right data set is by joining 10 tables and it takes 5 seconds, is that good or bad? If speed is critical to your application, like high-speed stock trading, then you might want to reconsider. What if it were a report for the sales department?
> Does it depend on the database type (SQL Server vs. Oracle vs. MySQL), or on the number of rows in the tables, indexing, INNER vs. OUTER joins, the cardinality of the tables joined, etc.?
Database product: yes and no. Each product has different optimization logic, so one might not execute a query the same way as another; one might be faster in some cases and vice versa. Everything else you mentioned affects performance, including selectivity. OUTER joins can be slower than INNER joins, but there are occasions where an OUTER join is faster or costs less.
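As a small illustration (reusing the hypothetical dbo.Numbers table from the sketch above), you can write the same lookup as an INNER and as a LEFT OUTER join and compare the estimated plans and costs rather than assuming one form is always cheaper:

```sql
-- Same predicate written two ways; compare the execution plans for each.
SELECT t1.n
FROM dbo.Numbers t1
INNER JOIN dbo.Numbers t2 ON t2.n = t1.n;

SELECT t1.n
FROM dbo.Numbers t1
LEFT JOIN dbo.Numbers t2 ON t2.n = t1.n;
```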
Of course, my very simple example is far from any real-life scenario, but the underlying logic is the same. How many tables you should join depends on your environment and your requirements. The things you mentioned will affect performance, and if a query is running below an acceptable level or starts having a performance issue, you should examine them. But none of them is relevant to how many joins you SHOULD have.
Best Answer
The precision and scale of a number column is a matter for business rules and data modelling. If a number can have values in the trillions, don't declare it as number(6). The general cases seem to be number and number(20,0), or less commonly integer and number(15,2).

Then there are the specialized edge cases for binary_float and binary_double. If you are doing heavy-duty calculations there are definite performance benefits from choosing these data types (at the price of losing some exactitude). But if you're using regular number columns, just choose the precision and scale you need for the business rules, and don't sweat on performance.

If we don't define number further there is no default precision. We can store any number from 0.0000000000000000000000000000000000001 to 99999999999999999999999999999999999999.
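As an illustration of letting the business rules drive the declarations (the table and column names here are entirely made up):

```sql
-- Illustrative Oracle declarations only; pick precision and scale from the business rules.
CREATE TABLE order_lines (
  order_id    NUMBER(20,0),   -- surrogate key with room to grow well past the trillions
  line_no     INTEGER,        -- ANSI shorthand, stored as an Oracle NUMBER
  unit_price  NUMBER(15,2),   -- money with a fixed scale
  weight_kg   NUMBER,         -- unconstrained: accepts the full NUMBER range
  risk_factor BINARY_DOUBLE   -- heavy floating-point arithmetic, approximate results
);
```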