- `COUNT(*)` includes NULLs.
- `COUNT(column_or_expression)` doesn't.

This means `COUNT(any_non_null_column)` gives the same result as `COUNT(*)`, because there are no NULL values to cause a difference.

Generally, `COUNT(*)` should perform at least as well, because the optimizer can satisfy it from any index, whereas the column in `COUNT(column_or_expression)` may not be indexed or SARGable.
From ANSI-92 (look for "Scalar expressions 125"):

> Case:
>
> a) If COUNT(*) is specified, then the result is the cardinality of T.
>
> b) Otherwise, let TX be the single-column table that is the result of applying the &lt;value expression&gt; to each row of T and eliminating null values. If one or more null values are eliminated, then a completion condition is raised: warning - null value eliminated in set function.
The same rules apply to SQL Server and Sybase, at least.
Note: COUNT(1) is the same as COUNT(*) because 1 is a non-nullable expression.
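The difference is easy to demonstrate with a small table (a sketch; the table `t` and the column `val` are made up for illustration):

```sql
-- Hypothetical table with one nullable column.
create table t (val integer);

insert into t (val) values (1);
insert into t (val) values (2);
insert into t (val) values (null);

select count(*)   from t;  -- 3: counts every row, NULLs included
select count(val) from t;  -- 2: the NULL row is eliminated (raising the warning above)
select count(1)   from t;  -- 3: 1 is never NULL, so this behaves like COUNT(*)
```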
A trigger `on schema` (with no schema specified) doesn't fire for all schemas: it only fires when the triggering action is run by the user who owns the trigger. So the two are quite different, and are not interchangeable.
Here's an example that sets up a logging table and three `create` triggers: one `on database`, and one `on schema` for each of the users `foo` and `bar`:
```
Connected. -- as mat
SQL> create table mat.log (dt timestamp, who varchar(3),
  2  cur varchar(10), own varchar(42), obj varchar(42));

Table created.

SQL> create or replace trigger db_trig
  2  after create on database
  3  begin
  4    insert into mat.log values (systimestamp, 'db', user,
  5      ORA_DICT_OBJ_OWNER, ORA_DICT_OBJ_NAME);
  6  end;
  7  /

Trigger created.

SQL> connect foo/foo
Connected.
SQL> create or replace trigger foo_trig
  2  after create on schema
  3  begin
  4    insert into mat.log values (systimestamp, 'foo', user,
  5      ORA_DICT_OBJ_OWNER, ORA_DICT_OBJ_NAME);
  6  end;
  7  /

Trigger created.

SQL> connect bar/bar
Connected.
SQL> create or replace trigger bar_trig
  2  after create on schema
  3  begin
  4    insert into mat.log values (systimestamp, 'bar', user,
  5      ORA_DICT_OBJ_OWNER, ORA_DICT_OBJ_NAME);
  6  end;
  7  /

Trigger created.
```
Now let's create a table in `foo`'s schema, as `foo`:
```
SQL> connect foo/foo
Connected.
SQL> create table foo.foo_stuff (id number);

Table created.
```
And let's create a table in `foo`'s schema, as `bar`:
```
SQL> connect bar/bar
Connected.
SQL> create table foo.bar_stuff (id number);

Table created.
```
Here's what we've logged:
```
SQL> select * from mat.log order by dt;

DT                             WHO CUR        OWN        OBJ
------------------------------ --- ---------- ---------- ---------------
25-NOV-12 07.52.03.797794 PM   db  FOO        FOO        FOO_TRIG
25-NOV-12 07.52.03.828670 PM   db  BAR        BAR        BAR_TRIG
25-NOV-12 07.52.03.865334 PM   foo FOO        FOO        FOO_STUFF
25-NOV-12 07.52.03.865579 PM   db  FOO        FOO        FOO_STUFF
25-NOV-12 07.52.03.894672 PM   bar BAR        FOO        BAR_STUFF
25-NOV-12 07.52.03.894911 PM   db  BAR        FOO        BAR_STUFF

6 rows selected.
```
So:

- the two `create trigger` statements were logged by the "global" `after create on database` trigger. That trigger also logged everything else.
- `foo`'s `after create on schema` trigger logged the table creation that was done by `foo`.
- `bar`'s trigger logged the table creation that was run by `bar` himself, even though `bar` created a table in `foo`'s schema.
Chained and Migrated Rows are explained in the Logical Storage Structures of the Concepts Guide.
A migrated row is a row whose column data was completely transferred from one block to another due to an update. The original block essentially only stores a "forwarding address" for that row.
A chained row would have parts of its column data in multiple blocks. The original block would contain both actual column data and a forwarding address for the rest of it. (You can get rows chained to more than two blocks.)
Both are implemented the same way deep down, so they're really two aspects of the same thing.
Also note that for tables with more than 255 columns, all rows are technically chained - one "row piece" can only contain 255 column values. The chaining can happen in the same block, or with other blocks depending on space availability (and isn't particularly "bad" if all the data ends up in the same block).
The only way, as far as I know, to get accurate data on row chaining is to use:
See Listing Chained Rows of Tables and Clusters.
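That is, Oracle's `ANALYZE ... LIST CHAINED ROWS` command. A sketch, assuming a hypothetical table `my_table` and the `CHAINED_ROWS` target table created by Oracle's `utlchain.sql` script:

```sql
-- List chained and migrated rows of my_table into the CHAINED_ROWS table
-- (created beforehand by running $ORACLE_HOME/rdbms/admin/utlchain.sql).
ANALYZE TABLE my_table LIST CHAINED ROWS INTO CHAINED_ROWS;

-- Inspect the affected rows by their head rowid.
SELECT owner_name, table_name, head_rowid
FROM   chained_rows
WHERE  table_name = 'MY_TABLE';
```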
This is potentially expensive: the whole table needs to be scanned. Statistics gathering doesn't fill the `CHAIN_CNT` column of the `dba_tables` view. (I think it might have at some point, but it doesn't in 11.2 at least.)

You can monitor the `table fetch continued row` statistic¹ (`v$sysstat`) to see whether a query is affected by chained or migrated rows, but I don't believe you can get that metric per session, so either you need a quiet system to measure, or the reading will be "noisy".

The Secrets of Oracle Row Chaining and Migration has interesting information about chained and migrated rows, how to measure them, and potential ways of fixing them.
As always, don't go about rebuilding tables or changing storage parameters "just because" you see chained or migrated rows. Do so only if you have measured that they are actually causing you performance problems.
¹ From Statistics Descriptions. See also Table Fetch by Continued Row.