I'll begin by saying that an Oracle DBA should be able to identify the problem by looking at ASH and AWR reports; without them it's hard to know what the real problem is here. However, I'll help you with the first query.
The top query in the Top SQL with Top Row Sources section shows that the query with sql_id 8t441yd5bwygd is performing a full table scan. You can see the text of the query in the Complete List Of SQL Text section.
The main problem here is the LIKE clause in the query, which also applies the lower() function to convert the column data to lowercase before comparing it with the bind variable :1, the value passed into the query by the application.
Cut down:
select *
from PERSON_CARD
where lower(PERSON_ID) like :1;
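To see why this forces a full scan, you can pull the actual execution plan for the cursor, assuming it is still in the shared pool (the sql_id is the one from the report above):

```sql
-- Display the plan Oracle actually used for this cursor.
select * from table(dbms_xplan.display_cursor('8t441yd5bwygd', null, 'BASIC'));

-- Expect a "TABLE ACCESS FULL" on PERSON_CARD: wrapping the column in
-- lower() prevents Oracle from using an ordinary index on PERSON_ID.
```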
To answer the question properly we really need more information regarding the data types of the columns involved and the possible values that can be passed into the WHERE clause by the application, but there are a few possibilities.
My worry is that this SQL (and probably the schema design) was generated by Hibernate, and that PERSON_ID is a long hex-string UUID rather than an integer.
1) The data in the PERSON_ID column is a non-numeric string:
If (and only if) the input value :1 contains only a single wildcard at the end of the string (e.g. LIKE 'foo%'), you can create a function-based index on the PERSON_ID column:
create index person_card_pid_idx on person_card(lower(person_id));
This will cause an index range scan rather than a full table scan (I hope!).
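As a sanity check, you can confirm the optimizer picks the new index up, assuming a representative bind value without a leading wildcard (the index name matches the create statement above):

```sql
-- Generate and display the estimated plan for a representative literal.
explain plan for
  select * from person_card where lower(person_id) like 'abc%';

select * from table(dbms_xplan.display);

-- Look for "INDEX RANGE SCAN" on PERSON_CARD_PID_IDX instead of
-- "TABLE ACCESS FULL" on PERSON_CARD.
```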
2) The data in the PERSON_ID column is actually an integer that exactly matches the value passed in as :1
There are two possibilities here. If PERSON_ID isn't already indexed in the database, add an index. If it is already indexed, get the developers to change the application to use PERSON_ID = :1 instead of LIKE with a LOWER() call.
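A sketch of both options (the index name here is illustrative):

```sql
-- If PERSON_ID has no index yet, a plain B-tree index is enough
-- to serve an equality predicate:
create index person_card_pid_idx on person_card(person_id);

-- The application query then becomes a straight equality, so no
-- function wraps the indexed column and the index can be used:
select * from person_card where person_id = :1;
```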
3) The data in the PERSON_ID column is a non-numeric string and the value in :1 contains a wildcard '%' at the beginning of the string
You'll have to use Oracle Text indexes.
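A minimal sketch of what that could look like, assuming Oracle Text (the CTXSYS schema) is installed and you have the necessary grants; the index name is illustrative:

```sql
-- A CONTEXT index maintained by Oracle Text.
create index person_card_pid_txt on person_card(person_id)
  indextype is ctxsys.context;

-- Leading/trailing wildcard searches then go through CONTAINS
-- instead of LIKE:
select * from person_card where contains(person_id, '%abc%') > 0;
```

Note that CONTEXT indexes are not synchronized on commit by default, so you would also need SYNC (ON COMMIT) in the parameters clause or a scheduled sync job to keep the index current.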
From user APC:
The AWR has this section called IO Stats. Not unreasonably, this is where it reports the statistics for IO activity. Within that section there are figures for each tablespace. You need to run these reports against the target database. Ideally you want several different runs, each run being no more than twenty minutes long; the longer the run, the more likely it is that the law of averages will drain meaning from the figures you get.
So, the principle is: you run this report and get a benchmark. Then you deploy your changes and re-run the report. The difference in the various columns is the performance benefit of the change. Really, you would want requests and data per second to go up, and average time and waits to go down.
To make these figures more convincing you should run the same workload in the before and after tests. This is where something like Real Application Testing really comes into its own.
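To bracket a short test run with its own snapshots, so the report covers only your workload, something along these lines works; the awrrpt.sql script then prompts for the begin and end snapshot ids:

```sql
-- Take a snapshot immediately before starting the test workload...
exec dbms_workload_repository.create_snapshot;

-- ...run the workload (keep it under ~20 minutes), then close the window:
exec dbms_workload_repository.create_snapshot;

-- Generate the AWR report between the two snapshots:
@?/rdbms/admin/awrrpt.sql
```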
Best Answer
AWR (Automatic Workload Repository) reports take snapshots of the database at specified intervals (one hour by default) and store that data for later analysis, for example for continuous improvement of the database's performance.
ASH (Active Session History) holds information about currently active sessions, and it gathers data every second. If you are experiencing sudden database issues, you should use ASH.
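If the default one-hour interval is too coarse for your troubleshooting, the snapshot settings can be changed; the values below are just examples (a 30-minute interval, 7 days of retention, both expressed in minutes):

```sql
begin
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 30,          -- minutes between snapshots
    retention => 7 * 24 * 60  -- keep 7 days of snapshots
  );
end;
/
```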
More on this subject: http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm#PFGRF94186