Guilty as charged, until proven innocent, right?
Well, it is not that nothing can ever go wrong in the database, but I wish it would stop being treated as the black sheep of the family.
Recently, I was in a situation where the application team was constantly complaining about bad response time, day after day. The first thing to explore was whether the application had changed its behaviour, since most of the time that is the cause of performance issues. Surprisingly enough, it had not. No new SQLs, no major changes in execution counts.
Second, I went looking for the database changing its behaviour. But it hadn't. No changes in execution plans (usually a major cause of performance degradation). As far as I looked, from the database side it appeared that the database had not changed its behaviour at all.
So what could be wrong?
Storage can go wrong, for example, and that was the case here.
Looking for problems in the wrong place will not help solve them. But just saying "it is not the database" probably won't help either. You will almost always have to prove it...
Average physical read/write time is what Oracle experiences when dealing with I/O. But underneath these numbers, two more tiers are involved: the file system and the storage.
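As a quick first check, before digging into AWR history, the cumulative averages since instance startup can be pulled straight from v$filestat (joined to v$datafile for the file name). A minimal sketch; note the readtim/writetim columns are in hundredths of a second, hence the multiplication by 10 to get milliseconds:

```sql
-- Cumulative average I/O times per datafile since instance startup.
-- readtim/writetim are in centiseconds, so *10 converts to milliseconds.
select d.name,
       f.phyrds, f.phywrts,
       round(f.readtim  * 10 / nullif(f.phyrds,  0), 3) avg_read_ms,
       round(f.writetim * 10 / nullif(f.phywrts, 0), 3) avg_write_ms
  from v$filestat f,
       v$datafile d
 where f.file# = d.file#
 order by avg_write_ms desc nulls last;
```

Since these counters are cumulative, they smooth out recent spikes; for the trend over time, the AWR-based query below is the way to go.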
Well, after checking the average physical read time and average physical write time of several files in this database, it was clear (unfortunately, just to me for a while) that writes had become much slower, starting at exactly the time the application had begun to complain.
Don't think it was easy. For many days the storage guys kept saying "Nothing is wrong with the storage" (how many times have you heard that one, I wonder?), but finally the numbers became so high that they could not be ignored any more.
At last, the spotlight shone on the right place (and stopped pointing at the wrong place, the database). Well, probably just until the next complaint comes in... but I guess I can't win them all.
Here is a query that can be run on Oracle 10g and above. It might look scary, but in short, it just goes over dba_hist_filestatxs (where file statistics are kept over time), and since the numbers there are cumulative from snapshot to snapshot (the view does not hold deltas), I used the LAG analytic function to subtract the preceding snapshot's value from the current one, for every statistic I wanted to observe. As above, the *tim columns are in hundredths of a second, hence the multiplication by 10 to get milliseconds.
select * from (
  select b.begin_interval_time,
         nullif( phywrts - lag(phywrts,1,0) over (order by a.snap_id), 0) delta_phywrts,
         trunc( (writetim*10 - lag(writetim*10,1,0) over (order by a.snap_id))
                / nullif( phywrts - lag(phywrts,1,0) over (order by a.snap_id), 0), 3) avg_write_ms,
         nullif( phyrds - lag(phyrds,1,0) over (order by a.snap_id), 0) delta_phyrds,
         trunc( (readtim*10 - lag(readtim*10,1,0) over (order by a.snap_id))
                / nullif( phyrds - lag(phyrds,1,0) over (order by a.snap_id), 0), 3) avg_read_ms,
         nullif( singleblkrds - lag(singleblkrds,1,0) over (order by a.snap_id), 0) delta_singleblkrds,
         trunc( (singleblkrdtim*10 - lag(singleblkrdtim*10,1,0) over (order by a.snap_id))
                / nullif( singleblkrds - lag(singleblkrds,1,0) over (order by a.snap_id), 0), 3) avg_singleblkrds_ms
    from dba_hist_filestatxs a,
         dba_hist_snapshot   b
   where a.snap_id         = b.snap_id
     and a.dbid            = b.dbid
     and a.instance_number = b.instance_number
     and a.filename        = 'my_file.dbf'
)
where to_char(begin_interval_time,'HH24') between '07' and '11'              -- in case you wish to see only specific hours
  and trim(to_char(begin_interval_time,'DAY')) not in ('FRIDAY','SATURDAY')  -- in case you wish to filter out nonworking days
order by begin_interval_time;
Below is part of the real-life output, where major changes in average write time can clearly be seen, especially from 24.5 onward. I took the statistics of a specific data file, each time between 09:30 and 12:00 on normal working days (not weekends). The snapshot interval is 15 minutes. (For simplicity, although the query above shows both read and write statistics, below I show only writes, since that is the issue relevant to this post.)
On 7.5, average write time was between 2 and 7 ms.
On 16.5, average write time was between 6 and 11 ms.
On 21.5, average write time was between 6 and 14 ms.
On 24.5, average write time was between 12 and 21 ms.
On 13.6, average write time was very, very bad: around 40 ms. That is 10 times slower than a month earlier!
Finally, on 6.7, average write time went back to normal, down to around 5 ms.
Pay attention to the increasing values of the third column, avg_write_ms, which shows the average physical write time of a specific datafile for every snapshot.
Ladies and gentlemen of the jury, what is your verdict?
We have found the defendant NOT GUILTY, your honor.
To sum it all up, as stated at the beginning, the database is not always to blame. This time the jury found it innocent.
The first thing to check is the application. If you see that the application is behaving more or less the same (no new SQLs, same execution count per SQL), check the database.
And if the database is behaving more or less the same, the problem is simply not there.
Something else is the source of the problem.
As always, comments are most welcome.