PLS-00201: identifier ‘SECRETUSER.ASHVIEWER’ must be declared

Mar 01 2013
 

It happens when I try to deploy ASH Analytics on one of our production databases. We’ve done this before and there was no issue with it. It seems this is a “feature” of the latest OEM Cloud Control. We recently patched the OMS to 12.1.0.2.1 (PSU Jan’2013), and this is probably the cause.

Usually ASH Analytics is easy to set up. Doug Burns has demonstrated this in his lecture “Falling in Love All Over Again: OEM 12c Performance Page Enhancements” at the latest BGOUG seminar. Generally you need a user with some specific privileges (OEM tells you what is needed). After you create such a user, you click on Performance home -> ASH analytics, provide the credentials, and the feature gets deployed by an OEM job. But in the latest release the deployment job fails with

Error while executing the script : oracle.sysman.assistants.common.dbutil.SQLFatalErrorException:
Error creating PL/SQL Object PRVT_AWR_DATA : PACKAGE BODY – PLS-00201: identifier ‘SECRETUSER.ASHVIEWER’ must be declared
Error creating PL/SQL Object PRVT_AWR_DATA : PACKAGE BODY – PL/SQL: Statement ignored
at line number – 2580
Driver SQL Script encountered errors.
Fail

After three failures I knew it was not my fault. And, of course, I could not find anything similar in Metalink or on Google.

Fortunately, at the beginning of the job log you can see the scripts that will be executed. Even better: the packages are not wrapped, so it is easy to identify the problem. And the problem is the wrong execution order of the scripts.

The SQL scripts can be found on the OEM app server, in plugins/oracle.sysman.db.oms.plugin_12.1.0.3.0/sql/db/latest/instance/. The order in the job is:

prvs_awr_data.sql
prvt_awr_data.sql
prvs_awr_data_cp.sql
prvt_awr_data_cp.sql
dbms_compare_period.sql
prvt_compare_period.sql
eaddm_pkgdef.sql
eaddm_pkgbody.sql
ashviewer_pkgdefs.sql
ashviewer_pkgbodys.sql

The second script fails with the aforementioned error because it needs the ashviewer package which, as you can see, is created by the last two files. The solution is to run the scripts in the following order:

prvs_awr_data.sql

ashviewer_pkgdefs.sql
ashviewer_pkgbodys.sql

prvt_awr_data.sql
prvs_awr_data_cp.sql
prvt_awr_data_cp.sql
dbms_compare_period.sql
prvt_compare_period.sql
eaddm_pkgdef.sql
eaddm_pkgbody.sql
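Until the job ordering is fixed, the scripts can also be run by hand in the corrected order. A minimal shell sketch (SCRIPT_DIR, the user and the connect string are placeholders for your environment; the actual sqlplus call is left commented out):

```shell
#!/bin/sh
# Placeholder path to the plugin scripts on the OMS host.
SCRIPT_DIR="$MW_HOME/plugins/oracle.sysman.db.oms.plugin_12.1.0.3.0/sql/db/latest/instance"

# Corrected order: the ashviewer package is created before prvt_awr_data.sql references it.
SCRIPTS="prvs_awr_data.sql
ashviewer_pkgdefs.sql
ashviewer_pkgbodys.sql
prvt_awr_data.sql
prvs_awr_data_cp.sql
prvt_awr_data_cp.sql
dbms_compare_period.sql
prvt_compare_period.sql
eaddm_pkgdef.sql
eaddm_pkgbody.sql"

for s in $SCRIPTS; do
  echo "would run: $SCRIPT_DIR/$s"
  # sqlplus -s ash_user/password@db @"$SCRIPT_DIR/$s"   # uncomment to actually execute
done
```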

After this ASH Analytics works just fine.

 Posted by at 14:55

EM Event: Critical:DB – Row(3): Supplied date is in the future

Oct 28 2012
 

This is just another nasty bug. After the DST change this night, OEM 12c Cloud Control started a real flood of notifications like this:

EM Event: Critical:ORADB – Row(3): Supplied date is in the future : now = Sun Oct 28 03:34:22 CET 2012 supplied value = Sun Oct 28 04:34:00 CET 2012

Or even worse

EM Event: Critical:ORADB – Row(1): Supplied date is in the future : now = Sun Oct 28 02:55:23 CEST 2012 supplied value = Sun Oct 28 02:55:20 CET 2012

There is nothing about this issue in My Oracle Support, of course. The only similar bug is Bug 14258529, but it has no fix or workaround.

It looks like restarting the agents fixes the issue. We have now gone more than 20 minutes without any errors.

Now I’m going back to bed.

 Posted by at 3:04

Too many databases for one server

Oct 14 2012
 

OK, the server is not that bad. It could handle the load. We are talking about a Dev server, after all; it runs Dev databases.

The problem is that it runs about 30 of them. At any given time only 5-6 (or even 1-2) are active. But things get much worse when the time comes for automatic statistics gathering. Then the server is brought to its knees. All 30 or so DBs start fighting for the disks, and the disks cannot handle the massive request load. Then ASM gets severely overloaded and even stops responding for a few seconds. In some extreme cases some of the databases get terminated:

ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 4 Serial number: 1
ASMB (ospid: 8290): terminating the instance due to error 15064

One more detail: all those 30 databases are created as copies of one “master” database. Whatever you put in the master sooner or later gets propagated to all the rest. Every developer or team can recreate their team DB as a current copy of the master (or, rarely, as a copy of another team’s DB). So, if our master DB thinks statistics have to be gathered at 22:00, sooner or later all Dev databases start gathering stats at 22:00. This is, in fact, the most resource-intensive thing that can happen to the server.

So here’s what I’ve done:

create or replace procedure p_scramble_stats_start as
  v_start_hour varchar2(2);  -- varchar2 (not char) so the value is not blank-padded in the interval string
  v_start_min  varchar2(2);
BEGIN
  -- disable
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."MONDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."TUESDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."WEDNESDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."THURSDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."FRIDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."SATURDAY_WINDOW"', force => TRUE);
  DBMS_SCHEDULER.DISABLE(name => '"SYS"."SUNDAY_WINDOW"', force => TRUE);
  -- "choose" the time
  select to_char(ora_hash(instance_name, 6)), to_char(ora_hash(instance_name, 59))
    into v_start_hour, v_start_min
    from v$instance;
  -- set
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."MONDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=MON;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."TUESDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=TUE;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."WEDNESDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=WED;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."THURSDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=THU;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."FRIDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=FRI;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."SATURDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=SAT;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => '"SYS"."SUNDAY_WINDOW"',
                               attribute => 'REPEAT_INTERVAL',
                               value     => 'freq=daily;byday=SUN;byhour='||v_start_hour||';byminute='||v_start_min||';bysecond=0');
  -- enable
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."MONDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."TUESDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."WEDNESDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."THURSDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."FRIDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."SATURDAY_WINDOW"');
  DBMS_SCHEDULER.ENABLE(name => '"SYS"."SUNDAY_WINDOW"');
END;
/

This procedure picks a pseudo-random hour and minute (from 00:00 to 06:59) based on the current instance name, then sets all maintenance windows (used for statistics gathering) to open at that time. In our case statistics gathering needs 5 to 15 minutes, so that’s OK.

I have created a scheduler job in all DBs which runs this procedure every evening at 22:00. This way, no matter which database gets copied, the stats gathering time is re-scrambled the same day.
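The nightly job can be created with DBMS_SCHEDULER; a minimal sketch (the job name is made up, the procedure name is the one from above):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SCRAMBLE_STATS_JOB',          -- hypothetical name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'P_SCRAMBLE_STATS_START',
    repeat_interval => 'freq=daily;byhour=22;byminute=0;bysecond=0',
    enabled         => TRUE,
    comments        => 'Re-scramble the maintenance window start times nightly');
END;
/
```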

This solved the problem for us.

 Posted by at 8:06

Creating a manual SQL tuning task

Oct 13 2012
 

This was tested on 11.2.0.2.

The SQL Tuning Advisor is a great piece of software. True, it needs an additional license, but it can handle problems that are practically impossible for a human being.

The most frequent case is when something goes terribly wrong on production and has to be fixed urgently. The SQL Tuning Advisor has saved us so many times in such situations… It usually finds a very good solution (without the DBA having to understand what this bloody SQL statement does at all) and creates an SQL profile for us, so we do not need to patch the application code.

There is, however, one more scenario. Imagine a brave developer creating a mighty statement with 97 lines in the execution plan. He has thought about the problem for many hours or days. Then he comes for assistance with optimizing this statement. I cannot even understand what the statement does without losing some hours! But the SQL Tuning Advisor completes in minutes and usually gives some quite good tips. But how can you run it?

It is easy to do through Grid/Cloud Control: a few clicks and that is it. But what if the database is not monitored by OEM CC (this being a Dev DB)?

In fact, it is not that hard:

1. Create a task:

DECLARE
  v_task VARCHAR2(30);
  v_sql  CLOB;
BEGIN
  v_sql     := 'SELECT ... FROM ... WHERE ...';
  v_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_text => v_sql, 
                                               user_name => 'HR', 
                                               scope => 'COMPREHENSIVE',
                                               time_limit => 3600, -- seconds
                                               task_name => 'tune_test2',
                                               description => 'Tune statement used for the new XYZ functionality.');
END;
/

2. Execute the task. This may take some time, depending on the complexity of the statement and the limit given in the former step:

exec dbms_sqltune.execute_tuning_task(task_name => 'tune_test2');
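While the task runs, its progress can be followed in the advisor views; a quick sketch:

```sql
-- EXECUTING while running; COMPLETED (or INTERRUPTED at the time limit) when done.
select task_name, status
  from dba_advisor_log
 where task_name = 'tune_test2';
```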

3. Now let’s see the result:

set long 90000 longchunksize 90000
set linesize 232 pagesize 9999
select dbms_sqltune.report_tuning_task('tune_test2') from dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('TUNE_TEST2')
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : tune_test2
Tuning Task Owner  : HR
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 3600
Completion Status  : COMPLETED
Started at         : 10/05/2012 12:07:10
Completed at       : 10/05/2012 12:12:13

-------------------------------------------------------------------------------
Schema Name: HR
SQL ID     : ba6g55fakh01v
SQL Text   : SELECT ...

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 76.48%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name => 'tune_test2',
            task_owner => 'HR', replace => TRUE);

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original With Adjusted Cost
------------------------------
Plan hash value: 2801951464

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                             |    21 |  1302 | 30711   (4)| 00:06:09 |
...
|  97 |    TABLE ACCESS FULL                     | MY_TAB                      |     2 |    10 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
...
  95 - access("E"."TEST_ID"="T"."TEST_ID")

2- Original With Adjusted Cost
------------------------------

-------------------------------------------------------------------------------
Error: cannot fetch explain plan for object: 1
-------------------------------------------------------------------------------

3- Using SQL Profile
--------------------
Plan hash value: 3439552619

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                             |    21 |  1302 |  7223   (2)| 00:01:27 |
....
|  97 |    TABLE ACCESS FULL                     | MY_TAB                      |     2 |    10 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
....
  95 - access("E"."TEST_ID"="T"."TEST_ID")

(Yes, I really had to do this for a statement with 97 steps in the execution plan and 95 predicates. Of course, I have removed the actual statement, plan and predicates here.)

4. Now we can easily accept the profile. In my case I just wanted to see the advice. But you can never be 100% sure unless you run the whole statement. And even then 🙂

execute dbms_sqltune.accept_sql_profile(task_name => 'tune_test2',  task_owner => 'HR', replace => TRUE);
 Posted by at 8:25

126390

Jul 13 2012
 

One hundred twenty-six thousand three hundred ninety. This is a huge number.

126390 is the number of index partitions we have currently on one of the fact tables in our Data Warehouse. And there are two global indexes also.

And it simply works!

I am struck with admiration for this beautiful database engine, which works flawlessly even with such extreme values.

 Posted by at 8:27

No more AUDIT BY SESSION in 11g

Feb 05 2012
 

Another interesting “feature” coming with the upgrade from 10g to 11g.

During the automated pre-upgrade tests we noticed that the audit trail table grows rapidly and that part of the IO load is caused by inserts into SYS.AUD$.

We found the following Metalink note:
Huge/Large/Excessive Number Of Audit Records Are Being Generated In The Database [ID 1171314.1]

1.) Starting with Oracle 11g the BY SESSION clause is obsolete. This is documented in the Database Security Guide :

” The BY SESSION clause of the AUDIT statement now writes one audit record for every audited event. In previous releases, BY SESSION wrote one audit record for all SQL statements or operations of the same type that were executed on the same schema objects in the same user session. Now, both BY SESSION and BY ACCESS write one audit record for each audit operation. In addition, there are separate audit records for LOGON and LOGOFF events. ”

Because of the above change it is expected that the number of the audit records/files will grow considerably.

In the documentation we find the following ambiguity:
Oracle® Database SQL Language Reference 11g Release 1 (11.1)

BY SESSION

In earlier releases, BY SESSION caused the database to write a single record for all SQL statements or operations of the same type executed on the same schema objects in the same session. Beginning with this release of Oracle Database, both BY SESSION and BY ACCESS cause Oracle Database to write one audit record for each audited statement and operation. BY SESSION continues to populate different values to the audit trail compared with BY ACCESS. If you specify neither clause, then BY SESSION is the default.

The same can be seen in Oracle® Database SQL Language Reference 11g Release 2 (11.2)

It does not get better in Oracle® Database Security Guide 11g Release 2 (11.2)

Benefits of Using the BY ACCESS Clause in the AUDIT Statement

By default, Oracle Database writes a new audit record for every audited event, using the BY ACCESS clause functionality. To use this functionality, either include BY ACCESS in the AUDIT statement, or if you want, you can omit it because it is the default. (As of Oracle Database 11g Release 2 (11.2.0.2), the BY ACCESS clause is the default setting.)

Oracle recommends that you audit BY ACCESS and not BY SESSION in your AUDIT statements. The benefits of using the BY ACCESS clause in the AUDIT statement are as follows:

To be honest, this sounds to me like “Errrr… something broke with this BY SESSION. We managed to make it work somehow, but now it acts almost like BY ACCESS”

So we had to redefine our audit policy before the upgrade. Fortunately, most noaudit operations can be executed online. Only noaudit execute takes some nasty latches, so we had to run those during the downtime.
Here is one useful query about this:

select 'noaudit select on ' || owner || '.' || object_name || ' WHENEVER SUCCESSFUL;'
  from DBA_OBJ_AUDIT_OPTS
 where sel like 'S/%'
   and owner in ('...application schemas...')
   and object_name not in ('...exclusions...')
union all
select 'noaudit execute on ' || owner || '.' || object_name || ' WHENEVER SUCCESSFUL;'
  from DBA_OBJ_AUDIT_OPTS
 where exe like 'S/%'
   and owner in ('...application schemas...')
   and object_name not in ('...exclusions...')
union all
select 'noaudit update on ' || owner || '.' || object_name || ' WHENEVER SUCCESSFUL;'
  from DBA_OBJ_AUDIT_OPTS
 where upd like 'S/%'
   and owner in ('...application schemas...')
   and object_name not in ('...exclusions...')
union all
select 'noaudit insert on ' || owner || '.' || object_name || ' WHENEVER SUCCESSFUL;'
  from DBA_OBJ_AUDIT_OPTS
 where ins like 'S/%'
   and owner in ('...application schemas...')
   and object_name not in ('...exclusions...')
union all
select 'noaudit delete on ' || owner || '.' || object_name || ' WHENEVER SUCCESSFUL;'
  from DBA_OBJ_AUDIT_OPTS
 where del like 'S/%'
   and owner in ('...application schemas...')
   and object_name not in ('...exclusions...')

One more important thing: do not forget to check what is written in ALL_DEF_AUDIT_OPTS before the upgrade. If you do not, catupgrd will create thousands of objects with inappropriate auditing.
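The check itself is trivial; any column not showing a “none” value will be inherited by objects created during the upgrade:

```sql
-- One row; each column (SEL, INS, UPD, DEL, EXE, ...) holds the default
-- object auditing option as a success/failure pair, e.g. '-/-' means none.
select * from all_def_audit_opts;
```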

 Posted by at 10:31

The adventure of upgrading to 11.2

Feb 05 2012
 

My presentation “The adventure of upgrading to 11.2” from the BGOUG seminar in Hisar can be downloaded – here

I’d like to thank Milena, SPI and all the other people who invest lots of time and effort to make the BGOUG seminars happen. You’ve done a great job, again!

 Posted by at 10:23

Similar names

Dec 23 2011
 

We have the following task: Table1 contains the clients of the biggest grocery store in Munich, and Table2 contains all the fans of Bayern (Munich). The question is how many Bayern fans visit the grocery store.
Let’s assume we have the columns First Name (FNAME), Last Name (LNAME), CITY, COUNTRY.

The easiest solution is:

select *
  from table2 t2
 where exists (select 1
          from table1 t1
         where t1.fname = t2.fname
           and t1.lname = t2.lname
           and t1.city = t2.city
           and t1.country = t2.country)

But this will only give us the right result if all names are spelled correctly. Which, unfortunately, rarely happens. For example, Mr. Schmitt among the fans may be recorded as Schmidt in the grocery store; Mrs. Pavlovich may be Pavlovic; John Smith is not the same as John Smit, etc. We need something better.

There is an SQL function which returns a character string containing the phonetic representation of a string. The function is soundex. The algorithm behind it is relatively simple and can be found here. Let’s try it:

select *
  from table2 t2
 where exists (select 1
          from table1 t1
         where soundex(t1.fname) = soundex(t2.fname)
           and soundex(t1.lname) = soundex(t2.lname)
           and t1.city = t2.city
           and t1.country = t2.country)

(To keep things simple, I have assumed CITY and COUNTRY are spelled correctly; the same transformation could be applied to them.)

But this gives us too many false positives. I do not think Marko Mjagkov is the same person as Marica Moisejevs, but soundex returns the same codes for both: M620 for the first names and M221 for the last names. We need something else.
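The collision is easy to reproduce (the codes in the comments follow the standard soundex algorithm, matching what the function returned for these names):

```sql
select soundex('Marko')     f1,  -- M620
       soundex('Marica')    f2,  -- M620
       soundex('Mjagkov')   l1,  -- M221
       soundex('Moisejevs') l2   -- M221
  from dual;
```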

There is a nice, relatively new package named utl_match. It contains functions implementing two different algorithms for this problem. The first was developed in Russia by Vladimir Levenshtein in 1965 (the EDIT_DISTANCE function). The other was developed in the US by Matthew Jaro and William Winkler and published in 1990. Let’s see what we can do using the latter:

select *
  from table2 t2
 where exists (select 1
          from table1 t1
         where utl_match.jaro_winkler(t1.fname, t2.fname) > 0.9
           and utl_match.jaro_winkler(t1.lname, t2.lname) > 0.9
           and t1.city = t2.city
           and t1.country = t2.country)

Hm, we have another problem: there is no effective way to speed up this query. If the grocery store has 10 000 clients in Munich, and the football club has 500 000 fans in the city, we will execute utl_match.jaro_winkler more than 5 000 000 000 times. It will, for example, try to compare Adolf with Gretta, which is nonsense. One cannot index this or use other infrastructure tricks; only raw computing power will help. So it takes time.

And so, I decided to combine both approaches and came up with this:

select *
  from table2 t2
 where exists (select 1
          from table1 t1
         where soundex(t1.fname) = soundex(t2.fname)
           and soundex(t1.lname) = soundex(t2.lname)
           and utl_match.jaro_winkler(t1.fname, t2.fname) > 0.9
           and utl_match.jaro_winkler(t1.lname, t2.lname) > 0.9
           and t1.city = t2.city
           and t1.country = t2.country)

Here soundex limits the number of comparisons, although it may let through many false positives. One can even create a function-based index on soundex, or a materialized view.
Then utl_match.jaro_winkler picks out the most “similar” names. Believe it or not, this works fine: on tables with 20K and 2M rows (all from the same city), it completed in tens of seconds and returned mostly reasonable results. Of course, there may be many coincidences; there are many men in Munich named Hans Schmidt. But it is useful.

P.S. During the research we also encountered PETER. We did not test it because it requires using extproc, which can potentially lead to security problems. But the product looks very promising.

 Posted by at 10:47

