RMAN-03009 and ORA-27052 Due to Opportunistic Locking

While backing up backup sets to a secondary location, which was a Windows share, the RMAN job failed with the following error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at sometime
ORA-27052: unable to flush file data
Linux-x86_64 Error: 5: Input/output error

When I dug a little deeper, I found the following lines in the server's system log:

<host> kernel:  CIFS VFS: No response to cmd 47 mid 2921
<host> kernel:  CIFS VFS: Write2 ret -11, wrote 0
<host> kernel:  CIFS VFS: Write2 ret -112, wrote 0
<host> kernel:  CIFS VFS: Write2 ret -11, wrote 0
<host> kernel:  CIFS VFS: Write2 ret -11, wrote 0

The problem was caused by the opportunistic locking (oplock) feature of the CIFS module. If the file you are copying over a CIFS share is large enough (in my case it was about 80 GB), you may hit this problem. By default, opportunistic locking is set to 1 when the module is loaded. Disabling it solved the problem with no performance drawback, at least none that I have observed.

To disable opportunistic locking:
$ echo 0 > /proc/fs/cifs/OplockEnabled
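
This change does not survive a reboot. A simple way to make it persistent, assuming the cifs module is already loaded at boot time (for example via an fstab mount), is to append the same command to /etc/rc.local:

$ echo "echo 0 > /proc/fs/cifs/OplockEnabled" >> /etc/rc.local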

ETL with Bulk Insert Using ODP.Net

As part of an ETL process, you often need to load data from flat files. In some situations the data in the file is not well-formed, so it needs to be cleaned and transformed before being loaded into the database. To achieve this, developing your own tool can be much more practical. Let's say you decide to write your code in C#; in that case you use the ODP.Net library to connect to Oracle. I'd like to show you three ways of loading data with ODP.Net.

The first one works on a row-by-row basis. Pseudocode representing the case is below:

For each row in the file
   Read row;
   Do transformations of the row;
   Insert row;
loop;

To execute the insert statements, I will use the method below. It takes advantage of bind variables to avoid hard parsing.

private void ExecuteNonQuery(string commandText, Dictionary<string, object> parameters)
{
   using (OracleCommand comm = new OracleCommand(commandText))
   {
      if (orclConn.State != System.Data.ConnectionState.Open) 
         orclConn.Open();
      comm.Connection = this.orclConn;
      foreach (KeyValuePair<string, object> kvp in parameters)
         comm.Parameters.Add(kvp.Key, kvp.Value);
      comm.ExecuteNonQuery();
   }
}
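
For context, a minimal sketch of the row-by-row loop calling this method might look like the following; the file path, delimiter, column layout and insert statement are hypothetical, and the usual System.IO and System.Collections.Generic usings are assumed:

// Hypothetical flat file: semicolon-delimited, one record per line
foreach (string line in File.ReadAllLines(@"C:\etl\input.txt"))
{
   string[] fields = line.Split(';');                    // read row
   var parameters = new Dictionary<string, object>
   {
      { "p1", fields[0].Trim() },                        // transformations of the row
      { "p2", DateTime.Parse(fields[1]) }
   };
   ExecuteNonQuery("insert into t (a, b) values (:p1, :p2)", parameters);   // insert row
}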

When I tested the code above with a file of 46,000 rows and 24 columns on a test database, it took 641.26 seconds to complete.

The second method is almost the same as the first one, except that it corrects a mistake: it commits only once. The code above doesn't contain a commit statement because OracleCommand objects commit automatically; in other words, auto-commit is on by default. This means that in the first method I committed 46,000 times. To correct the situation by turning auto-commit off, the code becomes:

Begin a transaction: OracleCommand comm = GetTransaction();
For each row in the file
   Read row;
   Do transformations of the row;
   Insert row: ExecuteNonQuery(comm, "<insert statement>", parameters);
loop
Commit transaction: comm.Transaction.Commit();

The methods referenced above are:

private OracleCommand GetTransaction()
{
   OracleTransaction dbTran = this.orclConn.BeginTransaction();
   OracleCommand comm = new OracleCommand();
   comm.Connection = this.orclConn;
   comm.Transaction = dbTran;
   return comm;
}

private void ExecuteNonQuery(OracleCommand comm, string commandText, Dictionary<string, object> parameters)
{
   comm.CommandText = commandText;
   comm.Parameters.Clear(); // clear the bindings left over from the previous row
   foreach (KeyValuePair<string, object> kvp in parameters)
      comm.Parameters.Add(kvp.Key, kvp.Value);
   comm.ExecuteNonQuery();
}

With this method, the test took 604.04 seconds, only a little faster than the first one. Why is the difference so minor? Because in Oracle, a commit takes almost the same time whether one row or a million rows have been inserted. The redo buffer has probably already been flushed many times, since the LGWR process flushes redo:
  • Every three seconds,
  • When someone commits,
  • When switching log files,
  • When the redo buffer becomes one-third full or holds 1 MB of cached redo data.
So all that is left for the commit is to trigger one more flush. What is actually expensive in a transaction is a rollback rather than a commit.

The third and final method is using arrays as bind variables, as described in the article. To implement array binding, we'll store the rows in a list and extract the values from that list as arrays:

For each row in the file
   Read row;
   Do transformations of the row;
   Add row to a list;
loop
Insert rows;

The code required to insert the rows using array binding:

OracleCommand comm = GetTransaction();

comm.ArrayBindCount = rows.Count;
comm.CommandText = "insert into t values (:p1, :p2)";
comm.Parameters.Add("p1", OracleDbType.Varchar2, rows.Select(t => t.Attribute1).ToArray(), ParameterDirection.Input);
comm.Parameters.Add("p2", OracleDbType.Date, rows.Select(t => t.Attribute2).ToArray(), ParameterDirection.Input);
comm.ExecuteNonQuery();
// If another DML statement is needed, clear the parameters and the array bind count
comm.Parameters.Clear();
comm.ArrayBindCount = 0;
comm.Transaction.Commit();

This time the test took 9.81 seconds, more than a 98% improvement. A single insert statement sweeps away all the round-trip overhead caused by multiple inserts.

These are the good, the bad and the ugly (had we skipped bind variables, it could have been even uglier) methods of bulk insert.

Changing IP Addresses of an 11g RAC

There are three phases to changing the IP addresses of an Oracle cluster, corresponding to the three networks used: public, VIP and SCAN. Below you will find the steps to alter the IP addresses of an 11g RAC installation running on Oracle Linux 5. Be sure to follow the steps in order, because the public network changes must be done before the VIP network changes.

Changing Public IPs

1. Check current network information:
$GRID_HOME/bin/oifcfg getif
eth0  10.10.11.0   global   public
eth1  192.168.0.0   global   cluster_interconnect

2. Delete the existing interface information from OCR
$GRID_HOME/bin/oifcfg delif -global eth0/10.10.11.0

3. Add it back with the correct information
$GRID_HOME/bin/oifcfg setif -global eth0/10.10.12.0:public

4. Shutdown the cluster
$ crsctl stop cluster -all

5. Modify the IP address at network layer, DNS and /etc/hosts file to reflect the change. Files to modify/check are:
   - /etc/sysconfig/network-scripts/ifcfg-eth0
   - /etc/sysconfig/network
   - /etc/hosts

Restart the network interface to activate the changes:
$ ifdown eth0
$ ifup eth0

6. Restart the cluster
$ crsctl start cluster -all

Changing VIPs

1. Check current configuration
$ srvctl config nodeapps -a
Network exists: 1/10.10.11.0/255.255.255.0/eth0, type static
VIP exists: /racnode1-vip/10.10.12.11/10.10.12.1/255.255.255.0/eth0, hosting node racnode1
VIP exists: /racnode2-vip/10.10.12.12/10.10.12.1/255.255.255.0/eth0, hosting node racnode2

2. Stop the database instance and VIP:
$ srvctl stop instance -d <db_name> -n racnode1
$ srvctl stop vip -n racnode1 -f

3. Ensure the VIP is offline and no longer bound to the network interface
$ crsctl stat res -t
$ ifconfig -a

4. Modify the IP address at network layer, DNS and /etc/hosts file to reflect the change

5. Alter the VIP
$ srvctl modify nodeapps -n racnode1 -A 10.10.12.12/255.255.255.0/eth0

6. Verify the change
$ srvctl config nodeapps -n racnode1 -a
VIP exists.: /racnode1-vip/10.10.12.12/255.255.255.0/eth0 hosting node racnode1

7. Start the database instance and VIP
$ srvctl start vip -n racnode1
$ srvctl start instance -d <db_name> -n racnode1

8. Ensure the VIP is online and bound to the network interface
$ crsctl stat res -t
$ ifconfig -a

9. Repeat the steps above for the other nodes in the cluster

Changing SCAN IPs

1. Update DNS with the new IP addresses. If a hosts file is used, change the IPs in the hosts file.

2. Stop the SCAN listener and the SCAN
$ $GRID_HOME/bin/srvctl stop scan_listener
$ $GRID_HOME/bin/srvctl stop scan
$ $GRID_HOME/bin/srvctl status scan

3. Check the current IP address(es) of the SCAN
$ $GRID_HOME/bin/srvctl config scan
SCAN name: <scan_name>, Network: 1/10.10.12.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /<scan_name>/10.10.11.15

4. Refresh the SCAN with the new IP addresses from the DNS entry:
$ $GRID_HOME/bin/srvctl modify scan -n <scan_name>

5. Check whether the SCAN has been changed
$GRID_HOME/bin/srvctl config scan
SCAN name: <scan_name>, Network: 1/10.10.12.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /<scan_name>/10.10.12.15

For detailed information, you can refer to MOS documents [ID 276434.1] for the public and VIP network changes and [ID 952903.1] for the SCAN IP changes.

Oracle R Enterprise Output Error: ORA-22621

Embedding R into the Oracle database is a terrific idea. I believe anyone practicing data analysis and/or data mining should take a look at it. Since it's not a mature environment yet, some workarounds may be needed from time to time, such as the one I came up against recently. To be clear beforehand, I'm running Oracle R Enterprise 1.1 on an 11.2.0.3 database. To illustrate the problem, here is a demonstration.

Create an embedded R function which returns a data frame/table with three columns of type integer, decimal and character respectively:
begin
  sys.rqScriptCreate('Test',
     'function(){
        data.frame(a=1:10, b= seq(0.1, by=0.1),c=letters[1:10])
  }');
end;
/

And call it within a query:
select a, b, c
from table(rqsys.rqEval(null,
         'select 1 a, 1 b, cast(''c'' as varchar2(30)) c from dual',
         'Test'));

Which will result with error:
ORA-22621: error transfering an object from the agent
ORA-06512: at "RQSYS.RQTABLEEVALIMPL",  line 60
22621. 00000 -  "error transfering an object from the agent"
*Cause:    Any error returned from pickler routines on the agent side.
*Action:   Contact Oracle Support.

The actual problem is the output format. The rq*Eval() functions take the output table definition as a parameter, and this works smoothly with numeric columns, yet the same cannot be said for character-based columns. The workaround I came up with takes advantage of good old XML. If you call the function with the 'XML' option instead of giving a table format definition, you get the result set without an error. So keep going this way and process the output, which is a CLOB containing XML:

select x.*
from table(rqsys.rqEval(null, 'XML', 'Test')) t,
     xmltable('//root/frame_obj/ROW-frame_obj' passing xmltype.createxml(t.value)
             columns a varchar2(1) path 'a/text()',
                     b varchar2(3) path 'b/text()',
                     c varchar2(1) path 'c/text()')(+) x;


Database Triggers

Database triggers are a well-suited way of monitoring your database and logging events you'd like to know about. It's possible to capture errors as they occur and even notify yourself of exceptional situations. I'll briefly present an example of these benefits here: below you'll find a demonstration of a trigger which logs errors and sends a mail when an out-of-space error occurs.

First of all, start with creating the table to log into:

CREATE TABLE SERVERERROR_LOG
(
ERROR_TIME TIMESTAMP,
DB_USER VARCHAR2(30),
OS_USER VARCHAR2(30),
USER_HOST VARCHAR2(64),
CLIENT_PROGRAM VARCHAR2(48),
SQLID VARCHAR2(13),
ERROR_STACK VARCHAR2(2000)
);


The table keeps a record of the running SQL (actually the ID of the SQL, to save space, since the full SQL text can be queried later) and the error it caused, besides client information. Here is the trigger:

CREATE OR REPLACE TRIGGER log_server_errors
AFTER SERVERERROR ON DATABASE
DECLARE
osuser VARCHAR2(30);
machine VARCHAR2(64);
prog VARCHAR2(48);
sqlid VARCHAR2(13);
err_type VARCHAR2(19);
obj_type VARCHAR2(19);
owner VARCHAR2(30);
ts VARCHAR2(30);
obj VARCHAR2(30);
sub_obj VARCHAR2(19);
BEGIN
SELECT osuser, machine, program, sql_id
INTO osuser, machine, prog, sqlid
FROM gv$session
WHERE audsid = userenv('sessionid') and inst_id = userenv('instance') and status = 'ACTIVE';

INSERT INTO servererror_log VALUES
(systimestamp, sys.login_user, osuser, machine, prog, sqlid, dbms_utility.format_error_stack);
COMMIT;

IF (SPACE_ERROR_INFO(err_type,obj_type,owner,ts,obj,sub_obj) = TRUE) THEN
UTL_MAIL.SEND( sender => 'oradb@xyz.com',
recipients => 'dba@xyz.com',
subject => 'Insufficient Tablespace Size',
message => err_type || ' at tablespace ' || ts
);
END IF;
END log_server_errors;


The trigger queries the gv$session view to gather client information and inserts into the log table. Then, if the error is an out-of-space error, the trigger sends an e-mail using the UTL_MAIL package. This is a pretty straightforward trigger that you can use and improve according to your needs.
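
Note that UTL_MAIL is not installed or configured out of the box. If you haven't set it up yet, the usual steps, run as SYS, look roughly like this (the SMTP host below is just a placeholder for your own relay):

SQL> @?/rdbms/admin/utlmail.sql
SQL> @?/rdbms/admin/prvtmail.plb
SQL> ALTER SYSTEM SET smtp_out_server = 'smtp.xyz.com:25' SCOPE=BOTH;
SQL> GRANT EXECUTE ON UTL_MAIL TO <trigger_owner>;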

Upgrading Oracle Enterprise Manager 12c R1 to 12c R2

As you might know, release 2 of Oracle Cloud Control 12c has been released. If you have already installed 12c R1, upgrading to R2 is a simple process. The 1-system upgrade approach is the only way to upgrade, whether you have installed bundle patch 1 or not. Basically, the upgrade consists of two steps: upgrading the Oracle Management Service and its repository (the installer handles both) and upgrading the agents. Here are the steps on Linux:

1. Download installation files from OEM Downloads page and unzip them. A disk space of 5.3 GB is required for decompressed installation files.

2. Stop the OMS. Alternatively, you can stop the service later, when the installer warns you that it is still running.

$ emctl stop oms

3. Run the installer

$ ./runInstaller

As a prerequisite, the emkey must be configured properly. Run the following command:

$ emctl config emkey -copy_to_repos_from_file -repos_host <host_fqdn> -repos_port <port_number> -repos_sid <database_sid> -repos_user sysman -emkey_file /u01/app/Oracle/Middleware/wlserver_10.3.5/oms/sysman/config/emkey.ora

During the installation process, the repository database is also upgraded.

4. After the upgrade process, as the second phase, you need to upgrade the agents. Log on to OEM and navigate to Setup -> Manage Cloud Control -> Upgrade Agents. Click "Add", select the agents to upgrade, then submit the job.


Linux-based agents are marked as upgradable right away. However, when you check the non-upgradable agents on the Upgrade Agents page, you will see the Windows-based agents listed, because they need an extra step: the agent software must first be updated from Setup -> Extensibility -> Self Update. Find the new versions of the Microsoft Windows agents under Agent Software and download them. After the download completes (you may select the "Notify me" option for the download job), apply the update. Following these steps, you will find your Windows-based agents in the upgradable agents list.

Finally, if you have upgraded the Linux-based agents as a non-root user, you need to run the root.sh script located in <agent_home>/core/12.1.0.2.0/.

NFS Sharing for Linux

Sharing a directory with other Linux systems using NFS is common and handy. To explain briefly, let me separate the tasks into two parts: server side and client side. I use Oracle Linux 6 as the server and Oracle Linux 5 as the client, though the opposite arrangement goes through the same steps, and these commands should work on any Red Hat based system.

On the server where you will create the share, edit the exports file:

$ vi /etc/exports

Add line(s) to file such as:
/share_dir *(rw,sync,fsid=0) 
and/or
/shares/nfs 192.168.101.11(ro,sync,fsid=0)

Pay attention not to use extra spaces on a line; they can cause problems. Here, instead of "*" - which shares to everyone, to the world - you can use the client's IP or FQDN as shown above. Also, if it's going to be a read-only share, write "ro" instead of "rw". We use the "fsid=0" parameter to avoid the "mount.nfs4: Operation not permitted" error.

The permissions of the directory, "/share_dir" and/or "/shares/nfs", should give others read and execute if it's a read-only share, or read, write and execute if it's writable. Otherwise you will face the "mount.nfs4: Permission denied" error when mounting the share. So alter the permissions of the directory:

$ chmod 757 /share_dir

Restart the related services (if your server is Linux 5, use portmap instead of portreserve):

$ /etc/init.d/portreserve restart
$ /etc/init.d/nfs restart

You can check shares by:

$ showmount -e <server_ip> 
$ exportfs -avr

On the client simply mount the share by:

$ mount -t nfs4 <server_ip_or_host_name>:/ /mnt

Notice that even though the shared directory is "/share_dir" or "/shares/nfs", we mount "/" only; with NFSv4, the export marked with fsid=0 becomes the pseudo-root of the share.
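
If you want the mount to survive reboots, a line like the following in the client's /etc/fstab does the job (a sketch; adjust the mount point and options to your needs):

<server_ip_or_host_name>:/   /mnt   nfs4   defaults   0 0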

Retrieving Index Usage Statistics

Indexes are old friends of every DBA. To check out what your friends are up to in Oracle, the basic way is to enable monitoring for the index:

alter index IX_FOO monitoring usage;

Then you can query V$OBJECT_USAGE to see whether it has been used. After a period of time that is meaningful to you, you can stop monitoring until the next time with:

alter index IX_FOO nomonitoring usage;
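
A quick way to check the outcome is to query V$OBJECT_USAGE directly; note that it only lists indexes owned by the connected user:

select index_name, table_name, monitoring, used, start_monitoring, end_monitoring
from v$object_usage
where index_name = 'IX_FOO';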

However, what is lacking in this way of monitoring is statistics. You can see whether an index is used, but not how many times or how it is used. Fortunately, Oracle provides many V$ views. The query below returns basic statistics about your indexes: how each one is used (range scan, full scan, etc.), how many times, and when it was last used:

select sp.object_name as index_name,
sp.options,
sum(sa.executions) as executions#,
max(timestamp) as last_used
from v$sql_plan sp, v$sqlarea sa
where sa.address = sp.address
and sa.hash_value =sp.hash_value
and sp.operation = 'INDEX'
and object_owner in ('SCOTT')
group by object_name, options
order by 3 desc;

With Exadata, we've started to use fewer indexes and have even dropped some existing ones, because we want to use Exadata features like storage indexes and smart scans instead; we let the cells do their job. On Exadata we prefer to see fast full scans instead of range scans. To see Exadata-specific statistics related to indexes, you can add the columns prefixed with "io_cell" from the V$SQLSTATS or V$SQLAREA views, such as:

select sp.object_name as index_name,
sp.options,
sum(sa.executions) as executions#,
max(timestamp) as last_used,
sum(sa.disk_reads) as disk_reads,
sum(sa.direct_writes) as direct_writes,
sum(sa.io_cell_offload_eligible_bytes)/1024/1024 as offloadable_mb,
sum(sa.io_cell_offload_returned_bytes)/1024/1024 as offloaded_mb,
sum(sa.rows_processed) as rows_processed
from v$sql_plan sp, v$sqlarea sa
where sa.address = sp.address
and sa.hash_value =sp.hash_value
and sp.operation = 'INDEX'
and object_owner in ('SCOTT')
group by object_name, options
order by 3 desc;

There are many other statistics you can find in V$ views, so you can improve the query according to your needs.

ORA-08104 While Rebuilding Index

If a session failed or was cancelled by the user while rebuilding an index online, another user/session that tries to rebuild or drop the same index may face the following error:

ORA-08104: this index object 123456 online is being built or rebuilt

The underlying reason is that the data dictionary has been left in a state indicating that a rebuild is ongoing when in fact it is not.

To solve the issue simply run the following:

DECLARE
RetVal BOOLEAN;
OBJECT_ID BINARY_INTEGER;
BEGIN
-- the object ID reported in the ORA-08104 message
OBJECT_ID := 123456;

RetVal := SYS.DBMS_REPAIR.ONLINE_INDEX_CLEAN (OBJECT_ID);
COMMIT;
END;
/
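
If you need to confirm which index the object ID in the error message belongs to, a quick lookup against DBA_OBJECTS does the job:

select owner, object_name, object_type
from dba_objects
where object_id = 123456;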

Adding Hints with SQL Patch

As a DBA, from time to time you may wish to add a hint to a query sent by a developer or a third-party application. This is possible by using a SQL Patch. Essentially, a SQL Patch is the recommendation provided by the SQL Repair Advisor; when you apply the patch, it directs the optimizer the way the advisor suggested. This technique has been covered on an Oracle blog; let's try it in a different way. Let's say you want to run the query below in parallel:


select count(*) from stage

To add PARALLEL hint to the query with SQL Patch:

begin
     dbms_sqldiag_internal.i_create_patch(
                         sql_text => 'select count(*) from stage',
                         hint_text => 'PARALLEL(stage,2)',
                         name  => 'test_parallel_patch');
end;
/

When you run the query, you will see that it doesn't run in parallel. To confirm, you can run the query below and see that no rows are returned.

select px_maxdop, sql_id, sql_text
from v$sql_monitor
where sql_text = 'select count(*) from stage';

However, if you check the execution plan, you will see the statement "SQL patch "test_parallel_patch" used for this statement" in the Note section. So the SQL Patch is applied, though the hint is not acted on as expected. There is a workaround for the problem. First, display the execution plan with the outline option:

explain plan for select count(*) from stage;
select * from table(dbms_xplan.display(format=>'+OUTLINE'));
...
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "STAGE"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
DB_VERSION('11.2.0.3')
OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Drop the SQL Patch you have created:

begin
  DBMS_SQLDIAG.DROP_SQL_PATCH(name  => 'test_parallel_patch');
end;
/

Recreate it as:

begin
    dbms_sqldiag_internal.i_create_patch(
                         sql_text => 'select count(*) from stage',
                         hint_text => 'PARALLEL(@"SEL$1" "STAGE"@"SEL$1",2)',
                         name  => 'test_parallel_patch');
end;
/

Notice that we replaced the table name with the fully qualified form displayed in the execution plan's Outline Data section. Run the query again and see that it runs in parallel; the execution plan says:

Note
--------
- Degree of Parallelism is 2 because of
- SQL patch "test_parallel_patch" used for this statement    

This situation does not arise if you use a hint without a table name, such as PARALLEL(2) instead of PARALLEL(stage,2), or any hint that does not require a table name.

ORA-02289 While ore.sync() in Oracle R Enterprise

After connecting to your Oracle database through the R console and running the command ore.sync, you may be faced with the following error:

ORA-02289: sequence does not exist

If you trace the queries running while the sync command is in progress, you will see the one below:

SELECT rq$object_seq.NEXTVAL FROM DUAL

The sequence rq$object_seq belongs to the RQSYS schema, and select on it is in fact granted to the RQROLE database role. Yet even if you have granted RQROLE to the user you are trying to use R with, the sequence is not resolvable by name for that user. So the workaround is quite simple: create a synonym for the sequence - a public synonym if you intend to grant RQROLE to more than one user:

CREATE PUBLIC SYNONYM RQ$OBJECT_SEQ FOR RQSYS.RQ$OBJECT_SEQ;

Adding New Disks to an Existing ASM Disk Group

When free disk space starts getting lower and lower, it is time to add disk(s) to feed your database. Most likely, and advisably, your database is running on ASM (Automatic Storage Management), which provides a perfect abstraction over the physical disks. Here are the steps for adding new disks:

1. Add the new disks to the server physically. Assuming your database is running on a server-class system or as a virtual machine, either way it is possible to add disks through the SCSI interface without shutting down the server.

2. Force Linux to discover the new disks without needing a reboot:
$ echo "- - -" > /sys/class/scsi_host/host0/scan

Run the commands below to check; you should see the new disks and log entries about them:
$ fdisk -l
$ ls /dev/sd*
$ tail -f /var/log/messages

3. Create a partition on each newly discovered disk:
$ fdisk /dev/sdx
Press 'p' to list the partitions and see that there are none, then 'n' to create a new one. Answer the questions with p, 1, default and default respectively. Press 'w' to save the configuration, as in the sample session below.
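
For reference, a typical session looks roughly like this (prompts vary slightly between fdisk versions, and the cylinder numbers are placeholders):

$ fdisk /dev/sdx
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-xxxx, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-xxxx, default xxxx): <Enter>
Command (m for help): w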

4. Mark each disk as an ASM disk:
$ /etc/init.d/oracleasm createdisk <disk_name> /dev/sdx1
Marking disk "<disk_name>" as an ASM disk:                   [  OK  ]

5. To add the disks to an ASM disk group, run the asmca tool as the oracle user.


Right click the disk group to be expanded and select "Add Disks"


Select the disk(s) to add from the eligible disks list and click OK. When you return to the first screen, notice that the free space has increased.

Configuring SQL Server Plug-in for OMS 12c

As you might have already noticed, Oracle has released agents for Windows, both 32- and 64-bit, and also the SQL Server plug-in. Today I'll briefly explain installing and configuring the Microsoft SQL Server plug-in for Enterprise Manager Cloud Control 12c. I used to make use of the plug-in on 11g Grid Control and found it a pretty good way of monitoring SQL Server databases, especially by means of the statistics collected by OEM. Besides, being able to view all of your databases - assuming you have both Oracle and SQL Server - together on a single page is very handy.

The first step is deploying an agent to your Windows server. I won't go into it in detail; in a nutshell, you need to install Cygwin and configure the ssh service as a prerequisite, and then deploy the management agent.

Next step is deploying the plug-in:
1. Log on to the OEM console. From the Setup menu navigate to Extensibility --> Self Update and click on Plug-in. Select Microsoft SQLServer Database and download it. If you can't find it in the list, most likely your MOS account is not configured. You can fix that by entering your MOS credentials under Setup --> My Oracle Support --> Set Credentials or Settings and re-running the UPDATE_MY_ORACLE_SUPPORT_COMPONENT_OOB_JOB job. Setting MOS credentials is also necessary to download the agent for Windows.
2. When the download is finished (you can select the notify-me option while submitting the download job), navigate to Setup --> Extensibility --> Plug-ins. Under Databases, find Microsoft SQLServer Database and deploy it to your management server first, then to the agent you have deployed to the Windows server.

After finishing all the deployments, create a SQL login on the SQL Server instance to be monitored. I found the following rights sufficient for monitoring:
  1. "View any database" and "View any definition" on the server
  2. The public role in each database except master and tempdb, since the public server role already covers the necessary access to those. Since the login is added to the model database, databases created from now on will also include this grant.
  3. Execute on the msdb.dbo.sp_help_job stored procedure.
To add the database as a target to OEM, navigate to Setup --> Add Target --> Add Targets Manually and select "Add Non-Host Targets by Specifying Target Monitoring Properties". The target type will be "Microsoft SQL Server"; choose the agent on the Windows server from the list. On the next screen fill in the Target Name, JDBC URL, Database Username (the SQL login you've just created) and Password of Database User. Test the connection and click OK if it succeeds.

 Below, you may find the home screen of SQL Server target.




Oracle Database Network Bandwidth Usage

Even though it is not essentially among a DBA's responsibilities, from time to time you may be asked for the required network bandwidth for your database. It could be for capacity planning of the WAN line between the primary and disaster recovery sites, or for calculating the traffic towards the database in terms of data streams, etc.

Besides various networking tools, it is possible to calculate the amount of data generated by monitoring redo log usage. This is actually the way Oracle suggests for finding the required bandwidth for Data Guard installations.

The metric to calculate bandwidth from is "Redo Generated Per Sec". You can schedule a job that collects the statistic regularly, say hourly. The script below inserts the maximum and average values of the metric into a table together with a timestamp:

insert into bandwidth
(
     select sysdate, max(value) as max_val, avg(value) as avg_val
     from gv$sysmetric_history 
     where metric_id = 2016
);
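
The insert above assumes a simple staging table along these lines (a minimal sketch containing only the columns the script needs):

create table bandwidth
(
     sample_time date,
     max_val     number,
     avg_val     number
);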

The formula to calculate the bandwidth is:

Required bandwidth = ((Redo rate bytes per sec. /  0.7) * 8) / 1,000,000 = bandwidth in Mbps
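
For example, a collected peak redo rate of 2,000,000 bytes per second works out to ((2,000,000 / 0.7) * 8) / 1,000,000 ≈ 22.9 Mbps of required bandwidth.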

Once you have collected enough data, you can figure out the peak and average bandwidth:

select 
   ((max_val/0.7)*8)/1000000 as peak_mbps, 
   ((avg_val/0.7)*8)/1000000 as average_mbps
from bandwidth;

Upgrading Oracle Enterprise Manager 11g Grid Control to 12c Cloud Control with Exadata Plug-ins

As you might know, OEM 12c Cloud Control has been out for a while; even its bundle patch 1 has been released. 12c comes with a fully redesigned user interface and new Exadata features, which is what attracts my attention the most. To take advantage of these new features I've decided to upgrade my 11g box to 12c. What I have is 64-bit Oracle Linux 5.7 with an 11gR2 database and 11g Grid Control with the Exadata plug-ins installed.

There are three upgrade approaches suggested by Oracle. I'll follow the 1-System Upgrade Approach, which is an in-place upgrade with minimal downtime. After upgrading to 12c, I'll discover my database machine through the 12c console and implement the Exadata 12c integration.

Upgrading 11g Grid Control to 12c Cloud Control with 1-System Upgrade Approach:

1. As with every upgrade and patch, the first thing to do is upgrade OPatch. Since we'll patch OMS 11g during the upgrade process, we need to upgrade OPatch under OMS_HOME. Download the latest OPatch from MOS patch 6880880 for release 11.1 and:

$ unzip p6880880_111000_Linux-x86-64.zip -d <OMS_HOME>/

2. As a prerequisite, if you haven't installed OMS 11g PSU 1 (10065631), you should install it; this is actually the prerequisite for installing the preupgrade console. Download the PSU, then:

$ unzip p10065631_111010_Generic.zip
$ <OMS_HOME>/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./10065631

If no conflict is found
$ <OMS_HOME>/bin/emctl stop oms
$ cd 10065631
$ <OMS_HOME>/OPatch/opatch apply
$ export ORACLE_HOME=<OMS_HOME>
$ <OMS_HOME>/bin/rcuJDBCEngine sys/<Password>@<host_name>:<db_sid> JDBC_SCRIPT post_install_script.sql $PWD $ORACLE_HOME
$ <OMS_HOME>/bin/rcuJDBCEngine sys/<Password>@<host_name>:<db_sid> JDBC_SCRIPT post_install_pactht.sql $PWD $ORACLE_HOME
$ <OMS_HOME>/bin/emctl start oms   

3. Now we can install preupgrade patch. You can download it from OTN.

$ unzip p13597150_111010_Generic.zip
$ <OMS_HOME>/bin/emctl stop oms
$ <OMS_HOME>/bin/emctl start oms
$ cd 13597150
$ <OMS_HOME>/OPatch/opatch apply   
$ <OMS_HOME>/bin/rcuJDBCEngine sys/<Password>@<host_name>:<db_sid> JDBC_SCRIPT $ORACLE_HOME/sysman/preupgc/puc_dblink_pkgdef.sql
$ <OMS_HOME>/bin/rcuJDBCEngine sys/<Password>@<host_name>:<db_sid> JDBC_SCRIPT $ORACLE_HOME/sysman/preupgc/puc_dblink_pkgbody.sql
$ <OMS_HOME>/bin/rcuJDBCEngine sysman/<Password>@<host_name>:<db_sid> JDBC_SCRIPT $ORACLE_HOME/sysman/preupgc/pre_upg_console.sql   
$ <OMS_HOME>/bin/emctl start oms   

4. Now we are ready to start upgrading. The upgrade starts with the agents. Log on to the 11g console; on the Deployment tab click the new link "Enterprise Manager 12c Upgrade Console", then "Manage Software". In the Software Location area enter the directory of the new plug-ins and click Validate.

To obtain the plug-ins, you can unzip all three 12c disks that you downloaded from OTN, or you can just extract the plugins directory from the third disk. You also need to put the new version of the agent(s) you have in the same location; you can download those from OTN too.

There are two things to notice here. First, you must not extract the agent software zip file when putting it under the plugins directory. Second, there is no agent for Windows: not yet for the 64-bit version, and never going to be for 32-bit. So if you have a 32-bit Windows agent running, you will not be able to use it with 12c.

When you click Validate, a job will run and analyze your agent(s). Your agent(s) might fall into one of these groups:
  • Completely Upgradable - a non-Windows agent with no plug-ins or only upgradable plug-ins.
  • Missing Plug-Ins Software - a non-Windows agent with non-upgradable plug-ins. My agent falls into this category, since it is a Linux agent with the Exadata plug-ins. Starting with 12c, the Exadata features are built into OEM, so no plug-in is required.
  • Missing Agent Software - the agent software is not available.
  • Not Supported - a Windows agent.
If what you have is a fully green pie chart, move on to the next step. You can also move on with a yellow one that is due to non-upgradable plug-ins. Otherwise, if you have an unsupported Windows agent, either give up on upgrading or uninstall that agent; and if you have non-upgradable plug-ins, remove those plug-ins.

5. After verifying the agent(s), upgrade them by following the steps below, as described in the Oracle documentation. The first three steps each create a job; you can find the job output screen at each step:

a) Deploy and configure the software binaries of Oracle Management Agent 12c


b) Generate a health report and check the readiness of the predeployed Management Agents


c) Verify and sign off the health check report
 
d) Switch over the old Management Agents to the newly deployed ones so that they can communicate with Enterprise Manager Cloud Control


Check the status of upgraded agent:
$ <AGENT12c_HOME>/agent_inst/bin/emctl status agent  

The output should look like:

Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 12.1.0.1.0
OMS Version       : (unknown)
Protocol Version  : 12.1.0.1.0
Agent Home        : <AGENT12c_HOME>/agent_inst
Agent Binaries    : <AGENT12c_HOME>/core/12.1.0.1.0
Agent Process ID  : 19089
Parent Process ID : 18963  
Agent URL         : https://<host>:3872/emd/main/
Repository URL    : https://<host>:4900/empbs/upload
Started at        : 
Started by user   : oracle
Last Reload       : (none)
Last successful upload                       : (none)
Last attempted upload                        : (none)
Total Megabytes of XML files uploaded so far : 0
Number of XML files pending upload           : 69
Size of XML files pending upload(MB)         : 0.07  
Available disk space on upload filesystem    : 21.46%
Collection Status                            : Collections enabled
Last attempted heartbeat to OMS              : 
Last successful heartbeat to OMS             : (none)

---------------------------------------------------------------
Agent is Running and Ready


6. It's time to upgrade the OMS. As prerequisites, copy the emkey from the OMS to the Management Repository and stop the OMS service, then run the graphical installer from the directory where you have unzipped the 12c CDs.

$ <OMS_HOME>/bin/emctl config emkey -copy_to_repos_from_file -repos_host <host_name> -repos_port 1521 -repos_sid <db_sid> -repos_user sysman -emkey_file <OMS_HOME>/sysman/config/emkey.ora
$ <OMS_HOME>/bin/emctl stop oms
$ ./runInstaller

If the prerequisite check finds anything, an error screen will appear; click Yes to let the installer fix the issue.

Some best-practice and performance recommendations may also appear in a warning window; you can implement them after the installation.

Run the root script when prompted.

You may now log on to the new OMS console.

Discovering Exadata Through 12c

1. The first thing to do before you can monitor Exadata through 12c is to install 12c agents on the database nodes. As you know, it is strictly forbidden to install anything on the storage cells, therefore all Exadata components are monitored by the agents on the database nodes. Download the agent software from MOS, bug number 12960610. In the zip file there is a file named CloudControl_Standalone_12.1.0.1.0_LINUX.X64.tar.Z; copy it to the first node under the /tmp/emkit directory.

An important point to pay attention to here is the em.param file. The file is generated by the Oracle Exadata Database Machine Configurator spreadsheet and should be located at /opt/oracle.SupportTools/onecommand/em.param. If it is not there, you need to generate one. Once you have it, add or modify the following lines accordingly:

OMS_LOCATION = MY_GC
EM_BASE = /u01/app/embase

OMS_HOST = oemserver.company.com
OMS_PORT = 4900


EM_USER=oracle
EM_PASSWORD=<password>


Do check the port number by running following command on OEM server:
$ emctl status oms -details
Look for the  "Upload HTTPS Port" in the output.

To install agent, as root:
$ tar xBvpf CloudControl_Standalone_12.1.0.1.0_LINUX.X64.tar.Z
$ cd /tmp/emkit/common
$ (sh setupem.sh -emOnly -emUser <em_user> -emBase <em_base> | tee /tmp/setupem.log) 3>&1 1>&2 2>&3 | tee /tmp/setupem.err

2. After installing the OEM 12c agent for Exadata, it is time to discover the database machine. Before starting, though, do check the Exadata Database Machine schematic file located at /opt/oracle.SupportTools/onecommand/databasemachine.xml. Its version must be at least 502 and it must be well-formed XML (in my case I had problems during the discovery process due to some invalid characters in the file). Instead of trying to fix errors later the hard way, simply fix them at the beginning.

Log on to the 12c console. From the Setup menu, choose Add Target --> Add Targets Manually. Choose "Add Non-Host Targets Using Guided Process (Also Adds Related Targets)", select Oracle Exadata Database Machine from the target types and click "Add Using Guided Discovery". To start the discovery wizard, select "Discover a new Database Machine and its hardware components as targets" and click Discover Targets.

Go through to the end of the wizard; the process takes a little while to complete. At the end, your Exadata is manageable via 12c Cloud Control.


Installing Oracle R Enterprise

Oracle has integrated the popular statistical package R as a component of the database's Advanced Analytics Option. Oracle R Enterprise comes integrated with Big Data too. It can also be installed separately on 11gR2 databases running on 64-bit Oracle Linux 5, and naturally on Exadata. I'll briefly explain how you can install it.

Prerequisites:

1. If not already enabled, enable yum repos for Oracle. You can find out how from Oracle public yum site.
2. Install R

$ yum list  R
$ yum install R.x86_64

3. Patch your database if its version is not 11.2.0.3. Install patch number 12598677 for 11.2.0.1 and 129765544 for 11.2.0.2.

Installing Client on Windows:

1. The next thing to do is install the client tool. Download R for Windows from the R Project site and install it.
2. Download Oracle R Enterprise Client Packages and Client Supporting Packages for Windows from OTN and unzip them.
3. To install the packages, run R from All Programs as administrator and, from the menu, choose Packages -> Install package(s) from local zip files:
   a. Navigate to <unzip_dir>\ore-supporting-windows-1.0\bin\windows\contrib\2.13
   b. Select; DBI_0.2-5.zip, png_0.1-4.zip and ROracle_1.1-1.zip
   c. Click open:
        package 'DBI' successfully unpacked and MD5 sums checked
        package 'png' successfully unpacked and MD5 sums checked
        package 'ROracle' successfully unpacked and MD5 sums checked

   d. Repeat the step for client packages. Navigate to <unzip_dir>\ore-windows-1.0\bin\windows\contrib\2.13
   e. Select; ORE_1.0.zip, OREbase_1.0.zip, OREeda_1.0.zip, OREgraphics_1.0.zip, OREstats_1.0.zip, ORExml_1.0.zip
   f. Click open:
        package 'ORE' successfully unpacked and MD5 sums checked
        package 'OREbase' successfully unpacked and MD5 sums checked
        package 'OREeda' successfully unpacked and MD5 sums checked
        package 'OREgraphics' successfully unpacked and MD5 sums checked
        package 'OREstats' successfully unpacked and MD5 sums checked
        package 'ORExml' successfully unpacked and MD5 sums checked


Installing Server:

1. To install the server, download ore-server-linux-x86-64-1.0.zip from OTN.
2. Extract files
$ unzip ore-server-linux-x86-64-1.0.zip
$ tar xzf ore-server-linux-x86-64-1.0.tar.gz
3. Set the environment variables
$ export R_HOME=/usr/lib64/R
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
4. Install
$ cd ore-server-linux-x86-64-1.0/
$ ./install.sh
5. Enable R for the user:
SQL> grant rqrole to <user_name>;

To connect to your database through R, run the R client:
R> library(ORE)
R> ore.connect(user="<user_name>", sid="<DB_SID>", host="<host_name>", password="<password>", port = 1521)
R> ore.sync()
R> ore.attach()
R> ore.ls()
R> help(Startup)


Core Dumps After Upgrading Grid Infrastructure to 11.2.0.2

After upgrading Grid Infrastructure from 11.2.0.1 to 11.2.0.2 on a single-instance system, you may observe a huge increase in disk usage due to core dump creation. You will see that a directory named like core_123 is created every 10 minutes under <Grid_Home>/log/<host_name>/diskmon.

There is a bug numbered 10283819 in MOS covering the problem, however no workaround is provided there. There are two ways you can handle the situation. The first is upgrading to 11.2.0.3. The other is uninstalling 11.2.0.1 completely - even removing the empty home - and restarting all Oracle services.

How to Move a Datafile from One ASM Disk Group to Another

I'll explain two ways of moving a datafile from one ASM disk group to another. As you might guess, during migration of the datafile you need to take either the whole tablespace or just the datafile offline, and that is the difference between the two approaches I'm about to explain.

Here is the first way:

1. Find the name of the datafile that you're going to migrate:

SQL> SELECT FILE_NAME FROM DBA_DATA_FILES WHERE tablespace_name = 'EXAMPLE';

2. Take the tablespace offline:

SQL> alter tablespace example offline;

3. Copy the file using RMAN:

RMAN> COPY datafile '+RECOVERY/orcl/datafile/example.123.123456789' to '+DATA';

4. Change datafile's name to the name displayed as output of the command above:    

SQL> ALTER DATABASE RENAME FILE '<old_name>' TO '<output file name written at rman output>';

5. Bring back the tablespace online:


SQL> alter tablespace example online;

And here is the second way:

1. Find the name of the datafile that you're going to migrate, just like in the first method:

SQL> SELECT FILE_NAME FROM DBA_DATA_FILES WHERE tablespace_name = 'EXAMPLE';

2. Take the datafile offline instead this time:

SQL> ALTER DATABASE DATAFILE '+RECOVERY/orcl/datafile/example.123.123456789' OFFLINE;

3. Copy the file using RMAN:

RMAN> COPY datafile '+RECOVERY/orcl/datafile/example.123.123456789' to '+DATA';

4. Change datafile's name to the new name displayed as output of the command above to update data dictionary: 

SQL> ALTER DATABASE RENAME FILE '<old_name>' TO '<new_name>';

5. Rename the datafile using RMAN to update ASM. This command is equivalent of "alter database rename file":   

RMAN> SWITCH DATAFILE '<new_name>' TO COPY;

6. Recover  the new datafile:

RMAN> RECOVER DATAFILE '<new_name>';

7. Bring it online

SQL> ALTER DATABASE DATAFILE '<new_name>' ONLINE;

Both ways do the job. Choosing between them comes down to whether the tablespace can be taken offline for a while or not.

How to Install/Update Java (JDK) on Oracle Linux

1. Download JDK RPM from Oracle

2. As root:

$ chmod +x jdk-6u30-linux-i586-rpm.bin
$ ./jdk-6u30-linux-i586-rpm.bin

3. Re-link to new JDK:

$ /usr/sbin/alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_30/bin/java 16030

or alternatively:

$ /usr/sbin/alternatives --config java
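
Afterwards you can verify that the expected version is active:

$ java -version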

Deinstalling Old Oracle Homes After Upgrade

When Oracle Database and Grid Infrastructure (GI) are upgraded the way Oracle recommends, which is an out-of-place upgrade, a new Oracle home is created. A new home means extra disk space, and since no disk has unlimited capacity, after a few (or many) upgrades you need to clean up.

So, assuming we are cleaning up a single-instance system with ASM configured and the homes to be uninstalled are under /u01/app/oracle/product/11.2.0, let's start with the database home:

$ cd /u01/app/oracle/product/11.2.0/dbhome_1/deinstall
$ ./deinstall

The deinstall script discovers the configuration. Notice that no databases configured in this Oracle home are found. Accept the default answers and say yes to continue:

Oracle Home selected for de-install is: /u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y

Deinstall tool does the cleanup smoothly.

The next step is uninstalling the GI home, which is a bit more tricky. According to the Oracle documentation, you first need to change the permissions of the directory as root:

$ cd /u01/app/oracle/product/11.2.0
$ chown -R oracle:oinstall grid
$ chmod -R 775 grid

GI also has a deinstall tool in its home. However, if you run it the way you do for the database, you see the following error, though no error is logged in the log file.

ERROR: The deconfiguration and deinstallation tool has detected runtime errors when checking the existing configuration due to which the tool cannot continue with clean up operation.  Please check the log files for more information.  Rerun the tool after fixing the errors to proceed with the ORACLE_HOME clean up.

Don't give up on the deinstall tool yet. Unzip the 7th of the installation zip files you've downloaded, which is the standalone deinstall tool, and run it from the unzip directory, passing the home parameter:

$ ./deinstall -home /u01/app/oracle/product/11.2.0/grid

Oracle Grid Infrastructure Home is: /u01/app/oracle/product/11.2.0.2/grid
The cluster node(s) on which the Oracle home de-installation will be performed are:null
Oracle Home selected for de-install is: /u01/app/oracle/product/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Skipping Windows and .NET products configuration check
The home being deconfigured is NOT a configured Grid Infrastructure home (/u01/app/oracle/product/11.2.0.2/grid)
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y

Again say yes to cleanup. When it finishes, you gain ~10GB of your disk space back.

Upgrading Single Instance Oracle Database 11.2.0.1 to 11.2.0.2

Today I'll upgrade my single-instance database from 11.2.0.1 to 11.2.0.2. As you know, patch sets are now full installations, so the procedure is like installing for the very first time, except that you choose the upgrade options.

My database takes advantage of ASM, therefore we'll start with upgrading Grid Infrastructure. Unzip the 3rd of the installation zip files to an appropriate directory such as /tmp. Since 11gR2, upgrades must be done as "out-of-place upgrades", so create the directories for the grid and database upgrades.

$ mkdir -p /u01/app/oracle/product/11.2.0.2/grid
$ mkdir -p /u01/app/oracle/product/11.2.0.2/dbhome_1

If you don't point the installer to an empty home, you will get an error.

I will use the graphical interface. To use the UI, you can take a look at my blog post about displaying an X client on Oracle Linux. Run the installer:

$ ./runInstaller
Skip software updates.
 Select upgrade option.

Select language(s)
Leave groups as default

Select the new directory as the software location. After the prerequisite check (hopefully nothing comes up), the installation will begin and you will be prompted to run the root script:

The output of the root script should look like the following:


The Grid Infrastructure upgrade is complete when the script finishes. You can check the new version with:

$ cd /u01/app/oracle/product/11.2.0.2/grid/bin/
$ ./crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.2.0]

The next step is upgrading the database. Unzip the first and second of the installation zip files and run the installer.


Skip software updates.
Select upgrade option.
Select language(s)

Select the edition you  want to install.
Select the new directory as software location.
Leave groups as default
After the prerequisite check (hopefully nothing comes up), the installation will begin and you will be prompted to run the root script:
The output of the root script should look like the following:

When you click "OK" when the script succeeds, Database Upgrade Assistant (DBUA) automatically starts.

 Select the database to upgrade.
 
If Oracle finds any warnings, you are prompted to correct them; you can run the recommended commands as the fix and keep going.
Select an appropriate degree of parallelism and turn off archiving, otherwise a considerable amount of redo will be generated.
You don't want to move the database files, so leave the defaults.
Select the ASM disk group for the FRA if it is not discovered automatically.
A summary report is generated.
The upgrade process starts when you click Finish.
The upgrade may take quite a while, depending on your configuration.

At the end, a results report is generated.

The whole procedure is pretty straightforward and goes smoothly.