Friday, November 28, 2008

Oracle 10g/11g RAC Clusterware Installation Tips

Part II: Oracle RAC High Availability Series


Since I have been building Oracle 10g and 11g RAC clusters for many new clients, I have decided to present a series on Oracle RAC tips and tricks. The previous installment covered design and implementation guidelines and best practices for Oracle RAC. This week, we will cover post-installation checks with the Oracle 10g/11g Cluster Verification Utility, cluvfy. This wonderful tool tells you whether or not your Oracle RAC environment has been installed correctly.

The Cluster Verification Utility for Oracle RAC comes in two forms: the cluvfy utility, which is available after the Clusterware software is installed, and the runcluvfy.sh shell script, which can be used before installation. In the following example, we will use the post-install verification feature of cluvfy to verify our Oracle 10g RAC Clusterware installation.
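For reference, the pre-installation counterpart looks roughly like this (a minimal sketch: the node names racnode1 and racnode2 and the staging directory are examples from my lab, so adjust them for your own cluster):

# run the pre-CRS-install checks from wherever the Clusterware media was unzipped
oracle@racnode1 $ cd /stage/clusterware
oracle@racnode1 $ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose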


Verify Oracle RAC Clusterware Installation using the Clusterware Verification Utility (CLUVFY)


Cluvfy uses stages to check the status of the installation before and after each
phase of the Oracle RAC installation process.

Below is the basic syntax for cluvfy:

oracle@racnode1 $ cluvfy


USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]

USAGE:
cluvfy stage -post crsinst -n <node_list> [-verbose]

<node_list> is the comma separated list of non-domain qualified nodenames on which the test should be conducted. If "all" is specified, then all the nodes in the cluster will be used for verification.

DESCRIPTION:
Performs the appropriate checks on all the nodes in <node_list> after setting up Cluster Ready Services (CRS).




Now we run the post-CRS verification with cluvfy to ensure that our Oracle RAC clusterware has been correctly installed. This is useful since the Oracle 10g/11g RAC Clusterware (CRS) is the most challenging aspect of a RAC implementation.


oracle@racnode1> cluvfy stage -post crsinst -n all -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"
Destination Node Reachable?
------------------------------------ ------------------------
racnode1 yes
racnode2 yes

Result: Node reachability check passed from node "racnode1".


Checking user equivalence...

Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
racnode1 passed
racnode2 passed
Result: User equivalence check passed for user "oracle".

Checking Cluster manager integrity...


Checking CSS daemon...
Node Name Status
------------------------------------ ------------------------
racnode2 running
racnode1 running
Result: Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Node Name
------------------------------------
racnode1
racnode2

Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...

Check: Liveness for "CRS daemon"
Node Name Running
------------------------------------ ------------------------
racnode2 yes
racnode1 yes
Result: Liveness check passed for "CRS daemon".

Checking daemon liveness...

Check: Liveness for "CSS daemon"
Node Name Running
------------------------------------ ------------------------
racnode2 yes
racnode1 yes
Result: Liveness check passed for "CSS daemon".

Checking daemon liveness...

Check: Liveness for "EVM daemon"
Node Name Running
------------------------------------ ------------------------
racnode2 yes
racnode1 yes
Result: Liveness check passed for "EVM daemon".

Liveness of all the daemons
Node Name CRS daemon CSS daemon EVM daemon
------------ ------------------------ ------------------------ ----------
racnode2 yes yes yes
racnode1 yes yes yes

Checking CRS health...

Check: Health of CRS
Node Name CRS OK?
------------------------------------ ------------------------
racnode2 yes
racnode1 yes
Result: CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
racnode2 yes exists passed
racnode1 yes exists passed
Result: Check passed.

Checking existence of ONS node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
racnode2 no exists passed
racnode1 no exists passed
Result: Check passed.

Checking existence of GSD node application
Node Name Required Status Comment
------------ ------------------------ ------------------------ ----------
racnode2 no exists passed
racnode1 no exists passed
Result: Check passed.


Post-check for cluster services setup was successful.


The beauty of the Oracle RAC cluster verification tool is that it can pinpoint trouble spots before, during and after the installation process. It is an essential tool in the Oracle RAC expert's workbench.

Some additional details on the use of the cluvfy utility are available in the following Oracle Support online Metalink notes:

ML #339939.1 - Running Cluster Verification Utility to Diagnose Install Problems
ML #316817.1 - Cluster Verification Utility FAQ


Regards,
Ben

Sunday, November 23, 2008

Design Considerations for Oracle RAC High Availability

I have recently been working with clients to implement and review Oracle 10g RAC environments. One item that has
been of importance is how to design a truly redundant and highly available (HA) infrastructure.

The goal is to avoid any single point of failure (SPOF) while maximizing performance and scalability.

Here are my notes:

Hardware Considerations
Implement 3-4 nodes for the RAC design in the initial phase. By using at least 3-4 nodes for a new RAC cluster, you help protect yourself against the dreaded "split brain" condition. If you lose a single node, you will still have 2-3 nodes available for failover and for servicing current mission-critical applications.

Network Considerations for RAC
1. Implement multiple switches for the interconnect to avoid loss of communication between nodes in the RAC cluster. The danger of using only a single switch is that if that switch fails, the entire RAC cluster will crash and the result will be downtime. I see a lot of clients skimp on this item. By using two switches at both the interconnect (private network) and the storage level (i.e. the SAN fabric or iSCSI layer), you protect yourself against a network failure.

2. Use a fat pipe for the interconnect. Go with at least 4Gb+ Ethernet or, even better, Fibre Channel for the best throughput. Better still, InfiniBand offers robust performance for heavy-duty applications.

3. Implement multiple dual-homed NICs so that the loss of a single network adapter does not take the server off the network. Some platforms also provide this protection in software, such as Sun's IPMP (IP Multipathing, the Sun traffic manager).

Storage Considerations for Oracle 10g/11g RAC
Invest in the best SAN possible with Fibre Channel (FC/FC-AL) for the best performance and support.

Fibre Channel offers the best overall performance and HA for enterprise storage. Another suitable option is iSCSI, which provides many similar benefits.

Other tips for Oracle 10g/11g RAC Design

1. Mirror and protect the Oracle 10g/11g Clusterware files: keep multiple copies of the OCR (Oracle Cluster Registry) and the voting disks. These files have a small footprint, and if you keep only a single copy, guess what happens to your entire RAC cluster if you lose it? The whole cluster fails because the nodes can no longer coordinate. I have seen clients with a single copy of the OCR and voting disk put their RAC environments at great risk. Even if the Unix or storage administrator says the files are mirrored at the storage level, you cannot be too cautious: keep multiple copies.

2. Use multiple ASM Disk Groups.

At least 4-5 ASM disk groups are recommended to split up the various Oracle 10g/11g database files for performance and availability reasons. For example, we can have the following sample ASM configuration (a sketch of the creation commands follows the list):

+FLASHDG for flash recovery area within ASM to store backups and archivelogs
+DATADG for Oracle 10g/11g data files
+INDEXDG for Oracle 10g/11g indexes
+DATADG2 for additional Oracle 10g/11g application database files
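As a rough sketch of how such disk groups are created from the ASM instance (the disk paths, names and redundancy levels below are purely illustrative; your device names and failure group layout will differ):

-- connect to the ASM instance, not the database instance
SQL> CREATE DISKGROUP DATADG NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/raw/raw1'
       FAILGROUP fg2 DISK '/dev/raw/raw2';

SQL> CREATE DISKGROUP FLASHDG EXTERNAL REDUNDANCY
       DISK '/dev/raw/raw3';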

Implement Oracle Data Guard for RAC
While Oracle RAC provides performance, scalability, and protection against the failure of a single node in the cluster, it does not protect against data loss in the event that the RAC database suffers a media failure. This is because all of the RAC cluster nodes share the same database. Many folks incorrectly assume that RAC is a total HA solution. It is not. I recommend that a physical standby database be implemented with RAC environments to protect against data loss and the single point of failure (SPOF) that is the Achilles' heel of RAC. By using Oracle Data Guard with RAC, you gain failover and switchover features to protect against data loss. Downtime is bad enough for an already stressed DBA to worry about, but data loss will get a DBA fired and can potentially cause a company to go out of business. Thus, Data Guard is the perfect complement to RAC for a comprehensive HA solution as part of the Oracle Maximum Availability Architecture (MAA).


Implement and Test RMAN Backup and Recovery with Oracle 10g/11g RAC
Oracle provides the ultimate backup and recovery tool, Recovery Manager (RMAN), for free out of the box to provide essential backup and recovery for complex RAC environments. User-managed hot backups were fine years ago before the RMAN age, but sorry folks, they really do not cut the mustard in modern times. RMAN provides a ton of features, such as block-level media recovery and point-in-time recovery, that are not available with the old user-managed backups. Plus, RMAN can be used to clone RAC databases and build Data Guard standby environments, as well as back up and recover databases stored in ASM disk groups with Oracle 10g and 11g for RAC.
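To give a flavor of the block-level media recovery mentioned above, here is a minimal sketch (the datafile and block numbers are hypothetical; 10g uses the BLOCKRECOVER command, while 11g accepts RECOVER ... BLOCK):

RMAN> BLOCKRECOVER DATAFILE 6 BLOCK 233;   # Oracle 10g syntax
RMAN> RECOVER DATAFILE 6 BLOCK 233;        # Oracle 11g syntax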





Hope these tips and tricks help you with building a reliable and stable RAC environment!

Cheers,
Ben

Thursday, October 9, 2008

My Delayed Review for Open World

Well, dear readers, sorry for the delay. I caught a sinus infection on the return flight home to San Diego, and it took me a week to recover. That, and things have been hectic for me.

This year's OpenWorld conference was amazing. Since it was my very first time both attending and speaking at OpenWorld, I had a great time. I want to thank the hundreds of folks who attended my session. Below is a link to my session if you wish to download it:


But first, some photos from the OpenWorld conference.

Oracle definitely knows how to throw a party, with this year's Appreciation event on Treasure Island in San Francisco. I enjoyed the sushi and, later on, the live performances by Seal, the Psychedelic Furs (which I remember from high school!), UB40 and Alan Jackson.





The massive Oracle Ferris wheel at the entrance to the Appreciation event was incredible.

Even more amazing, since I love bright neon light shows, was the gigantic stage and light show. It reminded me of the city in the sci-fi classic Blade Runner:



The big news of the event was Larry announcing that Oracle was entering the hardware and storage business with the Exadata database server, built in partnership with HP. I have mixed views on this offering, but it should prove interesting! Some details on it:

Here is a photo of the new beast from Oracle and HP:




Of course, no keynote from Larry would be complete without an introduction to his new super trimaran yacht, as Larry has a passion for sailing and racing! Alex Gorbachev, whom I had the pleasure of meeting at the conference, posted a nice little video of Larry's new racing boat.

All that I can say is wow: I would love to have the chance to be on the racing boat and try to see if I can sail that ship.

Now, the best part for me anyway was meeting new colleagues, business contacts and new friends! After all, it's really all about the people and sharing tips, tricks and camaraderie. I was fortunate to meet Tim Hall, Arup Nanda, Tom Kyte, Cary Millsap, Lewis Cunningham, Mark Williams, Greg Rahn and many other top Oracle experts at the conference. Poor Tim Hall got ill on the last day of the conference, and I hope he is feeling better now. I have enjoyed his Oracle-Base website, chock full of Oracle tips and scripts. Not to mention, it was fantastic meeting colleagues from ITC, especially my colleagues from the Buenos Aires, Argentina office.

Tuesday, September 23, 2008

Undocumented Oracle Utilities at OpenWorld

Encore Session for Ben Prusinski's Open World presentation


My session was a huge success yesterday, with over 400 folks in attendance. For those who wanted to attend and were not able to, I am doing an encore to share my expert Oracle DBA tips.

Back by popular demand after Monday's booked to capacity session, IT Convergence's Ben Prusinski will do an encore of session S301396: "Undocumented Oracle DBA Utilities: Black Magic for the Oracle Expert" this Wednesday at 11:30am at the Oracle OpenWorld Conference.

For the location, please check the online monitors for up-to-date information, as the room may change at the last minute. Currently, the session is scheduled for Room 2002/04 in Moscone West.

Also, drop by the IT Convergence booth this week to view live demos of Oracle 10g RAC and ASM, and to visit so we can share tech tips!

Regards,
Ben

Thursday, July 24, 2008

Busy getting ready for Oracle Open World 2008!

Well, good news-

My Oracle Mix presentation idea received enough votes to be among the top new sessions at Oracle OpenWorld 2008 this year. I am excited and hard at work on the workshop presentation for Oracle 11g tips and database internals. More details on my session can be found here.

I look forward to meeting all the top Oracle gurus at OpenWorld this year; it promises great fun and a wonderful chance to meet new friends and business contacts.

Also, besides the amazing array of technical sessions at Oracle OpenWorld 2008, the entertainment promises to be top notch, with Seal and UB40 performing during the conference week.


I am also at work on my second book, Oracle 11g Insider Secrets, which will be completed in 2009. My first Oracle book, on database migrations to Oracle, should also be available from Rampant Press next year. Once I have some time, I will continue to discuss more 11g new features and other tidbits on Oracle technology.

Stay tuned for more to come!
-Ben

Friday, July 4, 2008

Oracle 11g Monitoring and Goodbye Alert Log!

Oracle 11g has introduced many new features for database monitoring and administration. This month, we will examine the new interface for monitoring database events in the Oracle 11g release. In previous releases of the Oracle database, the text-based alert.log file captured the database activity of use to the vigilant DBA: error messages and major database operations that are monitored as part of daily maintenance. With Oracle 11g, this alert.log file is replaced by an XML file called log.xml.

Oracle 11g also introduces a new command-line utility called the ADR Command Interface, or ADRCI, to review the contents of the log.xml file. By default, the XML alert log is located under the Oracle 11g directory ORACLE_BASE/diag/rdbms/SID/SID/alert. To use the ADR tool, the adrci command is issued.


Let's explore an example on how to use the new ADR command tool to view our new Oracle 11g diagnostic log contents.

The default location where the log.xml file is located on the test Oracle 11g Red Hat Linux server in this example:

/u01/app/oracle/diag/rdbms/ora11g/ORA11G/alert


Start a new session of the ADR command tool by issuing the adrci command:
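Here is a rough sketch of such a session on my test box (the ADR home path matches the location shown above; the startup banner and output vary by version and are omitted):

oracle@linux11g $ adrci

adrci> show homes                          # list the ADR homes known to this host
adrci> set home diag/rdbms/ora11g/ORA11G   # point at the database ADR home
adrci> show alert -tail 20                 # last 20 lines of the XML alert log
adrci> exit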







Now let's check out some of the cool new features of the ADR command tool. To display options with the help command from within ADRCI:




For extended help options, we can use the help extended command from within the ADR command tool, as shown below:




We can get a detailed explanation of an option, such as the BEGIN BACKUP command, which is presented in a series of screens much like the old UNIX man pages.





What you should realize at this point is that Oracle 11g has taken monitoring to a whole new level over the 10g release. In fact, think of it as monitoring on steroids. Next, we will view an example of how to look at the log files for database monitoring with Oracle 11g on Red Hat Linux.




Let's scroll down and see more options for the show alert command with ADRCI:





As you can see, there are now options to review the old-style alert.log files as well as the ability to view the log files for the Oracle listener and host. We can also parse the log file to see just recent activity, which is useful for quick monitoring tasks. Let's take a peek at one of the recent log files in the next example.

We choose option 1 and the alert log file is displayed below:




By default the log files are stored under the Linux /tmp directory structure.





To exit and return to the main ADR command screen we use the :q command within the Linux vi editor.


The ADR command tool also lets us view the new health monitor reports and functions.

For instance, if I wish to look at the most recent health monitor execution tasks then I can simply issue the show hm_run command within the ADR command interface.







Also of note with Oracle 11g are several new V$ views that correspond to the ADR and the new Health Monitor features (a sample query follows the list below):

V$HM_CHECK
V$HM_CHECK_PARAM
V$HM_FINDING
V$HM_INFO
V$HM_RECOMMENDATION
V$HM_RUN
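For example, assuming the column names documented for Oracle 11g Release 1, a quick query against a couple of these views might look like this:

-- health checks that a DBA can run manually
SQL> SELECT name, description
       FROM v$hm_check
      WHERE internal_check = 'N';

-- Health Monitor runs recorded so far
SQL> SELECT run_id, name, check_name, status
       FROM v$hm_run;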


Complete syntax and further details on the ADR command tool are available online in the Oracle 11g documentation located here.

We have barely scratched the surface of how to use this potent new monitoring feature of Oracle 11g. Explore the power of the tool, as it promises much-needed functionality and useful instrumentation for busy Oracle DBAs monitoring large and complex Oracle 11g environments.

Regards,
Ben

Wednesday, June 18, 2008

Block Size and Oracle Database Performance

Recently, on the Oracle OTN forums, there has been a major debate over Oracle block size and database performance. Some top Oracle experts, including Jonathan Lewis and Steve Karam, have joined the debate. More details can be found here and also here on the question of whether the Oracle block size determines whether performance can be improved.

In a nutshell, some of the discussion touched on potential bugs in Oracle database code with respect to the use of ASSM and block sizes.

I may or may not create some actual benchmark tests to see how my results compare with those from a financial services customer I worked with a few years ago to resolve poor performance; there, I saw a major improvement by changing the block size from 8k to 16k, but due to NDA restrictions I was unable to post the actual statistics. Also, one interesting item from the block size debate is that we have two schools of thought at odds with each other: one, represented by Jonathan Lewis and Tom Kyte, advocates extensive test cases and posted results to argue a point about Oracle, and the second, represented by Steve Karam and Don Burleson, advocates real-world cases to show how the Oracle database behaves in production.

It really depends on how much time I have.

Personally, I think both camps have some interesting points to consider: testing is important, and actual results from real-world production systems are critical as well. However, benchmark testing against at least a subset of a live production system seems more convincing than a simple test case.
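For anyone who wants to experiment on a test system, here is a minimal sketch of creating a non-default 16k block size tablespace in an 8k-block database (the cache size, file path and sizes are placeholder values):

-- a buffer cache for the non-default block size must exist first
SQL> ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE=BOTH;

SQL> CREATE TABLESPACE data_16k
       DATAFILE '/u01/app/oracle/oradata/test/data_16k_01.dbf' SIZE 500M
       BLOCKSIZE 16K;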

Monday, June 9, 2008

My Favorite New Feature in Oracle 11g- Backup and Recovery

Oracle 11g has tons of cool new features that developers and DBAs should be excited about since its first release last summer. For me, as a DBA and consultant, I like the new Data Recovery Advisor for Oracle 11g backup and recovery with RMAN.

Oracle ACE Arup Nanda has an excellent guide that covers all the details.

The two things I particularly like about the new Data Recovery Advisor in the Oracle 11g release are the automatic repair scripts and the validate database command in RMAN. First, let's take a look at an example of how these work in Oracle 11g Release 1 (11.1).

Let's say that I have problems with my USERS tablespace and the datafile associated with this tablespace is corrupted or a disk media failure loses the data.

Assuming that I have been a prudent (and wisely paranoid) Oracle DBA, I have good, tested backups available to perform the database restore and recovery operation.

Instead of relying on the normal RMAN commands that I, as a seasoned DBA, know by heart to recover the lost datafile for the USERS tablespace while half asleep at, let's say, 2am on a Saturday night when the operator pages me, I can use the Data Recovery Advisor in RMAN to guide me to a quick and accurate recovery of my missing or corrupted data file.

First I simulate a data loss by deleting the data file from the operating system.
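On my Linux test box the simulation is just a matter of removing the file (obviously, never do this anywhere near a real database); the path is the one that shows up in the error messages below:

oracle@linux11g $ rm /u01/app/oracle/oradata/ora11g/users01.dbf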

When I shutdown and startup my test Oracle 11g database, Oracle complains about the missing data files.




And we see the infamous words:



ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/ora11g/users01.dbf'



OK, so how can we quickly find the solution for our database recovery operation with Oracle 11g? Well, thanks to the new Data Recovery Advisor, it is a piece of cake if we have a good backup.

First, let's start a new RMAN session and connect to our target (and to the catalog, if we are using an RMAN catalog). For the sake of a simple example, we just use the target with no catalog for this test.

rman target /

Once we connect to the target database that requires database recovery, we use the new list failure and list failure detail commands with RMAN to find out the root cause of the database failure.
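The commands themselves are short; here is a sketch of what I typed (the failure number in the detail command is hypothetical and comes from the LIST FAILURE output):

RMAN> LIST FAILURE;
RMAN> LIST FAILURE 42 DETAIL;   # 42 = failure ID reported by LIST FAILURE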





OK, so now I know what the problem is. Now let's try out the cool new feature of the Oracle 11g Data Recovery Advisor to help us with a repair script and recovery strategy for our lost data files.

We use the advise failure command new to Oracle 11g for RMAN to give us the details on how to recover from the data loss. Below is an example for Oracle 11g on Red Hat Linux.
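The command itself is simply the following; its output lists the failures, the recommended manual actions and the generated repair script:

RMAN> ADVISE FAILURE;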





Notice that the Data Recovery Advisor tells us that the data files in question are missing or lost, and gives recommended actions. As experienced DBAs, we should know these steps fairly well and can verify them against the advisor. One nice touch, before we implicitly believe the Data Recovery Advisor, is to run the repair failure preview command to simulate the automated repair scripts it has provided.


Let's check out what the Data Recovery Advisor tells us when we run the repair failure preview command from within RMAN for our test Oracle 11g database.
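The preview is a one-liner, and nothing is actually repaired at this point:

RMAN> REPAIR FAILURE PREVIEW;   # show the repair script without running it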




Notice that the Data Recovery Advisor provides the list of steps to perform the database recovery, the impact of the recovery operation on the database, and the location of the repair scripts. One nice feature is that these tasks can be wrapped in the Unix shell scripting language of your choice and scheduled to run via a daily cron job for detection and repair of database failures.

Now that we are satisfied with the output for the recovery scripts from the Data Recovery Advisor, let's go ahead and use the repair scripts above to complete our database restore and recovery.

The repair failure command will use the default repair scripts created by the Data Recovery Advisor to execute these tasks as shown in the following example.
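For reference, the repair step is another one-liner; add NOPROMPT if you want it to run without the confirmation prompt described below:

RMAN> REPAIR FAILURE;
RMAN> REPAIR FAILURE NOPROMPT;   # non-interactive variant, useful in scripts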




First, the Data Recovery Advisor for Oracle 11g in RMAN shows us the tasks to be executed and then asks whether we really, really want to do this. Since we like the choices given, we answer yes, and the advisor carries out the database recovery operation for us right away.




Once the restore and recovery operation has completed successfully, Oracle 11g RMAN asks us if we would like to open the database. Since we were unable to open the database last time due to the missing or lost data files, we choose yes to test whether the restore has been successful.

Since it takes a while to open my test Oracle 11g database, I get a cup of coffee and come back.




Now that RMAN has completed the database recovery, let's test whether we can access the missing data in the USERS and EXAMPLE tablespaces that were lost earlier.

But first, I would like to illustrate the other way-cool feature of Oracle 11g backup and recovery with RMAN: the validate database command.

In the past, when an Oracle DBA had to check for and repair corrupt blocks in releases before 11g, the dbverify utility was the tool of choice, along with the block recover feature in Oracle RMAN.
Not only was it time consuming to track down which corrupt blocks belonged to which data files and tablespaces, it was a pain in the rear to deal with.

Well, now with 11g RMAN we have the validate command. It is fantastic!

Let's view a few examples.

To check the entire database for block corruption, we have the validate database command.

RMAN will cycle through the entire database as shown in the example below:
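The command is a one-liner, and any corruption it finds is also recorded in the V$DATABASE_BLOCK_CORRUPTION view (a rough sketch; the full per-datafile output is quite verbose):

RMAN> VALIDATE DATABASE;

-- afterwards, any corrupt blocks found are listed here
SQL> SELECT * FROM v$database_block_corruption;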





My advice for Oracle DBAs out there is to run a validate command after each database recovery operation to verify that no block corruption exists. It also helps to do this on a regular basis just as part of good preventive maintenance and sanity checks.

Now let's run the validate tablespace command to check for block corruption in the recently restored EXAMPLE and USERS tablespaces:
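Starting with the USERS tablespace (a minimal sketch of the command; the per-file output is omitted):

RMAN> VALIDATE TABLESPACE users;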



And for our EXAMPLE tablespace:




We also have the validate datafile command to check for block corruption with a specific data file. For example, to check for corruption with data file 1 which happens to belong to the SYSTEM tablespace:
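Again, a one-line sketch:

RMAN> VALIDATE DATAFILE 1;   # data file 1 belongs to the SYSTEM tablespace here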





I like this a lot better than the old dbverify utility which, frankly, was a royal pain to deal with.

Now that we have covered the new Oracle 11g RMAN backup and recovery features around the Data Recovery Advisor, let's verify that our data has been restored by logging on to the database and checking the tables in the USERS and EXAMPLE tablespaces.




Now that our data has been recovered, we can be satisfied with the power and ease that the new Oracle 11g Data Recovery Advisor affords us busy Oracle DBAs.

Cheers,
Ben

Saturday, May 24, 2008

How to Secure Oracle 10g/11g Enterprise Manager for SSL

Recently a poster on the Oracle Forums (forums.oracle.com) asked a question on how to secure Oracle 10g Database Control for Enterprise Manager (OEM) with SSL. I had to do this for some military customers when we migrated to secured networks for Oracle 10g.


According to the Oracle documentation and various tips from Oracle Metalink, Oracle uses wallets to secure OEM via the SSL layer.

The emctl secure commands provide the tools to set up wallet-based security with SSL to lock down the Oracle 10g Database Control or Grid Control environment.

Reference
Oracle® Enterprise Manager Advanced Configuration
10g Release 2 (10.2)

The emctl secure agent utility performs the following actions:

* Obtains an Oracle Wallet from the Management Service that contains a unique
digital certificate for the Management Agent. This certificate is required in order for
the Management Agent to conduct SSL communication with the secure
Management Service.

* Obtains an Agent Key for the Management Agent that is registered with the
Management Service.

* Configures the Management Agent so it is available on your network over HTTPS
and so it uses the Management Service HTTPS upload URL for all its
communication with the Management Service.

To enable Enterprise Manager Framework Security for the Management Agent:

1. Ensure that your Management Service and the Management Repository are up
and running.
2. Change directory to the following directory:


AGENT_HOME/bin (UNIX)
AGENT_HOME\bin (Windows)


3. Stop the Management Agent:


PROMPT> ./emctl stop agent


4. Enter the following command:


PROMPT> ./emctl secure agent (UNIX)
PROMPT> emctl secure agent (Windows)


The emctl secure agent utility prompts you for the Agent Registration Password, authenticates the password against the Management Service, and reconfigures the Management Agent to use Enterprise Manager Framework Security.

Note:
Alternatively, you can enter the command all on one line, but if you enter the
command on one line, the password you enter will be displayed on the screen as
you type:


PROMPT> ./emctl secure agent agent_registration_pwd (UNIX)
PROMPT> emctl secure agent agent_registration_pwd (Windows)


Here is sample output for the various options for emctl secure command:


C:\>emctl secure
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0

Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.

Usage :
secure oms -sysman_pwd <pwd> -reg_pwd <pwd> [-host <hostname>] [-reset] [-secure_port <port>]
secure agent <registration password>
secure em
secure dbconsole <sysman password> [<registration password>] [<hostname>]
secure setpwd <sysman password> <new registration password>
secure status [oms url]
secure lock | unlock

In this case, I have Oracle 10g on Windows XP with Database Control for EM.
Here is an example:

C:\>emctl secure dbconsole oracle oracle karma

Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.

http://karma:5500/em/console/aboutApplication

Checking Repository... Done.
Checking Repository for an existing Enterprise Manager Root Key... Done.
Generating Enterprise Manager Root Key (this takes a minute)... Done.
Fetching Root Certificate from the Repository... Done.
Generating Registration Password Verifier in the Repository... Done.
Updating HTTPS port in emoms.properties file... Done.
Generating Java Keystore...Certificate was added to keystore
Certificate reply was installed in keystore
Done.
Securing OMS ... Done.
Generating Oracle Wallet Password for Agent.... Done.
Generating wallet for Agent ... Done.
Copying the wallet for agent use... Done.
Storing agent key in repository... Done.
Storing agent key for agent ... Done.
Configuring Agent...
Configuring Agent for HTTPS... Done.
EMD_URL set in C:\oracle\product\10.2.0\db_1/karma_orcl/sysman/config/emd.properties
Configuring Agent ... Done.
Configuring Key store.. Done.

Saturday, May 17, 2008

Virtualization with Oracle 11gR1 and Oracle VM

I am excited about the Oracle focus on virtualization for database servers.

As a consultant, I like using virtual servers as test servers on my laptop, which comes in handy for demos and testing.

Here are details from the Oracle Virtual Site:



Installing and Configuring Oracle Enterprise Linux 5 with Oracle Database 11g Release 1 as a Paravirtualized Machine (PVM) on an Oracle VM Server

I encourage fellow Oracle professionals to explore the future with the Oracle virtual server technology!

Wednesday, May 14, 2008

Support for third party database migrations to Oracle Using SQL Developer

As you may or may not be aware, for the past year or so I have been busy writing a book on database migrations to Oracle 10g/11g using the freely available migration tools from Oracle. Originally these tools took the form of the Oracle Migration Workbench, or OMWB for short. In 2006, Oracle released a development environment called SQL Developer.

Oracle has since added migration tools to its flagship development product, SQL Developer (called Raptor in its first beta), which will be the focus for migration and development tasks going forward.

However, there are a few gotchas with the SQL Developer tool in terms of third-party migrations. As of yet, after discussions with product management on the SQL Developer team, SQL Developer has no support for IBM DB2 UDB or Informix migrations to Oracle. So, in spite of what Oracle may say, you still need to use the Oracle Migration Workbench if you wish to migrate off of IBM DB2 UDB or Informix to Oracle. After a brief chat with Sue Harper, who is the product manager at Oracle for SQL Developer, she was kind enough to let me know that support for Informix and IBM DB2 migrations is coming in a future SQL Developer release.

Saturday, April 19, 2008

The function of the High Water Mark in Oracle

Recently a potential client in San Diego quizzed me on the high water mark and performance tuning in Oracle.

Basically, the high water mark (HWM) comes into play with tables that have heavy insert, update and delete activity. The HWM marks the boundary in each segment between the blocks that have held data and those that never have, so it reflects how much of the segment has ever been used. What this means is that if I have a table A into which 100,000 rows are inserted and I then delete 10,000 of those rows, the HWM does not move back down; how the freed space below it gets reused depends on settings such as PCT_FREE and PCT_USED and the free lists available for that table.

Things changed with Oracle 9i, which added automatic segment space management (ASSM). Prior to Oracle 9i, all tables used manual segment space management, which meant that a DBA had to play around with parameters such as PCT_FREE and PCT_USED to get the balance right for the performance of each table. With ASSM, bitmaps manage the space within the segment, performance improves, and management is far easier than tweaking all of the previous parameters.

Which leads us to the main issue with the HWM and Oracle performance. After heavy DELETE activity, full scans of the table can still take a long time, because the HWM has not been reset and Oracle still reads all of the blocks below it. To clean this up and improve performance, the HWM has to be reset.

Now there are a couple of ways to do this: one is to perform a TRUNCATE operation on the table, which resets the HWM and runs a lot faster than a DELETE operation (at the cost of removing all the rows).

A second option is to reorganize the table, using export/import in the Oracle 8i/9i releases or the Data Pump (expdp/impdp) utilities available with Oracle 10g and 11g, to reset the HWM.
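As a rough sketch of that second option with Data Pump (the scott/tiger schema, the EMP table and the dpump_dir directory object are placeholders; test this on a non-production copy first):

# export the table
$ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

-- truncate the table to reset the HWM (this removes all rows!)
SQL> TRUNCATE TABLE scott.emp;

# load the rows back in below the new, lower HWM
$ impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp TABLE_EXISTS_ACTION=APPEND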

Lutz Hartman has an excellent discussion on the new high watermark settings for Oracle 10g and 11g:

To quote Oracle ACE Lutz Hartman on the concept of the HWM for Oracle:

"Starting with ORACLE 10gR1 we can use a new feature for adjusting the high watermark, it is called segment shrinking and is only possible for segments which use ASSM, in other words, which are located in tablespaces which use Automatic Segement Space Management."

Which brings us to two new terms: with ASSM there are now high high water mark (HHWM) and low high water mark (LHWM) values for the segments of tables and other objects.

To further enhance performance, it is now possible with Oracle 10g and 11g to shrink a table's segments.

The nice thing about Oracle 10g and 11g is that we no longer have to export, import and move tables around to reset the HWM for performance reasons. Instead, we can use the ALTER TABLE ... SHRINK commands after enabling row movement for the table, and all of this can be done online with no downtime for the table, as shown in the sketch below!
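Here is a minimal sketch with a hypothetical SCOTT.EMP table sitting in an ASSM tablespace:

-- row movement must be enabled before a shrink is allowed
SQL> ALTER TABLE scott.emp ENABLE ROW MOVEMENT;

-- compact the segment, including dependent indexes, and reset the HWM online
SQL> ALTER TABLE scott.emp SHRINK SPACE CASCADE;

-- or do it in two steps: compact first, then adjust the HWM in a short final step
SQL> ALTER TABLE scott.emp SHRINK SPACE COMPACT;
SQL> ALTER TABLE scott.emp SHRINK SPACE;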