Thursday, March 31, 2022

Exachk to the rescue!

So, do you have a large and complex Oracle Exadata environment and need to perform a health check or resolve a tricky issue? One thing to keep in mind is that Oracle Support may ask you to run some diagnostics to provide logs and related details. This is where the exachk utility comes to the rescue!

What is exachk? According to the Oracle Exadata documentation, this nifty utility provides "a lightweight and non-intrusive health check framework for the Oracle stack of software and hardware components."

As an extra bonus, the exachk utility works not only on Oracle Exadata engineered systems, but can also diagnose issues on other Oracle engineered systems such as Exalogic and the Oracle Database Appliance (ODA).

Recently, we had a power issue at work that required me to troubleshoot why Oracle Clusterware (CRS) and ASM failed to come back online after the power reboot. I tried to start the CRS using standard Oracle 19c RAC commands like crsctl and srvctl with no success, as it complained about storage-related issues on the Exadata Cloud@Customer environment. Unfortunately, due to the setup, we DBAs are not given access to the grid disks and cell storage servers.

So how do you use exachk? It is really simple, actually. By default you run it as the grid user, and by default it runs all of the options to perform a full collection across the Exadata ecosystem. Running exachk -a executes all of the diagnostic collection tasks. It then gathers the health check details for the Exadata system into a series of tar files to upload to Oracle Support.
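As a minimal sketch of the run described above (not from the original post; it assumes exachk is installed and on the grid user's PATH):

```shell
# Hedged sketch: kick off a full exachk collection as the grid user.
# Assumes exachk is installed and on the PATH; the -a flag runs all
# checks (the full collection). The resulting archive is what you
# upload to Oracle Support.
run_exachk() {
    if command -v exachk >/dev/null 2>&1; then
        exachk -a    # run all diagnostic collection tasks
    else
        echo "exachk not found on PATH"
        return 1
    fi
}
```

Usage: as the grid user, source the function and call run_exachk, then collect the generated archive from the output directory exachk reports at the end of its run.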

So the next time you face a challenging Exadata or Oracle engineered system issue, be sure to use the powerful exachk utility!

References and Further Reading

Oracle Exadata Database Machine EXAchk (Doc ID 1070954.1)

Oracle Exachk Quick Start Guide

Thursday, February 4, 2021

Happy 2021: A New Age

 Hello my dear readers,

It has been a long time since I posted, but ya know, life sometimes gets in the way. Lots of moving, work, travel, and projects, and the past few years have been life changing on so many levels. This will be a non-tech post, but the next ones will have some new database content!

I learned Exadata and Oracle Cloud! After many years of hearing about these amazing platforms and attending sessions at past Oracle conferences, last year I was able to get hands-on experience working on several large Exadata X4-8 systems. It has been a lot of fun learning and using Oracle 19c features like multitenant pluggable databases and Oracle Cloud Infrastructure (OCI), and setting up Oracle Enterprise Manager Cloud Control 13c as well. So fun!

More to come my fellow data professionals and colleagues!


Tuesday, January 19, 2016

ASM Diskgroup Analysis Tip

Recently I have been performing multiple Oracle 12c RAC setups and digging deep into ASM internals. Besides querying the ASM environment via the V$ASM_DISK and V$ASM_DISKGROUP dynamic performance views, some lesser-known command-line tools provide detailed troubleshooting assistance for solving difficult ASM issues in your Oracle environments.
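Before reaching for the command-line tools, a quick capacity check against V$ASM_DISKGROUP can look like this sketch (illustrative, not from the original post; it assumes sqlplus is on the PATH and OS authentication to the local ASM instance with SYSASM privileges):

```shell
# Hypothetical sketch: report diskgroup state and free space from
# V$ASM_DISKGROUP. Assumes sqlplus is on the PATH and "/ as sysasm"
# OS authentication works against the local ASM instance.
asm_diskgroup_report() {
    if ! command -v sqlplus >/dev/null 2>&1; then
        echo "sqlplus not found on PATH"
        return 1
    fi
    sqlplus -s / as sysasm <<'SQL'
SET LINESIZE 120 PAGESIZE 100
COLUMN name FORMAT A24
SELECT name, state, type, total_mb, free_mb,
       ROUND(100 * free_mb / NULLIF(total_mb, 0), 1) AS pct_free
FROM   v$asm_diskgroup
ORDER  BY name;
SQL
}
```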

As a reference point for today's discussion, I found a very useful Oracle support note that explains three utilities (kfod, kfed, and amdu) for analyzing Oracle ASM environments:

Kfod is a quick-and-dirty tool to examine the current ASM configuration for ownership, sizes, and device mappings. Ensure that $GRID_HOME/bin is in your local path so you can execute the tool.

To get help with kfod use the help=y option:
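A sketch of invoking it from the Grid home (the default path below is an assumption; set GRID_HOME to your actual Grid Infrastructure home):

```shell
# Hedged sketch: print kfod usage via help=y, then list the disks
# kfod can see. GRID_HOME is an assumed variable; the fallback path
# is hypothetical - substitute your Grid Infrastructure home.
kfod_usage() {
    local kfod="${GRID_HOME:-/u01/app/19.0.0/grid}/bin/kfod"
    if [ ! -x "$kfod" ]; then
        echo "kfod not found at $kfod"
        return 1
    fi
    "$kfod" help=y       # show all kfod options
    "$kfod" disks=all    # example: list all candidate and member disks
}
```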

Thursday, November 5, 2015

Oracle ASM Disks and Multipath Issue - Solution

So today I needed to create some new ASM disks on Hitachi enterprise storage with HDLM (Hitachi Dynamic Link Manager) multipathing, in an Oracle Linux 6.6 environment on a two-node Oracle 12c RAC cluster. Unfortunately, when I tried to create the new ASM disks using ASMLib and the oracleasm createdisk command, it failed to create the disks! I thought this odd, because I had used the fdisk command on Linux to add a new partition on each of the LUNs presented to the OS for ASM. Permissions were correct, and fdisk showed that the partitions were created on the newly provisioned LUNs.

I found a blog post that mentions a similar issue, and a MOS note that covers the issue and a workaround solution:

ASMLib: oracleasm createdisk command fails: Device '/dev/emcpowera1' is not a partition [Failed] (Doc ID 469163.1)

Now while this references EMC storage, the same issue and solution applies to other multipathing software and storage arrays such as Hitachi storage with HDLM in my case.

Instead of using the oracleasm createdisk command, I had to use an internal tool called asmtool, which is usually called behind the scenes by the Oracle tools.

# /usr/sbin/asmtool -C -l /dev/oracleasm -n VOL1 -s /dev/emcpowera1 -a force=yes
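After creating the disk, a quick verification sketch along these lines can confirm it is visible (assuming the standard ASMLib location /usr/sbin/oracleasm; run as root on each node of the cluster):

```shell
# Hedged sketch: rescan ASMLib disks and confirm the new disk label
# shows up. Assumes ASMLib is installed at /usr/sbin/oracleasm;
# run as root on every node of the RAC cluster.
verify_asmlib_disk() {
    local disk="${1:-VOL1}"
    if [ ! -x /usr/sbin/oracleasm ]; then
        echo "oracleasm not found"
        return 1
    fi
    /usr/sbin/oracleasm scandisks                 # pick up newly created disks
    /usr/sbin/oracleasm listdisks | grep -w "$disk"
}
```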

This saved me hours of frustration this week, when I had to build new ASM disks and do a migration for a customer solution in a very short time period. Hopefully it will also save you hours of grief.

Hitachi storage is a bit new to me, but since I have years of EMC storage administration experience, it is not too steep a learning curve, and it is quite a bit of fun learning new things!

Saturday, October 24, 2015

On Solving the Right Problem

Dear Readers,

It has been quite a while since I wrote on the blog due to some past things that occupied my personal time that had to be addressed. Recently I began to solve new performance related problems for a customer. Here is the scenario:

Oracle 12c RAC
Large enterprise SAN
Converged infrastructure

The target goal is to meet a minimum performance level (SLA) for storage and system performance.

The customer is only achieving low storage performance in terms of overall IOPS with their two-node Oracle 12c RAC configuration.

So, with little information to go by from past experience, I asked how they measure these performance figures.

Enter the Benchmark Tools!

The customer is using a third-party performance tool to measure CPU, system, and storage figures. So what is this tool, you might ask? Well, hold on a minute; we will get to that question in a short while. My first DBA spidey sense was to get the data from the horse's mouth, so to speak - that is, from Oracle! I logged into the Oracle 12c RAC cluster, pulled the recent AWR reports from the cluster, and noticed that overall there were no serious performance issues!

It turns out that the performance tool is reporting different numbers than the Oracle database!

Hmm, well that sure is very odd! I reviewed the infrastructure between the customer sites and, lo and behold, found that the configurations are different! So Oracle per se is not guilty; rather, there is a more fundamental issue of apples-to-oranges comparisons. Which leads to my next thought: solve the right problem.



I know this seems obvious to most of us Oracle DBA types, right? Well, you'd be surprised at how many customers spin their wheels attempting to solve a performance issue by missing the forest for the trees. Instead of immediately jumping to conclusions, take a deep breath and step back to look at the big picture. The following comes to mind:

1. Storage configuration - disks, HBAs, multipath configurations
2. Firmware and patch levels for infrastructure - servers, SAN, networks
3. Review OS configurations and releases
4. Run basic tests to collect data points- Oracle AWR, sar, vmstat, et al.

Stay tuned for my next series of blog posts on how exactly to solve these types of problems. Oh yeah, and get ready, set, and go for Oracle OpenWorld this coming week!!


Saturday, October 4, 2014

Oracle OpenWorld 2014

Dear Readers,

This year's Oracle OpenWorld 2014 was a lot of learning and meeting old and new friends.
What a fun and great conference!

The big highlights for me were:

Oracle Database 12c In-Memory - in sight and everywhere, this was a key mantra at the event.

Big Data SQL - another new thing that intrigues me: using SQL to manage Big Data applications with Hadoop, NoSQL, and other technologies.

Engineered systems - the Oracle demo grounds featured Exadata, Exalogic, the Big Data Appliance (BDA), and more, front and center.

Last but not least, Oracle Public Cloud was a centerpiece of the keynotes and sessions. Oracle is now a serious contender not only to IBM, HP, Cisco, and EMC, but also to Amazon Web Services (AWS) with its public cloud offerings.

Last year, Oracle introduced hands-on labs (HOL) to allow participants to learn new technologies first hand, aside from the standard product marketing and general conference sessions.

This year the hands-on labs (HOL) were greatly expanded, and I spent the majority of my time attending a dozen or so of these excellent sessions, learning first hand how to work with Oracle Public Cloud, virtualization, big data, and NoSQL.

Prior to OOW, I attended the two-day Oracle ACE Director summit and had a great experience learning first hand from Oracle's Thomas Kurian, EVP of Product Development, as well as many product and engineering folks from Oracle.

Last but certainly not least, I presented at the Delphix booth on performing Oracle E-Business Suite upgrades with Delphix, and had great attendance. Next month I am presenting at the BGOUG conference in Sofia, Bulgaria, which will be an exciting event.

Wednesday, September 24, 2014

OOW 2014 events

Dear readers,

This year promises to be an exciting Oracle OpenWorld.

I will be presenting at the Delphix booth on using Delphix to perform upgrades with Oracle E-Business Suite. Hope to see you all there! We will also have a Clone Attack with demo software available to learn about Delphix. In addition, Dbvisit will be presenting RepAttack to demo how to seamlessly replicate Oracle environments with the Dbvisit software.

I am headed to the Oracle ACE Director briefing, so stay tuned, and I hope to see everyone there.