How to Monitor Oracle Database Performance

An Oracle database provides several v$ views for querying information about the database instance, including statistical information that can be used for monitoring and problem analysis. Rene Nyffenegger wrote a nice Summary on Oracle's v$ views that gives an overview of all available views.

The following illustration shows a dashboard with key metrics that we pull from an Oracle database when doing performance management with our clients:

dynaTrace Dashboard showing key performance metrics queried from an Oracle Database via v$ tables

In this article I provide a quick overview of what these metrics tell us and how to get them. If you run dynaTrace in your environment, you can also download the Oracle Monitor Plugin from our Community Portal.

I also appreciate feedback from all Oracle experts out there. I am sure there are many more metrics that are interesting and important for performance analysis, so please share them in the comments.

What metrics are we interested in?
Every time you start up an Oracle instance, the system allocates memory for its System Global Area (SGA) (Read more on Oracle SGA Concepts). A very interesting area is the set of internal data buffers. These buffers hold data in memory and are searched first when a request comes in, before data is fetched from disk. Buffering obviously saves I/O and speeds up database requests. There is a great article that explains the different buffer pools, how they can be configured, and gives recommendations on sizing: Using the dynamic SGA Features of Oracle 9i.
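
To get a feel for how much memory the buffer cache and the other SGA components currently use, you can query v$sgastat. The following is only a minimal Java sketch: the class and method names are mine, and it assumes an already established JDBC connection with access to the v$ views:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SgaStats {
        // Lists the size of each SGA component (buffer cache, shared pool, ...).
        // Assumes an existing JDBC connection with SELECT privileges on v$sgastat.
        static void printSgaComponentSizes(Connection con) throws SQLException {
            String sql = "SELECT pool, name, bytes FROM v$sgastat ORDER BY bytes DESC";
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%-12s %-30s %,15d bytes%n",
                            rs.getString("pool"), rs.getString("name"), rs.getLong("bytes"));
                }
            }
        }
    }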

Buffer, Execution and I/O metrics
There are several great blogs that explain metrics such as Buffer Cache Hit Ratio, In Memory Sort Ratio and Parse to Execute Ratio, or that talk about I/O metrics and how to tune your database based on metrics retrieved from v$buffer_pool_statistics.

The metrics that we therefore query are Buffer Cache Hit Ratio, In Memory Sort Ratio, Parse to Execute Ratio, SQL Area Get Ratio, Buffer Busy Wait, Free Buffer Waits, Write Complete Wait, Consistent Gets, DB Block Gets and Physical Reads. The sketch below shows how one of these ratios can be derived.
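
Most of these ratios are derived from raw counters in v$sysstat. As a hedged example, here is how the Buffer Cache Hit Ratio could be computed using the commonly cited formula 1 - physical reads / (db block gets + consistent gets); the Java wrapper and method name are purely illustrative and assume an existing JDBC connection:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    public class BufferMetrics {
        // Reads the three relevant counters from v$sysstat and derives the
        // Buffer Cache Hit Ratio: 1 - physical reads / (db block gets + consistent gets).
        static double bufferCacheHitRatio(Connection con) throws SQLException {
            String sql = "SELECT name, value FROM v$sysstat "
                       + "WHERE name IN ('physical reads', 'db block gets', 'consistent gets')";
            Map<String, Long> stats = new HashMap<>();
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    stats.put(rs.getString("name"), rs.getLong("value"));
                }
            }
            double logicalReads = stats.get("db block gets") + stats.get("consistent gets");
            return 1.0 - stats.get("physical reads") / logicalReads;
        }
    }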

Connection and User Count
The number of connections and user sessions are key system and performance indicators. The number of connections is usually configurable via the connection pool settings on the application server. The more connections you allow, the more resources you need on the database; on the other hand, you can serve more concurrent users that request data from the database. The blog v$license view tips gives a good overview of the values exposed by this view.

The metrics that we query from this view are Maximum Concurrent User Sessions, Current Concurrent User Sessions, Highest Concurrent User Sessions and Maximum Named Users.
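
All four values come from a single row in v$license. A minimal sketch, again assuming an existing JDBC connection (the class and method names are just for illustration):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SessionMetrics {
        // Reads the session/user counters from v$license: the configured session limit,
        // the current and highest observed concurrent sessions, and the named user limit.
        static void printSessionCounts(Connection con) throws SQLException {
            String sql = "SELECT sessions_max, sessions_current, sessions_highwater, users_max "
                       + "FROM v$license";
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                if (rs.next()) {
                    System.out.println("Maximum concurrent user sessions: " + rs.getLong("sessions_max"));
                    System.out.println("Current concurrent user sessions: " + rs.getLong("sessions_current"));
                    System.out.println("Highest concurrent user sessions: " + rs.getLong("sessions_highwater"));
                    System.out.println("Maximum named users:              " + rs.getLong("users_max"));
                }
            }
        }
    }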

Connection Time
In addition to the metrics that we query from the system tables, we also monitor how long it takes to actually establish a physical connection to the database. Our monitors use the Oracle JDBC driver and measure the time it takes to get a connection. This metric gives us a good indication of how well the database can handle new incoming connection requests.
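
A simple way to measure this is to time the call that opens a new physical connection. The sketch below is illustrative only: the connect string and credentials are placeholders, and it assumes the Oracle JDBC driver (ojdbc) is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ConnectionTimeMonitor {
        // Measures how long it takes to establish a new physical connection.
        // URL, user and password are placeholders - replace them with your instance's values.
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            long start = System.nanoTime();
            try (Connection con = DriverManager.getConnection(url, "monitor", "secret")) {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Connection established in " + elapsedMs + " ms");
            }
        }
    }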

How to query them
Querying these values is pretty straightforward. Our monitor is implemented in Java. We load the Oracle JDBC Driver and establish a connection to the Oracle database instance when we initially launch the monitor. Check out the following blog for an example: How To connect to Oracle with JDBC. Then we simply execute the SQL statements that retrieve the actual measures on a scheduled interval (e.g. every 30 seconds).
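
Putting these pieces together, a stripped-down monitor could look roughly like this. This is not our actual monitor implementation, just a sketch under assumptions: the connect string and credentials are placeholders, and the query is one example of the statements mentioned above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class OracleMonitor {
        public static void main(String[] args) throws Exception {
            // Optional with JDBC 4.0+ drivers, shown here because the monitor
            // explicitly loads the Oracle JDBC driver at launch.
            Class.forName("oracle.jdbc.OracleDriver");

            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";   // placeholder connect string
            Connection con = DriverManager.getConnection(url, "monitor", "secret");

            // Re-run the metric queries on a fixed schedule, e.g. every 30 seconds.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try (Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT name, value FROM v$sysstat "
                           + "WHERE name IN ('physical reads', 'db block gets', 'consistent gets')")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " = " + rs.getLong("value"));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 0, 30, TimeUnit.SECONDS);
        }
    }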

Most of the statements we execute are explained in the blogs I linked to earlier in this article, e.g. for Buffer Cache Hit Ratio, In Memory Sort Ratio and Parse to Execute Ratio.

How to run a monitor and how to read the values
We run our monitors on the dynaTrace APM Platform. This means that these monitors are executed on a scheduled interval, e.g. every 30 seconds. A monitor itself is a Java OSGi plugin with an execute method that queries the metrics from the database views and returns them. The monitored values can then be displayed in a dashboard (as seen in the screenshot at the top) or can trigger alerts, e.g. notify the admins in case the Buffer Cache Hit Ratio drops below a defined threshold.
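
The dynaTrace plugin SDK defines its own interfaces, so the following is only a schematic illustration of the idea: an execute() method that queries the v$ views, derives a named measure and returns it, so that a dashboard or alerting layer can process it. All class, method and metric key names as well as the threshold value are hypothetical:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    // Schematic only - the real dynaTrace plugin SDK defines its own interfaces.
    public class OracleMetricsMonitor {
        private final Connection con;   // established once when the monitor is launched

        OracleMetricsMonitor(Connection con) {
            this.con = con;
        }

        // Queries v$sysstat, derives the In Memory Sort Ratio and returns all measures.
        Map<String, Double> execute() throws SQLException {
            Map<String, Double> measures = new HashMap<>();
            String sql = "SELECT name, value FROM v$sysstat "
                       + "WHERE name IN ('sorts (memory)', 'sorts (disk)')";
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    measures.put(rs.getString("name"), rs.getDouble("value"));
                }
            }
            double memSorts = measures.get("sorts (memory)");
            double diskSorts = measures.get("sorts (disk)");
            double sortRatio = memSorts / (memSorts + diskSorts);
            measures.put("In Memory Sort Ratio", sortRatio);

            // Purely illustrative threshold check - on the platform itself an alert
            // would be raised when a ratio drops below a defined threshold.
            if (sortRatio < 0.95) {
                System.out.println("ALERT: In Memory Sort Ratio dropped to " + sortRatio);
            }
            return measures;
        }
    }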

If you implement your own monitor, you need to figure out how granular the data needs to be and how to display and process the individual measure points.

How to read these values?
The most important thing to understand is to not only look at individual metrics but to really look at your system - and by that I mean looking at measures from all application components that are involved in processing individual transactions. A low Buffer Cache Hit Ratio doesn't mean that your DBAs should run off right away trying to increase this value by tweaking db_cache_size. The business function of the application largely dictates the way it interacts with its backend datastore. For example, if the application is more Decision Support oriented, effective use of the Buffer Cache may be impractical or impossible due to the large data sets that are being analyzed. On the other hand, if the application is OLTP oriented, the Buffer Cache should be highly utilized, and the behaviour of the Hit Ratio with respect to transaction mixes, peak concurrencies, etc. becomes a critical measure to understand. Nonetheless, even in these cases, you need to look at overall Transaction Response Times and see whether you actually have a problem that affects the end user.

If that is the case, you need to figure out whether the time is spent in the Application Layer (badly performing code), the Network (too much data being sent between components) or the Database (inefficient settings, non-optimized indices, ...). If it turns out that the problem is mainly caused by the Database Layer, first check whether your connection pools are exhausted. If that is the case, you want to talk to the application developers and see whether they can optimize their usage of connections. If you have too much traffic on the network because too much data is requested from the database, you want to look into optimizing the application code so that it only queries the data that is actually needed. If it then turns out that the problem really is on the database, you have to look at the metrics discussed in this article. There are ways to tweak database settings to improve buffer usage. I am not an Oracle expert, but if you follow the links that I've posted throughout this article you will find many good recommendations on what to do in various scenarios.

The database is most often not the root cause of transactions that perform poorly from an end user's perspective. Too often it is the application that is, for example, keeping connections open for too long or querying more data than needed. I recommend reading Top 10 Performance Problems taken from Zappos, Monster, Thomson and Co. Problem Patterns #1, #4 and #7 often lead people to point to the DBAs first instead of identifying these problems in the application layer. That being said - there are of course scenarios where it really is the database that slows down transaction response times. And in these cases it is important to have enough information available to analyze the root cause of the problem. Correlating the database metrics with other metrics from the underlying operating system, network, application server, web server, ... will make it easier to find a solution to your performance problems.

What's your strategy on database monitoring?
This is one example of how to monitor your database. I am interested in getting your feedback on which tools and approaches you use to monitor your Oracle, SQL Server, MySQL, ... databases. Let us know via the comments which measures you monitor and consider important. Thanks!

Related reading:

  1. Lessons Learned: How we Monitor our Community Portal from the Cloud Our dynaTrace Community Portal is our gateway to our users....
  2. Apache Web Server Status Monitoring with a dynaTrace Plugin provided by MCG Systems The extensible plugin architecture of dynaTrace opens many doors for...
  3. Automated Performance Analysis: What’s going on in my ASP.NET or ASP.NET MVC Application? I’ve spent some time in the last weeks playing with different...

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi

IoT & Smart Cities Stories
While the focus and objectives of IoT initiatives are many and diverse, they all share a few common attributes, and one of those is the network. Commonly, that network includes the Internet, over which there isn't any real control for performance and availability. Or is there? The current state of the art for Big Data analytics, as applied to network telemetry, offers new opportunities for improving and assuring operational integrity. In his session at @ThingsExpo, Jim Frey, Vice President of S...
In his keynote at 18th Cloud Expo, Andrew Keys, Co-Founder of ConsenSys Enterprise, provided an overview of the evolution of the Internet and the Database and the future of their combination – the Blockchain. Andrew Keys is Co-Founder of ConsenSys Enterprise. He comes to ConsenSys Enterprise with capital markets, technology and entrepreneurial experience. Previously, he worked for UBS investment bank in equities analysis. Later, he was responsible for the creation and distribution of life settl...
@CloudEXPO and @ExpoDX, two of the most influential technology events in the world, have hosted hundreds of sponsors and exhibitors since our launch 10 years ago. @CloudEXPO and @ExpoDX New York and Silicon Valley provide a full year of face-to-face marketing opportunities for your company. Each sponsorship and exhibit package comes with pre and post-show marketing programs. By sponsoring and exhibiting in New York and Silicon Valley, you reach a full complement of decision makers and buyers in ...
Two weeks ago (November 3-5), I attended the Cloud Expo Silicon Valley as a speaker, where I presented on the security and privacy due diligence requirements for cloud solutions. Cloud security is a topical issue for every CIO, CISO, and technology buyer. Decision-makers are always looking for insights on how to mitigate the security risks of implementing and using cloud solutions. Based on the presentation topics covered at the conference, as well as the general discussions heard between sessio...
The Internet of Things is clearly many things: data collection and analytics, wearables, Smart Grids and Smart Cities, the Industrial Internet, and more. Cool platforms like Arduino, Raspberry Pi, Intel's Galileo and Edison, and a diverse world of sensors are making the IoT a great toy box for developers in all these areas. In this Power Panel at @ThingsExpo, moderated by Conference Chair Roger Strukhoff, panelists discussed what things are the most important, which will have the most profound e...
The Jevons Paradox suggests that when technological advances increase efficiency of a resource, it results in an overall increase in consumption. Writing on the increased use of coal as a result of technological improvements, 19th-century economist William Stanley Jevons found that these improvements led to the development of new ways to utilize coal. In his session at 19th Cloud Expo, Mark Thiele, Chief Strategy Officer for Apcera, compared the Jevons Paradox to modern-day enterprise IT, examin...
Rodrigo Coutinho is part of OutSystems' founders' team and currently the Head of Product Design. He provides a cross-functional role where he supports Product Management in defining the positioning and direction of the Agile Platform, while at the same time promoting model-based development and new techniques to deliver applications in the cloud.
There are many examples of disruption in consumer space – Uber disrupting the cab industry, Airbnb disrupting the hospitality industry and so on; but have you wondered who is disrupting support and operations? AISERA helps make businesses and customers successful by offering consumer-like user experience for support and operations. We have built the world’s first AI-driven IT / HR / Cloud / Customer Support and Operations solution.
LogRocket helps product teams develop better experiences for users by recording videos of user sessions with logs and network data. It identifies UX problems and reveals the root cause of every bug. LogRocket presents impactful errors on a website, and how to reproduce it. With LogRocket, users can replay problems.
Data Theorem is a leading provider of modern application security. Its core mission is to analyze and secure any modern application anytime, anywhere. The Data Theorem Analyzer Engine continuously scans APIs and mobile applications in search of security flaws and data privacy gaps. Data Theorem products help organizations build safer applications that maximize data security and brand protection. The company has detected more than 300 million application eavesdropping incidents and currently secu...