Database Benchmarking

If you have seen advertising literature from a major database vendor, you have invariably seen claims about how its database performed against competing systems (benchmark results). All of the major vendors support and participate in benchmarking tests because, to some degree, they have to. The mindset must be along the lines of “Everyone else is doing it, so we’d look odd not participating in the tests ourselves – even though most analysts agree the tests are somewhat artificial and flawed.”

A good introduction to database performance benchmarking can be found in an Oracle Magazine article (March-April 2001) by Fred Sandsmark (http://www.oracle.com/oramag/oracle/01-mar/o21bench.html). And despite what you may think before reading it, Mr. Sandsmark does not give Oracle preferential treatment just because he wrote the article for Oracle. Who is the referee or judge of database benchmarking? No one is going to believe that Oracle, Microsoft, or anyone else is impartial, and why should they be? They are businesses with the same end goal in mind: increased profitability for owners and shareholders. When Oracle, for example, claims it has the best-performing database on System X using Processor Y, that is probably true, and the results can be verified. But when you see just how many system/platform combinations appear in the benchmarking results, and what those results were, the value of the claim is diminished.

One important concept to take away from this discussion is that there is no singular, all-encompassing, definitive test that allows a vendor to claim its system is the best one out there, no ifs, ands, or buts. For Oracle, Microsoft, IBM, or Sybase to claim it is the best overall, well, it’s simply not true. A particular system can be the best on a particular platform under certain conditions, but claiming that a particular system is the best overall only calls that vendor’s credibility into question.

The purpose of this article – from an Oracle DBA standpoint – is to introduce you to what the benchmark tests are, who controls or regulates them, and how Oracle compares to other database systems. Three letters come into play over and over again when vendors compare themselves to one another, and those letters are TCO (total cost of ownership). Defining what factors into TCO, and exactly how you measure it, is an issue in and of itself.
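To make that concrete, here is a minimal sketch in Python of one way a TCO figure might be assembled. The cost categories and dollar figures are invented for illustration; real TCO studies differ precisely on which categories to include and how to weight them.

    # A minimal TCO sketch. Every category and figure below is hypothetical;
    # the point is only that TCO is a sum of costs whose very definition
    # is debatable.
    costs = {
        "licenses": 250_000,   # database licenses over the study period
        "hardware": 180_000,   # servers, storage, networking
        "support":   75_000,   # vendor support and maintenance contracts
        "staffing": 400_000,   # DBA and sysadmin salaries
        "training":  30_000,   # staff training
    }
    tco = sum(costs.values())
    print(f"Total cost of ownership: ${tco:,}")

Two vendors could run this same arithmetic over different category lists and each come out ahead, which is exactly why TCO claims deserve scrutiny.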

When someone (your management at budget time?) says, “Sybase rocks” or “SQL Server rules, just look at these benchmark results,” it is useful to know what those results mean, where they came from, and how they were obtained (under what conditions).

The Transaction Processing Performance Council

Yes, it is another one of those somewhere-on-the-Internet entities that wields power and control over our lives. The TPC “defines transaction processing and database benchmarks and delivers trusted results to the industry.” The home page is at www.tpc.org, where you can see who the major members are.

As you can readily see from the member list, all of the big names – platform, operating system, and database system – are represented.

From the “About the TPC” page, the TPC “produces benchmarks that measure transaction processing (TP) and database (DB) performance in terms of how many transactions a given system and database can perform per unit of time, e.g., transactions per second or transactions per minute.” The two commonly used metrics are variations on these: a throughput figure (transactions per unit of time) and a price/performance figure (cost per transaction).
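As a minimal sketch of how those two metrics relate, the following Python snippet computes a throughput figure and a price/performance figure from invented numbers (published TPC results use carefully audited values, not back-of-the-envelope ones like these).

    # Hypothetical figures for illustration only.
    completed_transactions = 3_600_000   # transactions completed in the run
    run_minutes = 30                     # length of the measured interval
    total_system_cost = 500_000          # hardware + software + support ($)

    # Throughput: transactions per minute.
    tpm = completed_transactions / run_minutes

    # Price/performance: dollars per transaction-per-minute of throughput.
    price_performance = total_system_cost / tpm

    print(f"Throughput:        {tpm:,.0f} transactions/minute")
    print(f"Price/performance: ${price_performance:,.2f} per tpm")

Notice that a system can win on raw throughput yet lose on price/performance, which is why vendors tend to quote whichever metric flatters them.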

The TPC benchmarks are divided into four categories – TPC-C, TPC-H, TPC-R, and TPC-W – and each category has its own set of metrics, as just mentioned. In the interest of space and time, let’s focus on two of them: TPC-C and TPC-R.
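The two exercise very different workloads: TPC-C models many short, write-heavy order-entry transactions, while TPC-R models long, read-only reporting queries. As a toy illustration of that contrast (using an invented two-table schema in SQLite, not the real TPC schemas), compare the two units of work below.

    # Toy contrast between the two workload styles. Table and column
    # names are invented; the real TPC schemas are far larger.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stock  (item_id INTEGER PRIMARY KEY, qty INTEGER);
        CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT,
                             item_id INTEGER, qty INTEGER);
        INSERT INTO stock VALUES (1, 100), (2, 100);
    """)

    # TPC-C-style unit of work: a short read-write transaction that
    # records an order and updates inventory, committed as a whole.
    with conn:
        conn.execute("INSERT INTO orders (item_id, qty) VALUES (?, ?)", (1, 5))
        conn.execute("UPDATE stock SET qty = qty - ? WHERE item_id = ?", (5, 1))

    # TPC-R-style unit of work: a read-only reporting query that scans
    # and aggregates (real reporting queries join many large tables).
    report = conn.execute("""
        SELECT item_id, SUM(qty) AS units_ordered
        FROM orders GROUP BY item_id
    """).fetchall()
    print(report)

A system tuned to commit thousands of small transactions per second is not necessarily the system that scans and aggregates millions of rows fastest, which is why the TPC scores the categories separately.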

Steve Callan
Steve is an Oracle DBA (OCP 8i and 9i)/developer working in Denver. His Oracle experience also includes Forms and Reports, Oracle9iAS and Oracle9iDS.
