How to Tune RDBMS in General?

Julyanto Sutandang
5 min read · Aug 5, 2024


Relational Database Management System, the most sophisticated Database for your business

Database servers, especially the RDBMS at the core of transaction systems, are very sensitive to any disturbance: if the database is slow, it will jam the rest of the corporation. Solving this issue is not difficult; the real problems lie in the understanding, knowledge, and experience of everyone from the solution architect to the administrators.

There are three sections that we should understand well:

  1. Hardware: CPU type (clock speed, core count, architecture, sockets), memory type and size, I/O and storage type and size, and bonding and redundancy technology (JBOD, SAS, RAID, etc.).
  2. Server software, from the most expensive proprietary products to open-source-based systems such as PostgreSQL (not to mention 11DB/Postgres, a fork of open-source PostgreSQL with seamless encryption).
  3. Query, which also relates to the data structure: how the query is written and parsed, and what kind of data-management approach is aligned with the application's business process.

An RDBMS is a unique type of computing system; in some ways it is more demanding than HPC (High-Performance Computing). While HPC focuses mainly on CPU and memory, an RDBMS must pay attention to CPU clock speed, the number of cores, memory allocation for cache, processes, heap, and stack, and, not least, I/O: IOPS, write strategy, storage tiering type and size, etc.

How to Optimize Hardware?

CPU: Prefer a CPU with a higher clock rate over one with more cores at a lower clock. The higher clock rate helps computation run faster: almost every search operation in the database involves arithmetic comparisons, which will quickly clog a CPU with a low clock rate. Moreover, an RDBMS always runs highly concurrent workloads, which raises the further issue of locking; with a high CPU clock, locks are acquired and released faster and contention resolves more quickly.

RAM: Our rule of thumb is 4GB per core. Memory in an RDBMS is divided into several pools: shared buffers, heap, per-process memory, kernel caches, and stacks. Sizing some of these pools is not easy to do up front; it depends on user behavior and application usage. In a multithreaded environment, memory can be allocated very dynamically, in contrast with a multiprocess environment such as Postgres or Oracle.
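As an illustration of that rule of thumb, a 16-core server would get about 64GB of RAM. A common starting point (the specific values here are assumptions for illustration, not a prescription from this article) is to give roughly a quarter of it to PostgreSQL's shared buffer pool and to size per-operation memory conservatively:

```ini
# postgresql.conf sketch for a 16-core / 64 GB server (illustrative values)
shared_buffers = 16GB        # ~25% of RAM for the shared buffer pool
work_mem = 32MB              # per sort/hash operation, per backend process
maintenance_work_mem = 1GB   # index builds, VACUUM
effective_cache_size = 48GB  # planner hint: shared buffers + OS page cache
```

Note that `work_mem` is allocated per operation per backend, so on a busy server the total can be many times the configured value; that is why it stays small relative to `shared_buffers`.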

I/O: Unlike HPC, which barely exploits storage or network, an RDBMS is obliged to guarantee durability as one of the four ACID pillars (Atomicity, Consistency, Isolation, and Durability); this means the RDBMS works in very close coordination with persistent storage. Every time the RDBMS commits, it writes the committed data to a permanent transaction log. Usually we use NVMe-class storage for this important task.

How to Optimize RDBMS Server?

To optimize the server itself, we should understand the RDBMS server architecture: is it multiprocess or multithreaded? In this article I will cover Postgres and/or Oracle in general. In a multiprocess environment, a shared buffer is needed as the basis for interprocess communication, so the size of the shared buffer is critical. If it is set too large, memory is used inefficiently, since a larger portion of memory cannot be used effectively elsewhere. There are a couple of memory settings we should be aware of; some are allocated dynamically (on the heap), and some are statically allocated or pre-allocated.

Several background processes run and check the system periodically; it is important to set their timing to avoid unnecessary overhead and to minimize process latency. By tuning the timing and the memory allocated to these tasks, we prepare the system to better accommodate higher loads and changing behavior.
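In PostgreSQL, for example, the periodic background tasks mentioned above are controlled by settings like these (the values shown are illustrative assumptions, not recommendations for any particular workload):

```ini
# postgresql.conf sketch: background-process timing (illustrative values)
bgwriter_delay = 200ms              # pause between background-writer rounds
checkpoint_timeout = 15min          # maximum time between checkpoints
checkpoint_completion_target = 0.9  # spread checkpoint I/O across the interval
autovacuum_naptime = 1min           # how often autovacuum wakes up per database
```

Longer checkpoint intervals reduce periodic I/O spikes at the cost of longer crash recovery; the right balance depends on the workload.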

How to Optimize Query?

Queries fall into two big categories: DDL and DML. DDL is shorthand for Data Definition Language and generally is not tuned, since we have little opportunity to optimize it beyond its scope of work. DML, shorthand for Data Manipulation Language, is divided into four types: INSERT, UPDATE, SELECT, and DELETE. Almost all of these require optimization; the DELETE statement is quite rare in production and is usually used for periodic housekeeping. We will delve into each statement and point out what kind of optimization can be done:

INSERT: If the target table has a unique index, the index must be updated (to maintain uniqueness) for every row inserted, so for bulk INSERTs a unique index is counterproductive. Alternatively, avoid running bulk INSERTs against tables with unique indexes or primary keys.
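One common way to apply this (a sketch; the table, columns, file path, and index name are all assumptions) is to drop a standalone unique index before the bulk load and rebuild it once afterward, paying the uniqueness check a single time:

```sql
-- Sketch: bulk load without per-row unique-index maintenance
-- (orders, order_no, and the index name are hypothetical)
BEGIN;
DROP INDEX IF EXISTS orders_order_no_uq;          -- standalone unique index
COPY orders (order_no, customer_id, total)
    FROM '/tmp/orders.csv' WITH (FORMAT csv);
CREATE UNIQUE INDEX orders_order_no_uq ON orders (order_no);
COMMIT;
```

If the index backs a primary-key constraint, the constraint itself has to be dropped and re-added with ALTER TABLE instead of DROP INDEX.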

SELECT: There are several areas to focus on: the SELECT list, the expressions in the WHERE clause, and the join conditions. The type and purpose of each join also matter, since application developers often do not pay enough attention when writing SELECT queries and end up with an inefficient join strategy. SELECT is the second-biggest factor in slowing a system, especially when the data structure is not well designed and indexes on specific expressions are missing. Some general tips:

  1. Ensure indexes exist and are aligned with the expressions used in the WHERE clause.
  2. Try to use ordinal (integer-like) data types for keys and filter expressions.
  3. Use a trigram index for searching or filtering character data.
  4. In expressions, avoid OR; use UNION or IN instead.
  5. The UUID data type is well supported in Postgres; store UUIDs using the UUID data type in the database, not only in the application.
  6. When UUIDs are generated across many data sources, use the Postgres UUID generation function and choose version 1, which better avoids collisions caused by failover.
  7. Choose column-based encryption over tablespace encryption to avoid unnecessary overhead. Encryption is very CPU-intensive and can slow your system significantly.
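A few of these tips can be sketched in SQL (all table and column names here are assumptions for illustration):

```sql
-- Tip 1: index aligned with the WHERE-clause expression
CREATE INDEX idx_invoice_lower_code ON invoice (lower(code));
SELECT * FROM invoice WHERE lower(code) = 'inv-20240805';

-- Tip 3: trigram index for filtering character data
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_customer_name_trgm
    ON customer USING gin (name gin_trgm_ops);
SELECT * FROM customer WHERE name ILIKE '%sutandang%';

-- Tip 4: rewrite OR as UNION so each branch can use its own index
SELECT * FROM orders WHERE status = 'NEW'
UNION
SELECT * FROM orders WHERE customer_id = 42;
```

The expression index in the first sketch is used only when the query repeats the same expression (`lower(code)`); an index on the bare column would not help that predicate.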

UPDATE: The major factor in RDBMS performance is the use of UPDATE statements in a highly concurrent environment. UPDATEs are unavoidable, since transactions are built on them. High concurrency is likewise a must, because the main purpose of an RDBMS is transaction processing, and transactions prove their value precisely under high concurrency.

UPDATE implies locking: that is unavoidable, and while a lock is held there is a high chance that other processes or threads are waiting on it, slowing down their transactions. Since locking cannot be avoided, what we can do is make the locked section complete as fast as possible. Several factors create lock contention: computing activity that has to touch storage (for example through a slow AHCI-attached disk) and a low CPU clock rate. A query is processed sequentially; parts of it can run as multiple sequential streams (parallel processing), but sequential sections remain, and the locking and unlocking mechanisms themselves run sequentially. A faster CPU clock therefore significantly reduces lock contention, and storage-access latency is mitigated by placing the transaction log on an M.2 or NVMe SSD.
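One practical consequence (a sketch; the table and column names are assumptions): keep the window between taking the row lock and committing as short as possible, doing any slow work before the transaction starts:

```sql
-- Sketch: minimize lock-hold time (account/balance are hypothetical)
BEGIN;
SELECT balance FROM account WHERE id = 1 FOR UPDATE;    -- row lock taken here
UPDATE account SET balance = balance - 100 WHERE id = 1;
COMMIT;  -- commit immediately; the row lock is released at commit
```

Everything between the FOR UPDATE and the COMMIT blocks other writers of that row, so no user interaction, network call, or heavy computation should happen inside that window.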

I have to go now. Hopefully this article will be worth reading for some readers, and I will complete it later.

See you at IES 2024 in Jimbaran, Bali, 6–8 August 2024.


Julyanto Sutandang

Technopreneur, system developer, PostgreSQL expert; running IT solutions for high-performance systems. Call me: +628111188812 (julyanto@equnix.asia)