Tune Linux Kernel Parameters for PostgreSQL Optimization and Better System Performance

Introduction

In my previous article I explained tuning PostgreSQL database memory configuration parameters to optimize performance, and as I said there, database performance does not depend only on PostgreSQL configuration but also on system parameters. Poorly configured OS kernel parameters can degrade database server performance, so it is imperative that these parameters are configured according to the database server and its workload. In this article I will be talking specifically about CentOS/Red Hat Linux systems.

Story

I will start the article with a small story. One of our clients had a huge volume of writes, and the customer had provided 200 GB of RAM for that dedicated database server, so there was no shortage of resources.

What was happening was that after some time the system load would spike, and on debugging we found no unusual query around the time the load increased. Somewhere on the internet we read that if we cleared the system cache regularly, the issue would be resolved.

We then scheduled a cron job to clear the system cache at a regular interval, and the issue was resolved.
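The workaround looked roughly like the following (a sketch of a typical drop-caches cron entry; the hourly interval is illustrative, not the exact one we used):

# root's crontab: flush dirty buffers to disk, then drop the page cache, every hour
0 * * * * sync && echo 3 > /proc/sys/vm/drop_caches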

Now, the question is: why did the issue stop occurring after this?

The answer is that, because so much RAM was available, a very large amount of dirty data (tens of GBs) accumulated in the page cache, and when all of that data was flushed out to disk at once, the system load spiked.

From that we learned that we also need to tune some system parameters to optimize system and database (PostgreSQL) performance.

In the above case we tuned two OS parameters, vm.dirty_background_ratio and vm.dirty_ratio, to resolve the issue.

Kernel Parameter Tuning

What values did we set for the two parameters described in the story, and what other important Linux kernel parameters can affect database server performance? They are described as follows:

vm.dirty_background_ratio / vm.dirty_background_bytes

The vm.dirty_background_ratio is the percentage of memory filled with dirty pages that need to be flushed to disk. Flushing is done in the background. The value of this parameter ranges from 0 to 100; however, a value lower than 5 may not be effective and some kernels do not internally support it. The default value is 10 on most Linux systems. A lower ratio can improve performance for write-intensive workloads, because Linux starts flushing dirty pages in the background sooner and in smaller batches.

vm.dirty_background_bytes expresses the same threshold as an absolute number of bytes instead of a percentage; set its value depending on your disk speed. (Setting one of the two parameters clears the other.)

There are no “good” values for these two parameters since both depend on the hardware. However, setting vm.dirty_background_ratio to 5 and vm.dirty_background_bytes to 25% of your disk speed improves performance by up to ~25% in most cases.

vm.dirty_ratio / vm.dirty_bytes

This is the same as vm.dirty_background_ratio / vm.dirty_background_bytes, except that the flushing is done in the foreground, blocking the application. So vm.dirty_ratio should be higher than vm.dirty_background_ratio; this ensures that background flushing kicks in well before foreground flushing, avoiding blocking the application as much as possible. You can tune the gap between the two ratios depending on your disk I/O.
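As a minimal sketch (the values are illustrative, not prescriptive defaults; tune them for your hardware and workload):

# start background writeback early, at 5% of memory dirty
sudo sysctl -w vm.dirty_background_ratio=5
# block writers only when dirty pages reach 30% of memory
sudo sysctl -w vm.dirty_ratio=30
# to persist across reboots, add the same keys to /etc/sysctl.conf:
#   vm.dirty_background_ratio = 5
#   vm.dirty_ratio = 30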

vm.swappiness

vm.swappiness is another kernel parameter that can affect database performance. This parameter controls the swapping behavior (moving pages between RAM and swap space) on a Linux system. The value ranges from 0 to 100 and determines how aggressively memory is swapped or paged out. A value of 0 tells the kernel to avoid swapping as much as possible (it does not fully disable swap), while 100 means aggressive swapping.

You may get good performance by setting lower values.

Setting a value of 0 in newer kernels may cause the OOM killer (Linux's out-of-memory killer process) to kill the process. Therefore, to be on the safe side, set the value to 1 if you want to minimize swapping. The default value on a Linux system is 60. A higher value causes the kernel's memory manager to utilize more swap space, whereas a lower value keeps more data/code in RAM.

A smaller value is a good bet to improve performance in PostgreSQL.
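For example (a sketch; the file name under /etc/sysctl.d/ is just a convention of ours, pick your own):

# minimize swapping on a dedicated database server
sudo sysctl -w vm.swappiness=1
# persist the setting across reboots
echo "vm.swappiness = 1" | sudo tee /etc/sysctl.d/99-postgres.conf
sudo sysctl -p /etc/sysctl.d/99-postgres.conf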

vm.overcommit_memory / vm.overcommit_ratio

Applications acquire memory and free it when it is no longer needed. But in some cases an application acquires too much memory and does not release it, which can invoke the OOM killer. Here are the possible values of the vm.overcommit_memory parameter, with a description of each:

  0. Heuristic overcommit (the default): the kernel decides intelligently, based on heuristics
  1. Always allow overcommit
  2. Don't overcommit beyond the overcommit ratio

Reference: https://www.kernel.org/doc/Documentation/vm/overcommit-accounting

vm.overcommit_ratio is the percentage of RAM that is counted toward the commit limit when vm.overcommit_memory is 2; the limit works out to swap + (RAM × overcommit_ratio / 100). For example, a value of 50 on a system with 2 GB of RAM and 2 GB of swap allows committing up to 3 GB.

A value of 2 for vm.overcommit_memory generally yields better behavior for PostgreSQL than the default of 0. It lets the server process make full use of RAM without any significant risk of being killed by the OOM killer: an application can still overcommit, but only within the overcommit ratio. At the same time it improves reliability, since memory beyond the allowed range is simply never overcommitted, avoiding the risk of the process being killed by the OOM killer.

On systems without swap, one may experience problems when vm.overcommit_memory is 2; see:

https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

Generally speaking, almost all applications that use a lot of memory are affected by this setting. For Redis, for example, setting this value to 1 is recommended.
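A minimal sketch for a dedicated PostgreSQL server that has swap configured (the ratio of 90 is illustrative; size it so that swap + RAM × ratio covers your peak usage):

sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=90
# verify the resulting limit
grep -i commitlimit /proc/meminfo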

Turn On Huge Pages

Linux, by default, uses 4 kB memory pages; BSD has Super Pages, whereas Windows has Large Pages. A page is a chunk of RAM that is allocated to a process. A process may own more than one page depending on its memory requirements: the more memory a process needs, the more pages are allocated to it. The OS maintains a table mapping pages to processes. The smaller the page size, the bigger the table and the more time required to look up a page in it. Therefore, huge pages make it possible to use a large amount of memory with reduced overhead: fewer page-table lookups, fewer page faults, and faster read/write operations through larger buffers. The result is improved performance.

PostgreSQL supports huge pages on Linux only. By default, Linux uses 4 kB memory pages, so in cases where there are too many memory operations, there is a need for bigger pages. Performance gains have been observed using huge pages with sizes of 2 MB and up to 1 GB. The huge page size can be set at boot time. You can easily check the huge page settings and utilization on your Linux box using the command cat /proc/meminfo | grep -i huge.

Get HugePage Info – On Linux (only)

Note: This is only for Linux; on other OSes this check does not apply.
$ cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

In this example, although the huge page size is set to 2,048 kB (2 MB), the total number of huge pages is 0, which signifies that huge pages are disabled.

Script to quantify Huge Pages

This is a simple script that returns the number of huge pages required. Execute it on your Linux box while PostgreSQL is running, and ensure that the $PGDATA environment variable points to PostgreSQL's data directory.

Get Number of Required HugePages

#!/bin/bash
# read the postmaster PID from the first line of postmaster.pid
pid=$(head -1 "$PGDATA/postmaster.pid")
echo "Pid:            $pid"
# peak virtual memory used by the postmaster, in kB
peak=$(grep ^VmPeak /proc/$pid/status | awk '{ print $2 }')
echo "VmPeak:         $peak kB"
# configured huge page size, in kB
hps=$(grep ^Hugepagesize /proc/meminfo | awk '{ print $2 }')
echo "Hugepagesize:   $hps kB"
# number of huge pages needed to cover the peak usage
hp=$((peak / hps))
echo "Set Huge Pages: $hp"

The output of the script looks like this:

Script Output

Pid:            12737
VmPeak:         180932 kB
Hugepagesize:   2048 kB
Set Huge Pages: 88

The recommended number of huge pages here is 88, so you should set the value to 88.

Set HugePages Command:

sysctl -w vm.nr_hugepages=88
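To make the setting survive a reboot, you can also add it to /etc/sysctl.conf (a common convention):

vm.nr_hugepages = 88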

Check the huge pages now; you will see that no huge page is in use yet (HugePages_Free = HugePages_Total).

Again Get HugePage Info – On Linux (only)

$ cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:      88
HugePages_Free:       88
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Now set the parameter huge_pages to "on" in $PGDATA/postgresql.conf and restart the server.
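For example, in postgresql.conf (huge_pages accepts try, on, and off; try is the default):

huge_pages = on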

And Again Get HugePage Info – On Linux (only)

$ cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:      88
HugePages_Free:       81
HugePages_Rsvd:       64
HugePages_Surp:        0
Hugepagesize:       2048 kB

Now you can see that a few of the huge pages are in use. Let's now try to add some data into the database.

Some DB Operations to Utilise HugePages

postgres=# CREATE TABLE foo(a INTEGER);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(generate_series(1,10000000));
INSERT 0 10000000

Let’s see if we are now using more huge pages than before.

Once More Get HugePage Info – On Linux (only)

$ cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:      88
HugePages_Free:       18
HugePages_Rsvd:        1
HugePages_Surp:        0
Hugepagesize:       2048 kB

Now you can see that most of the huge pages are in use.

Note: The sample value for HugePages used here is very low, which is not a normal value for a big production machine. Please assess the required number of pages for your system and set those accordingly depending on your system’s workload and resources.

Now, tuning PostgreSQL parameters and kernel parameters is still not enough for good PostgreSQL performance; there are many other factors, such as:

  • How you write your queries
  • Proper indexing; for this you can follow the indexing series on our blog
  • Proper partitioning and sharding according to the business use case
  • and many more.

Stay tuned for more blogs on optimizing PostgreSQL performance.

References: https://www.percona.com/blog/2018/08/29/tune-linux-kernel-parameters-for-postgresql-optimization/

Checklists – System is Compromised or Hacked – Part 1

Introduction

In my previous blog I explained how I came to know that my system was hacked or compromised (link here). Here I will explain the basic things we can check on a system when we suspect it is compromised.

This blog has three parts:

  • A list of checks that can determine whether the system is compromised or hacked – Part 1
  • A list of checks that can indicate how the system was compromised or hacked – Part 2
  • Preventive steps (especially infra-related) that can be taken to avoid hacking and make the system more secure – Part 3

Here, I am assuming the system is a Linux system with CentOS installed.

List of Checks which can determine if system is compromised or hacked

  • Generally, when a hacker breaks into a Linux system, there is a high chance they will alter your main packages, like openssh, the kernel, etc. So, first of all, please check whether these packages have been altered or whether files or binaries provided by these packages have changed. The following is the command to check on CentOS (a consolidated sketch combining several of the checks in this list appears after the list):
    • sudo rpm -qa | grep openssh | xargs -I '{}' sudo rpm -V '{}'
    • If the above command reports files that you did not change yourself, there is a high chance your system is compromised.
  • Run Rootkit Hunter (rkhunter) to check whether your system is compromised:
    • Download rkhunter-1.4.2.tar.gz
    • copy it to /root and go to /root
    • tar zxvf rkhunter-1.4.2.tar.gz
    • cd rkhunter-1.4.2/
    • sh installer.sh --layout default --install
    • make the following changes in /etc/rkhunter.conf: ENABLE_TESTS="all", DISABLE_TESTS="none", HASH_CMD=SHA1, HASH_FLD_IDX=4, PKGMGR=RPM
    • /usr/local/bin/rkhunter --propupd
    • /usr/local/bin/rkhunter --update
    • /usr/local/bin/rkhunter -c -sk
    • note the output, or check and copy /var/log/rkhunter.log
    • you can also check the link for using rkhunter
  • Check /var/log/secure to see whether there are many authentication failure entries, i.e., someone trying to brute-force their way into the system.
    • The command will be the following:
      • [root@localhost ~]# less /var/log/secure | grep 'authentication failures'
    • and the output will look something like this:
      • Apr 25 12:48:46 localhost sshd[2391]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.29.14 user=root
      • Apr 25 12:49:33 localhost sshd[2575]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.29.14 user=root
    • In the above output you can see the rhost from which the login attempts were made. If you see lots of entries like this, also check whether at some point a login attempt succeeded from any of the attempting rhosts. In the secure logs, accepted logins look something like the following:
      • Apr 25 12:53:10 localhost sshd[3551]: Accepted password for root from 192.168.29.14 port 36362 ssh2

  • Check the processes to see whether some unusual process is running and consuming high CPU, using the top and ps commands.
    • Command to list all processes running on the system: ps aux | less
    • Also check with the top command whether some unusual process is trying to utilize high CPU.
  • Check whether some unusual entry has been made in the crontab of any user on the system:
    • crontab -u <user> -l (by default the user is root)
  • Check the .ssh folder in every user's home directory to see whether an attacker has somehow added their own public key to authorized_keys.
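The following is a minimal sketch that bundles a few of the checks above into one script (the package list, log path, and home-directory layout are CentOS-style assumptions; adjust them for your system):

#!/bin/bash
# verify the integrity of a few critical packages; no output means unmodified
for pkg in openssh openssh-server coreutils; do
    echo "== rpm -V $pkg =="
    rpm -V "$pkg"
done
# recent brute-force attempts against sshd
echo "== failed SSH logins =="
grep 'authentication failures' /var/log/secure | tail -5
# keys that are allowed to log in, per user
echo "== authorized_keys entries =="
for d in /root /home/*; do
    [ -f "$d/.ssh/authorized_keys" ] && { echo "$d:"; cat "$d/.ssh/authorized_keys"; }
done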

This was Part 1 of the blog. In later parts I will explain some further checks to help ensure that your system remains less hackable.

Thank you.

Linux Machine Compromised/Broken Into: The Power of Observation

Introduction

In debugging any issue, or in dealing with any problem or circumstance, two things are important:

  • Observation, not only at the time of the issue but in normal times as well.
  • Combining your general observations with the observations made at the time of the issue to conclude something.

In this blog, I will explain the following:

  • What was happening on my machine
  • How I came to know my machine was broken into (the power of observation)

What was happening on my machine

  • The load on my machine was going very high.
  • In top, one process, ./kswapd0, was consuming around 3000% CPU.

From this we assumed kswapd was the consuming process; kswapd0 is the kernel process that manages virtual memory. So I thought that maybe some process of ours was consuming too much RAM, so that virtual memory was being used and kswapd was doing its work. But after hours of debugging we found no process consuming RAM, and around 80% of the RAM was free.
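For reference, these are the kinds of commands we used to rule that out (commands only; not the actual output from the incident):

free -h                      # how much RAM is really used vs. free/cached
ps aux --sort=-%mem | head   # the top memory consumers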

How I came to know my machine was broken into (the power of observation)

There were two general observations that helped me figure out what the issue was (a quick verification sketch follows the list):

  • First, the real kswapd process appears in top as [kswapd0], a kernel thread in square brackets, not as ./kswapd0.
  • Second, kswapd0 can only consume up to 100% CPU, as it uses only one core of the machine.
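You can verify the first observation directly, because a kernel thread has no backing executable (a sketch; the PID value is a placeholder for the suspicious process you see in top):

pid=12345                 # substitute the PID of the suspicious process
ls -l /proc/$pid/exe      # fails with "No such file or directory" for a real kernel thread
cat /proc/$pid/cmdline    # empty for a kernel thread; an imposter shows its real command line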

From there I got to know that this kswapd0 was something unusual. On further debugging,

I found ./.configrc/a/kswapd0 in the root user's home directory.

The contents of this directory were:

$ find .configrc -type f
.configrc/dir2.dir
.configrc/a/kswapd0
.configrc/a/dir.dir
.configrc/a/a
.configrc/a/bash.pid
.configrc/a/run
.configrc/a/stop
.configrc/a/init0
.configrc/a/.procs
.configrc/a/upd
.configrc/cron.d
.configrc/b/sync
.configrc/b/dir.dir
.configrc/b/a
.configrc/b/run
.configrc/b/stop

There was also an entry in cron to run this.

So, from all of this, I got to know that my system was compromised.

I was still unable to find out how my system was broken into. But in a future blog I will explain what things you can check to determine whether your system is compromised and how it was compromised, and what security measures you can apply to make your system less hackable.