RHEL 6 – Controlling Cache Memory / Page Cache Size
What is Page Cache?
When a user process reads or writes a file, it actually operates on a copy of that file held in main memory. The kernel creates that copy from the disk and writes changes back to disk when necessary. The memory taken up by these copies is called cached memory; in other words, it is the page cache.
Cached memory is consumed whenever a user process initiates a read or write. The kernel looks for an in-memory copy of the part of the file the user is acting on, and, if no such copy exists, it allocates a new page of cache memory and fills it with the appropriate contents read from disk. If the user only reads the file, the page is marked as a "clean" cache page. As soon as the user writes to the file, however, the page is marked "dirty." A kernel thread, which appears in ps as "pdflush" (up to kernel version 2.6.31) or "flush" (kernel version 2.6.32 and later), wakes up periodically, copies all of the pages marked dirty back to disk, and then marks them clean again.
How do we see the current cache memory usage?
Look at the cached column in the output below:
[root@gurkullinux ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         15976      15195        781          0        167       9153
-/+ buffers/cache:       5874      10102
Swap:         2000          0       1999
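The cached figure that free prints comes straight from /proc/meminfo, and the "-/+ buffers/cache" row is simple arithmetic on the same counters; a sketch of that calculation:

```shell
# Recompute free's "-/+ buffers/cache" row from /proc/meminfo (values in kB):
# real used = used - buffers - cached ; real free = free + buffers + cached
awk '
  /^MemTotal:/ { total = $2 }
  /^MemFree:/  { mfree = $2 }
  /^Buffers:/  { buf   = $2 }
  /^Cached:/   { cache = $2 }
  END {
    used = total - mfree
    printf "-/+ buffers/cache: used = %d MB, free = %d MB\n",
           (used - buf - cache) / 1024, (mfree + buf + cache) / 1024
  }' /proc/meminfo
```

This shows why a machine with a large cached column is not really out of memory: the cache can be reclaimed whenever applications need the pages.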
How to limit the page cache (cache memory) size in Red Hat Enterprise Linux (RHEL) 6?
In RHEL 6 the page cache is dynamically managed, and it can take up as much memory as is available on the machine. The important point here is that RHEL has no kernel parameter that directly controls the page cache size; all we can do is limit its growth by tuning a few configurable kernel parameters.
What are the configurable kernel parameters that control the size of the page cache (cache memory)?
vm.vfs_cache_pressure (default = 100)
Controls the tendency of the kernel to reclaim the memory used for caching directory (dentry) and inode objects. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches; increasing it beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
Increasing this value (e.g. to 500) will cause more frequent reclaiming of cache memory and limit the growth of the page cache.
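Both the current setting and the dentry cache it influences can be inspected from /proc; a quick check:

```shell
# Current pressure setting (kernel default is 100):
cat /proc/sys/vm/vfs_cache_pressure

# The dentry cache this tunable influences; the first field is the total
# number of dentries, the second the number currently unused:
cat /proc/sys/fs/dentry-state
```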
vm.dirty_background_ratio (default = 10)
Indicates the percentage of total system memory at which the background writeback daemon (pdflush/flush) starts writing out dirty data. Decreasing this number causes the writeback daemon to start writing out dirty data sooner, which limits the size of the page cache.
vm.dirty_ratio (default = 20)
Indicates the percentage of total system memory at which processes must start writing out their own dirty data. Decreasing this number causes processes to write out dirty data sooner, which limits the page cache size.
vm.dirty_expire_centisecs (default = 3000, expressed in hundredths of a second, i.e. 30 seconds)
Indicates how long dirty pages may age before they become eligible to be flushed out by the writeback daemon. Decreasing this value makes dirty pages eligible for flushing sooner, which limits the page cache size.
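Since the unit is hundredths of a second, converting the default to seconds is simple arithmetic:

```shell
# dirty_expire_centisecs is in hundredths of a second (centiseconds):
# 3000 centiseconds / 100 = 30 seconds before a dirty page becomes
# eligible for writeback.
echo "$(( 3000 / 100 )) seconds"   # prints: 30 seconds
```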
vm.swappiness (default = 60)
Indicates how aggressively the kernel swaps out memory; the higher the value, the more likely it is to swap. Decreasing this value makes the machine less likely to swap and thus more likely to write cached data out to disk, which limits the page cache size.
Note: setting this value to zero does not completely stop your system from swapping.
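All five tunables discussed above live under /proc/sys/vm, so their current values can be listed in one loop; a small sketch:

```shell
# Print each tunable in the same "vm.name = value" form that sysctl uses.
for f in vfs_cache_pressure dirty_background_ratio dirty_ratio \
         dirty_expire_centisecs swappiness; do
  printf 'vm.%s = %s\n' "$f" "$(cat /proc/sys/vm/$f)"
done
```

Recording these before you change anything gives you a baseline to fall back to.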
How to change these kernel parameters?
To change a parameter dynamically on a running machine, use a command similar to:
echo 500 > /proc/sys/vm/vfs_cache_pressure
The same change can also be made with sysctl (this, like echo, affects only the running kernel and does not survive a reboot):
sysctl -w vm.vfs_cache_pressure=500
For a persistent configuration, add the parameter to /etc/sysctl.conf and reload it:
echo 'vm.vfs_cache_pressure = 500' >> /etc/sysctl.conf
sysctl -p
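A minimal sketch of the persistent route, shown against a scratch file so it is safe to run without root (on the real system the file would be /etc/sysctl.conf):

```shell
# Stand-in for /etc/sysctl.conf so the sketch is safe to experiment with.
conf=/tmp/sysctl-demo.conf
echo 'vm.vfs_cache_pressure = 500' >> "$conf"
grep vfs_cache_pressure "$conf"    # confirm the line landed
rm -f "$conf"
# On the real system you would then run:
#   sysctl -p    # re-reads /etc/sysctl.conf and applies the settings
```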
If you find the page cache occupying nearly all of your physical memory, tune these parameters gradually, phase by phase, changing the values by 15% to 20% at a time. Do not change them all in one shot: modifying these values can affect system performance significantly, for better or worse.
Also, do not make multiple changes at once; confirm that each change has no negative impact on your system before making the next one.