In my daily tasks I deal with a lot of Linux servers, and from time to time I decide to tweak them for performance, depending on what they're doing. Most of the boxes I deal with run a postgres database plus some sort of data store for a custom application (usually served via tomcat). I've found there are three easy things to play around with to get the most out of a system, especially if the resources on the box are fairly limited: the swappiness value, the I/O scheduler, and the renice command run from a script called via crontab.
Swappiness
The swappiness value is how systems administrators and engineers tell the Linux kernel how aggressively it should swap pages of memory out to disk rather than keep them in RAM. Most default installations set this value to 60, which is supposed to be a balanced number (the range is 0-100). In my situation, where I'm running a lot of database operations, I've found that a higher value seems to help free up memory for postgres-related processes where otherwise idle system processes may have been holding on to it. This has been particularly effective on application servers with just barely enough memory to get by.
You can adjust the swappiness value in a couple of ways. The first is a temporary/runtime change, done from the terminal with:
sysctl -w vm.swappiness=<value>
You can also make the same runtime change by writing the new value to /proc/sys/vm/swappiness. Note that neither of these survives a reboot; for a permanent change, add a vm.swappiness line to /etc/sysctl.conf. Either way, exercise some caution and keep an eye on the box afterwards to make sure you aren't starving vital processes by changing how memory gets managed.
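As a rough sketch of the whole workflow (the value 80 below is just an illustration, not a recommendation for your particular box):
--
# Check the current swappiness value
cat /proc/sys/vm/swappiness

# Set it for the running system only (lost on reboot)
sysctl -w vm.swappiness=80

# Make it persistent by adding it to /etc/sysctl.conf, then reload the settings
echo "vm.swappiness = 80" >> /etc/sysctl.conf
sysctl -p
--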
I/O Scheduler
CFQ (Completely Fair Queuing)
If my memory is still serving me well, this is the default on most Linux distributions. It's a general-purpose scheduler that performs decently across a wide range of configurations, attempting to balance I/O evenly across multiple requests and multiple devices. It's a good fit for desktops and general-purpose servers.
Deadline
This one is particularly interesting: it more or less maintains five different queues and reorders requests to maximize I/O throughput and minimize latency, aiming for near-real-time response. It also tries to distribute I/O in a way that keeps any one process from losing out entirely. It seems to be a great fit for database servers, assuming the bottleneck in your particular case isn't CPU time.
Noop
This is a particularly lightweight scheduler that keeps CPU overhead down by doing little to no sorting of the queue. It assumes that the device(s) you are using (a hardware RAID controller or SSD, for example) do their own reordering and optimization.
Anticipatory
This scheduler uses a slight delay on I/O operations in order to sort them in a manner that is most efficient based on the physical location of the data on disk. This tends to work out well for slower disks, and older equipment. The delay can cause a higher level of latency as well.
In choosing your scheduler you have to consider exactly what the system is doing. In my case, as I stated before, I'm administering application/database servers with a fair amount of load, so I've chosen the deadline scheduler. If you'd like to read about these in a bit more detail, check out this Red Hat article (it's old but still has decent information): http://www.redhat.com/magazine/008jun05/features/schedulers/
You can change your scheduler either on the fly by using:
echo <scheduler> > /sys/block/<disk>/queue/scheduler
Or in a more permanent manner (survives reboot) by editing the following file:
/boot/grub.conf
You'll need to add elevator=<scheduler> to the kernel line (the exact file location varies a bit by distribution and GRUB version).
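Putting that together, here's a rough sketch assuming a disk named sda and the deadline scheduler (adjust both to your setup; run as root):
--
# Show the available schedulers for sda; the active one appears in brackets
cat /sys/block/sda/queue/scheduler

# Switch sda to the deadline scheduler for the running system (lost on reboot)
echo deadline > /sys/block/sda/queue/scheduler
--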
Using renice
Part of what my boxes do is serve up a web interface for users to interact with. When there are other tasks going on and the load spikes, access to this interface can become quite sluggish. In my scenario I'm using tomcat as the web application server, and it launches with a nice value of 0 (the normal user priority, in a range from -20 to 19 with lower being more important). The problem is that postgres also runs at that priority, so when it's loaded up with queries the two are on equal footing fighting for CPU time. To improve the user experience I've decided to set the priority of the tomcat process to -1, allowing it to take CPU time as needed when users interact with the server. I've done this using a rather crude bash script and an entry in crontab (added via crontab -e); note that setting a negative nice value requires root, so this lives in root's crontab.
The script
--
#!/bin/bash
# Grab the PID(s) of the tomcat process; the [t] keeps grep from matching itself
tomcatString="$(ps -eaf | grep '[t]omcat' | awk '{print $2}')"
renice -1 -p $tomcatString
--
The crontab entry:
--
*/10 * * * * sh /some/path/here/reniceTomcat
--
All the above does is use ps, grep, and awk to pull the tomcat process ID and then run renice against it. The crontab entry just calls the script on a periodic basis to make sure the process stays at that priority. In this case it runs every 10 minutes, but it can be set to just about any schedule. To read more on cron scheduling, check out this article: http://www.debian-administration.org/articles/56.
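To verify the change is sticking, something like this will show the nice value of the tomcat process (just a quick check, using the same self-excluding grep trick as the script):
--
# The NI column should read -1 once the script has run
ps -eo pid,ni,args | grep '[t]omcat'
--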