Tuesday, November 12, 2013

Google Glass first impressions

So today ended up being a rather interesting day. I managed to get an invite to the Google Glass Explorer program thanks to a friend, and I spent the day getting to know the device. I imagine this is the beginning of what will be a series of posts about Glass, and hopefully a few of them will cover some APKs I'm going to attempt to build and sideload onto it. So what are my first impressions?

It's exciting. Walking through the interface is like entering a new world. Gesturing between tiles quickly feels normal, but you remain keenly aware that you are doing something vastly different from the norm. After that feeling fades, you start working out what you should and shouldn't use it for. Throughout the day I answered a couple of phone calls with Glass, read the gist of a few emails, and shared a couple of photos. I haven't attempted a video call with it yet, but that is certainly on my to-do list. I have to say I'm pleasantly surprised so far with the audio quality of the bone conduction tech, but I will certainly also mess around with the earbud a little later. Below I've put an image I took with the camera, and a vignette I took to show the music APK in action:



As you can see, the camera isn't too bad. It does get a little grainy in lower-light scenarios, as in the second photo there, but thus far I'm happy enough with it. I'll have to take a collection of shots in different conditions to form a true judgment.

As implied, I also messed around with the new Music APK. Apparently there is now a Google Music APK floating around that you can load using adb in order to get the streaming service onto Glass (instructions can be found here: http://phandroid.com/2013/11/11/google-play-music-google-glass/ and instructions on the adb/debugging part can be found here: http://glassdev.blogspot.com/). I've played around with it a bit and found the audio quality adequate, though nothing to write home about. I still need to test with the earbuds, however, which I imagine will be an improvement.

Lastly, the Google Now integration is perhaps the coolest part. It takes the cards you are used to on your Android phone and makes some of that information available on Glass. Here is an example of what one of those looks like:


There is clearly a lot of exploring left to do. I'll definitely write up more posts as I get a better impression of the device and/or start building something interesting to play with on it.



Thursday, August 15, 2013

Android and Tesseract (Part 2)

Now that we have an environment with the Tesseract library loaded, we can attempt to write some code utilizing it. I managed to create a simple sample app that captures an image and then spits out whatever text the OCR engine managed to pick up. So let's walk through what you'll need for this.

Classes:

I used a series of four classes, including MainActivity:
1. MainActivity.java - The main activity
2. ExternalStorage.java - Operations for saving the image we'll be using
3. OCRActivity.java - Activity in which we interact with the image and OCR
4. OCROperation.java - Backend call to the OCR library

So let's start by looking at the code for MainActivity.java

package com.rkts.tipassistant;

import java.io.File;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;

public class MainActivity extends Activity {

    protected static final String PHOTO_TAKEN = "photo_taken";

    public static Context appContext;
    protected Button _button;
    protected ImageView _image;
    protected TextView _field;
    protected String _path;
    protected boolean _taken;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        appContext = getApplicationContext();

        ExternalStorage.createImageStore();

        _image = (ImageView) findViewById(R.id.image);
        _field = (TextView) findViewById(R.id.field);
        _button = (Button) findViewById(R.id.button);
        _button.setOnClickListener(new ButtonClickHandler());

        ExternalStorage es = new ExternalStorage();
        _path = es.getImageStore().toString() + File.separator + "receipt.jpg";

        // Clear out any image left over from a previous run.
        File existingReceipt = new File(_path);
        if (existingReceipt.exists()) {
            if (es.deleteExistingReceipt(existingReceipt)) {
                Log.d("Debug(MainActivity)", "Removed existing image");
            } else {
                Log.d("Debug(MainActivity)", "Failed to remove existing image");
            }
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.activity_main, menu);
        return true;
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putBoolean(PHOTO_TAKEN, _taken);
    }

    @Override
    protected void onRestoreInstanceState(Bundle savedInstanceState) {
        super.onRestoreInstanceState(savedInstanceState);
        Log.i("MakeMachine", "onRestoreInstanceState()");
        if (savedInstanceState.getBoolean(PHOTO_TAKEN)) {
            onPhotoTaken();
        }
    }

    public class ButtonClickHandler implements View.OnClickListener {
        public void onClick(View view) {
            startCameraActivity();
        }
    }

    protected void startCameraActivity() {
        // Tell the camera app to write its output to our receipt path.
        File file = new File(_path);
        Uri outputFileUri = Uri.fromFile(file);
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);
        startActivityForResult(intent, 0);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        Log.i("MakeMachine", "resultCode: " + resultCode);
        switch (resultCode) {
            case RESULT_CANCELED: // 0
                Log.i("MakeMachine", "User cancelled");
                break;
            case RESULT_OK: // -1
                onPhotoTaken();
                break;
        }
    }

    protected void onPhotoTaken() {
        _taken = true;
        beginOCROp();
        Log.d("Debug(MainActivity.onPhotoTaken)", "End of method");
    }

    public void beginOCROp() {
        Intent intent = new Intent();
        intent.setClass(this, OCRActivity.class);
        startActivity(intent);
    }
}





What you'll see here is that during onCreate() we set ourselves up to write to device storage using the getImageStore() method. If my memory serves, this works both for devices whose "external" storage is a separate partition of internal storage and for those with actual removable media. We also wire up the basic UI: grab references to the views, set a context for other classes to reference, and attach the click listener for our button. Further down you'll see the ButtonClickHandler, which simply calls startCameraActivity(). That method fires the camera intent, telling it where to store the image file that we will later analyze with the OCR library. The important part is the result that comes back from that intent: onActivityResult() has a switch that determines our course of action. If a photo was actually taken, it calls onPhotoTaken(), which kicks off beginOCROp() and starts the OCRActivity; if the user cancels, we drop back to our original state.

Now let's take a look at ExternalStorage.java

package com.rkts.tipassistant;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import android.content.Context;
import android.content.res.AssetManager;
import android.os.Environment;
import android.util.Log;

/**
 * @author ryan
 * This class is for operations that affect the device external storage.
 */
public class ExternalStorage {

    static File imageStoreDirectory;
    String _path;

    // Creates the image store folder on external storage if it does not already exist.
    public static void createImageStore() {
        File externalStorage = Environment.getExternalStorageDirectory();
        imageStoreDirectory = new File(externalStorage, "tipAssistant");

        Log.d("Debug(ExternalStorage)", "Image store exists?: " + imageStoreDirectory.exists());

        if (!imageStoreDirectory.isDirectory()) {
            imageStoreDirectory.mkdir();
        }

        // Make sure the tesseract language data has been copied over as well.
        new ExternalStorage().copyAssets();
    }

    public boolean deleteExistingReceipt(File receipt) {
        return receipt.delete();
    }

    public File getImageStore() {
        return imageStoreDirectory;
    }

    // Copies the tesseract language data shipped in assets/ into
    // <image store>/tessdata, where TessBaseAPI expects to find it.
    private void copyAssets() {
        Context context = MainActivity.appContext;
        _path = getImageStore().toString() + File.separator + "tessdata";
        File tessdata = new File(_path);
        if (!tessdata.exists()) {
            tessdata.mkdir();
            Log.d("Debug(ExternalStorage.copyAssets)", "Making tessdata dir");
        }

        AssetManager assetManager = context.getAssets();
        String[] files = null;
        try {
            files = assetManager.list("");
        } catch (IOException e) {
            Log.e("Debug(ExternalStorage.copyAssets)", "Failed to get asset file list.", e);
            return;
        }
        for (String filename : files) {
            try {
                InputStream in = assetManager.open(filename);
                OutputStream out = new FileOutputStream(_path + File.separator + filename);
                copyFile(in, out);
                in.close();
                out.flush();
                out.close();
            } catch (IOException e) {
                Log.e("Debug(ExternalStorage.copyAssets)", "Failed to copy asset file: " + filename, e);
            }
        }
    }

    // Standard buffered stream copy.
    private void copyFile(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }
}

This class handles our I/O operations against storage on the device. All we are really looking to do is create the folder where we'll temporarily store the image, make sure the files tesseract requires are in the proper place, and of course save (or delete and re-save, if something is already there) an image for us to analyze. createImageStore() preps the storage environment: it simply checks whether the appropriate folder exists and creates it if not. We have a method to delete the existing image file and return the success or failure of the operation (deleteExistingReceipt()), as well as a method to get the path to the image store (getImageStore()). After that comes what we use to keep tesseract happy: copyAssets() copies onto the device all the resource data (the tessdata language files) tesseract needs in order to operate. Lastly we have copyFile(), which is a straightforward buffered stream copy.
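The buffered copy at the heart of copyFile() is plain Java and can be exercised outside Android. Here is a minimal standalone sketch of the same pattern (the class name and sample payload are made up for the example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyDemo {
    // Same pattern as ExternalStorage.copyFile(): read into a small
    // buffer and write out until the stream is exhausted.
    static void copyFile(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "tessdata payload".getBytes("UTF-8");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyFile(new ByteArrayInputStream(original), out);
        System.out.println(out.toString("UTF-8")); // prints: tessdata payload
    }
}
```

The 1 KB buffer size is arbitrary; anything in the low-kilobyte range is fine for asset files of this size.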

Next we have OCRActivity.java


package com.rkts.tipassistant;

import java.io.File;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.ImageView;
import android.support.v4.app.NavUtils;

public class OCRActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_ocr);

        ExternalStorage es = new ExternalStorage();
        String _path = es.getImageStore().toString() + File.separator + "receipt.jpg";

        // Show a downsampled preview of the captured receipt so the user
        // can compare it against the recognized text.
        try {
            ImageView receipt = (ImageView) findViewById(R.id.previewReceipt);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = 4;
            Bitmap myBitmap = BitmapFactory.decodeFile(_path, options);
            receipt.setImageBitmap(myBitmap);
        } catch (Exception e) {
            e.printStackTrace();
        }

        // Kick off the OCR pass on the same image.
        try {
            OCROperation ocr = new OCROperation(_path);
            ocr.runOCR(_path);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.activity_ocr, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case android.R.id.home:
                // The Up button: navigate up one level in the app structure.
                // See http://developer.android.com/design/patterns/navigation.html#up-vs-back
                NavUtils.navigateUpFromSameTask(this);
                return true;
        }
        return super.onOptionsItemSelected(item);
    }
}


This one is pretty straightforward. The activity's onCreate() retrieves the image we captured, decodes it, and then runs the runOCR() method from the OCROperation class. It also puts the image into an ImageView so you can check the result for correctness when the text comes out the other side.

Lastly we have OCROperation.java. This is perhaps the meat and potatoes of what we are trying to do here.


package com.rkts.tipassistant;

import java.io.File;
import java.io.IOException;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.media.ExifInterface;
import android.os.Environment;

import com.googlecode.tesseract.android.TessBaseAPI;

/**
 * @author ryan
 */
public class OCROperation {

    public OCROperation(String _path) {
    }

    public void runOCR(String _path) throws IOException {
        // _path = path to the image to be OCRed

        // Downsample the image to keep memory usage reasonable.
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 4;
        Bitmap bitmap = BitmapFactory.decodeFile(_path, options);

        // Use the EXIF data to figure out how the camera rotated the image.
        ExifInterface exif = new ExifInterface(_path);
        int exifOrientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);

        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }

        if (rotate != 0) {
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();

            // Rotate the bitmap back to upright and convert to ARGB_8888,
            // the format tess requires.
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
            bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
        }

        // Point tesseract at the directory containing tessdata/ and run it.
        File externalStorage = Environment.getExternalStorageDirectory();
        File baseDir = new File(externalStorage, "tipAssistant");
        String path = baseDir.toString() + File.separator;

        TessBaseAPI baseApi = new TessBaseAPI();
        baseApi.init(path, "eng"); // "eng" = English language data in tessdata/
        baseApi.setImage(bitmap);
        String recognizedText = baseApi.getUTF8Text();
        baseApi.end();

        System.out.println(recognizedText);
    }
}


Here we take the image stored on the device and prep it for use with the OCR library. To do this we make use of ExifInterface and the Bitmap object; in tandem these make sure we hand tesseract the data type it expects, and that the user rotating the device won't throw off the recognition too badly. At the end we instantiate the TessBaseAPI object and use it to pull a string out of the image. Interestingly enough, the actual call that gets you the string is quite simple:


String recognizedText = baseApi.getUTF8Text();

Once you set that string value you can take the information and do anything you'd like with it. The only thing limiting you at that point is your ability to manipulate strings.
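For instance, since this app is aimed at receipts, a natural next step is to fish the total out of the recognized text. A quick sketch of that kind of string manipulation (the class name, regex, and sample OCR output below are purely illustrative, not part of the app above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TotalFinder {
    // Look for a line like "TOTAL 13.49" in the OCR output and
    // return the amount, or null if nothing matched.
    static String findTotal(String recognizedText) {
        Pattern p = Pattern.compile("(?im)^\\s*total\\D*(\\d+\\.\\d{2})\\s*$");
        Matcher m = p.matcher(recognizedText);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String ocrOutput = "BURGER 8.99\nFRIES 3.50\nTOTAL 13.49\n";
        System.out.println(findTotal(ocrOutput)); // prints: 13.49
    }
}
```

Real OCR output is noisier than this, so in practice you'd want a more forgiving pattern (tesseract loves to confuse O/0 and l/1), but the principle is the same.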


I hope this was somewhat informative, as I did have a bit of fun playing around with the library in order to write this. Feel free to comment or shoot me an email if you have questions or concerns.



Friday, June 21, 2013

Android and Tesseract (Part 1)

Over the past few days I've been playing around with the Tesseract native packages, which can be roped into a library for Android applications. This library lets you run optical character recognition on Android mobile devices, which is a rather intriguing concept. The engine itself has been around for a long time; you can read more on its history here: http://en.wikipedia.org/wiki/Tesseract_(software). The story is rather interesting: the software was originally written at Hewlett-Packard between the late '80s and mid '90s, and somewhere down the road it ended up in the hands of Google (around 2006), which made it available for use on Android. So I figured I'd share a bit of my experience with it in two parts. The first will be a brief overview of the setup, and the next will include some sample code I managed to put together.

The set up is a fairly easy task, though it does require a little bit of critical thinking as there are some problems that can be hard to work through even with community resources. Before you start a project of your own, make sure your IDE (in my case Eclipse) has the ability to compile Java and C++. If you need to add this feature on Eclipse you can find it in their Indigo repository by adding it through the Help > Install New Software dialog.



Once you have those things you'll need to download the Tesseract library project files, which you can find here: https://github.com/rmtheis/tess-two. You can either clone the repository or simply download an archive copy; the choice is yours. Once downloaded, import the project into your IDE. When you've finished, make sure you've marked it as an Android library in the project properties. In Eclipse it looks like the screenshot below.



After that you will need to set up the Android NDK (http://developer.android.com/tools/sdk/ndk/index.html) to build the tess-two project. All you have to do here is unpack the archive somewhere accessible and define the path to it in your IDE. In my Eclipse setup the setting is here under the project properties:




Then you just need to run a build and let the IDE do its work. The build can take some time, so I'd suggest finding something else to do while it runs. Once it completes you are ready to use the library in a project. We'll go into actually making use of it in the next post, which I hope to have hammered out this coming week.


Tuesday, May 21, 2013

Apologies and some news

First off I must apologize for neglecting this blog for a while. Over the past couple of months I have been going through a bit of a professional transition that has left me rather occupied and distracted. I will however make the effort to begin posting weekly once more and perhaps more often should time permit.

On to the second agenda item: a bit of news. As you may have gathered from either LinkedIn or a response to a blog comment I left a couple of days ago, I am no longer working at Trustyd. It was a sad and unfortunate turn of events that led to this point, but I have been away from that particular organization since sometime in April. If you see this and have any questions for me about it, or want advice from someone with deep knowledge of the product, feel free to contact me via email (koch.ryan@gmail.com).

In any event I hope to return to the regular scheduled posting here within the next day or so. I will try to think of a riveting topic for all of you to enjoy!

Tuesday, February 12, 2013

Hard drives and Pacific disputes

Reading the title, you might wonder what hard drives and territorial disputes in the Pacific Ocean could possibly have to do with one another. As you may recall, the industry went through a hard drive shortage after the 2011 floods in Thailand, and the reverberations can still be felt in prices today to some degree. One thing to note is that a fair share of the companies with production in Thailand and elsewhere are Japanese-owned. With tensions rising between Japan and China over a set of disputed islands, one might wonder whether a potential conflict could exacerbate the shortage and drive prices up again.

The dispute is over small islands in the East China Sea, known as the Diaoyu in China and the Senkaku in Japan. Recently there have been semi-severe incidents in which provocation was a real risk; one example is a Chinese vessel locking fire-control radar onto a Japanese warship. The Pacific is full of such disputes, especially considering the nine-dash line map China released showing the territory it sees as rightfully its own.

But what does this have to do with hard drive supplies? An overlooked problem is one we faced previously as a bottleneck: the spindle motor. Japan's Nidec, which produces around 80% of hard drive spindle motors, has some portion of its manufacturing based in China. Any conflict could reduce that operation's production capacity and thus limit the number of hard drives available. A conflict could create other problems as well: computer components are manufactured all over East and Southeast Asia, and a clash between China and one of those parties could throw up significant barriers to trade for the duration. In the end the cost would ultimately be paid by consumers, who would pay premiums for technology goods whose supplies are strained.

The next question is how likely all of this is. Personally I believe China and Japan will find it is not in their best interest to pursue a conflict, and that this is a rather unlikely scenario. It is not in China's interest to become a belligerent power, as that runs against its philosophy of a "peaceful rise," which it has been using to assuage the concerns of regional powers. Japan would suffer from losing market access to China and from the loss of manufacturing for companies with plants based there. Ultimately it doesn't look like a positive for either power; however, pride and territorial disputes can make nations act rather irrationally.

Thursday, January 17, 2013

Wing it and start coding

One of the better professional experiences I've had of late is learning how the Android environment works from a developer's perspective. I was tasked with creating an application that had a very simple goal, but the task seemed horribly daunting: while I had taken a course in object-oriented programming and messed around with a few languages in the past, I had never tried to develop anything for mobile. Honestly, I have found the best thing to do is just jump in and start.

Seriously, just start planning the project

As with a lot of things, the first step is the hardest. I spent a long time reading through random portions of the Android API documentation (http://developer.android.com/develop/index.html). Eventually, though, you have to actually start designing the project and then coding it. So one day I just started jotting down what the thing was supposed to do on a whiteboard. After writing out each individual task the application needed to achieve, it was easy to break them down into individual methods and classes. At that point you have a path, a checklist of all the things you need to learn how to do in Java using the Android APIs.

For example I needed to write an app to interpret XML data, store it and then display it to the user on demand, and complete the parse/download on a background thread. So breaking it down the tasks are:

- Download XML data and parse it
- Create storage space for parsed result
- Create some sort of UI to view results stored in a Database
- Start the download/parse on some sort of regular schedule

Those four tasks can then be broken up into individual methods and classes. For example, to store information in a database I needed to write a database handler to create it, define its schema, and define all the I/O methods (more or less the CRUD operations: create, read, update, delete). One thing worth reading is Oracle's beginner's guide to Java, which also covers object-oriented thinking, since Java is that sort of language (link: http://docs.oracle.com/javase/tutorial/java/concepts/).
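To make that decomposition concrete, here is a toy sketch of what the database handler's skeleton looks like once it's broken into CRUD methods. The class and method names are invented for illustration, and it uses a plain in-memory map so it runs anywhere; the real handler backed these same methods with SQLite:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for an SQLite-backed handler: one method per CRUD
// operation, which is exactly how the whiteboard checklist
// translated into code.
public class FeedHandler {
    private final Map<Integer, String> rows = new LinkedHashMap<>();
    private int nextId = 1;

    public int create(String item) {
        rows.put(nextId, item);
        return nextId++;
    }

    public String read(int id) {
        return rows.get(id);
    }

    public boolean update(int id, String item) {
        return rows.replace(id, item) != null;
    }

    public boolean delete(int id) {
        return rows.remove(id) != null;
    }

    public static void main(String[] args) {
        FeedHandler db = new FeedHandler();
        int id = db.create("first parsed entry");
        db.update(id, "revised entry");
        System.out.println(db.read(id)); // prints: revised entry
    }
}
```

On Android the same shape falls out naturally from subclassing SQLiteOpenHelper, with each of these methods wrapping a query.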

Start using Google to find tutorials for everything

In my experience, more or less everything I was trying to do had been done by someone else in the past in some form, and was documented. It's actually really easy to search for and then figure out how to write classes and methods for a whole variety of tasks. For example, I needed to figure out how to parse XML, found an excellent tutorial on that piece, and combined it with the lessons learned from a tutorial on SQLite (an embedded database).
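To give a flavor of what that XML piece looks like once assembled, here is a minimal sketch using the DOM parser that ships with the JDK (and Android). The element names and sample document are invented for the example; in the real app the input stream came from an HTTP download:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XmlDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<feed><item>alpha</item><item>beta</item></feed>";
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        // Walk every <item> element and print its text content --
        // this is the point where you'd insert rows into the database.
        NodeList items = doc.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            System.out.println(items.item(i).getTextContent());
        }
    }
}
```

For large feeds a streaming parser (SAX, or XmlPullParser on Android) uses less memory, but DOM is the easiest place to start.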

Outside of Android work I've also found that Codecademy is a pretty cool place to learn about coding. The interactive projects are actually rather good and do an excellent job of teaching you how a language works. It's perfect for beginners or someone picking up a new language for kicks. Here's a link: www.codeacademy.com

While this article may seem a bit aimless, the point is to share that coding is fun and easy to pick up if you look in the right places. The internet is filled with pretty much everything you need: API docs, SDK docs, and tutorials. The best part is most of it is completely free. So go ahead, wing it, and start coding!

Wednesday, December 19, 2012

Linux server performance

In my daily tasks I deal with a lot of Linux servers, and from time to time I tweak them for performance depending on what they are doing. A lot of the units I deal with run a postgres database and some sort of data store for a custom application (usually served via tomcat). I've found three easy knobs to play with to get the most out of such a system, especially when resources on the box are fairly limited: the swappiness value, the I/O scheduler, and the renice command run from a script called via crontab.

Swappiness

The swappiness value is how systems administrators and engineers tell the Linux kernel how aggressively to swap pages of memory out to disk, as opposed to keeping them in RAM. Most default installations set it to 60, which is supposed to be a balanced number (the range is 0-100). In my situation, running a lot of database operations, I've found that a higher value seems to help free up memory for postgres-related processes where otherwise idle system processes would have been holding on to it. This has been particularly effective on application servers with just barely enough memory to get by.

You can adjust the swappiness value two ways. The first is more of a testing/temporary measure and can be done by using the following command (via the terminal):

sysctl -w vm.swappiness=(value you'd like to set it to)

You can also make the change by writing to /proc/sys/vm/swappiness, though note that neither method survives a reboot; for a permanent change, add a vm.swappiness = <value> line to /etc/sysctl.conf. Exercise a bit of caution either way, as it takes some monitoring to make sure you aren't starving vital processes when you change memory behavior.

I/O Scheduler

CFQ (Completely Fair Queuing)
If my memory is still serving me well, this is the default on most Linux distributions. It's a general-purpose setting with decent performance across a wide range of configurations: it attempts to balance resources evenly across multiple I/O requests and multiple I/O devices, which makes it a good fit for desktops and general-purpose servers.

Deadline
This one is particularly interesting: it more or less maintains five different queues and reorders tasks to maximize I/O performance while capping latency, aiming for near-real-time results. It also distributes resources in a manner that avoids having any process lose out entirely. It seems to be great for things like database servers, assuming the bottleneck in your particular case isn't CPU time.

Noop
This is a particularly lightweight scheduler and attempts to reduce CPU latency by reducing the amount of sorting occurring in the queue. It assumes that the device(s) you are using have a scheduler of their own that is optimizing the order of things.

Anticipatory
This scheduler uses a slight delay on I/O operations in order to sort them in a manner that is most efficient based on the physical location of the data on disk. This tends to work out well for slower disks, and older equipment. The delay can cause a higher level of latency as well.

In choosing a scheduler you have to consider exactly what the system is doing. In my case, as I said before, I'm administering application/database servers under a fair amount of load, so I've chosen the deadline scheduler. If you'd like to read about these in more detail, check out this Red Hat article (it's old but still has decent information): http://www.redhat.com/magazine/008jun05/features/schedulers/

You can change your scheduler on the fly by using:
echo <scheduler> > /sys/block/<disk>/queue/scheduler

Or in a more permanent manner (survives reboot) by editing the following file:
/boot/grub/grub.conf
You'll need to add 'elevator=<scheduler>' to the kernel line.

Using renice

Part of what my boxes do is serve up a web interface for users to interact with. When other tasks are running and the load spikes, access to this interface can become quite sluggish. In my scenario tomcat is the web services application, and it launches with a nice value of 0 (the normal user priority; the range runs from -20 to 19, with lower being more important). The problem is that postgres also runs at that priority, so when it is loaded up with queries the two are on equal footing when fighting for CPU time. To improve the user experience I've set the tomcat process's priority to -1, letting it take CPU time as needed when users interact with the server. I've done this with a rather crude bash script and a crontab entry (added via crontab -e).

The script
--

#!/bin/bash
# Grab the PID(s) of the tomcat process and bump them to priority -1.
tomcatPids="$(pgrep -f tomcat)"
[ -n "$tomcatPids" ] && renice -1 -p $tomcatPids
--
The crontab entry:
--
*/10 * * * * sh /some/path/here/reniceTomcat
--


The script above pulls the tomcat process ID and then runs renice against it. The crontab entry just calls the script periodically to make sure the process stays at that priority; in this case it runs every 10 minutes, but it can be set to just about any schedule. To read more on cron scheduling, check out this article: http://www.debian-administration.org/articles/56.