Thursday, May 8, 2014

Spring Boot is pretty cool

In the not-too-distant past I sold the powers that be at my workplace on building an API layer for some of our databases, in order to make future projects easier. That of course led to research on what I'd use to build such a thing. I ended up playing with a few different tools and landed on Spring Boot as a favorite. Admittedly, one of the reasons I gave it a shot was my dislike of editing XML configuration files, but once I started working with it I found it to be perfect for the type of system I wanted to build. It took care of a crap ton of boilerplate and configuration, then stood back to watch me code the business logic, and it ended up being a great fit for the microservices architecture I wanted to roll out.

So I've given out some pretty hefty praise thus far, but I suppose you're wondering how, specifically, I came to these conclusions. One of the best demonstrations is to look at a simple method from one of the controllers I've got set up.


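A minimal sketch of such a controller method might look like the following (the `Consumer` type, the endpoint path, and the stored procedure name `spGetConsumer` are illustrative assumptions, not the actual names from my project):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ConsumerController {

    // Configured automatically from the spring.datasource.* entries
    // in application.properties
    @Autowired
    private DataSource dataSource;

    @RequestMapping("/consumer")
    @ResponseBody
    public Consumer getConsumer(@RequestParam String email,
                                @RequestParam String password) {
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        // Call the stored procedure and map the single result row to a Consumer
        return jdbc.queryForObject(
                "EXEC spGetConsumer ?, ?",
                new Object[] { email, password },
                new RowMapper<Consumer>() {
                    public Consumer mapRow(ResultSet rs, int rowNum) throws SQLException {
                        Consumer c = new Consumer();
                        c.setEmail(rs.getString("email"));
                        c.setName(rs.getString("name"));
                        return c;
                    }
                });
    }

    // Illustrative POJO; the real one holds whatever fields the procedure returns
    public static class Consumer {
        private String email;
        private String name;
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}
```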

There are a couple of things to notice in the gist above, including a number of annotations that help make everything rather easy. The first you'll notice is the @Autowired annotation above my DataSource object. It allows the application to take the configuration in my application.properties file and automatically configure the DataSource with those settings. That configuration file is incredibly easy to set up, as it just looks like this:

spring.datasource.driverClassName=net.sourceforge.jtds.jdbc.Driver
spring.datasource.url=jdbc:jtds:sqlserver://server/database
spring.datasource.username=username
spring.datasource.password=password

The @RequestMapping, @ResponseBody, and @RequestParam annotations are pretty self-explanatory, which leaves the JdbcTemplate object. Since most of the functionality of a web service like this is simply pushing data around to where it needs to go, being able to call stored procedures in a way that isn't super verbose is certainly a win. In the case above we're sending email and password parameters over to the database and getting back data about a 'consumer.' The consumer is then built into an object through the use of a RowMapper and returned. Were we selecting a bunch of consumers instead, this method wouldn't change at all, save for returning a List<Consumer> rather than a single Consumer.

Lastly on this ramble, I want to touch on how neat this is for microservices. In the application I'm building, for example, there are a couple of different databases I want to push data between, and those databases have very different purposes. I could build one monolithic application to handle all of it, or I could take a more incremental approach and build each logical piece as a separate service (which is what I decided to do). Spring Boot makes this easy to implement because you can export jar files with an embedded Tomcat instance built in. That lets you take the jar to more or less the server of your choice (with Java installed, of course) and just kick it off. When you need to redeploy something, you merely stop the jar, replace it, and kick it off again. As you can imagine, it would be quite easy to automate this via script or your favored management tool. You also gain a bit of resilience with this model, as one service going down need not affect the others, be it for maintenance or some sort of catastrophic event. Instead of losing everything, you lose functionality for one logical business interest. I could ramble on about microservices for another few paragraphs, but instead of punishing you with that I offer you an excellent article on the subject.
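The stop/replace/restart cycle described above can be scripted in a handful of lines; something like the following sketch (the service name, paths, and log file here are all hypothetical):

```shell
#!/bin/sh
# Hypothetical locations; adjust for your own environment.
APP_DIR=/opt/services/consumer-api
JAR=consumer-api.jar

# Stop the running service, if there is one.
PID=$(pgrep -f "$JAR") && kill "$PID"

# Swap in the freshly built jar and kick it off again.
cp /tmp/build/"$JAR" "$APP_DIR/$JAR"
nohup java -jar "$APP_DIR/$JAR" > "$APP_DIR/service.log" 2>&1 &
```

Point your favored management tool at a script like this and redeploys become a one-command affair.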

That concludes my gushing about Spring Boot. Granted, my exposure to it is pretty cursory, but the experience has been positive thus far. Feel free to comment or email me if you have any questions or opinions to share. Happy coding!

Saturday, January 18, 2014

Glass Sommelier Part 2

Hello again guys, I come bearing a bit of news on my first Glass project. I've finished version 0.1 of Glass Sommelier, and while it's rather basic, it has taught me a bit about how native Glass development works. Right now the application has the following functionality:

  1. Search for wines (made a bit more intelligent with your location built in)
  2. Add a maximum price to your search by using the phrase 'under (number) dollars'
  3. Return a list of wines descending in rating
  4. Save wines to your timeline


Eventually I want to integrate some other ways to save and share the wine you wish to enjoy. For now, though, I figure it's time to release the bare-bones version and let people play with it. If you are a fellow Glass Explorer and would like to try my app, you can get the APK here. Feel free to ping me if you have any thoughts, questions, or really anything else. Be gentle though; as I said, this is my first little experiment.


I'm hoping to start working on another fun project soon. Once I get a bit further in my coding for that I'll share some details. For now, here are some Glass Sommelier vignettes.

Tuesday, December 17, 2013

Glass Sommelier Part 1

Today I'm posting to let you guys know about a small project I'm working on that I'm rather excited about. The application is called Glass Sommelier and is a piece of Google Glassware that will allow the user to search for wine. The app can be launched by using the touch interface or by saying "Ok Glass, find wine." Once the user has completed a voice search they are then able to use the touch interface to scroll through the results. At that point the user is able to select a wine and then make use of a number of sharing methods in order to save the wine information. I haven't decided exactly which methods will be available at the start though one of them will be to email it to yourself.

For the longer term I'm working on a sort of wine concierge, which involves a bit of secret sauce. The goal of this portion of the application is to find just the right sort of wine based on a series of questions posed by the Glass device upon your request. This portion is probably still a month or two off, but I will definitely post when it's complete.

Once the app is functional and stable enough I'll post an APK so that my fellow Glass Explorers can sideload it. In the meantime, here are some teaser screen caps I took using it at my home workstation. Enjoy!

Tuesday, November 12, 2013

Google Glass first impressions

So today ended up being a rather interesting day. I managed to get an invite to the Google Glass Explorer program thanks to a friend, and I spent the day getting to know the device. I imagine this is the beginning of what will be a series of posts about Glass, and hopefully a few of them will be about some APKs I'm going to attempt to build and sideload onto it. So what are my first impressions?

It's exciting. Walking through this thing is like entering a new world. The gesturing between tiles and the use of the interface feel normal, but it's one of those things where you are keenly aware that you are doing something vastly different than the norm. After that feeling fades, one starts to figure out what they should and shouldn't use it for. Throughout the day I answered a couple of phone calls with Glass, read the gist of a few emails, and shared a couple of photos. I haven't attempted a video call with it yet, but that is certainly on my to-do list. I have to say I'm pleasantly surprised thus far with the audio quality of the bone-conduction tech, but I will certainly also mess around with the earbud a little later. Below I've put an image I took with the camera, and a vignette I took to show the Music APK in action:



As you can see, the camera isn't too bad. It does get a little grainy in lower-light scenarios, like in the second photo there, but thus far I'm happy enough with it. I'll have to take a collection of shots in different conditions to get a true judgment.

As implied, I also messed around with the new Music APK. Apparently there is now a Google Music APK floating around that you can load using adb in order to get the streaming service onto Glass (instructions can be found here: http://phandroid.com/2013/11/11/google-play-music-google-glass/ and instructions on the adb/debugging part can be found here: http://glassdev.blogspot.com/). I've played around with it a bit and have found the audio quality to be adequate, though nothing to write home about. I still need to test with the earbuds, however, which I imagine will give some improvement.
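For reference, once debug mode is enabled the sideloading itself boils down to a couple of adb commands; roughly this (the APK filename below is just a placeholder for whatever you downloaded):

```shell
# Glass plugged in over USB, with debug mode enabled on the device
adb devices                    # confirm Glass shows up in the device list
adb install google-music.apk   # placeholder filename for the Music APK
```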

Lastly, the Google Now integration is perhaps the coolest part. It takes the tiles you are used to on your Android phone and makes some of that information available to you on Glass. Here is an example of what one of those looks like:


There is clearly a lot of exploring left to do. I'll definitely write up some more posts as I get a better impression of the device and/or start building something interesting to play with on it.



Thursday, August 15, 2013

Android and Tesseract (Part 2)

Since we have an environment with the Tesseract library loaded, we can now write some code utilizing it. I managed to create a simple sample app that can capture an image and then spit out the text the OCR managed to pick up. So let's walk through what you'll need for this.

Classes:

I used a series of four classes, including MainActivity:
1. MainActivity.java - The main activity
2. ExternalStorage.java - Operations for saving the image we'll be using
3. OCRActivity.java - Activity in which we interact with the image and OCR
4. OCROperation.java - Backend call to the OCR library

So let's start by looking at the code for MainActivity.java

package com.rkts.tipassistant;

import java.io.File;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;

public class MainActivity extends Activity {

    public static Context appContext;
    protected Button _button;
    protected ImageView _image;
    protected TextView _field;
    protected String _path;
    protected boolean _taken;
    String testURI;

    protected static final String PHOTO_TAKEN = "photo_taken";

    Bitmap globalBitmap;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        appContext = getApplicationContext();

        ExternalStorage.createImageStore();

        _image = (ImageView) findViewById(R.id.image);
        _field = (TextView) findViewById(R.id.field);
        _button = (Button) findViewById(R.id.button);
        _button.setOnClickListener(new ButtonClickHandler());

        ExternalStorage es = new ExternalStorage();
        _path = es.getImageStore().toString() + File.separator + "receipt.jpg";

        // Clear out any image left over from a previous run.
        File existingReceipt = new File(_path);
        if (existingReceipt.exists()) {
            boolean deleteSuccess = es.deleteExistingReceipt(existingReceipt);
            if (deleteSuccess) {
                Log.d("Debug(MainActivity): ", "Removed existing image");
            } else {
                Log.d("Debug(MainActivity): ", "No existing image to remove");
            }
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.activity_main, menu);
        return true;
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        outState.putBoolean(MainActivity.PHOTO_TAKEN, _taken);
    }

    @Override
    protected void onRestoreInstanceState(Bundle savedInstanceState) {
        Log.i("MakeMachine", "onRestoreInstanceState()");
        if (savedInstanceState.getBoolean(MainActivity.PHOTO_TAKEN)) {
            onPhotoTaken();
        }
    }

    public class ButtonClickHandler implements View.OnClickListener {
        public void onClick(View view) {
            startCameraActivity();
        }
    }

    protected void startCameraActivity() {
        File file = new File(_path);
        Uri outputFileUri = Uri.fromFile(file);
        testURI = outputFileUri.toString();

        // Ask the camera app to capture an image straight into our receipt file.
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);
        startActivityForResult(intent, 0);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        Log.i("MakeMachine", "resultCode: " + resultCode);
        switch (resultCode) {
            case RESULT_CANCELED: // 0
                Log.i("MakeMachine", "User cancelled");
                break;
            case RESULT_OK: // -1
                onPhotoTaken();
                break;
        }
    }

    protected void onPhotoTaken() {
        _taken = true;
        beginOCROp();
        Log.d("Debug(MainActivity.onPhotoTaken):", "End of method");
    }

    public void beginOCROp() {
        Intent intent = new Intent();
        intent.setClass(this, OCRActivity.class);
        startActivity(intent);
    }
}





What you'll see here is that during onCreate() we set ourselves up to write to device storage by using the getImageStore() method. If memory serves, this works both for devices with separated internal storage and for those with actual removable external storage. We also set up our basic UI elements, set a context to reference in other classes, and attach an onClickListener to our simple UI button. Further down you'll see the ButtonClickHandler, which simply calls the startCameraActivity() method. That method preps our use of the camera to store the image file that we will then analyze with the OCR library. One important thing about startCameraActivity() is the result that comes back from it. That result is handled in onActivityResult(), which has a switch that determines our course of action: if a photo was actually taken (RESULT_OK), we call onPhotoTaken(), which kicks off beginOCROp() to start the OCRActivity; if the user cancels, we stay in our original state.

Now let's take a look at ExternalStorage.java

package com.rkts.tipassistant;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import android.content.Context;
import android.content.res.AssetManager;
import android.os.Environment;
import android.util.Log;

/**
 * @author ryan
 * This class is for operations that affect the device's external storage.
 */
public class ExternalStorage {

    String _path;
    static File imageStoreDirectory;

    public ExternalStorage() {
    }

    // Creates the image store folder on external storage if it does not already exist.
    public static void createImageStore() {
        File externalStorage = Environment.getExternalStorageDirectory();
        imageStoreDirectory = new File(externalStorage + File.separator + "tipAssistant");

        if (!imageStoreDirectory.exists() && !imageStoreDirectory.isDirectory()) {
            imageStoreDirectory.mkdir();
        }

        ExternalStorage es = new ExternalStorage();
        es.copyAssets();
    }

    public boolean deleteExistingReceipt(File receipt) {
        return receipt.delete();
    }

    public File getImageStore() {
        return imageStoreDirectory;
    }

    // Copies the Tesseract resource data shipped in assets/ into the tessdata folder.
    private void copyAssets() {
        Context context = MainActivity.appContext;
        _path = getImageStore().toString() + File.separator + "tessdata";
        File tessdata = new File(_path);
        if (!tessdata.exists()) {
            tessdata.mkdir();
            Log.d("Debug(ExternalStorage.copyAssets):", "Making tessdata dir");
        }

        AssetManager assetManager = context.getAssets();
        String[] files = null;
        try {
            files = assetManager.list("");
        } catch (IOException e) {
            Log.e("tag", "Failed to get asset file list.", e);
        }
        for (String filename : files) {
            try {
                InputStream in = assetManager.open(filename);
                OutputStream out = new FileOutputStream(_path + File.separator + filename);
                copyFile(in, out);
                in.close();
                out.flush();
                out.close();
            } catch (IOException e) {
                Log.e("tag", "Failed to copy asset file: " + filename, e);
            }
        }
    }

    private void copyFile(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }
}

This class handles our I/O operations with storage on the device. All we are really looking to do is create the folder we wish to store the image in temporarily, make sure the files Tesseract requires are in the proper place, and of course save (or delete and re-save, if something is already there) an image for us to analyze. createImageStore() preps the storage environment for our use: it checks whether the appropriate folder structure exists and creates it if it does not. We have a method to delete the existing image file and return the success or failure of that operation (deleteExistingReceipt()), as well as a method to get the path to the image store (getImageStore()). After that comes what we use to keep Tesseract happy: the copyAssets() method copies over all the resource data Tesseract needs on the device in order to operate properly. Lastly we have copyFile(), which is pretty self-explanatory and just streams the data through a byte buffer.

Next we have OCRActivity.java


package com.rkts.tipassistant;

import java.io.File;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.support.v4.app.NavUtils;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.ImageView;

public class OCRActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_ocr);

        ExternalStorage es = new ExternalStorage();
        String _path = es.getImageStore().toString() + File.separator + "receipt.jpg";

        try {
            // Show a downsampled preview of the captured receipt.
            ImageView receipt = (ImageView) findViewById(R.id.previewReceipt);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = 4;
            Bitmap myBitmap = BitmapFactory.decodeFile(_path, options);
            receipt.setImageBitmap(myBitmap);
        } catch (Exception e) {
            e.printStackTrace();
        }

        try {
            // Hand the image off to the OCR layer.
            OCROperation ocr = new OCROperation(_path);
            ocr.runOCR(_path);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.activity_ocr, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case android.R.id.home:
                // The Up button: navigate up one level in the app structure.
                NavUtils.navigateUpFromSameTask(this);
                return true;
        }
        return super.onOptionsItemSelected(item);
    }
}


This one is pretty straightforward. The activity's onCreate() attempts to retrieve the image we created, decode it, and then run the runOCR() method from the OCROperation class. The activity also puts the image in an ImageView so you can see what you are working with when the text comes out the other side (to check for correctness).

Lastly we have OCROperation.java. This is perhaps the meat and potatoes of what we are trying to do here.


package com.rkts.tipassistant;

import java.io.File;
import java.io.IOException;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.media.ExifInterface;
import android.os.Environment;

import com.googlecode.tesseract.android.TessBaseAPI;

/**
 * @author ryan
 */
public class OCROperation {

    ExternalStorage es = new ExternalStorage();

    public OCROperation(String _path) {
    }

    public void runOCR(String _path) throws IOException {
        // _path = path to the image to be OCRed
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 4;
        Bitmap bitmap = BitmapFactory.decodeFile(_path, options);

        // Read the EXIF orientation so the bitmap can be rotated upright.
        ExifInterface exif = new ExifInterface(_path);
        int exifOrientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);

        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }

        if (rotate != 0) {
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();

            // Rotate the bitmap and convert it to ARGB_8888, as Tesseract requires.
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
            bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
        }

        // Point Tesseract at the directory containing tessdata/ and run recognition.
        TessBaseAPI baseApi = new TessBaseAPI();
        File externalStorage = Environment.getExternalStorageDirectory();
        File baseDir = new File(externalStorage + File.separator + "tipAssistant");
        String path = baseDir.toString() + File.separator;
        baseApi.init(path, "eng"); // "eng" = the English language data in tessdata/
        baseApi.setImage(bitmap);
        String recognizedText = baseApi.getUTF8Text();
        baseApi.end();

        System.out.println(recognizedText);
    }
}


Here we take the image stored on the device and prep it for use with the OCR library. To do this we make use of ExifInterface and the Bitmap object; in tandem these make sure we have the appropriate data type to analyze and that the user rotating the device won't screw us up too badly. At the end we create and initialize the TessBaseAPI object, which is used to pull a string out of the image. Interestingly enough, the actual call that gets you the string is quite simple:


String recognizedText = baseApi.getUTF8Text();

Once you set that string value you can take the information and do anything you'd like with it. The only thing limiting you at that point is your ability to manipulate strings.
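As a trivial example of that kind of string manipulation, here's a small sketch of my own (not part of the app code above) that pulls a dollar total out of OCR output with a regular expression:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReceiptParser {

    // Matches a line starting with "TOTAL" (case-insensitive),
    // followed by a dollars-and-cents amount.
    private static final Pattern TOTAL =
            Pattern.compile("(?im)^total\\b\\D*(\\d+\\.\\d{2})");

    // Returns the first total found in the OCR text, or null if none is present.
    public static String extractTotal(String ocrText) {
        Matcher m = TOTAL.matcher(ocrText);
        return m.find() ? m.group(1) : null;
    }
}
```

Feed it the recognizedText string from runOCR() and you get back something you can actually compute a tip from.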


I hope this was somewhat informative, as I had a bit of fun playing around with the library in order to write it. Feel free to comment or shoot me an email if you have questions or concerns.



Friday, June 21, 2013

Android and Tesseract (Part 1)

Over the past few days I've been playing around with the Tesseract native packages that one can rope into a library for Android applications. This library lets you conduct optical character recognition on Android mobile devices, which is a rather intriguing concept. The ability to do this has been around in open form for some time (2006), and you can read more on its history here: http://en.wikipedia.org/wiki/Tesseract_(software). The story is rather interesting: the software was originally written at Hewlett-Packard in the late '80s and early '90s, and somewhere down the road it ended up in the possession of Google and thereafter available for use on Android. So I figured I'd share a bit of my experience with it in two parts. The first will be a brief overview of the setup, and the next will include some sample code I managed to put together.

The setup is a fairly easy task, though it does require a bit of critical thinking, as there are some problems that can be hard to work through even with community resources. Before you start a project of your own, make sure your IDE (in my case Eclipse) can compile both Java and C++. If you need to add this to Eclipse, you can find it in the Indigo repository via the Help > Install New Software dialog.



Once you have those things, you'll need to download the Tesseract library project files, which you can find here: https://github.com/rmtheis/tess-two. You can either clone the repository or simply download an archive copy; the choice is yours. Once downloaded, simply import the project into your IDE. When you've finished, make sure you have checked off in the project properties that it is indeed an Android library. In Eclipse it looks like the screenshot below.



After that you will need to make sure you have set up the Android NDK (http://developer.android.com/tools/sdk/ndk/index.html) with the tess-two project. All you have to do here is unpack the archive somewhere accessible and define the path to it in your IDE. In my Eclipse setup the setting is here under the project properties:




Then you just need to run a build and let the IDE do its work. The build can take some time, so I'd suggest finding something to do while it runs. After the build completes, you are ready to use the library in a project. We will get into actually making use of it in the next post, which I hope to have hammered out this coming week.
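If you'd rather drive the build from the command line than through Eclipse, the sequence looks roughly like this; the exact steps can vary between tess-two versions, so treat it as a sketch (it assumes the NDK and Android SDK tools are on your PATH):

```shell
# Grab the library project
git clone https://github.com/rmtheis/tess-two.git
cd tess-two/tess-two

# Compile the native (C++) side of the library with the Android NDK
ndk-build

# Generate the Android build files, then build the Java side
android update project --path .
ant release
```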


Tuesday, May 21, 2013

Apologies and some news

First off, I must apologize for neglecting this blog for a while. Over the past couple of months I have been going through a bit of a professional transition that has left me rather occupied and distracted. I will, however, make the effort to begin posting weekly once more, and perhaps more often should time permit.

On to the second agenda item: a bit of news. As one may have found out from either LinkedIn or a response to a blog comment I left a couple of days ago, I am no longer working at Trustyd. It is a sad and unfortunate turn of events that led to this point, but I have been away from that particular organization since sometime in April. If you see this and have any questions for me about it, or wish for advice from someone with deep knowledge of the product, feel free to contact me via email (koch.ryan@gmail.com).

In any event, I hope to return to the regularly scheduled posting here within the next day or so. I will try to think of a riveting topic for all of you to enjoy!