Saturday, March 26, 2011

Droidcon 2011 Recap

This week I attended the droidcon in Berlin, and I want to share some impressions from the conference.

So what is the droidcon all about?

In short:
The droidcon is Germany's biggest Android conference, targeting developer interests and business trends. It is a two-day conference with sessions, talks, and hands-on workshops. Topics usually include general platform features, third-party services, device introductions, business strategies, future outlooks, and so on.

If you want to know more about the conference itself visit their site.

As I already said, the talks covered all kinds of topics. Here is a short wrap-up of the ones I found most interesting.

Android Test Automation by Julian Harty
Julian gave an overview of various test automation techniques and frameworks; a very useful list of those resources can be found here. One project in particular got my attention, as I have been struggling with Activity tests in the past. Julian introduced Matthias Keppler, the creator of Calculon, who gave a brief overview of his project. Calculon is a DSL for Activity testing: it gets rid of the boilerplate code and defines tests which are also readable by non-developers. This might be worth checking out.
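
To give a feel for the idea, a hypothetical Calculon-style test might read like this (identifiers and chained method names are illustrative only, not necessarily Calculon's actual API; check the project documentation for the real DSL):

// Hypothetical sketch of a DSL-style Activity test: assert that clicking
// a button launches another Activity. R.id.login_button and MainActivity
// are made-up names for illustration.
public void testClickingLoginStartsMainActivity() {
    assertThat(R.id.login_button).click().starts(MainActivity.class);
}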

A new dimension of Android - 3D and more by Erik Hellman
Simply put, this talk was just amazing. Erik Hellman is a Lead Software Architect for Android development at Sony Ericsson, and he showed what is possible if you integrate OpenGL into your application's UI. Examples included shadow and light processing on View elements depending on the device's tilt, 2D-to-3D conversion of the background image, page turning effects, and many more. All of that was done in realtime, which made the user experience so much richer. He has already provided a brief blog post, but you should also check the official presentation once it is released.

Android in the Field of Augmented Reality by Peter Meier
Peter Meier from metaio presented the junaio AR app and its possibilities when it comes to projecting 3D models or data onto 2D images, and he gave a look into the future of AR. He also gave some insight into the difficulties of image recognition. You can participate in their developer program, create your own AR channel, and use their recognition capabilities for your own AR needs. Here is a link to their developer area.


Hidden Treasures: Open Android Apps by Friedger Müffke
One of the initiators of the droidcon itself gave a talk about the openness of the Android platform and how hard it is to keep track of, or even get information about, the Intents an app can process. He presented his organization's database of registered Intents: in the Intents registry you can search for Intents and libraries which might fulfill a certain use case. The site can be found here.

Workshop for using LiveView by Sony Ericsson
Sony Ericsson held a workshop to promote their LiveView device, a small gadget which connects to your Android device via Bluetooth. Basically, it is a secondary screen for keeping track of notifications like incoming calls, Facebook and Twitter events, and so on. There is even a plugin mechanism to extend its functionality.

How do you Honeycomb? Android for Tablets by Sparky Rhode
Sparky Rhode from Google Munich presented some of the new capabilities of Android 3.0 tablets and stressed the importance of adapting your apps to tablet devices. He also talked briefly about backwards compatibility of the Honeycomb features via a static library which works from version 1.6 upwards. A how-to for using the Android Compatibility Package can be found on his blog.

IS24 Dev Contest by Immobilien Scout GmbH
Immobilien Scout is Germany's biggest real estate internet portal and also my employer; they made it possible for me to attend this year's droidcon. As a mobile developer for IS24 I had the chance to participate in the internal developer contest some weeks ago. The outcome was great, and a lot of applications and ideas were produced. The external developer contest was announced on the first day of the conference and has received good feedback so far. The contest is about integrating the IS24 REST API into your app: you can access functionality like auto-completion for regions, exposé search, reference prices for regions, and so on. Cash prizes are waiting for the winners of the contest. For complete information about the contest refer to this site. If you are like me and like to tinker around with APIs, you might want to give the contest a try. My app was shown as one of the internal examples; if you are interested in seeing it, here is a link to a video demo.


All in all it was a pretty good conference, with lots of information and interesting people to meet. Make sure to check the droidcon site over the next few days, as they promised to put up the presentations and other useful resources. Thanks to the people who made it happen, and I hope to see you around next year.

Sunday, March 20, 2011

Android and Accessibility

Today I want to show you how to enhance your Android applications by providing better accessibility. Android has very good built-in features to support this; in this post I want to concentrate on the text-to-speech and speech recognition features. As there are already articles and API demos on the Google Android developer pages, I will only cover the basics and provide you with an aggregated example of a text-to-speech and speech recognition application.



Text to Speech

With version 1.6 (Donut), Android introduced the text-to-speech capability to the platform. The TTS engine supports several languages: English, French, German, Italian and Spanish. Even accents are supported, for British or American English.

To check whether TTS is installed on your device, you can use Android's Intent mechanism by calling the startActivityForResult() method with a predefined Intent.
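
A minimal sketch of that check (the request code constant is arbitrary and matches the full example at the end of the post):

// ask the system whether the TTS voice data is installed
Intent checkTTSIntent = new Intent();
checkTTSIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkTTSIntent, TTS_ACTIVITY_RESULT_CODE);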

In your onActivityResult() method you can then instantiate a new TextToSpeech object. For the case that TTS data is not installed on your device, Google already provides an installation Intent which you can trigger.
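
In sketch form (this mirrors the corresponding branch of the full example below):

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TTS_ACTIVITY_RESULT_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            // voice data is present, create the TTS instance
            mTts = new TextToSpeech(this, this);
        } else {
            // voice data is missing, trigger its installation
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
    super.onActivityResult(requestCode, resultCode, data);
}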

Use the example from the article provided by Google for your integration and for further detailed information about the text-to-speech feature. It can be found here.

Note that you have to implement the OnInitListener interface to determine when the TTS engine is initialized. Afterwards you can configure it to your needs, for example by providing a certain Locale.

Before configuring the engine for a specific Locale, make sure to check whether it is supported by your system and whether the corresponding resources are installed.
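
For example (a sketch; isLanguageAvailable() returns one of several LANG_* constants, where any non-negative value means the language is usable):

// LANG_AVAILABLE, LANG_COUNTRY_AVAILABLE and LANG_COUNTRY_VAR_AVAILABLE are
// all non-negative; negative values mean missing data or no support
if (mTts.isLanguageAvailable(Locale.getDefault()) >= TextToSpeech.LANG_AVAILABLE) {
    mTts.setLanguage(Locale.getDefault());
}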

If everything is set up, it is time to let your device talk to YOU for a change.
This is done by calling the speak() method on your TTS object, which takes three parameters.

The first parameter is the text you want to have spoken.

The second parameter is the queue mode.
The TTS engine holds a queue of texts to be spoken and works through it one after another. To simply append a new text to the queue, use the TextToSpeech.QUEUE_ADD mode; to clear the current queue and start a new one, use the TextToSpeech.QUEUE_FLUSH mode.

The third parameter is an optional map of key/value parameters for the engine.
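
A short sketch of both queue modes and the optional parameter map (a java.util.HashMap; the utterance ID value is arbitrary):

// flush the queue and start speaking immediately
mTts.speak("Hello world", TextToSpeech.QUEUE_FLUSH, null);

// append a second text which is spoken once the first one has finished,
// tagged with an utterance ID via the optional parameter map
HashMap<String, String> params = new HashMap<String, String>();
params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "goodbye");
mTts.speak("Goodbye", TextToSpeech.QUEUE_ADD, params);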

As the text-to-speech feature relies on a shared service, it is crucial that you call the shutdown() method when you are finished.
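
In an Activity, onDestroy() is a natural place for this:

@Override
protected void onDestroy() {
    // release the shared TTS service
    if (mTts != null) {
        mTts.shutdown();
    }
    super.onDestroy();
}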


Speech Recognition

The speech recognition capability of Android is based on the Google Voice Search application, which is preinstalled on many Android devices. Again, you should check for its availability before trying to use it. With the help of the PackageManager you can search for Activities which can handle an ACTION_RECOGNIZE_SPEECH Intent.
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH),
        PackageManager.MATCH_DEFAULT_ONLY);
if (!activities.isEmpty()) {
    // speech recognition is available
}
If the Voice Search application is available, you can call it with the startActivityForResult() method.
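
A minimal sketch of that call (the prompt text and request code are arbitrary):

Intent recognizeSpeechIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// free-form language model for general dictation
recognizeSpeechIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// the prompt shown in the recording dialog
recognizeSpeechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something");
startActivityForResult(recognizeSpeechIntent, SPEECH_RECOGNITION_ACTIVITY_RESULT_CODE);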

You will see a progress indicator followed by a dialog which records your speech.


The data is sent to Google's servers for processing.


When the data has been processed, your onActivityResult() method is called with an Intent object holding the result data. The result is an ArrayList<String> which can hold multiple matches.
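
Extracting the matches looks like this (the list is ordered by the recognizer's confidence, so the first entry is the best guess):

ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
if (matches != null && !matches.isEmpty()) {
    String bestMatch = matches.get(0);
}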

As an additional feature, Google introduced Voice Search as an input method for any text-based input field on the Android platform from version 2.1 (Eclair) upwards. To use this feature you have to enable it on your soft keyboard: just press the cogwheel/settings icon on your keyboard and activate voice input. Afterwards you will see a small microphone icon on the soft keyboard; tap it and you can use voice recognition as an input method.


Here is the example code:
package devorama.marioboehmer.accessibilityspeechexample;

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.speech.tts.TextToSpeech;
import android.speech.tts.TextToSpeech.OnInitListener;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

/**
 * Simple demo for better accessibility. Serves as an example for the
 * text-to-speech capability of Android, which is supported from version 1.6
 * upwards, and for speech recognition.
 *
 * @author mboehmer
 */
public class SpeechActivity extends Activity implements OnInitListener {

    private static final int TTS_ACTIVITY_RESULT_CODE = 1;
    private static final int SPEECH_RECOGNITION_ACTIVITY_RESULT_CODE = 2;

    private Button speakButton;
    private Button listenButton;
    private EditText editText;
    private TextToSpeech mTts;
    private boolean ttsInitialized;
    private boolean speechRecognitionInstalled;
    private Context context = this;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        speakButton = (Button) findViewById(R.id.speak_button);
        listenButton = (Button) findViewById(R.id.listen_button);
        editText = (EditText) findViewById(R.id.edittext);

        // check for TTS voice data
        Intent checkTTSIntent = new Intent();
        checkTTSIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
        startActivityForResult(checkTTSIntent, TTS_ACTIVITY_RESULT_CODE);

        // check for an Activity which can handle speech recognition
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH),
                PackageManager.MATCH_DEFAULT_ONLY);
        if (!activities.isEmpty()) {
            speechRecognitionInstalled = true;
        }

        speakButton.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                if (ttsInitialized) {
                    // check if the default locale is supported by TTS;
                    // any non-negative result means the locale is usable
                    if (mTts.isLanguageAvailable(Locale.getDefault()) >= TextToSpeech.LANG_AVAILABLE) {
                        mTts.setLanguage(Locale.getDefault());
                        // QUEUE_FLUSH cancels the currently spoken text and
                        // starts with the new one; use QUEUE_ADD instead to
                        // append the text to the engine's queue
                        mTts.speak(editText.getText().toString(),
                                TextToSpeech.QUEUE_FLUSH, null);
                    } else {
                        Toast.makeText(context, R.string.locale_not_supported_text,
                                Toast.LENGTH_SHORT).show();
                    }
                } else {
                    Toast.makeText(context, R.string.tts_not_initialized_text,
                            Toast.LENGTH_SHORT).show();
                }
            }
        });

        listenButton.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                if (speechRecognitionInstalled) {
                    Intent recognizeSpeechIntent = new Intent(
                            RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                    recognizeSpeechIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                    recognizeSpeechIntent.putExtra(RecognizerIntent.EXTRA_PROMPT,
                            getText(R.string.speech_recognition_prompt_text));
                    startActivityForResult(recognizeSpeechIntent,
                            SPEECH_RECOGNITION_ACTIVITY_RESULT_CODE);
                }
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == TTS_ACTIVITY_RESULT_CODE) {
            if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
                // success, create the TTS instance
                mTts = new TextToSpeech(this, this);
            } else {
                // missing voice data, trigger the installation
                Intent installIntent = new Intent();
                installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                startActivity(installIntent);
            }
        } else if (requestCode == SPEECH_RECOGNITION_ACTIVITY_RESULT_CODE
                && resultCode == RESULT_OK && data != null) {
            ArrayList<String> matches = data.getStringArrayListExtra(
                    RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                // we only want to show the best matching result
                editText.setText(matches.get(0));
            } else {
                Toast.makeText(context, R.string.speech_not_recognized_text,
                        Toast.LENGTH_SHORT).show();
            }
        }
        super.onActivityResult(requestCode, resultCode, data);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            ttsInitialized = true;
        }
    }

    @Override
    protected void onDestroy() {
        // TTS relies on a shared service, so release it when done
        if (mTts != null) {
            mTts.shutdown();
        }
        super.onDestroy();
    }
}
The complete project for this example can be found on GitHub.

Further information about the feature can be found in this Android article.

Wednesday, March 9, 2011

Android as a User Interface to your Hardware Projects

In my Humidity, Temperature, Light and Pressure project (HTLPSensorNode), I showed you the benefits of using Android for the data visualization of a hardware project. The communication was based solely on the Arduino being connected to my network via WiFi or LAN and the Android device requesting data from the Arduino via a pseudo-REST protocol.

This might be sufficient if you only have one project you want to interact with, but sometimes you have a bunch of projects with different purposes. So what might be a good solution if you want to visualize or control several projects with one Android app?

Right now I am implementing the second revision of my Android-to-Arduino interfacing app. It is designed dynamically so that it scales pretty well: the application's user interface adapts to the data it receives.

So imagine a project like the HTLPSensorNode: it's a measurement project, so you only need to display data. In another project you might just need to control a servo, so you would need some movement buttons instead of a visualization. You can achieve this by defining your own communication protocol in which your hardware project dictates what should be displayed to the user of the Android device, as sketched below.
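
The post doesn't spell out the wire format, so the following is purely a hypothetical sketch of such a protocol: the Arduino answers with a JSON document describing the UI elements to render, and the app builds its interface from that description using Android's bundled org.json classes:

// Hypothetical response from the Arduino, e.g.:
// {"title":"HTLPSensorNode",
//  "elements":[{"type":"label","name":"Temperature","value":"21.4 C"},
//              {"type":"button","name":"Servo left","action":"/servo?dir=left"}]}
// payload is the HTTP response body read from the project's URL;
// JSONException handling is omitted for brevity.
JSONObject response = new JSONObject(payload);
JSONArray elements = response.getJSONArray("elements");
for (int i = 0; i < elements.length(); i++) {
    JSONObject element = elements.getJSONObject(i);
    String type = element.getString("type");
    if ("label".equals(type)) {
        // measurement project: render a read-only value, e.g. a sensor reading
    } else if ("button".equals(type)) {
        // control project: render a button which fires a request
        // to the URL in element.getString("action")
    }
}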

And now the really interesting part: how can you address your projects from your Android device? There are several possibilities. You could "hardwire" each project's URL, you could request the data from a webservice, and so on. I took another approach.

I am a big fan of the openness and the synergy effects of the Android platform, so I thought about apps I could use to "connect" to my projects. There is an app called Barcode Scanner which has been around for quite some time now; it scans many types of barcodes and is just perfect for scanning QR codes. So what I did was generate a QR code for each of my projects, encoding a URL.

The QR code for one of my projects, for example, translates to: http://192.168.1.114/id-1234567890/

You can generate your own QR codes here.

In the Android world, apps communicate via so-called Intents. An Intent describes an action your app wants performed: it tells the system what to do, but not how. The system then looks for installed applications which can serve the request. If several registered applications could serve that particular request, the user decides which one to use. Sometimes, however, no application can serve the request, and the user has to install an app from the Android Market. To spare the user this irritation, the team behind Barcode Scanner provides a small integrator library which you can embed in your app; it checks whether the user has installed the Barcode Scanner app before trying to start it for scanning.
You can find a detailed description on how to integrate it here.
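
Used from an Activity, the integration boils down to something like this (a sketch based on the ZXing IntentIntegrator helper class; the exact class and method names may vary between versions):

// kick off a scan; if Barcode Scanner is missing, the helper
// offers to install it from the Market
IntentIntegrator.initiateScan(this);

// ... and in the same Activity, pick up the scanned contents:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    IntentResult scanResult = IntentIntegrator.parseActivityResult(requestCode, resultCode, data);
    if (scanResult != null && scanResult.getContents() != null) {
        String projectUrl = scanResult.getContents(); // e.g. the project's URL from the QR code
        // fire a request to projectUrl and build the UI from the response
    }
}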

After attaching the generated QR codes to my projects, the workflow is as follows:

1. Start up the Android visualization/control app.
2. Hold up the device in front of your project.
3. Initiate scan via Barcode Scanner.
4. Fire a request to the scanned URL.
5. Receive the data from your project.
6. Parse the response and dynamically build your user interface on the Android client.
7. Repeat steps 2-6 for your other projects.

It's quite a charming approach, and it could be a perfect user interaction for the art geeks among you. Just imagine a user exploring your exhibition or art project and interacting with it through his dearest companion, his smartphone.

Since I didn't have time to make a demo video or provide a finished prototype, I thought it would be best to at least share the concept and the theory. My prototype worked quite well with this approach, and if I find time I will post a short demo.

Until then, have fun experimenting.