There is More Than Just a Little Touching...

As a developer, you need to be aware of the potential ways a person can interact with your programs. I've talked in the past about avoiding assumptions about the user interface. With changes coming to user input, it is a good time to talk again about avoiding assumptions about the method of entry or interaction. A couple of newer input methods could have an impact on how your applications behave, so you should be aware of what are proving to be mainstream user input methods.

There are more than seven input methods you should be considering.

There are the mouse and keyboard. You are already aware of these methods, and they will require no real change to what you're doing. The keyboard and mouse are our long-time friends: they are the assumed interfaces for most computers, and even most mobile devices provide equivalent functionality for keyboard input.

There is speech. For the most part, speech simply replaces the keyboard as an input mechanism, and thus for most programs it has little impact other than potentially capturing input from a speech recognition engine. Although speech hasn't taken over, it is getting closer. Windows Vista added better speech recognition, and it seems crazy for a mobile phone not to support speech. With increases in processing and storage power, speech is getting closer to recognizing most of what is said.
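The point that speech "simply replaces the keyboard" can be made concrete: if your application consumes text events rather than raw keystrokes, the recognition engine becomes just another producer. This is a minimal sketch; the `TextInput` type and `handle_text` function are hypothetical names invented for illustration, not part of any real speech API.

```python
# Sketch: source-agnostic text input. A speech recognition engine and a
# keyboard can feed the same event type, so application logic is unchanged.
from dataclasses import dataclass

@dataclass
class TextInput:
    text: str
    source: str  # e.g., "keyboard" or "speech"

def handle_text(event: TextInput) -> str:
    # The application never needs to know where the text came from.
    return event.text.strip()

typed = handle_text(TextInput("hello world", "keyboard"))
spoken = handle_text(TextInput(" hello world ", "speech"))
assert typed == spoken  # identical handling regardless of source
```

The design choice here is the whole point: keep the input boundary at the level of text, not keystrokes, and speech support falls out nearly for free.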

There is inking. In recent years, Tablet PCs were introduced. Tablet PCs use a pen device to let the user provide input to the application through handwriting and other gestures. This act of "inking" can be handled by the operating system if all that is wanted is simple input; however, it opens up additional inking possibilities for your applications. A user could tap, write, or gesture new commands. Unfortunately, devices that support inking are generally somewhat more expensive than those that don't. With Tablet PCs, you generally need a specialized pen rather than a simple pointing device.
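To give a feel for what "tap, write, or gesture" recognition involves, here is a rough sketch that classifies a pen stroke from its sampled points. The thresholds and categories are made up for illustration; real inking platforms provide far more sophisticated recognizers.

```python
# Sketch: crude pen-stroke classification from sampled (x, y) points.
# Heuristics only -- not any real inking API:
#   - a stroke that barely moves is a "tap"
#   - a stroke much wider than tall is a horizontal "scratch" gesture
#   - anything else is treated as candidate "handwriting"
def classify_stroke(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if width < 5 and height < 5:
        return "tap"
    if width > 4 * max(height, 1):
        return "scratch"
    return "handwriting"

print(classify_stroke([(10, 10), (11, 11)]))           # tap
print(classify_stroke([(0, 50), (40, 52), (80, 50)]))  # scratch
```

Even this toy version shows why gesture support is an application-level decision: the OS can hand you the stroke, but deciding that a scratch means "delete" is up to you.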

There is touch. Touch can be a simple alternative to the mouse. Instead of moving a pointing device around on a table or tablet, you simply touch the screen. For the most part, adding touch to a computing device doesn't require you to do anything differently as a developer if you want to use it simply as a mouse replacement. Touch will require changes, however, if you want to start recognizing gestures. For example, you will need to recognize whether a person touches the screen in a single location for a period of time, or slides a finger across a line of text.
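The two examples just mentioned, press-and-hold and sliding across text, can be distinguished from a contact's samples over time. This sketch assumes each sample is a `(seconds, x, y)` tuple; the thresholds are arbitrary illustrative values, not any platform's defaults.

```python
# Sketch: interpreting one finger's contact from timed position samples.
# Illustrative thresholds only:
#   - held in roughly one spot for >= 1 second -> "press-and-hold"
#   - moved mostly horizontally                -> "swipe" (e.g., across text)
def interpret_touch(samples):
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    duration = t1 - t0
    if dx < 10 and dy < 10:
        return "press-and-hold" if duration >= 1.0 else "tap"
    if dx > dy:
        return "swipe"
    return "drag"

print(interpret_touch([(0.0, 100, 100), (1.2, 102, 101)]))  # press-and-hold
print(interpret_touch([(0.0, 100, 100), (0.3, 250, 104)]))  # swipe
```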

There is multi-touch. This is one of the new features coming with Microsoft Windows "7," so it will eventually be part of the dominant operating system. Rather than having a single point of entry like a mouse or simple touch, multi-touch allows many entry points: the user can touch the screen in multiple places at the same time. This will require a bit of additional thinking on your part as a developer. Whereas simple touch generates input at only one location on the screen at a time, with multi-touch you are registering multiple input points at once. Multi-touch also opens up a number of new gestures for triggering actions. For example, you might want to recognize that spreading two touch points (fingers) apart on the screen indicates zooming in.
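The pinch-to-zoom gesture mentioned above reduces to simple geometry once you can track two contact points at the same time: compare the distance between the fingers at the start and end of the gesture. This is an illustrative sketch; real multi-touch APIs deliver the points as streams of per-contact events rather than neat start/end pairs.

```python
# Sketch: deriving a zoom factor from two fingers' start and end positions.
# Spreading the fingers apart yields a factor > 1 (zoom in);
# pinching them together yields a factor < 1 (zoom out).
import math

def pinch_zoom(p1_start, p2_start, p1_end, p2_end):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_end, p2_end) / dist(p1_start, p2_start)

factor = pinch_zoom((100, 100), (200, 100), (50, 100), (250, 100))
print(factor)  # 2.0 -- fingers spread to twice their starting distance
```

Note what simple touch could never give you here: the computation requires two simultaneous positions, which is exactly the new capability multi-touch adds.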

You might be inclined to say that you don't need to worry about touch and multi-touch. Be aware that they are coming with Windows "7." HP has already released the first devices supporting this, and the cost is not out of line with existing computers. Touch will not be like Tablet PCs and inking, where a special, expensive pen is needed. Within a few years, touch should simply be a natural part of new monitors. As such, your users will be touching and stroking their screens.

There is vision. This is another input area that is not currently slated for personal computers but, with changes in technology, one day could be. You can currently see vision in use in Microsoft's Surface devices. Vision differs from touch in that it can recognize information about what is touching the screen. For example, vision could be used to recognize the shape of the object making contact with the screen. Vision could also be used to recognize tags; the Surface device, for instance, can recognize a 128-bit tag when placed on its surface. When your screen can start recognizing devices and items placed on it, the range of possibilities for what your program can do, or how it reacts, begins to change.
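To make the difference from touch concrete: a vision system can report the shape of a contact and a decoded tag value, not just a coordinate. This sketch is purely illustrative, with made-up size thresholds, and is not the actual Surface SDK.

```python
# Sketch: vision-style contact classification (hypothetical, not a real API).
# A vision system can report contact dimensions and, optionally, a decoded
# tag value -- information a plain touch screen simply doesn't have.
def classify_contact(width_mm, height_mm, tag_bits=None):
    if tag_bits is not None:
        return f"tagged-object:{tag_bits:#x}"
    area = width_mm * height_mm
    if area < 150:  # roughly fingertip-sized
        return "finger"
    return "object"

print(classify_contact(10, 10))                 # finger
print(classify_contact(60, 40))                 # object
print(classify_contact(10, 10, tag_bits=0xAB))  # tagged-object:0xab
```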

There will be more. Imagine a computer with a light sensor, an accelerometer, or a variety of other input mechanisms. A simple light sensor on a screen could automatically adjust the contrast of your monitor. An accelerometer could be used to determine the angle at which you are holding a screen and adjust the fonts and display accordingly. There are endless possibilities once you start considering the types of sensors that can be integrated into a system. This might seem farfetched; however, Windows "7" already has a Sensor Development Kit that allows you to work with exactly these types of sensors, which means Windows "7" will be able to support such input methods. All of a sudden, you will be able to have your applications react to sights, sounds, movement, and more. In fact, the Nintendo Wii and Apple iPhone already use some of this technology!
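Both sensor scenarios above boil down to mapping a reading to a display adjustment. This sketch shows the idea with invented mappings: the lux-to-contrast scale is arbitrary, and the tilt calculation is just the standard two-axis angle, not any particular sensor kit's API.

```python
# Sketch: mapping raw sensor readings to display adjustments.
import math

def contrast_from_light(lux):
    # Hypothetical linear mapping of ambient light to a 0..100 contrast
    # setting, clamped at both ends.
    return max(0, min(100, int(lux / 10)))

def tilt_degrees(accel_x, accel_y):
    # Angle of the screen derived from two accelerometer axes.
    return math.degrees(math.atan2(accel_y, accel_x))

print(contrast_from_light(450))        # 45
print(round(tilt_degrees(1.0, 1.0)))   # 45
```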

The keyboard, the mouse, inking, and some touch are already here. Multi-touch is coming quickly. Sensor support is also coming in the near future. Vision is a possibility, but farther out. If you think these will be a long time in coming, I suggest you look around. It is amazing how quickly LCDs have replaced CRTs for standard monitors. Change can happen quickly, especially if it's change that will allow a user to interact more efficiently with their computer. I believe you'll find that new input devices can enter the world just as quickly. Before long, your users are going to have their hands all over your applications—literally. Will your applications be ready?



About the Author

Bradley Jones

In addition to managing CodeGuru, Bradley Jones oversees the Developer.com Network of sites, including CodeGuru, Developer.com, DevX, VBForums, and over a dozen more with a focus on software development and database technologies. His experience includes development in C, C++, VB, some Java, C#, ASP, COBOL, and more, and he has been a developer, consultant, analyst, lead, and much more. His recent books include Teach Yourself the C# Language in 21 Days, Web 2.0 Heroes, and Windows Live Essentials and Services.
