Introduction
When considering if and how to add touch support to an MFC application, the first point worth assessing is how well the application will perform in touch scenarios without any explicit support at the application level. The Windows team has gone to great efforts to make applications work well even if they have no code to deal with touch input. In these situations, Windows treats touch input in much the same way as mouse input, with screen taps equating to mouse clicks and an on-screen keyboard allowing users to enter text into edit controls. The exact behavior of touch input can be customized via the Windows Control Panel. Figure 1 shows a simple MFC dialog-based application with an Edit Control to allow text input. No explicit touch support has been added to this application. When it is run on hardware with digitizer support and the digitizer is used to set the focus to the Edit Control, an on-screen keyboard icon is displayed.
Figure 1. Out-of-the-box touch experience with MFC and Touch
Clicking the keyboard icon brings up the Touch Keyboard shown at the top of Figure 2. The text input panel on Windows 7 has two modes that can be switched between using the button at the top left of the panel. The Touch Keyboard offers standard key-based text input, while the Writing Pad (shown at the bottom of Figure 2) offers a much richer experience, with the option to write either character by character or in freehand style. Character-by-character mode provides blank cells where individual letters are entered with a pen, while freehand style offers a more natural, paper-like digital inking experience, with handwriting recognition algorithms used to form words and sentences. Windows 7 also supports personalizing handwriting recognition at an operating system level.
Figure 2. Text Input Options
Once some text has been entered in either the Writing Pad or Touch Keyboard, an Insert button allows the text to be moved into the Edit Control, as shown in Figure 3.
Figure 3. Writing Pad with text ready for insertion
If an MFC application is designed with a clean, simple interface, the built-in operating system support for touch- and pen-based input may be sufficient. After covering the two other options for touch integration below, we’ll briefly return to this first choice and see how it can be improved on.
For some MFC applications, providing direct support for touch may lead to a better user experience. There are two ways an application can support touch: indirectly, by allowing Windows to translate the touch input into gestures (such as zoom, pan, or rotate); and directly, by receiving the low-level touch events and executing code that responds to them.
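Before looking at gestures, the direct route is worth a quick illustration. The following is a minimal sketch of opting a view into raw touch input, assuming a hypothetical CView-derived class named CMyView and the MFC touch support that shipped in Visual C++ 2010 (CWnd::RegisterTouchWindow and the CWnd::OnTouchInput virtual method):

```cpp
// Sketch only: assumes CMyView derives from CView (MFC, Visual C++ 2010).
int CMyView::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CView::OnCreate(lpCreateStruct) == -1)
        return -1;
    // Ask Windows to deliver raw WM_TOUCH messages to this window
    // instead of translated WM_GESTURE messages.
    RegisterTouchWindow();
    return 0;
}

// MFC routes each touch point carried by a WM_TOUCH message to this virtual.
BOOL CMyView::OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount,
                           PTOUCHINPUT pInput)
{
    if (pInput->dwFlags & TOUCHEVENTF_DOWN)
    {
        // React to the contact identified by pInput->dwID touching down at pt.
    }
    return TRUE; // input handled
}
```

Note that registering for raw touch changes what the window receives: once registered, it gets WM_TOUCH rather than WM_GESTURE, so a given window works with one model or the other, not both.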
Responding to gestures in an MFC application is simple: Windows 7 and Windows Server 2008 R2 both support the WM_GESTURE Windows message. MFC translates the various panning, zooming, and rotating gestures that are all encapsulated by WM_GESTURE into distinct CWnd virtual methods, such as OnGestureZoom and OnGestureRotate, that take care of parsing the information in the WM_GESTURE parameters into information specific to each gesture.
The MFC Gesture CWnd virtual functions are:
BOOL OnGestureZoom(CPoint ptCenter, long lDelta);
BOOL OnGesturePan(CPoint ptFrom, CPoint ptTo);
BOOL OnGestureRotate(CPoint ptCenter, double dblAngle);
BOOL OnGestureTwoFingerTap(CPoint ptCenter);
BOOL OnGesturePressAndTap(CPoint ptPress, long lDelta);
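Not every gesture is delivered by default; notably, Windows does not send rotation gestures unless the application opts in. In MFC this is done through CWnd::SetGestureConfig and the CGestureConfig helper class. A minimal sketch, again assuming a hypothetical CView-derived CMyView:

```cpp
// Sketch only: enabling the rotate gesture, which is off by default.
// Call from somewhere like OnCreate, after the window has been created.
void CMyView::EnableRotateGesture()
{
    CGestureConfig config;
    config.EnableRotate();        // opt in to WM_GESTURE rotation events
    SetGestureConfig(&config);    // MFC wrapper over the Win32 SetGestureConfig
}
```

With rotation enabled, the OnGestureRotate override listed above will begin receiving events.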
Each function returns a BOOL that indicates whether the event has been handled. The MFC Class Wizard that ships with Visual C++ 2010 RTM does not currently have support for either the WM_GESTURE message or the OnGestureXXX virtual methods. However, adding the handlers is a simple exercise:
// In the View header file
class CMyView : public CView
{
// Overrides
protected:
    virtual BOOL OnGestureZoom(CPoint ptCenter, long lDelta);
};

// In the View source file
BOOL CMyView::OnGestureZoom(CPoint ptCenter, long lDelta)
{
    // Code for zooming by lDelta goes here
    return TRUE;
}
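Inside OnGestureZoom, the lDelta argument reflects the distance between the two touch points, so a zoom factor can be derived by comparing it with the distance from the previous zoom event. The bookkeeping is framework-independent; a small sketch (ZoomTracker is a hypothetical helper, not part of MFC):

```cpp
// Hypothetical helper: turns the successive two-finger distances reported to
// OnGestureZoom into a multiplicative zoom factor for the view.
class ZoomTracker
{
public:
    // Returns the factor to scale the view by for this event, or 1.0 for the
    // first event of a gesture (no previous distance to compare against).
    double Update(long lDistance)
    {
        double factor = 1.0;
        if (m_lastDistance > 0 && lDistance > 0)
            factor = static_cast<double>(lDistance) / m_lastDistance;
        m_lastDistance = lDistance;
        return factor;
    }

    // Call when the gesture ends so the next gesture starts fresh.
    void Reset() { m_lastDistance = 0; }

private:
    long m_lastDistance = 0;
};
```

A handler would then call Update(lDelta) from OnGestureZoom and multiply its current zoom level by the returned factor, calling Reset when the gesture completes.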