Back to the Future: Coding for Multi-touch
Brandon Wallace

In previous posts, you’ve seen Lynn talk about the new “art of the possible” with respect to complex UX for multi-touch applications. If Apple is beginning to bring it to the consumer market, you can bet your enterprise customers will soon expect it in their apps.

But she just dreams this stuff up; we have to make it work! And let me tell you, folks, this stuff is on the cutting edge. As many of the coders out there already know, many of the whizbang features we’ve come to expect now come for free (.draggable, .rotate). When was the last time any of us had to calculate x and y coordinates in JavaScript?

However, most of the leading JavaScript libraries were built for mouse and keyboard input. They are great for interpreting events on older devices, but for dealing with ten-finger pinches and rotates? Forget it. Hammer.js supports two fingers per hand. jQuery UI doesn’t support touch at all yet; it’s mouse-only. The way these libraries handle touch at this point is to convert all touch events into equivalent mouse events.

What batch of mouse events will you receive from a given gesture? It’s anybody’s guess. Oh yeah, and those events are sequential. In what order will you receive them? Good luck interpreting all of that!

Time to dust off your finite automata textbook from college.

No, what’s needed is a total facelift of existing JavaScript libraries to handle these things.
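To make that concrete, here’s a minimal sketch of what hand-rolled touch tracking looks like with nothing but raw DOM events. The element id and variable names are our own, invented purely for illustration:

```javascript
// Track every finger's x/y by hand from raw touch events.
const el = document.getElementById('scene'); // hypothetical element
const positions = new Map(); // touch identifier -> { x, y }

el.addEventListener('touchmove', function (ev) {
  ev.preventDefault(); // keep the browser from scrolling instead

  // Unlike mouse events, a single touch event can carry many fingers.
  for (let i = 0; i < ev.changedTouches.length; i++) {
    const touch = ev.changedTouches[i];
    positions.set(touch.identifier, {
      x: touch.clientX,
      y: touch.clientY,
    });
  }
  // From here, deciding whether those fingers mean pan, pinch, or
  // rotate is entirely up to you. Hence the finite automata textbook.
});
```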

We recently worked on a similarly complex project using these technologies. In the span of a few weeks, we had to code the following things we used to take for granted:

● X & Y coordinates – Simple behaviors such as a draggable control don’t exist in a multi-touch environment. Existing libraries and controls will typically only let you drag a single object at a time (because there is only one mouse, after all!). This forced us to write custom panning code, which was further complicated by stricter UX requirements for touch motions and the inclusion of zoom functionality.

● Rotate – Rotation gestures are something HammerJS helps with tremendously: it reports the degree of rotation as rotate events fire continuously throughout the motion. It’s important to understand how those degrees translate to the actual visual presentation. One difficulty in these calculations is that the change in degrees can flip from positive to negative at the “turning point” of a rotation gesture. You have to catch those situations and properly calculate the actual difference in rotation from the last processed event (see the rotation sketch after this list).

● Drag – The drag movement was not too hard to implement using HammerJS, since it provides easy access to the deltaX value measured from the beginning of the motion. One interesting point was how the same browser on different OSes interpreted the propagated touch events. In an initial version we had not put in the appropriate ev.preventDefault() calls, so all the drag events were being propagated to the browser in addition to the touched element. On iOS, and with the Touch Emulator in OS X, the browser simply ignored these for both horizontal and vertical movements. When tested on a Microsoft Surface running Windows 8, however, the same events were interpreted as page scrolling and forward/back actions. HammerJS’s documentation actually has some information on this. Lesson learned – make sure you understand your event propagation (see the pan sketch after this list).

● Pinch – HammerJS is incredibly useful for writing zoom functionality. The pinch events it generates include the scale of the change since the start of the gesture, meaning it will tell you when the user has doubled the spread of a pinch-out gesture. That translates pretty easily into a scale transform on an image to recreate zoom. The catch is that modifying only the scale of the image locks you into a “center-based zoom”, rather than a targeted zoom that expands from wherever the pinch gesture starts on the screen. A targeted zoom requires additional work: calculating an offset from the center of the pinch gesture while taking into account the current scale of the image being zoomed (see the pinch sketch after this list).

● Distinguishing different gestures – At one point we had UX where a long press would pop up some information, while a drag would pan the scene. We found that the stock HammerJS recognizers did not separate these gestures cleanly enough: users would often trigger the long press accidentally while trying to drag. When you tie multiple behaviors to a single finger, a lot of thought needs to go into deciding how to accurately distinguish the gestures (see the press-tuning sketch after this list).

● Hover tooltips – A staple of any mouse-driven user interface, this pattern of exposing additional detail is useless in a multi-touch UI, since current hardware can’t tell when your finger is hovering over a control. Many JavaScript libraries still depend on this capability, and since there is no clear equivalent for a touch interface, they usually provide no touch alternative at all.

● Angular support in HammerJS – It’s worth mentioning that there is an open-source Angular directive that makes it easy to wire up your DOM elements with simple ng attributes.
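As promised above, here’s a minimal sketch of the rotation bookkeeping, assuming Hammer.js 2.x; el, lastRotation, and applyRotation are our own names, not part of the HammerJS API:

```javascript
const hammer = new Hammer(el);
hammer.get('rotate').set({ enable: true }); // rotate is off by default

let lastRotation = 0;

hammer.on('rotatestart', function (ev) {
  lastRotation = ev.rotation;
});

hammer.on('rotatemove', function (ev) {
  // Raw difference between this event and the last one we processed.
  let delta = ev.rotation - lastRotation;

  // At the "turning point" the reported angle can jump across the
  // +/-180 degree boundary; normalize the delta back into (-180, 180].
  if (delta > 180) delta -= 360;
  if (delta < -180) delta += 360;

  applyRotation(delta); // e.g. accumulate onto a CSS transform
  lastRotation = ev.rotation;
});
```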
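The pan handling from the Drag bullet, again sketched against Hammer.js 2.x (el and basePosition are our own names):

```javascript
const hammer = new Hammer(el);
hammer.get('pan').set({ direction: Hammer.DIRECTION_ALL });

let basePosition = 0;

hammer.on('panmove', function (ev) {
  // deltaX is measured from the start of the gesture, not the last event.
  el.style.transform = 'translateX(' + (basePosition + ev.deltaX) + 'px)';

  // This is the call we initially forgot: without it, Windows 8 treated
  // the same motion as page scrolling and forward/back navigation.
  ev.preventDefault();
});

hammer.on('panend', function (ev) {
  basePosition += ev.deltaX;
});
```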
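The pinch-to-zoom sketch, with the center-based-zoom caveat called out in a comment (img and baseScale are our own names):

```javascript
const hammer = new Hammer(img);
hammer.get('pinch').set({ enable: true }); // pinch is off by default

let baseScale = 1;

hammer.on('pinchmove', function (ev) {
  // ev.scale is relative to the start of the gesture: 2.0 means the
  // fingers are twice as far apart as when the pinch began.
  img.style.transform = 'scale(' + (baseScale * ev.scale) + ')';

  // Scaling alone gives a "center-based zoom". A targeted zoom would
  // also translate the image using ev.center (the midpoint between
  // the fingers) and the current scale.
});

hammer.on('pinchend', function (ev) {
  baseScale *= ev.scale;
});
```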
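Finally, the press-tuning sketch. The specific numbers here are illustrative guesses, not the values we shipped, and the handler names are hypothetical:

```javascript
const hammer = new Hammer(el);

// Make the long press harder to trigger accidentally during a drag.
hammer.get('press').set({
  time: 500,    // require a longer hold before "press" fires
  threshold: 5, // abort the press once the finger moves more than 5px
});

hammer.on('press', showInfoPopup); // hypothetical handler
hammer.on('panmove', panScene);    // hypothetical handler
```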
