In previous posts, you’ve seen Lynn talk about the new “art of the possible” with respect to complex UX for multi-touch applications. If Apple is beginning to bring it to the consumer market, you can bet your enterprise customers will soon expect it in their app.
What batch of mouse events will you receive from a given gesture? It's anybody's guess. Oh yeah, and those events are sequential. In what order will you receive them? Good luck interpreting all of that!
Time to dust off your finite automata textbook from college.
We recently worked on a similarly complex project using these technologies. In the span of a few weeks, we had to code the following things we used to take for granted:
● X & Y coordinates – Simple controls, such as a draggable element, don't exist for a multi-touch environment. Existing libraries and controls typically only let you drag a single object at a time (because there is only one mouse, after all!). This forced us to write custom panning code, which was further complicated by stricter UX requirements for touch motions and the inclusion of zoom functionality.
● Rotate – Rotation gestures are something HammerJS helps with tremendously. HammerJS reports the degree of rotation as rotate events fire continuously throughout the motion. It's important to be aware of how those degrees translate to the actual visual presentation. One example of the difficulty in these calculations is that the change in degrees can flip abruptly from positive to negative at the "turning point" of a rotation gesture. It's important to catch these situations and properly calculate the actual difference in rotation from the last processed event.
● Drag – The drag movement was not too hard to implement using HammerJS, as it provides easy access to the deltaX values measured from the beginning of the motion. One interesting point was how the same browser on different OSes interpreted the propagated touch events. In an initial version we had not put in the appropriate ev.preventDefault() calls, so all the drag events were being propagated to the browser in addition to the touched element. On iOS, and using the Touch Emulator in OS X, the browser simply ignored these for both horizontal and vertical movements. However, when tested on a Microsoft Surface running Windows 8, those same events were interpreted as page scrolling and forward/back navigation. The HammerJS documentation actually has some information on this. Lesson learned: make sure you understand your event propagation.
● Pinch – HammerJS is incredibly useful for writing zoom functionality. The pinch events it generates include the scale of change since the start of the gesture; for example, it will tell you when the user has doubled the size of the pinch-out gesture. This translates fairly easily into a scale transform on an image to recreate zoom functionality. The catch is that modifying only the scale locks you into a "center-based zoom," rather than a targeted zoom that expands around where the pinch gesture starts on the screen. Additional work is needed to calculate the offset based on the center of the pinch gesture, taking into account the current scale of the image being zoomed.
● Distinguishing different gestures – At one point, our UX called for a long press to pop up some information, while a drag would pan the scene. We found that the HammerJS defaults for these gestures were not sensitive enough: users would often accidentally trigger the long press while trying to drag. When you are tying multiple behaviors to a single finger, a lot of thought needs to go into deciding how to accurately distinguish the different gestures.
● Angular support in HammerJS – It is worth mentioning that there is an open-source Angular directive that makes it easy to wire up your DOM elements using simple ng attributes.
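To make the rotation "turning point" problem concrete, here is a minimal sketch of the kind of helper we mean (the function name is ours, not HammerJS's): it normalizes the difference between two consecutive `rotation` values from HammerJS rotate events, so a jump from +179° to -179° is read as a small +2° step rather than a -358° swing.

```javascript
// Normalize the change between two consecutive HammerJS rotation
// readings (in degrees) into the range (-180, 180], so the visual
// rotation never "snaps" when the reported angle wraps around.
function rotationDelta(prevDegrees, currentDegrees) {
  let delta = currentDegrees - prevDegrees;
  if (delta > 180) delta -= 360;   // wrapped past the -180/+180 boundary clockwise
  if (delta < -180) delta += 360;  // wrapped counter-clockwise
  return delta;
}

// Inside a rotate handler you would accumulate the normalized delta:
//   totalRotation += rotationDelta(lastRotation, ev.rotation);
//   lastRotation = ev.rotation;
```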
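The "additional work" mentioned in the Pinch bullet is mostly a coordinate transform. As a sketch (names and state shape are ours): with a CSS-style transform of translate-then-scale, keeping the pinch center visually fixed means adjusting the translation whenever the scale changes.

```javascript
// Zoom the current transform state by `factor` about the point
// (cx, cy) in container coordinates, so that point stays put on
// screen instead of everything zooming about the element's origin.
// State models the transform: screenX = tx + scale * contentX.
function zoomAbout(state, factor, cx, cy) {
  return {
    scale: state.scale * factor,
    tx: cx - factor * (cx - state.tx),
    ty: cy - factor * (cy - state.ty)
  };
}

// In a pinch handler, `factor` would be derived from ev.scale (which
// is relative to the gesture start) and (cx, cy) from ev.center.
```

Zooming about (0, 0) leaves the translation untouched, which is exactly the degenerate "center-based zoom" the bullet warns about; passing the real pinch center is what makes the zoom feel targeted.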
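For the press-versus-drag ambiguity, the tunable knobs in HammerJS are the Press recognizer's `time` and `threshold` options and the Pan recognizer's `threshold`. The decision logic itself boils down to something like this pure classifier (a sketch with our own names and tolerances, not HammerJS internals):

```javascript
// Decide what a single-finger interaction is, given how long the
// finger has been down (ms) and how far it has moved (px). Any
// movement beyond the tolerance wins as a drag, even if the press
// timer has already elapsed -- that ordering is what stops users
// from accidentally triggering the long press mid-drag.
function classifyGesture(elapsedMs, movedPx, pressTimeMs = 400, moveTolerancePx = 8) {
  if (movedPx > moveTolerancePx) return "drag";
  if (elapsedMs >= pressTimeMs) return "press";
  return "pending"; // too early to tell; keep waiting
}
```

Raising `moveTolerancePx` makes the long press easier to trigger; raising `pressTimeMs` makes drags safer. The right values are a UX decision, not a technical one.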