Carnegie Mellon researchers have developed a touchscreen technology that distinguishes taps made by different parts of the finger.
Using a microphone attached to the touchscreen, the TapSense technology can tell the difference between a tap from a fingertip, the pad of the finger, a fingernail, and a knuckle.
It can distinguish between the four types of finger inputs with 95 percent accuracy, and between a pen and a finger with 99 percent accuracy.
This means that users could, for example, capitalize letters by tapping with a fingernail instead of a fingertip, or switch to numerals by tapping with the pad of the finger.
“TapSense basically doubles the input bandwidth for a touchscreen,” says graduate student Chris Harrison.
“This is particularly important for smaller touchscreens, where screen real estate is limited. If we can remove mode buttons from the screen, we can make room for more content or can make the remaining buttons larger.”
TapSense works by classifying the sounds that different parts of the finger make as they strike the screen. All it needs is an additional, inexpensive microphone, because the microphones already built into phones are optimized for capturing voices.
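To give a rough sense of how such acoustic classification could work, here is a minimal sketch, not the actual TapSense implementation: each short tap recording is summarized by its frequency content and fed to an off-the-shelf classifier. The feature choices, labels, and window handling are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def tap_features(waveform):
    """Summarize a short tap recording as a coarse log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    # Pool the spectrum into a fixed number of bands so taps of slightly
    # different lengths produce comparable feature vectors.
    bands = np.array_split(spectrum, 32)
    return np.log1p(np.array([band.mean() for band in bands]))

def train_classifier(recordings, labels):
    # Hypothetical training data: (waveform, label) pairs captured from the
    # microphone, with labels such as "tip", "pad", "nail", "knuckle".
    X = np.stack([tap_features(w) for w in recordings])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

def classify_tap(clf, waveform):
    # Predict which part of the finger (or which tool) produced the tap.
    return clf.predict(tap_features(waveform)[None, :])[0]
```

In a real system the tap would first be detected and segmented from the live microphone stream; the sketch assumes that segmentation has already happened.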
Other applications, says the team, could include a painting app that uses different tapping modes to control colors, or to switch between drawing and erasing without having to press buttons.
The technology can also use sound to discriminate between tools made from different materials, such as wood, acrylic or polystyrene foam.
This would enable people using styluses made from different materials to collaboratively sketch or take notes on the same surface, with each person’s contributions appearing in a different color.
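A shared-canvas app could build on the same classifier: once a tap or stroke is attributed to a particular stylus material, it is simply mapped to that user's ink color. The materials and colors below are illustrative assumptions, not details from the researchers.

```python
# Map classified stylus materials to ink colors (hypothetical choices).
MATERIAL_COLORS = {"wood": "red", "acrylic": "blue", "foam": "green"}

def stroke_color(clf, tap_waveform, default="black"):
    material = classify_tap(clf, tap_waveform)  # reuses the sketch above
    return MATERIAL_COLORS.get(material, default)
```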