There are a variety of reasons why it can be important to know who is using a given computing device. Here are three real-world examples. First, when only certain users are permitted to use a device, or when each user is entitled to only certain privileges, the user must be identified. Second, knowing who is using a device makes customized settings possible. Finally, in collaborative systems and interactive games, it is important to know which user is performing which action.
The Future Interfaces Group at the Human-Computer Interaction Institute within the School of Computer Science at Carnegie Mellon University (Pittsburgh, PA) is developing a “technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide a degree of real-time authentication and identification of users.” The approach is called CapAuth.
The article discussing the technology and reporting the group’s most recent results, entitled “CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens,” was published in the Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’15, Madeira, Portugal, November 15–18, 2015), ACM, New York, NY, pp. 59–62. A copy of the article can be found here.
First, a few words of background information.
Projected capacitive touchscreens, such as those commonly used in smartphones, typically work by detecting changes in a projected electric field caused by the near presence of a user’s finger. A touch controller collects capacitance measurements across the touch-sensing grid and uses them to resolve the pixel positions of touch contacts and to assemble a capacitive image, in which each pixel value is the picofarad difference between a baseline measurement and the current measurement.
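For readers who want to make this concrete, the short Python/NumPy sketch below illustrates the basic idea under simple assumptions (raw frames delivered as arrays, a hypothetical noise threshold); it is not the controller firmware described in the paper.

```python
import numpy as np

def capacitive_image(raw_frame: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Pixel values are the picofarad differences between the current
    measurement and a baseline captured with nothing touching the screen."""
    return raw_frame.astype(np.float32) - baseline.astype(np.float32)

def touch_positions(image: np.ndarray, threshold: float = 5.0) -> list[tuple[int, int]]:
    """Coarse contact localization: report local maxima that exceed a
    (hypothetical) noise threshold, as (row, col) pixel coordinates."""
    rows, cols = image.shape
    peaks = []
    for r in range(rows):
        for c in range(cols):
            value = image[r, c]
            if value < threshold:
                continue
            neighbourhood = image[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if value >= neighbourhood.max():
                peaks.append((r, c))
    return peaks
```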
CapAuth uses the details of this capacitive image to capture material and geometric variations between the hands of different users. Material variations include differing dielectric effects arising from differences in users’ skin thickness. Geometric variations include differences in the relative lengths of users’ fingers.
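The sketch below shows the kind of features such variations could translate into. These are illustrative stand-ins only (per-contact capacitance sums as a rough material cue, pairwise centroid distances as a rough geometric cue); the article does not specify CapAuth’s actual feature set.

```python
import numpy as np
from scipy import ndimage

def handprint_features(image: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Illustrative features from a baseline-subtracted capacitive image:
    per-contact capacitance sums as a rough material/dielectric cue and
    pairwise centroid distances as a rough geometric cue. These are
    stand-ins for illustration, not the authors' actual feature set."""
    mask = image > threshold
    labels, n = ndimage.label(mask)          # connected contact regions
    if n == 0:
        return np.zeros(0, dtype=np.float32)
    index = list(range(1, n + 1))
    sums = np.asarray(ndimage.sum(image, labels, index))
    centroids = np.asarray(ndimage.center_of_mass(image, labels, index))
    # Pairwise distances between contact centroids, in pixel units.
    dists = [np.linalg.norm(centroids[i] - centroids[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.concatenate([np.sort(sums)[::-1], np.sort(dists)[::-1]])
```

Note that a feature vector built this way varies in length with the number of detected contacts, so a practical system would presumably fix the expected number of contacts (for example, five fingertips plus the palm) before feeding the features to a classifier.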
Using this capacitive image, the team developed prototype software that ran on a commercially available Nexus 5 smartphone. The interface captured a 16-bit, 15 x 27 pixel capacitive image at 25 FPS; each pixel corresponded to a 4.1 x 4.1 mm square on the screen.
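In code, such a frame is simply a small 16-bit array. The constants below restate the reported format; which dimension maps to rows versus columns is an assumption made here.

```python
import numpy as np

# Reported frame format for the Nexus 5 prototype: a 16-bit, 15 x 27 pixel
# capacitive image at 25 FPS, each pixel covering a 4.1 x 4.1 mm square.
# The row/column orientation chosen below is an assumption.
ROWS, COLS = 27, 15
PIXEL_PITCH_MM = 4.1
FRAME_RATE_FPS = 25

def pixel_to_screen_mm(row: int, col: int) -> tuple[float, float]:
    """Approximate screen position (mm from the top-left corner) of the
    centre of a given capacitive-image pixel."""
    return ((col + 0.5) * PIXEL_PITCH_MM, (row + 0.5) * PIXEL_PITCH_MM)

# A single frame is a small 16-bit array; one second of data is 25 frames.
frame = np.zeros((ROWS, COLS), dtype=np.uint16)
```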
A video explaining and illustrating CapAuth technology can be found at the end of this article.
In a study conducted by the team to determine the effectiveness of the system, CapAuth achieved an authentication accuracy of 99.6% across twenty participants. For user identification, the software achieved 94.0% accuracy among the twenty participants and 98.2% on groups of four.
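The article states only that machine learning classifiers are used, so the sketch below shows one plausible way such a classifier could be trained and scored on labelled capacitive frames. The SVM over flattened pixel values, the hyperparameters and the five-fold cross-validation are assumptions made for illustration, not the authors’ reported pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def identification_accuracy(frames: np.ndarray, user_ids: np.ndarray) -> float:
    """Estimate user-identification accuracy from labelled capacitive frames.

    frames:   shape (n_samples, 27, 15) - one handprint image per sample
    user_ids: shape (n_samples,)        - which user produced each sample

    The classifier choice below is an assumption for illustration only.
    """
    X = frames.reshape(len(frames), -1).astype(np.float32)   # flatten pixels
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    scores = cross_val_score(clf, X, user_ids, cv=5)          # 5-fold cross-validation
    return float(scores.mean())
```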
Based on these results, the team concluded that CapAuth is “not well suited for high-security applications, nor use cases desiring user differentiation among large groups of users.” A large group was specified as ten or more people. However, the team went on to suggest that CapAuth can accurately identify users within smaller groups, such as a family of four. This, in turn, suggests the use of CapAuth as a simple user differentiation mechanism for shared devices where security is not paramount. Two examples are providing parental controls on a family tablet and tracking individual users on a shared workspace touch table.
In the conclusion section of their paper, the researchers offer some comments on the limitations of the technique. First, the approach is potentially susceptible to environmental effects: the researchers note that CapAuth may not function as well under varying electrical conditions, mentioning that grounding to a charger and proximity to high-power electrical devices could affect the capacitive image. They go on to note that liquid on the screen (such as raindrops), gloves, rings, watches and other accessories could also affect the device’s accuracy.
Despite these limitations, the demonstrated state of development suggests that the approach has real potential. -Arthur Berman
Human-Computer Interaction Institute, Chris Harrison, [email protected]