Captchas show machines just don’t have warped minds

Those annoying little Captcha images look set to be around a while longer, following a study examining whether they really do prevent automated network attacks.

Penn State researchers explored the differences between human and machine recognition of visual concepts under various image distortions, and then used those differences to design image-based Captchas.

“Our goal is to seek a better understanding of the fundamental differences between humans and machines and utilize this in developing automated methods for distinguishing humans and robotic programs,” said James Z. Wang, associate professor in Penn State’s College of Information Sciences and Technology.

Most people are familiar with Captchas: randomly generated sets of distorted words that a user types into a box to complete a registration or purchasing process, verifying that the user is human.

Wang’s study presented a demonstration program built around an image-based Captcha called Imagination, and observed both humans and robotic programs attempting it.

The results, presented in IEEE Transactions on Information Forensics and Security, showed that robotic programs were not able to recognize distorted images. In other words, a computer recognition program had to rely on an undistorted picture, while humans could still tell what the picture showed despite the distortion.
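To see why distortion trips up rigid recognition programs, consider a toy geometric warp. The sketch below (a minimal illustration, not code from the Imagination system; all names are invented for this example) shifts each row of a tiny grayscale "image" by a sine offset. A human can still read the warped shape, but any matcher keyed to exact pixel positions no longer sees the pattern it expects.

```python
import math

def warp_rows(image, amplitude=2, period=8.0):
    """Shift each row horizontally by a sine-wave offset -- a toy version
    of the geometric distortion an image-based Captcha might apply.
    `image` is a list of rows of pixel values. Function and parameter
    names are illustrative assumptions, not from Wang's system."""
    width = len(image[0])
    warped = []
    for y, row in enumerate(image):
        shift = int(round(amplitude * math.sin(2 * math.pi * y / period)))
        # Cyclically shift the row; pixels are moved, none are lost.
        warped.append([row[(x - shift) % width] for x in range(width)])
    return warped

# A vertical bar "glyph": a template matcher expecting pixels at columns
# 3-4 in every row fails on the warped copy, yet a human still sees a
# wavy vertical stroke.
glyph = [[1 if 3 <= x <= 4 else 0 for x in range(8)] for y in range(8)]
warped = warp_rows(glyph)
print(warped != glyph)  # True: pixel positions no longer line up
```

The warp is invertible and preserves every pixel, which is exactly the point: the information content is intact for a flexible observer, but a program comparing raw pixel grids sees a different image entirely.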

But watch this space. “We are seeing more intelligently designed computer programs that can harness a large volume of online data, much more than a typical human can experience in a lifetime, for knowledge generation and automatic recognition,” said Wang. “If certain obstacles, which many believe to be insurmountable, such as scalability and image representation, can be overcome, it is possible that one day machine recognizability can reach that of humans.”