CAPTCHAs Defeated, But Who Cares? - InformationWeek


Columbia University researchers made headlines by defeating Google and Facebook CAPTCHAs through artificial intelligence, but the real fraud for enterprises happens via cheap labor, not AI.


Columbia University researchers recently disclosed that they had created an artificial intelligence system to defeat CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). Since CAPTCHAs are often used to combat fraud, you would think that the ability to use an automated system to defeat CAPTCHAs at 80% or greater accuracy would be a big deal. It's actually not, at least from a business standpoint.

Here's what I mean. For use-cases that don't involve criminal activity and financial gain, there's very little incentive for bad actors to spend money to compromise systems. Your hobbyist bulletin board system, for example, is likely pretty safe with a CAPTCHA.

But for use-cases that attract criminal activity, like retail or credit card processing, there is significant incentive for bad actors to spend some amount of money to defeat the CAPTCHA. 

Enter the global economy, stage left.

Anyone who has worked in the jaded, weary world of fraud prevention knows that there are entire global businesses that provide rooms full of people in third-world nations to do things like solve CAPTCHAs. These businesses expose an API so that criminals can write code that routes CAPTCHAs to people in a solving call center. It's like a Cory Doctorow novel, except that it's real.

Take a look at the Google results for "read CAPTCHA API." You'll see businesses that claim to be "a human-powered image and CAPTCHA recognition service" and urge you to "earn with us." One vendor promises the ability to "solve any CAPTCHA. All you need to do is implement our API, pass us your CAPTCHAs, and we'll return the text. It's that easy!"
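Integration really is that easy, which is part of the problem. The sketch below is hypothetical: the function names, the task-id scheme, and the stub standing in for the vendor's paid HTTP transport are all invented. But it mirrors the submit-then-poll workflow such APIs advertise.

```python
import base64

# Hypothetical sketch of the submit-then-poll workflow that human-powered
# CAPTCHA-solving APIs advertise. All names here are invented.

def submit_captcha(transport, image_bytes):
    """Upload the CAPTCHA image; the service returns a task id."""
    payload = base64.b64encode(image_bytes).decode("ascii")
    return transport("submit", payload)

def fetch_result(transport, task_id):
    """Poll for the text a human operator typed in."""
    return transport("result", task_id)

# Stub standing in for the vendor's paid HTTP API (~$1 per 1,000 solves).
def fake_transport(action, data):
    if action == "submit":
        return "task-42"
    if action == "result":
        return "XK7PQ"  # the solved CAPTCHA text

task = submit_captcha(fake_transport, b"...image bytes...")
print(fetch_result(fake_transport, task))  # prints XK7PQ
```

From the criminal's side, the CAPTCHA reduces to one extra function call per transaction.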


As Brian Krebs has said, these are virtual sweatshops that charge around a dollar to solve 1,000 CAPTCHAs. They operate in third-world economies, where being paid 35 cents per 1,000 solved CAPTCHAs is an incentive.

This is probably a lot cheaper and perhaps even faster than the artificial intelligence that the Columbia University researchers implemented.

So, again, the results at Columbia are interesting, but not terribly relevant to fraud prevention. With human labor this cheap, it's clear that relying on a Turing test doesn't work, and isn't going to work, for anything serious.

What will? A more comprehensive anti-fraud service that keeps track of a number of variables, not simply whether the person solved a puzzle. It will work kind of like really good anti-spam services, which keep analytics about how often they've seen an actor, how many different credit cards are used, what geographies the actor is coming from, how many accounts are associated with that actor, and other items.
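As a rough illustration of that idea — not any real vendor's algorithm; the signals, weights, and actor fields below are all invented — such a service might combine several weak per-actor signals into a single score:

```python
# Illustrative only: aggregate weak fraud signals into one score.
# Signal names, weights, and field names are invented for this sketch.

def fraud_score(actor):
    signals = {
        "new_account": actor.get("account_age_days", 0) < 1,
        "many_cards": len(actor.get("credit_cards", [])) > 3,
        "geo_mismatch": actor.get("ip_country") != actor.get("billing_country"),
        "linked_accounts": actor.get("linked_account_count", 0) > 5,
        "known_proxy": actor.get("via_proxy", False),
    }
    # Each signal alone is weak; together they form a strong one.
    weights = {"new_account": 1, "many_cards": 3, "geo_mismatch": 2,
               "linked_accounts": 3, "known_proxy": 2}
    return sum(weights[name] for name, hit in signals.items() if hit)

suspicious = {"account_age_days": 0, "credit_cards": ["a", "b", "c", "d"],
              "ip_country": "RO", "billing_country": "US",
              "linked_account_count": 9, "via_proxy": True}
print(fraud_score(suspicious))  # prints 11
```

Note that no single check decides anything; a cheap human CAPTCHA solver defeats only one of these signals, not the combination.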

Facebook and Google actually do use such systems. Ask anyone who gets an email when they sign in from a different location. A high-security system, like the LastPass password manager, works this way: If you don't have its cookie in your browser, it sends you a confirmation email when you log in asking you if it's really you. Even the gaming service Steam does this, for crying out loud.
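Here's a minimal sketch of that device-recognition flow. The names and the in-memory store are invented for illustration; this is not LastPass's or Steam's actual implementation, just the shape of the check they describe.

```python
import secrets

# Invented sketch of a "recognized device" login check: no trusted
# token in the browser means the login is parked pending an emailed
# confirmation.

known_devices = {}          # user -> set of device tokens we've issued
pending_confirmations = []  # users we owe a confirmation email

def log_in(user, presented_token):
    if presented_token in known_devices.get(user, set()):
        return "allow"
    # Unrecognized device: park the attempt and email the user.
    pending_confirmations.append(user)
    return "confirm-by-email"

def confirm(user):
    """User clicked the emailed link; trust this device going forward."""
    token = secrets.token_hex(16)
    known_devices.setdefault(user, set()).add(token)
    return token  # stored in the browser, e.g. as a cookie

print(log_in("alice", None))   # prints confirm-by-email
token = confirm("alice")
print(log_in("alice", token))  # prints allow
```

The puzzle-solving human in a call center has no answer to this: the challenge goes to the account owner's inbox, not to whoever is at the keyboard.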

Frankly, the enterprise is somewhat behind. What can we do? We can start to investigate anti-fraud startups. This gives us a sense of what's possible for early-adopter enterprises. 

(Image: BeeBright/iStockphoto)

I got a demo of one such service, Smyte, recently. Like other startups on the move, it relies not just on data, but on big data to reach its conclusions. As founder Pete Hunt told me, "We aggregate many weak signals to come up with a strong signal when it comes to detecting fraud." That is, the service doesn't rely on someone's IP address alone, or their email address, or whether they're coming through a proxy; it takes all of these signals into account.

Smyte, backed by Y Combinator, uses algorithms and machine learning to provide a verdict to its customers. But Smyte doesn't rely exclusively on big data and machine learning to infer patterns. Like a good anti-spam service, it has analysts who monitor trends and incidents, like an anti-fraud network operations center, and then apply new rules for new types of threats.

Hunt claims customers have prevented millions of dollars of fraudulent activity by using the service, which, through its own API, delivers a verdict that lets customers stop a questionable transaction before it goes through. Though the service supports automation through the verdict, it also provides a human-readable reason for the verdict for later manual review.
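A hedged sketch of what such a verdict payload might look like — the field names and threshold are invented for illustration, not Smyte's actual API:

```python
# Invented sketch of a verdict-style response: an automatable decision
# plus human-readable reasons for later manual review.

def make_verdict(score, reasons, threshold=5):
    """Return a machine-actionable verdict with analyst-readable context."""
    return {
        "verdict": "reject" if score >= threshold else "allow",
        "score": score,
        "reasons": reasons,  # for the human reviewer, not the machine
    }

v = make_verdict(11, ["4 credit cards on a day-old account",
                      "IP geolocation disagrees with billing country"])
print(v["verdict"])  # prints reject
```

The caller's checkout code branches on `verdict` before completing the transaction, while the `reasons` list gives an analyst something to audit afterward.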

Smyte is serving social media and financial clients at the moment, but Hunt says that the company is open to new use cases. Could the next use-case be the enterprise? Whether the need is served by Smyte or another vendor, something more than CAPTCHA is needed.

Jonathan Feldman is Chief Information Officer for the City of Asheville, North Carolina, where his business background and work as an InformationWeek columnist have helped him to innovate in government through better practices in business technology, process, and human ...