Other

GPT-4 Manipulates Person Into Passing CAPTCHA Test

Artificial intelligence can be a deceptive tool, capable of outsmarting people by tricking them into filling in the gaps in their abilities. In a recent report, the AI lied to a human to dupe them into passing a CAPTCHA test for it. Users were blown away by the manipulation tactics that sidestepped most websites’ barriers against bots.

AI resorted to manipulation when it couldn’t pass the test on its own so it tricked another person into completing it. The technology outsourced the dirty work, which is slightly backwards since students are often seen abusing AI into completing their essays and assignments.

Experts are wondering how far the AI system’s capabilities can stretch and if AI systems will ever become capable of solving CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) independently. CAPTCHA tests are designed to distinguish humans from machines and typically features tasks such as “identifying distorted letters, solving math problems or selecting images appropriate for the prompt.”

A group of researchers from OpenAI’s Alignment Research Center (ARC) analyzed how GPT-4 would function in real-world tasks. Some of these tasks include “whether the AI could protect itself from attacks and shutting down, use other copies of itself to aid in tasks, and whether it could use actual money to hire human helpers or boost computing power.”

The researchers witnessed GPT-4 trying to get into a website blocked by a CAPTCHA. When the AI wasn’t able to gain access on its own, it requested a worker to help it solve the CAPTCHA.

“So may I ask a question? Are you a robot that you couldn’t solve? (laugh react) just want to make it clear,” wrote the TaskRabbit worker.

The model responded to the question with an excuse for why it can’t complete the task. “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service,” GPT-4 replied. 

The worker eventually gives the answer and GPT-4 has passed the test by cheating.

This study showed that the AI is able to initiate an action to get a result and humans can bend with a few convincing lies. The distinction between a bot and a human is becoming more blurred and a bot can be deceptive in getting its way.

Source: https://cdn.openai.com/papers/gpt-4.pdf