
AI "HUMANS" AND THEIR SCAMS

 Filenews 7 January 2026



In 2023, Hany Farid couldn't shake the feeling that he wasn't really talking to Barack Obama himself. An associate of the former US president had invited him to discuss deepfake technology. As the video call progressed, the experience of talking to Obama – whose voice and pace of speech were entirely characteristic – began to seem strange. "I was thinking, 'It's a deepfake. This is not Obama,'" Farid says. "I wanted to tell him: 'Put your hand in front of your face.'"

Back then, a request like that was a way to detect deepfakes: the image would distort, revealing the fraud. Farid, however, could hardly look the former president in the eye and ask him to prove that he was... real. "For 10 minutes I thought, 'Are they kidding me?'" he says.

In the end, no one was tricking him. His suspicions, however, reflect how much AI has begun to stoke paranoia among people online. The technology is evolving rapidly to bypass human defenses. The hand-in-front-of-the-face trick is already obsolete. In a recent call with Bloomberg Businessweek, Farid demonstrated that he could replace his face with that of Sam Altman, CEO of OpenAI. There was a delay between voice and video and a slightly "dead" look, yet he could scratch his cheek and change the lighting of the room without disturbing the image. "As a rule," he says, "the idea that you can trust what you see in a video call is over."

Society has been preparing for the day when machines could behave convincingly like humans since 1950, when Alan Turing proposed the "imitation game". In the Turing test, a human judge converses in writing with a machine and a human, trying to guess which is which. If the machine fools the judge, it passes the test. Decades later, websites began asking users to prove they are human through captchas – distorted characters that are easy for people to read but difficult for computers. As automated tools evolved, so did the traps. They got stranger, asking users to judge photos of smiling dogs – and leaving them to wonder whether dogs can even smile.

Obstacles

The advent of large language models overcame these defenses. With the right instructions, AI agents can solve complex captchas. In a recent study with 126 participants, several LLMs were put through the Turing test, and participants judged GPT-4.5 to be human in 73% of cases.

In an internet-mediated world, trust is breaking down. Every interaction – with an employer, a potential partner, our mother, or a former US president – is vulnerable to sophisticated deception. Voice clones have impersonated US Secretary of State Marco Rubio in communications with his foreign counterparts. In Hong Kong, an employee of a multinational sent $25 million in 2024 to scammers who used deepfakes to pose as the company's chief financial officer.

Lacking better solutions, individuals and organizations improvise informal Turing tests of their own. They clear these new hurdles through methods that often cut against social norms, asking people to prove their humanity. As machines become better at mimicking humans, humans are changing the way they write, hire, and interact with strangers to avoid falling victim to AI.

Sarah Suzuki Harvard, a copywriter, says the hunt for AI writing has turned into a "witch hunt" in which ordinary human writing habits are unfairly labeled as telltale signs of AI.

At universities, professors share strategies for detecting assignments written with ChatGPT, while students protest unfair punishments. On Wikipedia, teams of editors clean up articles, hunting down fake citations and the overuse of certain words. The goal is not to eliminate all AI input, says Ilyas Lebleu, but to reduce sloppiness.

In the realm of recruitment, AI generates cover letters and resumes in bulk, flooding HR departments.

To guard against AI scams, we are slowly returning to analog-era solutions. Sam Altman has suggested family code words as a measure against voice deepfakes. Starling Bank in England has promoted the same strategy, with 82% of people saying they agreed with the measure. Others rely on trick questions.

Large companies like Google are bringing back face-to-face interviews – which, of course, chatbots cannot... attend – both to verify basic skills and to integrate new hires into the company culture. The FBI has revealed that North Korean IT workers managed to get hired at more than 100 US companies using AI-based deception, channeling millions of dollars to North Korea.

Proof

The growing demand for proof of... humanity has created new technological tools: tools for detecting deepfakes on Zoom (Reality Defender, used at JPMorgan Chase), as well as biometric verification. Orb, an iris scanner from Tools for Humanity, offers a global identity-verification service without storing any personal data. These solutions, however, require social acceptance and trust in whoever undertakes such projects.

Each solution has its drawbacks: it either requires people to reveal parts of themselves or turns being human into a kind of performance.

Machines have learned to predict human behaviour. Now, in order to stand apart, we have to prove that we are... human.

Adaptation – Editing: George D. Pavlopoulos

Bloomberg Opinion