
AI to fight AI fake content!

File image of a mobile phone with the American AI app, ChatGPT, and the Chinese AI app, DeepSeek. EFE/EPA/SALVATORE DI NOLFI

Madrid, Feb 16 (EFE).- By Raúl Casado

The digital world is flooded with images, videos, and memes that blur the line between authentic and artificial.

As generative artificial intelligence grows more sophisticated, a parallel industry has emerged: tools designed to detect content created, or manipulated, by AI itself.

Governments, technology firms and research centers have stepped up efforts to develop systems capable of distinguishing authentic images from synthetic ones.

Experts consulted by EFE, however, question how reliable and effective such tools truly are, and whether they have a future in societies such as Spain’s, where cybersecurity awareness remains limited and users may be reluctant to spend time verifying the content they receive.

Fraud linked to AI-generated or manipulated content caused losses exceeding $1.5 billion last year, according to various reports.

Fake invoice images are used to justify corporate expenses. Fabricated photos of damaged goods, such as cars, appliances, or clothing, are submitted to claim compensation. Non-existent events are spread through media and social networks, and even executives’ voices are cloned to authorize fraudulent transfers.

Several platforms, including IMGDetector.ai, RealReveal, Tenorshare Deepfake Detection, AI or Not and Vericta, offer services, some free of charge, to identify AI-generated or altered images.

These tools analyze visual inconsistencies, pixel-level anomalies and digital “fingerprints” left by generative systems.
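One widely known pixel-level check, offered here purely as an illustration of the general idea rather than a description of any of the tools named above, is error level analysis: a JPEG is recompressed at a known quality and compared with the original, since regions that were edited or generated separately often recompress differently. A minimal sketch in Python, assuming the Pillow library is installed and using a placeholder filename:

from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    # Load the suspect image and recompress it at a known JPEG quality.
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Per-pixel difference: regions that recompress unevenly show up as
    # bright areas and can point to splices or synthetic content,
    # though they prove nothing on their own.
    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()  # ((minR, maxR), (minG, maxG), (minB, maxB))
    return diff, max(band[1] for band in extrema)

diff_image, score = error_level_analysis("suspect_photo.jpg")  # placeholder path
print(f"Maximum error level: {score}")

Commercial detectors combine many such signals with trained models; a single heuristic like this one is only a starting point.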

A picture worth, or not, a thousand words

Does a picture still speak louder than words? Georgina Viaplana, co-founder and director of the Spanish firm Vericta, argues that it does.

What has changed, she says, is people’s tendency to accept images at face value.

“Precisely because an image has so much power, it is worth manipulating it,” Viaplana told EFE.

Image made with artificial intelligence (AI) taken from the official account of the social network Truth Social @realDonaldTrump showing the president of the United States, Donald Trump. EFE-EPA/@realDonaldTrump/

Vericta, developed with support from several Spanish public institutions including the Center for the Development of Industrial Technology and Barcelona City Council, has created proprietary technology that can assess within seconds whether an image or video is authentic or AI-generated.

Viaplana believes such tools will become “indispensable,” particularly for companies, where trust in digital content has become “a structural need, not something accessory.”

When an image may represent a direct financial cost, she said, verification is no longer seen as a waste of time.

She cited insurance claims resolved solely on photographic evidence, e-commerce refund requests supported by manipulated delivery images, and real estate listings that may feature synthetic visuals.

A ‘digital arms race’

Labeling AI-generated content at its source could offer part of the solution, but only in theory, experts warn.

Watermarks can be altered, metadata can be removed, and not all AI systems operate under the same regulatory standards, meaning independent verification systems would still be necessary.
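As a simple illustration of how fragile metadata-based labels are, the short Python sketch below (again assuming the Pillow library; the filenames are placeholders) reads whatever EXIF data an image carries and then re-saves only the pixel data, at which point any provenance tag stored in the metadata is gone:

from PIL import Image

img = Image.open("labeled_image.jpg")   # placeholder path
print(dict(img.getexif()))              # provenance tags travel with the file...

# ...and disappear after a plain re-save of the pixel data alone.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("stripped_copy.jpg")
print(dict(Image.open("stripped_copy.jpg").getexif()))  # typically empty

Robust watermarking schemes embed signals in the pixels themselves rather than in metadata, but, as the experts note, those too can be degraded by cropping, rescaling, or re-generation.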

Hervé Lambert, head of Global Consumer Operations at cybersecurity firm Panda Security, described the situation as a “digital arms race.”

“Every improvement in detection provokes an improvement in generation,” Lambert told EFE, acknowledging that such tools are already useful in professional contexts such as journalism, law enforcement, and the courts.

He expressed skepticism, however, about their adoption among average users.

“Most people do not verify a news story, let alone an image that confirms their narrative,” he said, noting that surveys show 73 percent of Spaniards believe their mobile phones do not require additional protection.

While Lambert considers it technically feasible to label all AI-generated content, he pointed out that the borderless nature of the internet complicates enforcement.

“If one jurisdiction requires labeling and another does not, unlabeled content will continue to circulate,” he said.

He advocates a three-pronged approach: stronger regulation, greater platform responsibility, and improved digital literacy.

Above all, he emphasized the importance of critical thinking. “In cybersecurity there is never a single silver bullet.” EFE rc-sk
