
Researchers analyze the characteristics of AI-generated deepfakes

Credit: AI-generated image

Many of the deepfakes (videos with hyper-realistic fake recreations) generated by artificial intelligence (AI) that spread through social media feature political representatives and artists, and are often linked to current news cycles.

This is one of the conclusions of research by the Universidad Carlos III de Madrid (UC3M) that analyzes the formal and content characteristics of viral misinformation in Spain arising from the use of AI tools for illicit purposes. This work represents a step toward understanding and mitigating the threats that such hoaxes pose to society.

In the study, recently published in the journal Observatorio (OBS*), the research team examined this fake content through the verifications of Spanish fact-checking organizations such as EFE Verifica, Maldita, Newtral and Verifica RTVE.

“The objective was to identify a series of common patterns and characteristics in these viral deepfakes, provide some clues for their identification and make some proposals for media literacy so that citizens can tackle misinformation,” explains one of the authors, Raquel Ruiz Incertis, a researcher in UC3M’s Communication Department, where she is pursuing a Ph.D. in European communication.

The researchers have developed a typology of deepfakes, which makes it easier to identify and neutralize them. According to the results of the study, certain political leaders (such as Trump or Macron) were the main protagonists of content referring to drug use or morally reprehensible actions. There is also a considerable proportion of pornographic deepfakes that harm women’s integrity, particularly targeting well-known singers and actresses. These are usually shared from unofficial accounts and spread rapidly via instant messaging services, the researchers say.

The proliferation of deepfakes, i.e., the widespread use of images, videos or audio manipulated with AI tools, is a highly topical issue. “This type of prefabricated hoax is especially harmful in sensitive situations, such as in pre-election periods or in times of conflict like the ones we are currently experiencing in Ukraine or Gaza. This is what we call ‘hybrid wars’: the war is not only fought in the physical realm, but also in the digital realm, and the falsehoods are more significant than ever,” says Ruiz Incertis.

The applications of this research are numerous, ranging from national security to the integrity of election campaigns. The findings suggest that the proactive use of AI on social media platforms could revolutionize the way the authenticity of information is preserved in the digital age.

The research highlights the need for greater media literacy and proposes educational strategies to improve the public’s ability to discern between real and manipulated content. “Many of these deepfakes can be identified through reverse image searches on search engines such as Google or Bing. There are tools for the public to check the accuracy of content in a couple of clicks before spreading content of dubious origin. The key is to teach them how to do it,” says Ruiz Incertis.

The study also offers other recommendations for detecting deepfakes, such as paying attention to the sharpness of the edges of elements and the definition of the image background: if movements in the videos appear slowed down, or if there is any facial alteration, body disproportion or strange play of light and shadows, all of this indicates that the content could be AI-generated.

As well as, the research’s authors additionally see the necessity for laws that obliges platforms, functions and applications (akin to Midjourney or Dall-e) to ascertain a “watermark” that identifies them and permits the person to know at a look that the picture or video has been modified or created totally with AI.

The research team used a multidisciplinary approach, combining data science and qualitative analysis, to examine how fact-checking organizations apply AI in their operations. The main methodology is a content analysis of around thirty publications taken from the websites of the aforementioned fact-checkers in which this AI-manipulated or fabricated content is debunked.

More information:
Miriam Garriga et al, Artificial intelligence, disinformation and media literacy proposals around deepfakes, Observatorio (OBS*) (2024). DOI: 10.15847/obsOBS18520242445

Citation:
Researchers analyze the characteristics of AI-generated deepfakes (2024, May 24)
retrieved 24 May 2024
from https://techxplore.com/news/2024-05-characteristics-ai-generated-deepfakes.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.


