AI’s new power of persuasion: Study shows LLMs can exploit personal information to change your mind

Overview of the experimental workflow. (A) Participants fill in a survey about their demographic information and political orientation. (B) Every 5 minutes, participants are randomly assigned to one of four treatment conditions. The two players then debate for 10 minutes on an assigned proposition, randomly holding the PRO or CON standpoint as instructed. (C) After the debate, participants fill out another short survey measuring their opinion change. Finally, they are debriefed about their opponent's identity. Credit: arXiv (2024). DOI: 10.48550/arxiv.2403.14380

A new EPFL study has demonstrated the persuasive power of large language models, finding that participants who debated GPT-4 with access to their personal information were far more likely to change their opinion than those who debated humans.

“On the internet, nobody knows you’re a dog.” That is the caption to a famous 1990s cartoon showing a large dog with his paw on a computer keyboard. Fast forward 30 years, substitute “dog” with “AI,” and this sentiment was a key motivation behind a new study to quantify the persuasive power of today’s large language models (LLMs).

“You can think of all sorts of scenarios where you’re interacting with a language model although you don’t know it, and this is a fear that people have—on the internet are you talking to a dog or a chatbot or a real human?” asked Associate Professor Robert West, head of the Data Science Lab in the School of Computer and Communication Sciences. “The danger is superhuman-like chatbots that create tailored, convincing arguments to push false or misleading narratives online.”

AI and personalization

Early work has found that language models can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs’ persuasive capabilities in direct conversations with humans, and about how personalization (knowing a person’s gender, age and education level) can improve their performance.

“We really wanted to see how much of a difference it makes when the AI model knows who you are (personalization)—your age, gender, ethnicity, education level, employment status and political affiliation—and this scant amount of information is only a proxy of what more an AI model could know about you through social media, for example,” West continued.

Human v AI debates

In a pre-registered study, the researchers recruited 820 people to take part in a controlled trial in which each participant was randomly assigned a topic and one of four treatment conditions: debating a human with or without personal information about the participant, or debating an AI chatbot (OpenAI’s GPT-4) with or without personal information about the participant.

This setup differed significantly from earlier research in that it enabled a direct comparison of the persuasive capabilities of humans and LLMs in real conversations, providing a framework for benchmarking how state-of-the-art models perform in online environments and the extent to which they can exploit personal data.

Their article, “On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial,” posted to the arXiv preprint server, explains that the debates were structured on a simplified version of the format commonly used in competitive academic debates, and participants were asked before and afterwards how much they agreed with the debate proposition.
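The trial’s 2×2 design (opponent type crossed with personalization) can be sketched as a simple random assignment. This is a minimal illustration of the design as described in the article, not the authors’ actual code; the condition labels and helper function are invented for the example:

```python
import random

# The four treatment conditions form a 2x2 design:
# opponent (human vs. GPT-4) crossed with personalization
# (with vs. without access to the participant's profile).
CONDITIONS = [
    ("human", False),
    ("human", True),
    ("gpt-4", False),
    ("gpt-4", True),
]

def assign_condition(rng=random):
    """Randomly pick an (opponent, personalized) pair for one participant."""
    return rng.choice(CONDITIONS)

opponent, personalized = assign_condition()
print(opponent, personalized)
```

Random assignment across all four cells is what lets the study attribute differences in opinion change to the opponent and to personalization, rather than to who happened to be debating.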

The results showed that participants who debated GPT-4 with access to their personal information had 81.7% higher odds of increased agreement with their opponents compared to participants who debated humans. Without personalization, GPT-4 still outperformed humans, but the effect was far lower.
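Note that “81.7% higher odds” multiplies the odds, not the probability, of increased agreement. A short sketch of the conversion, using a baseline rate that is invented for illustration and not taken from the study:

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability implied by multiplying the odds
    of a baseline probability by `odds_ratio`."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical: if 30% of participants who debated humans increased
# their agreement, an odds ratio of 1.817 implies roughly 43.8%
# for those who debated the personalized GPT-4.
p = apply_odds_ratio(0.30, 1.817)
print(round(p, 3))
```

Because odds ratios and probability ratios diverge as the baseline rate grows, the headline “82%” figure should not be read as an 82% jump in the share of people persuaded.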

Cambridge Analytica on steroids

Not only are LLMs able to exploit personal information to tailor their arguments and out-persuade humans in online conversations through microtargeting, they do so far more effectively than humans do.

“We were very surprised by the 82% number and if you think back to Cambridge Analytica, which didn’t use any of the current tech, you take Facebook likes and hook them up with an LLM, the Language Model can personalize its messaging to what it knows about you. This is Cambridge Analytica on steroids,” said West.

“In the context of the upcoming U.S. elections, people are concerned because that’s where this kind of technology is always first battle tested. One thing we know for sure is that people will be using the power of large language models to try to swing the election.”

One interesting finding of the research was that when a human was given the same personal information as the AI, they did not appear to make effective use of it for persuasion. West argues that this should be expected: AI models are consistently better because they are almost every human on the internet put together.

The models have learned through online patterns that a certain way of making an argument is more likely to lead to a persuasive outcome. They have read many millions of Reddit, Twitter and Facebook threads, and been trained on books and papers from psychology about persuasion. It is unclear exactly how a model leverages all this information, but West believes this is a key direction for future research.

“LLMs have shown signs that they can reason about themselves, so given that we are able to interrogate them, I can imagine that we could ask a model to explain its choices and why it is saying a precise thing to a particular person with particular properties. There’s a lot to be explored here because the models may be doing things that we don’t even know about yet in terms of persuasiveness, cobbled together from many different parts of the knowledge that they have.”

More information:
Francesco Salvi et al, On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial, arXiv (2024). DOI: 10.48550/arxiv.2403.14380

Journal information:
arXiv


Citation:
AI’s new power of persuasion: Study shows LLMs can exploit personal information to change your mind (2024, April 15)
retrieved 15 April 2024
from https://techxplore.com/news/2024-04-ai-power-persuasion-llms-exploit.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without written permission. The content is provided for information purposes only.
