Deepfake threats escalate: The era of personalized forgery has arrived

#News · 2025-01-08

Researchers at Google DeepMind and Stanford University have created AI replicas of more than 1,000 people from nothing more than two-hour interviews.

Researchers at Google and Stanford University say a two-hour conversation with an AI model is enough to characterize the personality traits of a real person fairly accurately.

In a recent study, the researchers generated 1,052 "simulated subjects" (i.e., AI replicas) from two-hour interviews with each participant. The interviews followed a protocol developed by the American Voices Project and explored a range of topics of interest to social scientists, including life stories and perspectives on current social issues; the resulting transcripts were then used to build a generative AI model designed to mimic each participant's behavior.
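The article stops at this level of detail, but the core mechanism is conditioning a language model on a participant's interview transcript. A minimal sketch of that idea in Python, where the model name, prompt wording, and helper function are illustrative assumptions rather than the study's actual implementation:

```python
# Minimal sketch of a "simulated subject": condition a chat model on a
# participant's full interview transcript, then ask it survey questions.
# Model name and prompt wording are illustrative assumptions, not the
# study's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_replica(transcript: str, question: str, options: list[str]) -> str:
    """Answer one closed-form survey item as the interviewed person would."""
    prompt = (
        "Below is a verbatim interview transcript with a study participant.\n"
        f"---\n{transcript}\n---\n"
        "Answer the following question exactly as this person would, "
        f"choosing one option from {options}.\n"
        f"Question: {question}\nAnswer:"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```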

To assess the accuracy of the AI replicas, each participant completed two rounds of personality tests, a social survey, and economic games, with the second round taken two weeks after the first. When the AI replicas completed the same battery, their results matched the real participants' answers 85 percent of the time, a figure normalized against how consistently the participants reproduced their own earlier answers.
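To make the normalization concrete: the replica's raw agreement with a participant is divided by the participant's own test-retest agreement, so a replica that matches a person as well as the person matches themself scores 1.0. A toy illustration with made-up response data (all arrays below are invented for illustration):

```python
import numpy as np

# Toy illustration of normalized accuracy: replica-vs-participant agreement
# divided by the participant's own test-retest agreement. All data here are
# made up.
participant_round1 = np.array([1, 3, 2, 4, 1, 2, 3, 3])  # first survey pass
participant_round2 = np.array([1, 3, 2, 4, 1, 2, 3, 1])  # two weeks later
replica_answers    = np.array([1, 3, 2, 3, 2, 2, 3, 3])  # AI replica's answers

raw_accuracy = np.mean(replica_answers == participant_round1)      # 0.75
self_consistency = np.mean(participant_round2 == participant_round1)  # 0.875

normalized_accuracy = raw_accuracy / self_consistency  # ~0.86
print(f"raw={raw_accuracy:.2f}, self={self_consistency:.2f}, "
      f"normalized={normalized_accuracy:.2f}")
```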

The AI replicas were particularly good at reproducing answers to personality questionnaires and at capturing social attitudes, but their accuracy declined when predicting behavior in interactive games involving economic decisions.

Question of purpose

The motivation for developing simulated subjects, the scientists explain, is the potential to use them for research that would be costly, impractical or unethical if the subjects were real people. For example, AI models can help assess the effectiveness of public health measures or better understand how people react to product launches. Even simulating responses to major social events is conceivable, the researchers say.

"General-purpose simulations of human attitudes and behavior-in which each simulated individual interacts in a range of social, political, or informational situations-could provide a laboratory for researchers to test a range of interventions and theories," the researchers wrote.

However, the scientists also acknowledge that the technology could be abused. Simulated agents could, for example, be used to deceive people online in deepfake attacks.

Security experts have watched deepfake technology evolve rapidly and believe it is only a matter of time before cybercriminals find a business model built around targeting companies with it.

A number of executives have said their companies have recently been targeted by deepfake scams, particularly scams seeking financial data. Security firm Exabeam recently discussed an incident in which deepfake technology was used in a job interview, part of the growing North Korean fake IT worker scam.

The Google and Stanford researchers propose building a "subject library" of the more than 1,000 simulated subjects they generated. The library, which will be hosted by Stanford University, will "provide researchers with controlled, research-only API access to understand subject behavior," according to the researchers.
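The article describes this access model only in the abstract; no endpoint details are public. Purely as illustration, a research client for such a service might look like the sketch below, where the base URL, paths, token scheme, and response fields are all hypothetical:

```python
import requests

# Hypothetical client for a controlled, research-only "subject library" API.
# The base URL, endpoint paths, auth scheme, and field names are invented
# for illustration; the article specifies none of them.
BASE_URL = "https://example.stanford.edu/subject-library/v1"  # hypothetical

def query_subject(subject_id: str, question: str, token: str) -> dict:
    """Ask one simulated subject a survey question via the research API."""
    response = requests.post(
        f"{BASE_URL}/subjects/{subject_id}/responses",
        headers={"Authorization": f"Bearer {token}"},
        json={"question": question},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```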

While the study does not directly advance deepfake capabilities, it does demonstrate how quickly convincing simulations of individual human personalities are becoming possible with today's research.
