Generative AI, deepfakes and deception at scale

The deployment of generative AI to engage in complex forms of sustained mass misinformation and deception is inevitable.

Research Priority 2 (RP2)

With the rapid advancement of Large Language Models (LLMs), it is inevitable that threat actors will deploy generative AI to engage in complex forms of sustained mass misinformation and deception. LLMs are a form of generative AI that can learn and deploy intelligent, adaptive, and knowledgeable text-based communication into the public domain. This cyberpsychology threat is a scale multiplier for threat actors. Although these capabilities are still in their infancy, their targeting will be increasingly aimed at individual cognitive decision-making patterns to influence belief systems, defeat sociotechnical controls, and achieve scam compliance over long periods. Defending against this threat will require a deep understanding of deception in a cyberpsychology context.

However, not all advancements in generative AI within a deception context are negative. Generative AI and LLMs also have the potential to directly assist the delivery of digital health solutions at scale. This positive side of these revolutionary developments is of equal interest in our context.

This research priority encourages proposals that examine one or more of the following questions:

RP2.1 What is known in the research and grey literature about the key factors that contribute to computer-mediated deception over time? What are the mediating processes and boundary conditions?

RP2.2 Can humans tell when they are conversing with an LLM? Can humans tell when they are being deceived by an LLM? What cognitive, demographic, situational and contextual factors affect this?

RP2.3 What are the most effective and efficient ways to interrupt and disrupt these processes at multiple levels and modalities (e.g., human-to-human, machine-to-human, and machine-to-machine target levels)?

RP2.4 How can generative AI (such as LLMs) be used to interdict deception at scale? Can machine-based cognitive support assist in detecting individual instances of LLM deception? Are there levels or types of deception, and are some more amenable to conventional or LLM-based delivery?

New PhD research addressing RP2 recently funded by IDCARE

Project title:

The Psychology of AI-Driven Deception: Theoretical Insights and Impacts

Project overview:

This project will explore the psychological impact of AI-driven deception, such as deepfakes, on receiver behaviours, beliefs, and mental wellbeing. It aims to develop a new theoretical model of computer-mediated deception by analysing interview data from IDCARE clients and case managers, focusing on how AI-driven content affects information processing, memory, belief systems, and psychological wellbeing. This pioneering research will be the first to explore the psychological mechanisms underpinning deception that uses AI-modified videos, a threat that will continue to grow if its impact is not effectively understood.

Why the research is important:

As AI-driven content creation tools advance, they enable threat actors to orchestrate sophisticated campaigns of misinformation and deception on a large scale. Leveraging advances in machine learning, these technologies also allow highly realistic manipulation of visual content, such as deepfakes, which poses significant risks to individuals’ belief systems, increases susceptibility to cybercrime, and contributes to adverse mental health impacts. Addressing these challenges and mitigating harmful consequences requires an evidence-based approach that explores the intricate social, cognitive, and affective mechanisms involved in the psychology of AI-driven deception.

How will this research benefit community resilience and specialist treatments?

By uncovering the psychological mechanisms behind AI-driven deception and its effects on mental wellbeing, memory, and belief systems, this research can inform the redesign of support systems for cybercrime victims and contribute to the development of robust legislation and policies to combat deepfake misuse and other AI-driven deceptions. Additionally, educational programs and frameworks for identifying credible information will empower individuals and organisations to mitigate the risks associated with AI technologies, fostering a safer online environment.