Nov. 16, 2024

Amplifying Manipulation: Nudging, Propaganda, Dark Patterns in AI


AI isn't just reshaping industries—it's reshaping minds.

 

In this episode of Ethical Bytes, our host, Carter Considine, looks into an ethical quandary that demands more debate and discussion. AI tools, originally meant to enhance user engagement, are now able to pinpoint users' mental health vulnerabilities, creating parallels with predatory tactics seen in the gambling world.

 

It’s a harsh new reality that challenges us to confront the dark side of AI's ability to manipulate and exploit, and should serve as a wake-up call for developers and users alike to question where the line of ethical responsibility lies.

 

Beyond gaming, our host unravels the broader landscape of AI-driven manipulation, from data-harvesting technologies that feed insurance and advertising giants to sinister forms of generative propaganda.

 

In dissecting how AI can micro-target individuals, spread misinformation, and design addictive experiences, Carter takes a deep dive into the moral complexities of a digital age where autonomy and accountability can easily be undermined.

 

Key Topics:

  • Defining “Manipulation” (2:24)
  • Teaching Machines to Manipulate (3:54)
  • Types of Machine Manipulation (4:22)
  • Programmers Building Manipulative Machines (6:42)
  • Who is Responsible for AI Misbehavior? Machine Agency and Consciousness (7:43)
  • The Bottom Line: What Can Be Done? (9:22)

A few weeks ago, I was approached by a software engineer running a video game company. He reached out because he had encountered a serious AI-related ethical dilemma in the gaming industry, one he considered more dangerous than any other ethical issue he'd seen brought to the table.

 

His colleague works at a large mobile gaming company with millions of users, including underage ones. 

 

The colleague, also a developer, mentioned that her team had built an internal AI for tracking each user’s engagement and spending. It also tracked users’ interest over time in the activities they liked.

 

The developer stumbled onto a pattern: when users’ interest in the activities they typically loved declined, they tended to spend more money on microtransactions (in-game purchases that unlock special features, abilities, or content). She also knew that losing pleasure in activities you normally enjoy is a symptom of depression. Essentially, the more depressed a user became, the more they tended to spend, and the company could measure this decline to an extremely precise degree.
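To make the mechanics concrete, here is a minimal sketch of how such a correlation might surface during routine analytics. All data, column names, and figures below are invented for illustration; this is not the company's actual pipeline.

```python
# Hypothetical sketch: correlating declining interest in a user's
# favorite activities with microtransaction spend. Synthetic data only.
import pandas as pd

# Per-user, per-week telemetry: minutes spent on historically favorite
# activities, and microtransaction spend in that week.
telemetry = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "week":    [1, 2, 3, 4, 1, 2, 3, 4],
    "favorite_activity_minutes": [120, 90, 50, 20, 100, 105, 95, 110],
    "spend_usd": [5, 8, 15, 30, 4, 5, 3, 6],
})

# Week-over-week change in engagement with favorite activities.
# A sustained decline is the proxy for lost interest.
telemetry["engagement_delta"] = (
    telemetry.groupby("user_id")["favorite_activity_minutes"].diff()
)

# A negative correlation (less interest, more spend) is the troubling
# signal the developer stumbled onto.
corr = telemetry["engagement_delta"].corr(telemetry["spend_usd"])
print(f"engagement change vs. spend: r = {corr:.2f}")
```

On this toy data the correlation comes out clearly negative, which is exactly the pattern that turns an innocuous analytics dashboard into a potential targeting tool.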

 

The company ultimately decided to shut down both algorithms, as such information could be used to very easily manipulate users, especially underage users, into spending more. But was this the only solution? Did the developers do the right thing?

 

Companies taking advantage of their customers’ psychological state is nothing new. Operators in the gambling industry are known to track when the job market is declining, debt is rising, and so on, since those are statistically the moments when people are more desperate and more tempted by online gambling.

 

This episode covers psychological manipulation using AI.

What is Manipulation?

When we speak about AI manipulating humans, we need to understand what we mean by manipulation. It's often easiest to understand through extreme examples, such as the psychological warfare waged by the USSR during the Cold War.

 

A less extreme but still dangerous example: a trusted coworker gaslights you into thinking you're not performing well, you lose confidence, and she gets the promotion over you.

 

Clinical psychology, which has significantly refined the definition, describes manipulation as “a set of behaviors that are intended to control or influence others in a way that is selfish, harmful, and often without regard for the other person’s well-being.” Typical manipulative behaviors include lying, gaslighting, or emotional manipulation. Manipulation can lead to a range of negative outcomes, such as decreased self-esteem, social isolation, and anxiety or depression.

 

From an ethics perspective, thinkers from Plato to Foucault have treated manipulation as a kind of harm and as morally impermissible. Marcello Ienca identifies four key features of manipulation: intentionality, asymmetry of outcome, non-transparency, and violation of autonomy.

Teaching Machines to Manipulate 

Machine learning is a method from computer science that uses data to create a probabilistic representation of the world. Machine learning algorithms infer patterns within data by learning from many examples of a task. 

 

Because machine learning amplifies patterns in data, AI also amplifies behaviors that exploit human vulnerabilities and can easily manipulate behavior and decision-making. AI can filter the information we receive online, spread misinformation through fake accounts, amplify or provoke sensational responses to current events, and optimize games until they become extremely addictive.
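As a hedged illustration of "learning from many examples," here is a tiny supervised-learning sketch. The features, labels, and scenario are invented; real systems use far richer data, but the principle is the same: the model infers whatever pattern the examples contain, including patterns that track vulnerability.

```python
# Minimal supervised learning sketch: the model infers a pattern purely
# from labeled examples. All numbers below are toy values.
from sklearn.linear_model import LogisticRegression

# Each row: [hours_played_per_day, late_night_sessions_per_week]
X = [[1, 0], [2, 1], [6, 4], [7, 5], [3, 1], [8, 6]]
# Label: 1 if the user later made a large purchase, else 0.
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The model now reproduces (and will amplify) the statistical pattern
# in its training examples when applied to new users.
print(model.predict([[7, 4]]))  # -> [1]
```

Nothing in this code "knows" anything about depression or vulnerability; it simply amplifies whatever correlations the training data happens to encode.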

Types of Machine Manipulation

There are several categories of manipulation powered by AI, for example:

  • Machines that observe include cars collecting extensive data on driving habits, such as speed, music choice, and pedal pressure, which is then shared with insurance companies, ad companies, and law enforcement. Cameras in retail spaces may track shoppers' emotional reactions to promotions.
  • Generative propaganda includes targeted misinformation, disinformation, and propaganda. Misinformation is inaccurate information shared without malice, while disinformation is false and spread to deceive. Propaganda aims to push a certain standpoint, often reinforcing a power regime. The labels “misinformation” and “disinformation” are also used to censor narratives that are politically unfavorable but not necessarily untrue.
  • Dark patterns are interface designs that steer decision-making, essentially tricking users into doing things and undermining their autonomy. These include deepfake calls, fake reviews, and hidden options. A recent example: deepfakes of Elon Musk, generated on YouTube during a SpaceX rocket landing, urging viewers to send cryptocurrency to “support SpaceX.” Dark patterns also optimize for obfuscation, creating misleading narratives and enabling shadow-banning. Since most AI lacks transparency, those affected often struggle to work out why they were censored or banned.

 

AI supercharges microtargeting, a technique advertisers use to deliver personalized, highly targeted messages to specific individuals or groups based on their demographic, behavioral, or psychographic characteristics.
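For illustration, here is a minimal sketch of what microtargeting looks like in code. The user traits, thresholds, and message copy are entirely hypothetical:

```python
# Hypothetical microtargeting sketch: pick a message variant per user
# based on inferred psychographic traits. Traits and copy are invented.
def pick_message(user: dict) -> str:
    if user["anxiety_score"] > 0.7:
        return "Don't miss out: this offer ends tonight!"  # urgency framing
    if user["bargain_seeker"]:
        return "Members save 40% today."                   # price framing
    return "New arrivals picked for you."                  # neutral default

users = [
    {"id": 1, "anxiety_score": 0.9, "bargain_seeker": False},
    {"id": 2, "anxiety_score": 0.2, "bargain_seeker": True},
]
for u in users:
    print(u["id"], pick_message(u))
```

The ethical weight sits in the first branch: the system selects a persuasion strategy based on an inferred emotional state, without the user's knowledge.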

Programmers Building Manipulative Machines

However, some cases aren’t so straightforward. Consider an AI that microtargets a customer segment to deliver a more efficient, effective search experience, so users see things they actually want to buy. The algorithm achieves its goal, but it can become manipulative by limiting the diversity of information users see, creating filter bubbles that push people toward certain purchasing behaviors without their consent.
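A brief sketch of how this happens even with honest intentions: a ranker that always surfaces the highest-scoring items, under assumed purchase-probability scores, starves everything else of exposure. Item names and scores below are invented.

```python
# Filter-bubble sketch: greedy relevance ranking narrows what a user
# ever sees, without anyone deciding to hide anything. Toy data only.
from collections import Counter

catalog = {                 # predicted purchase probability for this user
    "running_shoes": 0.9,
    "fitness_watch": 0.8,
    "cookbook":      0.3,
    "novel":         0.2,
    "board_game":    0.1,
}

def recommend(scores: dict, k: int = 2) -> list:
    """Always show the k highest-scoring items."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

shown = Counter()
for _ in range(10):         # ten sessions
    shown.update(recommend(catalog))

print(shown)  # only running_shoes and fitness_watch ever appear
```

Mitigations exist (exploration bonuses, diversity constraints), but they must be designed in deliberately; the default objective quietly produces the bubble.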

 

Does this mean that the developers of an AI are intentionally manipulating people? Typically, these behaviors are not deliberately built into AI, as in the case of the developer at the large mobile gaming company. Developers are often caught in moral dilemmas without realizing it, participating in downstream effects that only become visible after deployment.

 

So, if an AI is manipulative, who is responsible for it? 

Who is Responsible for AI Misbehavior?

Machine Agency and Consciousness

First of all, let's emphasize that we are speaking about AI, or machines, manipulating humans. Machines do not analyze their actions as right or wrong according to a value system, as humans do. Algorithms lack agency; algorithms behave as they are trained and cannot do otherwise. Machines do not have consciousness and, therefore, lack a will of their own. 

Should Developers Be Blamed?

Even if developers didn’t intend for AI to manipulate users, shouldn't they be responsible for AI misbehavior? Don't they know how the machine is going to behave? 

 

Unfortunately, AI may act in unpredictable ways. This is especially true of generative AI, whose training data comprises large corpora that have yet to be combed through for behaviors we don't want. On top of that difficulty, translating a machine's technical requirements into terms that map onto human morality is one of the hardest problems to solve.

 

Responsibility for building non-manipulative machines can only rest on the shoulders of someone who can change the direction of the product, and within large organizations it is often unclear who that is. Individual developers frequently have little to no say in which features get built; their options are to voice a complaint or leave.

The Bottom Line

AI-driven manipulation is no longer a hypothetical threat; it is already affecting millions. As AI continues to develop and more users are negatively impacted, we must identify clear ways of evaluating systems for potential harm. Companies and developers who understand how to control algorithms may wield the technology as a tool for extreme psychological manipulation. As Nell Watson states in Taming the Machine:

 

While propaganda used to be generic, the new age allows for micro-targeted psychological warfare. The sanctity of individual thought must be preserved against this backdrop of increasing intrusion. There is a discernible global trend towards a thin veneer of empty ‘liberal values’ disguising autocratic AI-driven governance, wherein ‘freedom’ could be reduced to a hollow term, devoid of its original scope and meaning.

 

AI’s ability to amplify patterns in our behavior, whether intentional or not, can lead to exploitation, especially when profit motives drive its design. But unlike humans, algorithms lack agency and moral judgment; they simply operate as programmed, often amplifying behaviors that developers may not have intended. Yet, this doesn’t absolve companies or developers from responsibility. We must continue to push for true accountability and effective ethical standards to ensure that AI serves humanity, rather than exploiting the vulnerable.