

Pitfalls of Artificial Intelligence

Artificial Intelligence (AI) has rapidly emerged as a transformative technology, disrupting traditional industries and revolutionizing the way we live and work. As with any powerful technology, however, it comes with pitfalls. As AI becomes more capable and more ubiquitous, it is crucial to understand these risks and take steps to mitigate them.

 

In this series, we explore five key pitfalls of AI. Each pitfall is illustrated with a story, followed by effective mitigation strategies.


Pitfall 2: Micro-targeting using AI-generated subliminal messages is alarmingly effective

Illustrative story

In 2016, Cambridge Analytica ran the social media campaign for Donald Trump's presidential bid. Their micro-targeting strategy, built on subliminal messaging, was highly successful, and the rest is history. But how did they manage such a feat?

 

Cambridge Analytica gained access to the personal data of ~83 million Facebook profiles. Several hundred thousand of these users answered a series of personality questions, which were used to create psychological profiles.

Using this data, a first model was built to infer the psychological profile of the remaining ~82 million users from their personal information. With these psychological profiles and the personal data, a second model was built to select the most effective personalized ad for each user. An ad was deemed effective if it prompted Trump supporters to vote, discouraged Clinton supporters from voting, or swayed potential swing voters to the Trump side.
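To make the two-stage setup concrete, here is a minimal sketch in Python, assuming scikit-learn-style classifiers and entirely hypothetical, randomly generated data in place of real survey answers, personal data, and ad-response labels. It illustrates the general approach described above, not a reconstruction of Cambridge Analytica's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- Stage 1: infer a psychological profile from personal data ---
# Placeholder data: 1,000 users who answered the questionnaire,
# 20 personal-data features each, and one of 5 personality types.
X_survey = rng.random((1000, 20))
y_personality = rng.integers(0, 5, 1000)
profile_model = RandomForestClassifier(random_state=0).fit(X_survey, y_personality)

# --- Stage 2: pick the ad variant predicted to work best ---
# Placeholder data: rows of (profile, personal data, ad-variant encoding)
# with a binary label for whether the user responded to the ad.
X_ads = rng.random((5000, 25))   # 1 profile + 20 personal + 4 ad-variant features
y_response = rng.integers(0, 2, 5000)
ad_model = RandomForestClassifier(random_state=0).fit(X_ads, y_response)

def best_ad(user_features: np.ndarray, ad_variants: list) -> int:
    """Infer the user's profile, score each ad variant, return the index of the best one."""
    profile = profile_model.predict(user_features.reshape(1, -1))  # shape (1,)
    scores = [
        ad_model.predict_proba(
            np.hstack([profile, user_features, variant]).reshape(1, -1)
        )[0, 1]
        for variant in ad_variants
    ]
    return int(np.argmax(scores))

# Example: choose among three hypothetical ad variants for one user.
user = rng.random(20)
variants = [rng.random(4) for _ in range(3)]
print(best_ad(user, variants))
```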

 

It is worth asking whether all Trump voters voted for him because of the "raw" content of his platform, or whether the highly customized messages they saw on social media played a significant role. If the latter is plausible, then AI is a powerful tool for swaying public opinion. The person being swayed is often unaware that they are being shown a particular framing of a message, let alone why they were selected to receive it.

 

The ethical concern in this scenario is how much control humans retain when algorithms can read an individual's emotional motivations and use them to influence their thoughts, purchasing decisions, and more.

Why?

One of the first lessons in any course on rhetoric and persuasion is to know your audience. Depending on someone's socioeconomic background and psychology, the same argument can resonate very differently, not only because of its raw content but also because of the choice of words, its format, and so on.

 

For instance, if we wanted to persuade someone that flying is safe, our argument would vary greatly depending on the person in front of us. For an analytical person, a statistical argument might suffice: "Based on a scientific study, driving leads to 1.27 fatalities per 100 million miles driven, compared to nearly zero per 100 million miles flown." For someone with a different psychological make-up, other arguments may be more effective, such as "Uncle Marc is a pilot, and he is sure flying is safe" or "There is a seatbelt on the plane, so we should be safe."

If we know which arguments work for which personality types, we can easily send everyone a tailored message to convince them that flying is safe. Identifying which message works for which personality type is exactly what AI is good at.
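As a toy illustration of that last point, the sketch below maps a predicted personality type to the message variant aimed at it, reusing the arguments from the flying example. The labels and messages are hypothetical; in a real system the mapping would itself be learned from response data, as in the earlier sketch.

```python
# Hypothetical mapping from a predicted personality type to a tailored message.
TAILORED_MESSAGES = {
    "analytical": "Driving leads to 1.27 fatalities per 100 million miles driven, "
                  "compared to nearly zero per 100 million miles flown.",
    "relational": "Uncle Marc is a pilot, and he is sure flying is safe.",
    "reassurance-seeking": "There is a seatbelt on the plane, so we should be safe.",
}

def tailored_message(predicted_type: str) -> str:
    """Return the message variant matched to the predicted personality type."""
    return TAILORED_MESSAGES[predicted_type]

print(tailored_message("analytical"))
```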

 

AI offers the potential to deliver personalized messages at scale. However, it is crucial to consider the ethical implications of using such technology, especially when it comes to persuasion and manipulation.​

How to mitigate?

Micro-targeting with AI-generated subliminal messages is a real risk for companies. Here are some strategies to mitigate it:

  1. Put the customer in the driving seat: Let customers decide how much personalization they want in your ads and which personal information they are willing to share. This fosters trust and loyalty.

  2. Be transparent: Communicate clearly how data is collected, stored, and used. This reduces the likelihood of backlash.

  3. Monitor ad content: Have a human review ad content to ensure it is not misleading or deceptive.

  4. Develop an internal AI ethics policy: Establish ethical guidelines for the use of micro-targeting and AI-generated subliminal messages that comply with data protection legislation, including the GDPR.

By implementing these strategies, companies can mitigate the risks associated with micro-targeting using AI-generated subliminal messages. Above all, prioritize user privacy and ethical considerations when deploying these technologies.


WANT TO KNOW MORE?

AI can bring tremendous value to an organization if it is well-managed and understood. However, implementing AI can be complex and time-consuming, requiring specialized knowledge and resources.

At BrightWolves, we specialize in providing customized advice and solutions tailored to specific business needs. Our expertise in AI can help accelerate your digital & data transformation by providing valuable guidance on best practices and implementation strategies.

What sets us apart is our focus on the business side of data analytics, rather than just the technical aspects. We understand that data is only valuable if it helps businesses make better decisions and achieve their goals.

If you want to know more, do not hesitate to reach out to our AI experts:
