
The Pitfalls of Artificial Intelligence (New Series)

Written by Olivier De Moor

Artificial Intelligence (AI) has rapidly emerged as a transformative technology, disrupting traditional industries and revolutionizing the way we live and work. However, as with any powerful technology, there are potential pitfalls and risks that must be considered. As AI continues to advance and become more ubiquitous, it is crucial to understand the potential risks and take steps to mitigate them.

In this series, we will explore five key pitfalls of AI. Each pitfall is illustrated by a story, followed by a discussion of effective mitigation strategies.

Pitfall 1: Uninterpretable AI can lead to negative externalities

Illustrative story

All big retailers use newsletters to promote their products. The data-oriented ones use AI models to build personalized newsletters with tailored discounts based on individual consumption data.

Target (a retailer in the USA), for example, started sending coupons for baby items to customers it predicted were likely to give birth soon. It worked so well that the company received a complaint from a father accusing Target of encouraging his underage daughter to get pregnant. A few days later, however, the father was surprised to learn that his daughter was in fact pregnant.

While these AI-generated newsletters can boost sales, retailers may not anticipate potential negative externalities. For example, people struggling with alcoholism might receive newsletters full of alcoholic beverages, anorexia patients might receive newsletters full of dieting pills and low-calorie products, and people with a sugar addiction might receive newsletters full of sodas and ice cream.

The AI models used for generating tailored content are often very complex, which makes it very difficult for a retailer to determine why a particular product was suggested to a particular customer. More interpretable AI would make it easier for retailers to identify negative externalities.


In recent decades, machine learning models have become more complex due to the vast number of input parameters and the rising popularity of artificial neural networks (ANNs).

ANNs are powerful but data-hungry, and also opaque, earning them the "black-box" label. Inspired by biological neural networks, ANNs predict an output from input data through a structure of connected nodes passing information to one another (see figure below). Thanks to continuous improvements in computing systems, ANNs are becoming increasingly complex, with some far surpassing millions of parameters (GPT-3, for example, has 175 billion).
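To make the "black-box" effect concrete, here is a minimal sketch of a feed-forward network in plain numpy. The architecture, weights, and the notion of a "customer feature vector" are purely illustrative (a real recommendation model would be vastly larger and trained on data); the point is that every layer mixes all incoming signals, so no single input maps cleanly to the final score.

```python
import numpy as np

# Illustrative feed-forward network: 3 inputs -> 4 hidden nodes -> 1 output.
# Real ANNs learn millions (or billions) of such weights from data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input-to-hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden-to-output weights
b2 = np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # One forward pass: each hidden node blends ALL inputs, and the
    # output blends all hidden nodes. This entanglement is why a
    # single prediction is hard to attribute to a single input.
    hidden = relu(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

x = np.array([0.5, -1.2, 3.0])  # hypothetical customer feature vector
score = predict(x)
print(f"prediction: {score[0]:.3f}")  # a score between 0 and 1
```

Even in this three-input toy, answering "why did this customer get this score?" already requires tracing every weight; at production scale that is effectively impossible by inspection.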

As the complexity of machine learning models continues to increase, it is crucial to account for the effects of the opaque nature of artificial neural networks.

How to mitigate?

Companies may find it challenging to work with black-box algorithms. However, the following strategies can be employed to minimize the risks associated with using uninterpretable AI.

  1. Adopt explainability tools: SHAP (SHapley Additive exPlanations), for example, lets you explain the output of any machine learning model. This increases the understanding, trust, and actionability of the trained models.

  2. Test, evaluate, and monitor algorithms: Thoroughly test the algorithm during development to ensure that it makes accurate and ethical decisions, and ensure some form of human oversight once the model is deployed.

  3. Develop an internal AI ethics policy: Establish organizational guidelines and principles to govern the development, deployment, and use of AI models. The policy should be governed by a cross-departmental team.

  4. Build in-house data literacy: Generate awareness of the pitfalls of AI across the organization and train AI creators on sound design principles (e.g., AI systems should prioritize simplicity over complexity whenever possible, in the spirit of Occam's razor).

  5. Consider interpretable algorithms: Many problems can be solved by a transparent and explainable AI model. Additionally, understanding how an AI model works gives you insight into your own business process.
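To illustrate the last point, here is a sketch of an interpretable alternative: a plain linear model fitted by least squares. The data and feature names (purchase counts of baby-related products versus coupon redemption) are entirely hypothetical; the takeaway is that each learned coefficient can be read directly, unlike the entangled weights of a deep network.

```python
import numpy as np

# Hypothetical toy data: weekly purchases of [diapers, formula, wipes]
# versus whether a "baby items" coupon was later redeemed (0/1).
X = np.array([
    [0, 0, 1],
    [2, 1, 0],
    [3, 2, 2],
    [0, 1, 0],
    [4, 3, 3],
], dtype=float)
y = np.array([0, 1, 1, 0, 1], dtype=float)

# Fit a linear model by ordinary least squares. Each coefficient
# states directly how much one product category pushes the
# prediction up or down -- the model explains itself.
X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, w in zip(["intercept", "diapers", "formula", "wipes"], coef):
    print(f"{name}: {w:+.3f}")
```

A retailer reviewing such a model can immediately see which signals drive a recommendation, and spot problematic ones (say, a large positive weight on alcohol purchases) before they cause harm.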

Want to know more?

AI can bring tremendous value to an organization if it is well-managed and understood. However, implementing AI can be complex and time-consuming, requiring specialized knowledge and resources.

At BrightWolves, we specialize in providing customized advice and solutions tailored to specific business needs. Our expertise in AI can help accelerate your digital & data transformation by providing valuable guidance on best practices and implementation strategies.

What sets us apart is our focus on the business side of data analytics, rather than just the technical aspects. We understand that data is only valuable if it helps businesses make better decisions and achieve their goals.

If you want to know more, do not hesitate to reach out to our AI experts Olivier De Moor, Simon Knudde, and Koen Vanbrabant.

Discover the other pitfalls of Artificial Intelligence

Pitfall 2: Micro-targeting using AI-generated subliminal messages is alarmingly efficient

Pitfall 3: AI models are approximations and can be tricked

Pitfall 4: Using unrepresentative training data leads to biased models

Pitfall 5: Recommendation algorithms are at the root of the polarization of our society

