6 Limitations of AI & Why it Won’t Quite Take Over In 2023!

I have taken a look into some of the limitations of artificial intelligence and why tools such as ChatGPT won't be ruling the world just yet.

Artificial Intelligence (AI) is a rapidly advancing field that has the potential to revolutionise many areas of our lives. However, despite its many advantages, there are also several limitations to the technology that must be taken into consideration…

Some of these limitations include a lack of common sense, transparency, creativity and emotion, as well as safety and ethical concerns.

These limitations have the potential to hinder the performance and effectiveness of AI systems and limit their applications, especially in areas such as decision-making, healthcare, transportation, and finance. Therefore, it is important to understand and address these limitations in order to fully harness the potential of AI.

1. Limited understanding of context

AI systems have a limited understanding of context and the nuances of human language and communication. 

Machines are often trained on large amounts of text data and are able to identify patterns and make predictions based on that data. However, they lack the ability to understand the nuances and subtleties of human language and communication. 

For example, they may struggle to understand sarcasm, irony, or figurative language. They also lack the ability to understand the context in which language is used, which can lead to errors or unexpected behaviour. Additionally, AI systems find it hard to understand idiomatic expressions and cultural references, although the more training and exposure they get, the better they handle this type of language.

ChatGPT understood each of these idioms and gave the alternatives shown after each dash:

  • Better late than never - It’s better to do something late than not at all
  • Bite the bullet - Face the consequences
  • Call it a day - Wrap up for the day
  • Kick the bucket - Pass away

Despite surprising us with its translations in these examples, AI still struggles with context and a deeper understanding of human language and communication, which can limit its effectiveness in tasks such as natural language processing and conversation.

In this field, we humans win because of our understanding of context and people’s situations on a deeper level. 

2. Lack of common sense

AI systems currently lack the ability to apply common sense reasoning to new situations. They are only able to make predictions and decisions based on the data they have been trained on, meaning they are not able to apply their knowledge in a flexible way to new situations. This lack of common sense can make AI systems prone to errors, particularly when dealing with novel situations.

For example, an AI system trained to identify objects in images may not be able to recognise an object that it has not seen before, meaning it will still require human input to feed it the new item and program it to recognise it in future.

Additionally, when faced with a similar but slightly different task, it might fail because it cannot grasp the subtleties behind the task or concept; it can only perform what it was trained for.

This lack of common sense can limit the effectiveness of AI in tasks such as decision-making, problem-solving and understanding the world.

3. Bias

AI systems can perpetuate and even amplify existing biases in the data they are trained on. 

Biases can be introduced in the data through various means, such as human error, sampling bias, or social and historical factors. For example, an AI system trained on a dataset of job applicants that is mostly composed of men will likely be biased towards men and make less accurate predictions for women. 

Similarly, an AI system trained on a dataset of criminal defendants that is mostly composed of people of colour will likely be skewed by that imbalance and make less accurate predictions for white defendants.

In one widely reported incident, an AI robot trained by sorting through billions of images and captions ultimately developed racist and sexist stereotypes. In one test, when asked to identify criminals, the robot chose black men 9% more often than white men.

Furthermore, AI systems can show biases in decision-making processes such as hiring, lending, and sentencing. For example, an AI model trained to predict loan defaults might inadvertently discriminate against certain groups if the historical data it was trained on was biased in a similar way. This can lead to unfair and unjust decisions and have serious consequences for individuals and society. 

It is crucial to identify and address biases in AI systems, through techniques such as data pre-processing and bias correction.
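
For readers who want a feel for what ‘identifying and addressing’ bias can look like in practice, here is a minimal sketch - my own illustration rather than any standard toolkit - assuming Python with scikit-learn and entirely made-up applicant data. It first measures accuracy separately for an over-represented group and an under-represented group, then retrains with sample weights that give the smaller group more influence.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "applicant" data: 900 records from group A, only 100 from group B
n_a, n_b = 900, 100
X = rng.normal(size=(n_a + n_b, 3))
group = np.array(["A"] * n_a + ["B"] * n_b)
y = (X[:, 0] + rng.normal(scale=0.5, size=n_a + n_b) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Step 1: measure performance per group - a large gap is a warning sign
for g in ("A", "B"):
    mask = group == g
    print(g, "accuracy:", round(accuracy_score(y[mask], pred[mask]), 3))

# Step 2: one simple correction - weight each record inversely to its group's size
weights = np.where(group == "A", (n_a + n_b) / (2 * n_a), (n_a + n_b) / (2 * n_b))
reweighted_model = LogisticRegression().fit(X, y, sample_weight=weights)

Real bias audits go a lot further than this, but the principle is the same: measure performance group by group, then rebalance or correct before trusting the model with decisions about people.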

4. Lack of creativity

Despite the recent debate surrounding the creative capabilities of AI, it does still lack creativity. 

AI systems are based on algorithms and mathematical models, which allow them to recognise patterns and make predictions based on large amounts of data. However, these systems are not able to create new ideas or concepts. 

AI is not able to come up with something completely novel, like a human artist creating a new painting or a human scientist discovering a new theory. Having said that, it could be argued that nothing anyone creates is truly novel, since we are influenced by everything around us and everything that has come before us.

Such is the paradox we live in!

This apparent lack of creativity limits the potential applications of AI and makes it difficult for the technology to be used in fields that require original thinking, such as art, music and literature. That said, I have seen examples of AI being used successfully to write new songs in the style of famous singers, and doing a pretty good job - presumably because it has learnt that style, tone and pace and is regurgitating it.

It’s also fair to say that AI can have a pretty good crack at improving existing content… 

Here’s the original M&S description of a ready meal:

M&S Slow Cooked Beef Bourguignon. Tender boneless British beef bourguignon in a rich red wine sauce with mushrooms, onions and bacon. Perfect served with creamy mash or with a fluffy baked potato. Simply warm on the hob, in the microwave or oven and enjoy. Gluten free. Serves 2.

And here’s the ChatGPT alternative:

Indulge in the rich and succulent flavours of our M&S Slow Cooked Beef Bourguignon. Made with tender boneless British beef, simmered in a mouthwatering red wine sauce with the addition of mushrooms, onions and bacon. Enjoy it with creamy mashed potatoes or a fluffy baked potato for the ultimate comfort meal. Conveniently heat it up on the stove, in the microwave or oven, and it's ready to be devoured. Gluten-free and perfect for serving 2.

Overall, ChatGPT’s version seems pretty good!

5. Lack of Emotion

AI systems do not have the ability to feel emotions. Here is where ChatGPT did a poor job…

Question: what are the differences between the human emotions of jealousy and envy?

Answer: Jealousy and envy are similar emotions, but they have some key differences.

Jealousy is the feeling of resentment or anger towards someone else's success or possession. It is often directed towards a person who is perceived as having something that the jealous person wants or desires. Jealousy can be triggered by a romantic partner, a friend, or a colleague who is doing well in their life or career.

Envy, on the other hand, is the feeling of wanting something that someone else has. It is often directed towards a person who is perceived as having something that the envious person wishes they had. Envy can be triggered by someone's possessions, status, or accomplishments.

In summary, jealousy is an emotion that arises when someone feels like they are losing something they have, while envy is an emotion that arises when someone wants something they don't have.

Here’s a dictionary.com explanation (presumably written by a human and easier to understand): Jealousy and envy both involve a feeling of desire for what another person has, but jealousy is usually thought to be more negative—it often involves resentment toward the other person. Envy is also a negative feeling—like a mix of admiration and discontent—but the word doesn’t usually imply hostility. Another difference is that envy can be used as both a noun and a verb.

AI systems can only process data in a logical and structured way. They can recognise patterns in data that may indicate certain emotions, such as facial expressions or tone of voice, but they do not experience emotions themselves. In other words, AI does not have consciousness or emotions; it does not have the subjective experience of feeling happy, sad or angry.

Some researchers are working on developing AI systems that can simulate emotions, for example in chatbots or virtual assistants, in order to improve their ability to interact with humans. However, this simulation is not the same as real emotion; it is simply a way to make the interaction more natural and human-like.

Here’s an example from ChatGPT of the M&S ready meal description in the tone of an angry person:

M&S Slow Cooked Beef Bourguignon? More like M&S Slow Cooked Disappointment! Tender? More like chewy and tough. British beef? More like it's from God knows where. Rich red wine sauce? More like watered down and flavourless. Mushrooms, onions, and bacon? More like a handful of bland veggies. Perfect served with creamy mash or fluffy baked potato? More like you should serve it with a side of disappointment. Gluten free? More like flavour free. Serves 2? More like serves 2 unhappy customers!

Not quite so good here, is it?

The question remains: will AI ever be capable of true emotions?

To circle back to point 4 on creativity, many question whether AI can truly capture emotions in art if it does not truly understand emotions itself.

Despite the advancements of artificial intelligence, machines may never perfectly capture the subtle nuances in facial features that portray emotion. Even the most intricate AI-generated artwork, while certainly impressive in its photorealism, can still fall short under closer examination.

For example, one basic telltale sign that a portrait may have been generated by AI is a fake smile that lacks that all-important sparkle in the eyes and warmth on the lips.

Ultimately, AI-generated artwork still has many limitations compared with traditional hand-drawn artwork when it comes to conveying emotion; however, with further development and improved computer vision technology, such as real-time facial tracking, these limitations could begin to fall away.

In all honesty, AI is at an impressive level right now - take a look at the two images below. Do you feel the emotions of the man? Do the first’s eyes reflect his kind soul and happiness? Does the second’s posture reflect his downtrodden demeanour? We’d argue yes, they do! So in this respect, perhaps AI artwork can capture emotions, provided it has been programmed to understand how to portray each specific emotion.

Note below that our input didn’t specifically request the AI to show this emotion through posture or facial expressions, yet it did so in an accurate way:

This application can also produce more accurate results if you add more detail. Take a look at this example, where a user has added a high level of detail to the description and, as a result, the image is tailored more to their liking.

It’s really all about knowing what to write to prompt the AI system to produce better results. 

6. Lack of robustness

Another limitation of AI systems is the lack of robustness, which makes them susceptible to manipulation. AI systems are based on large amounts of data and complex algorithms, which can make them difficult to interpret and understand. As a result, they can be easily fooled by malicious actors who may use techniques such as adversarial examples to manipulate the system's decisions. 

Adversarial examples are inputs, crafted specifically to fool the model, which can cause the AI system to make a mistake. For example, a malicious actor could create an image that is almost identical to a "stop" sign, but with slight modifications that cause an autonomous car's AI system to recognise it as a "yield" sign, leading to an accident. 
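
To make the idea of an adversarial example more concrete, here is a minimal sketch of the well-known ‘fast gradient sign method’, assuming Python with PyTorch and a toy, untrained classifier standing in for a real sign-recognition model - so it only illustrates the mechanics, not a real attack. Each pixel is nudged a tiny amount in whichever direction most increases the model’s error.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "sign classifier": flattens a 3x32x32 image and maps it to 2 classes
# (0 = stop, 1 = yield). In a real system this would be a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a stop-sign photo
true_label = torch.tensor([0])                         # 0 = stop

# Compute the loss against the correct label, then ask how each pixel affects it
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: move every pixel a small step in the direction that increases the loss
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())

Against a properly trained model, a perturbation this small is usually invisible to a human, yet it can be enough to flip the prediction - which is exactly why robustness testing matters.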

This lack of robustness makes it difficult to trust AI systems in critical applications and raises important concerns about security and reliability. Again, testing and designing software that is robust and cannot be manipulated remains of utmost importance.

Safety Concerns in AI

So, we know AI’s limitations, but what about the safety of AI? Can AI cause harm? How safe are these systems? 

These are all questions we should be asking ourselves as we see an increase in the presence of Artificial Intelligence in our day-to-day lives.

As AI systems become more advanced and are used in more critical applications, such as self-driving cars or medical diagnosis, the potential for harm increases. If an AI system is not designed or used responsibly, it can cause accidents or make decisions that harm people. 

For example, self-driving cars may cause accidents if the system fails to recognise an obstacle or misinterprets sensor data. Similarly, an AI-powered medical diagnosis system may misdiagnose a patient if it is not trained on a diverse set of data. 

Many are also concerned about the use of AI for cyber attacks or as a tool for surveillance. We must make an effort to ensure that AI systems are developed and used in a way that prioritises safety and minimises potential harm.

Ethical Concerns in AI

Ethical concerns surrounding AI are also prominent, though this is not a new concept. 

We have been sceptical about the ethics of AI ever since we saw the creation of robots that are trained to speak like humans. Will these machines take over? Which human jobs will they replace? Who is accountable in the event that an AI system makes a mistake? A robot can’t be prosecuted, can it… 

As AI systems become more complex and autonomous, it becomes increasingly difficult to determine who is responsible for their actions. It's important to consider these ethical concerns and ensure that AI systems are developed and used in a way that is fair, transparent, and respects people's rights.

Limitations of AI in retail 

So, how can AI be related to the retail sector? 

We already rely on AI to help with a lot of processes in retail, such as automating the checkout process, or in some cases even re-stocking shelves. Although these things are becoming normal, there are still limitations.

  • Limited understanding of customer behaviour: AI systems in retail struggle to understand customer behaviour and preferences in the same way that human sales associates can. This can make it difficult for AI-powered systems to make personalised recommendations or provide high-quality customer service.
  • Lack of flexibility: AI systems used in retail often require a large amount of data to train on and can be difficult to adapt to new products, trends, or customer preferences. This can limit their ability to respond to changing market conditions or customer needs.
  • Lack of creativity: AI systems in retail are not able to come up with new product ideas or marketing strategies in the same way that human retailers can. This can limit their ability to innovate and stay competitive in the market.
  • Limited ability to handle unstructured data: AI systems in retail are often better at handling structured data, such as numerical values, than unstructured data, such as images and free text, which can make it difficult to analyse customer reviews and feedback.
  • Dependency on human input: AI systems in retail still rely heavily on human input to function, for example to curate products, design the interface, set prices and evaluate performance.

So is AI set to take over?

In short, not yet!

We have seen many of the limitations of AI systems and the learning that still needs to go into building these machines. Because this is all relatively new to most people, there is still a long way to go, especially before mass adoption.

We are certainly at a crossroads; previously, humans did all the thinking AND the creative work.

Now, humans almost do half a job: either they tell AI what they want and it handles the creativity, or AI is given a creative brief and does all the production. It still, however, needs that final human ‘touch’ to get the most out of it, be it detailed and accurate prompts or proof-reading and fact-checking.

Whether you love it, hate it or are on the fence about AI, one thing is for sure: as the world evolves, we must find a way for AI and humans to co-exist and collaborate productively; otherwise, those sticking to traditional methods will lose out to those leveraging AI.

