Why AI Can’t Understand Metaphors or Sarcasm — And How to Fix It
Can AI truly understand the soul behind human language? Suzan Awinat explores how poetic techniques like sarcasm and metaphor challenge even the most advanced language models — and what it takes to train them differently.
© IE Insights.
Transcription
The main idea of AI is to mimic human behavior: to make the machine understand, act, and interact like a human. And this is the problem: if you try to generate a text, you will see that it is repetitive, it just repeats itself. It doesn't have a soul.
Writing a poem is like drawing a picture. You put your feelings there. You put your experience, your history, your environment, your culture, everything related to you. It's like drawing yourself, or trying to express yourself on paper. Not everyone sits down to write a poem, but we all use poetic techniques in our day-to-day expressions and conversations.
We use metaphors. We use sarcasm. We use irony. This is why it's important to teach the machine to understand irony, metaphors, and all of these techniques, to close the gap between AI-generated text and human-generated text. For example, suppose we want to analyze users' comments or feedback about a product or company, and one of them says, "Oh great, it took one month to receive my product."
As humans, we directly understand that this is sarcasm and a negative review, not a positive one. But an AI model that sees the word "great" will classify the review as positive. So we need to train the model further to understand these techniques.
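The failure mode described above can be made concrete with a toy sketch. This is not the speaker's actual system, just an illustrative keyword-based classifier: it scores individual words and so misses the sarcastic context entirely.

```python
# A deliberately naive lexicon-based sentiment classifier.
# It counts positive and negative keywords and ignores context,
# which is exactly why sarcasm defeats it.

POSITIVE = {"great", "excellent", "love", "fast", "wonderful"}
NEGATIVE = {"terrible", "slow", "broken", "awful", "late"}

def naive_sentiment(text: str) -> str:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

review = "Oh great, it took one month to receive my product"
print(naive_sentiment(review))  # -> "positive", though a human reads sarcasm
```

The word "great" is the only lexicon hit, so the review scores as positive; a sarcasm-aware model would need to weigh the contradiction between "great" and "took one month".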
How do we teach models, or LLMs, to understand these techniques? We already train models on metaphorical language, but I think it's not enough. What I did is build a cultural metaphorical dictionary for the Arabic language. I used more than 5 million poems, which can later be used to generate metaphors and to understand the culture behind each one.
I'm building a new embedding for the language with three different levels: the first level is the literal meaning of the word, then the metaphorical meaning, then the cultural meaning. With this, the model will understand the metaphor behind any expression. For example, if I tell you in English that I received Joseph's t-shirt, it has no meaning.
Maybe you think I'm talking about the fabric of the t-shirt, but in Arabic culture this means hope. It relates to Prophet Joseph's story with his father, Prophet Jacob: when Jacob received Joseph's shirt, he got his sight back. So when you say "I received Joseph's t-shirt," it means "I'm getting my hope back."
If you give this text to an English model, or any model that is not culture-aware, it will treat it as ordinary text. But if you give it to a model trained on culturally grounded Arabic, it will understand it. When you train the model on the cultural and metaphorical meanings, it connects both meanings to the culture directly and recognizes that this is a metaphorical expression, not a literal one.
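The three-level embedding described above can be sketched in a few lines. This is an assumed design for illustration only, not the published model: each token maps to a literal, a metaphorical, and a cultural vector, and the three are concatenated into one representation so all levels of meaning travel together.

```python
# A toy sketch of a three-level (literal / metaphorical / cultural)
# embedding. The vectors and the token name are hypothetical; a real
# system would learn these tables from data such as the poem corpus.

import numpy as np

DIM = 4  # tiny illustrative dimension per level

literal  = {"joseph_tshirt": np.array([0.9, 0.1, 0.0, 0.0])}  # a garment
metaphor = {"joseph_tshirt": np.array([0.0, 0.8, 0.2, 0.0])}  # restored sight
cultural = {"joseph_tshirt": np.array([0.0, 0.0, 0.1, 0.9])}  # hope (the Qur'anic story)

def embed(token: str) -> np.ndarray:
    """Concatenate the three levels; fall back to zeros for unknown levels."""
    zero = np.zeros(DIM)
    return np.concatenate([
        literal.get(token, zero),
        metaphor.get(token, zero),
        cultural.get(token, zero),
    ])

vec = embed("joseph_tshirt")
print(vec.shape)  # (12,) -- three DIM-sized levels stacked together
```

A model consuming such a vector sees the cultural level alongside the literal one, which is what lets it connect "Joseph's t-shirt" to hope rather than to fabric.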
Even though this is my project and I'm working very hard to make AI-generated poetry reach the quality of human-written poems, I don't think it will replace them or reach that quality. I don't think machines will replace humans at all, but maybe we can use them to help us.
For example, a model can give you a list of metaphorical expressions to make your poetry stronger. So it could be used for inspiration, not for writing. It's the same as asking ChatGPT or Copilot to "write me a strategic plan": it will help you organize your work and suggest topics, but it will never be a complete plan for you.