Artificial Intelligence and Machine Learning models sit at one of the most exciting frontiers in science and technology today. But here's the catch: while we're getting better at making these models highly accurate, they often operate like mysterious black boxes—spitting out predictions without clearly showing how they got there. That's a problem, especially when AI is being used in areas like healthcare, finance, and law.

To help crack this challenge, Bachelor in Applied Mathematics (BAM) students at IE’s School of Science & Technology welcomed Dr. Dolores Romero Morales, a distinguished professor, advisory board member for BAM and Editor-in-Chief of TOP: The Operations Research Journal, for an eye-opening seminar on Multi-Objective Optimization to Make AI/ML More Transparent.

The Balance Between Accuracy and Transparency

Dr. Romero Morales, who specializes in Mathematical Optimization for Machine Learning, broke down the dilemma: AI models are typically designed to prioritize accuracy, minimizing prediction errors. That focus, however, often comes at the cost of transparency, making it difficult to understand why a model made a particular decision.

“Transparency has been a long-standing requirement in industries like credit scoring,” she explained. “With new regulations, its importance is expanding across all sectors. The challenge is that the most accurate AI models tend to be the least transparent.”

So how do we fix this? The key lies in multi-objective optimization, a mathematical technique that lets researchers tune AI models to improve their explainability while keeping accuracy high. Dr. Romero Morales and her team have been at the forefront of this effort, developing optimization strategies that make AI more interpretable with only a slight trade-off in performance.
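
To get a feel for what this trade-off looks like in practice, here is a minimal, purely illustrative sketch (not Dr. Romero Morales's actual method): it uses sparsity—the number of features a model actually relies on—as a simple stand-in for transparency, and scans the strength of an L1 penalty on a logistic regression to see how accuracy changes as the model is pushed to become more explainable. The dataset and library calls (scikit-learn's breast-cancer data and LogisticRegression) are convenient assumptions for the demo, not anything used in the seminar.

```python
# Illustrative sketch of an accuracy-vs-transparency scan.
# Sparsity (few non-zero coefficients) is used as a rough proxy for transparency.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Smaller C = stronger L1 penalty = more weight on the "transparency" objective.
for C in [10.0, 1.0, 0.1, 0.03, 0.01]:
    model = LogisticRegression(
        penalty="l1", solver="liblinear", C=C, max_iter=5000
    ).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    n_used = int(np.count_nonzero(model.coef_))  # features the model actually uses
    print(f"C={C:5.2f}  accuracy={acc:.3f}  features used={n_used}/{X.shape[1]}")
```

Running a scan like this typically shows accuracy dipping only slightly while the number of features the model uses drops sharply—a miniature version of the "slight trade-off in performance" the seminar described.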

Why This Matters for Future Scientists and Engineers

As AI continues to shape industries, understanding how to make these models more interpretable isn’t just an academic exercise—it’s a real-world necessity. Imagine a medical AI diagnosing patients without doctors being able to verify why it reached a conclusion, or a financial model approving or denying loans without clear reasoning. Transparency builds trust, ensures fairness, and makes AI more useful for everyone.

Dr. Romero Morales’s advice to Sci-Tech students looking to make an impact? “Make a fair use of your math skills!”

With applied mathematics playing an increasingly vital role in AI and machine learning, today’s students have a golden opportunity to push these fields forward—ensuring that the next generation of AI isn’t just powerful, but also understandable and ethical.