Saturday, 31 January 2026

Opening the Black Box: Dr. Cynthia Rudin on the Urgency of Interpretable AI

Bio: Dr. Cynthia Rudin

Dr. Cynthia Rudin is a Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. She is the director of the Interpretable Machine Learning Lab and a globally recognized authority in explainable AI. Dr. Rudin has developed transparent and trustworthy models for real-world use in healthcare, criminal justice, and finance. She is the recipient of the 2022 AAAI Squirrel AI Award for pioneering work on interpretable machine learning.

Gulan: What is interpretable machine learning (IML), and why is it essential for AI applications in high-stakes domains like healthcare, criminal justice, and finance?

Dr. CYNTHIA RUDIN: In interpretable machine learning, we build predictive models that people can understand. These could be simple scoring systems, logical rules, or interpretable neural networks. For example, in computer vision, interpretable models can explain their predictions by comparing image parts to other images—like saying, “This bird looks like that bird because its head is similar.”
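
To make the prototype idea concrete, here is a minimal sketch with made-up part embeddings standing in for features a network would learn; it illustrates the general technique only and is not code from Dr. Rudin's lab.

```python
import numpy as np

# Toy part "embeddings" standing in for features a trained network would learn;
# they are random here purely so the sketch runs on its own.
rng = np.random.default_rng(0)
prototypes = {
    "cardinal head": rng.normal(size=8),
    "blue jay head": rng.normal(size=8),
    "sparrow wing": rng.normal(size=8),
}

def explain(query_part):
    """Explain a prediction by naming the stored prototype part it most resembles."""
    # Negative Euclidean distance serves as a simple similarity score.
    similarity = {name: -np.linalg.norm(query_part - p) for name, p in prototypes.items()}
    best = max(similarity, key=similarity.get)
    return f"this part looks like the '{best}' prototype (similarity {similarity[best]:.2f})"

# A query part near the cardinal-head prototype yields a readable comparison.
query = prototypes["cardinal head"] + 0.1 * rng.normal(size=8)
print(explain(query))
```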

Interpretable ML is absolutely essential for trust. Data is messy. When you use it to train a black box model, all those errors and biases get baked in—but you can’t see them. That’s dangerous in high-stakes settings. You don’t want a model making mistakes about someone’s medical diagnosis, freedom, or loan eligibility without anyone understanding how the decision was made.

Gulan: How does IML address transparency and accountability when AI models are used to make impactful decisions?

Dr. CYNTHIA RUDIN: You can’t have accountability without transparency. Black box models are not transparent, and they don’t mix well with human oversight. Either the human blindly trusts the model or doesn’t trust it at all. With interpretable models, you can see how a decision was made, step by step.

Take a loan denial: if the model is interpretable, you might realize it used incorrect data from your account. That error can be caught and corrected. With a black box model, you’re stuck. That’s the value of transparency in systems prone to error.
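
As a hedged illustration of that kind of audit (the features, weights, and threshold below are invented for the example, not taken from any real lending model), a point-based scoring system makes the effect of a single wrong input visible and correctable:

```python
# Hypothetical point-based loan scoring system; every feature, weight, and the
# threshold are invented for illustration. Approval requires a non-negative score.
WEIGHTS = {
    "years_of_credit_history": 2,
    "missed_payments_last_year": -3,
    "debt_to_income_over_40pct": -4,
}
THRESHOLD = 0

def score(applicant):
    """Return the total score plus each feature's visible contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

# Account data with an error: three missed payments recorded instead of zero.
applicant = {"years_of_credit_history": 4,
             "missed_payments_last_year": 3,
             "debt_to_income_over_40pct": 0}

total, parts = score(applicant)
print("denied" if total < THRESHOLD else "approved", parts)  # denial traced to missed payments

applicant["missed_payments_last_year"] = 0  # the record is corrected
total, parts = score(applicant)
print("denied" if total < THRESHOLD else "approved", parts)  # the decision flips
```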

Gulan: In healthcare, how can IML ensure AI systems are trustworthy and understandable to professionals and patients?

Dr. CYNTHIA RUDIN: Health data is often incomplete or wrong. Many companies make money selling black box models and sometimes exaggerate their effectiveness. There’s a documented case in which a model was claimed to be nearly perfect at predicting embryo implantation success, but it turned out the company’s data included many “easy” cases that didn’t need AI at all.

If you can’t trust the model—or the evaluation of it—you’re stuck. Interpretable models avoid this problem because they allow independent verification.

Gulan: What about the use of AI in criminal justice? How can IML help mitigate bias in this sensitive domain?

Dr. CYNTHIA RUDIN: I don’t think fairness is possible without interpretability. You can’t even properly detect bias unless the model is interpretable.

People argue about whether risk assessment models rely on race. But we shouldn’t have to argue. The model should be made public so we can check. Sometimes what seems like racial bias is really a proxy variable, but we can’t know that unless we see the model logic.

Gulan: Financial models are also high-stakes. What challenges exist in making them interpretable?

Dr. CYNTHIA RUDIN: I addressed this earlier. Errors in financial models can cost people the ability to get a loan or buy a house. Interpretable models help prevent that.

Gulan: There’s a belief that more accurate models must be more complex. Is there a tradeoff between accuracy and interpretability?

Dr. CYNTHIA RUDIN: This is a very common misconception—and it’s false. There’s no evidence for a tradeoff in high-stakes settings. In every application I’ve studied—mammography, heart monitoring, materials science—interpretable models perform as well or better than black box models.

They’re easier to debug, build, and improve. In fact, there’s a strong mathematical foundation supporting this: when the data is noisy or outcomes are hard to predict, simpler models often do just as well as complex ones.
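
One way to sanity-check this claim on a tabular problem is to cross-validate a sparse, readable model against a black box side by side. The sketch below uses scikit-learn and a standard benchmark dataset purely for convenience; it is not one of the studies mentioned above.

```python
# Cross-validate a sparse (L1-penalized) logistic regression against a boosted-tree
# black box on the same data; the dataset and models are stand-ins for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

sparse_model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", C=0.1, solver="liblinear"),
)
black_box = GradientBoostingClassifier(random_state=0)

print("sparse logistic regression:", cross_val_score(sparse_model, X, y, cv=5).mean())
print("gradient boosting:         ", cross_val_score(black_box, X, y, cv=5).mean())
```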

Gulan: How can interpretable models help build public trust in AI, especially in sensitive fields like healthcare and justice?

Dr. CYNTHIA RUDIN: I think I’ve answered that already. People trust what they understand. Interpretable AI enables that trust.

Gulan: What role should policymakers play in encouraging interpretable models and regulating AI transparency?

Dr. CYNTHIA RUDIN: Policymakers should require that black box models not be used for high-stakes decisions—unless there's a compelling reason. There’s a power imbalance between model designers and the people subject to their decisions. We have to protect the public from unnecessary complexity that can lead to harm.

Gulan: Looking forward, what are the biggest challenges and opportunities for the future of interpretable ML?

Dr. CYNTHIA RUDIN: The tech is scalable—we’ve already built large, interpretable models. But the challenge is mindset. Many people still believe that interpretable models are less accurate. That’s the biggest barrier. Once people see it working, they don’t go back to black boxes.

The toughest frontier now is natural language processing. We know how to interpret image and time-series models, but we don’t even have a definition of interpretability for systems like ChatGPT. That’s a whole different beast. Right now, it’s like blending information into a smoothie—you have no idea where anything went.

Gulan: What steps can researchers, educators, and practitioners take to speed up adoption of IML?

Dr. CYNTHIA RUDIN: They should keep three things in mind:

1. Interpretable models can be just as accurate as black boxes for high-stakes tasks.

2. Open-source tools for building interpretable models are available and easy to use (see the sketch after this list).

3. We lack proper regulation. Companies benefit from selling black box models with inflated performance metrics. Interpretability can keep that in check.
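
As one hedged example of point 2, the sketch below uses scikit-learn, chosen only because it is widely installed (Dr. Rudin's group and others also publish dedicated interpretable-ML packages): a small decision tree can be fit and printed as human-readable rules in a few lines.

```python
# Fit a depth-limited decision tree and print its rules; the whole model is readable.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```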

Also, educators need to update what they teach. Too many machine learning courses ignore interpretability in favor of flashy, massive models. We have to do better.
