Interview with AIIP keynote speaker and AI specialist Dr. Mike Ridley
Dr. Michael Ridley is Librarian Emeritus at the University of Guelph where for many years he was Chief Information Officer (CIO) and Chief Librarian. Dr. Ridley will be presenting on Human-centred explainable AI at AIIP25 on Thursday, April 10 at 3 pm EDT. Ahead of that presentation, he was kind enough to answer a few AI-related questions.
How did your background in library science inspire your current research into explainable AI?
Oddly enough, the two happened together. When I was first working as a librarian in a health sciences library, expert systems were just emerging. I decided we would try to create a rule-based expert system for health library reference. We set up a project using the conventional tools available at the time, and we tried to make it work for a while. It was a catastrophic failure. But we did discover a lot about what AI at that point could and couldn't do, and about how we had to approach it from an ontological perspective, from the perspective of how the data was provided. So I've been doing AI, or trying to do AI, in one form or another for almost my entire library career.
When you talk about making AI more transparent, can you give a real-world example that illustrates why “explainability” is a better goal than transparency?
Transparency is the wrong word in many ways. If we made these systems truly transparent, we still wouldn't understand them, because they are built from complex code and involve complex interactions. They wouldn't make sense even to experts.
We, as humans, use explanations all the time. They are how we understand ourselves and the world around us. We express ourselves through explanations. When we try to explain something, often what we’re doing is saying a little bit about our beliefs about the world and our beliefs about ourselves. Therefore, explainability belongs in AI.
One very mundane example I can use to illustrate explainability is job applications. In a future scenario where the entire selection process is managed by AI, the process itself needs to be explainable from the perspective of the person applying for the job. If an applicant didn't get the job, why not? What criteria made others successful? Such explainability should tell me not only why the result came about but also what I could do next to improve it. The explanation has a built-in next step. This is why explainability is sometimes described in terms of actionable information, and why it is promoted as desirable.
The job application example is familiar to everyone; we know there is AI intervention now, and it is going to matter even more as these and other systems become much more autonomous. I think greater degrees of explainability will be required as we go forward. It's also a good example of the role of regulation.
In the abstract for your presentation, you highlight the importance of accountability in AI. How do you think we can strike a balance between implementing regulations to ensure clarity and allowing the freedom to innovate in AI development?
You know the story about technology and regulation: it happens either too soon or too late. Too soon, and we impede innovation; too late, and we've already got harm in society. That has happened with previous generations of technology. There are a couple of differences with AI, though. One is that the technology is emerging very, very quickly, while regulation, as good as it can be, works very, very slowly, even at the best of times.
We’re in that uncertain state where we are definitely going to do, and are doing, harm. And we are trying at the same time to advance innovation, largely because there’s an enormous amount of money involved here. There are huge national and corporate stakes. Putting in any kind of roadblock is seen as detrimental, even if harms occur as a result. I’m not optimistic that in the short term we’re going to come to terms with this.
I’m very happy that the EU has made what is really a historic move in trying some regulation. I once heard a commentator say the EU AI law proved that at least regulation is possible. Compared to where I live in North America, it’s light years ahead.
What is happening, which I see as a good thing, is that rather than just thinking about regulating AI in general, we are starting to regulate where we know the harms are. One good example from the EU is around deepfakes.
You have two intriguing hobbies – 17th-century English opera, and wooden alphabet blocks and type for letterpress printing. How do you see these seemingly unrelated interests connecting to your work in AI and information technology?
The alphabet blocks and the type are all about literacy and about the alphabet as a fundamental game changer. People in AI often talk about AI being the biggest thing since fire. I think they forgot about the alphabet. The alphabet has changed so much about how cultures function, how we communicate, and how we work. It's a phenomenal technology.
How does 17th-century English opera connect to this? In 1656, during the Interregnum, an opera production was the first to use movable scenery on the public theater stage. This was an example of technology changing the way art was presented, and it interested me as an application of technology within an information-related cultural setting. It was the subject of my MA thesis, by the way.