Audio
Jutta Treviranus: removing bias from AI
Inclusive Design expert discusses how to make AI fairer and more accessible to all.
This series comes from Remarkable, an initiative of the Cerebral Palsy Alliance. It explores how disability drives innovation with pioneers from across the globe who are pushing the boundaries on tech, business and social norms.
In this episode: Artificial Intelligence (AI) impacts important decisions like jobs, loans, and medical care. But is it fair for everyone? And how is it regulated?
In this episode, host Vivien Mullan explores this question with Dr Jutta Treviranus, Professor and Director of the Inclusive Design Research Centre at Ontario College of Art and Design (OCAD) University in Toronto. Tune in to learn more about Jutta’s groundbreaking work on removing bias from AI, drawing on her extensive experience in digital accessibility and inclusive design.
[00:00] Viv Mullan:
How can AI shape a fairer world free of bias?
I'm Viv Mullan and today on Remarkable Insights, our guest is leading the charge against AI's data biases. I'm so excited to have you on the show. Can you give our audience a quick intro about yourself and any part of your life or identity that feels important to share?
[00:17] Dr Jutta Treviranus:
Wonderful.
So I'm Jutta Treviranus and I'm the director of something called the ‘Inclusive Design Research Centre’ at OCAD University in Toronto, Canada, and my pronouns are she/her. And if someone is not seeing the video, I'm an older woman with grey hair and glasses, against a fairly dark black background, actually.
[00:44] Viv Mullan:
Well, I would love to start by just getting to know a bit about, how you ended up really driving inclusive design and your experience with artificial intelligence. Could you sort of go back to where that journey began for you?
[00:58] Dr Jutta Treviranus:
The artificial intelligence journey, yes.
So I started in the field of accessibility and access, digital access, actually digital inclusion, way back in '79, which was when the first personal computers emerged. And I was looking at how to use them as translation devices. And back then, I started to play with artificial intelligence, specifically to recognise voices that were not understandable by people unfamiliar with the speaker, so dysarthric voices. And I was able to develop systems which would recognise a fairly large number of repeatable utterances made by kids, and by adults who had dysarthria due to cerebral palsy and a variety of other disabilities. And so that started my journey in artificial intelligence, and it was fairly positive.
But when we came to 2013, because of my background in disability and accessibility and in artificial intelligence, I was asked by our Ministry of Transportation to test some machine learning models that would be used to guide vehicles through intersections. So it would tell the car or the vehicle to either proceed through the intersection, turn, or stop. And I thought, “Well, I'm going to test it with an instance that is unusual or outlying”.
We have data gaps. There's a data desert with respect to data about people with disabilities because there are not as many digital markers, there are not as many opportunities to participate in places where data is being gathered. That began my journey where I was pulled back into AI.
[02:52] Viv Mullan:
And I wonder, the model that you're proposing is that the innovation and the real sort of groundbreaking ideas would sort of rest in the insights from the outliers.
But I wonder, how does that convert to people understanding that the priority isn't the majority's needs, but addressing the outlier insights and the needs of those who are outliers? Because for so long people have worked on sort of addressing majority need.
[03:27] Dr Jutta Treviranus:
What appears obvious to us is that if you address the requirements right out to that jagged edge, you are more likely to innovate. So if you are hoping to innovate or create some innovative business or startup, etc., then that's where you're going to find the true innovations. It isn't the complacent middle that will have ideas about how things should change.
The other thing, of course, is that if you design for that full spectrum of requirements, you're creating a system that is going to be much more adaptable and flexible. If you're only creating for that middle or some sort of majority, then your system is going to become very brittle, because it will only meet a portion of the population. And so there will be lots of requests for changes, additions, and new features, because the environment changes, the context changes, the world changes. So your system will have to adapt. It will have to flex. But if you've only created for that middle portion of the population, then it's going to be less adaptable. And so you'll have to be bolting on changes, fixes, additions, new features. It becomes very, very brittle and it reaches end of life much more quickly.
So design with the outer edge and you give the middle a lot more room to move. You create a system that can flex and can adapt to changes that are unexpected shifts within the context.
[05:10] Viv Mullan:
In the wake of AI, and it's been around for a very long time, but with ChatGPT becoming a ubiquitous part of people's day-to-day jobs, is there anything that we can do to inform people of how to be more aware of the statistical discrimination that perhaps we don't know we're just sort of accepting? You know, a way that we can inform people to challenge the information and the structures of AI that we use and consume in our day-to-day lives.
[05:41] Dr Jutta Treviranus:
Yeah. And it's interesting because, of course, AI wasn't the first system to use statistical reasoning. It forms so many parts of our life as well. So I think AI is a really great mirror to start to think about: what are the conventions that we're accepting, that we're unconsciously used to, and that we're no longer questioning at all? So for example, if I'm an academic in academia, empirical research is the gold standard, right? And that is based upon statistical power. But what does that do to people who are not like the statistical norm or the statistical mean?
All of those scientific determinations that you hear in the news, you know, the average woman, the average man, the average teenager, etc. Well, what happens to the not average, and the people that fall through the cracks of those categories as well? So I think this is an opportunity not only to question what we are doing with AI, but what are the conventions that AI is now amplifying that are really discriminatory, and that caused the type of disparity and the type of crises that we're currently facing.
The digital systems and the data markers that AI is using, they're valorizing or they're promoting largely monetisation and attention. So the attention economy that we have at the moment, AI is just increasing that at an exponential rate. But the way that we are valuing attention is also quite destructive. AI is just making it worse. I think it behoves us to look at what are we using AI to power forward.
AI is a power tool. It's like we're taking a human habit, a human success marker, or a strategy, and we're using a power tool to amplify that, to make it more efficient, more effective. And I think what we need to do is we need to pause for a second and see, well, “Is this actually beneficial at all?” I mean, why would we want to accelerate, amplify, make this more efficient, make it more accurate?
Do we want more accurate discrimination?
Do we want more efficient discrimination, etc.?
What happens to the people that fall through the cracks or are stranded at the edges of those categories?
How does our categorisation or boxing actually harm people?
The 80/20 rule: the idea that you just pay attention to the 80% that take only 20% of the effort, and forget about the difficult 20% that take 80% of the effort.
[08:48] Viv Mullan:
And you're working with trying to shape legislation to right some of the wrongs that are currently existing in the world of artificial intelligence and how we move forward with that and machine learning. I mean, how do you feel about where we're at in the direction that we're heading in and how much do you think we can control it to some degree?
[09:09] Dr Jutta Treviranus:
Of course. Yeah, that's a really good question.
And I think this debate about AI, the regulation of AI, and the regulatory vacuum that exists is very complex. There is an over-amplification of the sort of huge dystopian or utopian scenarios. And people are not aware that we're already pervasively deploying AI.
I mean, AI is involved in almost every critical decision in our lives.
Who gets a job,
Who gets admitted into universities,
Who gets a loan,
What medical treatment you're going to get,
Who gets audited by tax systems.
So I think we need to tone down both the dystopian and the utopian narratives. It is a very practical problem right now, and everyone can get involved in helping to change the way that we deploy and make decisions with AI.
I drafted a regulatory standard for AI called the ‘Accessible and Equitable AI standard for the Accessible Canada Act’ and we've turned it into an international committee with a number of international participants as well addressing not just the accessibility of AI to make sure that people with disabilities can participate in every part of the AI life cycle. Not just as consumers of AI products, but also planners, designers, developers, implementers, evaluators of AI systems.
But then in addition to that, there's the equitable part, making sure that AI decisions treat people with disabilities equitably. So the regulations that we're working on, what we're trying to do is to layer it on top of the existing systems. So that we cover the types of things that are not covered within other regulatory standards.
[11:17] Viv Mullan:
I can see that being a great sort of blueprint for how other standards should really take into consideration the perspectives of people with disability, from, you know, design and creation all the way to consuming. I would love to ask two final questions. One of them is, looking to the future, is there something in the works with the technology being created in the artificial intelligence space that you think could and will be revolutionary for people with disability?
[11:47] Dr Jutta Treviranus:
We've been looking at a lot of different approaches. The current trajectory of AI is not baked in. At the moment, AI is tuned towards what's called data exploitation. We're taking the successes of the past and we're propagating them onto the future, to say, “If it was successful before, if we multiply that success, you know, a thousand or a million fold, then we're going to be even more successful”. Well, our past successes have not landed us in a very good place at the moment, if you think about the crises we're facing.
So rather than reducing diversity and denying that we're in a complex situation, which is really what our conventions are assuming at the moment, what if we view diversity, differences, and variability as an asset instead? And so there are ways in which we can use things like AI to explore instead. An exploration engine within an AI would actually increase the diversity. So if I'm hiring, give me as many people as possible who bring new perspectives and novel ideas to my team or into my organisation.
I mean, all of these things are beneficial for the population as a whole because we need to have a greater understanding and a compassion for a larger diversity of individuals. We need to move away from the type of tribalism and fragmentation that's happening.
[13:21] Viv Mullan:
And you've made some incredible points in this conversation, but I do like to ask our guests to finish our interview by sharing a Remarkable Insight. For you, that could be a piece of advice that you'd like people to take away from this, or it could be some food for thought for people innovating in the space of AI and machine learning.
[13:44] Dr Jutta Treviranus:
There are so many. That's a hard choice to make. But certainly for people that are looking at AI, at intelligence, and at decision systems, which AI is frequently used for, the point that I'd like to make is this: if we create intelligence that works with the edge of our human scatterplot, so the individuals that are currently outliers and are not well addressed by either our intelligence systems or our design systems, we're going to create systems that are much better at addressing the unexpected crises to come. It will allow us to detect the cracks and the risks within the systems that we develop. It will allow us, thereby, to transfer to new contexts.
So whatever you're designing, you're going to be able to pivot to the changes that are coming. It increases the longevity of anything that we're working on. And I think most important to me is it will reduce that disparity because if you look at all of the issues that we're globally facing at the moment, many of them have to do with that inequity and disparity that our current conventions and assumptions are creating.