VIDEO: Safer AI with Inclusive Design

Summary: Safety with AI is essential, and you’ll need to embrace it. Legislation on AI safety is forthcoming. You can wait to comply with future laws, or you can enhance AI safety through inclusive design now.

Safe and Inclusive AI

Microsoft recently outlined a set of AI Access Principles. Significant is this emphasis on safety, harm reduction, and inclusion. It emphasizes what the AI community has been working on for years and also what the new legislation promotes:

“We are applying a strong Responsible AI Standard to keep people at the center of AI design decisions and respect enduring values like fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability.”

We’re seeing this in real-time: Google’s new image generator Gemini was released and pulled– to correct AI image generation that was representing historical inaccuracy. And that’s from a company that prioritizes Inclusive Design.

As the AI safety legislation rolls into your roadmap, AI will necessarily move toward a more inclusive and accessible interaction design.

Two ethical AI habits teams should adopt

  • Abusability Testing: The first is an ethics and harms modeling audit where your team tries to “break a design”  or live model. The Abusability Testing framework is an excellent resource for more. At Experience Dynamics, as part of our Inclusive Design program, we use a team of Inclusive Designers, Testing with impacted and underrepresented communities, and reviews by Product Ethicists and subject matter experts (SMEs).
  • Assumption Buster: A technique I teach in my Inclusive Design training. The template for it can be found below; I discuss how to use it in this video:

Transcript

“Safety with AI is a topic that you’re going to have to embrace. It’s going to be legislated– it’ll be the law. So, you’ll either follow the laws and try and figure it out or start doing it now so that you can make your AI safer. The way to do that is through inclusion and through Inclusive Design.

See UX Teams Must Embrace Inclusive Design

“Now, there’s a study that came out from Stanford, Princeton, and MIT that indexes AI against 100 different transparency factors, and they found that of all the systems they checked, none of them disclosed harm or the potential for harms, and only 37% of 100% was the average score for transparency.

“So basically, a lot of the AI that’s being developed is not truly as open as maybe it’s billed as…So this is important to you because if you do not factor in for harm and if you do not factor in for bias…and you know stereotyping, discrimination, other types of social harms, for example, you’re going to run into trouble with your AI. So, just like ChatGPT before it was released, it was deeply racist, and they muzzled it! In other words, they trained their algorithm to have the social manners and the legal requirements of diversity, equity, and inclusion, which most organizations are at least following from an employee standpoint. So this is applying it to design and applying it to AI one of the things you can use is called an “Abusability Test” or a quick one a version of that is an Assumption Buster.

“So Abusability Testing is where you use different factors to try and find harm one of those might be a a technical term called “red teaming” which you apply to like ethics and harm and exclusion and see if you can kind of *break your design* concept or your design. The Assumption Buster is a version of this you can use with your team. So the first thing you do with an Assumption Buster is: State the benefits and what benefits of your product the way you’re saying they are and the ideal…and then: List out and go through rate on a scale of one to three: one is low amount, two is some, and three is high amount… Who’s going to be left out?; Who is going to be annoyed?; Who is going to be offended?; Who’s going to be insulted?; Who’s going to be threatened?; and you go through each of these question categories and rank it with your team…and discuss it so it’s a way to think about these things that cause unsafe AI, which is like bias and exclusion and harm that you might not see when you’re developing your product.

“So it’s super important that AI’s focus is inclusive and that we can indeed move into a future of living beside AI–that it serves us and protects our communities. So try the Assumption Buster and let me know how it goes thank you so much for tuning in! See you soon!”

Grab the Template: Assumption Buster template

Leave a Reply

Your email address will not be published. Required fields are marked *


The reCAPTCHA verification period has expired. Please reload the page.

Search

Search

Tags

Recent Posts

Scroll to top

Get a quote or discuss your project

Tell us about your project

Arrange a 30 min call

Project in mind?

logoblack

Fight for the rights of your users. We'll show you how.

Read more articles like this for exclusive insights into the best ways to approach UX and Service Design challenges. Find out when events occur first. Privacy protected, no exceptions.

Subscribing indicates your consent to our Privacy Policy

Should we add you to our email list?

Privacy protected-You can unsubscribe at any time.

Download the Better UX kit