Creating a Culture of Responsible AI

by John Michaelides, September 8th, 2023

Too Long; Didn't Read

The development and deployment of new ‘AI’-based tools continues to gather momentum. As this rapidly becomes big business, we need to do much more to ensure that, unlike the big business ideas of the past, we build essential safeguards into the technology. Here are six simple principles you can follow that will help you act responsibly and deploy your AI tools responsibly.


The development and deployment of new ‘AI’-based tools continues to gather momentum. As this rapidly becomes big business, I believe we need to do much more to ensure that, unlike the big business ideas of the past, we build essential safeguards into the technology, and into our ways of using it, before the opportunity arises for serious harm to humans or humanity.

I should make it clear that I’m not talking about doomsday scenarios here, like killer robots wiping out the world. Whilst these scenarios do bear consideration, I do not believe they are the most immediate threat. I’m talking about the everyday, mundane use of AI to automate decisions and slowly take people out of the conversation and the decision loop. This is where we need strong rules, governance, and ethics [1] that ensure our AI tools are always safe when used, can be deployed without causing harm, and can explain their reasoning and show that their course of action is appropriate.

Are you ready to be a responsible AI practitioner? Here are six simple principles you can follow that will help you act responsibly and deploy your AI tools responsibly. Each principle is based on a cornerstone of ethical behaviour and should feel familiar – the impact is in bringing them all together and staying true to them.

Six Behaviours for Responsible Practitioners

1. Act Honestly

An easy one to start with. Make sure everyone impacted knows that they are interacting with an AI-driven system or that such a system has made an automated decision on behalf of the business. Don’t pretend your chatbot is a human. Don’t imply a person has carefully weighed a decision or has understood the details of an individual case.

2. Be Trustworthy

Use privacy-enhancing technologies, like homomorphic encryption, differential privacy, and federated learning. These minimise the leaking of private information whilst still providing the statistics and patterns of interest. Additionally, properly deployed, they can help us combine data across different privacy jurisdictions.
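To make one of those techniques concrete, here is a minimal sketch of differential privacy in plain Python: it releases an aggregate count with Laplace noise calibrated to a privacy budget. The function name, the epsilon value, and the record format are illustrative assumptions for this article, not any specific library’s API.

import math
import random

def dp_count(records, predicate, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise so no single record is revealed.

    epsilon is the privacy budget (smaller = more private, noisier output);
    sensitivity is how much one person can change the true count (1 here).
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, sensitivity/epsilon) noise via inverse-transform sampling.
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Illustrative use: report roughly how many users opted in, without exposing anyone.
users = [{"opted_in": random.random() < 0.3} for _ in range(1000)]
print(dp_count(users, lambda u: u["opted_in"], epsilon=0.5))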

Technology is only one part of the solution. You must also have strict policies in place limiting how information is used and when new permission needs to be sought.

However, privacy is only one part of the trust equation. It is equally important that your models can be relied upon when learning from fully aggregated data or completely non-personal datasets. Always be clear about when and how you source information, and about how you verify every piece of information in the model.

3. Offer Respect

There is a question of whether driving a particular interaction, decision, or recommendation through an AI, without other options, is appropriate for a particular individual. Should we have to accept an automated result that isn’t validated or checked by a person? When deploying AI into our processes, we should consider how we can respect the sovereignty and beliefs of others, and make the application of AI optional, replaceable, or at least under the supervision of a person.
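One simple way to honour that choice in code is to gate automated decisions behind an explicit preference flag and a manual-review path. This is only a sketch; model_predict and the review queue are hypothetical stand-ins for your own scoring function and workflow.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "model" or "human"

def decide(case, model_predict, human_review_queue, prefers_human: bool) -> Decision:
    """Apply the model only when the person has not asked for a human decision."""
    if prefers_human:
        # Respect the opt-out: park the case for a person to review instead.
        human_review_queue.append(case)
        return Decision(outcome="pending_human_review", decided_by="human")
    return Decision(outcome=model_predict(case), decided_by="model")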

4. Operate Transparently

It’s important to be clear about how information is processed and how any system reaches a decision or recommendation. As far as possible, a system should be able to explain how it has reached a decision and how it combined factors to deliver a result. Being able to show sources and their reliability is also important.

Avoiding black boxes, where reasoning cannot be followed, is essential in any responsible decision-making or recommendation process. This is a challenge with current commercial large-scale models, so you may need to invest in building your model on open technologies and secure professional external support to establish and train it.
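As a small illustration of the contrast, a linear scoring model can be explained factor by factor. The feature names and weights below are invented for the example and are not taken from any real system.

def explain_linear_score(weights, features, bias=0.0):
    """Break a linear model's score into per-feature contributions.

    Each contribution is simply weight * value, so the reasoning behind the
    score can be read off directly rather than hidden in a black box.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    # List the most influential factors first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Illustrative use with made-up loan-style features:
score, reasons = explain_linear_score(
    weights={"income_band": 0.4, "missed_payments": -1.2, "tenure_years": 0.1},
    features={"income_band": 3.0, "missed_payments": 2.0, "tenure_years": 5.0},
)
print(score, reasons)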

5. Show Compassion

How do you cope with exceptions? Will your system need to interact with people in special or difficult circumstances? Is it possible to recognise when a person needs support that is outside the system’s capabilities and direct them to more appropriate help?
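Here is a sketch of how such a check might look, assuming your system exposes a confidence score and you maintain a list of terms that suggest someone needs a person rather than a bot. The terms and threshold are purely illustrative; a real list would be built with domain experts.

# Purely illustrative terms; not a recommended or complete list.
DISTRESS_TERMS = {"bereavement", "hardship", "eviction", "hospital"}

def route_request(message: str, model_confidence: float, threshold: float = 0.8) -> str:
    """Escalate to a human when the model is unsure or the person may need extra support."""
    mentions_distress = any(term in message.lower() for term in DISTRESS_TERMS)
    if mentions_distress or model_confidence < threshold:
        return "escalate_to_human"
    return "handle_automatically"

print(route_request("I missed a payment due to a family bereavement", model_confidence=0.95))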

6. Maintain Integrity

Let people know what your ethical principles are and what your intentions are. Decide how you will follow the other principles discussed here and publish that intent. Let people see that you are abiding by them and that they can trust you and your actions. This also means being prepared to be open when things go wrong, and being prepared to put them right.

Ensure you act in a way that is fair to those using or impacted by the system. Understand its limitations and biases. Let people know what they are instead of glossing over them. Have a plan to mitigate bias in your model. When it’s not practical to overcome limitations or mitigate bias, don’t allow the model to be deployed where there is a risk of harm.
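One concrete, if simplistic, way to fold such a check into your deployment gate is to compare outcome rates across groups before release. The "80% rule" threshold mentioned below is a common heuristic, not a legal or universal standard, and the data shown is invented for the example.

from collections import defaultdict

def disparate_impact(decisions):
    """Compare positive-outcome rates across groups.

    `decisions` is an iterable of (group, approved) pairs. Returns the ratio
    of the lowest approval rate to the highest, plus the per-group rates;
    ratios well below 0.8 are a common warning sign worth investigating
    before the model is allowed anywhere near deployment.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative data only.
ratio, rates = disparate_impact([("A", True), ("A", True), ("A", False),
                                 ("B", True), ("B", False), ("B", False)])
print(ratio, rates)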

References

[1] These ideas are paraphrased and adapted from Isaac Asimov’s commentary on how his own Laws of Robotics had analogues for every tool, whether robotic or not.