As a company that relies heavily on automation, optimization, and machine learning in its day-to-day business, Google is committed to developing AI in a socially responsible way.
Fortunately for us, Google has shared its principles and best practices publicly.
Google’s Objectives for AI applications
You can find the details behind the seven objectives below here.
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Moreover, Google lists several AI applications that it will not pursue.
Google’s best practices for Responsible AI
For the details behind these six best practices, read more here.
- Use a human-centered design approach (see also here)
- Identify multiple metrics to assess training and monitoring
- When possible, directly examine your raw data
- Understand the limitations of your dataset and model
- Test, test, test
- Continue to monitor and update the system after deployment
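To make the "multiple metrics" and "unfair bias" practices concrete, here is a minimal sketch of evaluating a classifier with more than one metric: overall accuracy plus a per-group breakdown, since an aggregate number can hide a large gap between groups. The data and group labels are hypothetical, purely for illustration; in practice these would come from your held-out test set.

```python
def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# Each record: (group, predicted_label, actual_label) -- hypothetical data.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

# Metric 1: overall accuracy -- looks acceptable on its own.
overall = accuracy([(p, a) for _, p, a in records])

# Metric 2: accuracy per group -- reveals a gap the overall number hides.
by_group = {}
for group in {g for g, _, _ in records}:
    by_group[group] = accuracy([(p, a) for g, p, a in records if g == group])

print(f"overall accuracy: {overall:.2f}")      # 0.62
for group, acc in sorted(by_group.items()):
    print(f"group {group} accuracy: {acc:.2f}")  # A: 0.75, B: 0.50
```

Here the model scores 0.62 overall, but group A sits at 0.75 while group B is at 0.50 — exactly the kind of disparity that a single aggregate metric, and a model evaluated only on it, would never surface.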