Responsible AI becomes critical in 2021

Presented by Appen

Artificial intelligence (AI) continues to transform businesses and society, putting pressure on companies in nearly every industry to invest in the rapidly evolving space. As AI exerts an ever-increasing effect on our lives, the need for responsible AI grows. Responsible AI may already be a widely discussed topic, but how many companies are actually putting its principles into practice?

In part one of our five-part series on 2021 predictions, we focus on the future of responsible AI. Responsible AI can mean many things: reducing model bias, enhancing data privacy, ensuring fair pay for members of the AI supply chain, and more. In any sense, responsible AI is expected to become a key focus for many companies in 2021 and to grow in importance over the next five years. Driving this trend is oversight from executives, who are increasingly involved in ethics discussions; pressure from customers, who demand non-discriminatory treatment and data privacy; and, in some cases, scrutiny from a company's board of directors seeking to avoid short- and long-term damage to the brand.

The risks of not approaching AI responsibly will likely mount over the next few years, as the price of failure increases.

Where are we now?

Data on responsible AI shows that progress in the space has been slow. While more companies now see responsible AI as a component of business success, only 25% of companies surveyed for the 2020 State of AI and Machine Learning Report said unbiased AI is mission-critical. Companies that scaled AI globally were more likely to say they were thinking about AI through an ethical lens, perhaps due to greater pressure from the C-suite and from external factors.

If you’re receiving pushback when arguing that responsible AI deserves more attention, that should be cause for concern. AI built responsibly is less biased, more effective, and works better for more people. Failing to apply an ethical lens when building AI solutions can have dire consequences for a brand’s reputation and customer experience. We’ve already seen companies get it wrong, even the tech-savvy big hitters, with adverse effects on the people they serve and on their bottom line. For example, if a company that sells cars equipped with speech recognition in multiple countries trains the product with only native male speakers for each language, the system may struggle to understand women or anyone with an accent.
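To make that failure mode concrete, the sketch below shows the kind of pre-training audit that can catch such a gap before a model ships. It is a minimal illustration, assuming a dataset manifest with `gender` and `accent` metadata fields and a 20% coverage threshold; none of these details come from any particular vendor's pipeline.

```python
from collections import Counter

# Hypothetical speaker metadata for a speech recognition training set;
# in practice this would be loaded from the dataset's manifest files.
training_samples = [
    {"speaker_id": "s001", "gender": "male", "accent": "native"},
    {"speaker_id": "s002", "gender": "male", "accent": "native"},
    {"speaker_id": "s003", "gender": "female", "accent": "non-native"},
    # ...thousands more entries in a real dataset
]

def audit_coverage(samples, field, min_share=0.2):
    """Flag any value of `field` whose share of the samples falls below min_share."""
    counts = Counter(sample[field] for sample in samples)
    total = sum(counts.values())
    for value, count in counts.items():
        share = count / total
        status = "OK" if share >= min_share else "UNDER-REPRESENTED"
        print(f"{field}={value}: {count}/{total} ({share:.0%}) {status}")

# Run the audit before training, not after deployment.
audit_coverage(training_samples, "gender")
audit_coverage(training_samples, "accent")
```

A check this simple obviously doesn't guarantee a fair model, but it would surface the carmaker scenario above, an all-male, all-native-speaker corpus, before any training budget is spent.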

Perhaps it’s the absence of clear or consistent best practices that is creating ambiguity or misconceptions around the topic. Recently, various authorities and organizations have come together to bridge the gap and create more solid frameworks for building AI responsibly. In 2016, for instance, a group of companies including Amazon, Facebook, Google, Microsoft, and IBM formed the Partnership on Artificial Intelligence to Benefit People and Society to document best practices in the responsible AI space.

We’ve also seen governments step in. In April 2019, the European Commission published a set of non-binding ethical guidelines for developing trustworthy AI in the European Union, drawing on input from 52 independent experts. Also in 2019, 42 countries signed on to the Organization for Economic Cooperation and Development (OECD) Principles on AI, a global framework built around common values of trust and respect.

The World Economic Forum is currently developing a toolkit to help corporate officers identify the benefits of artificial intelligence for their companies and operationalize it responsibly. Leaders from around the world, including Appen, have partnered with the Forum to ensure responsible practices are incorporated every step of the way.

“We launched the platform ‘Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning’ to bring together key stakeholders to create a framework to accelerate the benefits and mitigate the risks of AI and ML,” said Kay Firth-Butterfield, Head of AI and Machine Learning and Member of the Executive Committee at the World Economic Forum. “The first place for every company to start when deploying responsible AI is with an ethics statement. This sets up your AI roadmap to be successful and responsible.”

These frameworks are non-binding and fairly new, but they represent a step in the right direction toward a more centralized repository of AI guidelines. If anything, they further the conversation around what ethical AI looks like.

Where organizations are focusing responsible AI efforts

Companies that already understand the importance of building responsible AI are approaching it through several key lenses:

Risk management. Developing AI comes with inherent risks, including security, regulatory, and legal risks, among others. For many companies, these risks may be new or take on new forms with AI. Companies are working to embed controls in their model-building processes to mitigate potential risks. They’re also dedicating resources to ensure models comply with the most stringent security regulations.

Governance. Governance frameworks in AI help manage processes, tools, and teams to further de-risk projects and ensure compliance with various standards. Data governance in particular is a critical area to get right, as privacy concerns mount and transparency in data collection grows in importance.

Ethics. Companies developing responsible AI should approach projects from an ethical perspective throughout the model-build process. This includes not only ensuring the AI will be used for ethical purposes, but also minimizing bias wherever possible. Most organizations have the best of intentions when creating AI solutions, yet some still end up with discriminatory products due to unintended biases introduced in the data; one common check for such bias is sketched after this list.
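As a concrete example of what minimizing bias can look like in practice, here is a minimal sketch of one widely used fairness check, the demographic parity gap (the difference in positive-outcome rates between groups). The binary predictions, group labels, and 0.1 review threshold are all hypothetical, chosen for illustration rather than drawn from any standard.

```python
def positive_rate(predictions, groups, group):
    """Share of positive (1) predictions for one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outputs of a loan-approval model: 1 = approved, 0 = denied.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive rates by group: {rates}; gap: {gap:.2f}")
if gap > 0.1:  # review threshold chosen purely for illustration
    print("Gap exceeds threshold -- review the training data and features.")
```

Demographic parity is only one of several fairness definitions, and a flagged gap should trigger human review rather than an automatic verdict; the point is that checks like this can be embedded in the model-build process alongside the controls described under risk management.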

In addition to these key areas, companies continue to seek out new best practices and solutions around explainability. Documentation, clear frameworks, and detailed pipelines are all interim solutions for handling the increasingly complex nature of AI models. More progress certainly still needs to be made in this space, and it will be one to watch going forward.

Whether the focus is risk management, governance, ethics, or explainability, companies that integrate pipelines and embed controls throughout building, deployment, and beyond are more likely to succeed. Note that these lenses, collectively, don’t cover every factor involved in building AI responsibly. And while progress in this space has been slow to date, 2021 looks promising as more organizations, regulatory bodies, and even customers join the conversation. Providing platforms for diverse voices to give input will help advance the dialogue further, even if there’s still plenty of progress to be made.

At Appen, we have spent over 20 years annotating and collecting data, using a best-of-breed technology platform and leveraging our diverse crowd to help ensure you can confidently and responsibly deploy your AI models. Learn more about this 2021 prediction in Leading Your Organization to Responsible AI from the Train AI Summit Keynote (available on demand).

Wilson Pang is CTO at Appen.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
