Safety by design, security by design, privacy by design. As software capabilities continue to evolve, developers need to adapt the way they think and work. Given the rise of artificial intelligence (AI), ethical design is the next discipline to be integrated into the pipeline. Like safety and security by design, ethical design helps avoid unintended outcomes that spur lawsuits and damage brands. However, there’s more to ethical design than risk management.
“Digital ethics is mission-critical. I don’t see how something that could damage people, whether it’s disappointing them, annoying them, discriminating against them or excluding them, could be considered academic,” said Florin Rotar, global digital lead at global professional services company Avanade. “It’s a question of maturity. We have team leads and development managers that are putting together a digital ethics cookbook and adding it to their coding and security standards.”
Generally, the practical effect of ethical design is less intuitively obvious at this point than safety by design and security by design. By now it’s common knowledge that the purpose of safety by design is to protect end customers from harm a product might cause, and that security by design minimizes the possibility of intentional and inadvertent security breaches. Ethical design helps ensure that software advances human well-being and minimizes the possibility of unintended outcomes. From a business standpoint, those outcomes should also be consistent with the guiding principles of the organization creating the software.
Although the above definition is relatively simple, integrating digital ethics into mindsets, processes and products takes some solid thinking and a bit of training, not only by those involved in software design, coding and delivery, but also by others in the organization who can foresee potential benefits and risks that developers may not have considered.
“The reality is, as you think about a future in which software is driving so many decisions behind the scenes, it creates a new form of liability we haven’t had before,” said Steven Mills, associate director of Machine Learning and Artificial Intelligence at Boston Consulting Group‘s Federal division. “If something goes wrong, or we discover there’s a bias, someone is going to have to account for that, and it comes back to the person who wrote the code. So it’s incumbent upon you to understand these issues and have a perspective on them.”
What’s Driving the Need for Ethical Design
Traditionally, software has been designed for predictability: when it works right, Input X yields Output Y. However, particularly with deep learning, practitioners sometimes can’t understand a result or the reasoning that led to it. This opacity is what’s driving the growing demand for transparency.
Importantly, the unpredictability described above refers to a single AI instance, not a network of AI instances.
“Computer scientists have been isolated from the social implications of what they’ve been creating, but those days are over,” said Keith Strier, global and Americas AI leader at consulting firm EY. “If you’re not driven by the ethical part of the conversation and the social responsibility part, it’s bad business. You want to build a sustainably trustable system that can be relied upon so you don’t drive yourself out of business.”
Business failures, autonomous weapons systems and errant self-driving cars may seem a bit dramatic to some developers; however, those are just three examples of the emerging reality. The failure to understand the long-term implications of designs can, and likely will, result in headline-worthy catastrophes as well as less publicly visible outcomes that have longer-term effects on customer sentiment, trust and even company valuations.
For example, the effects of bias are already becoming apparent. Recently, Amazon shut down an HR system that was systematically discriminating against female candidates. Interestingly, Amazon is considered the poster child of machine learning best practices, and even it has trouble correcting for bias. A spokesperson for Amazon said the system wasn’t in production, but the example demonstrates the real-world effect of data bias.
Data bias isn’t something developers have had to worry about traditionally. However, data is AI brain food. Resume data quality tends to be poor and job description data quality tends to be poor, so matching the two up reliably is difficult enough, let alone identifying and correcting for bias. Yet the designers of HR systems need to be aware of those issues.
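For developers new to this kind of auditing, even a simple representation check on training data can surface imbalances before a model ever sees them. The sketch below is a minimal illustration, assuming a hypothetical pandas DataFrame of historical hiring records with a “gender” column and a binary “hired” label; the column names and the 20 percent threshold are placeholders, not recommendations.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str, label_col: str,
                         min_share: float = 0.2) -> None:
    """Flag groups that are under-represented in the training data and
    show how positive-label rates differ across groups."""
    # Share of records belonging to each group (e.g., each gender value)
    shares = df[group_col].value_counts(normalize=True)

    # Rate of positive outcomes (e.g., 'hired' == 1) within each group
    positive_rates = df.groupby(group_col)[label_col].mean()

    for group, share in shares.items():
        if share < min_share:
            print(f"Warning: group '{group}' is only {share:.1%} of the data")

    print("Positive-label rate by group:")
    print(positive_rates)

# Hypothetical usage with historical hiring records:
# df = pd.read_csv("hiring_history.csv")
# audit_representation(df, group_col="gender", label_col="hired")
```

A check like this doesn’t fix bias, but it forces the question of whether the training data reflects the population the system will actually score.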
Granted, developers have become more data literate as a result of baking analytics into their applications or using the analytics that are now included in the tools they use. However, grabbing data and understanding its value and risks are two different things.
As AI is embedded into nearly everything in people’s personal and work lives, it is increasingly incumbent upon developers to understand the basics of AI, machine learning, data science and perhaps a bit more about statistics. Computer science majors are already getting exposure to these topics in some of the updated programs universities offer. Experienced developers would be wise to train themselves up so they better understand the capabilities, limitations and risks of what they’re trying to build.
“You’re going to be held accountable if you do something wrong because these systems are having such an impact on people’s lives,” said BCG Federal’s Mills. “We need to follow best practices because we don’t want to implement biased algorithms. For example, if you think about social media data bias, there’s tons of negativity, so if you’re training a chatbot system on it, it’s going to reflect the bias.”
An example of that was Microsoft’s Tay bot, which went from posting friendly tweets to shockingly racist tweets in less than 24 hours. Microsoft shut it down promptly.
It’s also important to understand what isn’t represented in the data, such as a protected class. Right now, most developers aren’t thinking about the potential biases the data represents, nor are they thinking in the probabilistic terms machine learning requires.
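One concrete way to catch absences is to compare the categories that actually appear in a dataset against the categories that should be there. The snippet below is a small illustration, assuming a plain list of group labels pulled from hypothetical training records; the group names are invented for the example.

```python
def missing_groups(observed_labels, expected_groups):
    """Return the expected groups that never appear in the data at all."""
    observed = {label for label in observed_labels if label is not None}
    return sorted(set(expected_groups) - observed)

# Hypothetical example: one expected group has zero records in the data
labels = ["group_a", "group_b", "group_a", "group_b"]
print(missing_groups(labels, expected_groups=["group_a", "group_b", "group_c"]))
# -> ['group_c']
```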
“I talk to Fortune 500 companies about transforming legal and optimizing the supply chain all the time, but when I turn the conversation to the risks and how the technology could backfire, their eyes glaze over, which is scary,” said EY’s Strier. “It’s like selling a car without brake pads or airbags. People are racing to get in their cars without any ability to stop it.”
In line with that, many organizations touting their AI capabilities are confident they can control the outcomes of the systems they’re building. However, their confidence may well prove to be overconfidence in some cases simply because they didn’t think hard enough about the potential outcomes.
There are already isolated examples of AI gone awry, including the Uber self-driving car that struck and killed a woman in Tempe, Ariz. Also, Facebook Labs shut down two bots because they had developed their own language the researchers couldn’t understand. Neither event has been dramatic enough to effect major change on its own, but they are two of a growing number of examples fueling discussions about digital ethics.
“You’re not designing an ethically neutral concept. You have to bake ethics into a machine or potentially it will be more likely to be used in ways that will result in negative effects for individuals, companies or societies,” said EY’s Strier. “If you are a designer of an algorithm for a machine or a robot, it will reflect your ethics.”
Right now, there are no ethical design laws, so it is up to individual developers and organizations to decide whether or not to prioritize ethical design.
“Every artifact, every technology is an instantiation of the designer so you have a personal responsibility to do this in the best possible light,” said Frank Buytendijk, distinguished VP and Gartner fellow. “You can’t just say you were doing what you were told.”
And, in fact, some developers are not on board with what their employers are doing. For example, thousands of Google employees, including dozens of senior engineers, protested the fact that Google was helping the U.S. Department of Defense use AI to improve the targeting capability of drones. In a letter, the employees said, “We do not believe Google should be in the business of war.”
Developers Will Be Held Accountable
Unintended outcomes are going to occur and developers will find themselves held accountable for what they build. Granted, they aren’t the only ones who will be blamed for results. After all, a system could be designed for a single, beneficent purpose and altered in such a way that it is now capable of a malevolent purpose.
Many say that AI is just a tool like any other that can be used for good or evil; however, most tools to date have not been capable of learning on their own. One way developers and their organizations could protect themselves from potential liability is to design systems for an immutable purpose, an approach some experts strongly advocate.
“In many ways, having an immutable purpose is ideal because once you’ve designed a purpose for a system, you can really test it for that purpose and have confidence it works properly. If you look back at modeling and simulation, that’s verification and validation,” said BCG Federal’s Mills. “I think it will be hard to do that because many of these systems are being built using building blocks and the building blocks tend to be open source algorithms. I think it’s going to be hard in a practical sense to really ensure things don’t get out for unintended purposes.”
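One way to make an “immutable purpose” concrete in code is to wrap the model behind a narrow interface that refuses anything outside the purpose it was validated for, and then test that refusal directly. The sketch below is a hypothetical illustration of that idea, not a description of how any of the companies quoted here build their systems; the class, task name and stand-in model are all invented for the example.

```python
class ResumeFitScreener:
    """Narrowly scoped wrapper: the underlying model may only be invoked
    for the single task it was verified and validated against."""
    ALLOWED_TASK = "resume_fit_scoring"  # the system's declared, fixed purpose

    def __init__(self, model):
        self._model = model

    def score(self, resume_text, job_description, task=ALLOWED_TASK):
        # Refuse any request that falls outside the validated purpose.
        if task != self.ALLOWED_TASK:
            raise ValueError(f"Unsupported task '{task}'; validated only for "
                             f"'{self.ALLOWED_TASK}'")
        return float(self._model.predict(resume_text, job_description))


class DummyModel:
    """Stand-in model so the example is self-contained."""
    def predict(self, resume_text, job_description):
        return 0.5


def test_rejects_out_of_scope_use():
    screener = ResumeFitScreener(DummyModel())
    try:
        screener.score("resume...", "job description...", task="rank_by_age")
    except ValueError:
        return  # out-of-scope use was correctly refused
    raise AssertionError("out-of-scope task should have been rejected")


if __name__ == "__main__":
    test_rejects_out_of_scope_use()
    print("purpose check passed")
```

As Mills notes, open-source building blocks make this hard to guarantee in practice, but a purpose-bound interface at least gives verification and validation something specific to test against.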
For now, some non-developers want to blame system designers for anything and everything that goes wrong, but most developers aren’t free to build software or functionality in a vacuum. They operate within a larger ecosystem that transcends software development and delivery and extends out to and through the business. Having said that, the designer of a system should still be held accountable for the positive and negative consequences of what she builds.
The most obvious reason why developers will be held accountable for the outcomes of what they build is that AI and intelligently automated systems are expressed as software and embedded software.
An important question, practically speaking, is how to build digital ethics into developers’ mindsets, processes and pipelines which the next article in this series addresses (see “How to Achieve Ethical Design”). There is also a third article in the series, “Axon Prioritizes Ethical Design” which explains how one company approaches ethical design. A mini glossary of terms has also been included in this series to familiarize developers with some of the concepts reflected in the articles.
“I think the big piece of this is asking the really hard questions all along. Part of it comes back to making sure that you understand what your software is doing every step of the way,” said BCG Federal’s Mills. “We’re talking about algorithms in software, why they’re acting the way they are, thinking about edge cases, thinking about whether groups are being treated equally. People need to start asking these kinds of questions as they build AI systems or general software systems that make decisions. They need to dig into what the system is and the consequences that can manifest if things go awry.”
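As a concrete starting point for the “are groups being treated equally” question, developers can compare a model’s positive-prediction rate across groups. The sketch below shows one common, simple check, a selection-rate comparison sometimes summarized as a disparate impact ratio; the prediction and group values are made up for illustration, and any acceptable threshold for the ratio would have to be chosen in context.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction (selection) rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values far below 1.0
    suggest one group is selected much less often than another."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outputs: 1 = advance the candidate
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.333...
```

A single metric like this is not a verdict on fairness, but running it routinely is one way to keep asking the hard questions Mills describes every step of the way.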