Motivation

All Technology Involves Design, and All Design Involves Values

It is tempting to believe that technology is neutral. By “neutral”, we mean the idea that technology is just a tool – it is neither good nor bad in itself, and the ethics of technology is all about its use and its user. After all, the thinking goes, software is just assembly code running on silicon transistors; source code is basically just applied algebra; data are just inert numbers. Seen in this light, software and computers are simply objects that are neither inherently good nor bad. In much the same way that a pen can be a means to good ends (authoring a novel, writing a letter to a friend) or bad (slandering someone in the press, stabbing an enemy in the hand), the ethics surrounding computing and information technologies are about the character and intentions of those who use them, and the ends to which they are put.

However, the idea that technology is a neutral tool is false. Or, at least, it is only part of the truth. Computing and information technologies are means, often powerful ones, for accomplishing human ends. Yet all software and hardware are also the products of human actions and minds. They are the result of numerous decisions, big and small, by the people crafting them. These decisions express their values, outlooks, perspectives, intentions, and desires. As a result, computing and information technologies, like all technologies, are value-laden. It is impossible to divorce a piece of technology from the humans who designed it.

Moreover, once created and put into use, technologies very often change us. Social networking transformed social interactions and relationships. Digital music transformed the music industry and how people create, share, and listen to music. AI technologies are transforming organizational processes and decision-making. In all these cases, the technologies empower some people and disempower others. Because computing and information technologies are constantly altering how we do things, our relationships to other people, and the institutions we are part of, they also impact our own outlooks, values, and perspectives. They change how we see the world, what and who we care about, and what behaviors we think are appropriate. In this way, technologies, including information and computing technologies, impact us even as we use them. They are not merely neutral tools.

Once we recognize that technology both expresses and impacts human values, it becomes clear that technology must be designed in a value-informed way. Consider a team of American engineers who decide to build an app for fitness tracking. They design the app’s user interface in English; this excludes all non-English speakers, including some American citizens. They choose to design the app for high-end smartphones paired with a smartwatch; the cost of the necessary hardware makes the service inaccessible to people of lesser means, including some students. The company chooses to create an additional stream of revenue by selling data collected from their users to a marketing company; users may not be aware that their location and activity data are being shared with a third party, or, even if they are aware, what inferences can be drawn about them from this data (where they live, where they work, the stores at which they shop, the recreational activities in which they engage). All of these design choices impact who will be able to use the app, and potentially even impact non-users who happen to interact with app users.

Building technology is all about design choices, and these choices matter. We rarely make technology just for its own sake; we make technology because we expect to deploy it out into the world, where it will interact with all different kinds of people and be used in all different kinds of contexts. Therefore, technology will be better when it is designed in a way that is thoughtful about the values it embodies and informed by the social, institutional, and cultural contexts in which it is to be used. Technologies that are designed in a value-sensitive way are more intuitive, more accessible, more seamless, and more delightful than those that are not. They are also more likely to do good: promote human flourishing, generate societal benefits, and contribute to environmental sustainability. And they are more likely to work effectively and to succeed.

Examples of Values in Technology

There is a constant stream of news stories about controversial, ethically problematic information and computing technologies, as well as technologies that raise significant ethical and social issues. Below are a few brief examples. These cases illustrate the importance of designing technologies in a value-informed way that includes careful consideration of the social and institutional contexts of deployment and use:

  • Facial recognition systems have the potential to make our lives more convenient (automatic face tagging on social media, face unlock on smartphones), but they also have profound implications for individual privacy. Professor Woodrow Hartzog of the Northeastern University School of Law argues that facial recognition should be banned.
  • Smart speakers are extremely popular, but users were upset to learn that transcripts of their audio were secretly being reviewed by human beings. The key issues were (1) that users were not informed about this practice, and (2) that audio recordings from within the home are considered by most people to be very sensitive data.
  • Sharing economy apps that let anyone rent out their car or home are very convenient for those with resources to offer and those looking to rent, but they are also having a negative impact on cities in the form of increased roadway congestion and displacement of local residents.
  • You pay for “free” online services by divulging your personal data, which is collected by hundreds of advertising companies. People are often distressed at the extent to which this data can be used to hyper-target advertising, as evidenced by the Cambridge Analytica scandal.
  • The impending rollout of self-driving cars raises challenging questions about how these cars should be designed to ensure human safety, and how these vehicles, their makers, and their operators should be held accountable when accidents occur. Even more challenging ethical questions arise when we consider the development of autonomous military drones.
  • Many organizations are adopting machine learning systems in an effort to reduce costs and remove human bias from decision-making. These systems evaluate whether people are eligible for loans, insurance, employment, social services, and even parole. However, machine learning systems are not neutral, and many have been found to exhibit human biases like racism and sexism.
  • User interfaces are powerful mechanisms for shaping how users interact with systems. However, some designers adopt intentionally deceptive user interfaces, known as “dark patterns”, that manipulate users into taking actions they did not intend.

Motivated by these examples, we next introduce Value Sensitive Design (VSD) as an outlook and process for grappling with these kinds of issues in socio-technical systems.