Building ethical technology is not easy, but that doesn’t mean we can ignore the challenges! In this section, we discuss some common challenges that arise when using VSD, along with strategies for addressing them.
We know that all technology is socially situated. One of the major tasks of the empirical investigation, and then the value investigation, is to determine what social context(s) will surround a given technology, and then to use this understanding to identify who the stakeholders are and what values are at play in that context.
When thinking about social structures, there are several important questions that VSD practitioners should endeavour to answer:

- What social structures (e.g., legal, economic, cultural, institutional) govern the context of use?
- Who holds power within those structures, and who is vulnerable?
- How might the technology reinforce or shift existing power relations?
Identifying relevant stakeholders is a critical task in VSD, as this directly determines whose values, concerns, and perspectives will be represented during value analysis. Here, we offer some examples of direct and indirect stakeholders to help guide practitioners.

- Direct stakeholders interact with the technology itself: for example, end users, administrators, and operators.
- Indirect stakeholders are affected without using the technology themselves: for example, bystanders captured by a camera, family members of users, and the communities in which the technology is deployed.
This list of stakeholders is not meant to be exhaustive. There is no substitute for empirical investigation for identifying stakeholders, and who the stakeholders are will vary according to the projects and social contexts involved.
It is tempting to be overly comprehensive when enumerating stakeholders, but not every impacted individual has legitimate values at play. For example:

- A burglar is impacted by a smart lock that is difficult to circumvent.
- A scammer is impacted by a spam filter that catches phishing emails.
- A stalker is impacted by a social network that restricts access to other users’ location data.
None of these stakeholders are legitimate, and their values may be safely ignored. In general, if the goals of a stakeholder are illegal or unethical, then they do not merit consideration.
Some values, such as accessibility, justice, human rights, and privacy, are universal, and designers can safely assume that they are always at stake. Others are tied to specific stakeholders and social contexts. Identifying these values requires a thorough conceptual and empirical understanding of the relevant features of the social situation. Furthermore, designers of a given technology can sometimes gain insights into values and norms by examining the design decisions undertaken by similar technologies, or other technologies deployed in the same social context.
When examining stakeholders’ values, it is critical that designers reflect on the scale of the impacts involved. The intuition is that designers should prioritize their efforts towards addressing the most critical ethical challenges. In some cases, this may mean focusing on the values held by a majority of stakeholders, to ensure the greatest benefit to the largest population. In other cases, however, it may be more important to prioritize the values of minority stakeholders if those values are severely impacted. For example, historically disadvantaged groups often bear the brunt of privacy-invasive surveillance technologies, so prioritizing their concerns may matter most even though these groups are in the minority.
There is a rich literature on human values. Here, we list a few examples of (arguably) very important values that are often implicated by technology.
| Value | Definition |
| --- | --- |
| Human rights | Inalienable, fundamental rights to which all people are entitled |
| Social justice | Encompasses both procedural justice (the process is fair) and distributive justice (the outcomes are fair) |
| Human welfare | People’s physical, material, and psychological well-being |
| Accessibility | Making all people successful users of the technology |
| Respect | Treating people with consideration and valuing their perspectives |
| Calmness | Fostering a peaceful and composed psychological state |
| Freedom from bias | Avoiding systematic differential treatment of individuals or groups, including pre-existing social bias, technical bias, and emergent social bias |
| Ownership and property | The right to possess an object (or information), use it, manage it, derive income from it, and bequeath it |
| Privacy | A claim, an entitlement, or a right of an individual to determine what information about themselves can be shared with and used by others |
| Trust | Expectations between people that interactions will be in good faith or involve good will, and will not aim to exploit vulnerabilities or dependencies |
| Accountability | Ensuring that the actions of a person, group, or institution can be traced to them, both causally and with respect to responsibility ascriptions |
| Autonomy | People’s ability to decide, plan, and act in ways that they believe will help them achieve their goals, as well as the capacity to choose those goals |
| Informed consent | Garnering people’s agreement, encompassing criteria of disclosure, comprehension, voluntariness, and competence |
| Identity | People’s understanding of who they are over time, and their ability to represent themselves in that way to others |
| Environmental sustainability | Maintaining the integrity and stability of ecological systems, processes, and components in a way that meets the needs of the present without compromising them for the future |
Again, please note that this list is not meant to be exhaustive. There is no substitute for empirical and value investigations when identifying the values at stake in a given case, and not all values will be salient for every design or social context. Note also that how a value is defined and how it is operationalized can vary from case to case. As discussed earlier, how best to arrive at a working definition of values such as justice, autonomy, privacy, and freedom of expression is often context dependent.
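To make the idea of operationalization concrete, the sketch below encodes one narrow, purely illustrative reading of “freedom from bias”: a demographic-parity check on a classifier’s decisions. The function name, data layout, and example numbers are all hypothetical choices for this example, not something prescribed by VSD.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates between groups.

    decisions: iterable of booleans (True = favorable outcome)
    groups: iterable of group labels, aligned with `decisions`
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        if decision:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes for two groups.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["A",  "A",  "A",   "A",  "B",   "B",   "B",  "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Demographic parity is only one of many competing fairness definitions, and choosing among them is itself a value-laden, context-dependent decision: precisely the kind of question the empirical and value investigations are meant to settle.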
Dealing with value tensions is by far the most challenging step in VSD. During the technical investigation, decisions must be made about the design of the technical system, i.e., how it will be implemented. The values discovered during previous investigations must be addressed in these designs, but there will be cases where values are not directly compatible (or not maximally so). Addressing these tensions may be possible through creative solutions that mix technical and social interventions. The design team might also need to make hard choices about which values to support.
Here are a few tips for addressing values in your designs:

- Look first for creative designs that dissolve the tension entirely, rather than assuming a trade-off is inevitable.
- Combine technical mechanisms with social interventions, such as policies, norms, or moderation practices.
- Where values genuinely conflict, give users meaningful choices, along with privacy- or safety-protective defaults.
- When hard choices are unavoidable, document which values were prioritized, for whom, and why.
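As a minimal sketch of what mixing technical and social interventions can look like, consider a hypothetical telemetry module that serves product improvement while protecting privacy and autonomy: data collection is off by default and gated on explicit consent (a social intervention, e.g., a consent dialog backed by policy), and what is recorded is structurally limited to coarse, non-identifying events (a technical intervention). All names and fields here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TelemetryConfig:
    # Privacy-protective defaults: users must opt in (autonomy, informed consent),
    # and even then only coarse, non-identifying data is recorded.
    opted_in: bool = False
    collect_coarse_usage: bool = True   # e.g., feature counts, not content
    collect_identifiers: bool = False   # never collected in this design

@dataclass
class Telemetry:
    config: TelemetryConfig = field(default_factory=TelemetryConfig)
    events: list = field(default_factory=list)

    def record(self, feature_name: str) -> None:
        """Record that a feature was used, if and only if the user opted in."""
        if not self.config.opted_in:
            return  # default: no data leaves the device
        if self.config.collect_coarse_usage:
            self.events.append({"feature": feature_name})  # no user content, no IDs

telemetry = Telemetry()
telemetry.record("export_pdf")    # silently dropped: no consent yet
telemetry.config.opted_in = True  # set only after an explicit consent dialog
telemetry.record("export_pdf")
print(telemetry.events)           # [{'feature': 'export_pdf'}]
```

The design choice worth noticing is that privacy here is not bolted on afterwards: the data model cannot express identifying records, so the value is supported structurally rather than by policy alone.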
Technology will be adopted in unanticipated ways. As recent history has shown, technology companies are routinely caught flat-footed when innovative (and sometimes malicious) individuals use their platforms, products, and services in ways the creators did not anticipate or intend. Being intellectually rigorous means considering and mitigating such risks in designs ahead of time, so far as is possible.
Consider the following “what if?” scenarios:

- What if a coordinated harassment campaign adopts your product’s messaging or tagging features?
- What if the data your service collects is leaked, sold, or subpoenaed?
- What if an authoritarian government repurposes your platform for surveillance or censorship?
To be clear, it is not reasonable to expect that designers will be able to anticipate all of the potential consequences of their technology. After all, the world is a large and unpredictable place. However, this does not imply that designers should simply ignore potential future consequences: many harmful outcomes are indeed predictable, especially to people familiar with the history of contemporary technology. There is no excuse for failing to mitigate these consequences ahead of time, so far as possible. In other technology design contexts, the practice of preemptively identifying and addressing concerns early in the design process is sometimes referred to as anticipatory development or upstream assessment.
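As one small example of preemptive mitigation, consider abusive automation (scraping, credential stuffing, coordinated harassment), a failure mode that recurs throughout recent platform history. The sketch below implements a token-bucket rate limiter, a common technical hedge against such misuse; the class and parameters here are illustrative assumptions, not recommendations for any particular system.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens/second."""

    def __init__(self, capacity: float = 10.0, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should reject or delay the request

limiter = TokenBucket(capacity=5, rate=0.5)
results = [limiter.allow() for _ in range(8)]
print(results)  # first 5 calls allowed, the rest denied until tokens refill
```

Rate limiting does not make misuse impossible, but it raises its cost, and raising the cost of predictable abuse is often the realistic goal of anticipatory design.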