Challenges

Building ethical technology is not easy, but that doesn’t mean we can ignore the challenges! In this section, we discuss some common challenges when using VSD and strategies to address them.

Identifying Relevant Social Structures

We know that all technology is socially situated. One of the major tasks of the empirical investigation, and then the value investigation, is to determine the social context(s) that will surround a given technology, and then to use that understanding to identify who the stakeholders are and which values are at play in that context.

When thinking about social structures, there are several important questions that VSD practitioners should endeavour to answer:

  • Are there socio-economic or historical inequalities? These deeply ingrained inequalities are often the root of fairness or bias problems that may impact new technologies, and designers should strive to mitigate, or at least not perpetuate, these inequalities.
  • How could the technology change the social structure into which it is introduced? Technology is both impacted by, and in turn impacts, extant social structures. For example, consider how Facebook has changed the media ecosystem, or how Amazon has changed the face of commerce. Anticipating the impact of your technology can help designers avoid negative outcomes.
  • How will social structures change over time? Similar to the previous question, designers should remain agile in the face of changing social structures and norms. The assumptions that underlie your analysis of stakeholders and values may shift over time.
  • What are the benefits and costs of the technology? Benefits and costs are often context dependent.
  • How are the benefits and costs distributed across individuals and society? Principles of justice require that the benefits of a technology be fairly distributed among stakeholders. Striving to achieve this through careful design ensures that the benefits of technology are felt by the many, rather than confined to the few, and that those who shoulder the burdens (or receive the costs or disadvantages) of a technology also have access to its benefits.
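
These questions are easier to revisit over a project's lifetime if the answers are written down in a structured form. The sketch below shows one minimal way to do that in Python; the class and field names (SocialContextAnalysis, GroupImpact, and so on) are our own illustrative choices rather than part of the VSD method, and the example entries are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GroupImpact:
    """Benefits and costs as experienced by one stakeholder group."""
    group: str
    benefits: list[str] = field(default_factory=list)
    costs: list[str] = field(default_factory=list)

@dataclass
class SocialContextAnalysis:
    """Answers to the social-structure questions above, recorded for one project."""
    known_inequalities: list[str]               # socio-economic or historical inequalities
    anticipated_structural_changes: list[str]   # how the technology may reshape its context
    impacts: list[GroupImpact]                  # how benefits and costs are distributed

    def groups_bearing_costs_without_benefits(self) -> list[str]:
        """Flag groups that shoulder burdens but see none of the benefits."""
        return [i.group for i in self.impacts if i.costs and not i.benefits]

# Hypothetical entry for a neighborhood camera product.
analysis = SocialContextAnalysis(
    known_inequalities=["over-policing of minority neighborhoods"],
    anticipated_structural_changes=["shifts neighborhood disputes toward recorded evidence"],
    impacts=[
        GroupImpact("purchasers", benefits=["deterrence"], costs=["subscription fees"]),
        GroupImpact("passers-by", benefits=[], costs=["being recorded without consent"]),
    ],
)
print(analysis.groups_bearing_costs_without_benefits())  # ['passers-by']
```

The helper method is a simple example of how such a record can prompt action: any group that appears in its output is a candidate for redesign or additional mitigation.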

Identifying Relevant Stakeholders

Identifying relevant stakeholders is a critical task in VSD, as this directly determines whose values, concerns, and perspectives will be represented during value analysis. Here, we offer some examples of direct and indirect stakeholders to help guide practitioners.

  • Direct Stakeholders
    • The sponsor (e.g., your employer or funder)
    • Members of the design team. Your values matter!
    • Demographically diverse users – for example, users of different races and ethnicities, genders and sexual orientations (including LGBTQIA+ people), abilities, geographic regions or countries, and levels of educational attainment.
    • Populations that may deserve special consideration – for example, children, the elderly, victims of intimate partner violence, families living in poverty, the incarcerated, indigenous peoples, the homeless, religious minorities, and non-technology users.
    • User roles – for example, content creators, content consumers, and power users.
  • Indirect Stakeholders
    • Bystanders: those who are around your users, e.g., pedestrians near an autonomous car
    • Human data points: those who are passively surveilled by your system
    • Civil society: people who care deeply about the issues or problem being addressed, e.g., people who aren’t on social media still care about, and are impacted by, disinformation spread on these platforms
    • Those without access – for example, due to cost, education, availability of necessary hardware and/or infrastructure, or institutional censorship

This list of stakeholders is not meant to be exhaustive. There is no substitute for empirical investigation when identifying stakeholders, and who the stakeholders are will vary across projects and social contexts.
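
One practical way to keep this enumeration from living only in someone's head is to maintain a simple stakeholder register alongside the project. Below is a minimal sketch, assuming a Python codebase; the Stakeholder and StakeholderKind names and the autonomous-vehicle entries are purely illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class StakeholderKind(Enum):
    DIRECT = "direct"      # interacts with the system itself
    INDIRECT = "indirect"  # affected by the system without using it

@dataclass
class Stakeholder:
    name: str
    kind: StakeholderKind
    why_affected: str                     # how this group touches, or is touched by, the system
    special_consideration: bool = False   # e.g., children, incarcerated people, non-users

# Hypothetical register for an autonomous-vehicle project.
stakeholders = [
    Stakeholder("riders", StakeholderKind.DIRECT, "hail and ride in the vehicle"),
    Stakeholder("remote safety operators", StakeholderKind.DIRECT, "monitor and override the vehicle"),
    Stakeholder("pedestrians", StakeholderKind.INDIRECT, "share the road with the vehicle",
                special_consideration=True),
    Stakeholder("people without smartphones", StakeholderKind.INDIRECT, "cannot hail the service",
                special_consideration=True),
]

indirect = [s.name for s in stakeholders if s.kind is StakeholderKind.INDIRECT]
print(indirect)  # ['pedestrians', 'people without smartphones']
```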

Filtering Stakeholders

It is tempting to be overly comprehensive when enumerating stakeholders, but not every impacted individual has legitimate values at play. For example:

  • Foreign election meddlers are affected by content moderation and want to protect their “free speech”
  • Dictatorships are impacted by universal encryption, since it hinders their surveillance capabilities
  • Cyber criminals want to steal things, and are against cybersecurity measures

None of these stakeholders are legitimate, and their values may be safely ignored. In general, if the goals of a stakeholder are illegal or unethical, then they do not merit consideration.
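
If the stakeholder register is kept in a structured form, this rule can be applied mechanically once the legitimacy judgment has been made. A minimal sketch follows, with a hypothetical goals_illegal_or_unethical flag; the judgment itself is a human, deliberative decision that the code merely records.

```python
# Illustrative only: deciding whether a stakeholder's goals are illegal or unethical
# is a human judgment; the code records and applies that judgment, nothing more.
def filter_legitimate(candidates: list[dict]) -> list[dict]:
    """Drop stakeholders whose goals the team has judged illegal or unethical."""
    return [c for c in candidates if not c["goals_illegal_or_unethical"]]

candidates = [
    {"name": "journalists", "goals_illegal_or_unethical": False},
    {"name": "foreign election meddlers", "goals_illegal_or_unethical": True},
    {"name": "everyday users", "goals_illegal_or_unethical": False},
]
print([c["name"] for c in filter_legitimate(candidates)])
# ['journalists', 'everyday users']
```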

Identifying the Full Range of Values

Some values, such as accessibility, justice, human rights, and privacy, are universal, and designers can safely assume that they are always at stake. Others are tied to specific stakeholders and social contexts. Identifying these values requires a thorough conceptual and empirical understanding of the relevant features of the social situation. Furthermore, designers of a given technology can sometimes gain insights into values and norms by examining the design decisions made for similar technologies, or for other technologies deployed in the same social context.

When examining stakeholders’ values, it is critical that designers reflect on the scale of the technology’s impacts on those values. The intuition is that designers should prioritize their efforts towards addressing the most critical ethical challenges. In some cases, this may suggest focusing on the values held by a majority of stakeholders, to ensure the greatest benefits to the largest populations. However, in other cases it may be more important to prioritize the values of minority stakeholders if those values are severely impacted. For example, historically disadvantaged groups often bear the brunt of privacy-invasive surveillance technologies, so prioritizing their concerns may be most important despite these groups being in the minority.
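
The breadth-versus-severity trade-off can be made concrete with a toy scoring exercise. To be clear, this is not a VSD technique, and the field names and weights below are arbitrary assumptions of ours; real prioritization is a deliberative judgment, not arithmetic. The sketch only illustrates how weighting severity can let a severely impacted minority outrank a mildly inconvenienced majority.

```python
from dataclasses import dataclass

@dataclass
class ValueImpact:
    value: str            # e.g., "privacy"
    group: str            # the stakeholder group holding the value
    people_affected: int  # rough breadth of the impact
    severity: int         # 1 (minor inconvenience) .. 5 (severe, hard-to-reverse harm)

def priority(impact: ValueImpact, severity_exponent: float = 3.0) -> float:
    """Toy score in which severity grows super-linearly, so severe harms to a
    small group can outrank mild inconveniences to a very large one."""
    return impact.people_affected * (impact.severity ** severity_exponent)

impacts = [
    ValueImpact("convenience", "general users", people_affected=1_000_000, severity=1),
    ValueImpact("privacy", "surveilled minority community", people_affected=10_000, severity=5),
]
for i in sorted(impacts, key=priority, reverse=True):
    print(i.value, i.group, round(priority(i)))
# privacy surveilled minority community 1250000
# convenience general users 1000000
```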

Example Values to Consider

There is a rich literature on human values. Here, we list a few examples of (arguably) very important values that are often implicated by technology. These are drawn, in part, from the VSD book.

  • Human rights: Inalienable, fundamental rights to which all people are entitled
  • Social justice: Encompasses both procedural justice (the process is fair) and distributive justice (the outcomes are fair)
  • Human welfare: People’s physical, material, and psychological well-being
  • Accessibility: Making all people successful users of the technology
  • Respect: Treating people with consideration and valuing their perspective
  • Calmness: Fostering a peaceful and composed psychological state
  • Freedom from bias: Avoiding systematic differential treatment of individuals or groups, including pre-existing social bias, technical bias, and emergent social bias
  • Ownership and property: The right to possess an object (or information), use it, manage it, derive income from it, and bequeath it
  • Privacy: A claim, an entitlement, or a right of an individual to determine what information about themself can be shared with and used by others
  • Trust: Expectations that exist between people that interactions are in good faith or involve good will, and will not aim to exploit vulnerabilities or dependencies
  • Accountability: Ensuring that the actions of a person, people, or institution may be traced to them both causally and with respect to responsibility ascriptions
  • Autonomy: People’s ability to decide, plan, and act in ways that they believe will help them achieve their goals, as well as the capacity to choose their own goals
  • Informed consent: Garnering people’s agreement, encompassing criteria of disclosure, comprehension, voluntariness, and competence
  • Identity: People’s understanding of who they are over time, and their ability to represent themselves in that way to others
  • Environmental sustainability: Maintaining the integrity and stability of ecological systems, processes, and components in a way that meets the needs of the present without compromising them for the future

Again, please note that this list is not meant to be exhaustive. There is no substitute for empirical and value investigations when identifying the values at stake in a given case, and not all values will be salient for every design or social context. Note, also, that how a value is defined and operationalized can vary from case to case. As discussed earlier, how best to generate a working definition of values such as justice, autonomy, privacy, and freedom of expression is often context dependent.
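
One low-effort way to use a value list like this is as a design-review checklist, so that no cataloged value is skipped silently. A minimal sketch follows, assuming a Python project; the VALUE_CATALOG structure, the abbreviated definitions, and the truncated entries are our own illustration, not a canonical catalog.

```python
# Illustrative checklist: the catalog restates (part of) the list above; a real
# project would extend it with context-specific values surfaced by empirical work.
VALUE_CATALOG = {
    "human rights": "Inalienable, fundamental rights to which all people are entitled",
    "privacy": "An individual's claim to determine what information about them is shared and used",
    "trust": "Expectations that interactions are in good faith and will not exploit vulnerabilities",
    "autonomy": "People's ability to decide, plan, and act toward goals they choose themselves",
    # ... remaining values from the list above ...
}

def unreviewed_values(reviewed: set[str]) -> set[str]:
    """Return catalog values the design team has not yet explicitly considered."""
    return set(VALUE_CATALOG) - reviewed

print(sorted(unreviewed_values({"privacy", "trust"})))  # ['autonomy', 'human rights']
```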

Addressing Value Tensions

Dealing with value tensions is by far the most challenging step in VSD. During the technical investigation, decisions must be made about the design of the technical system, i.e., how it will be implemented. The values discovered during the previous investigations must be addressed in these designs, but there will be cases where values are not directly compatible (or not maximally so). Addressing these tensions may be possible through creative solutions that mix technical and social interventions. The design team may also need to make hard choices about which values to support.

Here are a few tips for addressing values in your designs:

  • What are the core values that cannot be violated? Inevitably, there will be values implicated by your technology that simply cannot be violated. These core values might be driven by legal mandates, or derive from non-negotiable qualities like supporting basic human rights. It may simplify your design process to prioritize these core values and address them first. By default, this may decide the outcome of some value tensions.
  • When a tension cannot be reconciled, whose values take precedence? This is where having a deep understanding of social and historical context becomes critical. Not all stakeholders will be impacted equally by your technology – understanding how benefits and harms are distributed across stakeholders can guide you towards designs that prioritize justice, exhibit beneficence, and mitigate harms to the most vulnerable stakeholders.
  • What tensions must be addressed immediately, versus later on through additional features? It may not be feasible to address all value tensions in your initial prototypes and deployments. It is important to identify early in the design process where future features can address tensions, because early decisions can foreclose future design possibilities. In other words, it pays to plan for a flexible future and lay extensible groundwork.
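
These questions can be folded into a lightweight triage of the tensions the team has recorded. The sketch below is a minimal illustration, not a decision procedure: the Tension fields, bucket names, and example conflicts are our own, and every flag stands in for a judgment the team must make deliberatively.

```python
from dataclasses import dataclass

@dataclass
class Tension:
    """A conflict between two values surfaced during the technical investigation."""
    value_a: str
    value_b: str
    violates_core_value: bool   # would the leading design break a non-negotiable value?
    most_affected_group: str    # whose values take precedence if no reconciliation is found
    deferrable: bool            # can the tension be addressed later via additional features?

def triage(tensions: list[Tension]) -> dict[str, list[Tension]]:
    """Rough triage mirroring the three questions above (illustrative only)."""
    buckets: dict[str, list[Tension]] = {
        "resolve now (core value at risk)": [],
        "resolve now": [],
        "plan follow-up feature": [],
    }
    for t in tensions:
        if t.violates_core_value:
            buckets["resolve now (core value at risk)"].append(t)
        elif t.deferrable:
            buckets["plan follow-up feature"].append(t)
        else:
            buckets["resolve now"].append(t)
    return buckets

# Hypothetical tensions for an autonomous-vehicle project.
example = [
    Tension("security", "usability", violates_core_value=False,
            most_affected_group="low-vision riders", deferrable=True),
    Tension("safety", "rider autonomy", violates_core_value=True,
            most_affected_group="pedestrians", deferrable=False),
]
for bucket, items in triage(example).items():
    print(bucket, [f"{t.value_a} vs {t.value_b}" for t in items])
```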

Identifying Unintended Consequences

Technology will be adopted in unanticipated ways. As recent history has shown, technology companies are routinely caught flat-footed when innovative (and sometimes malicious) individuals use their platforms, products, and services in ways the creators did not anticipate or intend. Being intellectually rigorous means considering and mitigating these risks in designs ahead of time, so far as is possible.

Consider the following “what if?” scenarios:

To be clear, it is not reasonable to expect that designers will be able to anticipate all of the potential consequences of their technology. After all, the world is a large and unpredictable place. However, this does not imply that designers should simply ignore potential future consequences – many harmful outcomes are indeed predictable, especially for people familiar with the history of contemporary technology. There is no excuse for not mitigating these consequences, so far as possible, ahead of time. In other technology design contexts, the practice of trying to preemptively identify and address concerns early in the design process is sometimes referred to as anticipatory development or upstream assessment.
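
One concrete way to practice this kind of anticipatory, upstream assessment is to keep a living register of misuse scenarios alongside the design and revisit it as the product evolves. The sketch below is illustrative only; the MisuseScenario fields and the photo-sharing entries are hypothetical and not drawn from any particular product.

```python
from dataclasses import dataclass

@dataclass
class MisuseScenario:
    """One 'what if?' scenario, recorded so it can be revisited as the product evolves."""
    description: str        # the anticipated misuse or unintended use
    affected_groups: list   # who would bear the harm
    likelihood: int         # 1 (far-fetched) .. 5 (already observed on similar products)
    mitigation: str         # design or policy change that reduces the risk

# Hypothetical entries for a photo-sharing feature.
register = [
    MisuseScenario("location metadata reveals users' home addresses",
                   ["users", "their household members"], likelihood=4,
                   mitigation="strip EXIF location data by default"),
    MisuseScenario("coordinated harassment via mass-tagging",
                   ["targeted users"], likelihood=3,
                   mitigation="rate-limit tags and allow tag approval"),
]

# Review the riskiest scenarios first.
for s in sorted(register, key=lambda s: s.likelihood, reverse=True):
    print(s.likelihood, s.description)
```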