At this point, it is clear that the primary value tensions at play with respect to identification technologies are public safety and accountability on one side, and personal privacy with its related values of calmness, autonomy, self-expression, and human rights on the other. In this section we continue our application of VSD to examine how different designs can enable us to trade off between these values.
Numerous municipalities and states have decided to ban facial recognition technologies in public life, e.g., by local law enforcement agencies. These efforts are one potential response to these technologies: designing regulations to prohibit them entirely, thus favoring personal privacy above all other values. This stance rests on the idea that the potential for misuse of identification technologies is simply too great to countenance their existence, especially in the hands of already powerful institutions like government and law enforcement. Proponents also note that the benefits of identification technologies are, at best, unclear – for example, the police are already capable of investigating and solving crimes, so why are powerful new capabilities like facial recognition necessary? Although these existing regulatory efforts are narrowly tailored to facial recognition systems, there is no reason they couldn’t be extended to other identification technologies, or expanded to the federal level.
Critics sometimes contend that banning identification technologies is futile. They point out that the techniques to build these systems (e.g., using machine learning) are well-known in the academic literature, and that software implementing these techniques is open source. Given this availability, what good is a ban? These criticisms miss a key point, however, which is that identification technologies rely on vast quantities of data (e.g., images, voice recordings, etc.) that are not generally available and require massive resources to compile and manage. In other words, only well-funded, dedicated entities can build large-scale identification systems, which makes regulation and enforcement tractable.
Another potential design is to allow identification technologies to be built, but carefully regulate who may build them and how they may be used. In this design, transparency and accountability are the key values meant to help balance privacy concerns against the potential for benefits such as increased public safety.
For example, we can envision a world where only specially designated, public sector agencies are allowed to build and use large-scale identification technologies. These agencies would answer to the public and their governance would be democratic. All aspects of their systems would be transparent: the use policies, the source code, the sources of data, etc. Algorithm audits by third-party experts could be mandated to ensure that the identification systems do not exhibit inappropriate biases. Only pre-approved entities would be allowed to query the systems (e.g., the FBI) and only in response to a valid warrant.
Software design, in addition to policy, has a key role to play in ensuring that public sector identification technologies are not abused. Cybersecurity is paramount: only authorized individuals may be permitted to query the system, and sensitive data must be secured against hacking and data breaches. Robust logging and auditing capabilities are necessary to ensure accountability for people querying the system. Data scientists must carefully construct and validate the machine learning algorithms used by the system to ensure that its functioning is interpretable by human operators and free from obvious biases.
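To make these design requirements concrete, the sketch below shows how authorization checks and tamper-evident audit logging might be combined in a single query interface. This is a minimal illustration, not a real system: the class and method names, the warrant identifiers, and the idea of representing the database as a simple lookup table are all assumptions introduced here for clarity.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative set of pre-approved querying entities (an assumption for
# this sketch; a real deployment would manage this via policy).
AUTHORIZED_AGENCIES = {"FBI"}

class AuditedQueryService:
    """Hypothetical wrapper around an identification database: every
    query must be authorized and tied to a warrant, and every attempt
    (granted or denied) is recorded in a hash-chained audit log."""

    def __init__(self, database):
        self._db = database            # e.g., {record_id: identity}
        self._log = []                 # append-only audit trail
        self._prev_digest = "0" * 64   # anchor for the hash chain

    def query(self, agency, warrant_id, record_id):
        # Deny and log queries from unapproved entities.
        if agency not in AUTHORIZED_AGENCIES:
            self._record(agency, warrant_id, record_id, "DENIED")
            raise PermissionError(f"{agency} is not authorized")
        # Deny and log queries lacking a warrant reference.
        if not warrant_id:
            self._record(agency, warrant_id, record_id, "DENIED")
            raise PermissionError("a valid warrant is required")
        result = self._db.get(record_id)
        self._record(agency, warrant_id, record_id, "GRANTED")
        return result

    def _record(self, agency, warrant_id, record_id, outcome):
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "agency": agency,
            "warrant": warrant_id,
            "query": record_id,
            "outcome": outcome,
            "prev": self._prev_digest,
        }
        # Chain each entry to its predecessor so that later tampering
        # with the log is detectable by recomputing the digests.
        self._prev_digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._log.append(entry)

    def audit_trail(self):
        return list(self._log)
```

The design choice worth noting is that denied attempts are logged just like granted ones: accountability requires a record of who tried to use the system, not only of who succeeded.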
Private sector applications of identification technologies are the most problematic, since it becomes difficult to control who is building these systems and whom they allow to query the databases. Given the profit motive, there is a tendency for private sector companies to grant expansive access to their identification products even if some of the uses and users are ethically and morally questionable. Furthermore, assessing whether privately owned identification technologies are free from problematic biases is challenging, since private companies will resist audits that might compromise their intellectual property or reveal flaws in their technology.
That said, regulatory regimes around other sensitive industries, like pharmaceuticals and nuclear waste disposal, offer lessons for paths forward. It is clear that identification technologies are dual-use in the strongest sense of the phrase, i.e., capable of benefits but also catastrophic harms. Thus, regulation is key for avoiding the downsides of this technology.