Test your knowledge with a few review questions.
Nope. Identification technologies meant for authentication on personal devices like smartphones and laptops are generally safe from a privacy perspective. These systems are designed to identify only a small number of people, i.e., the user or users of a particular device, and the data they use to identify people is stored locally on the device. Apple and Google, for example, make it very clear that fingerprints and face models captured by their smartphones are never stored in the cloud and are heavily encrypted on the device; even Apple and Google themselves don't have access to this data, by design.
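To make the distinction concrete: authentication is a 1:1 comparison of a probe against the single template enrolled on your own device, whereas the identification systems that raise privacy concerns perform 1:N searches over galleries of many people. The Python sketch below is purely illustrative, not how Apple or Google actually implement matching; the function names, toy feature vectors, and threshold are hypothetical, and real systems keep the enrolled template inside dedicated secure hardware rather than in application code.

```python
# Illustrative sketch only: toy feature vectors and a made-up threshold,
# not a real biometric pipeline.
import numpy as np

MATCH_THRESHOLD = 0.9  # hypothetical similarity cutoff


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (stand-in for a real matcher)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_on_device(probe: np.ndarray, enrolled_template: np.ndarray) -> bool:
    """1:1 authentication: compare the probe against the one template
    enrolled on this device. No network access, no gallery of strangers."""
    return similarity(probe, enrolled_template) >= MATCH_THRESHOLD


def identify_in_gallery(probe: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """1:N identification: search a large gallery (e.g., a database covering
    many people) for the best match. This is the design that threatens
    anonymity in public spaces."""
    best_id, best_score = None, MATCH_THRESHOLD
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The privacy difference lies less in the matching math, which is the same in both functions, than in the scope of the gallery and where the templates live.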
While it’s true that the law does not recognize strong individual privacy guarantees in public spaces, that doesn’t mean people have no expectation of privacy while in public, or that violating this expectation isn’t ethically and morally problematic. People in public spaces rely on practical obscurity, the difficulty of being recognized and tracked among strangers, to remain effectively anonymous, but identification technologies can strip away that obscurity, de-anonymizing people without their knowledge or consent.
Nope. The government routinely collects this data, e.g., fingerprints, photos, and DNA samples from people who are arrested; photos from vehicle registration databases; and photos and fingerprints from international travelers. Private companies also routinely extract biometric data from user-provided content like photos. Only Illinois has a robust law, the Biometric Information Privacy Act (BIPA), regulating the collection of biometric information.
Incorrect. Several cities and states have successfully banned these technologies. The key thing to realize is that access to software is not enough to build an identification system: it also requires access to vast quantities of data. This data requirement raises the barrier to entry for identification technologies and makes them much easier to regulate. Contrast this with cryptographic algorithms, which the US government tried and failed to regulate: knowledge of cryptographic algorithms is widely available, open source implementations exist, and there are no barriers to use (e.g., data or hardware requirements).
Bans are one potential response, but not the only one. Bans definitively resolve the value tensions at the heart of identification technologies in favor of personal privacy. However, there are other values at play, like public safety, and there are potential combinations of policy and software design that attempt to strike a balance between these values. There are strong arguments in favor of blanket bans, but that shouldn’t preclude us from at least taking the time to engage in an exercise in VSD: laying out the stakeholders, critically examining their values, and using this knowledge to make informed decisions.