Answer: B - Meta’s effort to weigh both the risks and the benefits of facial recognition technology before ultimately deciding to discontinue use of the tool is an example of Privacy Risk Management.
Facebook pointed to the accessibility benefits of facial recognition tools as one of the difficult tradeoffs it had to weigh in making that decision.
On November 2, 2021, Jerome Pesenti, VP of Artificial Intelligence for Meta, published a release saying, “But like most challenges involving complex social issues, we know the approach we’ve chosen involves some difficult tradeoffs. For example, the ability to tell a blind or visually impaired user that the person in a photo on their News Feed is their high school friend, or former colleague, is a valuable feature that makes our platforms more accessible. But it also depends on an underlying technology that attempts to evaluate the faces in a photo to match them with those kept in a database of people who opted-in. The changes we’re announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.
Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices. This method of on-device facial recognition, requiring no communication of face data with an external server, is most deployed today in the systems used to unlock smartphones.
We believe this can enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs. For potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.” [Read the full release here]