Asset managers are putting pressure on tech firms over their implementation of facial recognition software.
Mark Gilbert is a Bloomberg Opinion columnist covering asset management. He previously was the London bureau chief for Bloomberg News. He is also the author of “Complicit: How Greed and Collusion Made the Credit Crisis Unstoppable.”
After successfully persuading the energy industry to curb its climate impact, the fund management world is turning its attention to reining in tech. But there’s a danger that by trying to cover too many non-financial bases, investors will end up weakening their clout.
Their primary focus is on facial recognition technology, which is developing rapidly both as a law enforcement tool for governments and in applications for companies wanting to target individual customers. How these software systems are used — or misused — looks set to become the next pressure point for investors seeking to allocate capital in line with environmental, social and governance standards.
Our Internet browsing has already provided a trove of data, allowing websites to pepper us with pop-up ads designed to appeal to our particular habits. Now information gathering is increasingly happening in the physical world. Imagine an electronic billboard with a camera that can scan your visage to identify your age and gender, and then flash up an ad specifically targeting your demographic.
Privacy rules are struggling to keep up with the enhanced ability of software to screen large groups of people in real time, amassing data without permission and risking running roughshod over the most basic civil liberties.
“When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” Elizabeth Denham, the U.K. Information Commissioner and Britain’s privacy chief, said in a blog post.
Earlier this month, 50 global asset management firms overseeing more than $4.5 trillion pledged to press the companies they invest in to ensure facial recognition technology is developed “in an ethical way, with the right regulation and oversight.” They want companies to subject their technology to independent screening for accuracy, disclose the sources of their image databases and exercise due diligence before supplying systems to customers. Firms should “assess their human rights impacts,” Louise Piffaut, senior ESG analyst at Aviva Plc’s investment arm, said in a statement supporting the pledge.
The initiative was led by Candriam, New York Life Insurance Co.’s European fund manager with 140 billion euros ($168 billion) of assets. In 2019, Candriam divested from Hangzhou Hikvision Digital Technology Co. after becoming concerned about the Chinese company’s business supplying mass biometric surveillance systems. That prompted the firm to start investigating the issue. “The more we dug, the more we found controversies,” Candriam’s engagement analyst Benjamin Chekroun told me last week.
The fund management alliance is taking on some of the world’s biggest companies developing face scanning software, such as Alphabet Inc.’s Google, Apple Inc., Tencent Holdings Ltd. and Alibaba Group Holding Ltd. The zeitgeist over privacy is in the group’s favor.
Last month, Amazon.com Inc. indefinitely extended a moratorium on allowing police forces to use its Rekognition algorithms, which a January 2019 study by two artificial intelligence researchers showed made more mistakes when used on people with darker skin, particularly women. At the company’s May 26 annual meeting, investors including the State Board of Administration of Florida and Ontario Teachers’ Pension Plan voted in favor of a shareholder proposal asking Amazon to consider the human rights impacts of the technology, although the suggestion didn’t garner sufficient backing to become policy.
Last week, the U.S. Federal Communications Commission proposed a ban on surveillance cameras made by five Chinese companies, including Hikvision and Huawei Technologies Co., citing national security concerns. And the European Union in April proposed strict constraints on the use of biometric technology in the bloc, with large fines threatened for firms that ignore the rules.
The concern over developing technologies makes sense. The problem for asset managers, though, is that getting involved risks diluting their ESG efforts. Financial materiality, to use the current jargon, is clearly in their remit; social materiality, however desirable, is less clearly their responsibility.
There’s a compelling argument that the climate crisis poses a clear and present economic danger, with risks ranging from coastal flooding to stranded oil and gas assets. Similarly, the financial case for companies improving the gender and racial diversity of both their workforces and executive boards is convincing. Missing out on swathes of the available talent pool will hobble any enterprise.
It’s harder to put a price on the operational and reputational risks that might befall companies for misusing facial recognition technology. But it’s possible. Earlier this year, a San Francisco court approved Facebook Inc.’s $650 million settlement of allegations that it collected and stored digital scans of users’ faces without consent. And Swedish furniture retailer Ikea was fined 1 million euros by a French court last week for spying on staff and customers.
The fund management community has achieved notable successes in recent years in forcing companies to do more to reduce their carbon footprints and do less damage to the environment. If “technology is the new climate change,” as Candriam’s Chekroun says, I wouldn’t underestimate the power of capital to protect our faces from being secretly scanned.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.