By shutting down facial recognition, the company has signaled a new awareness of privacy concerns, but critics say it has not altered its DNA.
One year ago, a top Facebook executive approached Mark Zuckerberg with a proposal to add facial recognition to the company’s products.
That executive was Andrew Bosworth. He argued that facial recognition could enable the company to identify individuals in virtual environments and display labels right next to their bodies. Facebook had used the technology for more than a decade to tag and identify people in photos.
This account comes from an internal source who spoke on the condition of anonymity to discuss sensitive matters.
According to that source, Zuckerberg declined. After years of scandal, the company wanted to take a new direction that prioritized encryption and privacy.
U.S. cities and states had already adopted privacy laws restricting facial recognition, and other company leaders believed Facebook should get ahead of such regulation.
The Growth of Facial ID
It took months to dismantle facial identification, a technology Facebook pioneered and one that was crucial to the platform’s viral growth.
Facebook then shocked the world by announcing it was shutting the program down and deleting the facial recognition templates of more than one billion people from its databases, citing public concern over the unregulated technology.
The announcement came amid the worst public relations crisis of the company’s 17-year history, after a whistleblower released internal documents showing that the platform was aware of the societal harm it caused.
Some observers and former insiders speculate that the timing was chosen to appease critics who say the company does not prioritize the safety of its users as it builds its products.
In fact, the decision had been in the making for almost a year before the current scandal.
The proposal was championed by the internal artificial intelligence team and supported by Facebook’s policy professionals, who believe regulation of controversial technologies will come eventually, according to several people familiar with the company’s thinking.
According to two sources, the proposal was presented to Mike Schroepfer, the chief technology officer, and to Bosworth in June as a 50-page policy document outlining the pros and cons of eliminating facial recognition in every division.
Changes to Facial Recognition Policy
Some critics and former insiders are skeptical of executives’ belief that they can change the company.
Paul Argenti, a professor of corporate communication at Dartmouth College, said, “This is a leadership issue — full stop.” He added, “This attitude — we’re right and you’re wrong — is a part of the company’s DNA.”
In response to a request for comment, Facebook pointed to its blog post explaining the reasons for shuttering the program.
The motivations behind the sudden change are unclear, but it follows a pattern of making big announcements in times of crisis.
For example, the company paused work on an Instagram for children several weeks ago, saying that it needed to “listen” and respond to parents’ concerns. And when it unveiled its virtual reality plans last week, it promised that privacy and safety would be considered “from the beginning.”
Then it did what critics call the most significant deflection: Facebook changed its name to Meta.
The Constant Crisis
For the past four years, Facebook has been in constant crisis. In 2017, after discovering that Russians had extensively abused its service, Facebook minimized the severity of the problem.
One year later, the company revealed that Cambridge Analytica, a Trump-affiliated political consulting firm, and a researcher had exploited its loose data policies to improperly siphon profile information from tens of millions of U.S. users.
Both times, Zuckerberg issued dramatic public apologies, and the company rushed out announcements intended to make it seem proactive.
The Russia scandal led the CEO to apologize on his Facebook page during Yom Kippur, the Jewish day of atonement. After the Cambridge Analytica scandal, the company ran Zuckerberg’s apology letter as full-page ads in major newspapers.
Frances Haugen, a former product manager on the company’s civic integrity team, made public a cache of tens of thousands of documents in October. The documents show that Facebook knew its service contributed to political polarization and the spread of misinformation, that its products harmed the mental well-being of teenage girls, and that in many cases the company walked back steps it had proposed to reduce those harms.
Scandal
After so many crises, Facebook executives have a tried-and-true strategy. It’s simple: “Flood the zone with good news to counter any bad.”
“They might not be able to predict how large a crisis will be, but they’ll look at the news and see what you can do to impact it,” said Katie Harbath, a former director of policy at Facebook who helped manage many of the company’s scandals, including the Cambridge Analytica privacy controversy. She said, however, that she did not know about the facial recognition decision.
Public relations considerations factor into every aspect of Facebook’s product decisions. According to people familiar with the company and numerous documents obtained by The Washington Post, managers draw up potential negative and positive headlines for any prospective product announcement.
Sheryl Sandberg, Facebook’s chief operating officer and the highest-ranking executive in charge of the company’s public relations strategy, has named her private conference room at the Menlo Park headquarters “Only Good News.” Zuckerberg’s foundational “Move fast and break things” motto, by contrast, has since been removed from the campus.
After the Russia scandal, the company hired thousands of content moderators and established a new division to combat “coordinated inauthentic behavior.”
Following the Cambridge Analytica scandal, Facebook made significant changes to its data-sharing policies, partly in response to legal action against the company. Those changes helped lay the groundwork for this week’s announcement that facial recognition would end.
FTC: No Facial Recognition
In 2019, the Federal Trade Commission filed charges against the company, which were later settled for $5 billion, the largest privacy penalty ever imposed on a company.
The settlement gave regulators greater oversight of the company’s data practices, including facial recognition. Zuckerberg also pledged to reorient the company’s messaging platforms, which include Messenger and WhatsApp, around encryption, so that no one, including Facebook, would be able to access the messages.
Apple’s 2020 decision to limit the data that apps can collect on its platform also dealt a significant blow to Facebook’s business model.
Argenti said the playbook Facebook uses to manage its crises is a classic case of getting it wrong. Any company trying to correct a mistake, he noted, should admit it, explain how it will fix it, and then assure everyone that it won’t happen again.
“This is not their DNA,” he said. “And, yes, what I’m advocating for is more of a transformation in their business, how they lead and how they communicate.”