October 18, 2021

Lessons About AI Algorithms from the Facebook Hearings

In the recent Senate hearings regarding Facebook, we learned that Facebook likely turned to artificial intelligence (AI) algorithms in an attempt to cut costs and keep the business running even while understaffed. While they’ve profited greatly from this change, it may now be time to pay the piper. For other tech companies looking on, the main takeaway should be that, while AI is great at cutting costs and speeding up operations, it can’t replace the employees you need to run your business.

AI requires human oversight

While AI is a great tool for automating processes and reducing operating costs, it requires human oversight to ensure accuracy. Without human input, especially in the early stages of implementation, the AI won’t know when it has made a mistake, so it will keep making the same mistake. Mike O’Malley, SVP for SenecaGlobal, explains, “What people need to understand kind of at the top level is what AI does. AI shows you what’s in the data, and it helps you mine insights. AI isn’t going to tell you whether that’s good or bad or whether you should do it.”

“AI shows you what’s in the data, and it helps you mine insights. AI isn’t going to tell you whether that’s good or bad or whether you should do it.”

Mike O’Malley, SVP for SenecaGlobal

Data scientists are critical for outlining the parameters of what the AI algorithm will examine, but they have to continually engage with the model to ensure that it’s working as planned. Without human oversight, the AI might continue to deliver results that it believes to be relevant, even if they have harmful outcomes. We saw the fruits of this during the recent Facebook hearings.

The Facebook Ads experiment

Early in 2021, the Tech Transparency Project (TTP) submitted six advertisements to Facebook for approval. These ads were targeted toward minors between the ages of 13 and 17, and they included messaging related to drug paraphernalia, dieting, and gambling. The organization never actually ran the ads, not wanting to subject children to harmful messaging, but Facebook approved them.

Facebook investigated the results and said they’d block advertisers from targeting ads to people under 18 based on their interests. But when the TTP tried again in September, the ads were once again approved. 

The problem likely lies with understaffing, according to Facebook whistleblower Frances Haugen’s testimony to the Senate. AI is responsible for sifting through and approving these ads, and it’s likely doing so without human oversight. Because of this, it’s not getting feedback that these ads are harmful to children and shouldn’t be approved.

AI needs extensive training

AI is great at scaling businesses and reducing operating costs, but it needs extensive training to be accurate. And the more complicated the model is, the more data it will need to consume during training. Take a customer service chatbot for example. These models need to be able to use natural language processing (NLP) to interpret a customer’s question, relate that question to a topic in the knowledge base even if the wording isn’t exactly the same, and then relay that information to the customer. 

Customers often phrase things differently, so while one might say, “I’d like to track my order,” another might ask, “Where is my order?” With the right level of training, an AI model can identify these as the same request and direct the customer to the right answer.
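The kind of matching described above can be sketched in a few lines. This is a toy token-overlap (Jaccard) approach, not Facebook’s or any production NLP system; the knowledge-base intents, example phrasings, and the threshold value are all hypothetical:

```python
# A minimal sketch of intent matching: map differently worded customer
# questions to the same knowledge-base topic using token overlap.
# The intents, example phrasings, and threshold here are hypothetical.

def tokenize(text):
    """Lowercase, strip basic punctuation, and split into a set of words."""
    return set(text.lower().replace("?", "").replace("'", " ").split())

# Hypothetical knowledge base: each intent lists example phrasings.
KNOWLEDGE_BASE = {
    "order_tracking": ["I'd like to track my order", "Where is my order"],
    "returns": ["How do I return an item", "I want a refund"],
}

def match_intent(question, threshold=0.2):
    """Return the best-matching intent, or None if nothing is close enough."""
    q_tokens = tokenize(question)
    best_intent, best_score = None, 0.0
    for intent, examples in KNOWLEDGE_BASE.items():
        for example in examples:
            e_tokens = tokenize(example)
            # Jaccard similarity: shared tokens / total distinct tokens
            score = len(q_tokens & e_tokens) / len(q_tokens | e_tokens)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

With this sketch, “Where’s my order?” and “I’d like to track my order” both resolve to the same `order_tracking` topic, while an unrelated question falls below the threshold and returns `None`, which is where escalation to a human would come in.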

AI can’t replace your staff

AI can help you scale your business and take some of the work away from your human employees, but it can’t replace them. In the chatbot example above, you still need customer service reps that the AI can escalate situations to when the conversation goes beyond its capabilities. And sometimes, customers are just more comfortable talking to a human. 
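The escalation logic described above can be sketched simply: the bot answers only when its model is confident, and hands the conversation to a human rep after repeated low-confidence turns. The thresholds and the `classify` function here are hypothetical, not any vendor’s actual implementation:

```python
# A minimal sketch of human escalation: the bot answers only when it is
# confident, and hands off to a customer service rep otherwise.
# The thresholds and the classify function are hypothetical.

CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff for automated replies
MAX_FAILED_TURNS = 2        # escalate after this many low-confidence turns

def handle_message(question, classify, failed_turns=0):
    """Answer automatically or escalate. `classify` is assumed to return
    an (intent, confidence) pair from the bot's NLP model."""
    intent, confidence = classify(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "answer", "intent": intent, "failed_turns": 0}
    failed_turns += 1
    if failed_turns >= MAX_FAILED_TURNS:
        return {"action": "escalate_to_human", "intent": None,
                "failed_turns": failed_turns}
    return {"action": "clarify", "intent": None, "failed_turns": failed_turns}
```

The design choice worth noting is the explicit `escalate_to_human` path: the handoff to staff is built into the flow rather than bolted on, which is exactly the human backstop the chatbot example calls for.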

Companies also need quality data scientists to train and oversee the algorithm. O’Malley says, “Data scientists, who are the ones that write and tune and evaluate the algorithms, need to be well informed and need to be in place to oversee all of this, otherwise it’s just a computer algorithm run amok. It’s going to optimize to whatever it’s been trained to do, but you may have a lot of unintended consequences [without oversight].”

The toll of acting as a Facebook moderator

[Image: A smartphone showing content that has been censored by a moderator.]

Even Facebook needs human staff to work alongside its AI algorithms, especially as moderators. These moderators examine posts for harmful messaging, including violence, graphic images, and self-harm. But these roles are intense, often leading to mental health issues. 

According to a BBC article, there’s even a disclaimer that moderators have to sign stating, “I understand that exposure to this content may give me post traumatic stress disorder.

“I will engage in a mandatory wellness coaching session but I understand that those are not conditions [sic] and may not be sufficient to prevent my contracting PTSD.” 

While AI would obviously be the better choice here, the ad experiment shows that AI likely wouldn’t catch enough of these harmful images to protect the average user. Unfortunately, human moderators are still necessary at this stage. 

AI learns whether you’re saving data or not

While Facebook says it isn’t storing data on its teenage users, the unfortunate reality is that the AI is still learning from their behaviors. Even if that data gets deleted afterward, the algorithm still knows what will be successful with that age group.

O’Malley likens an AI algorithm to reading a book. “It would be as if you told a bunch of students to read five novels and then destroy them afterward. Well, they’re still going to remember whatever it is they remember,” he says. “And AI is going to work the same way. It’s going to remember whatever you’ve taught it to pick out from those books and learn.”
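O’Malley’s book analogy can be made concrete with a toy example: a model trained on interaction data keeps its learned parameters even after the raw data is deleted. The engagement log and the count-based “model” below are hypothetical stand-ins, not Facebook’s actual system:

```python
# A minimal sketch of the point above: once a model is trained, deleting
# the training data does not remove what was learned. The engagement log
# and count-based model here are hypothetical toy stand-ins.

from collections import Counter

def train(engagement_log):
    """Learn which topics drew the most clicks. The returned parameters
    (topic click counts) live independently of the raw log."""
    model = Counter()
    for event in engagement_log:
        if event["clicked"]:
            model[event["topic"]] += 1
    return model

def recommend(model):
    """Recommend the topic the model learned performs best."""
    return model.most_common(1)[0][0]

# Train on raw interaction data...
log = [
    {"topic": "dieting", "clicked": True},
    {"topic": "sports", "clicked": False},
    {"topic": "dieting", "clicked": True},
]
model = train(log)

# ...then delete the raw data, as a privacy policy might require.
del log

# The learned behavior persists: the model still "remembers" what worked.
print(recommend(model))  # prints "dieting"
```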

How your business can apply these lessons and use AI ethically

AI doesn’t have to be a bad thing; it’s great for automating simple tasks and providing insights on the data it gathers. But it needs quality training and oversight to ensure that it’s performing ethically. Businesses need to take responsibility for their AI models, training them to support business goals without doing lasting harm. While government oversight sounds great in theory, it’s not really plausible for the myriad ways businesses use AI.

If you’re looking to incorporate AI into your own business processes, check out our Product Selection Tool. After answering some questions about your business needs, you’ll get a customized list of AI software recommendations.

Read next: What is AI and How Can Businesses Use It?