Wednesday, March 16, 2016

Making a More Empathetic Facebook



The company’s compassion department researches ways to make confrontations and breakups a little easier online.


In 2011, amid backlash over rising cyberbullying, Facebook hired a small team of researchers from the University of California, Berkeley, and Yale University to figure out how to give users a greater sense of control. In March 2011, based on their recommendations, Facebook rolled out a social-reporting tool to help users resolve conflicts among themselves by letting them engage directly with other people over objectionable content. If a person found a photo offensive, for example, they could send a pre-populated message that read, “Hey, I didn’t like this photo. Please remove it.”
“This was for the photos Facebook had no capacity to address,” said Emiliana Simon-Thomas, the science director for Berkeley’s Greater Good Science Center, who worked on the project from the beginning. “They’d report a photo of people drinking at a party, and Facebook wouldn’t be able to help them because it didn’t violate the Terms of Service.”
“Our thinking was, how do we get people to work it out themselves rather than appeal to Facebook for mediation?” she added.
That was the beginning of Facebook’s compassion department, a unit within the company that suggests ways for engineers to help users navigate the site with greater emotional intelligence. The department’s new tool was a measured success: 85 percent of the time, the person who posted the photo took it down or sent a reply. In surveys asking about the interactions, 65 percent of respondents felt positive about the person sending the message, while 25 percent felt neutral.
Over time, the team tinkered with the messages. They found that using names, for example, tended to be more effective: “Hey John” worked more often than a simple “Hey,” and phrases signifying humility or vulnerability, like “please” or “this bothers me,” took the edge off the confrontation. Today, one of these pre-populated messages reads, “Hey John, this is a bad photo of me and I don’t want people to see it. Would you please take it down?”
A number of other changes over the years have grown out of the compassion department’s research. Ending relationships, for example, got easier: “hide” and “unfollow” options let people remove objectionable content from their feeds.
Tools like these help users “to see that just because they’re online, they still have the ability to walk out of the room,” said Pamela Rutledge, the director of the Media Psychology Research Center at Fielding Graduate University. “By making people feel more in control of their lives, it influences their sense of identity and resilience.”
While features like these were generally well received, the compassion team has had some misses over the years. Heated comment wars underneath news articles, for example, are one area they’ve struggled to address. In a bid to help users resolve public confrontations, the team briefly experimented with an option for users to bring in a mediator. If two friends were fighting over their political beliefs, for example, and things were getting heated, they could choose to notify a third friend to intervene and help settle the argument.
“The idea came from psychology literature that said if you bring in an ally, someone who was unbiased, you’d be able to resolve the interpersonal conflict,” Simon-Thomas said. But testing revealed that the tool largely went unused, so it never rolled out to the general public.
And for a long time, the compassion team grappled with the idea of introducing a “Dislike” button, a frequent request from users who wanted to show sympathy for friends who posted about difficult times.
“It was very carefully thought about for a lot of really good reasons,” Simon-Thomas said. “We looked at the contexts where people would dislike something, like if your grandma died, but what we were seeing was users using hearts to express ‘I care and I support you.’ So that’s what we advised them to build.”
And they listened. Last month, Facebook unveiled its emoji reactions: thumbs-up, love, haha, wow, sad, and angry. The Pixar story artist Matt Jones designed the stickers and, in an effort to capture the nuances of human expression, based them on Darwin’s book The Expression of the Emotions in Man and Animals.
The progression from the “like” button to these stickers is a pattern that repeats itself across Facebook: Many of the features we use today are more nuanced, meticulously thought-out iterations of previous models. For example, the memorialization tool, introduced in 2009 to protect the privacy of Facebook users who die, now allows users to choose a “legacy contact” who can make one last post on the deceased’s behalf and manage the account. When you end a relationship, a new “breakup flow” feature limits photos, videos, and status updates from a former flame, rather than “hiding” them forever. And there are more resources addressing cyberbullying, such as the Yale psychologist Marc Brackett’s new project, inspirED, a Facebook community where educators develop and share anti-bullying resources.
Indeed, the goal has never been to eradicate all negativity from Facebook with algorithms, but rather to create an environment where people can grow from the negative things they see online, Simon-Thomas said.
“Sometimes we ask ourselves, ‘How can we create a better experience than real life? Or do we even want to be?’” she said. “People suffer and feel lonely in this world. That’s part of life.”
That’s why Facebook gives us options — do you want to block the offensive person? Hide them? Or send them a polite message asking them to cease?
“They’re trying to model altruistic behavior and allow people to control their threads,” Rutledge said, “without using a bunch of algorithms that make presumptions about what we want to see.”
