Monday, April 13, 2015

There are more Facebook users than there are Catholics

*At least as of two months ago. The title has nothing to do with this blog post except that I thought it was a fun pull quote from the article/podcast.

Facebook has become enormous, and consequently, a tremendous number of photos are posted to it each day. Back in 2011, during the Christmas season, so many photos were reported that Facebook, in an attempt to cut down on the man-hours and money required to deal with them all, set out to understand why these photos were being reported in the first place. If they could understand why, maybe they could be better prepared to deal with it.
As a team sat down to analyze the reports, they noticed that 97% of reports were made for personal reasons rather than for prohibited or illegal content (hate speech, drug use, etc.). To deal with this, the team inserted the question “How does this photo make you feel?” into the reporting process. Around half of users selected the “embarrassing” option. The second most popular option, however, was “other,” into which people wrote “it’s embarrassing.” Intrigued by this, the team changed the choice “embarrassing” to “it’s embarrassing,” and as a result, nearly 25% more people selected that option.

Nudge.

This realization led to a series of social experiments by Facebook. They implemented a feature that prompted users to send a message to the person whose photo they were reporting, explaining why they reported it. When this was met with a lukewarm response, they tried inserting premade messages. By experimenting with the specific words used in these premade messages, they were able to control both the frequency with which users sent the message and the frequency with which the other person responded and complied (by removing the photo).

“At any given moment, a Facebook user is a part of 10 different experiments at once.”

Facebook’s social experimentation came to public attention last year, when it was revealed that Facebook had been manipulating users’ news feeds to modify their emotions, based on the idea that exposure to images and news of a certain emotional tone (positive or negative) would provoke a similar response. The results were used to map users’ emotions, and there was public outrage.
From the perspective of the Facebook “Trust Engineers” (a team since given a more PR-friendly name), Facebook was using its power to make users’ online selves more like their real-world selves. In real life, no one would “report” (to whom? The police?) a friend for snapping an embarrassing photo of them; they would instead talk face-to-face with that person about how they felt. This initiative, and others like it, was meant to encourage exactly that.

Personally, I don’t understand the outrage. What Facebook is doing is nothing new. Sure, they hold a great deal of power, and they must use it responsibly (to paraphrase Stan Lee). There is potential for nefarious deeds; one obvious example suggested in the podcast is influencing users’ voting habits. But even though Facebook has a financial incentive to study its users, the same can be said of plenty of other institutions that perform similar studies with no benefit beyond the financial one. Facebook’s experimentation can at least potentially improve the world (or ruin it, I guess). Grocery stores rearranging their layouts based on customer psychology to improve sales are applying psychological principles purely for financial gain. Facebook, on the other hand, offers a sample size unprecedented in history; it is the perfect venue for social experimentation. If financial incentives are what drive Facebook to arrive at new psychological principles that can be used elsewhere, I believe that is a net gain for the world.


http://www.radiolab.org/story/trust-engineers/

1 comment:

  1. When I listened to this podcast, I felt that it was incredibly similar to a question we tackled in class (maybe we already referenced this?) about where the responsibility for moral decisions lies with a company. Is knowing that a decision can change a person's outlook a bad thing? Is the company required to take action? Is there a moral obligation for these companies to "do the right thing," and are companies protected if the "wrong thing" isn't even in their mission statement or business model? Also, why do we feel entitled to vilify a company that is seemingly trying to keep its users’ best interests at heart?

    I think that this is a "damned if you do/don't" situation. I'm not entirely sure what direction each company should go either--even the morally correct decision, as in Facebook’s case, can be seen as a breach of privacy and spark backlash. I have a feeling that we’ll figure it out one day.
