
ASK A SCIENTIST: Why The Facebook Study Is Terrible

Because of the casual disregard for informed consent that these researchers showed, we have no idea if they or others like them are conducting more studies like this.

By now you’ve probably read or heard about Facebook’s controversial study regarding emotions, called “Experimental evidence of massive-scale emotional contagion through social networks,” which was conducted in 2012 and published this month in the prestigious journal PNAS (Proceedings of the National Academy of Sciences). The full article is available here for free.

Photo: Robert Scoble, Flickr.

To summarize: Facebook employee Adam Kramer and two other authors used the News Feed algorithm to manipulate users’ feeds over the course of one week. A feed would have some of its positive emotional content withheld, have some of its negative emotional content withheld, or be left alone as a control, and the study analyzed what happened afterward. It turned out that people who saw less emotional content of either kind posted less in general. People who saw less positive content went on to post less positive content and more negative content themselves, and vice versa. The experiment affected 689,003 people.
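For a concrete sense of the mechanics, here is a minimal sketch in Python of a feed-filtering experiment of this general shape. The condition names, the assignment rule, and the 50% omission rate are illustrative assumptions of mine, not Facebook’s actual code.

```python
import random

# Hypothetical conditions loosely modeled on the experiment described above.
CONDITIONS = ("positivity_reduced", "negativity_reduced", "control")

def assign_condition(user_id: int) -> str:
    """Deterministically split users across the three conditions."""
    return CONDITIONS[user_id % len(CONDITIONS)]

def filter_feed(posts, condition, omit_rate=0.5):
    """Withhold a fraction of emotional posts, depending on the user's condition.

    Each post is a dict like {"text": "...", "sentiment": "positive"}.
    """
    target = {
        "positivity_reduced": "positive",
        "negativity_reduced": "negative",
    }.get(condition)  # None for controls: nothing is withheld
    return [
        post for post in posts
        if not (post["sentiment"] == target and random.random() < omit_rate)
    ]

# A user in the positivity-reduced condition sees roughly half of their
# friends' positive posts withheld; the study then measured what such
# users posted afterward.
feed = [
    {"text": "Best day ever!", "sentiment": "positive"},
    {"text": "Ugh, Mondays.", "sentiment": "negative"},
    {"text": "Made soup.", "sentiment": "neutral"},
]
print(filter_feed(feed, assign_condition(user_id=42)))
```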

In various coverage of this story, many questions have been raised.

Perhaps the most important question is this: did the study authors get informed consent from study participants? In the paper, they state that by accepting the terms of service, we Facebook users agreed to participate in such research. According to Kashmir Hill at Forbes, however, Facebook added the word “research” to its terms of service four months AFTER Kramer’s research occurred. So, their statement regarding terms of service is disingenuous at best, and an outright lie at worst.

For federally funded research[1] involving human subjects (I prefer the term participants)[2], there are much stricter requirements for consent. These requirements involve things like informing participants of what the study will involve; giving them the option not to participate; and informing them of any risks involved. To ensure that studies meet these criteria, researchers submit their study plans to an IRB, or Institutional Review Board, which exists to protect the interests of study participants.

While this study was not federally funded, it did involve researchers affiliated with universities, and universities usually want to protect themselves by using IRBs. It’s unclear at this point whether Kramer’s study passed an IRB: the editor at PNAS, in an interview with the Atlantic, seemed to think that it did, while this statement from Cornell makes it sound like a review did not happen. Even if the study was reviewed and approved by an IRB, it does not pass my own personal test of ethics.

I happened to update my own human subjects training certification at work today, which gave me an excuse to look at The Belmont Report. It’s true that I’m in medical science, not social science, but all people doing human subjects research are advised to read this document. What stands out for me in regard to Kramer’s study is the apparent lack of something referred to as beneficence. To quote the Belmont Report:

Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being...(1) do not harm and (2) maximize possible benefits and minimize possible harms.

There seems to be no acknowledgement in the paper or in the follow-up statement by Kramer that any harm may have been done. I find this statement, “I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety,” to be particularly problematic. Kramer acknowledges that people are upset about the study, but he does not acknowledge that the study itself may have done harm.

Photo: Dimitris Kalogeropoylos, Flickr.

Kramer’s stated reasons for doing the study (“because we care about the emotional impact of Facebook and the people that use our product”) don’t acknowledge some of the other reasons for doing research and publishing it: it can further one’s career, it’s prestigious to get a publication in a major journal, and it gives a person social status.

In contrast, my reading of the paper suggests that some researchers had some tools at their disposal, and used them because they could, without really thinking it through. They wanted to see if they could make a bunch of people feel bad, even if just a little bit, just by tweaking here and there, and it turns out they could. In short, it seems casually cruel. And it could be happening again and again, right now, at Facebook and other corporations. Because of the casual disregard for informed consent that these researchers showed, we have no idea if they or others like them are conducting more studies like this.

So what harm may have been done? It seems to me that manipulating someone’s News Feed so that it’s almost entirely negative emotional material -- even for just one of the times they check Facebook that day or week -- could have negative consequences for their mental or emotional health. Imagine if suddenly you see that your support network, all your friends and family, are having a crap day, and maybe you are the kind of person who feels like you can’t approach them with your own needs for fear of over-burdening them. Maybe you feel that the world is just that much crappier. Maybe you depend on Facebook to cheer you up, and instead it has brought you down, and now you know that someone intentionally manipulated your feed to do that very thing. Just because they wanted to see if they could.

People who know these research methods better than I do have questioned the accuracy and robustness of the researchers’ methods. John Grohol points out that the word-counting text analysis the study relied on (LIWC) is meant for larger blocks of text, and that it would rate the statement “I’m not having a great day” as neutral, because the “not” and the “great” cancel each other out. So it’s possible that this study had no effects whatsoever.
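To see that failure mode concretely, here is a toy version of that kind of word-count scoring, sketched in Python. The word lists are tiny made-up stand-ins for the real tool’s dictionaries, but the cancellation behavior is the one Grohol describes.

```python
# Toy word-count sentiment scorer in the spirit of the critique above.
# These word lists are illustrative stand-ins, not LIWC's real dictionaries.
POSITIVE_WORDS = {"great", "happy", "love", "wonderful"}
NEGATIVE_WORDS = {"not", "sad", "awful", "terrible"}

def score(text: str) -> int:
    """Positive word count minus negative word count; no grammar, no context."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

# "not" (-1) and "great" (+1) cancel out, so a clearly negative
# sentence scores 0, i.e. neutral.
print(score("I'm not having a great day"))  # -> 0
```

A counter like this has no notion of negation or sarcasm, which is exactly why critics doubt that the measured differences reflect real emotional shifts.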

If John Grohol is correct -- if there were no real effects and the reported effects were just “a statistical blip that has no real-world meaning” -- then this particular study was probably of little consequence to people’s moods. But if Kramer and his co-authors are correct, then people’s emotions really were affected -- probably in very small ways, but real ways.

Research studies are supposed to have built-in protections for minimizing harm, and mechanisms in place for what to do if harm occurs related to the study. This one apparently did not. Furthermore, the inclusion of minors and vulnerable groups (including people with disabilities) is supposed to be addressed in research proposals. There is nothing in the paper or in follow-up statements addressing the inclusion of these groups. It is very likely that such groups were included with no thought to how they may have been harmed.

Academia has research protections and policies for reasons, mostly because of a history of unethical research. It is disappointing and angering to see studies like this that flout such standards. It undermines trust in researchers like me who work hard to do our jobs well and be ethical people. It has certainly undermined trust in Facebook, although I feel less sad about that, because perhaps it will encourage people to seek out other social networks that are built on better principles. (I personally enjoy Dreamwidth.org, which, while a completely different format from Facebook, is ad-free and has a diversity statement.) Perhaps (although it seems like a lot to hope for) Facebook or the study authors will learn some lessons from this mistake. Meanwhile, we as users can demand better from the services we use. We generate the content for Facebook and other social networks, content that they profit from. We do not have to be their unwitting experimental subjects as well.

1. Although perhaps difficult to regulate, ideally all human subjects research (including corporate-funded research) would meet the same strict standards as federally funded research.

2. “Human subjects research” is the standard language right now, but “participants” feels more respectful to me. “Subject” is the opposite of “object,” but it can also evoke “lord and subject.”
