Review of “Privacy, Anonymity, and Perceived Risk in Open Collaboration”

Summary and comments on “Privacy, Anonymity, and Perceived Risk in Open Collaboration: A Study of Tor Users and Wikipedians”, by Andrea Forte, Rachel Greenstadt, and Nazanin Andalibi.

For anyone who missed this when it went around in October, it is worth reading in its entirety, although there is a lot of information packed into it. The premise is that there is an element of risk in open collaboration: for example, Bassel Khartabil, who disappeared after collaborating on documentation of the world heritage site at Palmyra, since destroyed by ISIS, or the anonymous Wikipedian of the Year for 2015, whom Jimmy Wales named “in pectore” for fear of reprisals. The researchers wanted to understand how these risks are perceived inside open collaboration movements by the users themselves. They did a series of structured interviews with users from Tor and Wikipedia, then used Atlas.ti software to identify the themes.

Those who usually skim through the literature review section may find this one interesting. For such a seemingly new topic, a surprising amount of research has already been done. There is research showing that a lack of diversity among editors leads to inequalities in content; for an encyclopedia, this means that diversity must be managed. Anyone who has anything to do with privacy policy will also want to look over the sources on non-participation, on how organizations handle personal data, and on the threat models that prompt anonymity.

Threat models

The authors grouped the responses into four interconnected themes: 1) threat types, 2) threat sources, 3) why some people do not perceive threats, and 4) users’ strategies for dealing with threats. They then make some suggestions for moving privacy strategies from the individual to the community.

It is pretty clear from the descriptions that the authors interviewed some elite and probably non-representative users: administrators and arbitrators. But as a garden-variety and fairly inexperienced Wikimedia contributor, I found the analysis, strangely enough, right on target. That is the strength of the paper: it articulates an accurate threat model. It becomes less useful when it ventures proposals, which do not seem immediately practical. But I haven’t seen any better ideas; in fact, I haven’t seen *any* other ideas, so for now we might as well consider these.

Types and sources of threats

The types of threats that users worried about were: 1) surveillance by unknown persons that might link their edits to their real-life identity, 2) loss of employment and other opportunities, based on observing other users’ experiences, 3) fears for physical safety: rape threats, death threats, danger to family, 4) harassment and intimidation, and 5) loss of professional reputation if someone goes on a vendetta against them. The sources of threats were seen to be governments and businesses, whose interests might not align with the encyclopedia’s, and private individuals, both insiders and outsiders, as well as organized groups. What participants feared, because they had seen it happen to others, was threats, doxing, fake information, being beaten up, or having their heads photoshopped onto porn. Threats could come from other project members, including those in positions of responsibility.

Those who do not perceive threats

Some participants rarely perceived threats, and believed this was because they belonged to a privileged group. The participants most concerned about threats were female, from an ethnic minority, transgender, or editing in a controversial topic area.

New participants may not understand the dangers, but later come to realize it only takes one bored kid to turn a volunteer hobby into a career liability. Some may also come to realize their edits have revealed something about their identity.

Mitigating risk

Strategies used to mitigate risk include: divulging a real name while avoiding topics that might attract attention, using Tor to participate anonymously (not usually possible on Wikipedia, although users can request special permission), maintaining multiple accounts to prevent tracking of interests, and logging out to edit topics that would reveal location.

So, Tor.

I don’t really understand the value of this, from the standpoint of Wikipedia. If you are inside a country where you are worried about surveillance, chances are you have one internet provider, or maybe one undersea cable, for the whole country, and all internet traffic goes through some sort of government censorship agency. So as soon as you connect to Tor or a VPN, they know where you connected from. Making an edit over Tor or a VPN in this circumstance seems like the fastest way to draw attention to yourself, at least in countries with sophisticated internet surveillance.
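The reason is that Tor relay addresses are public by design: anyone, including a state-level ISP, can download the list and flag every connection to it. Here is a minimal sketch in Python of that lookup, using the Tor Project’s Onionoo API (a real endpoint); take it as an illustration of the principle, not a monitoring tool.

    # Why connecting to Tor is itself visible: relay addresses are published
    # in the public consensus, so an on-path observer can match outbound
    # connections against the list even though the traffic is encrypted.
    import json
    import urllib.request

    ONIONOO = "https://onionoo.torproject.org/summary?type=relay&running=true"

    def tor_relay_addresses():
        """Fetch the public list of running Tor relay IP addresses."""
        with urllib.request.urlopen(ONIONOO) as resp:
            data = json.load(resp)
        addrs = set()
        for relay in data.get("relays", []):
            for addr in relay.get("a", []):
                addrs.add(addr.strip("[]"))  # drop brackets on IPv6 entries
        return addrs

    if __name__ == "__main__":
        relays = tor_relay_addresses()
        print(len(relays), "publicly listed relay addresses")
        # 198.51.100.7 is a documentation address, used here as a stand-in
        print("198.51.100.7 is a relay:", "198.51.100.7" in relays)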

By the time the edit hits Wikipedia, it is obscured, and Wikipedia can’t see your origin, but within the country they can still track you by IP. In fact, they can stop your edits by IP, or hack your Facebook by IP. Your best bet in that case is to have a few extra SIM cards, in particular a SIM card in someone else’s name, although I understand these are getting harder and harder to find. In some countries you may not be able to get SIM cards at all, and have to access Wikipedia through a cellphone or an internet cafe. In that case you might check “what is my IP?” and find it is assigned to a neighboring country, or even a country several thousand miles away. So who knows what is going on with that, or how you can protect yourself.
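You can at least see what the outside world sees. Any “what is my IP” service will report your apparent egress address and the country it is registered to; here is a sketch using ipinfo.io, a real public endpoint, though any similar service works the same way.

    # Ask a public service for the egress IP and its registered location.
    # The address your traffic exits from, not your device, is what sites see.
    import json
    import urllib.request

    def apparent_ip():
        with urllib.request.urlopen("https://ipinfo.io/json") as resp:
            return json.load(resp)

    if __name__ == "__main__":
        info = apparent_ip()
        print("egress IP: ", info.get("ip"))
        print("registered:", info.get("country"), info.get("city"))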

So, editing Wikipedia with Tor.


Requesting an exemption

If you have never seen the policy, it is here, at WP:EXEMPT. There are 137 users who have this flag, and Wikipedia publishes a list of their user names. Feel safer yet? To request the exemption, you go to this page and fill out this form; at least, that is where you end up when you trigger whatever it is that you trigger. The first bit of information they want is your email address. Yes, you are expected to publish this into the unknown web. If you are trying to edit around being caught in a school IP range with frequent vandals, no big deal; but if your reason is privacy, not so much. They also want to know if you have an account, so you have just told your potential surveillers whether or not to look for your user name. And they want to know what articles you intend to edit. Wonder what happens if you write “Tiananmen Square”. So now you have linked your IP to an email address and probably to your user name as well. You might as well just edit logged in, because if you are sitting in the middle of a country with a repressive government, and anyone is watching, you have just given them all of that, and probably linked it to your identity IRL as well.

Which reminds me: why is my entire edit history on enwiki available for anyone in the world to view, complete with my most frequent editing times, so very helpful for anyone trying to determine my time zone and personal schedule? Yes, I have tried to turn it off. So, what’s up with that?
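In case it is not obvious how little work this takes: the MediaWiki API serves every account’s contribution timestamps to anyone who asks, and binning them by hour is a few lines of Python. The endpoint and parameters below are real; “ExampleUser” is a placeholder, and this sketch only looks at the most recent 500 edits, which is usually plenty.

    # Bin a user's public edit timestamps by UTC hour; the peak hours are a
    # fair guess at their time zone and daily schedule.
    import json
    import urllib.parse
    import urllib.request
    from collections import Counter

    API = "https://en.wikipedia.org/w/api.php"

    def edit_hours(username, limit=500):
        params = urllib.parse.urlencode({
            "action": "query", "list": "usercontribs", "ucuser": username,
            "uclimit": limit, "ucprop": "timestamp", "format": "json",
        })
        req = urllib.request.Request(API + "?" + params,
                                     headers={"User-Agent": "tz-demo/0.1"})
        with urllib.request.urlopen(req) as resp:
            contribs = json.load(resp)["query"]["usercontribs"]
        # Timestamps look like "2016-01-01T12:34:56Z"; chars 11-13 are the hour.
        return Counter(int(c["timestamp"][11:13]) for c in contribs)

    if __name__ == "__main__":
        for hour, count in edit_hours("ExampleUser").most_common(5):
            print("%02d:00 UTC  %d edits" % (hour, count))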

Okay, now the other two privacy strategies: using multiple accounts and editing logged out. Both of these will put you in conflict with the sockpuppet policy, WP:SOCK. Yes, in theory anyone can edit, and in theory you don’t have to make an account. But in practice, once you make an account, you can never go back to IP editing, and you cannot make more than one account. They will hunt you down with a vengeance. And they don’t tell you this before you make an account.

Check out the latest suggestion for dealing with harassment: better blocking tools:

   Better blocking tools and detection - the Wikimedia community works
   hard on the front lines keeping our users safe from harassment, through
   monitoring noticeboards and recent changes for problems, investigating
   “sock” accounts used to abuse contributors, and placing blocks on
   problematic users. Improvements to blocking tools, and the ability to
   detect harassing comments sooner can empower contributors to be more
   effective at these tasks.

Any evidence that harassment is coming from IP users? Not that I have seen. But in spite of a policy against it, there are admins and checkusers who will go on “fishing expeditions” looking for users who are editing logged out, linking user names to IPs without so much as an SPI request. Some of them are even from countries where Wikipedia is censored. Is there any doubt that the governments of those countries already have your IP address?

And there is no supervision of admins, no review of admin actions, nothing to prevent them from doing this, nor even a way to find out that they have done it. Any admin can block someone from Wikipedia indefinitely, without discussion, and no one will notice unless that person has some powerful friends. Make no mistake, there are good and decent admins, but there is also no one watching the store.

So maybe the Tor verification system discussed on the Google hangout is the way to go, even if Wikipedia stands to lose some autonomy by it. Even if Tor does not end up being useful for the privacy issues, it may prove to be a way around Wikipedia’s longstanding admin problem.
