Microsoft's new policy on blogging, censorship, and surveillance

I’m at the Berkman Center feverishly trying to finish testimony for the Congressional Human Rights Caucus hearing tomorrow in DC.  I’m just starting to consider Microsoft’s announcement of its new policy on blogging, censorship and surveillance.

At a minimum, I am pleased to see the transparency of their decision-making, the commitment to a process internally before replying to a state’s request for information, the commitment to making content blocked in one state accessible in other states, the commitment to transparency to users about what’s being blocked, and the clear message that this is not just about China.  Perhaps most of all, their call for a broad-based dialogue on how to manage this problem is right on, in my view.

Here are the operative segments, cut-and-pasted from the announcement:

“* Explicit standards for protecting content access: Microsoft will remove access to blog content only when it receives a legally binding notice from the government indicating that the material violates local laws, or if the content violates MSN’s terms of use.

* Maintaining global access: Microsoft will remove access to content only in the country issuing the order. When blog content is blocked due to restrictions based on local laws, the rest of the world will continue to have access. This is a new capability Microsoft is implementing in the MSN Spaces infrastructure.

* Transparent user notification: When local laws require the company to block access to certain content, Microsoft will ensure that users know why that content was blocked, by notifying them that access has been limited due to a government restriction.”

Will a policy of this sort make any difference?  Well, it’s surely just a first step, but I think it is a positive step.  So much will become clear if we ever find out how Microsoft acts when pushed on matters by a repressive regime.  Will these policies render Microsoft’s stance more protective of civil liberties — in appropriate contexts — than that of another company, perhaps one based in that regime?  Entirely possible, but so much turns on the application of the policy on the ground when trouble starts.

* * *

Not specific to this announcement, but prompted in part by puzzling over it: one theme that I think ought to emerge is the distinction between various contexts.  Microsoft’s announcement is somewhat helpful in this parsing process, though of course it does not provide all the answers.

Start with the presumption (though I know one might take issue with this starting point) that a United States company is competing in the marketplace of another state that has an extensive filtering and surveillance regime in place.  (For examples, see the OpenNet Initiative’s country studies.)  Consider whether we think the ethics are, or may be, different in the following scenarios, when a US company:

1) blocks access to content published by a citizen of another state at the explicit request of that other state,

  a) which blocking prevents the content from being viewed by another citizen of that state,

  b) which blocking prevents the content from being viewed by those requesting to see it from states other than the home state of the author (such as the United States);

2) blocks access to content published by a citizen of another state at the implicit request (i.e., “you should generally block things of this nature”) of that other state,

  a) which blocking prevents the content from being viewed by another citizen of that state,

  b) which blocking prevents the content from being viewed by those requesting to see it from states other than the home state of the author (such as the United States);

3) turns over information about the user of an online service, pursuant to a specific legal notice from another state, when

  a) that user is a citizen of another state,

  b) that user is a citizen of the United States but acting in the other state;

4) turns over information about the user of an online service, pursuant to an informal request from another state, when

  a) that user is a citizen of another state,

  b) that user is a citizen of the United States but acting in the other state;

[Does it matter whether the user’s alleged infraction was one that was a crime in the United States, or whether the information is sought because of political speech by that user that would plainly be protected under US law?  Whether the state needs the information to save a life, or to carry out a preventive law enforcement act?  How can the US company know?  What about when the request is to support research on a general policy issue, such as the US DOJ’s request for search engine data, apparently without user-specific data, in the COPA matter?]

5) develops general-use technology that is used in the filtering and surveillance practices of another state; or,

6) develops specific-use filtering and surveillance technologies that are used in such regimes in other states.

There are no doubt many other permutations, but these seem to me to be starting points for parsing out the thorny ethical problems buried here.
