Hard Questions for #iLaw2011's Freedom of Information/Arab Spring Sessions

We’ve revived the iLaw program after a five-year hiatus. This year, it’s an experiment in teaching at Harvard Law School: part class (for about 125 students) and part conference (with friends from around the world here for the week). And JZ has taken the baton from Terry Fisher as our iLaw Chair.  An exciting day.

I’ve been preparing for two sessions on Day 1: “Freedom of Expression and Online Liberty” and then a case study on the Arab Spring (which will feature, among others, our colleague Nagla Rizk of the American University in Cairo). I’ve been thinking about some of the hard questions that I’m hoping we’ll take up during those sessions.

– What effect does a total shutdown of the network have on protests? I’ve been enjoying reading and thinking about this article on SSRN.  The author, Navid Hassanpour, argues (from the abstract): “I argue that … sudden interruption of mass communication accelerates revolutionary mobilization and proliferates decentralized contention.”

– We’ve assigned two chapters from Yochai Benkler’s landmark book, The Wealth of Networks (the introduction and the first 22 pages of chapter 7, which you can read freely online).  I am trying to figure out how well Yochai’s theoretical framework from a few years ago is holding up.  So far, quite well, I think.  The examples in the second chapter that we assigned – Sinclair Broadcasting and Diebold – feel distant from the Arab Spring and Wikileaks examples that are front-of-mind today.  But the essential teachings seem to be holding up very well.  How might we add to the wiki, as it were, of WoN, knowing what we now know?  (Another way to look at this question, riffing off of something Yochai hits in his own lecture: what was the role of Al-Jazeera and other big media outlets, in combination with the amateur media and organizers?)

– We have gotten very good at studying some aspects of the Internet, as a network and as a social/political/cultural space.  We can show what the network of bloggers or Twitterers looks like in a given linguistic culture.  We can show which web sites are censored where around the world (see the ONI).  We can survey and interview people about their online (and offline) behaviors.  But lots of things move very fast online and in digital culture, and it’s hard to keep up, in terms of developing good methods and deploying them.  What are the things that we’d like to know about but haven’t yet learned how to study?  Plainly, activity within closed networks like Facebook is a problem: lots is happening there, and surveys of users can help, but we can’t do much in terms of getting at Facebook usage patterns through technology (and there are privacy problems associated with doing so, even if we could).  Mobile is another: our testing of Internet filtering, for instance, is mostly limited to the standard web-browsing/HTTP GET request type of activity.  What else do we want/need to know empirically, to understand politics, activism, and democracy in a networked world?

– How much did the demographic element — a large youth population in several Middle East/North African cultures — matter, if at all, with respect to the Arab Spring?  How important were the skills, primarily among elite youth, in using social media as part of the organizing?

– How did the online organizing of the Arab Spring mesh with the offline activism in the streets?

– How much did the regional element matter, i.e., the domino quality to the uprisings?  Does this have anything to do with use of the digital networks, shared language, and social/cultural solidarity that crossed geo-political boundaries?

– What, if anything, does the Wikileaks story have to do with the Arab Spring story?  Larry Lessig pulls them quickly together; Nagla Rizk and Lina Attalah balk at this characterization.  We’ll dig in this afternoon.

– [Student-suggested topic #1, via Twitter:] What’s the effect of the US State Department’s Internet Freedom strategy?

– [Student-suggested topic #2, via Twitter:] Does the distribution/democratization of channels of discourse undercut rather than support dissent, organizing, etc.?

There’s much more to unpack, but these are some of the things in my mind…
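One of the questions above concerns method: our filtering tests mostly boil down to issuing standard HTTP GET requests from inside a network and comparing what comes back against a known-good baseline. As a purely illustrative sketch — the function name, status-code heuristics, and block-page keyword here are my own simplifications, not ONI's actual test battery — the core classification step looks something like this:

```python
# Toy sketch of classifying a single HTTP GET filtering probe.
# The heuristics below (status codes, "blocked" keyword) are illustrative
# assumptions, not the actual ONI methodology.

def classify_response(status, body, baseline_body):
    """Classify one fetch made inside the network under study.

    status: observed HTTP status code, or None if the connection failed
    body: response body observed inside the network under test
    baseline_body: the same URL fetched from an unfiltered vantage point
    """
    if status is None:
        return "unreachable"   # timeouts/resets can indicate DNS or IP blocking
    if status in (403, 451):
        return "blocked"       # explicit denial (451: unavailable for legal reasons)
    if status == 200 and "blocked" in body.lower():
        return "blockpage"     # a 200 OK that serves a block page instead of content
    if status == 200 and body == baseline_body:
        return "accessible"
    return "suspicious"        # altered or partial content; needs human review

# Synthetic examples:
print(classify_response(200, "<html>real page</html>", "<html>real page</html>"))  # accessible
print(classify_response(200, "This site has been BLOCKED", "<html>real page</html>"))  # blockpage
print(classify_response(None, "", "<html>real page</html>"))  # unreachable
```

Even this toy version makes the limitation plain: it only sees what an ordinary browser sees over HTTP, which is exactly why closed platforms and mobile networks are so hard to measure this way.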

Research Confidential and Surveying Bloggers

In our research methods seminar this evening at the Berkman Center, we got into a spirited conversation about the challenges of surveying bloggers.  In this seminar, we’ve been working primarily from a text called Research Confidential, edited by Eszter Hargittai (who happens to be my co-teacher in this experimental class, taught concurrently, and by video-conference, between Northwestern and Harvard). The book is a great jumping-off point for conversations about problems in research methods.

The two chapters we’ve read for this week were both excellent: Gina Walejko’s “Online Survey: Instant Publication, Instant Mistake, All of the Above” and Dmitri Williams and Li Xiong’s “Herding Cats Online: Real Studies of Virtual Communities.”  Both chapters are compelling (as are the others that we’ve read for this course).  They tell useful stories about specific research projects that the authors conducted related to populations active online.  In support of our discussion about surveys in class, these two chapters tee up many of the issues that we ought to have raised in this conversation.  Gina also came to class to discuss her chapter with us, which was amazing.  (Come to think of it, I would also have liked to have met the two authors of the second chapter; they wrote some truly funny lines into the otherwise very serious text.)

In a previous class, we started with Eszter’s introductory chapter, “Doing Empirical Social Science Research,” as well as Christian Sandvig’s “How Technical is Technology Research? Acquiring and Deploying Technical Knowledge in Social Research Projects.”  These two chapters were a terrific way to start the course; I’d recommend the pairing of the two as a possible starting point for getting into the book, even though they’re not presented in that order (with no disrespect meant for those who chose the chapter order in the book itself!).

While many of Research Confidential’s chapters bear on the special problems prompted by use of the Internet and the special opportunities that Internet-related methods present, the book strikes me as a very useful read for anyone conducting research in today’s world.  I strongly recommend it.  The mode of the book renders the text very accessible and readable: unlike most methods textbooks, this book is a series of narratives by young researchers about their experiences in approaching research problems, some of them related to the Internet and others not so technical in nature.  As a researcher, I learned a great deal; as a reader, I thoroughly enjoyed the book’s stories.

Solicitor General's Brief in Cablevision Case

The United States Solicitor General’s office has filed its brief (posted online here) in the long-running RS-DVR matter, popularly referred to as the “Cablevision” case. The brief is terrific. The United States takes the position that the Supreme Court should not review the case, which had been decided unanimously by the Second Circuit in favor of the cable companies. This case has significant copyright implications, as well as implications for the balance of power between cable providers and those who hold copyright interests in television and movie programming.

The Solicitor General takes the position that the case did not meet the traditional standard for the Supreme Court to grant cert and that the Second Circuit “reasonably and narrowly resolved the issues” before it. The reasoning in the brief is persuasive.

For more information: Several news outlets have the story. (The Reuters piece says that the SG “denied” the plaintiffs’ request for a hearing, which — at least in technical terms — overstates the matter a bit by implying decision-making authority in the SG. Though the Court asked for the SG’s opinion, the Court reserves the right to decide whether or not to hear the case. Practically speaking, though, that seems somewhat unlikely now, after the filing of this strong brief.) For previous coverage which touches on the procedural aspects of the case, see, e.g., an article by the LA Times’s David G. Savage from January 2009. Also, see the press release and summary page on the case published by Public Knowledge, which has worked on this matter; Gigi Sohn, the president, says she is pleased with the SG’s brief.

By way of disclosure: the United States Solicitor General and counsel of record in this matter, Elena Kagan, is my former boss; she was dean of Harvard Law School for the six years prior to her appointment to the Obama Administration.

Online Intermediaries

Issues swirling around Craigslist have given rise to a new round of consideration of our liability scheme for online intermediaries. David Ardia — a very thoughtful observer of this scene, a Berkman fellow, and director of our Citizen Media Law Project — comments on a podcast at Legal Talk Network. The themes are similar to those that Adam Thierer and I took up in a debate at Ars Technica recently.

This discussion of intermediary liability is only going to get more important as time passes. Follow along as the issue develops at CMLP’s new Section 230 site.

Pushing Forward on the Legal Casebook Idea

There’s a lot of energy coming out of the Collins/Skover/Rubin/Testye workshop of a few weekends ago on the next-generation legal casebook.  It’s the sign of a great gathering: after you’ve landed at your home airport, you are still thinking about the issues that you were kicking around at the conference.  I think it’s also a sign of the strength of the idea: something of this sort *will* happen if we keep that energy up. 

One follow-up is a call that Gene Koo and CALI have organized to see if cyberlaw professors would want to be first up.  It’s a very practical next step, and one with promise.  As one such cyberlaw prof, I’m definitely in.  This specific project is an obvious follow-up to much of what JZ has been working on for years, through H2O and otherwise.

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over a copy of the new book — the first copies I’ve had my hands on.

This question for Turkey runs deep, it seems, from what I’m hearing. As it has been described to me, the state is on the knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of those states that filter the net extensively lie to its east and south; its neighbors in Europe filter the Internet, though much more selectively (Nazi paraphernalia in Germany and France, e.g., and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly-accessible ISPs). It’s not that there are no Internet restrictions in the states in Europe and North America, nor that these places necessarily have it completely right (we don’t). It’s the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of information blockages that differ. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content along with its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 orders have been issued by this authority, and an additional 40 orders by a court to filter content. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks have been issued in response to complaints about videos posted to YouTube that were held to be derogatory toward the founder, Atatürk. The blocks have lasted about 72 hours.

After learning from the court of the offending videos, YouTube has apparently removed them, and the service has been subsequently restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara in the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains. Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of the offending URLs. Many multinational companies will be unable or unwilling to comply, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there’s a system of shutting down the service and restoring it only after the offending content has been filtered out.
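The difference between the two blocking granularities described above is easy to make concrete. As a minimal illustration — the blocklists and URLs below are hypothetical, not the actual 5651 orders — compare what a domain-level rule and a URL-level rule each sweep in:

```python
# Minimal illustration of domain-level vs. URL-level blocking.
# The blocklists here are hypothetical examples, not actual filtering orders.
from urllib.parse import urlparse

def blocked_by_domain(url, blocked_domains):
    """Domain-level filter: one offending page takes the whole site down."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocked_domains)

def blocked_by_url(url, blocked_urls):
    """URL-level filter: only the specifically listed pages are unreachable."""
    return url in blocked_urls

domain_order = {"youtube.com"}                          # hypothetical: block the domain
url_order = {"https://youtube.com/watch?v=OFFENDING"}   # hypothetical: block one video

# An unrelated video on the same domain:
print(blocked_by_domain("https://www.youtube.com/watch?v=UNRELATED", domain_order))  # True
print(blocked_by_url("https://youtube.com/watch?v=UNRELATED", url_order))            # False
```

The domain-level rule is cheaper to enforce at the ISP, which is presumably part of its appeal, but it blocks every page on the domain rather than the handful of videos actually at issue.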

* * *

The Internet – especially in its current phase of development – is making possible new kinds of innovation and creativity in content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have greater voice in their societies. These technologies are also giving rise to the creation of new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in how it is facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking access to political, social, and cultural critique from its citizens. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit along with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether online Web sites or new technologies like peer-to-peer services or live videoblogging?

In Turkey, the Internet has been largely free to date from government controls. Free expression and innovation have found homes online, in ways that benefit culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When the state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, costlier still, the chilling effect on citizens who rely on the digital world for their livelihood and key aspects of their culture – in fact, the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.

Francois Leveque on Standards, Patents, and Antitrust

As part of our Berkman@10 celebration this year, we at the Berkman Center tonight welcome Francois Leveque, professor at the Ecole des Mines, Paris, and visiting professor at the faculty of law at UC Berkeley. He’s presenting the findings of two new papers, each co-authored with Yann Meniere: “Technology standards, patents and antitrust” and “Licensing commitments in standard setting organizations.”

Prof. Leveque offers us a series of insights about the interaction of economics and law in the context of patents in the standards setting process. One key finding of his papers: it would be best for consumers and for innovation in general for the licensing of patents by players in standards setting processes to occur ex ante, rather than ex post. More surprisingly, he and M. Meniere argue that it may also be better, under some circumstances, for the patent holder to set the royalty level ex ante. He notes that, in this setting, the interests of consumers and patent owners are aligned. As he goes on to explain, in other settings, these interests may not be so well aligned. Read the papers for more insights, including with respect to the VITA royalty cap policies, ways to mitigate the costs of the risk of hold-up, and his proposal of announcing a royalty cap ex ante as a more flexible means of accomplishing such mitigation while still enabling patent holders to revise the royalties.

Prof. Leveque very kindly participated in both the Weissbad (Switzerland) and Cambridge (MA, USA) workshops that guided our work on Interoperability and Innovation over the past year. His interventions were crucial to informing our understanding of these complicated matters and he was unusually generous with his input, for which Urs Gasser and I and our teams are extremely grateful.