
Thursday, September 24
 

12:00pm

Capitol Hill Lunch and Learn - CANCELLED
Due to the Pope's address to Congress the morning of Sept. 24, we are cancelling this session as there will be significantly increased security and anticipated crowds. 

Thursday September 24, 2015 12:00pm - 1:30pm
TBA

2:15pm

Graduate Student Consortium - venue change

TPRC will host a Graduate Student Consortium the afternoon of Thursday, September 24th, 2015, from 2:15 pm to 5:30 pm, at AT&T, 1120 20th Street, NW.  

The Consortium aims to provide graduate students at all levels with opportunities for mentoring by academics, industry, and government leaders, as well as the opportunity to network with other graduate students. Consortium participants will gain insights on research topics of interest from the various sectors of the TPRC community.

During the session, students will engage in discussion, receive feedback on their proposed research topics, and interact with fellow graduate students as well as with mentors. Mentors will be leaders from academia, industry, government, and the non-profit sectors, chosen to ensure balance among these multiple perspectives. 


Thursday September 24, 2015 2:15pm - 5:15pm
AT&T, 1120 20th Street, NW, Washington, DC
 
Friday, September 25
 

1:00pm

Registration
Friday September 25, 2015 1:00pm - 2:00pm
George Mason University School of Law Atrium

2:00pm

Lessons from BTOP for Broadband Policy and Research
The panel brings together researchers, program administrators, and policymakers involved in promoting broadband to draw lessons learned from the BTOP program for future broadband policy, broadband policy evaluation, and broadband research in general.  Although the BTOP program was launched as part of a much larger stimulus effort, crafted on a very tight time schedule to address economic recovery needs in the midst of a major recession, it provides rich qualitative and quantitative data and lessons to inform future broadband policies. 

To initiate a broad discussion with the audience, the panelists will address the following issues:

  • The BTOP enabling legislation was specific in some places but broad and general in others.

  • With about $2.9 billion committed, the largest allocation involved funding broadband networks across unserved and underserved areas.

  • One of the goals of BTOP, and of broadband policy in general, is to harness the positive social and economic impacts of broadband investments.

  • The NTIA data is quite disparate in its format, including qualitative and quantitative data focused on the grantees' activities and employment figures, complemented by information from the broadband mapping effort.

  • In extracting lessons from the BTOP program to inform better broadband policies in the future, it is important to have a vision of the broadband future we expect and want to see to provide context.

Moderator:

Johannes M. Bauer, Department of Media and Information, Michigan State University

Panelists:


  • Jon Gant, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign

  • William Lehr, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

  • Stephen Rhody, ASR Analytics

  • Sandeep Taxali, National Telecommunications and Information Administration, U.S. Department of Commerce


Moderators

Johannes M. Bauer

Professor and Chairperson, Michigan State University
I am a researcher, writer and teacher interested in the digital economy, its governance as a complex adaptive system, and the effects of the wide diffusion of mediated communications on society. Much of my work is international and comparative in scope. Therefore, I have great interest in policies adopted elsewhere and the experience with different models of governance. However, I am most passionate about discussing the human condition more...

Presenters

Jon Gant

Professor and Director, Center for Digital Inclusion, University of Illinois at Urbana-Champaign
Director of Center for Digital Inclusion @ University of Illinois, Graduate School of Library and Information Science | Director for Urbana Champaign Big Broadband Project (UC2B) - Middle mile and FTTH gigabit network build funded through BTOP CCI grant | Research director and consultant for BTOP Evaluation Study with ASR Analytics

Stephen Rhody

ASR Analytics

Sandeep Taxali

Senior Policy Analyst, NTIA


Friday September 25, 2015 2:00pm - 3:00pm
GMUSL - Room 121

2:00pm

Domestic Content Policies in the Broadband Age
The rapid rise in audio-visual distribution platforms is challenging regulators’ abilities to fashion and maintain domestic content policies for television broadcasting.  Broadcasters in a number of nations and regions operate under content regulatory schemes designed to serve cultural and economic purposes, put into place during the age of terrestrial broadcasting when national policymakers were able to use licensing to tightly control the use of imported programming. 

The panelists are drawn from an international research team which undertook a collaborative study into the challenges that the digital age poses to traditional domestic content policies through an analysis of the rationales, policy approaches, operations, and effectiveness of domestic content policies in four countries: Australia, Canada, Ireland, and South Korea.  Their findings were recently published as a report by the News and Media Research Centre of the University of Canberra, Australia.  The main goal of this panel is to gain an understanding of the individual cases and to compare responses to the new challenges each country faces in the digital era.

Moderator:

Steve Wildman, College of Communication Arts & Sciences, Michigan State University

Panelists:


  • Charles H. Davis, RTA School of Media, Ryerson University, Toronto

  • Sora Park, News & Media Research Centre, University of Canberra

  • Robert G. Picard, Department of Politics and International Relations, University of Oxford


 

Moderators

Steve Wildman

Michigan State University and University of Colorado
Steven S Wildman is a Senior Fellow at the Silicon Flatirons Center and a Visiting Scholar with the Interdisciplinary Telecommunications Program, both at the University of Colorado, Boulder. Prior academic positions include: 15 years as the J.H. Quello Chair of Telecommunication Studies at Michigan State University, where he also directed the Quello Center for Telecommunications Management and Law; Associate Professor of Communication Studies...

Presenters

Charles Davis

Professor, Ryerson University
I am a professor in Ryerson University's RTA School of Media (Faculty of Communication & Design), and I hold the ES Rogers Sr Research Chair in Media Management and Entrepreneurship. | | My research interests have to do with the IT, media, and content industries, in three main lines of research: Innovation management and policy in creative/IT industries; audiences, reception, and mediated experiential consumption; and labour, freelancers...

Robert Picard

Professor, University of Oxford
Robert G. Picard is a specialist on media economics and policy and the business challenges facing media in the digital age. He is affiliated with the Reuters Institute at the University of Oxford, the Kennedy School of Government at Harvard University, and the Information Society Project at Yale Law School. He is the author and editor of 30 books and has written hundreds of articles on media issues for scholarly journals and industry...


Friday September 25, 2015 2:00pm - 3:30pm
GMUSL - Room 225

2:00pm

Localizing IP Interconnection: Experiences from Africa and Latin America

There is a growing literature suggesting that the presence of Internet Exchange Points (IXPs) promotes investments, reduces transit costs and increases the quality of Internet access services in developing countries (Sowell, 2013; Galperin et al., 2014). Other studies suggest that IXPs also promote local content hosting, as content producers and application developers seek to take advantage of reduced latency and shorter routes (Kende and Rose, 2015). While the theoretical case is well established, empirical evidence about the technical performance of IXPs in such contexts and its impact on local access and hosting markets continues to be scarce. There is also uncertainty about whether technical standards and measurement tools developed in high-connectivity countries are appropriate. Further, these technical debates have been recently complicated by policy initiatives promoting mandatory data localization in several countries.

This panel seeks to contribute to these debates by bringing together leading scholars whose work focuses on IP interconnection and the performance of IXPs in Africa and Latin America. The panel is based on case studies that offer a variety of perspectives. Some papers are more technically oriented, seeking to establish how IXPs are changing the topology of IP connectivity within countries and across regional links, and discussing alternative measurements for best capturing these changes. These papers also address the question of how to develop appropriate technical standards that facilitate new IXP deployment in low-connectivity contexts. Other papers are more policy oriented, addressing questions related to the impact of IXP initiatives on industry performance and the key factors that facilitate or hinder successful implementations.

The topics of the panel are of relevance to the TPRC audience for several reasons. First, they address questions about changes in Internet topology and interconnection economics that have been of interest to the TPRC community for several years. Further, the panel introduces a development perspective to these questions, presenting evidence from a range of case studies in Africa and Latin America where Internet infrastructure and services are significantly lagging behind. Lastly, the panel addresses methodological questions about broadband quality measurements, Internet topology and IX impact assessment that are relevant to the TPRC community at large.



Moderators

David Reed

University of Colorado at Boulder, University of Colorado
Dr. David Reed is the Faculty Director for the Interdisciplinary Telecommunications Program at the University of Colorado at Boulder. He also leads the new Center for Broadband Engineering and Economics that specializes in the interdisciplinary research of the emerging broadband ecosystem, and is Senior Fellow, Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado. | | Dr. Reed was the Chief...

Presenters

Jane Coffin

Director, Development Strategy, Internet Society
IXPs, connectivity, access, connecting the next billion, development

Hernan Galperin

University of Southern California

Nishal Goburdhan

Internet Analyst / IXP Manager, Packet Clearing House / INX-ZA
IXPs, DNS, BGP


Friday September 25, 2015 2:00pm - 3:30pm
GMUSL - Room 120

3:30pm

Coffee Break
Friday September 25, 2015 3:30pm - 4:00pm
George Mason University School of Law Atrium

4:00pm

Copyright: What Is the Appropriate Economic Goal of Copyright in the Digital Age? and Reconsidering Copyright
Moderators

Eli Noam

Columbia University

Presenters

George Ford

Chief Economist, Phoenix Center for Advanced Legal & Economic Public Policy Studies

Michael Mandel

Progressive Policy Institute


Friday September 25, 2015 4:00pm - 5:30pm
GMUSL - Room 121

4:00pm

Internet Access: The Constitutional Conundrums Created by the FCC's Open Internet Order and Deja Vu All Over Again: Questions and a Few Suggestions on How the FCC Can Lawfully Regulate Internet Access
Moderators

Jonathan Nuechterlein

FTC
Jonathan E. Nuechterlein is General Counsel of the Federal Trade Commission, representing the Commission in court and providing legal counsel on a range of antitrust and consumer protection issues. He joined the FTC in June 2013 from Wilmer Cutler Pickering Hale & Dorr, where he was a partner and Chair of the Communications, Privacy, and Internet Law Practice Group. He previously served as Deputy General Counsel for the Federal...

Presenters

Rob Frieden

Pioneers Chair and Professor of Telecommunications and Law, Penn State University
Rob Frieden holds the Pioneers Chair and serves as Professor of Telecommunications and Law at Penn State University. He has written over seventy articles in academic journals and several books, most recently Winning the Silicon Sweepstakes: Can the United States Compete in Global Telecommunications, published by Yale University Press. | | Before accepting an academic appointment, Professor Frieden held senior U.S. government policy making...

Ben Sperry

International Center for Law and Economics


Friday September 25, 2015 4:00pm - 5:30pm
GMUSL - Room 120

4:00pm

Privacy: Meaningful Consent: The Economics of Privity in Networked Environments and Privacy Concern, Trust and Desire for Content Personalization
Moderators

Ashkan Soltani

Chief Technologist, FTC
Ashkan has more than 20 years of experience as a consultant and researcher focused on technology, privacy, and behavioral economics. His work has informed policy debates on privacy and security and has been cited by several national media outlets. Ashkan is a co-author of the Washington Post’s NSA series that was awarded the 2014 Pulitzer Prize for Public Service, a 2014 Loeb Award, and a 2013 Polk Award for National...

Presenters

Jonathan Cave

University of Warwick (Econ. Dept.), University of Warwick

Darren Stevenson

Ph.D. Candidate, University of Michigan at Ann Arbor


Friday September 25, 2015 4:00pm - 5:30pm
GMUSL - Room 225

5:30pm

A Model for Internet Governance and Implications for India
Abstract Link

The increasing role of the Internet in economic growth and social life has brought the significance of Internet Governance to the forefront. New paradigms of Internet Governance recognize the contribution and role of governments, private organizations, civil society and other communities. The borderless and distributed architecture of the Internet substantially differentiates Internet Governance from traditional governance, challenging the established dominant role of nation-states in policy-making. Access, human rights, privacy and standards have become important Internet Governance issues. This has led to an enhanced role for nation-states.

Many developed countries recommend a multi-stakeholder approach in which nation-states are only one of many stakeholders, alongside the private sector and other communities. India’s position on Internet Governance recommends a multi-lateral approach, which is at variance with the emerging global scenario. This has isolated India and created a negative signal for investment in the ICT sector.

India’s position has been based on a limited focus on the international aspects of Internet Governance, dealing largely with cyber-security. Although this is a critical aspect, the approach has come at the expense of economic and social goals domestically. Hence, there is a need for India to focus on dimensions of Internet Governance other than cyber-security and to adopt a wider perspective.

Studies of Internet Governance have not systematically addressed these issues in the design of responsive organizations or national systems for effective governance. This paper contributes to addressing this lacuna by:

i) Developing a conceptual model for Internet Governance based on both the underlying architecture of the Internet and a proposed framework for evaluating the perceived legitimacy of the suggested model and

ii) Combining the two models, this paper develops the Multi-Tier Open Participation (M-TOP) approach for its application to India. This approach not only strengthens domestic Internet Governance, but also increases India’s role in regional and international processes.

Methodology:

This work was done at the request of the Department of Electronics and Information Technology (DeitY), India, to help it develop a framework for Internet Governance. We used in-depth personal interviews and focus group discussions with policy makers in India as our primary source of data. Active participation in, and interviews with several key attendees of, the Internet Governance Forum, Istanbul, 2014 have contributed to this study. For secondary data, we have examined the existing literature. After the initial development of our proposed framework, we sought feedback from key decision-makers in DeitY and the industry to check the feasibility and consistency of our initial proposed framework.

Outcomes:

We have developed a Multi-Tiered model based on the underlying architecture of the Internet. It analyses the different tiers, key issues, lead and other actors, and the geographic scope of decision-making. Based on models of decision-making processes, we developed a Perceived Legitimacy Model (PLM). Here, we identify parameters on the basis of which stakeholders assess the legitimacy of different stages of decision-making. We combine the multi-tier and PLM models to develop the M-TOP approach. This approach recognizes that there is no single approach to governance that is applicable across all tiers of Internet architecture and relevant public policy issues. Subsequently, we design an integrated framework for Internet Governance in India which incorporates the M-TOP approach. It also addresses the issue of multi-stakeholder and multi-lateral approaches in a nuanced way. Our recommended framework also takes into account that Internet Governance principles for India should be in consonance with its democratic ethos and openness, and dovetail with the inherent characteristics of the Internet, namely openness, dynamism, and innovation. This framework takes cognizance of the need for flexibility to accommodate new technologies and international developments.

Presenters

Rekha Jain

IIM Ahmedabad


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

A Solution for a Real Problem: Challenges for Effective Network Neutrality in Latin America
Paper Link

The argument that network neutrality regulation is unnecessary because providers lack incentives to block or discriminate against applications is not only theoretically weak but also disproved in practice by the behavior of Internet providers. Focusing on the experience of Latin American countries, this paper shows that there are numerous examples of blocking and discriminatory practices carried out by Internet providers. In particular, a review of public sources, including academic articles, press articles, and regulatory agencies’ and courts’ archives from Argentina, Brazil, Chile, Colombia, Mexico and Peru, reveals a set of harmful practices that show that network neutrality regulation is a solution for a real problem.

Initially, these practices focused on the blocking of Internet telephony services, a competitive threat to ordinary telephony, which had been a long-standing source of revenue for telecommunication companies. Timely responses by the Regulatory Agency in charge of the telecommunications sector (in the case of Brazil) or by the Antitrust Agency (whose decision was upheld by the Supreme Court in Chile) made the infrastructure providers step back and look for more subtle practices. Subsequently, it was possible to identify practices of traffic shaping, usually related to slower performance of peer-to-peer applications and streaming of video.

The Internet as a free space may be extremely valuable in Latin America. It has the potential to positively impact long-standing inequalities, reducing the gaps among different layers of the population. It may also strengthen democratic institutions and empower people who would not have a voice in traditional media vehicles. This potential may never be turned into reality if network neutrality principles are not observed. Thus, the network neutrality debate is particularly relevant in Latin America and needs to be taken seriously by policymakers, academics and companies.

Brazil, Colombia, Chile and Peru have issued network neutrality regulations recently, but the level of public debate has been relatively limited. The scarce academic discussion, press coverage and debate preceding the enactment of network neutrality rules are a clear signal that network neutrality has not been the subject of robust and thorough debate in Latin America. An important consequence of the limited debate over appropriate regulation is the risk of incomplete regulation and ineffective enforcement. An analysis of the regulations issued in these countries reveals the lack of a clear ban on all types of prioritization, and the adoption of open concepts that might leave too much space for harmful practices and case-by-case interpretation, increasing the costs of regulation, uncertainty and the risks to innovation.

Against this background, current practices in these countries show that preferential treatments not based on “fast lanes” have become recurrent. In particular, strong players like Facebook, Twitter and WhatsApp have frequently used zero-rating of apps and application-specific pricing to consolidate their market positions. Moreover, network providers have favored applications they own by offering special plans with unlimited bandwidth for such applications. This scenario seems to point to weak regulation and ineffective enforcement, demanding an in-depth debate about the established regulatory framework.

Presenters

Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

A Taxonomy of Household Internet Consumption
Abstract Link

Despite an immense literature on Internet adoption and, to some extent, usage, there is still limited knowledge of how households consume information on the Internet. In this paper, we intend to add to this knowledge at a very basic level, by measuring and classifying Internet information consumption along time and space. Specifically, we aim to characterize household web surfing “types” according to the amount of time they spend online and how they distribute this time across different web domains. We plan to define these types according to the following dimensions of Internet use (or subset thereof): Number of days online per week, total time online per week, time per domain, and concentration of domains visited according to time and views. The last of these dimensions indicates whether a household’s visits across N domains are concentrated among just a few or more evenly spread across all N, as measured by time or visit frequency. Using these dimensions to partition the usage space, examples of types that would emerge include the “tourist,” who infrequently visits a handful of sites for a relatively short period of time, and the “lingerer,” who spreads a great deal of time online across a few domains.

We have assembled several years’ worth of data on web browsing behavior from ComScore to conduct this study. These data span the years 2008, 2009, 2012 and 2013, and track households over an entire year, recording all of their web browsing behavior on a home machine. The information collected includes the domains they visit, how long they spend at each domain, and the number of pages visited within the domain, along with several demographic measures, including income, education, and household size. Using these data, we can classify households along the aforementioned dimensions, thus identifying a distribution across web surfing types. Further, we can determine demographic predictors for each type; for example, we can measure the predictive power of income, education, household composition, etc. in determining whether a household is a “lingerer.” The methods employed to conduct these measures will be a mix of descriptive statistics and querying to establish the distribution over types, followed by standard linear regression and/or logit analysis for type prediction.
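
As a rough illustration of the kind of classification described above, the sketch below computes per-household dimensions from browsing records and assigns simple rule-based type labels. It is not the authors' code: the field names (household_id, date, domain, minutes, visits), the Herfindahl-style concentration measure, and the thresholds in label_type are assumptions made purely for illustration.

# Minimal sketch (not the authors' code): derive per-household usage dimensions
# from browsing records and assign illustrative "surfing types".
import pandas as pd

def household_dimensions(records: pd.DataFrame) -> pd.DataFrame:
    """records: one row per (household_id, date, domain) with minutes and visits.
    A visit-count version of the concentration measure would be analogous."""
    days_online = records.groupby("household_id")["date"].nunique()
    total_time = records.groupby("household_id")["minutes"].sum()
    per_domain = records.groupby(["household_id", "domain"])["minutes"].sum()
    # Concentration of time across domains, here a Herfindahl index of time
    # shares: near 1/N when time is spread evenly, near 1 for a single domain.
    shares = per_domain / per_domain.groupby(level="household_id").transform("sum")
    hhi = (shares ** 2).groupby(level="household_id").sum()
    return pd.DataFrame({
        "days_per_week": days_online / 52.0,        # rough weekly average over a year
        "minutes_per_week": total_time / 52.0,
        "n_domains": records.groupby("household_id")["domain"].nunique(),
        "time_concentration": hhi,
    })

def label_type(row) -> str:
    # Illustrative thresholds only; the paper partitions the usage space empirically.
    if row.minutes_per_week < 60 and row.n_domains <= 5:
        return "tourist"      # infrequent, few sites, little time
    if row.minutes_per_week > 600 and row.time_concentration > 0.5:
        return "lingerer"     # a great deal of time concentrated on a few domains
    return "other"

With a records DataFrame in that shape, household_dimensions(records).apply(label_type, axis=1) would yield a distribution over types that the regression stage could then try to predict from demographics.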

We have also augmented the data to include a characterization of web domains visited by households as online video distribution (OVD) providers. Using this added information, we can determine whether a household consumes video via OVD, and characterize the general usage patterns specific to OVD consumers. This added measure can be informative toward better understanding consumers who are engaged in over-the-top (OTT) video consumption.

Presenters

Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Beyond Technical Solutions: Understanding the Role of Governance Structures in Internet Routing Security
Abstract Link

Internet routing security incidents often make headlines when they occur (Cowie, 2010; RIPE, 2008). But not all networks on the Internet (known technically as Autonomous Systems) experience incidents, and some experience more than others. While efforts to standardize and deploy secure routing technologies continue and commercial services to help mitigate incidents are now commonplace, a broader, empirically based, contextual understanding of these incidents does not exist.

Informed by theories of institutional economics and networked governance (Jones et al., 1997; Mueller et al., 2013), and using existing data from large-scale monitoring projects operated by computer scientists (e.g., Argus, Routeviews, CAIDA), this research aims to shed light on why network operators experience different levels of routing security incidents. Our research method for the entire project uses quantitative measures of routing incidents over time (the dependent variable) and a set of independent variables that reflect variations in the macro, meso and micro level governance structures among Autonomous Systems. This paper, representing an early stage of the research, focuses mainly on the independent variables related to the meso level, i.e., “structural embeddedness” (SE). SE is defined as the degree to which Autonomous Systems are embedded within interconnections of other Autonomous Systems.

Contrary to the expectations established by Pastor-Satorras, Vázquez, and Vespignani (2001), and to our hypothesis that higher levels of SE among Autonomous Systems are negatively correlated with the number of routing incidents, SE has at best a very weak role in explaining the prevalence or absence of incidents. The strongest factors influencing susceptibility to routing incidents appear to be the number of peering and transit relationships an Autonomous System (AS) maintains with other Autonomous Systems, followed by the number of prefix advertisements an AS originates. These findings reinforce our ongoing study’s focus on governance structures, making it more likely that specific operational and organizational practices (e.g., filtering, number of transit relationships) and institutional factors (e.g., legal and contractual relationships) will prove to play a significant role.
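
A minimal sketch of the kind of analysis described, under assumed inputs: an AS-level edge list in the style of the public CAIDA AS-relationship files, a per-AS count of originated prefixes, and per-AS incident counts from a monitoring source. Structural embeddedness is proxied here by the local clustering coefficient, and the incident model is a Poisson GLM; neither choice is taken from the paper.

# Rough sketch of the analysis described above (not the authors' code).
import networkx as nx
import pandas as pd
import statsmodels.api as sm

def build_features(edges: pd.DataFrame, prefixes: pd.Series, incidents: pd.Series) -> pd.DataFrame:
    """edges: columns (as1, as2, rel); prefixes and incidents are indexed by AS number."""
    g = nx.Graph()
    g.add_edges_from(edges[["as1", "as2"]].itertuples(index=False, name=None))
    degree = pd.Series(dict(g.degree()), name="neighbors")               # peering + transit links
    embed = pd.Series(nx.clustering(g), name="structural_embeddedness")  # assumed SE proxy
    df = pd.concat([degree, embed], axis=1)
    df["prefixes_originated"] = prefixes
    df["incidents"] = incidents
    return df.dropna()

def fit_incident_model(df: pd.DataFrame):
    # Count outcome, so a Poisson GLM; a negative binomial would handle overdispersion.
    X = sm.add_constant(df[["structural_embeddedness", "neighbors", "prefixes_originated"]])
    return sm.GLM(df["incidents"], X, family=sm.families.Poisson()).fit()

A weak or insignificant coefficient on structural_embeddedness alongside strong coefficients on the number of neighbors and originated prefixes would correspond to the pattern the abstract reports.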

 


Presenters

Brenden Kuerbis

Postdoctoral Reseacher, Georgia Institute of Technology

Milton Mueller

Professor, Georgia Institute of Technology
(TBC) Milton Mueller is Professor at the School of Public Policy, Georgia Institute of Technology, USA. Mueller received the Ph.D. from the University of Pennsylvania’s Annenberg School in 1989. His research focuses on rights, institutions and global governance in communication and information industries. He is the author of two seminal books on Internet governance, Ruling the Root and Networks and States. Mueller was one of the founders of...


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Can a Vertically Integrated Provider Use QoS to Unreasonably Advantage Itself Over OTT Content Providers?
Abstract Link

Most broadband providers are moving their circuit-switched video to video-over-IP and are multiplexing this traffic with broadband Internet traffic. These vertically integrated video services are thus competing with over-the-top (OTT) video services for customers and for network capacity.

Broadband providers are deploying quality-of-service (QoS) technologies to improve the performance of video-over-IP by reserving network resources for or prioritizing their video traffic. Broadband providers may sell QoS to OTT video service providers or directly to end users. Alternatively, vertically integrated broadband and video service providers may refuse to provide access to QoS to competing video service providers.

A key net neutrality question is whether a vertically integrated provider can unreasonably advantage itself over competing content providers by selling QoS at unreasonably high prices or by refusing to provide QoS to competing content providers. There is a substantial academic literature comparing various types of neutrality; most of it, however, focuses on an absolute prohibition of prioritization. In contrast, we focus here on the effects of a prohibition of third-party paid prioritization on competition between a vertically integrated provider and an OTT provider.

We develop a mathematical model, grounded jointly in network architecture and economic theory, of competition between one vertically integrated provider and one OTT provider. The two offer horizontally differentiated services that differ by the amount and type of content, based on a Hotelling model. End users are similarly differentiated by their preference for the type of content. End users decide which service to subscribe to (if any) so as to maximize surplus.

The broadband provider decides whether to deploy QoS, which incurs an incremental network cost per user. QoS, if used, increases user utility proportionally. We consider two types of markets: a market in which the broadband provider charges the OTT content provider for QoS, and an alternative market in which the broadband provider charges end users directly for QoS. The vertically integrated provider sets its video service price and the QoS price to maximize profit, defined as the sum of video service revenue and QoS revenue minus the corresponding incremental costs. The OTT provider sets its video service price to maximize profit, defined as video service revenue minus the corresponding incremental content cost and the QoS cost.
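
One way to write down a Hotelling-style formulation consistent with the description above, for the market in which the OTT provider pays for QoS. The abstract does not give the paper's functional forms, so the symbols and cost structure below (differentiation parameter t, proportional QoS gain \theta q_i, per-user QoS price r and incremental QoS cost k) are illustrative assumptions.

% Illustrative formulation only, not the paper's exact model.
\[
  u_V(x) = v + \theta q_V - t\,x - p_V, \qquad
  u_O(x) = v + \theta q_O - t\,(1 - x) - p_O ,
\]
% x in [0,1]: the user's content preference (Hotelling location); t: degree of
% content differentiation; p_V, p_O: subscription prices; theta*q_i: utility
% gain when QoS is applied to service i. The indifferent user and shares:
\[
  u_V(\hat{x}) = u_O(\hat{x}), \qquad s_V = \hat{x}, \quad s_O = 1 - \hat{x}.
\]
% Profits when the broadband provider sells QoS to the OTT at per-user price r,
% with per-user incremental QoS cost k and content costs c_V, c_O:
\[
  \pi_V = (p_V - c_V - k)\, s_V + (r - k)\, s_O, \qquad
  \pi_O = (p_O - c_O - r)\, s_O .
\]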

We first analyze the duopoly competition game under a fixed QoS price. The price, market share, and profit of the ISP and of the OTT provider at the Nash equilibrium are derived. We then consider the broadband provider’s decision to deploy QoS, and when it does, how it will set the QoS price. In both types of markets, we analytically determine when the broadband provider will sell QoS and when the OTT provider or users will purchase QoS.

We find that the price for QoS when sold to the OTT provider may be higher than when sold directly to users. If users are relatively homogeneous in their preferences, then the broadband provider may charge a higher price for QoS than is social welfare maximizing. If the QoS price is set to maximize social welfare, then social welfare may be higher if QoS is sold to users than if it is sold to content providers.

Finally, we present numerical results based on current Internet statistics. In addition to verifying the analytical results, we illustrate when the broadband provider’s decision does not maximize social welfare; the effect of QoS price on content prices; the variation of each content provider’s market share with QoS price; and the variation of QoS price, content prices, and market shares with the benefit of QoS and the amount of content differentiation.

Presenters

Scott Jordan

University of California


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Examining the User Acceptance of Gigabit Broadband Service: The Case of UC2B
Abstract Link

The focus of this research paper is to examine the user acceptance of broadband among the population of households that may be less inclined to adopt newly launched gigabit broadband services. This research paper presents a study of the Urbana-Champaign Big Broadband (UC2B) project, a $29 million BTOP-funded middle-mile and fiber-to-the-premises project completed in 2013. UC2B is among the first gigabit broadband networks built in the US to provide ultra-high-speed Internet access to homes and community anchor institutions. The network specifically focused on connecting low-income households as well as community anchor institutions that serve households in low-income neighborhoods. The study provides insight into public policy discussions about the public value and return on investment of the construction of gigabit broadband municipal networks. Building the network requires the ability to forecast the demand for the service when potential subscribers face uncertainty about the value of the proposed service. Balking is costly. UC2B, like other well-known recently built municipal gigabit networks, experienced balking. This issue has an economic impact on construction costs and time to delivery, and limits the project’s social impact. This pattern may be particularly prevalent among households located in areas that have been unserved or underserved with broadband. These factors may help to explain why the social and economic impact of broadband investments may be less than expected.

Using data from 2,058 households served by the Urbana-Champaign Big Broadband (UC2B) project, the paper examines the adoption of broadband using the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). The model’s determinants of user acceptance and usage behavior include performance expectancy, effort expectancy, social influences and facilitating conditions. The study breaks new ground by considering customer service, consumer response to price and service changes in the local competitive broadband ISP landscape in response to a new ISP, and changing consumer preferences for mobile broadband as additional factors to explain the gap between the intent to adopt broadband and the acceptance and use of broadband.

Data for this analysis were collected through a mixed-methods approach. We use a two-stage logistic model to estimate the intent to adopt broadband and then the acceptance and use of broadband. In step 1, we estimate the intent to adopt broadband. In step 2, we estimate the likelihood of broadband adoption as a function of the intent to adopt and other customer service quality factors based on the customer experience between the time the household signed a contract pledging to subscribe to the service and the final successful completion of the service installation at the home.

For the first stage of the model, we estimated the model using data collected through a household survey of computer and Internet use behavior and preferences administered in 2009 and repeated in 2011. The survey asked respondents about their intent to adopt broadband based on whether they were likely to subscribe to the UC2B service for $19.99 per month for 20 Mbps service. The responses for the independent variables are measured using 26 questions on a 5-point Likert scale. The data were collected through a door-to-door canvassing strategy involving over 19,000 trips.

For the second stage of the model, we constructed an adoption indicator using additional customer service data collected through a CRM application that captured each customer interaction, indicating which of the households that had intended to adopt the service by signing a pledge contract followed through and adopted the service by signing the subscription service contract. Additional independent variables related to the adoption of the broadband service were constructed from an array of data indicating the performance of the construction process of building the middle-mile network and the FTTH laterals to connect service to the premises. Factors such as impatience waiting for the construction of the network to be completed, customer service during the installation, intensified competitive responses, or comfort with existing modes of access may turn households away from their decision to adopt broadband. We also examine whether the household has existing service from Comcast or AT&T or mobile service and include this in the model. We include changes in competitive pricing and services, and preferences for the mode of access, including mobile as an alternative, that occurred after the new project was announced. These data permit us to extend the UTAUT model to carefully measure customer service during the construction process and competitive factors as important facilitating conditions for the acceptance and use of broadband.
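
A schematic version of the two-stage logistic model described above, written with statsmodels. The variable names and the use of the stage-1 predicted probability as a stage-2 regressor are assumptions for illustration, not the authors' exact specification.

# Schematic two-stage logistic model (not the authors' code); all column names
# below are hypothetical stand-ins for the survey and CRM variables described.
import pandas as pd
import statsmodels.api as sm

def two_stage_adoption(df: pd.DataFrame):
    # Stage 1: intent to adopt as a function of the UTAUT determinants measured
    # in the household survey.
    utaut = ["perf_expectancy", "effort_expectancy", "social_influence",
             "facilitating_conditions"]
    X1 = sm.add_constant(df[utaut])
    stage1 = sm.Logit(df["intends_to_adopt"], X1).fit(disp=False)
    df = df.assign(predicted_intent=stage1.predict(X1))

    # Stage 2: actual adoption (signed subscription contract) as a function of
    # predicted intent plus customer-service and competitive factors observed
    # between the pledge and the completed installation.
    stage2_vars = ["predicted_intent", "install_wait_days",
                   "service_interactions", "incumbent_price_change"]
    X2 = sm.add_constant(df[stage2_vars])
    stage2 = sm.Logit(df["adopted"], X2).fit(disp=False)
    return stage1, stage2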

We also collected data through interviews and performed content analysis of the responses to frame the construction of variables for the econometric analysis.

The preliminary econometric modeling to date estimates the intent to adopt the UC2B broadband service. Early results show:

  • Performance expectancy: Measured by the extent to which the household respondent feels that Internet access is important. Performance expectancy is a significant factor in the likelihood that the household intends to adopt broadband.

  • Effort expectancy: Measured by the frequency of use of the Internet, testing whether being less familiar with using the Internet means that the household is less likely to intend to adopt broadband. The model shows that households that have never used the Internet or are moderate users of the Internet are not likely to intend to adopt.

  • Social influences: Measured by the extent to which the respondent reports having friends and family who use the Internet. The model shows that social influences matter significantly in whether a household intends to adopt broadband.

  • Facilitating conditions: Measured by whether the household has a computer at home, which is a significant factor. We also found that having children between 5 and 18 years of age in the household is a significant factor. Measures of the respondent’s perception of the affordability of broadband, the presence of family members over 60 years of age, and the perception of Internet security are included in the model but are not significant.

Presenters

Jon Gant

Professor and Director, Center for Digital Inclusion, University of Illinois at Urbana-Champaign
Director of Center for Digital Inclusion @ University of Illinois, Graduate School of Library and Information Science | Director for Urbana Champaign Big Broadband Project (UC2B) - Middle mile and FTTH gigabit network build funded through BTOP CCI grant | Research director and consultant for BTOP Evaluation Study with ASR Analytics


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Job Search Online: Special Privilege or a Resource for All?
Abstract Link

While the Internet may offer benefits to people in numerous ways, undoubtedly one of the most important ways it may improve people’s situation is during the job-search process. From the opportunity to network with both strong and weak ties to access to databases of job openings, digital media can lower the search costs between employers and future employees. Despite the significance of the potential benefits of using the Internet in the job-search process, surprisingly little research has addressed the topic. The few studies that have been written in this domain mainly consider whether job searchers are Internet users in general. Our data set includes more details about how people have used the Internet in their job-search process, allowing for a unique contribution to the literature on Internet use and job search.

We analyze data about 1,600 Americans’ Internet uses, focusing specifically on the job-search process. The data were collected in May 2013 with an oversample of African Americans to allow for focused analysis of racial differences in online behavior, something past research has established regarding various Web uses. The sample is diverse, with all regions of the United States represented. The average age is 49, and somewhat more women (57%) than men participated. Over a third of respondents have no more than a high school education, and less than 30% have a college education or more.

Over a fifth of the sample (22%) reported having searched for a job in the past four years. African Americans, Hispanics and younger adults are more likely to be in this category than Whites and older adults. Whether a person uses the Internet has no relation to whether he or she searched for a job in the past four years.

When it comes to using the Internet for job search, we find that factors beyond being an Internet user (i.e., reporting use of the Internet at all) are important correlates of such technology use. Regarding demographic and socioeconomic factors, younger adults, men, Hispanics, and those with higher education are more likely to use the Web for job searching. We also find that, after controlling for age, gender, race, ethnicity, and education, both autonomy of use (operationalized as number of devices on which the participant accesses the Internet) and Internet skills (operationalized as familiarity with several Internet-related terms) are related to use of the Internet for job search. Additionally, we show that the use of social network sites for job search is especially important for African American and Hispanic respondents compared to Whites. But Internet skills still matter in these cases. Those who understand digital media better are more likely to use the Internet more generally and social media specifically for labor-market outcomes.

Most current policy aimed at addressing inequalities in Internet use focuses on infrastructural support. Our findings suggest that while a necessary condition, it is not sufficient to address inequities in use of the Internet for the important and widely-applicable activity of searching for a job. This study contributes to policy discussions by highlighting that beyond ensuring equitable physical access to the Internet, intervention may also be necessary at the level of training, education and support especially among the vulnerable population of less educated people in the midst of searching for a job.

Presenters

Eszter Hargittai

Delaney Family Professor, Northwestern University

John Horrigan

Senior Researcher, Pew Research Center
I have done extensive work on tech adoption, including barriers to adoption, as well as exploring the impacts of online connectivity. I have done this at the Pew Research Center, the FCC (National Broadband Plan), and as a consultant. I work in DC, but am a proud resident of Baltimore, MD.


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Life Span of Data Surveillance Marketing in Wearable Computing
Paper Link

Digital marketers have been open about possessing the capacity to predict the consumption patterns of a pregnant woman, place targeted ads about a product at the right moment during her pregnancy, and deliver discount coupons according to her preferences and stage of pregnancy. It is thus easy to imagine how the life span of one’s digital data starts even prior to birth, is linked to her consumption patterns, and is stored, processed and analyzed according to commercial demands and priorities.

We analyze policy challenges facing the cornucopia of personalized digital data marketing brought by wearable technologies. We ask the following questions: How should the life span of personalized data be regulated given the increasing presence of wearable technologies? More specifically, what are the salient policy concerns when it comes to wearable media? What are the types of regulation likely to succeed and to fail?

Personalized Marketing and U.S. Policy Landscape: By wearable media, we mean networked devices with built-in computing capabilities that can be worn or attached to a human body, such as a smartwatch, smartphone, Google Glass, or Apple Watch. What is striking about wearable media is the depth of interlinked data networks through which these tools could provide an unprecedented scale of seamless personal data integration. We argue that the institutional data practices typical of wearable media will pose policy challenges and herald yet another dramatic shift to personalized data marketing. We also point out the characteristics of existing synergetic data practices that will shape the development of wearable devices that utilize different life stages of personalized data.

There is a growing disjuncture between (1) the institutional-commercial incentives of wearable technology, which are conducive to the intensification of data marketing; and (2) the regulatory inaction on the pressing issues related to data collection and appropriation. In the U.S. regulatory context, the digital marketing industry has thrived on a non-interventionist approach since the mid-1990s, when the Federal Trade Commission (FTC) established industrial self-regulation for e-commerce in 1996. Various studies (e.g., Kang, 1998; Lessig, 1999) suggest the ineffectiveness of self-regulation in the online sector. There has been ample evidence from the early years that digital marketers in the U.S. did not conform to the standards of voluntary compliance with consumer protection (FTC, 1998).

We argue that it is possible to narrow this disjuncture through the reconstruction of the multi-layered codes ingrained in (existing and future) wearable technologies that are currently outpacing policy innovations. The overall premise we propose is that data marketing surveillance protocols can be re-coded (Sandvig, 2007; Wu, 2011) with clear government oversight and authority in designing interconnections among different life spans of data surveillance as they develop over time. In the European context, ‘the right to be forgotten’ has introduced public debates on how the life cycle of personal data can and should be regulated. For instance, we can learn lessons from the EU experiment, and its mistakes, in defining the life span of one’s digital data not as data holders’ rights but as the rights of the persons whose digital identities are at stake.

Digital Marketing in Future Wearable Computing: In sum, wearable media have great capabilities to change our life experiences by constructing or dissecting personal data for commercial purposes, and we need to delineate the corresponding policy responses. We conclude by highlighting the key areas of policy concern and future solutions, with a discussion of the future of institutionalized practices related to personal data retention, collection, and appropriation. Overall, we argue for (1) clearer policy oversight and (2) policy principles on how the life span of personalized data is constructed in the digital marketing ecosystem.

Presenters

Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Quantified Discrete Spectrum Access (QDSA) Framework
Abstract Link

Dynamic spectrum sharing is essential for meeting the growing demand for RF spectrum. The key requirement for a dynamic spectrum access system is ensuring the coexistence of multiple heterogeneous RF systems sharing spectrum in the time, space, and frequency dimensions. There are several technical, business, and regulatory challenges around defining and enforcing a dynamic policy that can provide simple, flexible, and efficient spectrum sharing and enable protection of spectrum rights. In this paper, we propose a framework for the dynamic spectrum sharing paradigm that articulates spectrum rights in terms of quantified spectrum usage footprints at the lowest granularity of spectrum access. The proposed framework essentially enables treating RF-spectrum as a commodity that can be shared and traded in a simple, flexible, and efficient manner.

Transmitters consume RF-spectrum in terms of RF-power in the space, time, and frequency dimensions. Receivers consume RF-spectrum in terms of constraining the RF-power in the space, time, and frequency dimensions. The framework is based on a discretized spectrum space model wherein spectrum usage by the transceivers is quantified at sample points in unit spectrum spaces. Thus, using the proposed discrete spectrum consumption quantification (DSCQ) methodology, the spectrum assigned to or utilized by a transmitter or receiver can be quantified. The discretization and quantification approach transforms spectrum into a commodity that can be exchanged with service providers, a policy that can be regulated, and a resource that can be precisely controlled for efficient use.

Within the QDSA framework, an entity that wishes to request spectrum access communicates with a Spectrum-access Policy Infrastructure (SPI). Here, the entity requesting spectrum access could be an individual transceiver, a wireless service provider, or a spectrum broker. The spectrum access request provides information about the transceiver positions, transceiver performance attributes, capabilities, and desired spectrum-access attributes (e.g. duration of spectrum access, SINR at the receiver).

The SPI communicates with Spectrum Analysis Infrastructure (SAI) in order to define spectrum-access footprints for the individual transceivers. SAI receives real time information regarding spectrum consumption from Spectrum Sensing Infrastructure (SSI). The SSI employs an external dense RF-sensor network and estimates usage of spectrum by individual transceivers in real time using advanced signal processing and learning algorithms.

SAI evaluates feasibility of coexistence and allocates quantified spectrum-access footprints to the individual transceivers of the spectrum-access request. SPI maps the spectrum-access footprints into an enforceable spectrum-access policy and spectrum-access is granted to the requesting entity.

By estimating utilized and available spectrum space in real time, SSI provides the ability to define and regulate a dynamic spectrum-access policy. When the spectrum usage footprint estimated by SSI violates the assigned spectrum usage footprint, SPI can void the spectrum-access policy and can take regulatory action.
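
A toy sketch of the request, allocation, and enforcement flow among the SPI, SAI, and SSI described above. The class names follow the abstract, but every field, unit, and feasibility rule is a simplifying assumption; the real DSCQ computation and sensor-based estimation are only stubbed out.

# Toy sketch of the QDSA flow (not the paper's implementation); all fields,
# units, and the feasibility rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class SpectrumAccessRequest:
    requester: str                 # individual transceiver, service provider, or broker
    positions: List[Tuple[float, float]]   # transceiver locations
    duration_s: float              # requested duration of spectrum access
    target_sinr_db: float          # desired SINR at the receiver

@dataclass
class SpectrumFootprint:
    units: float                   # quantified usage in unit spectrum spaces (DSCQ)

class SSI:
    """Spectrum Sensing Infrastructure: estimates actual usage from an RF-sensor network."""
    def estimated_usage(self, requester: str) -> SpectrumFootprint:
        return SpectrumFootprint(units=0.0)    # placeholder for real sensor estimates

class SAI:
    """Spectrum Analysis Infrastructure: checks coexistence and allocates footprints."""
    def __init__(self, ssi: SSI, capacity_units: float):
        self.ssi, self.capacity = ssi, capacity_units
    def allocate(self, req: SpectrumAccessRequest) -> Optional[SpectrumFootprint]:
        needed = req.duration_s * 1.0          # stand-in for the DSCQ computation
        return SpectrumFootprint(needed) if needed <= self.capacity else None

class SPI:
    """Spectrum-access Policy Infrastructure: grants and enforces access policies."""
    def __init__(self, sai: SAI):
        self.sai = sai
        self.grants: Dict[str, SpectrumFootprint] = {}
    def request_access(self, req: SpectrumAccessRequest) -> bool:
        footprint = self.sai.allocate(req)
        if footprint is None:
            return False                       # coexistence not feasible
        self.grants[req.requester] = footprint
        return True
    def enforce(self, requester: str) -> bool:
        # Void the policy if sensed usage exceeds the assigned footprint.
        assigned = self.grants.get(requester)
        observed = self.sai.ssi.estimated_usage(requester)
        if assigned is None or observed.units > assigned.units:
            self.grants.pop(requester, None)
            return False
        return True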

The key contributions of QDSA are as follows. The quantified approach of QDSA enables easier understanding and interpretation of the outcomes. With the spectrum-as-a-quantified-resource perspective, a spectrum trade conversation could run along the following lines: “I have x units of spectrum right now, I have given y units to somebody, and I have z units of spare spectrum which I would like to share or maybe keep as a reserve.” QDSA enables spatial overlap of multiple RF-systems while protecting spectrum rights. This has significant implications for devising spectrum sharing services with a large number of fine-grained spectrum accesses in a geographical region. In our simulation results, we show that up to 100 small-footprint RF-networks can coexist without harmful interference within a 4.3 km x 3.7 km geographical region in a single frequency band [1]. In addition to providing the ability to define and regulate a quantified spectrum-access policy, discretization of the spectrum space facilitates aggregating spectrum access opportunities in the space, time, and frequency dimensions for efficient routing and allocation of spectrum. It enables provisioning redundancy for spectrum links in order to meet desired link quality under dynamic conditions. The spectrum aggregation facilitated by the proposed methodology helps build a bigger spectrum pool and thus enables attractive business models for dynamic spectrum access.


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

The Impact of Spectrum Aggregation Technology and Allocated Spectrum on Valuations of Additional Spectrum Blocks and Spectrum Policies
Abstract Link

Whenever a cellular carrier expands its spectrum holdings, it must determine which additional spectrum blocks would be the most valuable additions to its portfolio. Regulators must also understand the relative value of different spectrum blocks when establishing spectrum caps. The value per MHz of a spectrum block depends on the characteristics of the spectrum block, including its bandwidth and frequency, and the population covered. Holding population constant, spectrum blocks in lower frequency bands are valued more highly by cellular carriers than blocks in higher frequency bands, and blocks with wide bandwidth are valued more highly per MHz than narrower blocks. Moreover, cellular carriers differ in their valuation of the same blocks, depending on their existing spectrum holdings and on what technology they have adopted to use combinations of spectrum blocks, e.g., spectrum aggregation (SA).

This paper studies the value of a new block of spectrum to a given operator as a function of frequency and bandwidth, as well as the spectrum already held by the carrier, and whether the carrier uses SA. We developed a detailed engineering-economic model to estimate the cost of building and operating a greenfield nationwide wireless network using LTE technology. The value of a new block of spectrum to a given carrier is then the difference between the cost of building and operating the network using only the carrier’s initial spectrum holdings and the cost of that same network with the addition of the new block.
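
In notation not given in the abstract (so the symbols below are assumptions), the valuation described above can be summarized as:

\[
  V(b \mid S_0) \;=\; C(S_0) \;-\; C\bigl(S_0 \cup \{b\}\bigr),
\]

where S_0 is the carrier's initial spectrum holdings, b the candidate block, and C(·) the modeled cost of building and operating the nationwide LTE network with a given set of holdings; dividing V by the block's bandwidth times the population covered gives the per-MHz-POP value that is compared across scenarios.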

The model takes, as an input, results from a technical model that simulates performance of a small-scale LTE network and estimates its capacity per cell. A nationwide area network covers many regions having different population densities and different environments. Thus, we make justified assumptions about area covered, population density, user demand and traffic and targeted market share in order to estimate the required number of cells for each network scenario, which allows us ultimately to estimate total deployment costs. We run this experiment many times while varying the following:

- Frequency bands of the initial endowments of spectrum, and the additional blocks of spectrum
- Total bandwidth of initial endowments of spectrum, and the additional blocks of spectrum
- The contiguity (i.e. how fragmented is the spectrum) of the initial endowments of spectrum and the additional blocks of spectrum
- Whether blocks operate as independent carriers or using spectrum aggregation technology.

Results show that, absent SA, an operator would assign a wider block of spectrum a higher value per MHz-POP than narrower fragments. However, when the operator implements SA, the difference in valuation between narrower and wider bands becomes less significant. This can be attributed to spectrum aggregation’s ability to fully utilize any fragment, pushing the valuations closer together. Results also show that, contingent on existing holdings, if the additional block is in a low-frequency band, it will be more valuable than if it is in a higher-frequency band. Holders of low-frequency spectrum value an additional low-frequency block less than holders of high-frequency spectrum do. Finally, the combination of having access to multi-band spectrum and SA makes the operator's valuation of additional spectrum less dependent on the frequency and block size of the increment.

Our findings allow us to comment on the impact of spectrum aggregation technology and spectrum allocation methodology (i.e. allocation of multiple fragmented blocks in multiple bands versus a single wide block in a single band) on policies for spectrum caps, band planning and size of blocks made available for auctions and assignment. We conclude that with SA, regulators should implement policies that give operators access to smaller blocks across multiple bands, which in turn leads to fairer access to spectrum among competitors.

Presenters

Jon Peha

Professor, Carnegie Mellon University

Authors

Marvin Sirbu

Carnegie Mellon University

Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

5:30pm

Unleashing Broadband Deployment: Clearing Barriers & Building Smart Infrastructure
Abstract Link

Despite all the debate surrounding the FCC’s Open Internet saga, there remains a good deal of consensus when it comes to broadband. Everyone agrees that consumer demand for data and overall traffic flows will continue to grow in the foreseeable future, so we need to ensure that we have broadband infrastructure in place ready to handle that growth. Further, most everyone agrees that Internet access is becoming increasingly critical to education, public safety, and civic engagement — much like the telephone system of the 20th century — so we need to ensure both that everyone has access to a connection, and that they can afford it, even if they are on a tight budget. So, if we accept those premises, the question becomes: How do we flood the market, and oversupply broadband capacity, in order to put downward pressure on consumer prices and allow edge providers and consumers to have enough breathing room to experiment with new data-intensive applications and services?

Essentially, that was the task assigned to the FCC when it had to issue its National Broadband Plan, and the reports from that working group underscore the importance of promoting the deployment of new broadband infrastructure and capacity. The only remaining question is how best to do that. A myriad of deployment models and strategies have been tried and tested in recent years, with varying degrees of success. In cities like Kansas City and Austin, TX, private companies have negotiated favorable deals with city commissions to get preferential access to rights-of-way and utility poles, thus deploying next-gen gigabit networks to these thriving metropolises. But in other areas, particularly where population density is low, private ISPs are less willing to deploy, so these areas have had to take additional steps to clear barriers to entry and incentivize new deployments. For some places, just incurring the small added cost of installing dig-once conduits under major city streets and state highways is enough to encourage private ISPs to come in and finish the rest of the job, filling the conduits with optical fiber, stringing cables to consumers’ homes, and installing the other various network elements necessary to provide service. For others, it may be necessary to go even further, and deploy dark fiber inside dig-once conduits and lease access to the fiber’s capacity to private ISPs who will then deploy infrastructure and provide service to consumers over the last mile. And for still others, perhaps the business case (i.e., likelihood of receiving a return on investment) is so bleak that consumers can only get access to high quality broadband connections if they are willing to pay to construct the entire network and operate it as a cooperative.

In sum, there is no one-size-fits-all approach to broadband deployment. This paper will address the various barriers to entry that are present in certain markets, the various steps local and state governments have taken to promote broadband deployment, the recent steps taken by the FCC to promote broadband deployment using its authority under Section 706 of the Telecommunications Act of 1996, and potential steps Congress should consider taking as part of a CommActUpdate, including both general policy goals and specific legislative recommendations.

[No poster session]

Presenters

Thomas Struble

Legal Fellow, TechFreedom
Legal Fellow @TechFreedom. Tech policy enthusiast. @GWLaw alumni. @KUAthletics & @LFC supporter.

Berin Szoka

President, TechFreedom
Berin Szoka is the President of TechFreedom. Previously, he was a Senior Fellow and the Director of the Center for Internet Freedom at The Progress & Freedom Foundation. Before joining PFF, he was an Associate in the Communications Practice Group at Latham & Watkins LLP, where he advised clients on regulations affecting the Internet and telecommunications industries. Before joining Latham's Communications Practice Group, Szoka practiced... Read More →


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law

5:30pm

Wireless Network Virtualization: Opportunities for Spectrum Sharing in the 3.5 GHz Band
Paper Link

The three-tier model for spectrum sharing in the 3.5 GHz band, outlined in the PCAST report [1], has drawn considerable attention from stakeholders, researchers and policy-makers. In its Further Notice of Proposed Rulemaking, the FCC points out that “[t]he 3.5 GHz Band could be an “innovation band,” where we can explore new methods of spectrum sharing and promote a diverse array of network technologies, with a focus on relatively low-powered applications.” [2]. In this context, we evaluate the technical, economic and policy aspects of wireless network virtualization in this band.

Wireless network virtualization has been proposed as a promising mechanism for granting increased opportunities for spectrum sharing, and thus enhancing the efficiency in spectrum usage. To date, we can find myriad definitions and applications of wireless network virtualization, most of them focusing on the technical implications and feasibility of this approach [3–7]. In fact, it has been pointed out that deeper levels of virtualization increase the flexibility of spectrum [6], thus providing us with significant opportunities to address problems of shortage and scarcity.

Beyond the purely technical aspects of wireless virtualization, we focus on the advantages of this technique and its applicability to a broader set of spectrum sharing scenarios. We show that the benefits of virtualization can be leveraged for the deployment of secondary markets for spectrum, since it offers a path toward increased market liquidity and viability. Indeed, an initial attempt to link wireless virtualization with existing spectrum trading scenarios [9,10] was made in [8]. The virtualization method adopted there consisted of creating a pool of spectrum resources, consistent with what was presented in [7]. Unsurprisingly, the benefits of virtualization were attained: the results showed increased opportunities for market viability. In that study, these benefits occurred because some of the physical complexities of electromagnetic spectrum no longer played a role in the market.

In this paper, we explore the broader implications of virtualization in the spectrum sharing context. This requires analyzing the definitions and scope of virtualization and merging them with the economic and regulatory characteristics intrinsic to the use of electromagnetic spectrum. Thus, we explore the conditions for the formation of pools of virtualized spectrum; the interaction between Priority Access users and General Authorized Access users; the requirements for economic feasibility and efficiency of such an approach; and, finally, the role played by regulation. This enables us to take into account characteristics pertinent to the stakeholders, to the physical spectrum resource, and to the framework in which they would interact. By following this path, we aim to shed light on the advantages, boundaries and feasibility of spectrum sharing and trading in the 3.5 GHz band in a virtualized setting. In this manner, we obtain a more comprehensive view of what could be an efficient alternative for spectrum sharing, leveraging current technology advances and examining practical spectrum sharing scenarios.

Presenters

Marcela Gomez

PhD Student - Telecommunications and Networking Program, University of Pittsburgh - School of Information Sciences


Friday September 25, 2015 5:30pm - 6:30pm
George Mason University School of Law Atrium

6:45pm

Dinner and Keynote Speaker
Speakers

Jonathan Sallet

Jonathan Sallet is the General Counsel of the Federal Communications Commission. Mr. Sallet has been a partner in three law firms: O’Melveny & Myers LLP; Jenner & Block; and Miller, Cassidy, Larroca & Lewin. He served as Chief Policy Counsel for MCI Telecommunications, later MCI WorldCom. Mr. Sallet also served as Director of the Office of Policy & Strategic Planning for the Department of Commerce, and was a law clerk to the Honorable Lewis F. Powell, Jr... Read More →


Friday September 25, 2015 6:45pm - 9:30pm
George Mason University School of Law
 
Saturday, September 26
 

9:00am

Market competition following Mexico’s Telecommunications and Broadcasting Reform: Present and Future
Paper Link

This paper explores the effects of the increased competition in the telecommunications and broadcasting sectors that will be brought about by Mexico's recent Telecommunications Reform (2013). The Reform includes measures to encourage competition in the telecommunications sector by way of a new institutional framework. A new Federal Telecommunications Institute (IFT) has been set up, with the power and autonomy to regulate competition in these markets. Specialist tribunals have also been set up, and an amendment made to the amparo law to prevent any immediate overturning of the regulator’s rulings. This is in addition to encouraging foreign investment by allowing 100% foreign investment in the telecoms sector and an opening up of the broadcasting sector, allowing for up to 49% foreign capital, subject to a reciprocal investment deal in the corresponding country of origin. When it comes to policies on market competition, we examined the results of the new designation of “preponderante” (market dominant agent) implemented by the Reform, which grants the new regulator an immediate entitlement to impose pro-competition requirements on any economic agent with a greater than 50% national share in a given sector. The regulator may thus impose asymmetric regulation on interconnection, local loop unbundling, passive infrastructure sharing, and roaming, and potentially call for the divestment of assets to prevent anti-competitive behavior.

The paper is organized into two main sections. The first section presents the institutional and regulatory progress made, including the aforementioned legislation designed to foster competition in the telecommunications and broadcasting service markets. The second section focuses on the effects of the Reform, describing progress in the implementation of these measures in 2014 and the market’s response over the course of 2015, specifically in terms of the following variables:

- Prices and investment: trends in prices to end-users, and whether there are any changes in investment flows on the part of existing operators and newcomers to the market as a result of the new regulation.
- Distributive effects of the Reform: we break down figures for access to and expenditure on telecommunications services by decile (Engel curves) and analyze the distribution of telecommunications services as a function of household income, comparing the results between 2012 and 2014.
- Economic activity in the telecommunications industry: changes in economic activity are examined, based on published GDP figures attributed to “other telecommunications”.

We also posit future scenarios for how the Reform might be implemented within the wider context of Mexico’s institutional setup. In the short term (2014-2015), successful implementation of the Reform is possible. In the long term, however, two scenarios are foreseeable: optimistically, the Reform may see successful, long-term deployment; less optimistically, success may be limited, since it would require a far-reaching transformation of Mexico’s overall institutional setup.

Moderators

Donald Stockdale

Bates White

Presenters

Cristina Casanueva-Reguart

Universidad Iberoamericana

Authors

Saturday September 26, 2015 9:00am - 9:32am
GMUSL - Room 221

9:00am

The Training Difference: How Formal Training on the Internet Impacts New Users
Paper Link

This paper will address a question that is relevant to stakeholders in the public and private sectors: Have investments in programs to encourage broadband adoption paid off? After five years of attention to the issue (dating to the American Recovery and Reinvestment Act’s investments in the Broadband Technology Opportunities Program and the release of the National Broadband Plan), the question remains relevant in light of ongoing gaps in home broadband adoption in the United States.

The paper will address this question in two ways:

1) A review of the research and "best practices" that have arisen in the past several years exploring broadband adoption programs.

2) Through analysis of a unique longitudinal dataset that interviews new Internet users first within three months of getting home Internet service and a second time eight months later after they have acquired some experience with home Internet service.

The data: For analysis, the paper uses data gathered in a telephone survey of customers of an entry-level broadband Internet service of a major national Internet service provider (ISP). More than 700 respondents were interviewed twice, eight months apart (January 2014 and September 2014). The survey asked respondents a number of questions about online activities (e.g., whether they have looked for a job online) and attitudes about Internet use (e.g., its impact on social ties, educational opportunities, etc.). The survey also asked respondents to assess their levels of comfort with computers and the Internet, as well as whether they had formal training on how to use the Internet (through, for example, a local library or a community center).

Analysis: The longitudinal design allows statistical analysis to be conducted that compares results for respondents between Time 1 and Time 2, while controlling for baseline levels of digital skills and other demographic factors. Questions the paper will explore include:

• Are changes in self-reported digital skills attributable to whether the respondent had formal training on the Internet?

• Does the incidence of doing online job searches vary with having had Internet training, controlling for baseline levels of digital skills and/or changes in digital skills over time?

• How large are any impacts from formal Internet training on behavior and attitudes?

The unique contribution of this paper is its use of longitudinal data. To the author’s knowledge, prior research has not interviewed the same set of broadband users comparing behavior early on in their adoption curve with responses at a second time. The ability to assess the impacts of formal Internet training is also unique. Given investments in the public and private sectors to close broadband adoption gaps, the research speaks to stakeholders’ interest in understanding programs aimed at encouraging home broadband adoption. With the Federal Communications Commission beginning the process of adapting the Lifeline program (currently supporting telephone service) to broadband, the research results should be timely for that proceeding.
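
As an illustration of the kind of before/after analysis described above, here is a minimal sketch in Python using statsmodels; the variable names (skills_t1, skills_t2, had_training, job_search_t2), the toy data, and the specification are hypothetical stand-ins, not the author's actual model.

import pandas as pd
import statsmodels.formula.api as smf

# Toy wide-format data: one row per respondent, with Time 1 and Time 2
# measures. All column names and values are hypothetical.
df = pd.DataFrame({
    "skills_t1":     [2, 3, 1, 4, 2, 3, 1, 2, 4, 3],
    "skills_t2":     [3, 3, 2, 4, 3, 4, 1, 2, 4, 4],
    "had_training":  [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "age":           [34, 51, 45, 29, 62, 38, 57, 41, 48, 36],
    "job_search_t2": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
})

# Did Time 2 skills improve more for trained respondents, controlling for
# baseline skills and a demographic factor?
skills_model = smf.ols("skills_t2 ~ had_training + skills_t1 + age", data=df).fit()
print(skills_model.params)

# Does online job search at Time 2 vary with training, again controlling for
# baseline skills? (A linear probability model keeps the sketch simple.)
job_model = smf.ols("job_search_t2 ~ had_training + skills_t1 + age", data=df).fit()
print(job_model.params)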

Moderators
Presenters

John Horrigan

Senior Researcher, Pew Research Center
I have done extensive work on tech adoption, including barriers to adoption, as well as exploring the impacts of online connectivity. I have done this at the Pew Research Center, the FCC (National Broadband Plan), and as a consultant. I work in DC, but am a proud resident of Baltimore, MD.


Saturday September 26, 2015 9:00am - 9:32am
GMUSL - Room 332

9:00am

Adding Enhanced Services to the Internet: Lessons from History
Paper Link

In this paper, the authors draw on over 20 years of personal involvement in the design and specification of enhanced services on the Internet (Quality of Service, or QoS) to put the current debates into context and dispel some of the confusion that swirls around service differentiation. This paper describes the twenty-year failure to get QoS capabilities deployed on the public Internet -- the technical, economic and regulatory barriers, and the implications of this failure for the future.

This paper has four parts. The first is a historical perspective, drawing on our own early publications. One paper, from 1994, is a time-capsule of the state of the Internet in 1993. It describes how severe congestion on the NSFnet backbone in the preceding decade compelled its engineers to implement traffic prioritization to support interactive services such as remote login. Another paper, from 1992, framed the architecture of enhanced services within the IP protocol suite -- ideas that were refined, standardized and implemented over the next decade. But nowhere on the global Internet are these services found today.

The next section of the paper reviews the reasons behind this failure, which were mostly unrelated to technology; indeed, technical mechanisms for QoS were deployed in private IP-based networks. Rather, the obstacles to global deployment of QoS mechanisms were coordination failures among ISPs. Although the Internet has a standards body (the IETF) to resolve technical issues, it lacks any similar forum to discuss business issues such as how to allocate revenues among ISPs from enhanced services. ISPs feared such discussions in the mid-90s as risking anti-trust scrutiny. Thus, lacking a way to negotiate the business implications of QoS, it was considered a cost rather than a potential source of revenue.

The third section examines the context of current regulatory resistance to enhanced services -- why the fears of abuse seem to trump potential benefits. We explain why the term "network neutrality" does not capture the desired outcome, which is to prevent abusive behavior. Our paper introduces the basic technical issues that shape QoS implementations, and how they can affect applications running over the Internet.

The fourth section discusses implications of this reality for the future. As IP-based networking subsumes all previous critical communications infrastructure, there is increasing interest in how to satisfy societal requirements for reliability, ubiquity, and resiliency of services in times of crisis. We argue that this last requirement entails prioritizing certain services at times across multi-provider networks. The historical failure of such coordination on the Internet has motivated a shift that will crucially alter the economic and regulatory landscape: a few large ISPs are building private interconnected IP-based networks, using IP technology but distinct from the public Internet. These private IP networks will allow the ISPs operating them to develop new business models, support enhanced services, and vertically integrate infrastructure and applications.

These networks are a natural industry response to the nation's need for a stable network infrastructure, but they introduce new regulatory concerns. First, this shift leaves the public Internet as a place for games, spam and social play, and perhaps starved for capital investment. Second, the new networks constitute a "shadow" activity, serving a role previously served by a regulated sector that is no longer regulated. Without regulation, these activities may carry substantial systemic risk, amplified in this case by gaps across different bodies of law, which hinder policymakers' ability to respond to problems. In light of these and other prevailing risks of the evolving Internet, we offer several recommendations based on lessons learned from unrealistic assumptions of two decades ago.

Moderators
Presenters
Authors

Saturday September 26, 2015 9:00am - 9:32am
GMUSL - Room 225

9:00am

Does Today's FCC Have Sufficient Decision Making Throughput to Handle the 21st Century Spectrum Policy Workload?
Paper Link

Today’s FCC is not as well structured to handle the reality of its spectrum policy workload as the early Commission was, and it may not even be keeping up with that workload. Indeed, there is increasing evidence that “triage” is a key issue in spectrum policy: that is, the nontransparent decision whether to even address an issue is a major determinant of its outcome. This could be both deterring capital formation for new spectrum technology R&D and creating real risks for incumbent licensees, since emerging interference issues that need rulemaking or nonroutine action are not getting resolved in a timely way.

In 1934, the new FCC took a page from the structure of the ICC, one of its predecessors, and divided the then 7 commissioners into 3 “divisions” that could operate independently in the policy areas of telephone, telegraph, and radio. There was no Administrative Procedure Act (“APA”), so rule deliberations were far simpler than today. The maximum frequency in routine use was 2 MHz and the modulation choices were just AM and radiotelegraphy. In the early days, a few of the commissioners had technical experience in spectrum issues.

Today we have the APA and nearly 70 years of court decisions that make rulemaking much more complicated. We have 5 commissioners who only make decisions en banc, with virtually no §5(c) delegation to staff on emerging issues. Allocations go to 275 GHz, but service rules have been stuck at a 95 GHz limit since 2003. The selection process for commissioners appears to be focused on nonspectrum and nontechnical issues.

The result of all these factors is long, drawn-out deliberations both on new technology issues and on the resolution of emerging interference issues. While the US’ economic competitor nations often use “state capitalism” as a key element of spectrum policy by subsidizing chosen new technologies and then cooperating to remove national and international spectrum policy limits for them, US entities in spectrum R&D often face both a lack of funding and an indifferent FCC (as well as NTIA - if access to G or G/NG spectrum is at issue).

The paper looks at a variety of spectrum policy issues the FCC has dealt with since 2000 and examines the delays involved and their impacts. The issues considered include new technology issues such as the TV White Space, the FWCC 43 GHz petition and the Battelle 105 GHz petition, as well as emerging interference issues such as police radar detector/VSAT interference, cellular booster-related interference, and FM broadcast/700 MHz LTE interference. The timelines of such deliberations will be reviewed, as well as the likely impact of these timelines on the business plans of FCC regulatees.

Finally, the paper discusses possible options for improving FCC throughput that are feasible within existing legislation, including approaches used successfully by foreign spectrum regulators.

Moderators

Jon Peha

Professor, Carnegie Mellon University

Presenters

Mike Marcus

Director, Marcus Spectrum
Independent spectrum tech & policy consultant. Overeducated in EE@MIT; survivor of 25 years at FCC; responsible for rules for Wi-Fi, Bluetooth, Zigbee, & 60 GHz



Saturday September 26, 2015 9:00am - 9:32am
GMUSL - Room 121

9:00am

Proportional Privacy in Big Data Discovery: Social Media, Smartphones, and Proportional Privacy in Civil Discovery
Abstract Link

At its core, the discovery process in civil litigation relies on a balance between open access to information and protections against over-reaching. Although broad discovery is favored, courts simultaneously warn that the civil discovery process is not meant to be a fishing expedition. Thus, the value of achieving justice through complete and thorough access to information is counter-balanced by equally important limiting principles. These limiting principles include restrictions based on relevance, burden, expense, embarrassment, privilege, and proportionality. Essentially, these limiting principles draw on an important societal value: privacy.

Privacy is a core concept that underlies the civil discovery rules, and it is one that courts must return to when resolving discovery disputes over digital data compilations. These compilations, particularly when viewed in the aggregate, present a detailed mosaic of one’s personal life. The result is a highly revealing portrait of personal details that implicate individual privacy rights. In some cases, discovery of the private portions of social media accounts or the contents of a personal smartphone should be limited based on privacy concerns.

These privacy concerns can best be addressed as part of the proportionality analysis for defining the limits of civil discovery. The 2015 amendments to Rule 26 of the Federal Rules of Civil Procedure emphasize a proportionality inquiry as a key limit to discovery: the information sought must be proportional to the needs of the case. Although this test expressly considers the financial burden and expense of discovery, “burden” should go beyond mere financial considerations and instead encompass concepts like the privacy burden. Thus, this article proposes that the non-pecuniary burden on privacy should be factored into the proportionality analysis.

By recognizing the need for proportional privacy, courts can draw meaningful boundaries to define the scope of discovery, effectively disaggregating digital data compilations to prevent overly intrusive discovery. Other tools within the court’s arsenal, such as protective orders, should be used more liberally to limit access to entire mosaics of highly personal information.

This article defines discovery of digital data compilations, using private social media account contents and personal smartphones in ‘bring your own device’ workplaces as primary examples, and explains the historical development of civil discovery under the Federal Rules of Civil Procedure through the 2015 amendments. It also summarizes general principles of privacy law and existing discovery decisions as to social media accounts and smartphones, with an analysis of the intersection between privacy and discovery. Finally, this article lays out the mechanisms by which privacy protection can serve as an additional guide for defining the scope of civil discovery, particularly through examining privacy burdens as a factor in the proportionality test.

Moderators

Michelle De Mooy

Deputy Director, Consumer Privacy Project, Center for Democracy and Technology
Michelle De Mooy is Deputy Director, Consumer Privacy Project at the Center for Democracy & Technology. Her work is focused on promoting strong consumer privacy rights through pro-privacy legislation and regulation, working with industry to build and implement good privacy practices, and analyzing emerging privacy concerns. Michelle currently sits on the Advisory Board of the Future of Privacy Forum, a privacy think tank, and has been... Read More →

Presenters

Agnieszka McPeak

Assistant Professor, University of Toledo College of Law


Saturday September 26, 2015 9:00am - 9:33am
GMUSL - Room 120

9:33am

Techno-Unemployment?
Paper Link

The objective of this paper is to examine the impact of information and communication technologies on employment. Recently concern has increased about the impact of accelerating development of artificial intelligence and automation on jobs. This issue is timely as some countries are still coping with high levels of unemployment, even after recovering from the “great recession.” As well, middle and working class incomes have stagnated for decades.

Given this setting we wish to answer the following questions: (1) What are the factors that lead to the elimination of some types of jobs through ICTs and automation and which types of employment are most vulnerable? (2) Are new jobs being created fast enough to absorb the freed workforce into higher quality employment opportunities (i.e. is a process of creative destruction unfolding)? (3) Are public policies required to mitigate these adjustments and if so, which policies?

Emerging research is beginning to show that current information technologies have greater potential than ever before to displace numerous people from their jobs and contribute to greater income inequality. Some observers argue that this should not be a concern because, as in previous technological revolutions, the economy will be able to recover and those displaced will be able to find other, often better and more rewarding jobs, including innovative forms of employment in the growing sharing economy. Arguments on this side point to unlimited human wants, which lead to innovation and the emergence of new firms that will be able to employ those displaced.

Others are more skeptical and believe that this time is different; that high-tech firms generate a lot of wealth but not a lot of employment. They feel that the less skilled will be left behind and will experience substantially lower wages, due to new competition from computers and other technologies that need to be programmed once in order to outperform a human at the same task. Recent OECD data shows that particularly medium skill levels are negatively affected, whereas there is growing demand for low and high skills.

In this paper we review the theoretical research on the effects of advanced ICTs on employment and develop a model of the effects of ICTs on employment. The paper concludes with policy recommendations. These include the need for experimentation with new approaches involving distribution of income.

Moderators
Presenters

Ian MacInnes

Syracuse University

Authors

Johannes M. Bauer

Professor and Chairperson, Michigan State University
I am a researcher, writer and teacher interested in the digital economy, its governance as a complex adaptive systems, and the effects of the wide diffusion of mediated communications on society. Much of my work is international and comparative in scope. Therefore, I have great interest in policies adopted elsewhere and the experience with different models of governance. However, I am most passionate about discussing the human condition more... Read More →

Martha Garcia Murillo

Professor, Syracuse University

Saturday September 26, 2015 9:33am - 10:05am
GMUSL - Room 332

9:33am

Interconnection and Capacity Allocation for All-IP Networks: Walled Gardens or Full Integration?
Paper Link

With Internet evolution and the convergence towards all-IP networks, the equation for interconnection in communications is changing fundamentally. While traditional interconnection agreements in telecommunications and the Internet ensured universal connectivity in a rather homogenous environment, the transition to broadband, and further the migration towards all-IP make IP-based interconnection agreements run up against a wide spectrum of different application services requiring heterogeneous levels of quality of service (QoS). Although the relevance of end-to-end inter-operator QoS has been recognized for a long time and technical means for implementing such strategies were developed more than a decade ago, there is no widespread implementation of differentiated IP interconnection agreements (cf. Weller and Woodcock 2013). Taking into account the evolution in the Internet ecosystem and the transition towards all-IP, the aim of this paper is to describe the evolution and future challenges in the markets for “all-IP interconnection”. We analyze and compare two alternative scenarios for all-IP interconnection from a network economic perspective.

Introducing a conceptual systematization of the evolutionary process, we distinguish between three stages on the way towards all-IP interconnection. Starting after the commercialization of the Internet, the first stage was characterized by traditional IP interconnection agreements ensuring universal connectivity. The public-switched telephone network (PSTN) and broadcasting networks coexisted and provided corresponding services. In a second stage, while traditional networks continued to provide fallback solutions for IP-based voice and broadcasting services, IP interconnection has been shaped significantly by content providers’ and content distribution network providers’ innovative business models resiliently responding to the insufficiencies of the underlying best effort Internet principles – mitigating their drawbacks. As a result, price and QoS differentiation has been introduced and a regionalization of traffic flows is observable. In particular, the emergence of media content providers (e.g. Netflix) has led to substantial shifts in Internet traffic patterns (cf. e.g. Reed et al. 2014). Increasing complexity has spurred the adoption of innovative interconnection agreements like partial transit and paid peering (cf. Faratin et al. 2008). The integrated provision of all-IP services (i.e. voice, data and media) within multipurpose architectures marks a third stage of IP interconnection. With major providers projecting the phasing-out of the PSTN by the end of the decade, the ultimate migration to all-IP creates an urgent need for quality-equivalent IP-based substitutes for legacy telephone services (cf. e.g. Elixmann et al. 2014). The provision of such services requires end-to-end QoS guarantees by means of capacity allocations based on active traffic management and tailored interconnection agreements. As all IP-based service provision draws on the same traffic capacities, an unprecedented rivalry between converged services results and a single market for IP-based data services is created.

We analyze two different scenarios for all-IP interconnection. In the first scenario we consider an application-specific “walled garden solution” based on logical separation between different standardized non-Internet services and the public Internet as proposed by the recent FCC regulation (cf. FCC 2015) and similarly envisaged in Europe. In the second scenario a solution based on fully integrated all-IP service provision is introduced. Based on a network economic analysis we derive implications for an economically desirable all-IP interconnection regime. As economically optimal capacity allocation requires market driven price and QoS differentiation based on the opportunity costs of network usage (cf. Knieps and Stocker 2014) and as the integrated optimization of capacity allocation can only be based on an unrestricted evolutionary search for bilateral and multilateral interconnection agreements, we argue for a fully integrated solution resulting in a flexible and resilient all-IP ecosystem relying on a continuum of interconnection agreements capable of reflecting heterogeneity in demand for QoS.


References

ELIXMANN, D., MARCUS, J.S. AND PLUECKEBAUM, T. (2014), ‘IP-Netzzusammenschaltung bei NGN-basierten Sprachdiensten und die Migration zu All-IP: Ein internationaler Vergleich’, WIK-Diskussionsbeitrag Nr. 392, Bad Honnef.

FARATIN, P., CLARK, D., BAUER, S., LEHR, W., GILMORE, P. AND BERGER, A. (2008), ‘The Growing Complexity of Internet Interconnection’, Communications & Strategies, 72(4), pp. 51-71.

FEDERAL COMMUNICATIONS COMMISSION (FCC) (2015), In the Matter of Protecting and Promoting the Open Internet, REPORT AND ORDER ON REMAND, DECLARATORY RULING, AND ORDER, GN Docket No. 14-28, FCC 15-24, Adopted: February 22, 2015, Washington D.C.

KNIEPS, G. AND STOCKER, V. (2014), ‘Market Driven Network Neutrality and the Fallacy of a Two-Tiered Internet Traffic Regulation’, Paper prepared for the 42nd Annual Telecommunications Policy Research Conference, September 12-14, 2014, George Mason University, Arlington, VA, available at SSRN: http://ssrn.com/abstract=2480963

REED, D.P., WARBRITTON, D. AND SICKER, D. (2014), ‘Current Trends and Controversies in Internet Peering and Transit: Implications for the Future Evolution of the Internet’, Paper prepared for the 42nd Annual Telecommunications Policy Research Conference, September 12-14, 2014, George Mason University, Arlington, VA, available at SSRN: http://ssrn.com/abstract=2418770

WELLER, D. AND WOODCOCK, B. (2013), ‘Internet Traffic Exchange: Market Developments and Policy Challenges’, OECD Digital Economy Papers, No. 207, OECD Publishing. http://dx.doi.org/10.1787/5k918gpt130q-en

Moderators
Presenters

Volker Stocker

University of Freiburg


Saturday September 26, 2015 9:33am - 10:05am
GMUSL - Room 225

9:33am

Do Not Track for Europe
Paper Link

Online tracking is the subject of heated debates. In Europe, policy debates focus on the e-Privacy Directive, which requires firms to obtain the consumer’s consent for the use of tracking cookies and similar technologies. A common complaint about the Directive is that clicking “I agree” to hundreds of separate cookie notices is not user-friendly. Meanwhile, there has been discussion about a Do Not Track (DNT) standard, which should enable people to express their wishes regarding tracking with a simple button in their browser.

This paper outlines the requirements for DNT, or a similar system, to be able to help website publishers and other firms comply with European privacy law. The three main points of the paper are as follows. First, a DNT system for Europe (eDNT) is possible, and the work of the World Wide Web Consortium (W3C) on the DNT standard was originally designed to support European compliance. Second, an eDNT standard could emerge from the W3C, or from elsewhere. Third, implementers do not need to wait for a standard, and there are current DNT implementations that are almost compliant with European law.

We analyse the requirements for DNT that follow from European data privacy law. We give examples of current implementations of DNT, and show that some implementations could almost be used to comply with EU law.
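
For readers unfamiliar with the mechanism, the W3C Tracking Preference Expression conveys the user's choice in a DNT request header, where "DNT: 1" signals that the user does not want to be tracked. The Python sketch below, with a hypothetical helper function, shows the kind of server-side check an implementer might perform before setting tracking cookies; how the absence of a signal should be treated under EU law is exactly the sort of requirement the paper analyses.

def tracking_allowed(request_headers: dict) -> bool:
    """Return False whenever the request carries a DNT: 1 signal.

    Note: under an eDNT-style reading, the absence of the header would still
    not amount to consent; the site would need another lawful basis.
    """
    return request_headers.get("DNT") != "1"

# Example: a request from a browser with Do Not Track switched on.
headers = {"Host": "example.org", "User-Agent": "ExampleBrowser/1.0", "DNT": "1"}

if tracking_allowed(headers):
    pass  # set tracking cookies / load third-party trackers
else:
    pass  # serve the page without tracking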

The interdisciplinary paper is written by a European legal scholar and a US scholar of engineering and public policy.

Moderators

Michelle De Mooy

Deputy Director, Consumer Privacy Project, Center for Democracy and Technology
Michelle De Mooy is Deputy Director, Consumer Privacy Project at the Center for Democracy & Technology. Her work is focused on promoting strong consumer privacy rights through pro-privacy legislation and regulation, working with industry to build and implement good privacy practices, and analyzing emerging privacy concerns. Michelle currently sits on the Advisory Board of the Future of Privacy Forum, a privacy think tank, and has been... Read More →

Presenters

Frederik Zuiderveen Borgesius

IViR Institute for Information Law (Amsterdam)

Authors

Saturday September 26, 2015 9:33am - 10:05am
GMUSL - Room 120

9:33am

Unlicensed Operations in the Lower Spectrum Bands: Why is No One Using the TV White Space and What Does That Mean for the FCC’s Order on the 600 MHz Guard Bands?
Paper Link

In 2008, the FCC authorized unlicensed use of vacant channels in the TV bands (TV white space) following the digital TV transition. Supporters of the FCC action asserted that giving unlicensed devices access to the lower bands would forestall congestion at 2.4 GHz and 5 GHz, provide mobile broadband coverage to underserved areas, and power a new wave of innovation (Google’s CEO famously said TV white space would bring about “Wi-Fi on steroids”). We were among a small minority who countered that the TV white space could not support effective unlicensed operations both because the limited bandwidth would make it inferior to 2.4 GHz and 5 GHz for Wi-Fi-type applications, and because the necessarily stringent power limits and lack of interference protection would preclude most long-range applications. We also argued that the TV white space could be used for licensed services, generating billions of dollars in auction revenue. In this paper we analyze a) the impact to date of the FCC’s TV white space policy and b) the implications of this for the FCC’s 2014 proposal to allow unlicensed operations in 600 MHz guard bands following the incentive auction, which is an extension of the TV white space policy.

The FCC’s TV white space policy to date has been a flop, as evidenced by the anemic market response. The FCC has approved only nine products for operation in the TV white space, and none of the major manufacturers of Wi-Fi routers or Wi-Fi chips fields a product that can be used there. Nor does any major smartphone or tablet vendor offer a product with the capability to use TV white space. As another indicator, users in this country have registered fewer than 600 individual devices (registrations are a proxy for sales), in contrast to the tens of millions predicted.

Although regulatory uncertainty may be a factor, the market’s anemic response to the TV white space has far more to do with fundamentals, in particular the predictably slow data rates for short-range, Wi-Fi-type applications. The maximum data rates for TV white space devices range from 3.25 to 16 megabits per second (Mbps), below the FCC’s new threshold for what constitutes broadband, compared to 600 Mbps to 1.7 gigabits per second for unlicensed devices that operate at 2.4 GHz and 5 GHz. For long-range applications, TV white space faces other challenges (chief among them the 4 watt power limit and the risk of interference from short-range devices), making licensed spectrum a superior choice.

The FCC’s decision to allow unlicensed operations in the (post-auction) 600 MHz guard bands is likely to meet the same fate. For short-range applications, because of limitations on bandwidth (six megahertz) and transmit power (40 milliwatts), unlicensed devices operating in the 600 MHz guard bands will have a data rate that is one-tenth to one-hundredth that of a Wi-Fi device operating at 2.4 GHz and 5 GHz. That handicap will trump almost any propagation advantages that the 600 MHz band may offer. For long-range voice and data communications, unlicensed operations in the 600 MHz guard bands will have even less to offer. With a power limit of 40 milliwatts, unlicensed guard band devices will simply not be capable of providing rural broadband access and other long-range communication services on any meaningful scale.
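
A rough, first-order way to see the bandwidth handicap described above, before the 40 milliwatt power limit is even considered: at a common spectral efficiency (an illustrative assumption, not a figure from the paper), throughput scales with channel width, so a 6 megahertz guard-band channel starts far behind typical 20 and 80 megahertz Wi-Fi channels.

# First-order throughput comparison driven by channel bandwidth alone.
# The spectral-efficiency figure and Wi-Fi channel widths are illustrative
# assumptions; the 6 MHz guard-band channel width comes from the text above.

SPECTRAL_EFFICIENCY_BPS_PER_HZ = 4.0   # assumed identical radio conditions

def peak_rate_mbps(bandwidth_mhz: float) -> float:
    return bandwidth_mhz * SPECTRAL_EFFICIENCY_BPS_PER_HZ

guard_band = peak_rate_mbps(6)           # 600 MHz guard-band channel
wifi_20 = peak_rate_mbps(20)             # legacy-width Wi-Fi channel
wifi_80 = peak_rate_mbps(80)             # wide Wi-Fi channel at 5 GHz

print(f"guard band: {guard_band:.0f} Mbps, Wi-Fi 20 MHz: {wifi_20:.0f} Mbps, "
      f"Wi-Fi 80 MHz: {wifi_80:.0f} Mbps")
print(f"bandwidth handicap alone: {wifi_20 / guard_band:.1f}x to {wifi_80 / guard_band:.1f}x")
# The transmit-power limit and the lack of interference protection widen the
# real-world gap further, toward the one-tenth to one-hundredth range the
# authors estimate.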

Moderators

Jon Peha

Professor, Carnegie Mellon University

Presenters
Authors

Coleman Bazelon

Principal, The Brattle Group

Dorothy Robyn

Saturday September 26, 2015 9:33am - 10:05am
GMUSL - Room 121

9:33am

Vertical Effects in Competition Law and Regulatory Decisions in Pay-Television in France, the United Kingdom and the United States
Paper Link

Vertical effects, such as input or customer foreclosure, are at the core of competition and regulatory decisions in pay-television markets. In particular, vertical effects have raised important concerns among competition authorities and regulators in France, the United Kingdom and the United States in the context of merger decisions and antitrust proceedings. Given the increasing convergence of television and telecommunication services and the growing uptake of bundles, an adequate treatment of vertical effects remains crucial.

This paper addresses vertical effects in competition law and regulatory decisions in these three countries, with a focus on vertical input and customer foreclosure, exclusive dealing, countervailing buyer power and some aspects of the implementation of remedies. The empirical evidence is formed by an exhaustive analysis of antitrust, merger and regulatory decisions in these countries from 1996 to 2014.

First, I examine the different treatment of buyer power and its use as an argument to justify relaxing requirements for the authorisation of mergers in France and the United Kingdom (countervailing buyer power). A more consistent and evidence-based treatment of bargaining power could improve the overall merger review process.

Authorities in these three countries largely share the concerns about input and customer foreclosure in pay-television markets and do not underestimate the risks for competition in those markets. The approaches taken, and the starting points in terms of market structure, are somewhat different. In the United States, where competition in pay-TV markets has improved in the past twenty years, the regime is transitioning from an ex-ante framework (programme access rules) to an ex-post regime with arbitration procedures and commitments from providers as a result of mergers and acquisitions. In the United Kingdom and France, in contrast, regulation is moving, though more slowly, from ex-post enforcement only (through merger review) to a set of ex-ante obligations.

Exclusive dealing is another source of competition concerns. Bidding rules for sports and movie rights have been found anti-competitive in France and the United Kingdom (e.g. holdback clauses, joint selling agreements, most-favoured-nation clauses) and can, in the presence of powerful pay-television incumbents, foreclose efficient entry for many years. Careful control of these negotiations is therefore an important condition for promoting competition in pay-television markets. In France and the United States, some limitations have been placed on how exclusive dealing may be conducted, even though high market concentration may suggest stronger action.

In view of the recent emergence of Online Video Distributors (OVDs), a major opportunity may be arising for Europe. Because European broadband markets are, in general terms, more competitive than those in the United States, OVDs may encounter fewer barriers to becoming a credible alternative to traditional pay-television providers. Nevertheless, this depends on developments in network neutrality and, more broadly, in broadband market competition in the coming years.

The overall conclusion is that, while the United States has been moving away from stringent ex-ante regulation in pay-television markets, justified by an improvement in competition dynamics, France and the United Kingdom have only partially succeeded in addressing these concerns – in many ways more serious than in the United States – due to the lack of a legal framework for issuing ex-ante regulation.

Moderators

Donald Stockdale

Bates White

Presenters

Agustin Diaz-Pines

European Commission

Authors

Saturday September 26, 2015 9:33am - 10:10am
GMUSL - Room 221

10:05am

Sovereignty and Property Rights: Conceptualizing the Relationship between ICANN, ccTLDs and National Governments
Can country code top level domains (ccTLDs) be considered property? Or are they sovereign rights? Or are they somehow both? In recent litigation involving the top level domain for Iran (.IR), plaintiffs sought to garnish the domain as a form of property that could be used to compensate victims of terrorist acts allegedly backed by the Iranian state. Similar cases seeking to garnish ccTLDs have affected Syria (.SY) and the Congo (.CG).

In the theory and practice of Internet governance, there is a tendency to resist recognizing ccTLDs as a property right. These arguments tend to view ccTLDs as trustee relationships and argue that recognizing private property rights will undermine the rights of the domain registrants within the ccTLDs. Some (but not all) court cases have found that second-level domains are not property, but services.

On the other hand, governments are keen on asserting sovereignty rights over ccTLDs. They claim that sovereigns should be the ultimate authority over delegation and public policy for ccTLDs. In countries like Iran with a long-term conflict with the US, sovereignty rights are thought to immunize them from confiscation by outsiders. Some sovereignty claims closely mirror property claims.

In physical space, sovereign states have recognized territories. Sovereignty results primarily from a state’s ability to maintain a monopoly on the legitimate use of violence in that territory, but also from recognition of its sovereignty by other states. In cyberspace, the delegation of a domain name representing a country (e.g., .BR for Brazil, or .IN for India) involves an unusual three-party relationship between a government, a party that operates the domain (delegee) and ICANN. ICANN, as the global coordinator and policy maker for the domain name space, must delegate a country code or name to a specific operator – otherwise the domain simply does not exist on the Internet. And because the DNS root is a globally shared resource, its management involves more than the wishes of the sovereign state but also involves obligations to "the global Internet community." Yet, as a nonprofit under U.S. federal and California jurisdiction, ICANN’s role seemingly subjects ccTLD delegees to civil law claims of the sort seen in the Iran and Congo cases.

What, then, is the best way to shape the relationship between ccTLD delegees, ICANN and the governmental authority referenced by a ccTLD string, and what role should sovereignty or property rights claims play? The scholarly literature has left these questions unsettled. It has studied mainly the relationship between states and ICANN, or between the state and the ccTLD delegee. Studies that consider the triangular relationship of ICANN, delegees and states have not applied both property and sovereignty theories. Either it has assumed that states have sovereignty rights over their ccTLDs, or it has not dealt with the applicability of the theories of sovereignty and property rights to this relationship.

This paper uses a law and economics framework to analyze the relationship between ccTLD delegation, theories of sovereignty and theories of property rights. While property is a private right and sovereignty is a public right, international relations theorists have argued that they have some commonalities. Both, for example, involve claims of exclusivity. Both are also invoked in allocating rights over international resources, such as rights over the sea and over space. By critically and systematically examining the consequences of applying sovereignty and property rights to ccTLDs, this paper attempts to provide practical insights into the best way to handle conflicting claims over ccTLD delegations.

Moderators

Donald Stockdale

Bates White

Presenters

Farzaneh Badiei

Associate Researcher, Humboldt Institute for Internet and Society
Farzaneh Badiei is an associate researcher at Humboldt Institute for Internet and Society. She is finalizing her PhD at the Institute of Law and Economics, Hamburg University, Germany. Farzaneh’s research focuses on the institutional design of online private justice systems in commercial contexts. She is also interested in studying online intermediaries such as social networks and payment intermediaries and their justice systems, using a... Read More →

Authors

Milton Mueller

Professor, Georgia Institute of Technology
(TBC) Milton Mueller is Professor at the School of Public Policy, Georgia Institute of Technology, USA. Mueller received the Ph.D. from the University of Pennsylvania’s Annenberg School in 1989. His research focuses on rights, institutions and global governance in communication and information industries. He is the author of two seminal books on Internet governance, Ruling the Root and Networks and States. Mueller was one of the founders of... Read More →

Saturday September 26, 2015 10:05am - 10:37am
GMUSL - Room 221

10:05am

Unlicensed Operations in the 600 MHz Guard Bands: Potential Impact of Interference on the Outcome of the Incentive Auction
Paper Link

In its June 2014 Report and Order on the 600 MHz incentive auction, which will repurpose TV broadcast spectrum for mobile broadband service, the FCC authorized the use of unlicensed devices in the post-auction guard bands. Although the FCC maintains that it will not be a problem, key stakeholders have argued that the operation of unlicensed devices in the 600 MHz guard bands could cause harmful interference to licensed mobile LTE services in nearby bands. In this paper, we analyze the potential for such interference and its implications for the incentive auction.

We begin by critiquing the FCC’s technical analysis, which found that an unlicensed device using the 600 MHz guard bands would interfere with the operation of a licensed mobile device using LTE at 600 MHz if the two devices were less than 20 feet apart. Although that finding will give potential bidders serious pause, it understates the real problem. When we modify key FCC assumptions to reflect conditions at hand, the estimated interference range is 45 feet to 75 feet.

Next, we analyze how harmful interference will reduce LTE network capacity and the corresponding market value of 600 MHz spectrum. Based on our analysis of an LTE network in a band similar to 600 MHz, we find that a 5 percent loss of capacity will lower the value of the affected spectrum by at least 9 percent; a 20 percent loss of capacity will lower its value by at least 43 percent; and a 35 percent loss of capacity will eliminate most (93 percent) of its value.

Finally, we trace how the prospect of interference, through its adverse impact on the value of spectrum, could damage the incentive auction. Two features of the auction magnify the impact of the risk of interference. First, at the allocation stage, bidders will be unable to distinguish between those blocks that are subject to interference from unlicensed operations and those that are not. Thus, rational bidders will assume that all blocks are subject to interference. We estimate that the allocation-stage revenues will decrease by an amount ranging from 9 percent (for a 5 percent level of interference) to 93 percent (for a 35 percent level of interference). Using for purposes of illustration the FCC’s 84-megahertz band plan (84 megahertz of broadcast spectrum cleared and 70 megahertz repurposed for mobile broadband), with a 5 percent level of interference, total allocation-stage revenues will be reduced by $4 billion, to about $40.3 billion. At the 35 percent level, allocation-stage revenues for this plan will be reduced by about $41.2 billion, to $3.0 billion.

Second, even though some of the revenue lost at the allocation stage will be recovered in the assignment round (when bidders will be able to identify individual blocks), the amount of TV spectrum cleared depends solely on the revenue generated in the allocation stage, which must cover total clearing costs. Thus, the prospect of interference, by reducing allocation-stage revenues, will limit how much spectrum even makes it to the assignment round. For example, at the 10 percent level of interference, allocation-stage revenues will fall short of required clearing costs for the FCC’s five largest band plans. Under that scenario, the largest feasible plan would be the illustrative band plan, which would clear 84 megahertz and repurpose 70 megahertz for mobile broadband. This is 50 megahertz less than would be repurposed under the FCC’s largest band plan (144 megahertz cleared, 120 megahertz repurposed), and some of it would be subject to interference.

We estimate that every 10 megahertz of broadcast spectrum that is not repurposed for mobile broadband represents at least a $60 billion loss in consumer welfare. Thus, with only a 10 percent level of interference, the best possible outcome (70 megahertz repurposed) would represent at least a $300 billion loss in consumer welfare relative to the FCC’s largest band plan (120 megahertz repurposed).
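
The chain of figures reported above can be strung together in a small calculator, sketched below; the baseline allocation-stage revenue is backed out from the $4 billion / $40.3 billion example, and the linear interpolation between the reported capacity-loss points is a simplifying assumption of ours, not the authors'.

# Illustrative calculator built from the point estimates quoted above.
# Interpolating linearly between those points is a simplifying assumption.

VALUE_LOSS_BY_CAPACITY_LOSS = {0.05: 0.09, 0.20: 0.43, 0.35: 0.93}
BASELINE_ALLOCATION_REVENUE_BN = 44.3   # implied by the $4bn / $40.3bn example
WELFARE_LOSS_PER_10_MHZ_BN = 60.0       # consumer-welfare figure from the text

def value_loss(capacity_loss: float) -> float:
    """Interpolate the reported capacity-loss -> value-loss points."""
    pts = sorted(VALUE_LOSS_BY_CAPACITY_LOSS.items())
    if capacity_loss <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if capacity_loss <= x1:
            return y0 + (y1 - y0) * (capacity_loss - x0) / (x1 - x0)
    return pts[-1][1]

for cap_loss in (0.05, 0.10, 0.20, 0.35):
    revenue = BASELINE_ALLOCATION_REVENUE_BN * (1 - value_loss(cap_loss))
    print(f"{cap_loss:.0%} capacity loss -> ~{value_loss(cap_loss):.0%} value loss, "
          f"allocation-stage revenue ~${revenue:.1f}bn")

# If the revenue shortfall means 50 MHz less spectrum is repurposed
# (70 MHz instead of 120 MHz), the implied consumer-welfare loss is at least:
print(f"welfare loss: >= ${(50 / 10) * WELFARE_LOSS_PER_10_MHZ_BN:.0f}bn")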

Moderators

Jon Peha

Professor, Carnegie Mellon University

Presenters

Dorothy Robyn

Authors

Coleman Bazelon

Principal, The Brattle Group

Saturday September 26, 2015 10:05am - 10:37am
GMUSL - Room 121

10:05am

The Impact of ICTs on Employment in Latin America: A Call for Comprehensive Regulation
Paper Link

The purpose of this research is to determine the manner in which employment will evolve as a result of developing information and communication technologies (ICTs). Prior research has shown that both communication and automation are displacing certain types of employment, mainly those requiring middle-level skills. Given the evidence from countries with greater ICT penetration, we expect Latin America to follow a similar path and potentially be negatively affected by the elimination of some professions.

Researchers who have analyzed the impact of ICTs on employment have found a gradual move from agriculture to manufacturing and then to services, with service economies being the latest stage of development. The economic prospects for these service economies, however, will depend on the composition of the service professions. Ideally, societies should aim to employ people in professions that require higher-level skills, as these are likely to result in higher incomes and improved development prospects.

ICTs are currently generating employment in the Latin American region and this is likely to remain the case for some years as these technologies make business and government operations more efficient. It is unclear whether this will continue to be the case in the long term given weaknesses in the region’s economic and political environment. Latin America could be relegated to providing simple services that pay low wages, potentially increasing poverty in the region.

Using statistics from the World Bank, the author has developed a panel of Latin American countries covering a 20-year period to allow for comparisons across countries and to determine whether countries that have invested in education and R&D have also benefited from service employment requiring higher-level skills.

Initial results indicate that broadband communications has a positive effect on employment and is likely to continue to do so, as this technology is not yet widely deployed. Wireless connectivity, on the other hand, was not significant. Factors found to negatively affect employment were, as expected, capital formation, corruption, and education, the last of which may be explained by the poor quality of schooling that still prevails in the region. Factors found to positively affect employment include ICT imports, the creation of new businesses, ease of doing business, and labor protections such as paid leave.
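A country-year panel specification consistent with this description (a hedged sketch; the author's exact estimating equation and controls are not stated in the abstract) might take the form:

\[
Emp_{it} = \beta_1\, Broadband_{it} + \beta_2\, Wireless_{it} + \boldsymbol{\gamma}'\mathbf{X}_{it} + \mu_i + \lambda_t + \varepsilon_{it}
\]

where Emp_{it} is an employment measure for country i in year t, X_{it} collects controls such as capital formation, corruption, education, ICT imports, and business-environment indicators, and \mu_i and \lambda_t are country and year effects.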

Presenters

Martha Garcia Murillo

Professor, Syracuse University


Saturday September 26, 2015 10:05am - 10:40am
GMUSL - Room 332

10:05am

Creating Connectivity: Trust, Distrust, and Social Microstructures at the Core of the Internet
Paper Link
Since the commercialisation of the internet in the 1990s, many network operators worldwide have been confronted with the paradox of internet interconnection: private network actors such as internet access providers, carriers and content-heavy companies compete in a market environment, but in order to have a product, they need to cooperate. This article illuminates practices of cooperation and coordination between networkers empirically from a micro-social perspective. The text focuses on the question of what role trust but also distrust play in mitigating legal, architectural and economic uncertainties in the field of internet interconnection. Preliminary findings from 38 qualitative interviews with network engineers, peering coordinators and internet exchange representatives across the globe are presented. Such networking professionals play a critical role in establishing, maintaining and dissolving connectivity globally. The article shows how trust and distrust work in tandem in this field. It shows how distrust can cause critical moments that lead to reflection about existing modes of governance. On a theoretical level, the study proposes a conception of internet interconnection as a global microstructure that allows for coordination in the absence of multi-lateral regulation or overarching organisational structures.

Presenters

Uta Meier-Hahn

Doctoral Researcher, Humboldt Institute for Internet and Society
Doctoral Candidate at Freie Universität Berlin. Interested in internet interconnection and informal regulation of internet infrastructure. | Formerly Academic Editor at the Internet Policy Review - http://policyreview.info. Likes to travel light. #infrastructure #interconnection #economicsociology


Saturday September 26, 2015 10:05am - 10:40am
GMUSL - Room 225

10:05am

Is 'New' 'Stronger'?: Online Behavioral Advertising and Consumer Privacy Legislation
Paper Link

Online behavioral advertising (OBA), or behavioral targeting advertising, refers to an online practice that delivers advertising messages to consumers based on data about their prior and real-time online activities. Advertisers can effectively track consumer preferences by having access to diverse personal information gathered online. It is not new for profit-making companies to strive to gather personal information so that they can target their advertisements to the “right” consumers and in turn minimize advertising waste. Advertisers assert that behavioral advertising not only boosts conversion rates but also maximizes consumer satisfaction, benefiting consumers while causing them no harm. Despite the online advertising industry’s argument, according to a survey conducted by Consumer Reports in 2014, 85% of online consumers oppose personal data tracking for advertising purposes, regardless of whether the data are anonymized. In addition, 76% of consumers responded that they saw “little or no value” in targeted ads.

The United States Federal Trade Commission (FTC) has been watching online tracking and behavioral profiling for advertising since the mid-1990s. As considerable privacy concerns have been raised on the consumer side, the FTC has made many regulatory efforts, including its proposals of self-regulatory principles in 2007 and a “Do Not Track” mechanism in 2010. These efforts, however, have not enabled consumers to effectively control the collection and use of their personal data. Although the U.S. Congress has also proposed online consumer privacy legislation, no proposal has yet led to a comprehensive privacy law. As part of an ongoing effort to strengthen privacy regulation, two new proposals have emerged: the White House’s draft of the Consumer Privacy Bill of Rights Act of 2015 and the Congressional Privacy Bill containing the Commercial Privacy Rights Act of 2015. Both industry and consumer advocates have been presenting their evaluations of these proposals since their release. Predictably, companies complain that they are unreasonably stringent, whereas consumer organizations complain that they are still weak.

Aside from the evaluations by interested parties, this proposed research intends to analyze the two latest proposals in terms of whether they could be an effective measure to resolve the problems and concerns raised by consumers in relation to OBA. Based on a socialist concept of privacy explicated by Fuchs (2012), this study examines whether the new proposals provide sufficient protection for consumer privacy defined “as a collective right of exploited groups that need protection from corporate domination that uses data gathering for accumulating capital, for disciplining workers and consumers, and for increasing the productivity of capitalist production and advertising.”

To define “consumer concerns,” this study analyzes documents of consumer complaints specifically related to OBA, including the documents listed in the “Resources for Consumer Concerns about Privacy” provided by the Consumer Federation of America, the complaints filed with the FTC by consumer and privacy groups, and prior research on consumer perceptions of OBA. On the basis of the OBA problems extrapolated from these sources, the study evaluates how these problems can be resolved by the new legislative proposals. As an additional discussion of a better measure to protect consumer privacy, this study will compare the two proposals with the draft European General Data Protection Regulation released in January 2012, which is considered a comprehensive data protection law providing consumers with strong protection from the unauthorized use of their personal data. This study will be a significant addition to the discussion of the appropriate level of consumer privacy protection against online tracking and behavioral profiling.

 


Moderators

Michelle De Mooy

Deputy Director, Consumer Privacy Project, Center for Democracy and Technology
Michelle De Mooy is Deputy Director, Consumer Privacy Project at the Center for Democracy & Technology. Her work is focused on promoting strong consumer privacy rights through pro-privacy legislation and regulation, working with industry to build and implement good privacy practices, and analyzing emerging privacy concerns. Michelle currently sits on the Advisory Board of the Future of Privacy Forum, a privacy think tank.

Presenters

Ju Young Lee

PhD Candidate / Instructor, Pennsylvania State University
My research interests include telecommunications policy and regulation, digital inclusion and broadband policy. Currently, I am doing research on municipal WiFi network deployment in different countries.


Saturday September 26, 2015 10:05am - 10:40am
GMUSL - Room 120

10:40am

Coffee Break
This year during coffee breaks there will be several identified topic tables in the Atrium with TPRC Program Committee members and attendees eager to discuss the latest issues. If you're new to TPRC and seeking a place to meet new friends, or if you are returning and seeking lively discussion, look for the signs and join the conversation.

Saturday September 26, 2015 10:40am - 11:10am
George Mason University School of Law Atrium

11:10am

Internet and Mobile Phone Use by Refugees: The Case of Za'Atari Syrian Refugee Camp
Given the scourge of armed conflict and increasing incidents of severe weather, the number of displaced persons around the globe is at an all-time high. Those fleeing armed conflict may arrive in their new location with a mobile phone and, in some cases, well-developed internet skills. This changing landscape and the skill sets of refugees are creating challenges and opportunities alike for the United Nations agencies (e.g. UNHCR, World Food Program, UNICEF) and implementation partners tasked with meeting their needs. Also, as the average length of stay in a refugee camp globally is 17 years, this scenario represents an interesting use case at the intersection of the traditional information and communication technologies for development (ICTD) and crisis informatics disciplines. In particular, this research seeks to understand: What level of mobile phone ownership and use is typical among refugee camp youth? How has their use changed (if at all) between their pre-conflict and refugee lives? What types of internet-based services might interest refugee youth, and what are service providers likely to make available?

Data for this exploratory research were collected via pen-and-paper survey in the Za’atari Syrian Refugee Camp in Jordan in January 2015. Za’atari is an outlier among refugee camps, with its wealthier and more IT-savvy refugee population. Hence, this analysis helps in understanding mobile phone use in what is now a state-of-the-art refugee context, but one likely to reflect future conditions in other camps around the world. Based on data from 174 youth, the research finds 86% of youth own mobile handsets and 83% own SIM cards. Even with reasonably high levels of SIM card ownership, 79% of youth also borrow SIM cards from friends and family. Unsurprisingly, mobile phones are the most popular medium for accessing the internet. This was true in Syria as well, but is even more so in the camp. In the camp, over half of youth access the internet one or more times per day. In terms of communication services, WhatsApp was the most frequently used to communicate with those in both Jordan and Syria, while mobile voice was used more frequently for communicating within Jordan. When asked about favorite online information sources, the six most frequently mentioned were Google, Facebook, YouTube, Skype, TV and Wikipedia, with Google being significantly more popular. Without resource constraints, the youth indicated they would like more access to instant messaging/WhatsApp, news sources and increased opportunities to communicate with people via social media. Like many youth around the globe, the Za’atari youth frequently help family and friends, as well as receive help, to use the internet.

A multivariate analysis examining predictors of camp-based internet use found education, sex and previous experience, respectively, to be the most significant predictors. These results are in line with findings on internet use from studies in a variety of contexts. Given the high level of internet use among refugees, they appear to be likely candidates for online programming. However, the youth have less computer experience, as compared with mobile phone experience, than the adults. Hence, basic computer training may be necessary prior to successful online program implementation. Additionally, our experience and observations in the camp suggest the UN agencies and their implementation partners have the skills to successfully implement such a program.
These results provide interesting evidence of the use of mobile phones as an important source of information for displaced persons. It also provides insights into the transition from a crisis to the recovery phase of a disaster. As Za’atari camp continues to grow and develop from a temporary to a somewhat permanent residence, the diversity and complexity of ICT-based services evolves as well.

Moderators

Heather Hudson

U of Alaska Anchorage

Presenters

Carleen Maitland

Associate Professor of Information Sciences & Technology, Penn State
In addition to serving as the TPRC42 Program Chair, I research the effects of international policy and organizational contexts, particularly of international development organizations, on access to, and use of, information and communication technologies. I've carried out studies in the U.S., Europe, Africa, and the Middle East, working with organizations such as the United Nations and international non-governmental organizations.


Saturday September 26, 2015 11:10am - 11:40am
GMUSL - Room 332

11:10am

Search Advertising: Is there a Feedback Effect?
Paper Link

Like many other media services, internet search services provide two-sided platforms that coordinate interactions between media consumers and media advertisers. As with other media services, especially newspapers, the possibility has been raised that positive feedback between the consumer and advertising sides of search platforms (more consumer searchers will attract more advertisers, which will attract more consumers that attract even more advertisers, and so on) may amplify an initial advantage in either consumer usage or paid ads with the result that an initially advantaged search engine will capture a dominant share of a search market. To date, however, there has been no empirical research to establish whether two-way feedback between the consumer and paid advertising sides of a search platform exists, whether it is positive or negative if it does exist, and whether such feedback is of sufficient magnitude to significantly influence concentration in search markets. To our knowledge, this paper is the first to present evidence based on a rigorous empirical study for the existence, magnitude and directionality of feedback effects in search markets.

Because the consumer/searcher and advertiser sides of a search service may influence each other, the two sides cannot be studied separately if one wants to fully understand search market dynamics. We employed three-stage least squares to estimate a simultaneous equations model that allows for ads-consumer usage feedback, using 2007 data reported by Yahoo for over 70 of its local metropolitan markets in the United States. Data reported by Yahoo on clicks on paid search ads for four locally supplied products were used as a measure of consumer usage, and our counts of the numbers of paid ads on the search pages elicited by keywords corresponding to the four local products were employed as the measure of paid ad sales. Use of metropolitan market data for a single search service was necessary because, while their services are very similar, the major independent search services in the U.S. (Google, Yahoo and Microsoft at the time our data were collected) differed sufficiently in the types of data and geographic options they offered advertisers that econometrically sound direct comparisons were not feasible. Use of local markets within a single large country as a source of cross-sectional variation was also deemed preferable to constructing an international cross-section, because it would have been difficult to adequately control for country-specific differences in culture, internet use, and regulations affecting the larger advertising market.
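A stylized two-equation version of such a system (an illustrative sketch only; the variable names and controls are placeholders, not the paper's exact specification) can be written as:

\[
\begin{aligned}
Clicks_m &= \alpha_0 + \alpha_1\, Ads_m + \boldsymbol{\beta}'\mathbf{X}_m + \varepsilon_m \\
Ads_m &= \gamma_0 + \gamma_1\, Clicks_m + \boldsymbol{\delta}'\mathbf{Z}_m + \nu_m
\end{aligned}
\]

where m indexes Yahoo's local metropolitan markets. Positive two-way feedback corresponds to \alpha_1 > 0 and \gamma_1 > 0, and three-stage least squares is used because Clicks_m and Ads_m are jointly determined.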

Our three-stage-least squares estimates provide statistically significant evidence for the existence of positive and quite large two-way feedback between the consumer and advertiser sides of Yahoo’s local internet search service. We use estimates for the model’s parameters to show how feedback between the consumer and advertiser sides of a search service increases the financial return to resources invested in promotional efforts to increase the number of consumers using Yahoo’s search service. Promotion through other media and investments to improve search accuracy are examples of such investments. We also discuss the importance of positive two-way feedback as a mechanism that may contribute to the observed pattern of high concentration in national internet search markets.

The empirical findings presented in this paper contribute toward a more complete understanding of search service economics and thereby to the development of policies that might better address the challenges raised by highly concentrated search markets.

Moderators

Matt Hindman

George Washington University

Presenters

Steve Wildman

Michigan State University and University of Colorado
Steven S Wildman is a Senior Fellow at the Silicon Flatirons Center and a Visiting Scholar with the Interdisciplinary Telecommunications Program, both at the University of Colorado, Boulder. Prior academic positions include 15 years as the J.H. Quello Chair of Telecommunication Studies at Michigan State University, where he also directed the Quello Center for Telecommunications Management and Law.


Saturday September 26, 2015 11:10am - 11:42am
GMUSL - Room 221

11:10am

Assessing the Health of Local Journalism Ecosystems: Toward a Set of Reliable, Scalable Metrics
Paper Link

In 2009, The Knight Commission released a report identifying access to credible and relevant information as a key requisite for healthy communities (Knight Commission, 2009). This report subsequently led to a comprehensive assessment by the Federal Communications Commission of how community information needs are being met in the broadband era (Waldman, 2011), as well as further exploration by the Commission into how this issue could be researched in ways that could inform policymaking (Friedland et al., 2012). The present research seeks to continue this line of inquiry by developing and testing a set of scalable performance metrics that could serve as analytical tools for assessing variations in the health of local journalism ecosystems across communities or over time; or as components of more comprehensive assessments of the relationship between the health of local journalism and other vital measures of community health, engagement, and political participation.

This research compares online local news output for three communities that vary in important ways, such as their average household income, average level of education, and broadband penetration. To assess the health of the local journalism ecosystems, four researchers first identified all possible outlets for journalism (both digital and “traditional”) in each of the three communities. The online presence of each outlet – websites as well as social media – was then determined; virtually all outlets within each community had an online presence. Content from one constructed week of social media (Twitter and Facebook), and one continuous week of website news stories was then coded for several key variables, including: originality, whether it addressed a pre-determined “critical information need” (Friedland et al., 2012), and whether it was about the target community.

The quantity of information produced by these outlets was adjusted for population size by calculating, for each community, the number of journalism stories and social media postings produced in the given week per 10,000 residents; per capita measurements (as well as strict percentages) were also used to assess the outputs according to the key normative criteria outlined above (originality, about community, addressing critical information needs). In addition, output concentration (the extent to which the production of stories/social media posts was concentrated within a limited number of local journalism sources) was computed for each community across all of these categories of journalistic output using the Herfindahl-Hirschman Index.
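As a rough illustration of these two metrics (a hedged sketch with made-up story counts, not the study's data), the per-10,000-resident output rate and the Herfindahl-Hirschman Index over outlets could be computed as follows:

# Illustrative only: hypothetical story counts per outlet for one community.
story_counts = {"daily_paper": 120, "tv_station": 45, "hyperlocal_blog": 20, "radio": 15}
population = 250_000                      # hypothetical community population

total_stories = sum(story_counts.values())
stories_per_10k = total_stories / population * 10_000

# HHI: sum of squared percentage shares of output across outlets (0-10,000 scale).
shares = [count / total_stories * 100 for count in story_counts.values()]
hhi = sum(share ** 2 for share in shares)

print(f"stories per 10,000 residents: {stories_per_10k:.1f}")
print(f"output concentration (HHI): {hhi:.0f}")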

The results show drastic differences in the availability and quality of local online journalism, which seem to mirror existing structural inequalities. The community with higher household income, level of education, and broadband penetration was remarkably better served than the poorest and least educated community. The third community, which falls in the middle in terms of education and wealth, but is markedly more diverse, also fared poorly relative to the highest-income community. These findings point to the possibility of pronounced “digital divides” separating communities of different types in relation to the per capita output of local journalism. The authors recommend several specific ways in which further research could supplement and further validate these results, and discuss the implications for communications policy in the digital age.

Presenters

Rosemary Harold

Wilkinson Barker Knauer

Sarah Stonbely

George Washington University

Authors

Katie Ellen McCollough

Rutgers University, United States of America

Philip Napoli

Rutgers University

Saturday September 26, 2015 11:10am - 11:42am
GMUSL - Room 225

11:10am

'What Can I Really Do?': Explaining Online Apathy and the Privacy Paradox
Paper Link

While many people claim to be concerned about privacy, their behavior, especially online, often belies these concerns. Researchers have hypothesized that this “privacy paradox” (Barnes 2006) may be due to a lack of understanding of risk; a lack of knowledge about privacy-protective behaviors (Hargittai & Litt 2013); or the social advantages of online self-disclosure (Taddicken 2014). This is especially salient for young people for whom social media may be intrinsic to social life, school, and employment. Using data from ten focus groups totaling 40 participants ages 19-35 administered during summer 2014, we examine young adults’ understanding of Internet privacy issues. Specifically, our research question asks whether the privacy paradox can be attributed to users’ lack of Internet experiences and skills.

While our focus group data do suggest some lack of understanding of risk, misunderstandings around the efficacy of certain privacy-protective behaviors, and lack of knowledge of privacy-related current events, some participants demonstrated use and knowledge of a variety of privacy-protective behaviors. These included configuring social network site settings, use of pseudonyms in certain circumstances, switching between multiple accounts, turning on incognito options in their browser, opting out of certain apps or sites, deleting cookies, using Do-Not-Track browser plugins, and so forth. The simultaneous presence of both lack of knowledge of risk and use of privacy-protective behaviors suggests that the privacy paradox cannot be attributed solely to either a lack of understanding or a lack of interest in privacy.

Instead, participant comments suggest that users have a sense of apathy or cynicism about online privacy, specifically that privacy violations are inevitable and opting out is not an option (“I feel like [pause], then you have the choice between not using the Internet and therefore keeping free of the surveillance, or living with it. So, I do care; but I guess I don’t care enough not to use the Internet. And I’m not sure what the alternative is at the moment.”). We explain this apathy using the construct of networked privacy (Marwick & boyd 2014), which suggests that in highly-networked social settings, the ability of individuals to control the spread of their personal information is compromised by both technological and social violations of privacy. Understanding this, young adults turn to a variety of imperfect, but creative, social strategies to maintain control and agency over their personal data. While participants engaged in a range of privacy-protective behaviors, they recognized that these were insufficient in the face of online data-mining, widespread identity theft, ever-changing privacy-settings, and highly-networked social situations (“I don’t consider myself a tech-savvy person and so just the idea of there being people out there who just with a computer in front of them can hack this database or get my information. To some extent, I think like, ‘Oh I better add a few random numbers in this password,’ or do this or that, but you know besides that I’m also wondering, what can I really do?”).

Our data suggest that fatigue surrounding online privacy and the simultaneous presence of concern over privacy and widespread self-disclosure is not necessarily paradoxical, but a realistic response to the contemporary networked social environment given existing US policy and corresponding business-sector affordances.


References (shortened due to word limit):

Barnes. 2006. “A Privacy Paradox” First Monday.
Hargittai & Litt. 2013. "New Strategies for Employment?" IEEE Security & Privacy.
Marwick & boyd. 2014. “Networked Privacy” New Media & Society.
Taddicken. 2014. “The ‘Privacy Paradox’ in the Social Web” JCMC.

Presenters

Eszter Hargittai

Delaney Family Professor, Northwestern University


Saturday September 26, 2015 11:10am - 11:42am
GMUSL - Room 120

11:10am

Measures of Spectrum Holdings and Spectrum Concentration among Cellular Carriers
Paper Link

Regulators in most countries limit the amount of spectrum that a carrier is allowed to obtain as a way of reducing the risk that rival cellular carriers will be unable to obtain the spectrum they need to compete effectively. For similar reasons, regulators may consider how a proposed merger would affect the concentration of spectrum holdings. These policies require appropriate measures of the spectrum holdings of a carrier and of spectrum concentration across carriers. Spectrum holdings of a carrier are typically measured simply by total bandwidth. However, total bandwidth may not be the most useful measure, because it ignores the frequencies of the spectrum held, even though frequency greatly affects the cost of building out and operating infrastructure in that spectrum. A promising alternative is to use a frequency-dependent weighting function when quantifying spectrum holdings. Spectrum concentration across carriers is typically measured with the Herfindahl-Hirschman Index (HHI), but analysis in this paper shows that other measures could be more appropriate, and why the high costs of spectrum may yield economies of scale that benefit large carriers. The paper then empirically investigates how the choice of measure affects the observed relationship between concentration of market share and concentration of spectrum holdings. Some of the measures of spectrum holdings take frequency into account and others do not; some are more appropriate if there is a linear relationship between the amount of spectrum a carrier has and the number of customers it serves, and others are not. Greater correlations between spectrum concentration and market share concentration were observed with measures that put more weight on lower frequencies, although the differences are too small to support strong conclusions. The issue deserves further empirical research with more extensive data.
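As an illustration of the kind of measure the paper discusses (a hedged sketch; the weighting function and carrier figures below are hypothetical, not the paper's), a frequency-dependent weighting could be applied before computing concentration:

# Hypothetical example: weight each carrier's spectrum by a frequency-dependent factor,
# then compute an HHI over the weighted holdings.
holdings_mhz = {
    # carrier: list of (band_center_mhz, bandwidth_mhz) -- made-up figures
    "carrier_a": [(700, 20), (1900, 40)],
    "carrier_b": [(850, 30), (2500, 60)],
    "carrier_c": [(1700, 50)],
}

def weight(freq_mhz, reference_mhz=1000.0):
    """Toy weighting: lower frequencies count for more (illustrative, not from the paper)."""
    return reference_mhz / freq_mhz

weighted = {
    carrier: sum(bw * weight(freq) for freq, bw in blocks)
    for carrier, blocks in holdings_mhz.items()
}
total = sum(weighted.values())
hhi = sum((value / total * 100) ** 2 for value in weighted.values())

for carrier, value in weighted.items():
    print(f"{carrier}: weighted holdings = {value:.1f}")
print(f"weighted-spectrum HHI: {hhi:.0f}")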

Moderators

Giulia McHenry

Associate, The Brattle Group

Presenters

Jon Peha

Professor, Carnegie Mellon University


Saturday September 26, 2015 11:10am - 11:42am
GMUSL - Room 121

11:43am

The Economic Effects of Domestic Search Engines on the Development of the Online Content Market
Paper Link

Although a few global search engine platforms, notably Google and Yahoo!, have achieved worldwide dominance in the search engine market, some domestic search engine platforms —defined as a search engine using domestic search technology and a domestic language, such as Naver in South Korea and Baidu in China— have come to dominate their domestic markets in competition with global search engine platforms.

Domestic and global search engines compete with one another in terms of search quality in order to attract more users: the higher the quality, the more users a search engine will attract, and thus the more valuable it will become to content providers and advertisers. Unlike a global search engine, a domestic search engine usually connects users with localized content written in a domestic language, resulting in higher search relevance. It also generates private databases with more localized content, such as knowledge-sharing services, that are better suited to local consumers.

In this study, we quantified the economic contributions made by domestic search engines to the expansion of the online content market. We hypothesized that the domestic Internet user base could increase as a result of the improvements in search quality made possible by domestic search engine(s), which in turn may lead to an increase in the size of the domestic paid online content market. A domestic search engine that provides more localized content will attract more domestic users, and thus the size of the paid online content market can be expected to increase.

We constructed a country-level dynamic panel of 51 countries using data from 2009 to 2013 taken from industry and government sources. The data includes the economic and cultural status of each country along with trends in online content markets, broadband Internet penetration, and other indices indicating the development of information and communication technologies. We then investigated the change in the size of the online content market in countries possessing domestic search engines.

The dependent variable, the relative size of an online content market (online content revenue divided by Gross Domestic Product) is regressed on a set of control variables, including the existence of a domestic search engine and a lagged dependent variable. We estimated our results using linear generalized method of moments (GMM) estimators which allowed us to use internal instruments and to control for autocorrelation, unobserved heterogeneity, and the endogeneity of some control variables.
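A dynamic panel specification consistent with this description (an illustrative sketch; the notation and controls are placeholders rather than the authors' exact model) would be:

\[
y_{it} = \rho\, y_{i,t-1} + \beta\, D_{it} + \boldsymbol{\gamma}'\mathbf{X}_{it} + \mu_i + \varepsilon_{it}
\]

where y_{it} is online content revenue divided by GDP for country i in year t, D_{it} indicates the existence of a domestic search engine, X_{it} is the set of control variables, and \mu_i captures unobserved country heterogeneity; the lagged dependent variable and potentially endogenous regressors are instrumented internally in the GMM estimation.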

Our preliminary analysis indicates that the development of a domestic search engine leads to an increase in online content revenue: a country with its own domestic search engine platform(s) has, on average, a 0.12% larger online content market (measured as a proportion of GDP) than one without such a platform. This finding confirms the hypothesis that a domestic search engine has a positive effect on the development of a country’s domestic online content market. The reasons behind this trend and its policy implications are also discussed.

Moderators

Matt Hindman

George Washington University

Presenters

Sung Wook Ji

Assistant Professor, Southern Illinois University


Saturday September 26, 2015 11:43am - 12:15pm
GMUSL - Room 221

11:43am

Same Access, Different Uses, and the Persistent Digital Divide between Urban and Rural Users
Paper Link

While the provision of infrastructure has largely been successful in South Korea, divergent patterns of use after gaining access to the networks have resulted in a new type of digital exclusion. The Korean government has implemented digital inclusion policies for several decades and the access gap has narrowed significantly. However, gaps in user skills and quality of use persist, particularly between urban and rural residents. This study explores how the access, usage, and perceptions of urban and rural Internet users differ in a highly digitalized country, in an attempt to identify new types of rural digital exclusion.

A secondary data analysis of a subset (N=3,641) of the National Information Society Agency’s (NIA) ‘2013 Information Culture Trend Survey’ was conducted. This is a nationally representative survey of Koreans age 7 and above (N=4,653). In this study a subsample of adults (age 20 and above) was used in the analysis. The age of the respondents ranged from 20 to 73, with an average age of 34. Among the respondents, 75.9% reported that they had been using the Internet for more than 10 years, and 73.7% reported using the Internet daily.

In terms of access, the rural-urban divide was not so evident. The frequency of online engagement and the time spent online did not differ between the two groups. However, urban users had more devices connected to the Internet, and the proportion of their online activities conducted via mobile devices was higher than that of rural users. Urban users engaged more in SNS, IM and file sharing, while rural users made more use of email and e-government services. Rural users engaged more frequently in online participatory activities related to social and political issues. Usage patterns and perceptions of trust and benefits were significantly different between the two groups. Urban users perceived online benefits to be higher and had more trust in online sites. The perception of online risks did not differ between urban and rural users.

Ordinary least squares regression analyses were conducted to examine the impact of access, uses and online engagement on perceived benefits among rural and urban users. The results suggest that the perception of the overall benefits of the Internet is positively related to the frequency of Internet use and to engaging in production, communication and participatory activities. Both trust in sites and risk perception were positively correlated with perceived benefits. However, the perception of benefits was negatively related to the number of devices, age and income.
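A minimal sketch of such a regression (hypothetical variable names and file; the survey's actual coding is not given in the abstract), using the statsmodels formula interface, might look like this:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: each row is one respondent.
df = pd.read_csv("nia_2013_subset.csv")   # assumed file name, not the NIA's

# Perceived benefit regressed on use frequency, activity engagement,
# trust/risk perceptions, and demographics, estimated separately by region.
model = smf.ols(
    "perceived_benefit ~ use_frequency + production + communication"
    " + participation + trust + risk_perception + n_devices + age + income",
    data=df[df["region"] == "rural"],
).fit()
print(model.summary())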

The dichotomy between haves and have-nots is no longer adequate in describing digital exclusion. Instead, we need to have a deeper look into how often, and how well, people use the available resources offered by the Internet. Many services provided online are targeted towards a critical mass and therefore may not meet the needs of rural users. This may explain why rural users, while using the Internet as much as their urban counterparts, perceive lower benefits and exhibit less trust towards online services. Furthermore, the differences in the devices that are used by urban and rural users suggest that despite the policies to remedy the gap, digital technologies advance rapidly and those who are lagging behind have to constantly catch up. Smartphones for example, can create a new type of digital divide. Digital inclusion policies must consider the rapid changes in technologies as well as the social context of utilizing them.

This study examined rural digital exclusion issues in a country where digital divide policies have been actively implemented by the government, and largely successful in terms of laying the infrastructure. A new type of digital divide was discovered in terms of users’ perceived benefits. This suggests the need for a new framework for devising rural digital inclusion policies suitable for highly digitalized societies.

Moderators

Heather Hudson

U of Alaska Anchorage

Authors

Gwangjae Kim

Hanyang Cyber University

Saturday September 26, 2015 11:43am - 12:15pm
GMUSL - Room 332

11:43am

Elements of Effective Privacy Notices
Paper Link

This paper will describe and define the core elements of effective notice in the online world. The core elements offered in this paper will provide greater insight into where online notice is inadequate and what concrete measures might be taken to improve it. Studies have shown that privacy policies routinely fail to provide consumers with effective notice. Privacy policies are often articulated in ambiguous or misleading language that fails to inform consumers of the full scope of companies’ data collection practices. Moreover, privacy policies frequently display dense paragraphs of text in small type. This poor legibility deters consumers from reading privacy terms. In response to these problems, I will explore three distinct models of legal notice, each governing a different area of commercial practice. After studying these models, I will extract their most salient notice principles and offer a set of requirements for effective notice in the privacy space.

This paper will first examine the factors that courts often look for when determining if a party had sufficient notice of an arbitration clause in a larger agreement. Focusing on arbitration provisions will illustrate greater standards of notice in the domain of commercial contracts. I will focus on the doctrines of constructive and actual notice, as well as the “knowing and voluntary” waiver standard that some courts employ when assessing the enforceability of arbitration clauses.

Following a discussion of arbitration clauses, this paper will shift to an exploration of the Food and Drug Administration’s (FDA) labeling requirements for over-the-counter drugs. Exploring these standards will provide some insight as to what is required of effective notice in highly regulated industries. This portion of the paper will pay close attention to the FDA’s rules regarding the use of symbols and pictograms to convey product warnings to consumers. I will also discuss the Administration’s requirements for formatting devices like headings, bulleted statements, and bold type, which have been found to improve consumers’ cognitive processing of label information.

Finally, this paper will discuss the substance of Federal Trade Commission (FTC) enforcement actions against companies that it found to have caused privacy harms. This discussion will highlight notice standards concerning the privacy practices of general commercial entities. I will focus on cases in which the FTC determined that companies’ disclosures of data collection practices were inadequate or misleading. Moreover, I will discuss the specific notice requirements that the FTC has delineated in prior decisions and orders.

After exploring what these legal models assert about what notice requires, I will extract from them the most salient principles of effective notice. I will apply these principles to the online world, grouping them into broader classes of core elements. For example, FDA formatting practices may demonstrate that truly effective notice demands that text be prominently displayed, and may be classified under the broader notice element of “Visible and Conspicuous Statements.” The core elements that I will define will focus both on the requisite content and visual presentation of effective online notice.

By extrapolating notice requirements from settled legal models, I intend to inform expectations of what effective notice should look like in the online world. Moreover, by detailing these core elements, I aim to show which specific areas of online notice require improvement, and how they may be improved. Finally, this paper may help to provide a greater analytical framework for enhancing the effectiveness of online privacy statements. Though some have suggested innovative and technological approaches for improving notice online, I will suggest that referring to already available, established legal principles might also advance this aim.

Presenters

Amanda Grannis

Fordham Law School


Saturday September 26, 2015 11:43am - 12:15pm
GMUSL - Room 120

11:43am

Risk-Informed Interference Analysis: A Quantitative Basis for Spectrum Allocation Decisions
Paper Link

The trade-off between the benefits of a new radio service allocation and the risks to incumbents has to date been based on “worst case” analyses that focus on the single scenario with the most severe consequence, regardless of its likelihood. This is no longer a tenable approach since it leads to over-conservative allocations that block the social benefits of new services while giving incumbents more protection than they need.

This paper describes an alternative to worst case reasoning: the use of quantitative risk assessment (QRA) to analyze the harm that may be caused by changes in radio service rules. QRA considers both the likelihood and the consequences of multiple hazard scenarios, rather than only the single most severe consequence regardless of likelihood, as in worst case analysis.

While quantitative risk assessment has been used for decades in many regulated industries it has not yet been applied to spectrum management. The paper identifies four key elements of a systematic, quantitative analysis of radio interference hazards, and illustrates them with examples: (1) make an inventory of all significant harmful interference hazard modes; (2) define a consequence metric to characterize the severity of hazards; (3) assess the likelihood and consequence of each hazard mode; and (4) aggregate them into a basis for decision making.
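A toy version of steps (2) through (4) (purely illustrative; the hazard modes, likelihoods and consequence values below are invented, not drawn from the paper) could aggregate expected consequences across hazard modes like this:

# Illustrative hazard inventory: each mode has an annual likelihood and a
# consequence score on some chosen metric (e.g., hours of degraded service).
hazard_modes = [
    {"name": "co-channel mobile uplink", "likelihood": 0.10, "consequence": 50},
    {"name": "adjacent-channel leakage", "likelihood": 0.30, "consequence": 5},
    {"name": "receiver overload",        "likelihood": 0.02, "consequence": 200},
]

# Step (4): aggregate into a single decision metric -- here, total expected consequence.
expected = sum(h["likelihood"] * h["consequence"] for h in hazard_modes)
worst_case = max(h["consequence"] for h in hazard_modes)

print(f"risk-informed metric (expected consequence): {expected:.1f}")
print(f"worst-case metric (max consequence, likelihood ignored): {worst_case}")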

The four elements are illustrated by examples from spectrum policy questions past and present; the focal case study is the determination of geographic zones to protect weather satellite earth stations from co-channel cellular mobile transmitters.

The paper recommends actions that government can take to reap the benefits of risk-informed interference analysis. It proposes that regulators (1) develop know-how through a variety of educational initiatives; (2) use quantitative risk assessment in their analysis of harmful interference, and publish the results; and (3) pilot this approach in proceedings with limited scope, like selected site-specific license waiver proceedings.

There is also a role for legislators and the executive branch: (1) they can require risk-informed assessments as part of their oversight activities; (2) when presented with politically charged claims of harmful interference, they can avoid the temptation of nightmare scenarios and instead make assessments that consider both the likelihood and consequences of interference harms; (3) they can support regulators that use risk-informed interference assessments rather than basing decisions on worst case analysis.

Moderators

Giulia McHenry

Associate, The Brattle Group

Presenters

Jean Pierre de Vries

Co-director, Spectrum Policy Initiative, Silicon Flatirons Center


Saturday September 26, 2015 11:43am - 12:16pm
GMUSL - Room 121

11:45am

The ICT Revolution in Historical Perspective: Progressive Capitalism as a Response to Free Market Fanaticism and Marxist Complaints in the Deployment Phase of the Digital Mode of Production
Over the course of a quarter of a millennium, industrial capitalism has emerged from four deep recessions brought on by the bursting of commodity bubbles to achieve sustained economic growth by instituting progressive policies. These policies promote market success by reinforcing entrepreneurial experimentation and investment, while simultaneously reducing income inequalities to stimulate demand. The bursting of the tech stock bubble and the financial meltdown of the early 21st century are the fifth such challenge. Using historical, comparative analysis, this paper shows that the building blocks for a successful institutionalization of the digital mode of production are in hand, awaiting strong policy initiatives to center the economy on sustainable growth.

Section I presents the analytic approach, describing the pattern of development of industrial revolutions based on a framework for analyzing long-term innovation offered by Carlota Perez that extends and combines Schumpeter and Keynes into a theory of progressive capitalism.

Section II applies the analytic framework to the ICT revolution in qualitative and quantitative terms. The radical changes in the organizational structure and core competence of organizations, driven by dramatic changes in communications resources, are described and quantified. The ultimate payoff of each of the great industrial technology revolutions has come not within the sectors in which they originated, but from their ability to spread through and transform the entire economy.

Section III identifies the challenges that must be overcome to set the economy on a stable development path. The key to the transformation is the convergence of the information and energy sectors, the two most important resource systems of an advanced economy. Information and control technologies are hollowing out the energy sector. However, convergence is too weak a word to reflect the radical nature of the transformation that is needed and has already begun, and too passive a word to capture the need for vigorous policy implementation to overcome institutional inertia and guide investment toward a coherent constellation of goals.

Section IV explains why progressive policy is the key to building the road to the future. Excessive pessimism on the left (e.g. Piketty) and excessive optimism on the right (e.g. repeal of progressive era legislation) about what the market can do on its own are not justified by historical experience. The road to a stable growth path lies neither in the 19th century policy of laissez faire nor in the 20th century policy of utility regulation, but in the development of a 21st century model that extends the successful approach of the Carterfone, Computer Inquiries and spread spectrum decisions. These policies were quintessential progressive capitalism, using state power to create a space of guaranteed access to essential communications resources while refusing to regulate behavior within that space. The result was an explosion of entrepreneurial experimentation and a virtuous cycle of innovation and investment, and it ensured that the ICT revolution would be overwhelmingly an American revolution.

Moderators

Rosemary Harold

Wilkinson Barker Knauer

Presenters

Mark Cooper

Silicon Flatirons


Saturday September 26, 2015 11:45am - 12:20pm
GMUSL - Room 225

12:16pm

Zero Rating: Do Hard Rules Protect or Harm Consumers and Competition? Evidence from Chile, Netherlands and Slovenia
Paper Link

Zero rating, the practice of not charging certain data use against a mobile broadband subscriber’s contract, is emerging as a potent issue in telecom policy, even though zero rating of mobile subscriptions has existed for almost two decades with little to no controversy.

Zero rating has become increasingly popular in both developed and developing countries, and it plays a particularly important role in developing countries, where the costs of mobile data services are higher relative to per capita incomes. About half of all mobile operators employ the strategy in some way. In fact, network operators have used the equivalent of such strategies to incentivize both subscribers and content providers to be part of their networks for well over a century.

In the last two years, however, zero rating has become a flashpoint in the net neutrality debate. Whether a country allows it has become a litmus test used by net neutrality supporters to certify the strength of the rules. At issue is whether operators and their customers should have the freedom to create contracts for mobile broadband service based on their preferences and constraints, or whether mobile Internet service must be sold in a so-called “neutral” fashion in which the only differentiating parameters are speed and megabytes. As the Internet increasingly transitions to mobile platforms, and as the next two-thirds of the world who have yet to come online are likely to do so via mobile, who provisions mobile bandwidth, and how, is an important, complex issue.

This paper examines the arguments for and against zero rating and the charges that zero rating hurts competition and consumers. It formulates five assertions based on the alleged harms and attempts to test them with empirical analysis from quantitative and qualitative perspectives. The paper reviews the leading database of financial information on the world’s mobile operators to see whether the impact of zero rating may be observed, for example in undue financial benefits earned by operators through the use of zero rating. To understand the issue more closely, the paper reviews zero rating in Chile, the Netherlands, and Slovenia, countries which have banned some forms of the practice. The paper then examines whether there is harm to consumers and innovation by reviewing a leading database of mobile application market data. The paper concludes by suggesting reasons why zero rating is maligned in telecom policy debates.

Moderators

Matt Hindman

George Washington University

Presenters

Roslyn Layton

PhD Fellow, Aalborg University
I study the theories of net neutrality and develop models to test them using real world data. My current project investigates the impact of net neutrality rules on edge provider innovation and broadband infrastructure investment in 30 countries.


Saturday September 26, 2015 12:16pm - 12:48pm
GMUSL - Room 221

12:16pm

Rural Utilities Service Broadband Loans and Economic Performance in Rural America
Paper Link

This paper examines the Rural Utilities Service’s (RUS) Rural Broadband Access Loan and Loan Guarantee Program, which finances the development of broadband infrastructure in rural areas, and provides estimates of the effects of the program on economic performance. Access to broadband is often seen as vital to economic growth and improved quality of life, and broadband access is no less and perhaps more critical in rural areas, where the possibilities of advanced communications can reduce the isolation of remote communities and individuals. Provision of broadband infrastructure in rural areas has occurred at a slower pace than in more densely populated areas. Issues of density, among other factors, affect the expected returns to rural telecommunications projects. Profit-motivated lenders or other providers of project financing may rationally view the expected revenues derived from rural broadband projects as insufficient to justify investments in them. In response to limited funding from private sources, the Congress authorized the RUS broadband loan program, which finances the construction of broadband projects in rural areas.

We use information on the RUS broadband loan program to estimate the effects of the loans on economic performance. From program records, we know the geographic footprints of projects that submitted loan applications to RUS. We know whether RUS approved or rejected the loan application, and in the case of approvals, we know the timing of that approval. We map the geographic footprint of projects into counties served. While the county may not be a perfect unit of observation, previous studies have used county-level data because of its advantages with respect to availability. We develop a time series data set of county-level measures of employment, payroll, and the number of business establishments over the relevant time period, and examine different rural definitions to filter the set of counties used in the analysis.

Using a panel model with county and year fixed effects, we produce estimates of the effect of the broadband loan on the three measures of economic performance. Our estimation technique is best thought of in treatment terms, where the loan approval represents the treatment. The selection of control groups is important in a treatment context, and to this end, we develop several distinct control groups of rural counties. Specifically, we develop a control group of rural counties that were in the footprint of projects rejected for RUS funding; this control group represents a set of counties that may be similar to the treatment group in unobservable ways. We also use a propensity scoring technique to generate two control groups based on demographic characteristics of approved counties and on the pattern of economic growth leading up to the evaluation time period. Finally, we define a control group of rural counties adjacent to the set of counties with approved loans.
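A minimal sketch of a county-and-year fixed effects specification of this kind (hypothetical column names and file; not the authors' code), using statsmodels, might be:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-year panel: log employment, loan-approval treatment indicator,
# county and year identifiers. File name is assumed for illustration.
panel = pd.read_csv("rus_county_panel.csv")

# County and year fixed effects entered as categorical dummies; 'treated' switches
# on in the years after a county falls in the footprint of an approved loan.
model = smf.ols(
    "log_employment ~ treated + C(county_fips) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["county_fips"]})

print(model.params["treated"])   # estimated effect of loan approval on log employment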

In general, we found modest but statistically significant relationships between the RUS broadband loan program and county employment and payroll, but no relationship between the program and the number of business establishments. These estimates suggest that economic performance was in the range of 1 to 4 percent higher in counties receiving a RUS broadband loan. These results were robust across the set of control groups. However, very few loans went to projects in the most rural of counties, such as those with only very small towns. Results restricting the sample to this set of rural counties showed no relationship between the RUS broadband loan program and economic performance.

Moderators

Heather Hudson

U of Alaska Anchorage

Saturday September 26, 2015 12:16pm - 12:48pm
GMUSL - Room 332

12:16pm

Does (Screen) Size Matter? News Engagement on Computers, Tablets, and Smartphones
Paper Link

Proliferation of the Internet, mobile devices, and social media is changing the way ordinary people choose, access, and consume information. Theories about the political consequences abound; namely, about the impact of media fragmentation and content choices on political knowledge, attitudes, and behaviors. Work on the impact of mobile communication technology is newer, and much of it is characterized by a tendency to focus on the positive benefits the technology affords. While the positive benefits are many, such focus may obscure important trends in mobile use that reveal important differences across the types of connectivity available to various individuals and groups. For example, many Americans are cell-only Internet users. A common interpretation of this might be that mobile phones are enabling Internet connectivity for a large portion of the U.S. population who might not otherwise have access. While that may be true, cell-only use means that this segment of the population does not access the Internet regularly on a computer, which is consequential because screen size and speed of connection are correlated with content engagement. What is more, there are important differences in computer and mobile device Internet usage across demographic groups. For example, on average Latino audiences in the 35-49 age bracket spend 27 hours and 12 minutes per month using the Internet on a computer and 53 hours and 54 minutes using the Internet on their mobile devices. That gap is much smaller when looking at the total U.S. population, which uses the Internet on computers much more and on mobile devices much less (Nielsen, 2015).

Generally speaking, theoretical arguments about the anticipated effects of the changing media landscape do not properly account for differences in habitual media use driven by tiers and types of content accessibility. Specifically, these arguments fail to recognize and account for key differences in access and media use between groups that may have consequences for theories about anticipated effects from the current media environment. These nuances also highlight the need for the literature to do a better job of linking academic research with telecommunication policy implications. Our evidence is timely in that it should add to the discussion regarding regulation of the Internet by highlighting some of the political information consequences of access and connection differences across individuals and groups. We ask: what are the consequences of device type and connection speed on political information seeking, news engagement, and political learning?

The crux of our primary theoretical argument and contribution rests on our major claim that we cannot assume that the communication and engagement opportunities afforded through internet access on mobile devices are the same as those afforded through computers and laptops with high speed connections. We argue that the kinds of content seeking and content exposure on the latter are not the same as on the former, and using a novel experimental design we present evidence that these media behaviors are in fact quite different across these distribution mechanisms. News engagement is more tenuous and sporadic on mobile devices.

Moderators

Rosemary Harold

Wilkinson Barker Knauer

Presenters
Authors

Saturday September 26, 2015 12:16pm - 12:48pm
GMUSL - Room 225

12:16pm

Radio Spectrum Management Policy: Revisiting the Debate
Paper Link

Scholars such as Hazlett, De Vries, and Noam debated extensively a few years ago which of three alternatives - spectrum markets, commons, and easements - should be adopted to overcome the deficiencies of the command and control (C&C) approach to spectrum management. One common conclusion was that it was inevitable that countries would abandon C&C and move towards one or more of these alternatives. However, in most of the world, especially in developing countries, radio spectrum is still managed along the lines of the traditional government administration approach. It is therefore surprising that the question of why the C&C approach remains dominant has been overlooked.

To address this issue, this paper seeks to answer the following question: how do international spectrum management stakeholders perceive the alternatives to the C&C approach? It focuses on three main concepts - radiocommunication service allocation flexibility, technology neutrality, and easements in the TV white spaces (TVWS) - that are considered the main elements of spectrum markets, commons, and easements. The paper adopts a qualitative inductive approach based on primary data collected from 86 semi-structured interviews with international stakeholders from mobile operators, manufacturers, national regulators, broadcasters, the TVWS industry, and the ITU-R Bureau.

The paper is the first of its kind to highlight the views of those who formulate national spectrum policy in practice on the alternative approaches to C&C that have been advocated for many years in the literature with little empirical evidence.

Regarding service allocation flexibility, the data show a tendency to follow the non-obligatory international service allocation organised by the ITU-R in order to take advantage of global harmonisation and to protect against interference. Regulators also tend to keep the upper hand when it comes to how spectrum as a resource is treated, with a self-regulated market not being favoured. The paper reveals that, for different stakeholders, flexibility is the reverse of harmonisation: more specifically, flexibility is seen as a way to change the service in use (e.g., broadcasting, fixed) to the globally harmonised mobile service, and as implying that interference is not managed.

With respect to technology neutrality, there is widespread acknowledgement of the merits of the concept. However, the theoretical technology neutrality advocated in the literature does not exist in practice, as the parameters used to define technology neutrality are usually based on particular standards. Moreover, operators and regulators mostly favour standardised technologies that are widely deployed and have pre-defined duplex modes (e.g., FDD), because being fully neutral increases the probability of interference, decreases harmonisation, and complicates the design of equipment.

On the concept of easements in the TVWS, the interviews revealed that the additional mobile allocations in the 700 MHz and 800 MHz bands at WRC-12 and WRC-15 respectively, within broadcasting spectrum, have made TVWS a temporary deployment. In addition, the future use of the UHF band has become uncertain, as the use of the band for mobile services will be decided by WRC-15. Operators favour exclusive access to spectrum to secure their investment and to ensure protection against interference. Regulators in developing countries, who often lack enforcement mechanisms, are concerned about the adoption of easements, as non-exclusivity places additional management burdens on them. On the other hand, the paper reveals that approaches such as LSA are much favoured by operators and regulators because they offer more certainty, being applied in bands already identified for standardised technologies.


Moderators

Giulia McHenry

Associate, The Brattle Group

Presenters
Authors

Jason Whalley

Northumbria University

Saturday September 26, 2015 12:16pm - 12:48pm
GMUSL - Room 121

12:16pm

Reconsidering the 'Right to Be Forgotten' - Shifting the Debate into the Realm of Memory Rights
Paper Link

On May 13th, 2014, after a long deliberation, the European Union’s Court of Justice established a “right to be forgotten” (hereinafter: RTBF) while declaring that an: Operator of a search engine is obliged to remove from the list of results displayed following a search made on the basis of a person’s name links to web pages, published by third parties and containing information relating to that person, also in a case where that name or information is not erased beforehand or simultaneously from those web pages, and even, as the case may be, when its publication in itself on those pages is lawful. (Decision 3)

And that: The data subject may, in the light of his fundamental rights […] request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name. (Decision 4)

Coping with the court decision, policymakers, actors from within the telecommunications industry, and various commentators have framed the public debate as a clash between two prominent liberal rights: the right to privacy on the one hand and freedom of expression on the other. Jonathan Zittrain wrote that the Court’s ruling is “a form of censorship, one that would most likely be unconstitutional if attempted in the United States” (Zittrain, 2014). Similarly, Jeff Jarvis wrote in his blog that the RTBF is “the most troubling event for speech, the web and Europe” (Jarvis, 2014). Henry Farrell set a different tone for the debate, writing that the decision would have “a serious impact on EU-US relations” (Farrell, 2014).

However, these interpretations offer a limited view of the right’s scope and impact. Through a critical legal analysis (Kelman, 1990) of three policy-relevant documents -- the European Union Court of Justice decision of May 13th, 2014; the Article 29 Data Protection Working Party’s “Guidelines on the Implementation of the Court of Justice of the European Union Judgment on Google Spain and Inc. v. AEPD and Mario Costeja Gonzalez,” published on November 26th, 2014; and the final report of the “Advisory Council to Google on the Right to be Forgotten,” published on February 6th, 2015 -- this study suggests a different approach to the right.

Interestingly, all three documents (including the one commissioned by Google itself) agree that the RTBF should not be read as infringing on expression rights. Referring to the RTBF as the de-listing of results from a search made on a person’s name, the Working Party stated that “the impact of the de-listing on individuals’ rights to freedom of expression and access to information will prove to be very limited” (p. 2). Similarly, Google’s committee declared that “the ruling […] should not be interpreted as a legitimation for practices of censorship of past information and limiting the right to access information” (p. 6).

Hence, if the right to be forgotten is not about the fair balance between privacy rights and expression rights when dealing with personal information on the web, and if, as declared by Google’s committee, the court did not establish “a general right to be forgotten” (p. 3), since the ruling “only affects the results obtained from searches made on the basis of a person’s name and does not require deletion of the link from the indexes of the search engine altogether” (Working Party Guidelines, p. 2), then a different discursive path should be taken when analyzing the right.

This study suggests that the right to be forgotten should be understood as the right of the individual to control and construct his or her public narrative, as well as the public representation of his or her life story, in the digital era. This somewhat different understanding is backed by the court decision itself when it states that: The organization and aggregation of information published on the internet that are effected by search engines with the aim of facilitating their users’ access to that information may, when users carry out their search on the basis of an individual’s name, result in them obtaining through the list of results a structured overview of the information relating to that individual that can be found on the internet enabling them to establish a more or less detailed profile of the data subject. (Court decision, paragraph 10)

As such, this study suggests that policymakers should change their policy jargon. Instead of dealing solely with the rights of “data subjects,” a term widely used in the court decision and the documents that ensued, the RTBF should actually deal with the rights of humans as “storytelling animals” (MacIntyre, 1984). Indeed, we tell others stories about ourselves, based on memories derived from our experiences. Combining these stories together, we create a narrative of our life. Yet narratives are constructed; they evolve over time and change constantly. We choose to highlight different life phases as our life circumstances change. Our constructed narratives, based on what we perceive to be our memories, are what others can refer to as our identity. Thus, the RTBF is not really about forgetting, but rather about remembering, about our narratives, and eventually about our identity. As such, this study proposes to reconsider and analyze the right using tools derived from the discipline of memory studies, which has gained prominence in the social sciences in recent years (see: Olick, Vinitzky-Seroussi & Levy, 2011).

Using a memory-driven perspective, this study critiques the current right’s focus on forgetting, as a memory process is always about both forgetting and remembering. In addition, this study problematizes the fact that the right to be forgotten can be exercised only by individuals, while in fact memory is a phenomenon shared by both individuals and collectives. Nevertheless, although there are some major flaws in the manifestation of the right to be forgotten, it has contributed to a much more important discussion about memory rights. Until now, memory and rights were considered important factors of two different spheres. Yet the RTBF has created a new opportunity to talk about memory rights, as what actually happened with its establishment is the beginning of “the governance of personal and collective memory” (Pereira, Ghezzi & Vesnić-Alujević, 2014, p. 3).

Thus, after acknowledging that the RTBF is actually more about memory than about data protection or expression exclusively, and as memory becomes pervasive in the digital era, policymakers should not be satisfied with a limited right of an individual to de-list a particular link from a search engine, especially when that right emphasizes only forgetting and dismisses remembering, and when it applies only to individuals. Rather, they should use the opportunity to talk about memory rights in order to evolve “a culture of just memory” (Ricoeur, 1999, p. 11).

References
Farrell, H. (2014). Five Key Questions About the European Court of Justice’s Google Decision. The Washington Post, 14 May. Available at: http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/05/14/five-key-questions-about-the-european-court-of-justices-google-decision/?wprss=rss_politics
Jarvis, J. (2014). The Right to Remember, Damnit. BuzzMachine. Available at: http://buzzmachine.com/2014/05/30/right-remember-damnit/. Last retrieved: 26/2/15.
Kelman, M. (1990). A Guide to Critical Legal Studies. Cambridge, MA: Harvard University Press.
MacIntyre, A. (1984). The Virtues, the Unity of Human Life, and the Concept of a Tradition. In: Sandel, M. J. (Ed.), Liberalism and its Critics. New York: New York University Press.
Olick, J. K., Vinitzky-Seroussi, V. & Levy, D. (2011). The Collective Memory Reader. Oxford, UK: Oxford University Press.
Pereira, A., Ghezzi, A. & Vesnić-Alujević, L. (2014). Introduction: Interrogating the Right to be Forgotten. In: Ghezzi, A., Pereira, A. & Vesnić-Alujević, L. (Eds.), The Ethics of Memory in a Digital Age: Interrogating the Right to be Forgotten. New York: Palgrave Macmillan. 1-8.
Ricoeur, P. (1999). Memory and Forgetting. In: Kearney, R. & Dooley, M. (Eds.), Questioning Ethics: Contemporary Debates in Philosophy. London, UK: Routledge. 5-12.
Zittrain, J. (2014). Don’t Force Google to Forget. The New York Times, May 14. Available at: http://www.nytimes.com/2014/05/15/opinion/dont-force-google-to-forget.html?_r=0. Last retrieved: 26/2/15.

Moderators
Presenters

Noam Tirosh

Ben-Gurion University of the Negev


Saturday September 26, 2015 12:16pm - 12:51pm
GMUSL - Room 120

2:00pm

The Impact of Asymmetric Regulation on Product Bundling: The Case of Fixed Broadband and Mobile Communications in Japan
Economic theory provides guidelines for distinguishing efficiency-increasing bundling from anti-competitive bundling. The effect of product bundling depends strongly on the demand functions for the bundled goods, so assessing that effect is ultimately an empirical issue. Owing to concerns about entry deterrence, Japan's Ministry of Internal Affairs and Communications (MIC) has prohibited the telecommunications incumbent, the NTT group, from bundling fixed and mobile communication services. In 2012, a competitor introduced a bundle discount for fixed broadband and mobile communications, and its market share increased remarkably after the introduction. In 2014, the NTT group announced that it would begin wholesaling its FTTH services to any firm, including NTT's own mobile operator, and in 2015 MIC approved NTT's use of this wholesale arrangement, which enables bundle discount pricing.

To assess whether this regulatory change benefits consumers, we develop a structural demand model in which consumers' willingness to pay for the goods is correlated. Adopting a mixed logit model with error correlation allows us to estimate individual-specific demand correlations across goods. We also estimate the technological complementarities/substitutabilities between goods provided by the same firm group. We estimate the model by combining two Internet surveys and one mail survey. The first Internet survey consists of 2,010 individual fixed broadband Internet users in Japan; the second consists of 500 individuals who do not use fixed broadband but do use mobile communications. These two Internet surveys were designed for the Competition Review in the Telecommunications Business Field, and provide each respondent's choice of fixed broadband and mobile communications, monthly expenditure on those services, and socio-demographic characteristics. The third source is the Communications Usage Trend Survey (CUTS) 2012, a mail survey conducted in accordance with the Statistics Act for official statistics, covering 20,418 households and 54,099 individuals. We draw 1,230 respondents from this survey to assess whether individuals who use neither fixed broadband nor mobile communications would subscribe to either service under counterfactual conditions. In total, we obtain 3,740 observations, including 2,000 broadband users, 2,298 mobile phone users and 1,239 non-users, approximately proportional to the individual choices observed in CUTS 2012.

The estimation results show that the cross-price elasticities among fixed broadband services are positive, as are those among mobile communication services. However, the cross-price elasticity between NTT's fixed broadband and mobile communications is negative, whereas the corresponding elasticities for KDDI and SoftBank are positive. The results also show that bundle discounts tend to attract consumers who currently use neither fixed broadband nor mobile communications. This implies that NTT's bundle discount increases consumer surplus by drawing customers from outside the markets and is unlikely to harm market competition. To complete the assessment of the impact of asymmetric regulation on product bundling, we are calculating the subgame-perfect equilibria of a two-stage game with and without asymmetric regulation; these results are forthcoming.
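
For readers unfamiliar with the estimator, the sketch below illustrates simulated maximum likelihood for a mixed logit with correlated random coefficients, the class of model the abstract describes. It runs on synthetic data; the dimensions, variable names, and parameter values are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch: simulated maximum likelihood for a mixed logit whose random
# coefficients are correlated (illustrative only; synthetic data throughout).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, J, R, K = 400, 3, 100, 2          # individuals, alternatives, draws, attributes

# Synthetic alternative attributes and "true" correlated tastes
X = rng.normal(size=(N, J, K))
true_mu = np.array([-1.0, 0.8])
true_L = np.array([[0.6, 0.0], [0.3, 0.5]])   # Cholesky factor of taste covariance
beta_i = true_mu + rng.normal(size=(N, K)) @ true_L.T
utility = np.einsum('njk,nk->nj', X, beta_i) + rng.gumbel(size=(N, J))
y = utility.argmax(axis=1)                    # observed choices

draws = rng.normal(size=(R, N, K))            # fixed draws for the simulator

def unpack(theta):
    mu = theta[:K]
    L = np.zeros((K, K))
    L[np.tril_indices(K)] = theta[K:]
    return mu, L

def neg_sim_loglik(theta):
    mu, L = unpack(theta)
    beta = mu + draws @ L.T                   # (R, N, K) simulated tastes
    v = np.einsum('njk,rnk->rnj', X, beta)
    v -= v.max(axis=2, keepdims=True)         # numerical stability
    p = np.exp(v)
    p /= p.sum(axis=2, keepdims=True)
    p_chosen = p[:, np.arange(N), y].mean(axis=0)   # average over draws
    return -np.log(p_chosen + 1e-300).sum()

theta0 = np.concatenate([np.zeros(K), [0.5, 0.0, 0.5]])
res = minimize(neg_sim_loglik, theta0, method='BFGS')
mu_hat, L_hat = unpack(res.x)
print("estimated mean tastes:", mu_hat.round(2))
print("estimated taste covariance:\n", (L_hat @ L_hat.T).round(2))
```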

Moderators

Deborah Minehart

Department of Justice

Presenters

Toshifumi Kuroda

Full-time Lecturer, Tokyo Keizai University

Authors

Saturday September 26, 2015 2:00pm - 2:32pm
GMUSL - Room 221

2:00pm

The Persisting Digital Divide in Israel
Paper Link

Most “digital divide” studies focus on the point in time at which the study was conducted and on the particular “divide” issue studied. Such studies often analyze the demographics of the populations or the ICT-usage skills they have or lack. In order to contribute to a better understanding of the divide, and in particular of its dynamics, this study examines changes over time in both ICT possession and patterns of use in Israel. Based on data derived from annual surveys conducted by the Central Bureau of Statistics, we paint a picture of the trends in ICT possession and patterns of use between 2002 and 2013 (the latest year for which data are available).

Our basic assumption is that the term “digital divide” is limited, as it focuses policymakers on the disparity in ownership, in the skills to use ICTs, or in the ability to fully exploit them. However, there is a social price paid by those on the lower end of the divide, which is currently absent from the policy-making discourse. We therefore use the term “digital exclusion” to refer to exclusion from participation in the civic, political, cultural and economic spheres, which are the foundation of membership in the contemporary information and communication society. Such lowered participation levels are caused by disparities in access and usage.

In order to demonstrate the analytical force of “digital exclusion”, the analysis takes into account the unique contours and cleavages within Israeli society and describes differences in ICT possession and use across population groups (Jewish/Palestinian), income levels, Jewish ethnicity (Europe/America versus Asia/Africa origin), immigrant versus Israeli-born status, level of religiosity (within the Jewish community), and gender. ICT usage was examined with regard to use for work, use for economic activities (online purchases), use for civic activities (e-government services), and use for social networking.

Indeed, while the study focuses on Israel, some of these differences and gaps exist in many other nations, and both the data and the analysis can contribute to an international comparative conversation on digital exclusion patterns. The results of this longitudinal analysis demonstrate how digital exclusion is either maintained or even grows along certain aspects of participation.

In each analysis, one of the determinants is controlled for in order to identify its effect. Initial results indicate that, as in other countries, income is a major contributor to digital exclusion. Certain digital exclusion differences, such as those related to gender, disappear when income parity is present. However, no such effect is visible for the gaps between Jews and Palestinians, between immigrants and the Israeli-born, and between those of Europe/America and Asia/Africa descent. Additional insight is provided by the fact that, across income levels, ICT use among third-generation Israelis is higher than among immigrants, which eradicates the effect of geographical roots (Europe/America vs. Asia/Africa).

One unique element of the study is the differences based on level of self-proclaimed religiosity within the Jewish population. Indeed, the choice of ultra-orthodox Jews not to own or use ICTs raises a whole set of issues with regard to participation in the information society as a matter of choice.

Moderators

Jill Moss

Technical Advisor, USAID

Presenters

Amit Schejter

Ben-Gurion University of the Negev

Authors

Noam Tirosh

Ben-Gurion University of the Negev

Saturday September 26, 2015 2:00pm - 2:32pm
GMUSL - Room 332

2:00pm

Measuring Residential Broadband for Policymaking: An Analysis of FCC's Web Browsing Data
Paper Link

This paper presents an analysis of FCC-measured web page loading times as observed in 2013 from nodes connected to consumer broadband providers in the Northeastern, Southern and Pacific U.S. We also collected data for multiple months in 2015 from the MIT network. We provide temporal and statistical analyses of total loading times for both datasets. We present four main contributions. First, we find differences in loading times for various websites that are consistent across providers and regions, showing the impact of the infrastructure of transit and content providers on loading times and Quality of Experience (QoE). Second, we find strong evidence of diurnal variation in loading times, highlighting the impact of network and server load on end-user QoE. Third, we show instances of localized congestion that severely impair the performance of some websites when measured from a residential provider. Fourth, we find that web loading times correlate with the size of a website’s infrastructure as estimated by the number of IP addresses observed in the data. Finally, we also provide a set of policy recommendations: execution of JavaScript and other code during the web browsing test to more adequately capture loading times; expanding the list of target websites and collecting traceroute data; collection of browsing data from non-residential networks; and public provision of funding for research on Measuring Broadband America’s web browsing data. The websites studied in this paper are: Amazon, CNN, eBay, Facebook, Google, MSN, Wikipedia, Yahoo and YouTube.
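
The following is a minimal sketch of the kind of diurnal analysis the abstract describes, using pandas on synthetic measurements; the column names (dtime, target_url, fetch_time_ms) and the site list are placeholders, not the actual Measuring Broadband America schema.

```python
# Sketch: diurnal profile of web loading times, per website and hour of day.
# Synthetic data stand in for the FCC/MIT measurements; names are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 20_000
dtime = pd.Timestamp("2013-09-01") + pd.to_timedelta(
    rng.integers(0, 30 * 24 * 3600, n), unit="s")
sites = rng.choice(["google.com", "cnn.com", "wikipedia.org", "youtube.com"], size=n)

# Synthetic loading times with an evening (peak-hour) slowdown
hour = dtime.hour
base = pd.Series(sites).map({"google.com": 400, "cnn.com": 1800,
                             "wikipedia.org": 700, "youtube.com": 1200})
fetch_ms = base.values * (1 + 0.3 * ((hour >= 19) & (hour <= 23))) \
           * rng.lognormal(0, 0.2, n)

df = pd.DataFrame({"dtime": dtime, "target_url": sites, "fetch_time_ms": fetch_ms})
df["hour"] = df["dtime"].dt.hour

# Median loading time per website and hour of day (the diurnal profile)
diurnal = df.groupby(["target_url", "hour"])["fetch_time_ms"].median().unstack("hour")
# Peak-to-trough ratio as a simple indicator of time-of-day congestion
print((diurnal.max(axis=1) / diurnal.min(axis=1)).round(2))
```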

Moderators

Paul de Sa

Bernstein Research

Presenters

Alex Gamero-Garrido

PhD Student, MIT | UC, San Diego
I am a computer networking and public policy researcher. In particular, I am interested in measurable network properties and their impact on social systems. My TPRC paper is a good example: it uses network performance data, in particular website loading times, as a way to infer variations in user-perceived connection quality. Recently I finished my masters in Technology and Policy at MIT, supervised by Dr. David Clark at CSAIL; I am…



Saturday September 26, 2015 2:00pm - 2:32pm
GMUSL - Room 225

2:00pm

Beyond Standing: How National Security Surveillance Undermines the Reporter's Privilege and the Free Press
Paper Link

Challenges to national security surveillance pursuant to Section 215 of the Foreign Intelligence Surveillance Act (FISA) are percolating throughout the federal courts. Section 215 authorizes the collection of “business records,” including telephony metadata, for an authorized investigation to obtain foreign intelligence information. Some of the challenges to the constitutionality of this statute raise interrelated First and Fourth Amendment claims, generally focusing on the chilling effect caused by surveillance and arguing that strict scrutiny ought to apply where First Amendment rights are at stake.

Historically, the courts have held that Fourth Amendment requirements, when applied with “scrupulous exactitude,” are adequate to protect First Amendment interests. But what protections are in place when the Fourth Amendment’s warrant and reasonableness requirements do not apply? Although metadata surveillance does not trigger Fourth Amendment protections, it nonetheless has First Amendment implications. This Article addresses a discrete subset of First Amendment interests affected by surveillance: those of the press. In Part I of this Article, I describe and summarize the full panoply of ways in which national security surveillance harms the free press. Even the collection of non-content information, or metadata, can reveal a journalist’s sources and expose confidential relationships and communications. These harms, I argue, are not speculative and remote, but concrete and specific and therefore cognizable by the courts.

In Part II, I argue that the Government’s position, frequently expressed in merits-stage briefing, that surveillance that is consistent with the Fourth Amendment per se does not violate the First Amendment is constitutionally and factually inadequate in cases brought by the press. In challenges that implicate both the First and Fourth Amendments, the law requires a “particularized analysis” of both claims. Challenges arising out of the independent press rights afforded by the First Amendment are distinct from more general challenges to surveillance based on chilling effects. The case for independent treatment of First Amendment rights is also supported by the text and history of the Privacy Protection Act of 1980 and the reliance of the “scrupulous exactitude” cases upon the adequacy of the warrant requirement to protect those rights.

Finally, in Part III, I contend that when the Fourth Amendment’s warrant and reasonableness requirements do not apply, the First Amendment requires independent safeguards to fill the gap. I outline a few potential architectures for those safeguards and suggest that, while each of these structures has distinct normative appeal, the First Amendment requires, at a minimum, the opportunity for the press’s interests to be raised and litigated at a hearing before any court authorizing surveillance.

Moderators

David Sobel

Electronic Frontier Foundation

Presenters

Hannah Bloch-Wehba

Reporters Committee for Freedom of the Press


Saturday September 26, 2015 2:00pm - 2:32pm
GMUSL - Room 120

2:00pm

Risk Portfolio of Spectrum Usage
Paper Link

Spectrum sharing has been adopted slowly, even though the literature demonstrates that it provides the flexibility needed to respond to temporal and spatial variations in the traffic and bandwidth demand of different services. Several factors impede the adoption of spectrum sharing: (1) the quantity of shareable spectrum; (2) the cost of spectrum sharing, including both monetary cost and processing time; and (3) the uncertainties and risks embedded in spectrum sharing. The FCC and NTIA have made noteworthy efforts to enlarge the amount of shareable spectrum. For example, the TVWS is free for unlicensed access, and federal frequency bands, such as 1670 MHz and 3.5 GHz, are under consideration for federal-commercial sharing. Moreover, with the database-assisted approach, the processing time for authorization is significantly shortened. Additionally, spectrum may be traded in the secondary spectrum market; when requirements are met, a trade can be approved within 24 hours.

Although more spectrum has been made available for sharing and the costs are decreasing, the uncertainties and risks embedded in spectrum sharing still exist. Moreover, there are many spectrum sharing methods, such as cooperative sharing through trading, Authorized Spectrum Access (ASA), and TV White Space (TVWS), and each leads to a different risk portfolio. In addition, different frequency bands, coverage areas, and locations bring further uncertainties and risks. We claim that these uncertainties and risks are significant barriers that hinder spectrum sharing from proliferating, in part because spectrum entrants and incumbents will not share spectrum when future conditions are difficult to foresee.

Consequently, minimizing risks for primary users (PUs) and secondary users (SUs) is a key strategy for promoting spectrum sharing. This paper seeks to outline the key risks and show how they can be managed. To this end, the first task is to analyze the risk portfolio of each spectrum usage model, qualitatively and quantitatively. We focus on the following spectrum usage types: primary usage, cooperative sharing through trading, ASA, TVWS, sensing-based Dynamic Spectrum Access (DSA), and unlicensed usage in ISM bands. These spectrum risks can be divided into three categories. (1) Monetary risk: every SU faces the risk that the firm cannot afford the project and that the expected costs are not in line with the projected profits. (2) Competition risk: the competition in cooperative sharing comes from obtaining contracts, while the competition in spectrum commons and sensing stems from identifying and accessing available spectrum; SUs may adopt advanced technology in order to increase their chances of success. (3) Environment risk: regulatory actions and secondary spectrum market liquidity may pose external risks for SUs. The spectrum risks will be quantified using a queueing model and computer simulations. Two types of risk metrics will be determined: (1) the average percentage of time that the service level agreement (SLA) can be met; and (2) the probability that the SLA can be met at each point in time. We choose these two risk metrics because (1) satisfaction of the SLA is an important factor in determining the profits that service providers may gain, so the monetary risks can be derived from the first metric; and (2) the distribution of spectrum access risks can be applied in risk-informed regulation, which helps regulators and spectrum users determine best practices for spectrum sharing in different environments.
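
As an illustration of the proposed risk metrics, the toy simulation below models secondary access to a handful of shared channels whose primary occupancy follows independent two-state Markov chains, then reports (1) the average share of time an SLA is met and (2) the probability the SLA is met at each point in time. All parameters are illustrative assumptions, not values from the paper.

```python
# Toy Monte Carlo of the two risk metrics; every parameter is an assumption.
import numpy as np

rng = np.random.default_rng(1)
C, T, RUNS = 5, 1_000, 200        # channels, time steps, Monte Carlo runs
p_on, p_off = 0.3, 0.2            # P(idle->busy), P(busy->idle) per step
SLA_FREE = 2                      # SLA: at least 2 channels free for the SU

sla_met = np.zeros((RUNS, T), dtype=bool)
for r in range(RUNS):
    busy = rng.random(C) < 0.5    # initial primary occupancy
    for t in range(T):
        flip_on = (~busy) & (rng.random(C) < p_on)
        flip_off = busy & (rng.random(C) < p_off)
        busy = (busy | flip_on) & ~flip_off
        sla_met[r, t] = (C - busy.sum()) >= SLA_FREE

# Metric 1: average share of time the SLA is met
print("mean fraction of time SLA met:", sla_met.mean().round(3))
# Metric 2: probability the SLA is met at each point in time (across runs)
print("P(SLA met) at t = 0, 100, ..., 900:", sla_met.mean(axis=0)[::100].round(2))
```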

The second task is to determine suitable strategies to mitigate these risks. In this paper, we focus on reducing risks for PUs and SUs by financial means, beginning with cooperative sharing through trading. The risk in this sharing method is that it is difficult, if not impossible, for PUs and SUs to predict their future usage. Two outcomes follow from this. First, spectrum users may be conservative, so that PUs lease only the minimum amount of spectrum in order to preserve capacity should their future service demand increase, and SUs likewise lease the minimum amount in case their demand decreases. Second, spectrum users may lease the maximum amount of spectrum, assuming the risks of service degradation or financial failure. Neither outcome is desirable. Trading spectrum as financial options can reduce these risks. Options give the buyer the right, but not the obligation, to share the spectrum, and this asymmetry provides protection for both parties. On the one hand, the buyer of the right can decide whether or not to share the spectrum depending on its circumstances. On the other hand, the seller of the right gains either the premium (the spectrum leasing fee) when buyers exercise the right, or the strike (the price of the right) when buyers do not.

With this framework, we broaden our view to other spectrum sharing methods. In general, spectrum entrants entering the wireless market face the decision of selecting the most appropriate spectrum usage method for their circumstances. Each spectrum usage method has embedded risks and uncertainties, such as changes in the spectrum utilization environment, regulatory rules, and service demand, that may occur throughout the investment life cycle. However, risks and uncertainties do not necessarily lead to failure. When they occur, instead of passively committing to the existing business strategy, corporations have the right to delay, expand, contract, or abandon a project at a given cost or salvage value at some future date. This management flexibility may alleviate the risks. Hence, in order to reduce the possibility of business failure, a clear understanding of each spectrum usage method is essential. We therefore identify potential risks and mitigation strategies for each spectrum usage method, and then quantify the value of each method, taking both risks and mitigation strategies into account, using real options. With this value, spectrum entrants can make informed decisions that explicitly consider risks and mitigation strategies.
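
As a hedged illustration of the real-options logic invoked above, the sketch below values the flexibility to delay entry under one spectrum usage method with a small binomial lattice; the project value, up/down moves, cost, and horizon are illustrative assumptions rather than estimates from the paper.

```python
# Minimal binomial-lattice sketch of the value of waiting (a standard
# real-options device); all numbers are illustrative assumptions.
import numpy as np

V0, I = 100.0, 90.0        # present value of expected profits, investment cost
u, d = 1.25, 0.8           # up/down moves of project value per period
r = 0.05                   # per-period risk-free rate
q = (1 + r - d) / (u - d)  # risk-neutral probability of an up move
T = 3                      # periods over which entry can be deferred

# Terminal project values and payoffs (invest only if the project is worth it)
j = np.arange(T + 1)
V_T = V0 * u**j * d**(T - j)
option = np.maximum(V_T - I, 0.0)

# Backward induction: at each node, either invest now or keep waiting
for t in range(T - 1, -1, -1):
    V_t = V0 * u**np.arange(t + 1) * d**(t - np.arange(t + 1))
    cont = (q * option[1:t + 2] + (1 - q) * option[:t + 1]) / (1 + r)
    option = np.maximum(V_t - I, cont)

print("NPV of immediate entry:        ", max(V0 - I, 0.0))
print("value including option to delay:", round(float(option[0]), 2))
```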

The outcome of this paper will promote spectrum sharing by showing how these risks may be minimized. It will also help identify the potential challenges associated with each spectrum sharing method, so that policymakers, operators, and the spectrum market can design interventions to obtain favorable outcomes.

Moderators

Derek Khlopin

Senior Advisor for Spectrum, NTIA
Derek Khlopin is Senior Advisor for Spectrum at the National Telecommunications and Information Administration, U.S. Department of Commerce. Prior to NTIA, Derek was Head of Government Relations, North America, for Nokia Solutions and Networks. Derek also spent time in law and public policy for the Telecommunications Industry Association and has served at the FCC's Wireless Telecommunications Bureau. Follow Derek on…

Presenters

Liu Cui

West Chester University

Authors

Martin Weiss

Associate Dean, University of Pittsburgh

Saturday September 26, 2015 2:00pm - 2:32pm
GMUSL - Room 121

2:33pm

The Role of Triple- and Quadruple-Play Bundles: Hedonic Price Analysis and Industry Performance in France, the United Kingdom and the United States
Paper Link

Communication providers may use bundles of different services (voice, data, pay-television, mobile voice and data) to leverage market power or increase switching costs for consumers but also as an efficient way to allocate fixed costs across services, reduce the complexity of their offers or provide unified billing for all services and innovative features (e.g. home security services, online music). Communication bundles are increasingly important and raise challenges for regulators and policy makers.

The economic literature has extensively addressed bundling issues, in particular in the areas of industrial organisation, price discrimination and consumer welfare analysis (Rey and Tirole, 2006; Adams and Yellen, 1976). The literature on hedonic price analysis for communication services, relatively limited but expanding, builds on previous applications of hedonic prices to automobiles, personal computers and houses (OECD, 2006) and explores the relationship between prices and the bundles’ quality characteristics. A recent study by the Portuguese regulator provides some insights into how quality parameters affect triple- and quadruple-play bundle pricing (Anacom, 2013).

This paper is novel in that it maps firm-level industry performance data (revenues, profits, etc.) to the pricing behaviour of those companies for triple-play and quadruple-play bundles (fixed voice, broadband, pay-television and mobile voice and data). On one hand, it uses hedonic price analysis of triple- and quadruple-play communication bundles of the largest operators in France, the United Kingdom and the United States; on the other hand, it uses financial indicators of these operators for the 2009-2014 period (quarterly data).

A hedonic price model is specified using OLS econometric analysis of some 300 offers from 15 operators in France, the United Kingdom and the United States (including standalone, double-, triple- and quadruple-play offers), for prices in April 2014. The collected variables include the monthly service price, technology, download speed, contract length, data allowance, local, national, international and/or weekend calls, the number of TV channels, premium content, and mobile minutes, SMS and MB. The quality of the television content is modeled using a quality index constructed from the number of channels and whether the TV component includes premium sports or movie content; this quality index is in turn mapped into a series of dummy variables. The nature of the bundle (e.g. 2-play, 3-play, 4-play) has also been included through the use of dummy variables.
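
To make the specification concrete, the sketch below estimates a hedonic regression of (log) bundle prices on quality attributes with statsmodels, in the spirit of the model described above; the data frame is synthetic and the variable names are placeholders, not the authors' dataset.

```python
# Hedonic OLS sketch on synthetic bundle offers (illustrative names/values).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
offers = pd.DataFrame({
    "speed_mbps": rng.choice([20, 50, 100, 200, 1000], size=n),
    "tv_quality": rng.choice(["none", "basic", "premium"], size=n),
    "n_play": rng.choice([2, 3, 4], size=n),
    "contract_months": rng.choice([0, 12, 24], size=n),
})
# Synthetic monthly price loosely tied to the attributes
offers["price"] = (20 + 8 * np.log(offers["speed_mbps"])
                   + 10 * (offers["tv_quality"] == "basic")
                   + 25 * (offers["tv_quality"] == "premium")
                   + 12 * (offers["n_play"] - 2)
                   + rng.normal(0, 5, size=n))

# Log price on speed, TV-quality dummies, bundle-type dummies, contract length
model = smf.ols("np.log(price) ~ np.log(speed_mbps) + C(tv_quality) + C(n_play)"
                " + contract_months", data=offers).fit()
print(model.params.round(3))
```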

Quarterly financial indicators from 2009 to 2014 for 15 communication/pay-television providers in France, the United Kingdom and the United States, including revenues, profits, investment, indebtedness, number of subscribers, etc., are used in an econometric model with bundle penetration as the dependent variable. The model, not yet fully specified, could be a simple OLS, a simultaneous-equations model, or a logistic regression (logit), using bundle penetration (the percentage of customers taking up bundles) as the dependent variable. The independent variables would be the financial indicators and market characteristics, such as the operators’ market shares, relative prices (obtained from the hedonic price analysis), and competition and/or regulatory variables; socio-economic parameters, such as GDP per capita, could also be included in the model as control variables.

Moderators

Deborah Minehart

Department of Justice

Presenters

Saturday September 26, 2015 2:33pm - 3:05pm
GMUSL - Room 221

2:33pm

Using Empirical Estimates of Broadband Utilization to Target Broadband Adoption Incentive Programs
Paper Link

Encouraging low-income Americans to subscribe to home broadband service has proven to be an uphill battle for broadband proponents. While the monthly cost of broadband service is one obvious barrier to adoption within this population, other factors also play a role. Some may lack the necessary digital literacy skills, while others may lack a home computer or rely on other resources like libraries or computing centers for their broadband access. It is therefore important to measure barriers to broadband adoption within this portion of society, as well as determine what steps would have the greatest impact on closing the digital divide in a given geographic area.

This study measures the marginal impacts of various behavioral and demographic variables on home and mobile broadband adoption. In addition, those marginal impacts are used to build a model with which policymakers can estimate the number of low-income households in a given geographic region that do not subscribe to broadband. This procedure allows policymakers to estimate the number of low-income non-adopters in a given area who might be responsive to price incentives (potential Lifeline discounts), the number who may require digital skills training before adopting (regardless of price incentives), and the population for which more aggressive outreach may be needed beyond price and skills training. The procedure would allow for proper sizing of these various components of a broadband adoption toolkit.

This study relies on multiple data sources. Early data collected through the Lifeline Pilot Project sheds light on low-income respondents and their decision-making process when offered various incentives to subscribe to home broadband service. In addition, using a rich dataset collected through random-digit-dial telephone surveys of 8,442 low-income respondents across eight heterogeneous states (Iowa, Michigan, Minnesota, Nevada, Ohio, South Carolina, Tennessee and Texas) between 2010 and 2014, this study uses logistic regression models to predict home and mobile broadband adoption decisions among low-income households. The regression models use binary dependent variables indicating whether or not the household subscribes to home broadband service and whether the household uses mobile broadband. The models incorporate independent variables for demographic factors such as the urban/rural location of the household, homeowner age, race, ethnicity, employment status, disability status, and the presence of children in the home. In addition, the models incorporate behavioral factors, such as the presence of a computer in the household and use of the Internet at locations outside of the home (such as at work or a library), as independent variables.
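
A minimal sketch of such an adoption logit, on synthetic survey records, is shown below; the column names (home_bb, has_computer, uses_internet_elsewhere, and so on) are illustrative assumptions rather than the study's actual variables.

```python
# Logit sketch of home-broadband adoption on synthetic survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
survey = pd.DataFrame({
    "rural": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "employed": rng.integers(0, 2, n),
    "children_in_home": rng.integers(0, 2, n),
    "has_computer": rng.integers(0, 2, n),
    "uses_internet_elsewhere": rng.integers(0, 2, n),
})
# Synthetic adoption outcome driven by both demographic and behavioral factors
index = (-1.0 + 2.2 * survey["has_computer"] + 0.8 * survey["uses_internet_elsewhere"]
         - 0.5 * survey["rural"] - 0.02 * (survey["age"] - 50))
survey["home_bb"] = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(int)

fit = smf.logit("home_bb ~ rural + age + employed + children_in_home"
                " + has_computer + uses_internet_elsewhere", data=survey).fit()
# Average marginal effects correspond to the 'marginal impact' language above
print(fit.get_margeff().summary())
```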

The Lifeline Pilot Project data, coupled with early applications of the model, suggest that in addition to the demographic factors that have been shown to have a significant marginal impact on both home and mobile broadband adoption, behavioral factors that had not previously been incorporated into such models also have a significant impact on a low-income individual’s decision whether to subscribe to home or mobile broadband service. This combination of demographic and behavioral factors helps provide a more robust picture of the low-income non-adopter, allows researchers to estimate the number of such non-adopters within a given area, and helps policymakers design solutions that address the lower adoption rates within this subset of the population in a more cost-effective manner.

Moderators

Jill Moss

Technical Advisor, USAID

Presenters

Chris McGovern

Connected Nation, Inc.

Hongqiang Sun

Connected Nation

Authors

Saturday September 26, 2015 2:33pm - 3:05pm
GMUSL - Room 332

2:33pm

Comparison between Benefits and Costs of Offload of Mobile Internet Traffic Via Vehicular Networks
Paper Link

Dedicated Short Range Communications (DSRC) is an emerging technology that connects automobiles with each other and with roadside infrastructure. The U.S. Department of Transportation may soon mandate that cars be equipped with DSRC to enhance safety. This work finds that if it does, DSRC networks could also become an important new way to provide Internet access in urban areas that is more cost-effective than expanding the capacity of cellular networks. By combining our simulation model with data collected from an actual vehicular network operating in Porto, Portugal, we estimate how much Internet traffic could be carried on vehicular networks that would otherwise be carried by cellular networks, under a variety of conditions. We then compare the benefit of the cellular infrastructure cost savings enabled by offload with the cost of the DSRC vehicular network, to determine whether the former exceeds the latter. Although the benefits from offloading Internet traffic alone are not enough to justify a universal mandate to deploy DSRC in all vehicles, i.e. the Internet offload benefit alone is less than total costs, we find that the majority of DSRC-related costs must be incurred anyway if safety is to be enhanced. Thus, soon after a mandate to put DSRC in new vehicles becomes effective, the benefits of offload in densely populated areas would be significantly greater than the remaining costs, which are the costs of the roadside infrastructure that serves as a gateway between DSRC-equipped vehicles and the Internet. Moreover, the offload benefit would exceed the DSRC infrastructure cost in regions of progressively lower population density over time.
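
The structure of the cost-benefit comparison can be illustrated with a back-of-the-envelope calculation like the one below; every figure is an illustrative assumption, not one of the authors' calibrated estimates.

```python
# Illustrative structure of the offload benefit vs. roadside-unit cost comparison.
offload_gb_per_month = 2_000_000      # traffic offloaded via DSRC in a metro area
cellular_cost_per_gb = 0.04           # avoided marginal cellular cost, $/GB
rsu_count = 400                       # roadside units needed for coverage
rsu_annualized_cost = 3_000           # $/unit/year (capex + backhaul + opex)

annual_benefit = 12 * offload_gb_per_month * cellular_cost_per_gb
annual_rsu_cost = rsu_count * rsu_annualized_cost

print(f"annual offload benefit:    ${annual_benefit:,.0f}")
print(f"annual roadside-unit cost: ${annual_rsu_cost:,.0f}")
print("offload alone covers the remaining (roadside) costs:",
      annual_benefit > annual_rsu_cost)
```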

Moderators
PD

Paul de Sa

Bernstein Research

Presenters

Alexandre Ligo

PhD Student, Engineering and Public Policy, Carnegie Mellon University

Authors

Jon Peha

Professor, Carnegie Mellon University

Saturday September 26, 2015 2:33pm - 3:05pm
GMUSL - Room 225

2:33pm

Shaping Privacy Law and Policy by Examining the Intersection of Knowledge and Opinions
Paper Link

This paper presents a novel approach to studying modern privacy issues. While there have been many surveys conducted over the last several years about online privacy, we present the first comprehensive study of how privacy knowledge and privacy opinions interact with each other. Our work provides significant insight into how consumers respond to the data practices of the government and the private sector.

Moderators

David Sobel

Electronic Frontier Foundation

Presenters

Carol Hayes

University of Illinois College of Law

Authors

Saturday September 26, 2015 2:33pm - 3:05pm
GMUSL - Room 120

2:33pm

Ex-Post Enforcement in Spectrum Sharing
Paper Link

It seems inevitable that future wireless systems will include shared spectrum. Shared spectrum can be viewed as a rearrangement of rights among stakeholders that will require enforcement. Demsetz indicates that enforcement is a key factor in any property rights management, and Shavell argues that the timing of the enforcement action (ex ante or ex post) plays an important role. The emphasis in commercial-government sharing in the US has been on ex ante measures.

It has been posited that a system built on efficient ex post enforcement would reduce the opportunity cost of the ex ante measures. Determining the role of ex post enforcement in a spectrum sharing scheme is of significant importance since spectrum sharing will inevitably result in interference events. We propose to evaluate the role of ex post enforcement by modeling how an ex post only enforcement scheme might work, and what the limits are on its effectiveness. In particular, there are a number of factors to consider, including the cost and time of adjudication as well as how well the penalty is calibrated to the value of the communication.

To examine the role of ex post enforcement in a spectrum sharing regime, we study an ex-post-only regime. To determine whether (and when) this approach is superior to an ex ante approach, we build a model of a geographic region with geographically distributed secondary users (SUs) and a single primary user (PU). The aggregate signal power of the secondary users will be ‘measured’ at the primary user’s antenna. The number of excluded secondary users will be determined based on an ex ante approach that uses exclusion zones. We will plot the opportunity cost of the exclusion zones for varying values of secondary user communications.

To evaluate ex post enforcement, we posit an adjudication system that penalizes the secondary user for each interference event received at the PU’s antenna. The penalty would be proportional to the lost value of reception by the PU. The SU optimizes their transmissions so that the net value of a sequence of transmissions is positive.

There are a number of phenomena that we study in this scenario. First, as the value of SU transmissions increases, an SU may find it worthwhile to risk a higher interference penalty by transmitting closer to the PU’s antenna. This results in dynamic and self-determined “exclusion zones”. We can also model the income stream to the PU; at some point, it may be more valuable for the PU to collect interference penalties than to operate its own system in that location.
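
A toy version of this self-determined exclusion behavior is sketched below: secondary users at random distances from the primary receiver transmit only when the value of a transmission exceeds the expected interference penalty. The path-loss model, penalty rule, and all parameters are illustrative assumptions.

```python
# Toy model of value-vs-penalty transmission decisions under ex post enforcement.
import numpy as np

rng = np.random.default_rng(3)
n_su = 200
dist_km = rng.uniform(0.1, 10.0, n_su)     # SU distances from the PU antenna
tx_value = rng.uniform(0.5, 5.0, n_su)     # value of one transmission per SU

alpha = 3.5                                # path-loss exponent (assumed)
rx_power = dist_km ** (-alpha)             # normalized power received at the PU
harm_threshold = 0.05                      # power above this level harms the PU
penalty_rate = 50.0                        # fine per unit of excess power

# Each SU weighs its transmission value against the expected ex post penalty
expected_penalty = penalty_rate * np.maximum(rx_power - harm_threshold, 0.0)
transmits = tx_value > expected_penalty

print(f"share of SUs choosing to transmit: {transmits.mean():.2f}")
if (~transmits).any():
    print(f"self-determined exclusion reaches roughly "
          f"{dist_km[~transmits].max():.2f} km from the PU")
```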

The above approach assumes that adjudication is immediate and costless. To determine the bounds of adjudication costs, we determine the cost level that would result in a region that is equivalent to an exclusion zone in ex ante enforcement. Having achieved that, we reason about the effectiveness of ex post enforcement system and their technical requirements.

Ex post enforcement must play a role in practical spectrum sharing systems. Despite this, it is not a topic that has attracted much attention in the research literature, with the notable exception of the work by Sahai and his co-authors (see, for example [6]). In this paper, we examine the behavior of a pure ex post enforcement system and compare it to pure ex ante approaches. The results of this study will help researchers develop feasible approaches to adjudication and will help policymakers balance the use of ex ante and ex post enforcement techniques in spectrum sharing regimes.

Moderators

Derek Khlopin

Senior Advisor for Spectrum, NTIA
Derek Khlopin is Senior Advisor for Spectrum at the National Telecommunications and Information Administration, U.S. Department of Commerce. Prior to NTIA, Derek was Head of Government Relations, North America, for Nokia Solutions and Networks. Derek also spent time in law and public policy for the Telecommunications Industry Association and has served at the FCC's Wireless Telecommunications Bureau. Follow Derek on…

Presenters

Amer Malki

Ph.D. student- Telecommunications and networking Program, University of Pittsburgh

Authors

Martin Weiss

Associate Dean, University of Pittsburgh

Saturday September 26, 2015 2:33pm - 3:05pm
GMUSL - Room 121

3:05pm

Effects of Media Use Behavior on the Preference for Channel Bundling of Multichannel Services
Paper Link

This paper analyzes the factors that influence consumers’ preferences for channel bundling of multichannel services (provided by cable TV, DBS and IPTV) in Korea, focusing on consumers’ media use behaviors as key determinants. For the analysis, we use a mixed logit model and estimate the coefficients of interaction terms between household attributes and the channel bundle characteristics of the multichannel service. The micro data on household attributes, including media use behavior, come from the Korean Media Panel survey conducted by the Korea Information Society Development Institute (KISDI) in 2013, while the data on the characteristics of multichannel services were collected from the operators’ homepages, except for viewer ratings, which were provided by Nielsen Korea, Inc.

We find that (i) households who preferred channel bundles composed of higher viewer-rating programs are those who spent more time watching terrestrial broadcaster’s programs than the other TV programs or who spent more time surfing the internet; (ii) households who preferred channel bundles composed of a variety of genres are those who spent more time watching VOD on TV or who spent more time either watching videos via PCs/mobile devices or surfing the internet; and (iii) single dwellers or married couples without children have preferences for the channel bundle composed of more watched channels, while upper-income households are less sensitive to the price.

Moderators

Deborah Minehart

Department of Justice

Presenters

Saturday September 26, 2015 3:05pm - 3:37pm
GMUSL - Room 221

3:05pm

Local Economic Impacts of Investments in Community Technology Centers: An Empirical Investigation
Paper Link

The objective of this study is to analyze the local economic impact of investments in community technology centers (CTCs). More broadly, it asks whether investments in CTCs are primarily motivated by considerations of equity and social justice, or whether there is an economic rationale that can justify such investments.

The community technology center movement has a long pedigree, both in the United States and abroad. Historically, their motivation was to extend information and communication technology (ICT) access and training services to communities and individuals deprived of such services because of low socioeconomic status or lack of digital literacy. In effect, they were seen as universal service programs for those who could not afford household ICT access. However, CTCs also provided a host of services that might have economic consequences: digital literacy training, small business services, job training, etc. These services may cumulatively be expected to have effects on local economic growth, through encouraging small business entrepreneurship, lowering unemployment and enhancing local labor skills. Identifying the magnitude of these economic consequences, if any, is critical to ensuring continuing public support to CTCs.

The relationship between telecommunications and economic growth has been long recognized in the economics literature, ever since Jipp’s pioneering work found a positive correlation between telephone density in a country and per capita Gross Domestic Product (GDP). By substituting for other production inputs and reducing transaction costs, telecommunications contributes to economic growth. Growth in turn makes more investment capital available for telecommunications development and also contributes to demand by increasing household income. However, establishing a connection between CTC investments and economic growth is harder, and has not been explicitly addressed in the literature.

Analysis of the economic consequences of investments in CTCs is also complicated by the lack of quality data as well as by methodological issues. Data availability is a major concern, since no centralized database of CTC investments exists — most CTCs are run by a wide variety of entities including municipalities and city governments, charitable foundations, industry and trade groups, and public libraries. To solve this problem, we use data on CTCs from two sources that provide information on subsets of all CTCs: data for CTCs attached to public libraries are available from the Institute of Museum and Library Services (IMLS), and data for Public Computing Centers (PCCs) funded through the Broadband Technology Opportunities Program (BTOP) are available from the National Broadband Map (NBM). The IMLS data track individual libraries and PCCs and log the types of services available at each location, including digital literacy courses and general education assistance. The NBM data include the periodic reports to the NTIA from recipients of PCC grants about the number of new and improved PCCs, the number of new and upgraded workstations available to the public, hours of operation, average connection speed, primary uses of the PCCs, average users per day, and training provided with BTOP funds. We supplement this with information on broadband availability from the National Broadband Map, and with socioeconomic and demographic information from the U.S. Bureau of the Census such as poverty level, average income, and average household size, as well as geographic data such as population and the share of the population living in rural areas.

In terms of method, models for the economic consequences of ICT and broadband deployment need to account for the simultaneity bias and spurious correlation. The incremental economic effect of CTCs needs to be isolated from that of the availability of broadband and ICTs in the wider community; it needs to rule out reverse causality, namely the tendency of wealthier communities to invest more in all public infrastructures including CTCs; and it needs to control for trends in the overall national and regional economies. Our paper utilizes an econometric model that addresses these methodological issues using appropriate controls and instrumental variables.
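
The instrumental-variables strategy can be illustrated with a manual two-stage least squares on synthetic county-level data, as in the sketch below; the variable names (ctc_investment, employment_growth, and terrain_cost as the instrument) are placeholders, not the paper's actual specification.

```python
# Manual 2SLS sketch on synthetic data to show how an instrument breaks the
# reverse-causality / simultaneity problem described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500

terrain_cost = rng.normal(size=n)     # instrument: shifts the cost of building a
                                      # CTC, assumed unrelated to local growth shocks
income = rng.normal(size=n)           # observed control (reverse-causality channel)
u = rng.normal(size=n)                # unobserved local shock

# Endogenous regressor: CTC investment responds to income and to the shock u
ctc_investment = 1.0 - 0.8 * terrain_cost + 0.5 * income + 0.5 * u + rng.normal(size=n)
# Outcome: the true CTC effect is 0.3; naive OLS is biased because u also drives growth
employment_growth = 0.3 * ctc_investment + 0.6 * income + u + rng.normal(size=n)

# Stage 1: project the endogenous regressor on controls plus the instrument
Z = sm.add_constant(np.column_stack([income, terrain_cost]))
ctc_hat = sm.OLS(ctc_investment, Z).fit().fittedvalues

# Stage 2: regress growth on the predicted regressor and controls
# (standard errors from this manual second stage are not corrected)
X2 = sm.add_constant(np.column_stack([ctc_hat, income]))
iv_fit = sm.OLS(employment_growth, X2).fit()

naive_fit = sm.OLS(employment_growth,
                   sm.add_constant(np.column_stack([ctc_investment, income]))).fit()
print("naive OLS estimate of the CTC effect:", round(naive_fit.params[1], 3))
print("2SLS estimate of the CTC effect:     ", round(iv_fit.params[1], 3))
```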

The outcome of this research will provide inputs into the relative merits of the investments of scarce public resources into CTCs or other public infrastructure investments. It has significant implications for public policies aiming at universal broadband access through middle mile institutions, as well as for the creation of community resources and local development.

Moderators
JM

Jill Moss

Technical Advisor, USAID

Presenters

Jenna Grzeslo

Pennsylvania State University

Authors

Krishna Jayakar

Penn State University

Saturday September 26, 2015 3:05pm - 3:37pm
GMUSL - Room 332

3:05pm

Gigabit Broadband, Interconnection Propositions, and the Challenge of Managing Expectations
Paper Links

How should market and regulatory expectations evolve as broadband access speeds increase toward Gigabit speeds? One might note that the FCC in the United States only just increased the definition of broadband from 4 Mbps to 25 Mbps in 2015 and the average connection speed is only approximately 25 Mbps. An inquiry premised on a future with much faster broadband access speeds thus might seem premature. We disagree. Evolving consumer preferences shape the broadband offerings while the regulatory sphere is codifying acceptable behavior of network operators. These consumer expectations and regulatory decisions have the potential to either facilitate or hinder the deployment of an Internet with very high-speed connectivity.

This paper explores both technical and policy questions premised on a future where both large providers (e.g. Comcast, Verizon, Google Fiber) and small broadband ISPs (e.g. rural and community owned networks) have connectivity offerings ranging from 100 Mbps to 1 Gbps of access speed. We ask, what expectations should exist regarding how a broadband access provider offering such high-speeds is interconnected with other networks? How should the current norms and expectations that have developed regarding access network performance extend across the interconnection links of such a very high-speed broadband provider? Do the expectations of performance on the access links themselves change with such high-speed offerings? Do existing measurement tools and systems provide consumers and regulators with a clear understanding of how well such high-speed broadband access networks are functioning and interconnected? Are there dimensions of interconnection beyond capacity that are relevant to the customer experience and regulatory discussions?

There are two dimensions to the changes we explore here: a significant increase in the speed of access networks and a change in scope of performance expectations to include interconnection links and paths all the way to the applications and services that users are trying to access. The performance of the end-to-end path is a function of both infrequent long-term decisions (interconnection decisions, capacity decisions, etc.) and short-term decisions (BGP changes, CDN source selection, etc.) made by both the broadband access providers and by the other network actors delivering content and services to users. This underappreciated joint influence over the user experience is important to consider in setting market and regulatory norms and policy.
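
A toy calculation makes the joint-influence point concrete: end-to-end throughput is governed by the tightest segment along the path, so a faster access tier only helps if interconnection and server-side capacity keep pace. The capacities below are illustrative assumptions.

```python
# Illustration: the end-to-end path is only as fast as its weakest segment.
def end_to_end_mbps(access, interconnect_per_user, transit_per_user, server_per_user):
    # Per-user throughput is bounded by the tightest of the shared segments
    return min(access, interconnect_per_user, transit_per_user, server_per_user)

before = end_to_end_mbps(access=100, interconnect_per_user=80,
                         transit_per_user=150, server_per_user=200)
after_access_upgrade = end_to_end_mbps(access=1000, interconnect_per_user=80,
                                       transit_per_user=150, server_per_user=200)
print(before, "->", after_access_upgrade,
      "Mbps: upgrading the access link alone changes little")
```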

Moderators

Paul de Sa

Bernstein Research

Presenters

Steven Bauer

MIT CSAIL


Saturday September 26, 2015 3:05pm - 3:37pm
GMUSL - Room 225

3:05pm

0011011 [ESC]ape
Paper Link

The Internet of Things is here, but it is not under control.

We don't own or control our smartphones, tablets, or consoles. Data-hungry companies do. In two years, you won't own your smart television. In five years, you won't truly control your self-driving car. In ten, you won’t control your networked house. To escape, we must re-establish digital ownership at the most basic level. What you own, you must control. We have to hit [escape].

The virtues of ownership — independence, simplicity, privacy, modularity, wealth-building, and self-determination — will be necessary to escape the control companies assert over our property through intellectual property licenses. [ESC]ape will explore the social and technological developments that have driven the erosion of property rights in the digital context, and how the digital context increasingly defines physical reality. It will survey legal responses to technological progress and argue that these developments have led to a situation in which citizens do not meaningfully own or control their own property. It will explore the ramifications of the current state of property rights in digital objects and smart property, specifically with regard to rights of privacy, autonomy, and governance. The proposed solution will be to assert digital ownership. The piece will argue that developments in crypto-technology can, for the first time, make true digital ownership possible, and that this solution can allow individuals to enjoy the full positive promise of the Internet of Things while minimizing its negative consequences.

Moderators

David Sobel

Electronic Frontier Foundation

Presenters

Joshua Fairfield

Professor of Law, Washington and Lee University School of Law
Josh Fairfield is a nationally recognized scholar on law, governance, economics, and intelligence issues related to technology. He has written on the law and regulation of e-commerce and online contracts and on the application of standard economic models to virtual environments. He has also written on the ethical and legal issues involved in virtual privacy and cyber-security.


Saturday September 26, 2015 3:05pm - 3:37pm
GMUSL - Room 120

3:05pm

Spectrum License Design, Sharing, and Exclusion Rights
Paper Link

The FCC is in the midst of a rulemaking to create a novel tripartite sharing regime in the 3.5 GHz band. This has the potential to be a watershed event in the decades-long transition toward more flexible, dynamic, market-based spectrum management. As part of this proceeding, Lehr (2014b) proposed interpreting commercial licenses for protected access as options contracts that explicitly separate interference protection and exclusion rights, as a way to endogenize market-based incentives to share spectrum. This paper builds on Lehr (2014b) by setting forth the larger vision implicit in the earlier proposal and expanding on the case for separating exclusion and interference protection rights. This separation will enable a licensing regime that supports more dynamic and granular assignment of access rights; is more consistent with the future of radio networks and spectrum utilization; and expands the economic tools available to regulators for incentivizing efficient spectrum usage, which necessarily includes sharing spectrum more intensively.

Moderators

Derek Khlopin

Senior Advisor for Spectrum, NTIA
Derek Khlopin is Senior Advisor for Spectrum at the National Telecommunications and Information Administration, U.S. Department of Commerce. Prior to NTIA, Derek was Head of Government Relations, North America for Nokia Solutions and Networks. Derek also spent time in law and public policy for the Telecommunications Industry Association and has served at the FCC's Wireless Telecommunications Bureau.

Presenters

Saturday September 26, 2015 3:05pm - 3:37pm
GMUSL - Room 121

3:40pm

Coffee Break
This year during coffee breaks there will be several identified topic tables in the Atrium with TPRC Program Committee members and attendees eager to discuss the latest issues. If you're new to TPRC and seeking a place to meet new friends, or if you are returning and seeking lively discussion, look for the signs and join the conversation.

Saturday September 26, 2015 3:40pm - 4:10pm
George Mason University School of Law Atrium

4:10pm

Understanding the Federal Communication Commission's Policy-Making Using Big Data
Paper Link

The increasing availability of financial and consumer data through the internet as well as ever more accessible computing power to collect, organize, and analyze information has encouraged the widespread use of big data -- and has transformed all aspects of business from banking to retailing. On the other hand, big data has yet to play a role in our understanding of government, the administrative process, and policy-making.

In one of the first efforts to use big data to understand regulation and policy-making, we look to the Federal Communications Commission (FCC) and communications policy. In particular, we employ a unique data set representing the entire Electronic Comment Filing System (ECFS), a database that spans nearly three decades and includes virtually every formal submission to the FCC. Under agency regulation, all comments and other filings, including all replies, reports, applications, adjudication submissions, and, significantly, notices of ex parte meetings with commissioners and agency staff must be filed in ECFS. Our database has over 4 million specific records. In addition, we combine this database with another unique database derived from the official FCC Record, which publishes all agency action at a commission and bureau level.

Using various recursive regression techniques, we derive correlations between the activity of the commenters and ex parte meetings and agency action. Our tentative conclusions provide evidence on the drivers of FCC behavior. First, we find that comments and ex parte meetings are positively correlated with agency order production. While the causal arrow between these two variables is, of course, ambiguous, this result is expected given that comments and ex parte activity likely both drive and anticipate agency action. On the other hand, we find significantly higher correlations between ex parte meetings and orders than between comments and orders, suggesting a greater impact of ex parte meetings by elites as opposed to the broader community of commenters. This result has serious implications for how we understand the “democratic” nature of rule-making and other administrative practices.
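
As a minimal illustration of the kind of correlation analysis described above (a sketch under assumed file and column names, not the authors' recursive procedure), one could aggregate ECFS filings and FCC Record orders by month and compare how comment volume and ex parte volume track order production:

  # Illustrative sketch; "ecfs_filings.csv", "fcc_record_orders.csv" and the
  # filing_type codes are hypothetical placeholders.
  import pandas as pd

  filings = pd.read_csv("ecfs_filings.csv", parse_dates=["date_filed"])
  orders = pd.read_csv("fcc_record_orders.csv", parse_dates=["date_released"])

  def monthly_counts(df, date_col, kind=None):
      # Count rows per calendar month, optionally restricted to one filing type.
      if kind is not None:
          df = df[df["filing_type"] == kind]
      return df.set_index(date_col).resample("M").size()

  panel = pd.concat(
      {
          "comments": monthly_counts(filings, "date_filed", kind="COMMENT"),
          "ex_parte": monthly_counts(filings, "date_filed", kind="EX PARTE"),
          "orders": monthly_counts(orders, "date_released"),
      },
      axis=1,
  ).fillna(0)

  # Simple contemporaneous correlations; the paper's recursive regressions
  # would add lags, docket-level detail, and controls.
  print(panel.corr()["orders"])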

Second, we extend this approach to examine correlations between FCC action and other variables. For instance, we examine how certain law firms and lobbying firms correlate with agency action. We also compare how these effects vary among the various FCC bureaus. We find variation in the correlations between particular firms and agency action, suggesting the existence of “insiders” at the FCC who have an advantage in getting the agency to act.

Finally, we consider the role of correlation and big data analysis in policy formation. We argue on normative grounds that descriptive correlations offer a powerful tool for understanding institutions and how they form policy. These techniques deserve wider acceptance in both legal scholarship and social science as big data becomes more easily available.

Moderators

Geoffrey Why

Mintz Levin

Presenters
Authors

Saturday September 26, 2015 4:10pm - 4:42pm
GMUSL - Room 221

4:10pm

Federal Subsidies and Broadband Competition
Paper Link

Debate over U.S. broadband policy has largely shifted away from mere availability, toward the degree of competition among service providers, and its effects on price and quality, in distinct regional markets. By far the largest sustained effort to increase broadband provision in the United States has been the so-called "e-Rate" schools and library subsidy system, funded by telephone ratepayers through the Universal Service Fund and administered by a private industry consortium. In December 2014, the FCC issued an e-Rate modernization order that increased the annual funding limit for the e-Rate program by 63%, to $3.9 billion annually, and shifted it further from supporting legacy telecommunications systems toward provision of high-speed broadband within schools, including in particular an emphasis on support for internal wireless internet infrastructure within schools.

One rationale for expansion of the e-Rate subsidy program voiced by advocates has been that it would also create spillover benefits for neighborhoods served by recipient schools, by enabling scale economies in the provision of broadband services to neighborhoods by new providers. This paper undertakes a rigorous empirical evaluation of this argument using a rich panel data set assembled from multiple sources.

Using these data to control for a variety of economic, social, and demographic factors that might shift demand, as well as local factors that might shift the costs of network and service provision, I examine whether the pre-2014 e-Rate program had an identifiable and statistically significant impact on the evolution of broadband competition at the individual U.S. zip code level over the period 2005-2008. The impact of a much smaller but much more focused program also funded by the Universal Service Fund, the rural health center program, can also be evaluated, and provides a useful counterpoint in assessing the impact of these subsidy programs on the competition issue.

One statistical problem faced in undertaking this evaluation is that FCC statistics on broadband provision by geographical locale censor reported data on the number of competitors when small numbers of providers are present. If observations with censored outcome variables are simply dropped, it is well understood that estimates of effects in a regression model will be biased.

Using simulations, I test a multiple imputation method I have devised that exploits aggregate state-level information on the distribution of the censored provider outcomes, and find that it performs well in removing censoring bias. Employing these methods with an econometric model, I conclude that the small but highly focused USF rural health center funding has had a statistically and economically significant impact on the number of local broadband service providers, while the e-Rate program generally did not. However, in the very poorest or most rural areas, there is some evidence that the e-Rate program had a small but statistically significant impact in stimulating greater competition in broadband service provision.
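
The multiple imputation step could be sketched roughly as follows (an illustrative sketch only, with hypothetical file and column names, not the author's exact procedure): censored zip-code observations are replaced by draws from the state-level distribution of provider counts, producing several completed datasets whose regression estimates would then be combined with Rubin's rules.

  # Illustrative imputation of censored provider counts. Assumes censored zip
  # codes are coded as missing and that state_dist gives, for each state, the
  # probabilities p1, p2, p3 that a censored zip actually has 1, 2, or 3 providers.
  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(0)
  zips = pd.read_csv("form477_zip.csv")                      # hypothetical extract
  state_dist = pd.read_csv("state_censored_dist.csv", index_col="state")

  M = 20                                                     # number of imputed datasets
  imputations = []
  for m in range(M):
      imp = zips.copy()
      mask = imp["providers"].isna()                         # censored observations
      probs = state_dist.loc[imp.loc[mask, "state"], ["p1", "p2", "p3"]].to_numpy()
      imp.loc[mask, "providers"] = [rng.choice([1, 2, 3], p=p) for p in probs]
      imputations.append(imp)

  # Each completed dataset would be run through the competition regression and
  # the M sets of estimates pooled with Rubin's rules.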

Moderators
Presenters

Kenneth Flamm

LBJ School, Univ of Texas at Austin


Saturday September 26, 2015 4:10pm - 4:42pm
GMUSL - Room 332

4:10pm

Mobile Telecommunications Service and Economic Growth: Evidence from China
Paper Link

Many telecommunications policy efforts are aimed at increasing consumer subscribership and usage. Because of the positive externalities likely to emanate from better communications flows, these goals are likely to generate social benefits beyond the private benefits to the consumer. We provide evidence on one possible social benefit in the Chinese experience -- increased economic growth stemming from greater usage of the mobile telecommunications network.

We contribute to the literature on the role of telecommunications service in economic growth in three ways. We separately examine fixed-line and mobile telephone subscription levels. We compare results across periods and regions that differ in their level of development. In addition, we develop a method designed to address the endogeneity of telecommunications with respect to growth. We find that mobile services contribute much more to growth, but that the effect diminishes as the provincial economy develops.
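
A bare-bones version of such a growth regression, ignoring the endogeneity correction that the paper develops, might look like the sketch below; the data file and variable names are hypothetical:

  # Illustrative provincial growth regression with province and year fixed
  # effects and an interaction that lets the mobile effect shrink as provinces
  # develop. This sketch does not address endogeneity.
  import pandas as pd
  import statsmodels.formula.api as smf

  panel = pd.read_csv("china_provinces.csv")

  model = smf.ols(
      "gdp_growth ~ mobile_pen_lag + fixed_pen_lag"
      " + mobile_pen_lag:log_gdp_pc_lag"
      " + C(province) + C(year)",
      data=panel,
  ).fit(cov_type="cluster", cov_kwds={"groups": panel["province"]})
  print(model.summary())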

 


Moderators

Rod Ludema

State Department

Presenters

Michael Ward

University of Texas at Arlington


Saturday September 26, 2015 4:10pm - 4:42pm
GMUSL - Room 225

4:10pm

Risk-Based Vulnerability Disclosure: Towards Optimal Policy
Paper Link

As computing has become increasingly ubiquitous and embedded (as demonstrated by industrial control systems, in-vehicle systems, in-home care systems, and within the energy and transportation infrastructures) the issue of responsible disclosure has returned to the fore. These new computing contexts require revisiting the nature of vulnerabilities, and thus responsible disclosure. The goal of this work is to critique the current disclosure practices, particularly in terms of pervasive computing. Based upon these critiques, grounded in the history of vulnerabilities, and informed by a series of expert interviews, we propose a model of risk-based responsible disclosure.

Research on vulnerability disclosure policy was an early focus in the economics of security, particularly until 2006. However, that earlier research reasonably assumed models of computers applicable to desktops, laptops, and servers: that there is a centralized source of patches, that patching is possible in a very short time frame, that patching is low cost, and that the issue of physical harm need not be addressed. Currently there is limited agreement upon best practices for vulnerability disclosure. This arises in part from the increasing diversity of both vulnerabilities and their potential impact. There are some clear lines; for example, it is not acceptable to disclose a vulnerability by implementing it and causing harm to victims. There are also well-known reasons for disclosure, specifically creating incentives for vendors to patch and diffusing information to potential victims for their use in risk mitigation.

The trade-offs between transparency and confidentiality are increasingly complex. Responsible disclosure must be equitable: informing the marketplace, incentivizing software manufacturers to patch flaws, protecting vulnerable populations, and simultaneously minimizing the opportunities for malicious actors. To understand and resolve these challenges we begin with the current state of vulnerability research. Stepping back provides a high-level historical perspective from the first identifiable vulnerability in a mass-produced device (beyond the canonical physical bugs in the first highly custom computers) to the Superfish malware in 2015. We describe extant models of disclosure, identifying the strengths and weaknesses of each of these. After that, we summarize factors previously used as vulnerability (and thus disclosure) metrics. These historical analyses and technical critiques are augmented by a series of interviews with technology and policy experts.

We conclude that there is now no single welfare-maximizing disclosure regime. Given this, we advocate for a model of optimal disclosure grounded in risk-based analysis. Such an analysis should be complete and deterministic for a given context. We propose the factors necessary for such a systematic analysis. We then use well-known cases to test the framework and provide illustrative but practical examples.

Moderators
Presenters

Andrew Dingman

Indiana University

Gianpaolo Russo

Indiana University


Saturday September 26, 2015 4:10pm - 4:42pm
GMUSL - Room 120

4:10pm

The Value of Network Neutrality to European Consumers
Paper Link

BEREC’s recognition of network neutrality as a key policy priority in 2010 has led to various related activities, for instance a fact-finding on traffic management practices and an assessment of IP interconnection. In consequence, European regulators have gained a solid basis for determining next steps. However, this is only the case for network neutrality questions related to the supply side of Internet Access Service (IAS). The demand side has been tackled to a much lesser extent. How do consumers understand and conceptualize network neutrality? Do consumers value aspects of net neutrality in their preferences for IAS offers? These questions drive the consumer research, for which BEREC commissioned an extensive study.

One particularly relevant research objective for the study was to understand the effect that information has on consumer behavior. This paper addresses this research objective and sheds light on the relevance of qualitative insights in developing meaningful consumer information on the complex subject of network neutrality. Furthermore, the paper describes the effects of such consumer information as measured by a representative online survey in four European countries (CR, CZ, EL, SE).

In the survey, one half of respondents received an information package on network neutrality and its effects, whilst the other half did not. Our results show clearly that respondents who received the information package had a significantly better understanding of the issue of network neutrality and its effects on their quality of experience. However, this additional knowledge affected neither their choices of IAS products nor their attitudes towards network neutrality.
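
As a purely illustrative sketch (with hypothetical column names, not the study's actual analysis), the split-sample comparison could be tested with simple contingency tables: comprehension should differ between the informed and uninformed halves, while product choices should not.

  # Illustrative chi-square tests on the information treatment.
  import pandas as pd
  from scipy.stats import chi2_contingency

  survey = pd.read_csv("berec_survey.csv")                   # one row per respondent

  for outcome in ["understood_nn", "chose_nn_friendly_offer"]:
      table = pd.crosstab(survey["received_info"], survey[outcome])
      chi2, p, dof, _ = chi2_contingency(table)
      print(f"{outcome}: chi2={chi2:.2f}, p={p:.4f}")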

Our paper highlights the importance of mixed-methods consumer research in guiding policymaking and regulation, in particular with respect to topics such as network neutrality that immediately affect consumers’ quality of experience. It also illustrates that consumer information should use vivid animation in order to be effective. In addition, we found that network neutrality-related product attributes play an important role in consumer purchase decisions for IAS products. Given the complex nature of the subject, stakeholders, and in particular ISPs who want to bring new IAS offers to market, will need to understand consumer behavior and preferences in more depth.

Moderators

Scott Jordan

University of California

Presenters

Rene Arnold

WIK Consult


Saturday September 26, 2015 4:10pm - 4:42pm
GMUSL - Room 121

4:42pm

Right Way Wrong Way: The Fading Legal Justification for Telecommunications Infrastructure Rights of Way
Paper Link

Per common practice, telecommunications providers use rights-of-way to build physical network infrastructure on lands they do not own, for deploying cable aboveground or underground and for placing wireless transmission towers. Agreements to use these lands are usually made with public landowners such as local governments and the agencies that oversee national parks, state forests, and the like.

The procedures for interacting with and compensating landowners in order to obtain rights-of-way have been established per regulation and court precedent. In short, private landowners should be justly compensated; public landowners may be compensated directly, but more often the firm using the land must offer some sort of remedy that is in the public interest. This paper focuses on the legal justifications for allowing telecommunications firms to use publicly-owned lands for rights-of-way.

While the 1996 Telecommunications Act includes some specific rules for rights-of-way as needed by telecom service providers, much of the law regarding this matter descends from utilities regulation and the common law of land ownership. More specifically, utilities that operate aboveground power lines or underground pipelines are designated as franchisees that have been granted certain privileges for using land that is owned by someone else, and in return these franchisees face various public interest obligations. For example, a fossil fuels company that lays a pipeline through a state forest is often required to satisfy the public interest by vowing to repair ecological damage. Historically, telecommunications firms have been subjected to similar requirements.

This paper will introduce rights-of-way policy for private operators that make use of public lands, including the corresponding regulations in the telecommunications industry. Recent actions by telecommunications firms in which they have sought to be released from public interest responsibilities -- including the maintenance of universal service programs, serving as common carriers, and serving as Carriers of Last Resort -- have eroded their legal justifications for unfettered use of public rights-of-way, which is one of the most important benefits they receive from the regulations they hope to escape. The paper will conclude with a discussion of whether this conundrum can be resolved via existing telecommunications regulations, or whether a new focus on local property rights and public utilities law should be considered.

 


Moderators

Geoffrey Why

Mintz Levin

Presenters

Ben Cramer

Pennsylvania State University


Saturday September 26, 2015 4:42pm - 5:15pm
GMUSL - Room 221

4:42pm

Mobile Communications Policies and National Broadband Strategies in Developed and Developing Countries: Lessons, Policy Issues and Challenges
Paper Link

The intelligent mobile phone has become the most widely used communications device in the world and the access device of choice in the developing world. The International Telecommunication Union’s report “The World in 2014: ICT Facts and Figures” estimates that there were some 7 billion mobile service subscriptions by the end of 2014, corresponding to a global population of some 7.3 billion. Mobile cellular penetration rates stand at 96% globally, 121% in developed countries and 90% in developing countries. Mobile broadband subscriptions have increased from 268 million in 2007 to 2.1 billion in 2013, an average annual growth rate of 40%. By the end of 2014 the number is expected to have reached 2.3 billion, with some 55% in developing countries, compared to only some 20% in 2008. There are now more than twice as many mobile broadband subscriptions as fixed ones.

Mobile broadband communications requires an integration of wireless and wireline networks. Spectrum is the lifeblood of mobile communications services. As high-speed mobile Internet access becomes more readily available and affordable, intelligent mobile devices (e.g. smart phones, tablet computers, laptops) are being used widely for bandwidth-hungry applications, in business as well as for personal and social purposes. This means that the demand for additional spectrum bandwidth is likely to increase rapidly and outstrip the supply for the next few years. Governments have a key role to play in efficiently allocating and managing the use of the spectrum and meeting the future demand for additional spectrum bandwidth. Issues and challenges related to spectrum allocation and management will become an important component of any national wireless broadband strategy.

This paper, which complements a Panel proposal, will focus on the impact of the widespread penetration and use of the mobile phone and other more intelligent mobile devices in both developing and developed countries. It will examine and compare the role that wireless access and mobile broadband play in various national and regional broadband strategies, and how mobile communications is integrated with the wireline component of such strategies. It will discuss and compare strategies being used in developed countries like the US, Australia, New Zealand, Singapore and the EU, and developing countries like Mexico, Brazil and India, among others.

The paper will examine and discuss issues such as:
• What role does mobile broadband play in different national broadband strategies, and how is it integrated with the wireline component?
• In addition to efficiently allocating and managing the use of the spectrum, what other roles can governments and regulators play in enabling the continued growth of mobile telecommunications services?
• Could/should revenues derived from spectrum auctions be used for targeted subsidies or other demand and supply side initiatives?

We wish to find out what has worked, what has not, what problems were encountered, and whether there are lessons to be learned that are of general applicability as well as for particular countries, including developing ones like India. We wish to explore the possibilities and limitations of learning from other nations’ and regions’ experiences, identifying common policy challenges and medium-term research requirements of interest to the TPRC community.

Moderators
Presenters
Authors

Rekha Jain

IIM Ahmedabad

Saturday September 26, 2015 4:42pm - 5:15pm
GMUSL - Room 332

4:42pm

Regulating Over-the-Top Service Providers in Two-Sided Content Markets: Insights from the Economic Literature
Paper Link

The market for telecommunications services is changing rapidly, and a myriad of new players are successfully deploying new and innovative services that consumers are adopting. In particular, emerging “over-the-top” services (OTTs), which generally do not own extensive infrastructure but rather use the existing infrastructure of Internet Service Providers, create the need for investment in transmission capacity due to increased bandwidth consumption (e.g. Netflix or YouTube) and are forcing traditional telecommunications providers (telcos) to reconsider their business models as a consequence of the partly substitutive services offered by, for example, Google, Skype, WhatsApp or Facebook.

Hence, telcos claim to be exposed to declining revenues while simultaneously being forced to invest in new high-capacity infrastructure. The debate about the monetary compensation needed is partially covered by the net neutrality debate. A host of academic literature has recently looked at this discussion (cf. Krämer, Wiewiorra and Weinhardt, 2013) and considered different scenarios, welfare implications and resulting (new) revenue streams for telcos. Although this debate is still ongoing and telcos point to the necessity of additional revenues to fund the needed investments (cf. ETNO, 2012), it seems unclear whether the conclusion of this debate will resolve the disputes between traditional telcos and OTTs.

OTTs offer services complementary to the underlying infrastructure (Peitz, Valletti and Schweitzer, 2014) and might be a precondition for consumers’ high valuation of that infrastructure -- but because they draw away traditional telco revenue streams, they are also a threat. The most salient issue is whether OTT providers should and can be regulated similarly to infrastructure service providers. Infrastructure service providers are subject to several regulatory remedies, such as access regulation and interconnection obligations. OTTs currently do not face such remedies. But shouldn’t Apple be forced to provide access to its ecosystem? Or should Google be obliged to offer access to its data sources? The dominance and increasing revenues of several OTTs, which are establishing content monopolies, might be what motivates this claim.

Ironically, the traditional telcos in many countries were themselves monopolies and were forced to open access to their essential facilities. The evolving question is whether the monopoly positions of popular OTTs, such as Google or Facebook, which operate two-sided market models, also qualify for similar approaches.

The final paper will focus on the key question of whether access regulation should also be enforced at higher levels of the Internet value chain, namely at OTTs with a two-sided market model. We will therefore briefly review the characteristics of the (regulated) bottleneck constituted by traditional telcos and compare these findings to the characteristics of emerging OTT players. It can be seen that the preconditions for regulating infrastructure services and content services differ. Whereas in the case of traditional telcos the physical infrastructure, with its immense sunk costs, causes the (monopolistic) bottleneck, OTTs establish a proprietary virtual network held together by data and participation. Although not all OTTs employ a two-sided market business model (cf. Hagiu and Wright, 2011), we focus on these players because of current debates and, more importantly, because they represent a completely different market form that needs to be taken into account.

Traditionally, telcos offer infrastructure services to end consumers, both at the wholesale and at the retail level. This constitutes a one-sided business model. Thus, telcos have an incentive to charge high prices and to serve only those customers with a high valuation for the service. In contrast, an OTT in a two-sided market acts as an intermediary who needs to “bring both sides on board” and has an incentive to price efficiently (Rochet and Tirole, 2006). Hence, a comparison of the underlying market mechanisms and the incentives of the players involved seems necessary.

A vast amount of economic literature on access to essential facilities has evolved over the years (e.g. Laffont and Tirole, 2001) and indicates that some kind of regulation of the access market is appropriate to achieve stated goals such as efficiency or innovation. Considering two-sided markets, the conclusions are less clear. Although intermediaries in two-sided markets tend to gain a certain amount of dominance without necessarily harming welfare (cf. Caillaud and Jullien, 2003), the paper will consider the arguments of proponents who call for opening the ecosystems of dominant OTTs, which makes it necessary to analyze the mechanisms and effects of two-sided markets. The paper will therefore also look at models in the context of two-sided markets and, after pointing out their relevance in the context of OTTs, examine the welfare implications. The effect of a hypothetical opening of two-sided networks seems particularly relevant in the context of established proprietary virtual networks.

The paper will conclude by comparing the two different market forms, i.e. one-sided and two-sided markets, and highlighting their main differences. Based on the extant literature (cf. Schiff, 2003), we argue that, in contrast to the situation in traditional monopolistic bottlenecks, opening existing virtual networks may lead to less competition, although the (social) welfare implications may still be positive. These implications may be of paramount importance for OTTs with proprietary databases or separate networks, especially with respect to the ongoing debate concerning dominant OTT networks and ecosystems such as Android, iOS or Facebook.

Moderators

Rod Ludema

State Department

Presenters

Michael Wohlfarth

Universitaet Passau

Authors

Jan Kraemer

Full Professor, University of Passau

Saturday September 26, 2015 4:42pm - 5:15pm
GMUSL - Room 225

4:42pm

Proving Limits of State Data Breach Notification Laws: Is a Federal Law the Most Adequate Solution?
Paper Link

While the discussion about a federal law on data breach notification is ongoing and a rash of large, costly data breaches has galvanized public interest in the issue, this paper investigates the phenomenon of data breach notification letters in terms of their content. We explore the causal link between, on one side, state-specific notification regimes, the industry sectors of breached organisations, and the breach types that generate notifications and, on the other side, the type and timing of the communications issued by organisations. This will help shed light on the ultimate effects of the current set-up of data breach notification laws in the U.S. In particular, based on observed company behaviour, do these laws act predominantly through the reputational fear of breached organizations, increasing company security measures, or through the mitigation that customers can put into place once the communication is received?

In order to perform such an analysis, we empirically answer the questions below by labeling a sample of letters according to the messages customers may perceive when they read them. Specifically, over 400 notifications issued in the U.S. in 2014 are classified based on elements that can be isolated and analysed, e.g. (1) does the letter alert the customer to possible consequences, or does it rather belittle the event; (2) is the customer in a position to immediately identify the importance of such a missive, or can the letter mislead the addressee into dismissing it as spam. The analysis of the content of the letters also extends to the time span between the data breach and the delivery of the notification to the customer.

Based on these intentional choices made by organisations when composing and sending notifications, we are able to identify the pitfalls and opportunities generated by the possible implementation of a federal data breach notification law in the U.S., in contrast to the present state of the art.

The research is innovative in presenting objective findings related to notification timing, notification style, and notification content more generally. It is based on 445 letters issued in 2014 in four states, representing more than 50% of the data breaches reported in the U.S. in the same year (783 according to the ITRC - Identity Theft Resource Center).

Moderators
Presenters

Fabio Bisogni

TU Delft / FORMIT Foundation


Saturday September 26, 2015 4:42pm - 5:15pm
GMUSL - Room 120

4:42pm

A Semantic Network Analysis of The Network Neutrality Debate
Paper Link

With nearly 4 million remarks, the 2014 proceeding that produced the recently released Open Internet Order stands as the most commented Federal Communications Commission (FCC) docket to date. According to FCC Chair Tom Wheeler, the volume and substance of this public input was crucial in shifting the agency’s final rules away from legal justifications based on Section 706 towards Title II reclassification. Mass public sentiment had a significant impact on agency decision-making. The bulk release of comments thus gives communication researchers an opportunity to more fully understand the formation and expression of public opinion. Using a variety of proven data mining and semantic network analysis techniques, this paper will conduct an exploratory but comprehensive quantitative inquiry of the comments.

Previous quantitative surveys have been enveloped in controversy. In particular, the Sunlight Foundation’s analysis sparked a number of responses and counter analyses by involved think tanks and activists, requiring further clarification by the organization and an official response by the FCC. By first reproducing and then extending this work, this paper will serve as a first step in establishing an official understanding of the public’s input.

This paper is divided into four sections. The first section provides a literature review of the relevant research from both psychology and communication theory that undergirds semantic network analysis.

The second section seeks to answer a number of key questions that still remain after the initial round of research. For example, how many comments mentioned and then supported Title II reclassification? Of the total submitted comments, how many were due to form submissions? From which organizations did these form submissions come? How many of the comments were not related to network neutrality but still expressed general concern with the state of the American broadband industry? Moreover, just how many of the comments were not related to any broadband concern?

The third will employ a variety of data mining techniques, including word-pair link strength and k-nearest neighbors algorithms, to chart changes in semantic networks as a quantitative proxy for changing public opinion. By parsing the comments into a number of time series, changes between sets can be tracked. Four time series have been identified. The first will compare comments during the first and last 30 days of the official comment period. The second will explore the influence of popular television host John Oliver. His widely shared and viewed TV segment ended with a call for viewers to file comments in the proceeding, which immediately crashed the FCC comment system. By comparing comments filed in the 30 days after the show ran with those filed in the 30 days before, a measure of his influence on the conversation will be established. Next, the importance of Internet Slowdown Day, an event popularized by a number of activist groups, will undergo analysis through a similar 30-day before-and-after comparison. Lastly, all of the comments in the initial round will be compared to those of the reply round.
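
A minimal sketch of the word-pair link strength idea (illustrative only, with hypothetical file names; the paper's pipeline is more elaborate) is to build co-occurrence networks for two time slices and compare edge weights:

  # Count word pairs that co-occur within a comment, before and after a cutoff.
  import itertools
  import re
  from collections import Counter

  def cooccurrence(comments):
      counts = Counter()
      for text in comments:
          tokens = sorted(set(re.findall(r"[a-z]+", text.lower())))
          counts.update(itertools.combinations(tokens, 2))
      return counts

  before = open("comments_before_oliver.txt").read().splitlines()
  after = open("comments_after_oliver.txt").read().splitlines()
  net_before, net_after = cooccurrence(before), cooccurrence(after)

  # Word pairs whose link strength grew the most after the TV segment.
  growth = Counter({pair: net_after[pair] - net_before.get(pair, 0) for pair in net_after})
  for pair, delta in growth.most_common(10):
      print(pair, delta)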

The fourth and final section will review conclusions from the research and outline opportunities for future studies.

Moderators

Scott Jordan

University of California

Presenters

William Rinehart

Director of Technology & Innovation Policy, AAF


Saturday September 26, 2015 4:42pm - 5:15pm
GMUSL - Room 121

5:15pm

The Road to an Open Internet is Paved with Pragmatic Disclosure & Transparency Policies
Paper Link

Ensuring a healthy ecosystem for broadband services is critical to securing the future of a healthy and open Internet. From the perspective of social welfare maximization, this means collective management of the decision-making regarding how we design, operate, provide access to, use, and pay for our broadband access networks. Realizing this collective goal requires balancing the interests of multiple market participants that are often in conflict and evolve in light of changing technical, business, and policy conditions.

The efficiency of markets and regulatory interventions depends on whether decision-makers at all market levels are appropriately informed. This requires the selective sharing of information. Consumers need information about their broadband access options in order to make informed decisions about which (if any) broadband services to subscribe to, how to use those services, and what investments to make in complementary assets (devices, content, applications). Providers of content, applications, and other complementary goods and services need to know about broadband access options to appropriately position their offerings in the market. And, regulators need information about broadband access options to design and enforce policies that will promote competition and ensure appropriate market choices exist.

All of these stakeholders need information about broadband service availability, pricing, performance, and to the extent discernible, about trends and plans that will shape future options. Furthermore, broadband access service providers either already possess or may more easily obtain a great deal of the information needed by market participants. However, the information sharing challenges are far from simple. Different stakeholders need different information, information is costly to collect and share, and to the extent it impacts market outcomes, has strategic value. For example, better informed consumers might be more inclined to switch providers, thereby intensifying price competition; while better informed regulators may be better able to limit supra-competitive profit opportunities. Additionally, sharing of too much information about the performance of specific broadband connections might threaten subscriber privacy or render broadband networks more vulnerable to attack.

Disclosure and Transparency (D&T) policies comprise a toolset of rules, processes, and mechanisms that are used by market participants to help structure and manage the flow of information that is needed for informed decision-making. D&T policies comprise a significant component of the regulatory provisions in the FCC's 2015 OIO, which sets forth the FCC's approach for regulating providers of broadband access services. The focus of this paper is on providing a framework with which to interpret the OIO's D&T provisions within the larger market context. The OIO's specific D&T provisions are just one component of the tools and mechanisms that shape how broadband management relevant information is discovered, shared, and interpreted. Other regulatory provisions in the OIO and other market mechanisms such as performance testing platforms interact with the explicit D&T provisions that mandate specific obligations and responsibilities. As we shall explain, the richness of D&T tools is desirable in order to address the complex and diverse questions that arise in the context of broadband management requiring information sharing. Moreover, understanding how these D&T policy tools interact and complement (or substitute) for each other is helpful if these tools are to be appropriately applied and appreciated. Application of these tools should be nuanced and evolvable to incentivize cooperation and voluntary disclosure by the ISPs while also safeguarding the interests of end users and intermediaries in the broadband Internet ecosystem.

In Section 2, we review the specific D&T provisions in the FCC's 2015 OIO and situate these within the larger D&T policy framework. We introduce a meta-tool, the D&T Coordinator, to assist in better understanding the landscape of potential interventions and with which to contrast the relative merits of different interventions in different contexts.

In Section 3, we apply our framework to divergent prototypical examples of the sorts of questions that confront the challenge of how best to manage broadband networks. At one end, we have what appears to be the narrow and specific question of crafting an appropriate set of D&T policies to ensure adequate reporting of packet loss by ISPs. At the other extreme, we consider open-ended questions that relate to society's aspirations or goals for what the Internet and broadband services should be. We argue that an assortment of D&T tools is needed for the array of questions confronting broadband stakeholders, but with different emphasis in each case, because the contexts within which these questions arise involve both specific and general, closed and open-ended details that must be addressed appropriately.

Section 4 offers our concluding summary and directions for future work.

Moderators

Geoffrey Why

Mintz Levin

Presenters
Authors

Saturday September 26, 2015 5:15pm - 5:47pm
GMUSL - Room 221

5:15pm

Technology Broadband Roadmap for Rural Areas in the Andes and Amazon Regions in Peru
Paper Link

In the last five years, several countries in Latin America have launched national broadband plans. Most of these plans have a similar first-stage component of deploying, or expanding, the national fiber-optic backbone network to interconnect main urban areas and reach rural areas in order to increase broadband penetration. For example, Peru is now building a new fiber-optic backbone that will expand backbone coverage to 22 states and 180 provinces, including urban and rural areas. Once the backbone is deployed, the next big challenge that governments and broadband service providers will face is the deployment and operation of local access networks in underserved and unserved rural areas (in Latin America, most of them located in the Andes range and Amazon rainforest). In this scenario, there is a high level of uncertainty as to the best local access technology to deploy and operate. Given this setting in Latin America, and taking Peru as a case study, this research addresses the question: What is the technology roadmap for introducing broadband services to underserved and unserved areas in the Andes and Amazon regions of Peru?

The paper will identify and compare current (WiFi, WiMAX, LTE and TVWS) and new (balloons, millimetric-wave, drones and gigabit-satellite) wireless technology candidates for access networks in the Andes and Amazon regions of Peru. The research will focus on the following key issues:

1. Analysis of the access network deployment cost for these wireless options based on different coverage and speed scenarios.

2. Analysis of operating and maintenance costs for these options after the initial deployment. Once again coverage and speed will play an important role in the analysis. This look at costs over time will allow the development of a broadband roadmap for the region that will describe the forecasted deployment of network capacity over a time period of 10 years.

3. Spectrum management. In the last decade in Peru, most licensed bands used to provide broadband services have been granted to operators on a nationwide and regional basis. This paper will examine current available spectrum and alternatives to enable more spectrum for new access network deployments in underserved or unserved areas in the Andes and Amazon regions.

To address these points, the paper will determine the technical performance and cost of the wireless options using both specialized propagation software to carry out wireless network simulations in this remote environment and a detailed engineering cost model to quantify the cost of the access networks based on coverage and speed in this geographic setting (the Andes and Amazon regions).
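
A back-of-the-envelope version of such an engineering cost comparison (illustrative only; every input below is a hypothetical placeholder, whereas the paper relies on propagation simulations and a detailed cost model) might look like this:

  # Rough coverage and 10-year cost sketch for two hypothetical access options.
  import math

  def sites_needed(area_km2, cell_radius_km):
      # Approximate hexagonal cells by their area, about 2.6 * r^2.
      return math.ceil(area_km2 / (2.6 * cell_radius_km ** 2))

  def ten_year_cost(area_km2, cell_radius_km, capex_per_site, opex_per_site_yr):
      n = sites_needed(area_km2, cell_radius_km)
      return n, n * (capex_per_site + 10 * opex_per_site_yr)

  # Compare two hypothetical options for a 5,000 km^2 rural district.
  for name, radius, capex, opex in [("long-range WiFi", 3, 15_000, 2_000),
                                    ("TVWS", 8, 30_000, 3_000)]:
      n, total = ten_year_cost(5_000, radius, capex, opex)
      print(f"{name}: {n} sites, 10-year cost ~ ${total:,.0f}")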

The paper will employ a quantitative research methodology that will utilize geographic coverage and demand data provided by Peruvian government agencies (i.e., the telecommunications regulatory agency (OSIPTEL), the universal service fund agency (FITEL), the transport and telecommunications department (MINTEL), etc.) Additionally, the paper will use access network elements and wireless equipment cost data from vendors in Peru and regional vendors.

Moderators
Presenters

David Espinoza

PhD Candidate/Graduate Research Assistant, University of Colorado at Boulder

Authors

David Reed

University of Colorado at Boulder, University of Colorado
Dr. David Reed is the Faculty Director for the Interdisciplinary Telecommunications Program at the University of Colorado at Boulder. He also leads the new Center for Broadband Engineering and Economics, which specializes in interdisciplinary research on the emerging broadband ecosystem, and is Senior Fellow at the Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado. Dr. Reed was the Chief...

Saturday September 26, 2015 5:15pm - 5:47pm
GMUSL - Room 332

5:15pm

Business Strategies of Korean TV Players in the Age of Over-the-Top (OTT) Services
Paper Link

This paper is a comparative analysis of the business models and strategies employed by firms in the digital video marketplace facing competition from over-the-top (OTT) content services. It focuses on the Korean TV market and analyzes the strategies of all players, including traditional over-the-air broadcasters, pay TV providers, telecommunication network providers, and the new over-the-top (OTT) service providers.

According to the definition of ABI Research, OTT content is characterized as “online video from services and operators that is distributed over a number of channels including fixed (e.g. to computers, connected computer equipment, tablets) and mobile (e.g. smartphones and tablets) broadband - it is not associated with a pay TV service provider subscription” (ABI Research, 2014).

Consumers are increasingly streaming or downloading long-form video programming (mainly movies and TV shows) using OTT content services, and sometimes unsubscribing from traditional video providers. This phenomenon - described as “video cord-cutting” or “over-the-top (OTT) bypass” - suggests that the business models of traditional TV service providers are under threat (Banerjee, 2014). OTT services were initially provided by specialized OTT video content providers such as Netflix and Hulu. As a result, traditional pay TV service providers are experiencing some revenue losses and slowdowns (Banerjee, 2014). For instance, while Netflix added 14 million subscribers in the last three years, cable and satellite TV providers lost 7.6 million. In the third quarter of 2013, Netflix subscribers (33 million) surpassed HBO subscribers (28.7 million) (Song, 2014; Variety, 2014, April 30).

Pay TV providers are responding to this new threat by experimenting with new services such as: 1) multiscreen (N-screen): everywhere, anywhere; 2) monetizing content beyond the subscription; 3) online pay TV packages: a fully OTT model; 4) cloud pay TV: apps in smart TVs or a disruptive business model; 5) hybrid broadcast/broadband services (Gartner, 2013, July 25; cited in Song, 2013). Not only pay TV providers but all types of service providers, including terrestrial broadcasters, IT companies, and device manufacturers wishing to enter the TV media business, provide services in OTT form (Song, 2014; Crandall, 2014; KISDI, 2013).

Unlike the case of the United States, in which third-party players (e.g., Netflix) rather than pay TV providers dominate the OTT content service market, the Korean case presents a different picture. Domestic telecommunications service providers, terrestrial broadcasters, cable TV providers and IPTV providers have led the OTT content market, actively launching OTT video services as part of their N-screen strategies. In addition, while global companies pay attention to large-scale global platforms, domestic companies focus more on making connections to multi-screens and mobile devices (KISDI, 2013). About six service providers have led the OTT content market, including traditional telecom service providers (KT’s Olleh TV mobile, SK BB’s Btv Mobile, and LG U+’s U+ HDTV), terrestrial broadcasters (POOQ), and cable TV providers (TVing by CJ HelloVision and EveryonTV, a joint venture between Hyundai HCN and Pandora TV) (KT, 2014).

Despite the criticality of OTT content services, not many studies have been conducted on the business models and strategic positioning of TV players, except in consulting firm or company trade reports (ABI Research, 2014; Song, 2013; Ross & Erasmus, 2013; Aidi, et al., 2013). Therefore, this study aims to provide insights and implications for deploying global OTT content services by comparing and contrasting the business models and strategies of Korean TV players as a case study.

Moderators

Rod Ludema

State Department

Presenters

EUNA PARK

professor, University of New Haven


Saturday September 26, 2015 5:15pm - 5:47pm
GMUSL - Room 225

5:15pm

Models for Cybersecurity Incident Information Sharing and Reporting Policies
Paper Link

Reporting requirements represent the area of cybersecurity policy where governments have been most active, to date, but depending on their purpose, these reporting policies vary greatly with regard to what kinds of information entities are expected to report, and to whom. Right now, the European Union is in the process of passing a Network and Information Security Directive (NISD) and the U.S. Congress appears to be moving towards a final version of its Cybersecurity Information Sharing Act (CISA), both of which function primarily to encourage — or require — that more information about cybersecurity incidents be shared with — or reported to — entities other than the ones who detect those incidents. The two policies share a common underlying principle — that everyone would benefit if information about cybersecurity incidents were discussed among and available to more actors — but they seek to establish completely different information reporting and sharing models.

This divergence reflects the ways in which policies built on spreading cybersecurity incident information more widely work to fulfill several different goals, including protecting people whose information has been breached, helping others defend in real time against threats that have been previously experienced or identified, and contributing to a better long-term understanding of the types of threats observed and the effectiveness of various countermeasures. Each of these three goals has very different implications for security reporting regimes and may pose different challenges for both defenders and regulators. Through comparisons of the current pending policies in the U.S. and E.U., as well as other existing cybersecurity incident reporting and data breach notification policies, this analysis proposes templates for designing policy measures intended to meet these three different goals, including the ways in which each of those goals shapes whom information is shared with, what information is shared, the timeline for sharing that information, and the relative benefits of mandatory versus voluntary regimes. This analysis also explores the pitfalls of trying to conflate multiple goals under a single reporting regime, using the example of the E.U. directive, which attempts to combine elements of all three goals.

Policy-makers have different roles to play in promoting these distinct goals of security reporting, all of which may be challenging for private actors to address adequately in the absence of government intervention, but for different reasons. While many existing and proposed cybersecurity policies focus on short-term reporting requirements intended to protect consumers and aid real-time threat remediation, it is the third purpose of information reporting — the long-term data collection about incidents and security interventions — that is in many ways most central to the establishment of effective policies governing security actions and outcomes. Without that information, policy-makers have no means of determining which defensive measures have the greatest impact or what the consequences of security breaches actually are. These policies could therefore serve as a first step in the cybersecurity policy-making process — setting the stage for defenders and policy-makers alike to gain a better grasp of what the security landscape looks like and how it can be improved. Given the large number and great variety of different actors involved in security threats and defending against them, any individual firm or actor is very limited in terms of what can be learned from their own security data. Combining the threat data from defenders who play different roles in the security ecosystem, serve different customers, have insight into different layers of the network, and impact each other’s security is central to figuring out where policy-makers may need to intervene and how.

Moderators
Presenters

Josephine Wolff

Rochester Institute of Technology


Saturday September 26, 2015 5:15pm - 5:47pm
GMUSL - Room 120

5:15pm

Comparative Case Studies in Implementing Net Neutrality: A Critical Analysis
Paper Link

(a) Objective, including insight developed: This paper critically examines the relatively few examples of regulatory implementation of network neutrality enforcement at the national level. The paper draws on co-regulatory and self-regulatory theories of implementation and capture, and on interdisciplinary studies into the real-world effect of regulatory threats to traffic management practices (TMP). It examines both enforcement of transparency in TMP by governments and their agencies, notably through use of SamKnows monitoring (Brazil, US, UK, EU) and the publication of key metrics, and enforcement by regulators following infringement actions, where published. It also explores the opaque practices of co-regulatory forums where governments or regulators have opted for partly private rather than public diplomacy with ISPs, notably in the US, Norway and UK.

(b) Methods used to develop the paper’s thesis: The paper presents the results of fieldwork in South America, North America and Europe over an extended period (2003-2015), the latter part of which focussed on implementation. The countries studied are Brazil, Chile, Norway, the Netherlands, Slovenia, Canada, the United States and the United Kingdom. The paper is based on rigorous in-country fieldwork (with the exception of Chile, where the UN CEPAL and the Brazilian CGI provided a forum for Chilean stakeholders to travel to workshops on comparative implementation). The final four years of research were funded by the European Commission EU FP7 EINS grant agreement No 288021 and internal funding from the university. No ISP or content provider has provided funding to the project since 2008, though several of each funded earlier stages.

(c) Why the research is novel: Most academic and policy literature on net neutrality regulation has focussed on legislative proposals and economic or technological principles, rather than specific examples of comparative national implementation. This is in part due to the relatively few case studies of effective implementation of legislation, and in part due to a fixation on the legal logjams in the United States, Brazil and the European Union. Spurious comparisons have been drawn without appropriate fieldwork to assess the true scope of institutional policy transfer. The paper notes the limited political and administrative commitment to effective regulation thus far in the countries examined, and draws on that critical analysis to propose reasons for the failure to implement effective regulation. Finally, it compares the results of implementations and proposes a framework for a regulatory toolkit for those jurisdictions that intend effective practical implementation of some or all of the net neutrality proposals currently debated. Specific issues considered are the definitions used for specialized services, and the tolerance of zero-rating practices, notably as deployed by mobile ISPs.

(d) Data assembled: empirical interviews conducted in-field with regulators, government officials, ISPs, content providers, academic experts, NGOs and other stakeholders from Chile, Brazil, United States, Canada, United Kingdom, Netherlands, Slovenia, Norway.

Moderators

Scott Jordan

University of California

Presenters

Saturday September 26, 2015 5:15pm - 5:47pm
GMUSL - Room 121

5:50pm

Reception
Saturday September 26, 2015 5:50pm - 6:30pm
George Mason University School of Law Atrium

7:15pm

Remembering Charles Benton
Join us in sharing memories of Charles Benton. 

Charles was a regular attendee at TPRC, with a particular interest in broadband policy and the design of effective public policies to bring advanced communications to disadvantaged populations and areas. His critical yet constructive contributions challenged unspoken assumptions and elevated the quality of discussions. During the weeks prior to his untimely passing from renal cancer, he helped shape the panel on Lessons from BTOP for Broadband Policy and Research, to be held at TPRC 43 on Friday, September 25, 2015, 2-3:30 pm.

Please join us for a special commemoration of Charles’ life and contributions to TPRC during dinner on Saturday, September 26, 2015. We would like to collect remembrances from the TPRC community. Please use the text box below to share your own memories and reflections.

Please share your own memories (we will make them available to Charles’ family and the public) http://www.tprcweb.com/charles-benton-memories/


Saturday September 26, 2015 7:15pm - 7:30pm
GMUSL Multipurpose Room

7:45pm

Keynote Speaker
Speakers
Julie Brill

Julie Brill was sworn in as a Commissioner of the Federal Trade Commission on April 6, 2010. Since joining the Commission, Ms. Brill has been working actively on issues of critical importance to today’s consumers, including protecting consumers’ privacy, encouraging appropriate advertising substantiation, guarding consumers from financial fraud, and maintaining competition in industries involving health care and high-tech...


Saturday September 26, 2015 7:45pm - 10:45pm
GMUSL Multipurpose Room
 
Sunday, September 27
 

9:00am

Against Jawboning
Paper Link

Despite the trend towards strong protection of speech in U.S. Internet regulation, federal and state governments still seek to regulate on-line content. They do so increasingly through informal enforcement measures, such as threats, at the edge of or outside their authority – a practice this Article calls “jawboning.” The Article argues that jawboning is both pervasive and normatively problematic. It uses a set of case studies to illustrate the practice’s prevalence. Next, it explores why Internet intermediaries are structurally vulnerable to jawboning. It then offers a taxonomy of government pressures based on varying levels of compulsion and specifications of authority. To assess jawboning’s legitimacy, the Article employs two methodologies, one grounded in constitutional structure and norms, and the second driven by process-based governance theory. It finds the practice troubling on both accounts. To remediate, the Article considers four interventions: implementing limits through law, imposing reputational consequences, encouraging transparency, and labeling jawboning as normatively illegitimate. In closing, it extends the jawboning analysis to other fundamental constraints on government action, including the Second Amendment. The Article concludes that the legitimacy of informal regulatory efforts should vary based on the extent to which deeper structural limits constrain government’s regulatory power.

 


Moderators
Harold Feld

Senior Vice President, Public Knowledge
Harold is Public Knowledge's Senior Vice President. Before becoming Senior Vice President at Public Knowledge, Harold worked as Senior Vice President of Media Access Project, advocating for the public interest in media, telecommunications and technology policy for almost 10 years. Prior to joining MAP, Harold was an associate at Covington & Burling, worked on Freedom of Information Act, Privacy Act, and accountability issues at the...

Presenters

Derek Bambauer

University of Arizona


Sunday September 27, 2015 9:00am - 9:32am
GMUSL - Room 221

9:00am

Estimating Demand for Fixed-Mobile Bundles and Switching Costs between Tariffs
In recent years many telecommunications operators in Europe have introduced fixed-mobile bundles (quadruple play tariffs) which include mobile voice and data, fixed IP voice, fixed Internet access and IP TV. The introduction of these offers raises several questions. First, it is important to understand consumers’ valuation of particular tariff components and their impact on consumer surplus. Since mobile, fixed voice and broadband all satisfy communications needs, another question is to what extent additional value is created when they are sold jointly, i.e., whether they are complements or substitutes. The interaction between fixed and mobile data services is often studied separately from voice interaction and rarely for both; our study aims to understand fixed-mobile interaction for both voice and data services.

This paper estimates demand for fixed-mobile bundles (quadruple play tariffs) using a database of subscribers to a single mobile operator in a single town in a European country which has full coverage with both ADSL and FTTH broadband technologies. We merge two datasets to construct the choice sets: (i) a monthly billing database including information about the tariff used by each consumer in the last 12 months before December 2013; and (ii) a database on the characteristics of mobile tariffs. The most important attributes of tariffs are: (i) list price per month; (ii) length of commitment; (iii) whether a handset subsidy is offered or the tariff is SIM-only; (iv) whether voice minutes are unlimited and, if not, the volume of minutes included in the list price; (v) the volume of mobile data in GB included in the offer; and (vi) the fixed broadband option (none/ADSL/FTTH). A discrete choice framework is commonly used to analyze choices of telecommunications products, including choices of tariff plans. In discrete choice models each individual chooses among a set of discrete alternatives, with preferences depending on his characteristics and product attributes, and selects the one which maximizes his utility.

Based on the demand estimation we find that consumer valuation of FTTH broadband increased over the course of 2013, while ADSL lost attractiveness relative to FTTH and also in absolute terms, which suggests that consumers increasingly care about the connection speed offered by FTTH. Consumer surplus increased substantially due to the ongoing transition of consumers from less valued quadruple play tariffs with ADSL to more highly valued tariffs with FTTH. We also find that mobile data is complementary to fixed broadband access. Mobile Internet access became possible with the introduction of 3G technology, and the usage of mobile data is on the rise with the ongoing deployment of 4G LTE technology. However, bandwidth constraints of mobile networks do not allow operators to offer unlimited data volumes within mobile tariff plans, whereas unlimited volume is nowadays standard for fixed broadband offers. Consumers can therefore use mobile data to sample online content, such as a movie, and then complete the activity using fixed broadband at home, which has no download limit and is cheaper. Thus, fixed broadband services provide additional value to mobile data services: consumers who get fixed broadband access value mobile data more, and vice versa. On the other hand, we find that mobile voice usage is a substitute for fixed broadband access, and consumers reduce their voice consumption once they get a broadband connection.

Because of the nature of voice calls, consumers must choose to make a phone call using either a mobile phone or a fixed-line connection. Hence, consumers who purchase fixed broadband value mobile voice services less because they can also use fixed broadband for voice communication.
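
For illustration only, the discrete choice estimation described above might be sketched as a simple conditional logit along the following lines; the attributes, coefficients, and simulated choices are placeholders, not the operator billing data used in the paper.

```python
# Minimal conditional logit sketch (simulated data, assumed attribute names);
# each consumer picks the tariff alternative with the highest utility, and the
# attribute coefficients are recovered by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, J, K = 500, 4, 3                       # consumers, tariff alternatives, attributes
X = rng.normal(size=(n, J, K))            # e.g. price, data allowance, FTTH dummy
true_beta = np.array([-1.0, 0.6, 0.8])
y = (X @ true_beta + rng.gumbel(size=(n, J))).argmax(axis=1)   # observed choices

def neg_loglik(beta):
    v = X @ beta                                   # systematic utilities
    v -= v.max(axis=1, keepdims=True)              # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n), y].sum()

result = minimize(neg_loglik, np.zeros(K), method="BFGS")
print("estimated attribute coefficients:", result.x.round(2))
```

Consumer-surplus comparisons of the kind reported in the abstract would then follow from the estimated coefficients (for example via log-sum welfare measures), which is beyond this sketch.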

Moderators
Presenters
JL

Julienne Liang

Orange/France Telecom

Authors

Sunday September 27, 2015 9:00am - 9:32am
GMUSL - Room 332

9:00am

Suing Internet Firms to Police Online Misconduct: An Empirical Study of Intermediary Liability Litigation by Secondary Stakeholders
Paper Link

Internet intermediary platforms are online services that provide a means for information to be hosted, shared, and transmitted between third parties. As such, they also enable the distribution of content that is ‘objectionable’ from a societal point of view. Consequently, a growing population of individuals and organizations has filed civil suits against Internet platforms as a consequence of online speech or misconduct by the third-party users of these services. Such actors comprise a platform’s “secondary stakeholders,” meaning that they neither own stock nor are part of the firm’s supply chain or customer base. Secondary stakeholders may be particularly motivated to act when they feel they are direct victims of a negative consumption externality that occurs as a result of a firm’s economic activities and thus are ‘owed’ a response. Although these ‘secondary’ stakeholders do not hold sway over the firm via formal or implied contractual obligations, regulatory power, or market mechanisms, intermediary liability litigation affords them the ability to press sometimes powerful demands.

I situate legal action targeting Internet intermediary platforms in the context of stakeholder theory and argue that specific attributes of the stakeholders and of their requests in intermediary liability suits may confer legitimacy and urgency which results in increased salience, or likelihood of positive responses, from judges. While the singular locus of examination in much prior work is the salience decision, I argue for the importance of addressing factors that shape stakeholder demands in order to better explicate the relationship between stakeholders, their requests, and litigation outcomes (salience). I develop a two stage framework in which I first examine the relationship between Stakeholder Attributes and Request Development and then the relationship between Request Development and Salience (or litigation outcomes).

In order to test my hypotheses, I built a hand-collected database of 295 objectionable-content lawsuits filed against Internet intermediary platforms in the United States between 1995 and 2014. I conduct an empirical analysis with a two-stage logistic regression model, using the following measures developed over a two-year period. The type of plaintiff is coded as individual, firm, or government. Acts of online misconduct were classified on a four-point scale ranging from 1 (most likely a tort) to 4 (most likely a crime). Intermediary platforms were assigned to six categories, ranked on a 6-point ordinal item measuring the category's distance (in an engineering sense) from the Internet 'backbone,' or how 'downstream' it is, as a proxy for its location in the ecosystem. The farther downstream a category is, the less central it is to the network and the more visible it is to the end user. The size of the stakeholder’s request is measured in terms of ‘Duties of Care,’ a 4-point ordinal item representing the type of remedy requested by a plaintiff (information disclosure, content blocking, content filtering, service discontinuation) and the corresponding burden to the intermediary. Finally, litigation outcomes are coded (liable or not liable) for each intermediary platform.
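
A stylized version of such a two-stage model is sketched below; the variable names and synthetic data are illustrative assumptions, not the hand-collected lawsuit database described above.

```python
# Illustrative two-stage sketch: stage 1 relates stakeholder attributes to the
# size of the request ("duties of care"); stage 2 relates the request (and the
# attributes) to the liability outcome. Data are simulated for exposition.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 295
df = pd.DataFrame({
    "criminality": rng.integers(1, 5, n),     # 1 = most likely tort ... 4 = most likely crime
    "downstream":  rng.integers(1, 7, n),     # visibility / distance from the backbone
})
df["duty_of_care"] = (0.5 * df["criminality"] + 0.3 * df["downstream"]
                      + rng.normal(0, 1, n))
logit_index = -0.8 * df["duty_of_care"] + 0.3 * df["criminality"]
df["liable"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_index))).astype(int)

# Stage 1: attributes -> request development (OLS stand-in for an ordinal model)
stage1 = sm.OLS(df["duty_of_care"],
                sm.add_constant(df[["criminality", "downstream"]])).fit()

# Stage 2: request development (+ attributes) -> salience / litigation outcome
stage2 = sm.Logit(df["liable"],
                  sm.add_constant(df[["duty_of_care", "criminality",
                                      "downstream"]])).fit(disp=0)
print(stage1.params, stage2.params, sep="\n")
```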

My primary findings are:

1) The more likely an act of online misconduct is to be classified as a crime rather than as a tort, the greater will be the size of the stakeholder’s request.

2) The more visible an intermediary is to users in terms of its location in the network architecture, the greater the size of the stakeholder’s request.

3) While the criminality of misconduct and the visibility of intermediary targets contribute positively to the likelihood of positive litigation outcomes for the stakeholder, the size of the stakeholder’s request is negatively correlated with the likelihood of a positive litigation outcome.

 


Moderators

Michael R. Nelson

Public Policy, CloudFlare
The future of the Internet and the Cloud, Internet Governance, cybersecurity, online surveillance, and online privacy

Presenters
Jaclyn Selby

Postdoctoral Fellow, Tuck School of Business
Jaclyn is a postdoc at the Center for Digital Strategies at the Tuck School of Business where her research interests lie at the intersection of strategic management and technology policy. She focuses on competitive dynamics in the high tech and media industries, particularly emphasizing the regulatory environment and innovation management. Her work has been published in Communications & Strategies, Foreign Policy Digest, and Intellibridge Asia...


Sunday September 27, 2015 9:00am - 9:32am
GMUSL - Room 225

9:00am

Machine Generated Culpability: Socio-Legal Agency in Machine Learning for Cybersecurity Enforcement
Paper Link

What happens when an algorithm is capable of identifying security targets with greater accuracy than human analysts? Does it matter if the algorithm used is so opaque that a human analyst or expert cannot articulate the reasons why there is reasonable suspicion (or probable cause) to act against a particular target? Does it matter whether the operational purpose is to prevent a national security threat, gather intelligence, or prevent crime? Or whether the target will be subject to a drone strike, arrest or cyber network operation?

Priorities in cybersecurity have shifted to support the identification of future threat actors, and the determination of whether potential harm warrants preventative action. Machine learning technologies (MLTs) — the automatic improvement of computer algorithms via feedback using statistical methods — are especially useful for cybersecurity problems where large databases may contain valuable implicit patterns that can only be discovered automatically, given the limits of individual human cognition.

While MLTs have great promise for securing cyberspace, there are many practical and theoretical challenges in building intelligent predictive cybersecurity systems that can provide accuracy while preserving civil liberties and accounting for human agency. Our research shows that MLTs can alleviate privacy concerns related to data collection and utilization. Still, cybersecurity lacks an operational framework as to which legal authority (or authorities) applies to a given cyber operation, and a legal framework as to what actions state or private actors are permitted to take on the basis of opaque MLT outcomes.

This paper seeks to systematize the socio-technical foundations of utilizing machine-learning technologies (MLTs) for decision-making in the domain of cybersecurity. The goal is to develop a predictive modeling framework that can be applied on a diverse set of data sources and legal authorities to achieve situational awareness and information assurance while maintaining accountability, transparency and procedural process in accordance with the rule of law. The framework will operate readily over new kinds of domain-independent data and therefore may be applied to many different threat analysis and sense-making problems.

Broadly, it will describe how a combination of innovative new machine-learning technologies and socio-legal mechanisms can be used conjointly over big data to achieve a trustworthy and secure cyberspace. Specifically, it seeks to better understand the operational capabilities and socio-legal limitations of current and future machine learning technologies (MLTs) for the purpose of detection of and response to cybersecurity threats in order to systematize both the computational and legal foundations for future design and deployment to secure cyberspace while prioritizing civil liberties and the rule of law.

The basic research is founded on new theoretical foundations of cognitive opacity/transparency, distributed agency, and collective intelligence, as applied to the socio-legal foundations of due process, privacy and human agency. These foundations are first embodied in an evaluative framework based on understanding past cybersecurity incidents and predicting new ones using predictive analytics. Using this evaluative framework, we will lay the foundations for a way to conduct evaluation of MLTs in modular and scalable cybersecurity platforms, with the goal of automating information assurance.

Moderators

David Simpson

Chief Public Safety & Homeland Security Bureau, FCC

Presenters

Sunday September 27, 2015 9:00am - 9:32am
GMUSL - Room 120

9:00am

Preservation of Best-Effort Service on the Internet in the Presence of Managed Services and Usage-Generated Applications
Paper Link

There is general consensus that Best Effort service has been an essential contributing factor in the Internet’s explosive growth and the corresponding growth in innovations, applications and creativity that has benefitted society and consumers. Users of Best Effort service, notwithstanding the absence of guarantees, have enjoyed reasonable performance in quality-of-service features such as latency, jitter, throughput and reliability, as well as unfettered connectivity to content and applications. Also, importantly, beyond a low flat subscription fee for the broadband connection, usage has been free. Users have taken advantage of this by using the Internet as a platform for experimenting with innovative ways of using the network, which has led to the creation of new applications that, in turn, have fueled growth.

In the current discussion on Net Neutrality, an important question is whether the Internet’s essential characteristics can be preserved if Internet Service Providers (ISPs) are allowed to offer Managed Service with guaranteed quality of service (QoS). The concern is that the flexibility of offering differentiated services may lead to a "damaged goods" strategy: ISPs will have an incentive to induce subscribers to pay a premium price for Managed Service by withholding necessary provisioning and investment in bandwidth for Best Effort service. As a consequence, Best Effort service will be offered with poor quality, which will reduce social welfare and consumer surplus in the short term, hinder the creation of new applications and innovations, and thus undermine the long-term vitality of the Internet.

We study the above issues by developing a model-based approach for investigating equilibrium outcomes of allowing Managed Service. We consider a monopoly ISP which offers both Best Effort Service for free use and Managed Service with guaranteed QoS for a fee per use. Our analysis starts from modeling optimal choices of consumers regarding whether to subscribe to the broadband network, which service (Best Effort or Managed Service) to use, and the usage of the chosen service. These decisions depend on both the average delay of Best Effort service, which depends on bandwidth and users’ self-adjustment of usage in response to delay, and the usage fee of the Managed Service. The ISP selects the usage fee and makes decisions to rent bandwidth at a given unit price to provision for the two services. The ISP’s objective is profit maximization, under the constraint that it has to deliver sufficient surplus to induce consumers to subscribe to its network.

We follow a common premise that underlies many arguments for Net Neutrality in assuming that usage of Best Effort service leads to the generation of applications. We model the dynamics of the application-creation process as a "birth-death" process, where the number of new births per unit time is proportional to the usage level of Best Effort service and the death rate is proportional to the existing number of applications. The optimal solution for maximizing immediate profit is to provide Best Effort service with the minimum quality that generates just enough consumer surplus to justify the subscription fee. However, we show that, on the contrary, forward-looking ISPs should never take this approach. To this end, we model a strategic ISP that perceives the connection between new applications generated from Best Effort service and the profitability of Managed Service. We show that the strategic view can lead to a profit-maximizing decision that is quite different from the myopic one. In many cases, it becomes optimal to offer bandwidth for Best Effort service that generates far more surplus for its users than the minimum amount.
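
In the notation of a simple sketch (ours, not necessarily the authors'), the birth-death premise can be written as a linear stock-flow relation:

```latex
\frac{dA(t)}{dt} = \beta\, u_{BE}(t) - \delta\, A(t),
\qquad\text{with steady state}\qquad
A^{*} = \frac{\beta}{\delta}\, u_{BE},
```

where A(t) is the stock of applications, u_BE(t) the usage level of Best Effort service, β the birth rate per unit of usage, and δ the death rate. Under this reading, provisioning choices that depress Best Effort usage also depress the long-run application stock, which is the channel through which the strategic ISP's incentives differ from the myopic ones.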

Our research suggests that to preserve a robust offering of Best Effort service the regulator may not need to ban Managed Service. Instead, leveraging the power of usage-generated applications on long-term profit leads to the desired result.

Moderators

Tim Brennan

UMBC/RFF

Presenters

Qiong Wang

University of Illinois at Urbana-Champaign

Authors

Debasis Mitra

Columbia University

Sunday September 27, 2015 9:00am - 9:32am
GMUSL - Room 121

9:00am

The FCC's Authority to Regulate Internet Privacy
Paper Link

As the first- and last-mile conduit, mobile and fixed broadband Internet access service providers (ISPs) play an important role in consumers’ access to an array of information, services, and applications. As the FCC has taken new steps to preserve an open Internet within the network neutrality debate, concerns nevertheless arise over the extent to which user privacy is protected by broadband ISPs, social media sites and app providers. Thus far, most digital privacy rights that consumers enjoy are defined by terms-of-service or use conditions that set out the extent to which their information is collected and shared. Within this realm, consumers typically agree willingly to broad terms-of-service agreements that facilitate the big data market, including the first step they take to access the rest of the Internet.

The FCC’s 2015 Open Internet Order was significant in its reclassification of broadband as a telecommunications service, in effect placing ISPs under Title II common carriage regulation. The Commission used this authority, buttressed by Section 706 and Title III (wireless), to prohibit blocking, paid prioritization and throttling and to enhance transparency requirements, but noted that important concerns still arise over ISP privacy and data use policies. For the time being, the FCC exercised forbearance from applying the existing customer proprietary network information (CPNI) rules under Section 222. The Commission found that broadband ISPs are a “necessary conduit for information passing between an Internet user and Internet sites or other Internet users, and are in a position to obtain vast amounts of personal and proprietary information about their customers.” The Commission acknowledged that user data privacy is a legitimate concern and is set to launch a separate rulemaking proceeding that will address specifically how CPNI applies to broadband ISPs.

Concerns over consumer privacy and data use are nothing new within telecommunications policy. In years prior, the FCC has used CPNI rules to protect the confidentiality and limit the disclosure of consumer calling records, rules that apply to wired and wireless telephone and VoIP providers. The Commission has also enforced the privacy provisions of the Cable Communications Policy Act of 1984 to protect cable television subscriber records, requiring cable operators to refrain from collecting personally identifiable subscriber information without prior consent or from sharing such information with third parties. However, broadband Internet is arguably an entirely different category in terms of the amount of information that may be collected and shared, moving well beyond telephone numbers and television programming selections to the so-called “Internet of things” ecosystem.

This paper sets forth the following main research question: What existing regulatory authority does the FCC possess to specifically regulate data use and privacy policies among broadband ISPs? To explore this question, the paper uses legal research and analysis to examine the degree to which Section 222 as well as Section 706 may apply to broadband ISP data use and privacy policies. The paper also examines the data use and privacy policies within several broadband ISPs’ terms of service to help determine the degree of information that is collected and shared internally as well as with third parties. In conclusion, the paper provides policy recommendations to help protect consumer privacy as it applies to broadband ISPs.

Moderators

David Simpson

Chief Public Safety & Homeland Security Bureau, FCC

Presenters

Andrew Bagley

Andrew W. Bagley, Esq. teaches as an adjunct professor in the University of Maryland University College's Graduate Cybersecurity Policy Program and works on e-discovery issues at the Federal Bureau of Investigation. He holds a Juris Doctor from the University of Miami School of Law, and a Master of Arts in Mass Communication Law, Bachelor of Science in Public Relations, and Bachelor of Arts in Political Science from the University of Florida...

Authors

Justin Brown

Assistant Professor, Univ. of South Florida, Zimmerman School of Advertising & Mass Communication
My research focuses on telecommunications law and policy issues including broadband deployment, privacy, network neutrality and transparency issues concerning terms of service agreements. I will be on a congressional fellowship in D.C. later this fall to serve as a legislative aide.

Sunday September 27, 2015 9:00am - 9:33am
GMUSL - Room 120

9:32am

The Song Remains the Same: What Cyberlaw Might Teach the Next Internet Economy
Paper Link

Legal and regulatory questions for the next phase of the digital economy parallel those of the early days of the commercial internet, nearly twenty years ago. Contemporary debates about the On-demand Economy, the Internet of Things, and Big Data recapitulate a familiar error: the artificial division of virtual and real-space activity. Now as in the past, this “digital dichotomy” feeds both excessive skepticism about the need for legal protections, as well as excessive concern about the threats from technology-based innovations. The early history and evolution of cyberlaw show the importance of overcoming such perspectives and recognizing the role of government as an enabler rather than just a restraint on innovation. Companies such as Uber and AirBnB didn’t exist when the legal environment for first-generation Internet-based services was defined in the late 1990s, but they face strikingly similar questions today.

Moderators
Harold Feld

Senior Vice President, Public Knowledge
Harold is Public Knowledge's Senior Vice President. Before becoming Senior Vice President at Public Knowledge, Harold worked as Senior Vice President of Media Access Project, advocating for the public interest in media, telecommunications and technology policy for almost 10 years. Prior to joining MAP, Harold was an associate at Covington & Burling, worked on Freedom of Information Act, Privacy Act, and accountability issues at the...

Presenters

Kevin Werbach

Wharton/University of Pennsylvania


Sunday September 27, 2015 9:32am - 10:05am
GMUSL - Room 221

9:32am

Analyzing the Characteristic Determinants of Smartphone Post-Paid Pricing in South Korea 2010-2015
The objective of this study is to analyze the determinants of smartphone wireless plan prices in South Korea from 2010 to 2015. The study attempts to measure the monetary effect of the key service characteristics -- data, voice, text, speeds, handset subsidy, contract duration, and other additional features. The study tests two elementary hypotheses: (1) taken separately, each characteristic variable is significantly related to price; (2) the interactive effect of two selected characteristic variables is significantly related to the level of price. For example, it is hypothesized that the relationship between price and data allowance is stronger at higher levels of handset subsidy, voice, or text messaging. In terms of regression equations, we expect that the impact of data allowance on price differs when we add a series of interaction terms to the explanatory variables.

The study estimates a hedonic pricing equation using OLS regression, in which changes in price are regressed on changes in the quantities of the characteristics. We first employ ratio-transformation techniques to keep collinearity among the regressors within an acceptable range, and propose a baseline equation. We use the baseline equation to experiment with four alternative functional forms -- linear, semi-log, double-log, and Box-Cox. The study carries out a series of diagnostic tests for the regression specifications, and compares F values, R-squared values, the variances of the residuals, VIF statistics, and AICs, which allows us to assess how well each model fits the observations. The study runs not only separate semi-annual regressions but also a pooled regression over the observations from all time periods. The pooled regression is based on the assumption of fixed slope coefficients, but it can incorporate random intercepts depending on judgment about the presence of any conspicuous structural change over time. The paper does not perform any weighted regression due to the limitations of subscription data. The interpretation of the estimated coefficients is the major interest of this paper.
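
For illustration, a stripped-down version of the semi-log specification and the AIC/VIF diagnostics mentioned above could look like the following sketch; the data and column names are synthetic assumptions, not the KISDI-based dataset used in the study.

```python
# Minimal hedonic-pricing sketch: a semi-log form versus a linear form,
# compared via AIC, with VIF diagnostics for the regressors (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 550
df = pd.DataFrame({
    "data_mb":   rng.uniform(250, 10000, n),
    "voice_min": rng.uniform(50, 600, n),
    "subsidy":   rng.uniform(0, 20, n),       # monthly handset subsidy
})
df["price"] = np.exp(3.0 + 0.00008 * df["data_mb"] + 0.001 * df["voice_min"]
                     - 0.01 * df["subsidy"] + rng.normal(0, 0.1, n))

X = sm.add_constant(df[["data_mb", "voice_min", "subsidy"]])
semilog = sm.OLS(np.log(df["price"]), X).fit()      # semi-log functional form
linear  = sm.OLS(df["price"], X).fit()              # linear form, for comparison

print("AIC semi-log:", round(semilog.aic, 1), " AIC linear:", round(linear.aic, 1))
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("VIFs:", [round(v, 2) for v in vifs])
```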

The study tracks the standard smartphone post-paid offerings of Korea’s three major wireless carriers from the first half of 2010 to the first half of 2015. This panel dataset contains roughly 50-80 observations for each semi-annual period, and around 550 observations in total. The sample represents more than 94% of subscribers in the market throughout the period. The dataset can be divided into three groups: (1) pricing information, considered as indicators of the monthly payment, equal to the minimum total cost on a 2-year contract at a certain interest rate, divided by 24 -- the number of months; the minimum total cost consists of wireless plan service charges and handset payments. (2) Usage allowance information, including data, voice, text, and downstream speed, measured in megabytes, minutes, number of messages, and kilobits per second, respectively. (3) Handset subsidy information, based on a monthly reduction in the price of a post-paid service plan with contract. For the sake of parsimony, the models consider just one type of device on which the handset basket rules are based, so we do not consider different handset models. Transaction-level data on retail prices and characteristics are compiled from many sources, including agreements, firms’ online databases, newspapers, magazines, KISDI wireless competition reports, etc.

The estimated coefficients will assist in disentangling the still-complex determinants of wireless service pricing. Carriers have increasingly moved to inexpensive voice-and-text packages and changed their pricing schemes around the treatment of data. However, the extent to which any subsidy reduction or increase in voice limits benefits consumers is an empirical question, because these can be set by carriers in response to demand for data usage. The coefficients will thus provide insights into how firms choose different means of enforcing data limits as the industry evolves.

This study builds on the previous fixed-broadband hedonic works including Prud'homme & Yu (2001), Williams (2008), and Greenstein & McDevitt (2011).

 


Moderators
Presenters

Wook Joon Kim

Researcher, Korea Information Society Development Institute


Sunday September 27, 2015 9:32am - 10:05am
GMUSL - Room 332

9:32am

Information and Communication Technologies as Drivers of Social Unrest
Paper Link

Information and communication technologies (ICTs) are reducing the transaction costs of information gathering and distribution. This can be a powerful tool for citizens to protest against what they perceive as social injustice. This century has seen the use of ICTs as tangible media, facilitating movements among disgruntled citizens. Examples include the Arab Spring and the Occupy movements.

This paper aims to ascertain the impact of ICTs on political stability. Scholars have long argued that various socio-cultural factors affect the political stability of a country. Our literature review identifies the following factors as significant contributors: income per capita (poverty), education, corruption and freedom of expression. We conduct empirical tests based on a uniquely developed dataset to ascertain, ceteris paribus, whether or not ICTs play a role as a facilitator of change to the status quo.

The advent of ICTs opened up a new platform for citizens to coordinate their efforts against perceived injustices. These technologies have facilitated access to critical information and enabled greater interaction among those affected. Some recent studies suggest that social media via ICTs contributed to the Arab Spring (Ghannam, 2011). However, others have found evidence that these technologies alone are not sufficient to produce social unrest (Dewey, Kaden, Marks, Matsushima, & Zhu, 2012).

We thus expect that, as ICTs become more widely accessible to the population, grievances rooted in poverty, education, corruption and restrictions on freedom of expression may lead to greater unrest, because people can organize more easily. From an economic perspective, this implies an upward shift in the relationship curves, and thus in social unrest.

Using data from the World Bank and other international organizations, we assemble a cross-national panel dataset to test the impact of ICTs on political stability (measured by the number of protests of various types in a country per year) in the presence of the income, education, corruption and freedom-of-expression variables, to see if these technologies have made governments more or less stable. The dataset covers 10 years of data on these factors. We conduct a fixed-effect logit regression analysis to ascertain the impact of the ICT variables on social unrest in a country.
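
The fixed-effect logit described above could be sketched roughly as follows; the country-dummy (LSDV-style) specification, variable names, and synthetic unrest indicator are simplifying assumptions for illustration, not the study's actual estimator or data.

```python
# Rough sketch of a fixed-effects logit for unrest on a synthetic country-year
# panel; country dummies stand in for the fixed effects here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
countries, years = [f"c{i}" for i in range(40)], range(2004, 2014)
df = pd.DataFrame([(c, t) for c in countries for t in years],
                  columns=["country", "year"])
df["internet_users"] = rng.uniform(0, 100, len(df))   # ICT penetration proxy
df["gdp_pc"] = rng.normal(9, 1, len(df))              # log GDP per capita proxy
df["corruption"] = rng.uniform(0, 1, len(df))
idx = 0.03 * df["internet_users"] - 0.2 * df["gdp_pc"] + 1.5 * df["corruption"] - 0.5
df["unrest"] = (rng.uniform(size=len(df)) < 1 / (1 + np.exp(-idx))).astype(int)

model = smf.logit("unrest ~ internet_users + gdp_pc + corruption + C(country)",
                  data=df).fit(disp=0)
print(model.params[["internet_users", "gdp_pc", "corruption"]])
```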

ICTs may shorten the time people need to become organized and increase the frequency with which they do so. Hysteresis, the tendency to remain constant in spite of changes in the environment, reflects the delay seen in societies before they are willing to engage more visibly when faced with a problem. We may find that ICTs reduce this hysteresis because of the ease with which people learn about problems.

Researchers have found that knowing what others are doing may influence a person’s behavior. Before the growth of information and communications technologies, however, it would have taken much longer for a person to know what another is thinking. The public now has many tools to communicate with people they don’t even know. With a keystroke a person can easily find information on practically any topic they wish. Mobile phones and Facebook, for example, allow people to connect with others. On the Internet they can find blogs and, via a broadband connection, they can access videos.

Based on the results of the empirical analysis we plan to present a comprehensive framework that will help us understand the dynamics between ICTs, these factors and social unrest. We conclude with policy recommendations.

References
Abadie, A. (2006). Poverty, Political Freedom, and the Roots of Terrorism. The American Economic Review, 96(2), 50-56. doi: 10.2307/30034613.

Abernethy, D., & Coombe, T. (1965). Education and Politics in Developing Countries. Harvard Educational Review, 35(3), 287-302.

Alesina, A., & Perotti, R. (1996). Income Distribution, Political Instability, and Investment. European Economic Review, 40(6), 1203-1228.

Archer, R. P. (1990). The transition from traditional to broker clientelism in Colombia: political stability and social unrest.

Dewey, T., Kaden, J., Marks, M., Matsushima, S., & Zhu, B. (2012). The impact of social media on social unrest in the Arab Spring. International Policy Program.

Fjelde, H., & Hegre, H. (2014). Political Corruption and Institutional Stability. Studies in Comparative International Development, 49(3), 267-299. doi: 10.1007/s12116-014-9155-1.

Ghannam, J. (2011). Social Media in the Arab World: Leading up to the Uprisings of 2011. Center for International Media Assistance, 3.

Isham, J., Kaufmann, D., & Pritchett, L. H. (1997). Civil Liberties, Democracy, and the Performance of Government Projects. The World Bank Economic Review, 11(2), 219-242. doi: 10.1093/wber/11.2.219.

Moderators

Michael R. Nelson

Public Policy, CloudFlare
The future of the Internet and the Cloud, Internet Governance, cybersecurity, online surveillance, and online privacy

Presenters
Moinul Zaber

LIRNEasia
I am a telecommunications policy researcher who believes that empirical evidences can in many cases give better policy suggestions. I have received my doctoral degree from the department of Engineering and Public Policy of the Carnegie Mellon University, U.S.A. I use various data scientific approaches ( mostly statistical, econometric and machine learning approaches) to find evidences of the impact of various policy decisions related to...

Authors

Martha Garcia Murillo

Professor, Syracuse University

Sunday September 27, 2015 9:32am - 10:05am
GMUSL - Room 225

9:32am

Broadband Industry Structure and Cybercrime: An Empirical Analysis
Paper Link

Cybercrime continues to be a growing drain on the world economy. While security is admittedly a continual game of cat-and-mouse between hackers and the minders of data stored and in transit, Internet service providers (ISPs) can play a pivotal role in reducing the incidence and effect of cybercrime. This paper builds on existing theoretical and empirical work to further explore the role of competition in the level of security provided by ISPs.

This paper primarily serves to test the findings of our theoretical analysis of the impact of competition on ISP incentives to invest in cybersecurity. There are many dimensions in the relationship between ISP competition and security investment. Increased competition might lower the margins for ISPs, resulting in lower security investment. Alternatively, if users are interested in greater security and can discern the relative security levels of the competing ISPs, competition might lead to increased security investment. Yet another possibility is that competition provides an opportunity to free ride on the security provided by the rival ISP, again reducing security investment. Our previous work shows theoretically how ISP incentives change in different competitive situations. In this paper we test those theoretical results by analyzing the number of infected users, signifying botnet prevention, and the duration of infection, showing botnet mitigation.

A 2012 OECD study showed that ISPs have significant discretion, and variation, in how they address botnet mitigation. The authors of that study recognized and estimated many of the factors that can explain the sizable differences found in the security performance of ISPs, emphasizing the institutional and organizational characteristics that shape ISPs’ incentives (van Eeten et al 2010). The current paper builds on this work by using updated data and placing greater emphasis on market structure. Competitive pressure in the ISP market is considered in the previous work through variables for average revenue per subscriber and market share, using a panel of observations of the spam generated by infected users in 40 countries in 2005-2009. These variables for competitive pressure were not found to be significant, implying no relationship. In order to look more directly at the effect of competition, we identify when ISPs face significant changes in competition, through either entry or exit, and test the significance of such changes for botnet infection prevention and mitigation using econometric methods. The data are obtained in a manner similar to that used in van Eeten et al 2012, with the development of a honeypot to attract the spam generated by infected machines. These spam messages are then traced back to their ISPs through identification of their IP addresses.
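
One simple way to operationalize the entry/exit test sketched above is a panel count regression along the following lines; the synthetic data, variable names, and Poisson specification are illustrative assumptions rather than the paper's actual model.

```python
# Sketch: infected-machine counts per ISP-quarter regressed on an indicator for
# a recent entry/exit event, with ISP and quarter dummies absorbing fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
isps, quarters = [f"isp{i}" for i in range(30)], range(16)
df = pd.DataFrame([(i, q) for i in isps for q in quarters], columns=["isp", "q"])
df["competition_shock"] = rng.integers(0, 2, len(df))   # entry/exit indicator
lam = np.exp(3 + 0.4 * df["competition_shock"] + rng.normal(0, 0.2, len(df)))
df["infections"] = rng.poisson(lam)

model = smf.poisson("infections ~ competition_shock + C(isp) + C(q)",
                    data=df).fit(disp=0)
print("competition effect on infections:", round(model.params["competition_shock"], 3))
```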

Moderators

David Simpson

Chief Public Safety & Homeland Security Bureau, FCC

Presenters

Carolyn Gideon

Asst Prof Int'l Communication and Tech Policy, Tufts University

Christiaan Hogendorn

Wesleyan University


Sunday September 27, 2015 9:32am - 10:05am
GMUSL - Room 120

9:32am

Innovational Complementarities and Network Neutrality
Paper Link

The Federal Communications Commission’s Order on Protecting and Promoting the Open Internet (GN Docket No. 14-28), adopted on February 26, 2015, mentions innovation more than 100 times. The Order is based on the premise that Internet openness is a precondition for a virtuous cycle in which edge innovation and network investment mutually propel each other. The adopted rules—including bright-line standards (no blocking, no throttling, no paid prioritization) and safeguards for non-discriminatory access by edge providers and users—are seen as necessary to allow this beneficial process to unfold. Although an extensive research literature exists on the drivers of innovation in information and communication markets, the Order mainly refers to comments and assertions submitted by stakeholders and does not reference any of the pertinent innovation research. This paper seeks to close this gap by integrating insights from innovation theory and regulatory economics to examine the conditions of innovation in the Internet. We then juxtapose the findings with the vision embraced in the Order.

Because empirical observations from countries where network neutrality regulations have been in place for some time (e.g., Netherlands, Chile) are sparse and anecdotal, the paper remains largely theoretical and conceptual. There is a long-running debate on the pros and cons of network neutrality regulation that constrains forms of active traffic management. Mandating net neutrality by means of government regulation, in particular a ban on charging termination fees to content providers, would subsidize the creation of new application services. The absence of fees paid to Internet traffic providers would stimulate entry by new application service providers and also move consumers into the role of prosumers who create their own application software (Lee and Wu 2009, pp. 66; Choi and Kim 2010). While it helped clarify important aspects, this literature is too narrowly construed, as it does not explore the full set of complementarities between network and edge innovations. Moreover, the rapid convergence of a multitude of services with heterogeneous demands on the emerging all-IP network requires a broader perspective. There is also a vibrant theoretical literature suggesting that some degree of network differentiation is conducive to efficiency and innovation (e.g., Reggiani and Valletti 2011; Krämer et al. 2013; Bourreau, Kourandi and Valletti 2014), although this body of work typically uses highly abstract notions of innovation.

Our paper goes beyond these strands of literature by differentiating the types of innovation processes that unfold in the Internet economy and then linking them with research on innovation in highly interdependent systems. Building on the industrial organization literature on innovation types (e.g., Malerba and Orsenigo 1996; Breschi, Malerba and Orsenigo 2000), the paper starts with a detailed examination of the anatomy of the Internet innovation space. We distinguish innovation processes along two main dimensions: the type of coordination required for a successful innovation (modular, coupled) and the extent of the innovation (incremental, radical). The role of “innovational complementarities” between General Purpose Technologies (GPTs; Bresnahan and Trajtenberg 1995) is of particular relevance in future all-IP networks. Although the literature on GPTs has mainly focused on the role of key technologies in aggregate economic growth, its overall framing is very fruitful for understanding the dynamics of the future Internet. According to Bresnahan and Trajtenberg (1995), innovation in the upstream GPT increases the productivity of R&D in downstream application markets and, vice versa, developments of GPT-using applications raise the return to new advances in the GPT.

From this perspective, innovations within the Internet are not only driven by applications but can also be stimulated by positive feedback effects from the all-IP infrastructure and the Generalized DiffServ architecture, which function as GPTs for application services (Knieps 2013). It is therefore important that the GPTs at both the broadband infrastructure level and the traffic architecture level remain open to innovative evolution, taking into account requirements from the application side. Within this conceptual framework we examine the conditions under which different types of innovation flourish and under which an overall desirable mix of innovation emerges. Are forms of differentiation in the network a precondition for certain types of innovation? Does exploiting the innovation potential of some applications require realizing innovation potentials within the data transmission architecture? On the other hand, what types of innovation at the network and at the application and services levels are likely to be supported in a neutral network environment?

This yields a differentiated analysis. In the future all-IP world, a disaggregated representation of the Internet into all-IP broadband infrastructures, markets for Internet traffic (applying active traffic management) and markets for application services becomes more relevant. While much will depend on how the provisions in the FCC Order are interpreted and operationalized, our framework helps formulate contingent claims about how different approaches will affect innovation. If the rules are interpreted in a stringent way, it is likely that innovation will be biased toward applications and services, with the risk that infrastructure constraints may eventually slow innovation compared to a scenario in which some degree of network differentiation is allowed. One possible outcome is that innovations that thrive in a more quality-differentiated network environment might migrate to private IP networks, inadvertently undermining one of the central goals of the Order: to safeguard an integrated and open Internet.

References:

Bourreau, M., Kourandi, F. & Valletti, T. (2014). Net Neutrality with Competing Internet Platforms, CEPR Discussion Paper No. DP9827. Available at SSRN: http://ssrn.com/abstract=2444828.
Breschi, S., Malerba, F. & Orsenigo, L. (2000). Technological Regimes and Schumpeterian Patterns of Innovation, The Economic Journal, 110(463), 388-410.
Bresnahan, T.F., Trajtenberg, M., (1995), General Purpose Technologies: ‘Engines of Growth’?, Journal of Econometrics, 65, 83-108.
Choi, J.P. & Kim, B.-C (2010). Net Neutrality and Investment Incentives, RAND Journal of Economics, 41, 446-471.
Knieps, G. (2013), ‘The Evolution of the Generalized Differentiated Services Architecture and the Changing Role of the Internet Engineering Task Force’, Paper presented at the 41st Research Conference on Communication, Information and Internet Policy (TPRC), September 27-29, George Mason University, Arlington, VA, available at: http://ssrn.com/abstract=2310693.
Kourandi, F., Krämer, J., & Valletti, T. (forthcoming). Net Neutrality, Exclusivity Contracts and Internet Fragmentation, Information Systems Research. Available at SSRN: http://ssrn.com/abstract=2541091.
Krämer, J., Wiewiorra, L. & Weinhardt, C. (2013). Net Neutrality: A Progress Report. Telecommunications Policy, 37(9), 794-813.
Lee, R.S., & Wu, T. (2009), Subsidizing Creativity through Network Design: Zero-Pricing and Net Neutrality. Journal of Economic Perspectives, 23, 61-76.
Malerba, F., & Orsenigo, L. (1996). Schumpeterian Patterns of Innovation are Technology-specific. Research Policy, 25(3), 451-478.
Reggiani, C., & Valletti, T. (2011). Net Neutrality and Innovation at the Core and at the Edge. Available at http://www.lboro.ac.uk/departments/sbe/downloads/research/economics/Paper-CarloReggiani_23-11-2011.pdf.

Moderators

Tim Brennan

UMBC/RFF

Presenters
Johannes M. Bauer

Professor and Chairperson, Michigan State University
I am a researcher, writer and teacher interested in the digital economy, its governance as a complex adaptive systems, and the effects of the wide diffusion of mediated communications on society. Much of my work is international and comparative in scope. Therefore, I have great interest in policies adopted elsewhere and the experience with different models of governance. However, I am most passionate about discussing the human condition more...

Authors

Prof. Dr. Guenter Knieps

University of Freiburg


Sunday September 27, 2015 9:32am - 10:05am
GMUSL - Room 121

10:05am

Out of the Frying Pan & into the Fire: The FCC Takes over Privacy Regulation
Paper Link

In late 2014, the FCC imposed an unprecedented $10 million fine against Terracom — not for violating the FCC’s CPNI rules issued under Section 222(b) and (e), but for failing to provide “reasonable” data security, a duty the Commission found, for the first time, to flow from the general language of Section 222(a) and the “just and reasonable” standard of Section 201(b). In March, the FCC reclassified all broadband providers under Title II — and chose not to forbear from applying either of these sections to broadband. The FCC has promised to clarify what its approach will be in the future.

This paper will explore this evolving issue in depth, including several key legal questions: What does the Open Internet order’s discussion of IP addresses as the equivalent of phone numbers (in order to justify reinterpretation of “public switched network” and thus reclassification of wireless) mean for privacy regulation? How will CPNI regulation, traditionally focused on the adequacy of opt-in consent, evolve? How might the FCC use its sweeping “general conduct” standard or its claimed Section 706 authority over data practices? (The FCC’s 2014 706(b) NOI specifically asked how privacy and security concerns affect broadband deployment.) How might the FCC’s case-by-case enforcement approach work without clear limiting principles? Is the FCC essentially creating a murkier version of the FTC’s unfairness standard? What lessons can be learned from the experience of the FTC with unfairness and, more recently, with data security and privacy regulation?

How far might the FCC’s regulation extend? Might the FCC reclassify other services beyond broadband? Might it indirectly regulate non-common carriers by maintaining that telecom carriers have a duty not to “permit access” (Section 222(c)(1)) to CPNI by, say, mobile operating system or apps operators except subject to a flow-through of CPNI obligations? Will broadband providers, especially mobile operators, become the new intermediaries responsible for policing the data practices of other players in the ecosystem?

This paper will describe where the FCC may head, the pitfalls of various approaches, and offer normative suggestions for how the FCC, FTC and Congress should handle the privacy and data security practices of broadband providers (and other related services).

Moderators

David Simpson

Chief Public Safety & Homeland Security Bureau, FCC

Presenters
Berin Szoka

President, TechFreedom
Berin Szoka is the President of TechFreedom. Previously, he was a Senior Fellow and the Director of the Center for Internet Freedom at The Progress & Freedom Foundation. Before joining PFF, he was an Associate in the Communications Practice Group at Latham & Watkins LLP, where he advised clients on regulations affecting the Internet and telecommunications industries. Before joining Latham's Communications Practice Group, Szoka practiced...

Authors

Thomas Struble

Legal Fellow, TechFreedom
Legal Fellow @TechFreedom. Tech policy enthusiast. @GWLaw alumni. @KUAthletics & @LFC supporter.

Sunday September 27, 2015 10:05am - 10:35am
GMUSL - Room 120

10:05am

Crowdsourcing Privacy Policy Interpretation
Contract disputes frequently call on courts to resolve conflicts arising out of interpretative differences. In these disputes, the party at the bad end of a deal typically contends that the parties meant their contract to have a meaning other than the one that led to the unfavorable result. To this end, the complaining party argues that particular terms are ambiguous, and that the ambiguity should be resolved in a way that yields a more favorable outcome. Whether a contract’s terms are ambiguous is a determination for the court to make. But a battle rages over the appropriate method for making this determination. While some courts confine their analysis to the contract’s four corners (that is, a term will be deemed ambiguous if its meaning cannot be gleaned from the document itself), others consider evidence extrinsic to the document to determine whether terms are reasonably susceptible to more than one meaning. Under either approach, if the court determines that terms are ambiguous, it will resolve the ambiguity according to an objective reasonable-person standard. But subjective elements influence decision makers in even the most earnest endeavors to decide objectively.

This paper proposes the novel concept that crowdsourcing can aid courts both in determining whether contract ambiguity exists and in resolving ambiguities objectively. Courts that accept extrinsic evidence as part of their ambiguity analysis could look to how the crowd interprets the agreement: if crowd workers cannot agree on a particular term’s meaning, the court may accept this as evidence that the term is ambiguous. Similarly, crowd agreement on a particular term’s meaning can supply the court with a reasonably objective interpretation of that term. The paper explores this concept through the lens of empirical data from a recent study, Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding. That study asked crowd workers to interpret certain website privacy policies and compared the crowd’s interpretations to privacy policy experts’ interpretations of the same policies. This paper relies on data from that study to exemplify how the concept might apply.

To reach this analysis, the paper first surveys the general landscape of online contracting. Because the data relied upon in the paper derive from website privacy policies, the paper specifically examines the extent to which those policies can be enforced as legally binding contracts. A finding that privacy policies are rarely enforced as such highlights a flaw in the notice-and-choice privacy regime that calls its legitimacy into question. Nevertheless, the paper suggests, the concept may be useful to privacy regulators even in a regime in which contract rules do not necessarily apply.
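
As a rough illustration of how crowd disagreement might be quantified (my own sketch, with made-up labels and thresholds, not the cited study's methodology), one could compute per-term agreement alongside an overall chance-corrected statistic:

```python
# Hypothetical crowd interpretations: rows = privacy-policy terms,
# columns = crowd workers, values = coded interpretation chosen.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

ratings = np.array([
    [0, 0, 0, 0, 1],   # near-consensus: term reads as unambiguous
    [0, 1, 2, 1, 0],   # split readings: candidate evidence of ambiguity
    [2, 2, 2, 2, 2],   # unanimous
])

# Per-term agreement: share of workers choosing the modal interpretation.
for i, row in enumerate(ratings):
    share = np.bincount(row).max() / row.size
    print(f"term {i}: modal-interpretation share = {share:.2f}")

# Overall chance-corrected agreement across terms (Fleiss' kappa).
table = np.stack([np.bincount(row, minlength=3) for row in ratings])
print("Fleiss' kappa:", round(fleiss_kappa(table, method='fleiss'), 3))
```

Under this reading, low agreement on a term would be the kind of extrinsic evidence of ambiguity the paper suggests courts could consider.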

Moderators

Michael R. Nelson

Public Policy, CloudFlare
The future of the Internet and the Cloud, Internet Governance, cybersecurity, online surveillance, and online privacy

Presenters

Sunday September 27, 2015 10:05am - 10:37am
GMUSL - Room 225

10:05am

Cease and Desist: Copyright Takedown Requests on Google Search
Paper Link

Since the passage of the DMCA, which provides the legal blueprint for copyright enforcement on the web, the strategies undertaken by entertainment companies to combat digital piracy have evolved. The effectiveness of their approaches, as well as potential chilling effects, has been widely examined. However, few studies explore enforcement strategies on Google Search. This paper aims to contribute to this discussion by examining copyright-related takedown requests issued to Google from March 2011 to March 2015. First, I frame the discussion against a larger historical overview of the evolution of copyright enforcement online. Second, I present the findings, which reveal substantial growth in takedown requests and a decrease in Google's noncompliance with them. These practices have become increasingly normative and hyper-concentrated among a small minority of entities, predominantly entertainment and porn companies. However, the takedown notices ultimately do little to curb online piracy. Moreover, an examination of the takedown process suggests it may have chilling effects on online competition and the openness of the Internet, particularly since the burden of proving notice validity falls on the accused, whom Google often lacks the means to notify effectively of the complaint. Finally, I address potential explanations for the fluctuations in takedown requests and discuss the implications of the findings for the effectiveness of this copyright enforcement strategy.

Moderators

Harold Feld

Senior Vice President, Public Knowledge
Harold is Public Knowledge’s Senior Vice President. Before joining Public Knowledge, he spent nearly 10 years as Senior Vice President of Media Access Project, advocating for the public interest in media, telecommunications, and technology policy. Prior to joining MAP, Harold was an associate at Covington & Burling, where he worked on Freedom of Information Act, Privacy Act, and accountability issues at the...

Presenters

Sunday September 27, 2015 10:05am - 10:40am
GMUSL - Room 221

10:05am

Competition between Standards and the Prices of Mobile Telecommunication Services: Analysis of Panel Data
Paper Link

This paper addresses the differential effect of intra-standard and inter-standard competition on the prices of mobile telecommunication services, specifically in 3G and beyond. It hypothesizes that inter-standard competition lowers prices over time, after controlling for per capita GDP, preference diversity, and mobile teledensity. A fixed effects model was estimated using the least squares dummy variable (LSDV) method to test this association empirically, with ARPU (average revenue per user) serving as a proxy for price. The findings were inconclusive: although the direction of the effect suggested that standards competition might reduce prices, the effect was not statistically significant. Strong time trends in the data and hedging strategies, in which companies deploy multiple standards, might account for the weak observed effects.
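For readers unfamiliar with the estimation approach, the LSDV version of a fixed effects model simply adds unit (and here year) dummies to an ordinary least squares regression. The Python sketch below illustrates this under assumed variable names and an assumed panel file; it is not the paper’s actual specification or dataset.

```python
# Minimal LSDV fixed-effects sketch, assuming a hypothetical country-year panel
# with these column names (arpu, inter_standard, gdp_per_capita, ...).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("mobile_panel.csv")  # hypothetical panel data file

# ARPU is the price proxy; inter_standard measures inter-standard competition;
# controls follow the abstract. C(country) and C(year) add the dummy variables
# that implement the fixed effects.
model = smf.ols(
    "arpu ~ inter_standard + gdp_per_capita + preference_diversity "
    "+ teledensity + C(country) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})

print(model.summary())
```

Clustering standard errors by country, as in the sketch, is a common (assumed, not source-stated) choice for panels of this kind.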

 


Moderators
Presenters

Krishna Jayakar

Penn State University

Authors

Euna Park

Professor, University of New Haven

Sunday September 27, 2015 10:05am - 10:40am
GMUSL - Room 332

10:05am

Network Neutrality: An Empirical Approach to Legal Interoperability
Paper Link

The Internet is grounded in an open and interoperable architecture, giving rise to a quintessentially transnational environment. This global network of networks is, however, in natural tension with an international legal system built on mutually exclusive legal frameworks, which have the potential to fragment the Internet into separate national intranets and conflicting cyberspaces. It therefore seems important to encourage the development of rules that can be used across jurisdictions, fostering the compatibility of the legal systems that the Internet and the Internet economy penetrate. Promoting a legally interoperable environment is an instrumental step toward a better-functioning Internet ecosystem, in which new technologies can flourish and cultural exchange is promoted. Advancing international legal interoperability on issues of systemic importance is not an easy task, but one way to start addressing this challenge is to analyse existing initiatives aimed at producing regulatory models for specific issues.
One topic that lends itself particularly well to analysing the benefits and potential developments of legal interoperability is net neutrality. It has been addressed by several jurisdictions, each using a specific approach, and a regulatory model framework for net neutrality has already been elaborated and has inspired more than one organisation, such as the European Parliament and the Council of Europe. The principle is a good example of the importance of legal interoperability because it plays an instrumental role in promoting and protecting the free flow of information and the distributed nature of the Internet. Although net neutrality might be seen as a domestic matter, exclusively concerning how Internet traffic is managed at the national level, the degree to which it is protected has immediate consequences for Internet users’ ability to freely seek, impart, and receive information regardless of frontiers.

This paper examines the relevance of legal interoperability by analysing approaches to network neutrality regulation in different countries, as well as the potential benefits of model frameworks that foster regulatory convergence rather than fragmentation. A great deal of academic research has sought to explain what net neutrality is and how national legislators and policy makers should address it. However, there is a pressing need to understand how the rules that have already been adopted will promote or hinder the development of legal interoperability within the Internet. To that end, the paper is structured in three sections analysing (i) the concept of interoperability and its potential transposition from the technical to the regulatory level; (ii) the regulatory state of play of net neutrality, highlighting the various legal approaches adopted to date to frame net neutrality at the national level; and (iii) the potential benefits, in terms of both legal certainty and transaction costs for businesses, of a legally interoperable approach to net neutrality.

The authors do not wish to have the proposal considered for presentation in the Poster session.

 


Moderators

Tim Brennan

UMBC/RFF

Presenters

Nathalia Foditsch

Communications Law and Policy Specialist, American University

Authors

Sunday September 27, 2015 10:05am - 10:40am
GMUSL - Room 121

10:40am

Mimosa Break
Sunday September 27, 2015 10:40am - 11:10am
George Mason University School of Law Atrium

11:00am

Industry as an Audience for Academic Policy Research
Traditionally, the audience for research papers presented at TPRC is assumed to be government policy makers.  Survey responses from last year’s TPRC, however, indicate that industry and government representatives made up nearly equal shares of conference attendees.  Industry interest in the policy research presented at TPRC, and academic authors’ interest in effectively reaching industry audiences, both seem likely to continue, given external trends such as the increasing impact of public policies on the communications and information industries, and limited government funding opportunities for policy-relevant research.

This panel will feature a lively discussion among panelists with diverse perspectives on industry as an audience for academic research in the domains of communications, information and internet policy.  Questions for discussion will range from the philosophical to the practical.  For example, what types of value do industry participants and academics seek from each other?  What new or underrepresented research domains and questions are of particular interest to industry attendees?  How are policy findings and recommendations amplified or diminished by industry audiences?  What are effective mechanisms for academics to locate specific industry audiences interested in particular research topics?  What are best and worst practices for academic-industry engagement?

Moderator:
Sharon Gillett, Microsoft Corp.

Panelists:
Joe Waz, Comcast/NBCUniversal, Inc.
Paul Mitchell, Microsoft Corp.
David D. Clark, Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Lab
Kathleen Ham, T-Mobile USA
Richard Whitt, Google Inc.

Moderators
Presenters

Carolyn Nguyen

Director, Technology Policy, Microsoft
Internet governance, big data, machine learning

Joe Waz

Comcast/NBCUniversal


Sunday September 27, 2015 11:00am - 12:50pm
GMUSL - Room 121