Sunday, September 27 • 9:00am - 9:32am
Machine Generated Culpability: Socio-Legal Agency in Machine Learning for Cybersecurity Enforcement

Paper Link

What happens when an algorithm can identify security targets with greater accuracy than human analysts? Does it matter if the algorithm used is so opaque that a human analyst or expert cannot articulate the reasons why there is reasonable suspicion (or probable cause) to act against a particular target? Does it matter whether the operational purpose is to prevent a national security threat, gather intelligence, or prevent crime? Or whether the target will be subject to a drone strike, an arrest, or a cyber network operation?

Priorities in cybersecurity have shifted toward identifying future threat actors and determining whether potential harm warrants preventative action. Machine learning technologies (MLTs), computer algorithms that improve automatically through statistical feedback, are especially useful for cybersecurity problems in which large databases contain valuable implicit patterns that lie beyond the limits of individual human cognition and can only be discovered automatically.
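
To make the pattern-discovery point concrete, here is a minimal sketch (our illustration, not a method proposed in the paper) that applies an unsupervised anomaly detector, scikit-learn's IsolationForest, to synthetic network-flow records; the feature set and the data are assumptions chosen for illustration.

    # Hypothetical sketch: unsupervised discovery of anomalous network flows.
    # The features and data are illustrative; the paper specifies no model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Each row is one flow: [duration_s, bytes_sent, bytes_received, distinct_ports]
    normal = rng.normal(loc=[2.0, 5e4, 8e4, 3], scale=[1.0, 2e4, 3e4, 1], size=(1000, 4))
    exfil = rng.normal(loc=[30.0, 5e6, 1e3, 40], scale=[5.0, 1e6, 1e2, 5], size=(5, 4))
    flows = np.vstack([normal, exfil])

    model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
    flagged = np.where(model.predict(flows) == -1)[0]  # -1 marks anomalies
    print(f"flagged {len(flagged)} of {len(flows)} flows for analyst review")

Note that the detector returns a score, not a reason: nothing in its output articulates why a flagged flow is suspicious, which is precisely the opacity problem raised above.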

While MLTs hold great promise for securing cyberspace, building intelligent predictive cybersecurity systems that deliver accuracy while preserving civil liberties and accounting for human agency raises many practical and theoretical challenges. Our research shows that MLTs can alleviate privacy concerns related to data collection and use. Still, cybersecurity lacks an operational framework for deciding which legal authority (or authorities) applies to a given cyber operation, and a legal framework for what actions state or private actors may take on the basis of opaque MLT outcomes.

This paper seeks to systematize the socio-technical foundations of using MLTs for decision-making in the domain of cybersecurity. The goal is to develop a predictive modeling framework that can be applied to a diverse set of data sources and legal authorities to achieve situational awareness and information assurance while maintaining accountability, transparency, and procedural due process in accordance with the rule of law. The framework will operate readily over new kinds of domain-independent data and may therefore be applied to many different threat-analysis and sense-making problems.
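
As a rough illustration of what a domain-independent, accountable pipeline of this kind might look like (a hypothetical sketch; the names, types, and threshold are our assumptions, not the paper's framework), each actionable prediction could carry the legal authority invoked and an evidence trail for later review:

    # Hypothetical sketch of a domain-independent, auditable assessment step.
    # All names and the threshold are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import Any, Callable, List, Optional

    @dataclass
    class Assessment:
        target: Any                      # works over any record type
        score: float                     # model confidence in [0, 1]
        legal_authority: str             # authority invoked for the operation
        evidence: List[str] = field(default_factory=list)  # audit trail

    def assess(record: Any, model: Callable[[Any], float],
               authority: str, threshold: float = 0.9) -> Optional[Assessment]:
        """Score a record; act only above threshold, recording the basis."""
        score = model(record)
        if score < threshold:
            return None
        return Assessment(target=record, score=score, legal_authority=authority,
                          evidence=[f"model score {score:.2f} >= {threshold}"])

    # Usage: any scoring model over any record type plugs in unchanged.
    hit = assess({"src": "10.0.0.5", "bytes": 5_000_000},
                 model=lambda r: 0.97,   # stand-in for a trained model
                 authority="illustrative legal authority")
    print(hit)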

Broadly, the paper will describe how innovative MLTs and socio-legal mechanisms can be used conjointly over big data to achieve a trustworthy and secure cyberspace. Specifically, it seeks to better understand the operational capabilities and socio-legal limitations of current and future MLTs for detecting and responding to cybersecurity threats, in order to systematize both the computational and legal foundations for future design and deployment that secures cyberspace while prioritizing civil liberties and the rule of law.

The basic research rests on new theoretical foundations of cognitive opacity/transparency, distributed agency, and collective intelligence, applied to the socio-legal foundations of due process, privacy, and human agency. These foundations are first embodied in an evaluative framework for understanding past cybersecurity incidents and predicting new ones through predictive analytics. Using this evaluative framework, we will lay the groundwork for evaluating MLTs in modular and scalable cybersecurity platforms, with the goal of automating information assurance.
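
One way to picture that evaluative step (a sketch over assumed inputs, not the paper's actual methodology) is to backtest a candidate detector against labeled past incidents, measuring how many real incidents it catches and how many benign events it would have acted on:

    # Hypothetical backtest of a candidate detector on labeled past incidents.
    # `labels` and `predictions` are assumed inputs (1 = incident, 0 = benign).
    from sklearn.metrics import precision_score, recall_score

    labels      = [1, 0, 0, 1, 0, 1, 0, 0]   # ground truth from past incidents
    predictions = [1, 0, 1, 1, 0, 0, 0, 0]   # detector output on the same events

    # Precision: of the events the detector would act on, how many were real?
    # Recall: of the real incidents, how many did the detector catch?
    print("precision:", precision_score(labels, predictions))  # 2 of 3 flagged
    print("recall:   ", recall_score(labels, predictions))     # 2 of 3 incidents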

Moderators

David Simpson

Chief, Public Safety & Homeland Security Bureau, FCC

Presenters

Sunday, September 27, 2015 9:00am - 9:32am
GMUSL - Room 120
