Take a step back, and ask yourself: What are the largest societal problems that relate to cybersecurity, today?
Depending on your background, you may arrive at very different answers. Computer security researchers may come up with technical answers relating to formally verifiable code, homomorphic encryption, or the structure of the internet. Tech-savvy policy makers may think about the “going dark” problem: the tension between law enforcement wanting a “backdoor” into encryption algorithms and cryptographers’ insistence that weakening encryption for anyone weakens encryption for everyone. Social psychologists may think of how to better structure organizations so that security is a more valued part of the executive decision-making process in corporations and government.
These are all very different problems, and each is hard and important. So, what happens if you pull together a group of researchers in the computer and social sciences, pair them with policy makers, and ask them all to agree upon a few “grand” challenges in sociotechnical cybersecurity?
I was fortunate enough to recently participate in a workshop with exactly that premise: the Computing Research Association’s “Sociotechnical Cybersecurity Grand Challenge” workshop. More specifically, this was a “planning workshop” to come up with a set of topics to discuss, as well as an agenda, for a later workshop whose goal will be to develop four grand challenges for sociotechnical cybersecurity. I thought I’d recap some of what we discussed, though I’d like to add one large disclaimer: this post is very much filtered through my own experience at the workshop, and I was only exposed to a small subset of all of the interesting conversations that were happening.
The workshop took place at the University of Maryland, College Park over two days. Most people in attendance were well-established faculty in the computer and social sciences from a variety of universities all over the country. Also in attendance were government and industry researchers in key roles at their respective organizations.
We started with a brief discussion of what a grand challenge is. The definition we eventually arrived at: a challenge that is important and will require multiple years of significant, non-incremental, cross-disciplinary work to address. Equipped with this basic understanding, we spent the remainder of our time uncovering key challenges in cybersecurity as each of us understood them and synthesizing these varied thoughts into a set of problem areas within which a grand challenge may lurk.
We began this process with a series of panels organized around white papers that had been solicited a few months earlier. There were three panels: one on cybercrime, another on metrics and measures, and a third on individuals and norms.
The cybercrime panel focused on our (lack of) understanding of cybercrime. What even is a cybercrime? Is cyberbullying a cybercrime? We currently have no comprehensive typology that helps us understand the boundaries between cybercrime, physical crime, and nasty uses of technology that are not specifically “crimes”. This matters because if we do not have a clear definition of what a cybercrime is, we cannot collect clear metrics on whether it is getting better or worse. Likewise, under what jurisdiction does cybercrime fall? When one is robbed in the physical world, the next steps are generally obvious: call 911. There is no equivalent for when one’s identity is stolen online.
The metrics and measures panel discussed the need for measurable constructs that help us answer seemingly simple questions: What is “good” cybersecurity and what is “bad” cybersecurity? Are employees complying with organizational cybersecurity policies? What measures matter in diagnosing suspicious cybersecurity activity? What behaviors do cybercriminals and “hackers” engage in, and how are those behaviors changing over time? In many ways, the questions asked in this panel are the most fundamental: without a clear measure of how to improve security, how can we know what to do next?
Finally, the individuals and norms panel (of which I was a part) discussed our general lack of understanding of the social and behavioral components of cybersecurity. Security evolved in a tradition of military and high-stakes corporate use. Accordingly, security protocols and systems have been developed assuming: (1) that people always act optimally in the interest of security; and (2) that individuals make their security decisions in a vacuum, unaffected by the behaviors of others (likely because everyone is assumed to act optimally anyway). Of course, both of these assumptions are untrue, especially once we consider that it’s not just the military or corporations that are using computing systems anymore; it’s everyone.
During the panels, we all wrote questions and ideas that came up during the discussion on post-it notes. These were later clustered into four themes, and breakout groups formed around each theme to synthesize a set of areas within which a grand challenge could be hiding. We continued these discussions for the rest of the workshop (two half-day sessions) to finally arrive at a number of problem areas that may be discussed at the next workshop. I can’t remember all of the problem areas, but some that were discussed include (paraphrasing, and in no particular order apart from the order in which they came to mind):
- How can we make security seamless, so that it just “fits in” with our lives? An example is single sign-on, which drastically reduces the number of login attempts we need to make. How can we replicate that more broadly without, in turn, drastically reducing security? Of course, the challenge here is that security is, itself, a seam: it sits at the interface between humans and technology.
- How can we create better, adaptable models of adversary behavior through collaborations with social scientists? As computer scientists, we like to think in terms of formal threat models so that we can partition the problem space and create solutions with guarantees. But, in the real world, attackers are adaptable and clever and break many of the assumptions that we make: they are rarely described accurately by formal threat models. Could better models of human behavior help us build systems that are more resilient to attackers? (The first sketch after this list illustrates the gap.)
- What can we do about the “going dark” problem? Loosely, this is the problem of law enforcement wanting privileged access to encrypted data when given permission through lawful process (e.g., search warrants). Cryptographers, of course, argue that “privileged” access is the same as designing for weakness, and weaknesses are universal: any malicious party will be able to use the backdoor. (The second sketch after this list illustrates the concern.)
- How can we create “cybersecurity hygiene” habits that increase people’s awareness, motivation, and knowledge of cybersecurity threats and of methods to counteract those threats? We brush our teeth and buckle our seat belts thanks to years of national awareness campaigns and doctors’ instruction. Can we replicate that for good cybersecurity practices? If so, how, given that the landscape of cyberthreats changes so rapidly?
- How can we incentivize good security at the organizational level? Currently, C-suite executives have little incentive to prioritize security over new features, which introduce code bloat and, with it, more security vulnerabilities. Chief security officers are also often organizationally embedded in odd ways that make it difficult for them to enact meaningful changes in product decisions. Should we view this problem as needing more carrots (rewards for good security) or more sticks (regulations and fines for bad security)?
- What are better measures and metrics we can track to answer simple questions about cybercrime like “Are cybercrimes getting worse or better?” and “Which solution for mitigating DDoS attacks works best?” If and when we have these better measures, how can we aggregate them at the national scale? If we were to create a National Cyber Crime Reporting Bureau, what would it look like and how could we build it?
- What are better methods for data stewardship? For example, could something like a consumers’ union force industry to respect the data collection and control preferences of its members?
- How can we create security tools and systems that are more aware of and responsive to human social behavior? Currently, security systems are designed with little understanding of social norms: for example, is it really appropriate for family members to each have their own secret password to access their accounts on a shared Xbox? We also know from more general technology adoption models that observability helps a technology diffuse, yet security technology is not observable at all.
- How can we better inform users of the consequences and outcomes of their security (in)action? Currently, the risks of lax security are abstract, as are the mitigations that good security behaviors provide. How can we make them more concrete?
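To make the adversary-modeling question above a bit more concrete, here is a minimal, purely illustrative Python sketch; the attack names, the defender, and the attacker strategy are all hypothetical inventions of mine, not anything presented at the workshop. A defender built around a fixed threat model blocks only the one attack it anticipated, while an adaptive attacker simply tries strategies until one works and then repeats it:

```python
import random

random.seed(0)

ATTACKS = ["phishing", "credential_stuffing", "malware"]
MODELED = {"phishing"}  # the single attack our formal threat model anticipated

def static_defender(attack: str) -> bool:
    """Block only the attacks present in the fixed threat model."""
    return attack in MODELED

def adaptive_attacker(history: list) -> str:
    """Repeat whatever worked last round; otherwise, try something new."""
    if history and history[-1][1] == "success":
        return history[-1][0]
    return random.choice(ATTACKS)

history = []
for _ in range(1000):
    attack = adaptive_attacker(history)
    outcome = "blocked" if static_defender(attack) else "success"
    history.append((attack, outcome))

breaches = sum(1 for _, outcome in history if outcome == "success")
print(f"attacker succeeded in {breaches} of 1000 rounds")
# The attacker locks onto an unmodeled attack within a few rounds and then
# succeeds on every round thereafter.
```

The point is not the toy itself but the shape of the failure: guarantees derived against a fixed adversary say little about an adversary that learns.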
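And to illustrate the cryptographers’ side of the “going dark” debate, here is a toy key-escrow sketch, again hypothetical and not anyone’s actual proposal, using Python’s cryptography package. Each message is encrypted under a fresh session key, but a copy of that key is always wrapped under a single escrow key, so whoever holds the escrow private key, court-authorized investigator and thief alike, can read every message:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A single escrow keypair: the "privileged access" the debate is about.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_with_escrow(plaintext: bytes):
    """Encrypt a message, wrapping its session key under the escrow key too."""
    session_key = Fernet.generate_key()          # fresh per-message key
    ciphertext = Fernet(session_key).encrypt(plaintext)
    wrapped_key = escrow_public.encrypt(session_key, OAEP)
    return ciphertext, wrapped_key

ciphertext, wrapped_key = encrypt_with_escrow(b"a private conversation")

# Any holder of escrow_private, lawfully obtained or not, recovers everything:
recovered_key = escrow_private.decrypt(wrapped_key, OAEP)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'a private conversation'
```

The escrow key is a single point of failure by construction: the exact mechanism that serves the warrant also serves the breach.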
There were probably many other interesting questions discussed: I had just one cross-sectional view of the workshop, siloed as I was in my own panels and breakout groups. My understanding of what the other groups discussed is based on the final reports given toward the end of the workshop, and I’m sure those reports did not fully capture the breadth of interesting ideas discussed.
To my understanding, the next step is to whittle those questions down to 8 interesting problem areas to be discussed at a more inclusive workshop in late Spring or early Summer of 2017. The goal of that workshop will be to come up with a set of 4 grand challenges in sociotechnical cybersecurity that will, hopefully, guide much of the research in the space for the next decade or so.
I’ll probably attend that next workshop. If and when I do, I’ll try to post an update!
One final note: If you liked this post and would like to show your support, here are two things you can do:
- Follow me on Twitter @scyrusk; and,
- Consider signing up for my mailing list below. You’ll get new post notifications and perhaps even some content that I don’t post on the blog. You can unsubscribe at any time.
If you do either or both of those things, you’d make me happy. Thanks!