NSA’s Best Scientific Cybersecurity Paper Competition, pt. 1

I just returned from the NSA, where I gave an invited talk and received an honorable mention for their annual Best Scientific Cybersecurity Paper competition. If I had to summarize: it was a humbling and educational experience. I’m happy to have participated, and am smarter for it. I thought I’d recap my experience for those interested. To avoid a mega-post, I’ve split it up into two parts. The first part is about the award and my paper. The second part will be about the ceremony itself: the things we did, the discussions we had.

The certificate I received for winning the award. I also got a challenge coin out of it.

First, the elephant in the room. No, I’m not a fan of every decision the NSA has made. I like to think that my research is all about empowering end-users to better protect their privacy and ensure their security. However, the NSA researchers I met were not boogeymen. They were like most other security researchers I’ve met (in academia and industry): thoughtful, deeply concerned with the present state of security technology, and frustrated with the difficulty of translating academic insight into vetted solutions that are adopted into real products. All that said, this post is not meant to be an opinion piece about the NSA. It’s meant to be a descriptive account of the competition and my experience with it.

The Award

First up: what even is this award? If you basically know, just skip ahead to the next section. If you don’t know but are an academic, skip the next couple of paragraphs. I’ll start broad for those not in academia.

So, as an academic, your primary currency is the papers you produce. You put in your hours to get these papers, and then you cash in those papers for good jobs and grants and fellowships and such. It’s not a perfect analogy: you can often cash in papers for multiple things, and some fields require many more hours per paper than others, and it’s not a sure thing that you can trade hours for papers even if you are very smart and hardworking. Anyway, the point is: the goal of every academic currently engaged in research is to produce these papers, directly or indirectly.

To produce papers you have to do original research, write up what you did, and submit it to a journal, conference, or workshop. In most fields, journals are what matter most. In most computer science related fields, conferences are what matter most [1]. Anyway, there are many, many conferences and journals, and different venues are associated with different levels of prestige. For example, you have probably heard of the journals “Nature” and “Science”: those are two extremely prestigious journals (though they are typically not venues where computer scientists publish). But there are many others, some just as good and others worth next to nothing. Generally, a venue’s prestige is determined by whether or not it is peer-reviewed (and how strictly), its historical relevance in the field (e.g., have there been any highly cited or game-changing papers published there?), and social proof (e.g., do other important academics like the venue?).

More conferences and journals pop up every day, and ever more papers are published. With all of this new knowledge being generated, it keeps getting harder to figure out what’s important. One way to underscore the importance of a paper is to give it an award. Many conferences, for example, have “best paper awards”. If you win a “best paper award” at a “top tier” conference, that’s a good signal that your community thinks your work is important [2]. But still, there are many conferences. Even many top tier conferences [3]. Sifting through conference best paper awards can be overwhelming in its own right. For example, at one of the flagship HCI conferences, CHI, there are hundreds of papers every year and dozens of them get “Best Paper” awards.

So, there are also awards that are broader than any single set of conference proceedings or journal issue. These are meta-awards. The scope of these awards can vary significantly. Some meta-awards are truly massive and are awarded to individuals for entire bodies of work beyond any single paper (e.g., Turing Awards, Nobel Prizes, Lifetime Achievement Awards). Others are time-scoped (e.g., most impactful paper in the last 10 years) and/or discipline-scoped (e.g., best paper in “databases”). Naturally, these meta-awards also vary in prestige, and the amount of prestige is generally determined by the committee responsible for selecting the award recipients and, again, historical relevance and social proof.

NSA’s Best Scientific Cybersecurity Paper Competition is a meta-award that is time- and discipline-scoped: it’s awarded to papers published within the broader realm of “cybersecurity” in a particular year, irrespective of the specific conference or journal. The specific goal is to recognize papers that advance the “science” of cybersecurity. That’s a bit vague (even to me), but I think they basically mean papers that produce foundational facts about security that are rigorous and replicable. Only papers nominated by someone who is not an author on the paper are considered. This raises the bar, because it can be quite difficult to find someone outside the author list who immediately recognizes the importance of newly published work. So, someone outside of the author list needs to be a firm believer in the paper for it to be nominated.

In terms of prestige, the award is relatively new, only on its 3rd iteration, so its historical relevance is negligible for now. Similarly, its importance according to social proof is difficult to gauge. The previous winning papers were all authored or co-authored by people who are well known in the security research community. Accordingly, I feel like it will eventually have strong social proof as a respected award. I think this is especially true because the committee responsible for selecting the award recipients includes many behemoths in the field of security. Seriously. People like Whitfield Diffie (whose name you may recognize from Diffie-Hellman key exchange) and Jeannette Wing (one of the heads of Microsoft Research).

My Honorable Mention Paper on Social Cybersecurity

For the 2014 competition, one of my papers, “Increasing Security Sensitivity With Social Proof: A Large-Scale Experimental Confirmation”, was recognized as one of two honorable mentions for the award. Effectively, this means that someone liked it enough to nominate the paper for the award and that a committee of very important security researchers liked it enough to call it one of the top 3 papers advancing the science of cybersecurity in 2014. It may be the single strongest signal that I’ve received, to date, that what I do seems to be useful and important. Of course, I don’t feel like I’m worthy of the recognition, but that is an unfortunate affliction that is pervasive in academia.

Anyway, the paper was about an experiment I conducted while I was an intern at Facebook [4]. It was an exploration of a very simple, intuitive idea: social proof greatly influences our security behavior, so maybe we can use it to increase people’s awareness and adoption of optional security tools on Facebook. Traditionally, security tool usage and security behaviors are invisible by design: we don’t see our friends’ security behaviors or decisions. This is a missed opportunity, though. Human behavior is greatly driven by “social proof”, our tendency to look to others for cues on how to act when we are uncertain. We should be leveraging this force to increase people’s awareness and adoption of security tools.

I hypothesized that if we showed people that their friends were using a bunch of optional security tools, they would be more likely to explore those tools themselves. So, I showed a random subset of 50,000 people on Facebook one of a set of announcements informing them that they could use extra security tools (e.g., two-factor authentication) to secure their accounts. Some of these announcements had anonymized, aggregated social proof: e.g., “108 of your friends are using extra security tools…”. [5] One of the announcements (i.e., the control) had no such social proof, and only informed participants that those tools were available; in other words, the sort of announcement we receive about security today.
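For the technically curious, the condition-assignment machinery behind an experiment like this can be sketched in a few lines. This is a hypothetical illustration, not Facebook’s actual implementation; the condition names are invented:

```python
import hashlib

# Hypothetical condition labels -- invented for illustration,
# not the experiment's actual announcement copy.
CONDITIONS = [
    "control",          # tools exist, no social proof
    "friend_count",     # "N of your friends are using extra security tools..."
    "aggregate_count",  # anonymized, aggregated social proof
]

def assign_condition(user_id: int) -> str:
    """Deterministically map a user to one announcement condition.

    Hashing the id (rather than drawing a fresh random choice per
    page load) keeps assignment stable: a user sees the same
    announcement every time they are selected for the experiment.
    """
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

print(assign_condition(12345))  # same output on every call
```

Deterministic hashing also makes the experiment auditable after the fact: anyone with the user id can recompute which announcement that user saw.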

Example announcements I showed in the experiment.

The results shouldn’t be surprising. Every announcement with social proof significantly outperformed the non-social control: both in getting people to click on the announcement and in the number of people who ultimately adopted one of the promoted security tools in the ensuing 7-day and 5-month periods. The best announcement simply showed people the number of their friends who used the promoted security tools. The greater the number of these friends, the greater the positive effect of the social proof.
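For readers wondering what “significantly outperformed” means mechanically: comparing an adoption rate in a social-proof condition against the control is a textbook two-proportion test. Here is a minimal sketch with made-up counts (emphatically NOT the paper’s numbers):

```python
from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled z statistic for comparing two adoption rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts for illustration only:
# 900 of 10,000 social-proof viewers adopt a tool vs. 700 of 10,000 controls.
z = two_proportion_z(900, 10_000, 700, 10_000)
print(round(z, 2))  # ~5.21, far beyond the usual 1.96 threshold
```

At sample sizes like the experiment’s 50,000 participants, even modest differences in adoption rates are easily distinguishable from noise, which is part of why large-scale field experiments are so valuable for this kind of question.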

Simple, right? It doesn’t sound very impressive, but in my experience, simple, elegant ideas that seem obvious after the fact are often the best ideas. [6]

If any of that interests you, you can read more about the paper here. This paper is one of the three papers that will serve as the foundation for my thesis proposal, which will be on a broader thrust of work I call “Social Cybersecurity”.

Concluding Remarks

Okay, that’s it for Part 1. In my next post, I’ll actually talk about my visit to the NSA for the award recognition ceremony. Till then!


One final note: If you liked this post and would like to show your support, here are two things you can do:
– Follow me on Twitter @scyrusk; and,
– Consider signing up for my mailing list below. I won’t spam, you can unsubscribe at any time, and you’ll get some content that I don’t post on the blog.

If you do either or both of those things, you’d make me happy. Thanks!


[1] Journals are not unimportant, but neither are they of pivotal importance. For example, I’m in my 5th year of graduate school and have not yet produced a journal article, nor do I really care to. It would be nice to have, but it’s not very important. I spend almost all of my “work” time working on conference papers in some capacity.

[2] It doesn’t mean your work will end up being important, though. Sometimes, you can get a best paper award just for doing research really well, even if the work is not necessarily very practical or important. It’s imprecise. It’s something to be happy about if you get one, though.

[3] In HCI, I know of at least 4 that occur annually: CHI, UbiComp, UIST, and CSCW. There are others that people might argue should be on the list. This is partially because HCI is a very broad field.

[4] In an interesting twist, I first heard that the paper had been recognized as an honorable mention for the award while I was working as an intern at Google.

[5] This social proof, of course, did not reveal any personal information about which friends used the security tools. We also only selected participants who had several hundred friends and had more than a handful of friends who used the promoted security tools.

[6] Good researchers solve hard problems. Great researchers solve the right problems. That’s something I’ve learned just by talking to a bunch of people I consider to be great researchers.
