This came out around December last year, but I’m reposting it here in any case – Shane Horgan and I wrote a short article for Scottish Justice Matters on cybercrime (link). It includes an awful pun, but don’t let that put you off. It was quite interesting to write collaboratively, and for a joint policy/academic audience – I really enjoyed working with Shane on this and hope we can work on something again in the future!

 


Some good news – having finished the pilot project and written it up for my First Year Panel, I’ve now passed my board and am good to begin my main research. I’ll be uploading a summary of my pilot findings here soon. I’ve arrived at what I think is a good set of research questions, which I’ll post here (they will likely be refined over the course of the research, but I think they’re a good base to work from):

  1. “Which are the main groups of actors that organise Tor, and what are the organising practices, concepts and relationships that enable the project to flourish?”
  • What organisation and interest groupings are present, how do different groups within the project interact, and how does this affect the functioning of the project? How are decisions about Tor software development made and negotiated?
  • What are the motivations of the people who contribute to the project, and how do they see the work they do and their own identity as project contributors?
  2. “In what ways are the interests, experiences and views on crime, privacy and surveillance of the people who develop Tor reflected in work done, decisions about technology development and implementation, and governance of the project?”
  • How do contributors manage or mitigate the potential harms which could arise from use of their software? What challenges do they face, internally and externally, and how are these challenges and potential effects debated, understood and translated into the technology by contributors?
  • What role do constructions of crime, privacy, state power and surveillance play in shaping how the organisation responds to these changes, and in shaping the technology itself?
  3. How do activists, Hidden Services providers and members of law enforcement agencies perceive these issues, and the governance and activities of Tor, and how do they appear to be attempting to influence outcomes and direction?
  4. Do STS frameworks provide useful insights in developing criminological theories of the Internet? How could a greater focus on the social shaping of technology deepen understanding of crime in high-technology societies?

I’ve also made a poster for the SCCJR research poster competition, attached here (with apologies to the Designers’ Republic).

Since quitting my job, it’s been really great having more time to read and to devote to other academic work – I’ve mostly been reading up on symbolic interactionism, which I think could be useful in working the Science and Technology Studies frame into a criminological context. I’ve also been doing a lot of teaching at the University, which I’m really enjoying – the first years are all proving very engaged and critical, drawing links between the tutorials (which are covering ideas of legitimacy in the criminal justice system) and contemporary issues, events and politics with very little or no prompting.

My next steps for the research are to begin speaking to members of the Tor Project about my research, arrange interviews where possible and begin exploring these research questions. I’m heading to the Internet Freedom Festival in Valencia next week to attend talks, meet other academics researching internet freedom technologies and speak to people involved in the Tor community if possible.

Quite a big update – I’m currently working part-time for Transport Scotland as a statistician (I manage the transport elements of the Scottish Household Survey), which gives me an interesting perspective on issues around open data, ethical research and anonymity. I am, however, ending this work in late January to focus on my PhD research – I’m finding working on both quite tiring as it is, and from an ethical perspective I think that managing these two identities could become difficult, given the nature of my research, the people to whom I want to speak and the reflexive, open and respectful approach I intend to take. I’m really looking forward to getting more time to read and focus on my research (and more time to relax as well)!

I’ve just recently finished running a small pilot study in advance of my First Year Panel, in order to gather some initial data, begin exploring and refining my research questions and make my first approaches to the Tor community. First off (if anyone is reading this), many thanks to all those who got in touch and agreed to speak to me! I conducted four interviews in the end – even with this fairly small sample of respondents I managed to hear some quite diverse and interesting perspectives, work through a lot of the documents on the main Tor website and get a lot of helpful feedback on the project. A number of other people talked to me and offered very helpful suggestions on the research, particularly Nathalie Marechal, who gave me a lot of useful comments – many thanks Nathalie!

I spoke to three exit relay operators and a developer working on the project as a volunteer. The respondents were from Scotland, Germany and Russia and I spoke to them under the agreement that their contributions would be anonymous (though some of them were happy for me to use their names in the research write-up).

My interviews focused on two main topics – firstly exploring the respondents’ attitudes to questions around the relationships between state and corporate surveillance, personal privacy and crime, and secondly asking more detailed technical questions around what is involved in working on the project, how their opinions and values influence the work they do and how they see the technological and political elements of the project evolving over the next few years. Despite coming from very different political backgrounds, and with different types and degrees of involvement with the project, there were some striking similarities in the accounts, and the interviews opened up several interesting further lines of enquiry.

In particular, a number of the participants were initially (understandably) sceptical due to my identification as a criminologist – I think people have quite varied impressions of this designation, with many interpreting it as “crime science” or, at the very least, research which seeks to “fight crime” or which assumes state agencies as the eventual audience for the findings. The particular type of criminological research I want to undertake, however, takes a very different approach, informed both by the critical tradition of criminology at Edinburgh University and by my own background in activism. I think a broader and more critical perspective, which includes looking at harms caused by states and other powerful actors, is vital to criminological research, especially that which touches on surveillance (and especially in the current climate). I found that speaking a bit more to the participants about my research and what I was interested in helped a lot in reassuring them that I wasn’t trying to contribute to a “moral panic” around Tor and the DarkNet – understandably, frustration around media and academic depictions of Tor was a common theme in the interviews, and it was clear that researchers or journalists barrelling in without “doing their homework” were not helping matters.

I’m now engaged in writing up this pilot study and preparing for my First Year Panel in January, after which I’m hoping to start my main research project. I’m feeling pretty good about it all and really excited to get started – although the project has changed a lot in the last year (and even the last few months), I’m really pleased with the direction in which it’s going. I’ll try to write some more here about the findings as I go.

I haven’t updated this in a while but I’m hoping to write here more regularly in the coming months. My paid work as a statistician has been pretty hectic recently, though I’ve been enjoying the opportunity to do a lot more programming and develop new skills. Most of my day-to-day work uses SAS, but I’ve recently been working more in R and, in my spare time, brushing up on my JavaScript and working through the exercises in Erickson’s Hacking: The Art of Exploitation. Since my last post I’ve made a good deal of progress towards refining what I’m interested in researching – my project is now considerably more focused and I’m really pleased with how things are developing. I’m particularly happy that I’ve found a way to bring together the theoretical elements of my PhD (bringing frameworks and perspectives from Science and Technology Studies into criminology) with an object of focus which touches on some contemporary issues which I think are really important – more on this below.

Having spent the last few months reading, writing and refining my research questions, I’m now at a stage where I have a good idea of what I want this project to cover. While these questions will doubtless undergo further development and change over the coming months, I think they provide a good starting point. This project began with a desire to further explore some of the ground broken in Sheila Brown’s excellent article “The criminology of hybrids: rethinking crime and law in technosocial networks”, in which Brown makes a convincing case for the opportunities which Science and Technology Studies could afford the study of crime and law in high-technology societies. As identified in Majid Yar and David Wall’s work, criminology’s theoretical engagement with cybercrime tends to wrestle with how the features and properties of internet-mediated communication (including the “force multiplier” effect, the paradoxical surveilability and anonymity afforded by online interaction and the disinhibiting effects of online communication) provide new contexts for existing crimes (fraud, harassment, drug dealing, etc.) and new opportunities for high-tech crimes (“hacking”, Denial of Service, etc.). Clark, Lessig and many others have, however, argued that the technological properties of the internet we have today were by no means pre-determined – rather, they are designed, contingent on the power relationships, values and politics of the context in which they were developed. As such, a criminology which seeks to address the internet as it stands, or to gain insights into its possible futures, should understand the history of its present – the people, processes and mechanisms by which the technologies which make it up came to have the properties they possess.

While bringing concepts from STS into criminological study of cybercrime would be an interesting and worthwhile task in itself, I have elected to carry out a case study centred around one of these qualities – the “surveilability” of online space. Following the revelations of Edward Snowden and others in the past few years, with several pieces of upcoming legislation (including the IP Bill in the UK) and a renewed public debate on mass surveillance, questions about the balance between police and state security power and personal liberty and privacy have reached a new prominence. We may well be gearing up for another period analogous to previous “cryptowars” such as that waged in the 1990s – times when these issues of surveillance, encryption and crime undergo fierce public negotiation, challenge and resistance. For this and other reasons, the Tor browser and the community of developers, activists and academics connected to it provide a vital coming-together point for these negotiations, debates and practices, and it is on this that I wish to focus as the main subject of my research.

My research questions as they currently stand – they will likely evolve over the coming months – are as follows:

  • The next cryptowars – key debates and developments in surveillance and resistance, and their implications for anti-surveillance technology
    • How are members of the Tor community reacting to the changing character of contemporary state and corporate surveillance?
    • How do novel refinements of algorithmic data processing techniques, sophisticated mass data collection and analysis, etc. change the nature of their work as anti-surveillance technology developers?
    • What do they see as the emerging challenges they face – what are the big debates in which they are engaging? How are these debates and discussions conducted?
    • How do these values, identities and politics, and their deep technical engagement with the technologies of surveillance, influence constructions of crime and surveillance, privacy and security, and, in turn, their sense of their role as a community engaged in developing anti-surveillance technology?
    • How do these values and politics circulate within the community and between connected communities of activists and advocates?
  • How do ideas about crime and surveillance then go on to shape technologies of resistance in high-tech, networked societies?
    • How are these ideas and values translated into qualities and properties of the anti-surveillance technologies they develop?


Last week I attended a workshop held by the Alan Turing Institute under the general theme of “Algorithm Society” – this proved to be really useful in crystallising some of the ideas I’d been having around the PhD and I think it has brought me closer to having a defined topic. Having previously only approached this area of study from a “big data” perspective, I found the “algorithm society” concept – which looks at the incorporation of machine learning, “algorithmic” decision making and automation into social processes – particularly useful in bringing together some of my thoughts around cybercrime and the PhD.

The workshop included presentations and discussion groups on various topics – although crime and surveillance were not focused on in their own right, they had undeniable relevance to many of the areas discussed and I felt able to make useful contributions from a criminological perspective. In particular, the “work” group, which discussed the consequences of incorporating algorithmic and machine learning technologies into the labour market and within the workplace, was especially relevant, touching on issues of discrimination, “sorting” bias and the changing nature of work and social interaction. I was also interested in some of the discussion we had around how people “gamed” or subverted algorithmic systems – for example, Mechanical Turk workers forming groups to discuss how to get the best jobs, or businesses trying to artificially increase their standing on TripAdvisor.

Much of the “work” topic discussion had relevance from a criminological perspective – this was split, broadly, into the effects of algorithmic/machine learning processes on the labour market and the incorporation of them into the work people do. The first strand, discussing how these technologies were being used to make decisions about hiring, allocating work and shaping labour processes, situated the human subject as “within” the algorithm, bound up in the social world which the algorithm sorted and shaped, whether that be by choosing which Uber drivers were selected for fares or by micromanaging and surveilling workflow in an Amazon shipping warehouse. There was also broader discussion of how this affected work and class on a macro scale, with the potential creation of an “algorithmic working class” of workers with little to no labour rights or capacity for communal organisation. Is this just an extension of managerialism, or a new “social order”?

In the second strand, algorithms were treated more as a tool, augmenting the labour of professional and skilled workers and removing the “grunt” or “bulk” elements of their work in order to reduce error or to allow them to focus on higher-order processes. This tied into an earlier discussion with Donald MacKenzie around how these systems affected where “power” was located in organisations. In many cases the algorithm did not make the “final decision”; rather, its role was in structuring and presenting information to a final decision-maker who could authorise action (or not). This had the effect of concentrating decision-making power in that individual, where previously the “grunt work” now done by the algorithm would have been the product of a wider group of people who could influence elements of the decision chain.

I’m keen to write more on the workshop, but I’ll finish here for now with some potential questions which this poses for the PhD. I began my research with a broad “ANT and cybercrime” scope, in particular reacting to the existing theoretical literature on cybercrime and proposing that ANT might provide a starting point for investigating the role and importance of non-human actors in cybercrime. One of the conceptual problems with this was the bracketing off of “cybercrime” as a phenomenon in its own right – this is a very broad and nebulous term which encompasses a lot of very different phenomena. In some sense, any crime committed in a high-tech society will have some “cyber” element, so it might be more useful to look at a particular novel phenomenon associated with the rise of high-tech infrastructure in late-modern social spaces. “Non-human technological actors” is itself a broad and non-homogeneous group; however, this does suggest a potential phenomenon – the automation of social and human processes and the insertion of non-human “algorithmic” or “machine learning” actors into decision-making processes. As this pertains to “cybercrime”, one of the most obvious applications of this kind of technology is in the incorporation and analysis of massive information flows in surveillance and policing.

Distilling this down into some bullet-pointed research questions:

  • How does the presence of “algorithmic” intermediaries in the decision-making chain affect the work of surveillance and policing? What effects does this have on the experience of those making use of these systems? To what extent is this a process of “automation”?
  • How do these algorithms work and develop, and what are the consequences for justice and surveillance? How do they learn/encode values and norms in their sorting and ranking processes, and are there any unintended consequences? Are political or organisational decisions important for the function of these algorithms in their social/work context? If a machine learning algorithm can end up a “black box” whose operation is difficult or impossible to understand, even for its creators, what are the processes for accountability?
  • What are the consequences of these systems interacting with whole populations on a “databody” or “dataperson” level? Is there a “social sorting” effect?
  • How do people subvert these algorithms? Identity management? Malicious “tricking” of the algorithms to increase the risk scores of a target? “Air gap” work? “Systemic” subversion/attack using botnets, DDoS etc.? How does this affect the day-to-day use of the internet (and broader social interaction) by people who practise “deviant” behaviours?
  • How does this interact with the increasing automation of many types of cybercrime?

Current reading: various research papers, Surveillance as Social Sorting edited by David Lyon

Fiction: Just finished The Good Terrorist by Doris Lessing and now on the excellent Embed with Games by Cara Ellison.


I’m Ben Collier and I’ve just (five days ago) begun studying towards a PhD in criminology. I haven’t kept a blog before now and am intending to use this space for reflecting on the academic reading and writing I’ll be doing as my research progresses. As such, posts here won’t be fully-formed pieces of writing but more a record of ideas and impressions as I go through the early stages of my PhD, hopefully progressing into more focused and lengthy discussions as my writing and knowledge of the area improves.

I’m originally from a sciences background, having completed an MSc in Chemistry at the University of Edinburgh – I decided that Chemistry wasn’t where my interests lay and went on to study an MSc in Criminology and Criminal Justice at the same university. Through my MSc, I took courses in Gender, Crime and Criminal Justice; Theoretical Criminology; Mental Health and Crime; Research Methods; Quantitative Analysis; and Criminal Justice and Penal Process, and I audited a course on Cybercrime.

My PhD topic is still in its early stages of formation, but it follows on somewhat from my MSc dissertation topic. My dissertation took the form of a critical analysis of the criminological theory literature on cybercrime, using theoretical perspectives from Bruno Latour’s “Actor-Network Theory” and Donna Haraway’s “Cyborg Theory” to try to gain some insight into some of the classic problems posed by the literature on cybercrime. This work was particularly inspired by Sheila Brown’s paper “The Criminology of Hybrids: Rethinking Crime and Law in Technosocial Networks”. In particular, I was interested in questions about the “novelty” of cybercrime – whether high-technology societies produced a novel social environment for crime and what the role of technology was in mediating criminal and deviant behaviour. While ANT has its limitations, I thought that its treatment of space and its focus on a wide range of technological and human actors could potentially lead to some insights in these areas. I also felt that Donna Haraway’s work could help fill in some of the perspectives missing in ANT – the role of discourse, culture and identity in technosocial networks.


Photo from Chaos Communication Camp in August

I’d very much like the PhD research to tackle similar questions. I think that there are a lot of issues posed by cybercrime which the current criminological literature does not adequately address – routinisation, automation, the role played by algorithms and semi-autonomous technological actors in cybercrime, questions of the “novelty” of cybercrime and the deployment of the outdated concept of “cyberspace”. To start with, I’ll be doing a lot of reading to try to narrow down the exact areas I want to focus on and try to get an idea of any case studies or fieldwork I might want to conduct.

Current reading: finishing off Visions of Social Control by Stan Cohen and Mass Incarceration on Trial by Jonathan Simon. Also beginning an NSA-style dragnet of Google Scholar on relevant keywords. Current fiction is Flow My Tears, the Policeman Said.