This came out around December last year, but I’m reposting it here in any case – Shane Horgan and I wrote a short article for Scottish Justice Matters on cybercrime (link). It includes an awful pun, but don’t let that put you off. It was quite interesting to write collaboratively, and for a joint policy/academic audience – I really enjoyed working with Shane on this and hope we can work on something together again in the future!

 

Some good news – having finished the pilot project and written it up for my First Year Panel, I’ve now passed my board and am good to begin my main research. I’ll be uploading a summary of my pilot findings here soon. I’ve arrived at what I think is a good set of research questions, which I’ll post here (they will likely be refined over the course of the research, but I think they’re a good base to work from):

  1. “What are the main groups of actors that organise Tor, and what are the organising practices, concepts and relationships that enable the project to flourish?”
  • What organisation and interest groupings are present, how do different groups within the project interact and how does this affect the functioning of the project? How are decisions about Tor software development made and negotiated?
  • What are the motivations of the people who contribute to the project, and how do they see the work they do, and their own identity as project contributors?
  2. “In what ways are the interests, experiences and views on crime, privacy and surveillance of the people who develop Tor reflected in work done, decisions about technology development and implementation, and governance of the project?”
  • How do contributors manage or mitigate the potential harms which could arise from use of their software? What challenges do they face internally and externally, and how are these challenges and potential effects debated, understood and translated into the technology by contributors?
  • What role do constructions of crime, privacy, state power and surveillance play in shaping how the organisation responds to these challenges, and in shaping the technology itself?
  3. How do activists, Hidden Services providers and members of law enforcement agencies perceive these issues, and the governance and activities of Tor, and how do they appear to be attempting to influence outcomes and direction?
  4. Do STS frameworks provide useful insights in developing criminological theories of the Internet? How could a greater focus on the social shaping of technology deepen understanding of crime in high-technology societies?

I’ve also made a poster for the SCCJR research poster competition, attached here (with apologies to the Designers’ Republic).

Since quitting my job, it’s been really great having more time to read and to devote to other academic work – I’ve been mostly reading up on symbolic interactionism, which I think could be useful in working the Science and Technology Studies frame into a criminological context. I’ve also been doing a lot of teaching at the University, which I’m really enjoying – the first years are all proving very engaged and critical, drawing links between the tutorials (which cover ideas of legitimacy in the criminal justice system) and contemporary issues, events and politics with little or no prompting.

My next steps for the research are to begin speaking to members of the Tor Project about my research, arrange interviews where possible and start exploring these research questions. I’m heading to the Internet Freedom Festival in Valencia next week to attend talks, meet other academics researching internet freedom technologies and, if possible, speak to people involved in the Tor community.

Quite a big update – I’m currently working part-time for Transport Scotland as a statistician (I manage the transport elements of the Scottish Household Survey), which gives me an interesting perspective on issues around open data, ethical research and anonymity. I am, however, ending this work in late January to focus on my PhD research – I’m finding working on both quite tiring at the moment, and from an ethical perspective I think that managing these two identities could become difficult, given the nature of my research, the people to whom I want to speak and the reflexive, open and respectful approach I intend to take. I’m really looking forward to getting more time to read and focus on my research (and more time to relax as well)!

I’ve just recently finished running a small pilot study in advance of my First Year Panel, in order to gather some initial data, begin exploring and refining my research questions and make my first approaches to the Tor community. First off (if anyone is reading this), many thanks to all those who got in touch and agreed to speak to me! I conducted four interviews in the end – even with this fairly small sample of respondents I managed to hear some quite diverse and interesting perspectives, work through a lot of the documents on the main Tor website and get a lot of helpful feedback on the project. A number of other people talked to me and offered very helpful suggestions on the research, particularly Nathalie Marechal, who gave me a lot of useful comments – many thanks, Nathalie!

I spoke to three exit relay operators and a developer working on the project as a volunteer. The respondents were from Scotland, Germany and Russia and I spoke to them under the agreement that their contributions would be anonymous (though some of them were happy for me to use their names in the research write-up).

My interviews focused on two main topics – firstly exploring the respondents’ attitudes to questions around the relationships between state and corporate surveillance, personal privacy and crime, and secondly asking more detailed technical questions about what is involved in working on the project, how their opinions and values influence the work they do and how they see the technological and political elements of the project evolving over the next few years. Although the respondents came from very different political backgrounds, and had different types and degrees of involvement with the project, there were some striking similarities in their accounts, and the interviews opened up several interesting further lines of enquiry.

In particular, a number of the participants were initially (and understandably) sceptical due to my identification as a criminologist – I think people have quite varied impressions of this designation, with many interpreting it as “crime science” or, at the very least, research which seeks to “fight crime” or which assumes state agencies as the eventual audience for the findings. The particular type of criminological research I want to undertake, however, takes a very different approach, informed both by the critical tradition of criminology at Edinburgh University and by my own background in activism. I think a broader and more critical perspective, one which includes looking at harms caused by states and other powerful actors, is vital to criminological research, especially research which touches on surveillance (and especially in the current climate). I found that speaking a bit more to the participants about my research and what I was interested in helped a lot in reassuring them that I wasn’t trying to contribute to a “moral panic” around Tor and the DarkNet – understandably, frustration with media and academic depictions of Tor was a common theme in the interviews, and it was clear that researchers or journalists barrelling in without “doing their homework” were not helping matters.

I’m now engaged in writing up this pilot study and preparing for my First Year Panel in January, after which I’m hoping to start my main research project. I’m feeling pretty good about it all and really excited to get started – although the project has changed a lot in the last year (and even the last few months), I’m really pleased with the direction in which it’s going. I’ll try to write some more here about the findings as I go.

I haven’t updated this in a while, but I’m hoping to write here more regularly in the coming months. My paid work as a statistician has been pretty hectic recently, though I’ve been enjoying the opportunity to do a lot more programming and develop new skills. Most of my day-to-day work uses SAS, but I’ve recently been working more in R and, in my spare time, brushing up on my JavaScript and working through the exercises in Erickson’s Hacking: The Art of Exploitation. Since my last post I’ve made a good deal of progress towards refining what I’m interested in researching – my project is now considerably more focused and I’m really pleased with how things are developing. I’m particularly happy that I’ve found a way to bring together the theoretical elements of my PhD (bringing frameworks and perspectives from Science and Technology Studies into criminology) with an object of focus which connects some contemporary issues I think are really important – more on this below.

Having spent the last few months reading, writing and refining my research questions, I’m now at a stage where I have a good idea of what I want this project to cover. While these questions will doubtless undergo further development and change over the coming months, I think they provide a good starting point. This project began with a desire to further explore some of the ground broken in Sheila Brown’s excellent article “The criminology of hybrids: rethinking crime and law in technosocial networks”, in which Brown makes a convincing case for the opportunities which Science and Technology Studies could afford the study of crime and law in high-technology societies. As identified in Majid Yar and David Wall’s work, criminology’s theoretical engagement with cybercrime tends to wrestle with how the features and properties of internet-mediated communication (including the “force multiplier” effect, the paradoxical surveilability and anonymity afforded by online interaction and the disinhibiting effects of online communication) provide new contexts for existing crimes (fraud, harassment, drug dealing, etc.) and new opportunities for high-tech crimes (“hacking”, Denial of Service, etc.). Clark, Lessig and many others have, however, argued that the technological properties of the internet we have today were by no means pre-determined; rather, they were designed, contingent on the power relationships, values and politics of the context in which they were developed. As such, a criminology which seeks to address the internet as it stands, or to gain insights into its possible futures, should understand the history of its present – the people, processes and mechanisms by which the technologies which make it up came to have the properties they possess.

While bringing concepts from STS into the criminological study of cybercrime would be an interesting and worthwhile task in itself, I have elected to carry out a case study centred on one of these qualities – the “surveilability” of online space. Following the revelations of Edward Snowden and others in the past few years, with several pieces of upcoming legislation (including the IP Bill in the UK) and a renewed public debate on mass surveillance, questions about the balance between police and state security powers and personal liberty and privacy have reached a new prominence. We may well be gearing up for another period analogous to previous “cryptowars”, such as that waged in the 1990s – times when these issues of surveillance, encryption and crime undergo fierce public negotiation, challenge and resistance. For this and other reasons, the Tor browser and the community of developers, activists and academics connected to it provide a vital point at which these negotiations, debates and practices come together, and it is on this that I wish to focus as the main subject of my research.

My research questions as they currently stand – they will likely evolve over the coming months – are as follows:

  • The next cryptowars – key debates and developments in surveillance and resistance, and their implications for anti-surveillance technology
    • How are members of the Tor community reacting to the changing character of contemporary state and corporate surveillance?
    • How do novel refinements of algorithmic data processing techniques, sophisticated mass data collection and analysis, etc., change the nature of their work as anti-surveillance technology developers?
    • What do they see as the emerging challenges they face – what are the big debates in which they are engaging? How are these debates and discussions conducted?
    • How do these values, identities and politics, and their deep technical engagement with the technologies of surveillance, influence constructions of crime and surveillance, privacy and security, and, in turn, their sense of their role as a community engaged in developing anti-surveillance technology?
    • How do these values and politics circulate within the community and between connected communities of activists and advocates?
  • How do ideas about crime and surveillance then go on to shape technologies of resistance in high-tech, networked societies?
    • How are these ideas and values translated into qualities and properties of the anti-surveillance technologies they develop?

I’ve recently started keeping an annotated bibliography, on the advice of one of the teaching staff, and I’m finding it a really useful strategy (although, as the entry below will demonstrate, I’m also finding it hard to keep the entries to a reasonable length). I’ve included below one of my first entries, on an excellent article by Kirstie Ball.

Ball, K. (2005) “Organization, surveillance and the body: towards a politics of resistance”, Organization, 12(1), 89–108

This extremely useful organisation studies piece by Kirstie Ball sets out a fairly comprehensive review of the surveillance and embodiment literature from the perspective of resistance. It does so in the context (as many papers from this time do) of the national ID card scheme which had been proposed by New Labour and which was subsequently dismantled by the coalition government elected in 2010.

[Possibly from a civil liberties perspective but, to be honest, by 2010 this policy was fairly redundant, as many of its surveillant functions could be far more cheaply carried out using big data strategies]

After conducting a brief review of modern organisational surveillance practices, Ball identifies that these focus primarily on the body as the central “object and subject” and “indicator of truth and authenticity”, counterposing this with the trend in theories of resistance to focus on consciousness, political forces and the interaction between individual subjectivity and “dominant managerial ideas”.

[A key question for me is whether insights from surveillance in the workplace are transposable to surveillance of criminal and deviant populations – maybe? In some regards, both use surveillance as a control function, by which employees/subjects are encouraged to conform as they know they are being surveilled; a classification function, by which employees/subjects can be graded, sorted and ranked; and an identification function, by which certain censured practices or individuals can be identified and punished. On the other hand, resistance is likely to take very different meanings and forms in the two different contexts]

Ball then goes on to discuss current developments in surveillance theory, in particular the contemporary trend away from the panoptic lens and towards frameworks informed by the work of Latour and Deleuze. The concept of the “surveillant assemblage” emphasises the move away from the conceptualisation of surveillance as the totalising observation of a docile population and towards a networked, rhizomatic picture which privileges the connections linking individuals, technologies and organisations and the flows of data and information between them. Ball references an earlier paper, “Elements of Surveillance”, in which she identifies four conceptual elements of surveillance – “representation”, “meaning”, “manipulation” and “intermediaries” (which Ball sees as a primary site of individual resistance). This draws on work by Haggerty and Ericson which identifies the role of surveillance processes as breaking the body down into a series of “data flows” which feed “information categories”. Briefly, this pivots the focus of the surveillant lens from the subjective experiences and identities of an individual toward stripping aggregate volumes of data for categories and qualities which can be, in effect, sterilised, deindividualised and reapplied. In particular, it locates primary sites of resistance at the interfaces between humans and technologies, and between technologies and information (with a nod to Donna Haraway’s cyborg). Ball, however, suggests that for a comprehensive understanding of surveillance, the processes of breaking down the body must be studied alongside the oft-neglected processes of reconstituting it within the system as “information”.

[This is very interesting and I’m unsure how I feel about it intuitively. This is also discussed by Amoore through the concepts of “data derivatives” and “risk calculus” – I think in some ways this is a (naively or otherwise) very apt innovation, as it reconceptualises individuals as technosocial entities, dealing heterogeneously with the “digital exhaust” (traces) of the various technologies, networks and actors which make up the technosocial individual, and implicitly acknowledging the “agent-like” behaviour of technologies as they link up into actor-networks. On the other hand, this is very much a tracing rather than a map – subjectivity is ironed out during the “decontamination” process by which the data are gathered and rendered sensible to the translation system. The real question is – are subjectivities, narrative, context and internality detectable in these “data derivatives”? If so, does this come from the software or is it dependent upon the intervention of the analyst?]

Ball then goes on to critically describe three contemporary frameworks for dealing with embodiment in the context of a recent “corporeal turn” in the social sciences. These include Crossley, who draws on Goffman to postulate the social world as fundamentally centred around embodiment, discussing how corporeal interactions between bodies and technology become routinised and merged into a “corporeal schema”; Hayles, who maintains that the body is experienced separately and simultaneously as both a lived experience and a social artefact which can be inscribed by interacting with technology; and Grosz, who describes the body as a “Mobius strip” whose inner and outer analytical surfaces are contingent and continuous. Ball criticises Crossley for the distancing of these body schema from politics and subjectivity, and Hayles for the dualism in her framework, which does not sufficiently treat the boundary between body and technology as negotiable, contingent and subjective. She does, however, identify a usefully “cyborg” consciousness in Grosz’ work, in which the body exists “at the threshold of a singularity that interfaces with machineries, technologies, histories and cultures”. A further discussion of bodily ontology within organisations makes reference to biodata as “of the body” rather than “about the body”, from which Ball then moves on to describe a series of theoretical turns, drawing on Haraway’s “cyborg”, to site resistance to bodily surveillance as occurring at several points in the translation process, through problematisations or manipulations of and at the interfaces between body and technology, and between technology and information.

[I am sympathetic to the connections Ball makes with Haraway’s cyborg – I think that a similar approach might translate well to an engagement with the creation of the criminal subject in cybercrime, with some notable points of conflict. In particular, I’m not sure that “resistance” is the only thing of interest, or necessarily the most apt frame, for how people who engage in criminal or deviant acts in high-technology societies manage and conceptualise their relationships with the network of technologies, software, hardware and data which they enrol. Given the points Ball makes about the blurring and negotiation of the boundaries between the body, technology and data, I think that this might be better described as a (potentially deviant) active (though possibly unreflexive) creation of and participation in the assembly of a subjective and contingent cyborg self, which includes points of resistance and acceptance; authorship and submission; expression and categorisation. Especially with a population of expert or at least fluent users of technology, this negotiation and management is likely to be a site for a lot of really interesting interactions, where constructions of deviance, criminality and self-identity are written, read and edited in real time]

Last week I attended a workshop held by the Alan Turing Institute under the general theme of “Algorithm Society” – this proved really useful in crystallising some of the ideas I’d been having around the PhD and I think has brought me closer to having a defined topic. Having previously approached this area of study only from a “big data” perspective, I found the “algorithm society” concept – which looks at the incorporation of machine learning, “algorithmic” decision-making and automation into social processes – particularly useful in bringing together some of my thoughts around cybercrime and the PhD.

The workshop included presentations and discussion groups on various topics – although crime and surveillance were not focused on in their own right, they had undeniable relevance to many of the areas discussed and I felt able to make useful contributions from a criminological perspective. The “work” group, which discussed the consequences of incorporating algorithmic and machine learning technologies into the labour market and the workplace, was particularly relevant, touching on issues of discrimination, “sorting” bias and the changing nature of work and social interaction. I was also interested in some of the discussion we had around how people “gamed” or subverted algorithmic systems, for example Mechanical Turk workers forming groups to discuss how to get the best jobs, or businesses trying to artificially inflate their standing on TripAdvisor.

Much of the “work” discussion had relevance from a criminological perspective – it was split, broadly, into the effects of algorithmic/machine learning processes on the labour market and their incorporation into the work people do. The first strand, discussing how these technologies were being used to make decisions about hiring, allocating work and shaping labour processes, situated the human subject as “within” the algorithm, bound up in the social world which the algorithm sorted and shaped, whether that be by choosing which Uber drivers were selected for fares or by micromanaging and surveilling workflow in an Amazon shipping warehouse. There was also broader discussion of how this affected work and class on a macro scale, with the potential creation of an “algorithmic working class” of workers with little to no labour rights or capacity for communal organisation. Is this just an extension of managerialism, or a new “social order”?

In the second strand, algorithms were treated more as a tool, augmenting the labour of professional and skilled workers and removing the “grunt” or “bulk” elements of their work in order to reduce error or to allow them to focus on higher-order processes. This tied into an earlier discussion with Donald MacKenzie around how these systems affected where “power” was located in organisations. In many cases the algorithm did not make the “final decision” – its role was rather to structure and present information to a final decision-maker who could authorise action (or not). This had the effect of concentrating decision-making power in that individual, where previously the “grunt work” done by the algorithm would have been the product of a wider group of people who could influence elements of the decision chain.

I’m keen to write more on the workshop, but I’ll finish here for now with some potential questions which this poses for the PhD. I began my research with a broad “ANT and cybercrime” scope, in particular reacting to the existing theoretical literature on cybercrime and proposing that ANT might provide a starting point for investigating the role and importance of non-human actors in cybercrime. One of the conceptual problems with this was the bracketing off of “cybercrime” as a phenomenon in its own right – it is a very broad and nebulous term which encompasses a lot of very different phenomena. In some sense, any crime committed in a high-tech society will have some “cyber” element, so it might be more useful to look at a particular novel phenomenon associated with the rise of high-tech infrastructure in late-modern social spaces. “Non-human technological actors” is itself a broad and non-homogeneous group; however, this does suggest a potential phenomenon – the automation of social and human processes and the insertion of non-human “algorithmic” or “machine learning” actors into decision-making processes. As this pertains to “cybercrime”, one of the most obvious applications of this kind of technology is in the incorporation and analysis of massive information flows in surveillance and policing.

Distilling this down into some bullet-pointed research questions:

  • How does the presence of “algorithmic” intermediaries in the decision-making chain affect the work of surveillance and policing? What effects does this have on the experience of those making use of these systems? To what extent is this a process of “automation”?
  • How do these algorithms work and develop, and what are the consequences for justice and surveillance? How do they learn or encode values and norms in their sorting and ranking processes, and are there any unintended consequences? Are political or organisational decisions important for the function of these algorithms in their social/work context? If a machine learning algorithm can end up a “black box” whose operation is difficult or impossible to understand, even for its creators, what are the processes for accountability? (A toy sketch of the kind of encoded “sorting” I have in mind follows this list.)
  • What are the consequences of these systems interacting with whole populations on a “databody” or “dataperson” level? Is there a “social sorting” effect?
  • How do people subvert these algorithms? Identity management? Malicious “tricking” of the algorithms to increase the risk scores of a target? “Air gap” work? “Systemic” subversion/attack using botnets, DDoS, etc.? How does this affect the day-to-day use of the internet (and broader social interaction) by people who practise “deviant” behaviours?
  • How does this interact with the increasing automation of many types of cybercrime?
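To make the second question a little more concrete, here is a minimal, entirely hypothetical sketch in Python of the kind of weighted scoring and sorting routine I have in mind when I talk about values and norms being encoded in ranking processes. None of the field names, weights or thresholds are taken from any real system – the point is simply that choices about which data fragments to use, how to weight them and where to draw category boundaries are human design decisions which end up doing the “social sorting” work. A learned model would make the same choices much harder to read off, which is where the “black box” accountability problem comes in.

```python
# A toy "social sorting" scorer - not any real policing or surveillance system,
# just an illustration of how design choices (which data fragments to use, how
# to weight them, where to draw category thresholds) quietly encode values and
# norms into a ranking process. All field names, weights and thresholds here
# are invented for the sake of the example.

from dataclasses import dataclass


@dataclass
class Subject:
    """A hypothetical 'databody': fragments of data standing in for a person."""
    prior_flags: int        # e.g. previous alerts recorded in some database
    postcode_risk: float    # area-level score, 0.0-1.0 (a classic proxy variable)
    night_activity: float   # share of online activity at night, 0.0-1.0


# These weights are the designers' assumptions written directly into code;
# changing them changes who ends up sorted into which category.
WEIGHTS = {"prior_flags": 0.5, "postcode_risk": 0.3, "night_activity": 0.2}


def risk_score(s: Subject) -> float:
    """Collapse a subject's data fragments into a single score between 0 and 1."""
    raw = (WEIGHTS["prior_flags"] * min(s.prior_flags, 5) / 5
           + WEIGHTS["postcode_risk"] * s.postcode_risk
           + WEIGHTS["night_activity"] * s.night_activity)
    return min(raw, 1.0)


def sort_population(subjects):
    """'Social sorting': thresholds turn continuous scores into categories,
    which can then drive very different treatment of each group."""
    tiers = {"low": [], "medium": [], "high": []}
    for s in subjects:
        score = risk_score(s)
        tier = "high" if score > 0.6 else "medium" if score > 0.3 else "low"
        tiers[tier].append(round(score, 2))
    return tiers


if __name__ == "__main__":
    population = [
        Subject(prior_flags=0, postcode_risk=0.8, night_activity=0.9),  # no priors, "risky" area
        Subject(prior_flags=3, postcode_risk=0.1, night_activity=0.2),  # priors, "safe" area
        Subject(prior_flags=0, postcode_risk=0.1, night_activity=0.1),
    ]
    print(sort_population(population))
    # {'low': [0.05], 'medium': [0.42, 0.37], 'high': []}
```

Running it sorts the three imaginary subjects into tiers – note that the subject with no prior flags can still score higher than one with several, purely because of the area-level proxy, which is exactly the kind of quiet value judgement I want to trace in real systems.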

Current reading: various research papers, Surveillance as Social Sorting edited by David Lyon

Fiction: Just finished The Good Terrorist by Doris Lessing and now on the excellent Embed with Games by Cara Ellison.

I’m getting started on some reading and thoroughly enjoying it – I’m currently writing up some thoughts on an article by Donald MacKenzie titled “Is Economics Performative? Option Theory and the Construction of Derivatives Markets”, which I’ll post later. For now, I thought I’d link to some writing I did last year, around the topics in my MSc dissertation, on the Edinburgh sociology blog It Ain’t Necessarily So:

Spaces and Technosociology: I.T. Ain’t Necessarily So

Cyborg Sociology and High-tech Discourse