About

We propose to study privacy management by investigating how individuals learn and benefit from their membership of social or functional groups, and how such learning can be automated and incorporated into the modern mobile and ubiquitous technologies that increasingly pervade society. We will focus on the privacy concerns of individuals in the context of their use of pervasive technologies, such as smartphones and cloud services, and aim to contribute to research in three areas:

  1. software engineering of adaptive systems that guide their users to manage their privacy;
  2. development of machine learning techniques to alleviate the cognitive and physical load of eliciting and personalising users’ privacy requirements (an illustrative sketch follows this list); and
  3. empirical investigation of the privacy behaviour of, and in, groups, in the context of both collaboration and conflict.
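
To give a flavour of aim (2), the minimal sketch below (in Python with scikit-learn, our choice purely for illustration) shows one way elicitation load might be reduced: a classifier learns from a user’s past disclosure decisions over simple context features and only interrupts the user when its confidence is low. The feature set, toy data, threshold, and model are hypothetical assumptions, not design commitments of the project.

```python
# Illustrative sketch only: the context features, toy data, confidence
# threshold, and model choice are assumptions, not project decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row encodes the context of a past disclosure decision:
# [hour_of_day, at_work (0/1), requester_is_colleague (0/1)]
X = np.array([[9, 1, 1], [22, 0, 1], [10, 1, 0], [23, 0, 0]])
y = np.array([1, 0, 0, 0])  # 1 = user chose to share location, 0 = declined

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def decide(context, ask_user, threshold=0.8):
    """Apply the learned preference when confident; otherwise fall back to
    asking, so the user is only interrupted in genuinely ambiguous contexts."""
    proba = model.predict_proba(np.array([context]))[0]
    if proba.max() >= threshold:
        return bool(proba.argmax())   # confident: act without prompting
    return ask_user(context)          # uncertain: elicit a fresh decision
```

In such a scheme, answers gathered through the fallback prompt would be fed back into the training data over time, shrinking the set of contexts in which the user needs to be asked at all.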

Technological approaches to privacy management have revealed that individuals’ privacy behaviours can be contradictory, making it difficult to learn or anticipate their privacy needs. This is usually attributed to inherent limitations of human reasoning and information-processing capacity, and privacy management systems have attempted to compensate for these limitations, with mixed success. In this proposal we suggest an alternative approach to the problem: what appear from the outside to be contradictory behaviours can in fact be perfectly consistent from the point of view of the individuals themselves and the different roles they play in social groups.

Recent work in the social identity tradition has demonstrated that an individual’s identity is not fixed, but is rather the outcome of a dynamic process. Depending on the context, individuals can see themselves in terms of a personal identity or a social identity. Social identities are derived from membership of social groups, which are themselves multiple (identities as a male, a father, a worker, a football fan, English, British, etc.). Each of these identities carries different trust relationships between members, different privacy expectations, and different group values, all of which can vary as a function of the intergroup context. The same individual can therefore assess privacy requirements or exhibit privacy behaviours in different ways at different times, depending on which identity is salient: privacy behaviours are an expression of an individual’s personal or social identities.

Moreover, the social identity tradition shows that groups play a key role in regulating the behaviour of the individuals within them. The form in which privacy behaviour emerges is more than the product of an individual decision maker; it should rather be seen as the outcome of collaborative filtering (one computational reading of this idea is sketched after the questions below). Understanding the identity process is therefore key to assessing the impact that privacy and security policies have on people’s behaviours, and is essential for delivering systems that can express and analyse users’ privacy requirements and, at runtime, self-adapt and guide users as they move between contexts. Broadly speaking, our proposed project asks the following two questions and attempts to answer them from both a social psychology and a computing perspective:

  • Can privacy be a distributed quality (across “the group”)? If so, under what conditions might this be the case?
  • Can the group protect the privacy of the individual? If so, how does the group manage the privacy-related behaviour of its members?
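
To indicate what the collaborative-filtering reading above could mean computationally, here is a minimal, hypothetical Python sketch: a group member’s undecided privacy preference is estimated as an agreement-weighted vote over the recorded preferences of other members. The matrix encoding, the toy data, and the agreement score are our assumptions for illustration, not a method fixed by the project.

```python
# A minimal sketch of group-informed collaborative filtering, assuming
# privacy preferences can be encoded as a user-by-context matrix
# (1 = share, 0 = withhold, NaN = not yet decided). Toy data only.
import numpy as np

# Rows: group members; columns: disclosure contexts
# (e.g. location-at-work, photos-with-family, ...).
prefs = np.array([
    [1.0, 0.0, 1.0, np.nan],   # the user we want to advise
    [1.0, 0.0, 1.0, 0.0],      # like-minded group member
    [0.0, 1.0, 0.0, 1.0],      # member with opposite habits
])

def predict(prefs, user, ctx):
    """Estimate a missing preference as the similarity-weighted vote of
    group members who have already decided on this context."""
    known = ~np.isnan(prefs[user])
    num = den = 0.0
    for other in range(len(prefs)):
        if other == user or np.isnan(prefs[other, ctx]):
            continue
        shared = known & ~np.isnan(prefs[other])
        if not shared.any():
            continue
        a, b = prefs[user, shared], prefs[other, shared]
        sim = 1.0 - np.mean(np.abs(a - b))   # simple agreement score in [0, 1]
        num += sim * prefs[other, ctx]
        den += sim
    return num / den if den else None

print(predict(prefs, user=0, ctx=3))  # ~0.0: the group suggests withholding
```

In a real system, the group dimension would be supplied by whichever social identity is currently salient, so the same user might receive different advice depending on which of their groups is active in the current context.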

A key cybersecurity challenge is the protection of users’ personal information in the variety of contexts in which it is created, stored, and used. Without understanding such contexts, and the privacy requirements of users within them, our view is that the choice of effective security mechanisms is hard, if not impossible. Given that privacy is as much a social concern as it is a technical one, any solution for managing privacy must take a socio-technical perspective on the problem; addressing privacy management systematically therefore requires both social insight and engineering discipline. The proposed project recognises this fundamental intertwining of security and human behaviour, and will deliver research contributions that draw upon both engineering and the social sciences. In particular, the project will:

  1. Investigate empirically the privacy dynamics of groups, informed by the groups’ use of context-sensing technology and its privacy management capabilities;
  2. Develop systematically a software-based privacy management framework, populated by computational capabilities based on machine learning, that exploits the group dynamics examined in (1) to deliver privacy management advice and functions;
  3. Evaluate and demonstrate the benefits of the proposed framework to both software engineering practitioners seeking to develop privacy-aware software, and end users seeking usable guidance through the privacy management decisions they face in their interactions with others.

We posit that understanding identity is key to assessing the impact of sensing technologies such as surveillance and, by implication, of privacy technologies more broadly. For example, we will examine whether a social identity approach helps us to predict when people see surveillance as limiting or undermining their freedom, and when surveillance is accepted or even endorsed by those who are being watched. Specific questions the project therefore aims to address include: How do we know which identity is salient at any given moment? Is it a personal or a social identity? If a social identity, which group? How does identity impact on privacy? What does this mean for privacy-seeking behaviour? And for privacy management technologies?