KING'S COLLEGE LONDON

Research Associate in Secure AI Assistants: P1 (Usable Security focus)

Location
London (Central), London (Greater)
Salary
Grade 6, £38,304 - £45,026 per annum inclusive of £3,500 London Weighting Allowance per annum
Posted
Sep 11, 2020
End of advertisement period
Oct 12, 2020
Ref
R6/1136/20-KN
Academic Discipline
Life sciences, Social Sciences
Job Type
Research Related
Contract Type
Fixed Term
Hours
Full Time

The successful candidate will join King’s College London and work on the 3-year EPSRC-funded project “SAIS: Secure AI Assistants”.

There is an unprecedented integration of AI assistants into everyday life, from the personal AI assistants running in our smartphones and homes, to enterprise AI assistants for increased productivity in the workplace, to health AI assistants. In the UK alone, 7M users interact with AI assistants every day, and 13M on a weekly basis. A crucial issue is how secure AI assistants are, as they make extensive use of AI and learn continually. AI assistants are also complex systems in which different AI models interact with each other, with the various stakeholders, and with the wider ecosystem in which the assistants are embedded. Beyond these technical complexities, users of AI assistants are known to have highly incomplete mental models of them and do not know how to protect themselves.

SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.

This particular post will focus on interaction and interface design to explain security in AI assistants, so that users can form more accurate mental models of these systems and their security, and can better trust them. This will include explanations co-created with the stakeholders mentioned above to increase users' literacy in the security of AI assistants. The post holder is expected to interact with the project's symbolic AI researchers, who are in charge of creating the theoretical AI models that can underpin the explanations required for the interactions with users.

The successful candidate must be highly motivated and must have:

- A PhD (or near completion) in a relevant subject (e.g. computer science, human-computer interaction, human-AI interaction, digital humanities, social sciences)

- Excellent skills and previous experience in qualitative and quantitative research methods.

- A strong academic track record of conducting and disseminating research, adequate to their career stage.

- Excellent communication skills in English (both writing and speaking).

- A good attitude towards team working in an international and inter-disciplinary environment.

For reference, preliminary work in this area by the team, specifically on one type of AI assistant, includes:

- N. Abdi, K. Ramokapane, J. M. Such. More than smart speakers: security and privacy perceptions of smart home personal assistants. USENIX Symposium on Usable Privacy and Security (SOUPS), 2019.

- J. Edu, J. M. Such, G. Suarez-Tangil. Smart home personal assistants: a security and privacy review. ACM Computing Surveys, 2020.
