Research Associate

London (Central), London (Greater)
Grade 6, £38,304 - £45,026 per annum inclusive of £3,500 London Weighting Allowance per annum
Posting date
15 Jun 2020
End of advertisement period
28 Jul 2020
Contract Type
Fixed Term, Full Time

The successful candidate will join King’s College London and work on the 3-year EPSRC-funded project “SAIS: Secure AI Assistants”.

AI assistants are being integrated into everyday life at an unprecedented pace, from the personal AI assistants running on our smartphones and in our homes, to enterprise AI assistants that increase productivity in the workplace, to health AI assistants. In the UK alone, 7M users interact with AI assistants every day, and 13M do so weekly. A crucial issue is how secure AI assistants are, given that they make extensive use of AI and learn continually. Moreover, AI assistants are complex systems in which multiple AI models interact with each other, with the various stakeholders, and with the wider ecosystem in which the assistants are embedded. Security threats range from adversarial settings, where malicious actors exploit vulnerabilities that arise from the use of AI models to make AI assistants behave insecurely, to accidental ones, where negligent actors introduce security issues or use AI assistants insecurely.

SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.

This particular post will focus on developing an understanding of attacks on AI assistants, considering the whole AI assistant ecosystem, the AI models used in the assistants, and all the stakeholders involved. It will focus in particular on the feasibility and severity of potential attacks on AI assistants from a strategic threat and risk perspective.

The successful candidate must be highly motivated and must have:

- A PhD (or near completion) in a relevant subject (computer science, cyber security, AI/ML).

- Excellent background in cyber security and/or AI security.

- A strong academic track record of conducting and disseminating research, commensurate with their career stage.

- Excellent communication skills in English (both writing and speaking).

- A collaborative attitude towards teamwork in an international and interdisciplinary environment.

Candidates are also encouraged to highlight any relevant industry experience and/or success in securing funding for their own research.
