Research Associate in Secure AI Assistants

London (Greater) (GB)
£38,304, including London Weighting Allowance
Thursday, 29 October 2020
End of advertisement period: Sunday, 13 December 2020
Contract type: Fixed term, full time

Job description

The successful candidate will join King's College London and work on the 3-year EPSRC-funded project “SAIS: Secure AI Assistants”.
There is an unprecedented integration of AI assistants into everyday life, from the personal AI assistants running in our smartphones and homes, to enterprise AI assistants for increased productivity in the workplace, to health AI assistants. In the UK alone, 7M users interact with AI assistants every day, and 13M do so on a weekly basis. A crucial issue is how secure AI assistants are, as they make extensive use of AI and learn continually. Moreover, AI assistants are complex systems in which different AI models interact with each other, with the various stakeholders, and with the wider ecosystem in which the assistants are embedded. Beyond these technical complexities, users of AI assistants are known to have highly incomplete mental models of them and do not know how to protect themselves.

SAIS - Secure AI assistants - is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.

This particular post will focus on building an understanding of attacks on AI assistants, considering the whole AI assistant ecosystem, the AI models used in the assistants, and all the stakeholders involved, with a particular focus on the feasibility and severity of potential attacks from a strategic threat and risk perspective.

The successful candidate must be highly motivated and must have:

  • A PhD, or near completion, in a relevant subject: computer science, cyber security, AI/ML.
  • An excellent background in cyber security and/or AI security.
  • A strong academic track record of conducting and disseminating research, appropriate to their career stage.
  • Excellent communication skills in English, both written and spoken.
  • A positive attitude towards teamwork in an international and interdisciplinary environment.

For reference, preliminary work in this area from the team, specifically on one type of AI assistant, includes:

  • J. Edu, J. M. Such, Guillermo Suarez-Tangil. Smart-home personal assistants: a security and privacy review. ACM Computing Surveys, 2020.

This is a full-time, fixed-term post for 24 months or until 30 November 2022.

Key responsibilities

  • Designing, conducting, analysing and reporting empirical and/or theoretical security results
  • Writing papers for publication in journals and conferences, and presenting at conferences, seminars and other research meetings.
  • Working closely with the Principal Investigator (PI), other Co-investigators, other postdoc researchers and industry partners to ensure that the aims and objectives of the project are achieved in a timely and effective way.
  • Developing new concepts and ideas to extend intellectual understanding in the area of the project.
  • Leading the organisation of and support events, conferences and workshops run by the project to develop the project outputs and research agenda; and participating in relevant events within the institution or externally, in order to build contacts to facilitate exchange of information and advance thinking.
  • Contributing to the development of further research proposals.

The above list of responsibilities may not be exhaustive, and the post holder will be required to undertake such tasks and responsibilities as may reasonably be expected within the scope and grading of the post.

Skills, knowledge, and experience

Essential criteria

  • PhD awarded, or near completion, in Cyber Security, Computer Science, AI/ML, or a similar subject.
  • Excellent background in cyber security and/or AI security.
  • Strong research record in the subjects above as evidenced by publications in high quality journals and conferences.
  • Proven ability to work independently.
  • Flexible approach to working and a desire to develop knowledge.
  • Excellent interpersonal / team-working and organisational skills.
  • Excellent written and verbal communication skills including presentation and report-writing skills.

Desirable criteria

  • Previous experience in AI assistant security.
  • Previous experience in offensive security, security testing, or risk assessment.
  • Previous experience studying platforms, such as Android, that run third-party components.

*Please note that this is a PhD level role but candidates who have submitted their thesis and are awaiting award of their PhDs will be considered. In these circumstances the appointment will be made at Grade 5, spine point 30 with the title of Research Assistant. Upon confirmation of the award of the PhD, the job title will become Research Associate and the salary will increase to Grade 6.

Further information

This advertisement does meet the requirements for a Certificate of Sponsorship under Home Office regulations and therefore the university will be able to offer sponsorship for this role.
