Research Associate in Robotics
An 18-month Postdoctoral Research Associate position is available at the Robot Perception Lab led by Dr Shan Luo at the Department of Engineering, King’s College London. You will work on the “ViTac: Visual-Tactile Synergy for Handling Flexible Materials” project, funded by the EPSRC. The research will be conducted in collaboration with colleagues from King’s College London, international collaborators at MIT, and industrial partners of the project: Unilever and Shadow Robot Company.
The main objective of the ViTac project is to lay the groundwork for the next generation of robots that will be able to handle flexible materials, underpinned by visual-tactile synergy. The candidate will perform research in the areas of robotics, computer vision and machine learning. For more information about the project, please refer to https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/T033517/2.
You must have a PhD in Robotics, Computer Science, Electrical/Electronic Engineering (or a related field) and have excellent communication and interpersonal skills. You should have substantial knowledge of robotics, machine learning, computer vision, control and mechatronics, along with experience with robotic hardware. A good understanding of tactile sensing and robot manipulation would be an advantage.
This post will be offered on a fixed-term contract for 1 year and 6 months.
This is a full-time post.
Key responsibilities
• Review the literature in the field of robotics, in particular on robot visuo-tactile perception, robot grasping and manipulation, deep reinforcement learning, and predictive models;
• Design deep/machine learning models to predict states of deformable objects, e.g., fabrics and clothes;
• Design robot learning models for handling deformable objects;
• Set up experiments in simulation and on real robot platforms for deformable object manipulation;
• Evaluate the developed models and publish the results.
The above list of responsibilities may not be exhaustive, and the post holder will be required to undertake such tasks and responsibilities as may reasonably be expected within the scope and grading of the post.
Skills, knowledge, and experience
The candidate is expected to have a PhD in Robotics or a related field, and to have excellent communication and interpersonal skills. The candidate should have substantial knowledge of robotics, particularly robot learning.
Essential criteria
1. Proven research experience in the field of robotics;
2. PhD degree (awarded or expected shortly) in Robotics, Computer Science, Artificial Intelligence, Engineering (or a related field);
3. Sufficient depth and breadth of knowledge in the subject of Robotics to develop research in the field, e.g., robot design, control and mechatronics;
4. Publication record in robotics and machine learning, e.g., ICRA, IROS, RSS, CoRL, T-RO, CVPR, ICML;
5. Excellent verbal and written communication skills, including academic conference presentations and journal papers;
6. Excellent mathematical and programming skills in Python or C++, and practical experience with deep learning libraries (e.g., TensorFlow, PyTorch);
7. Ability to organize and prioritize competing tasks within the project timetable and ensure interim deadlines are met;
8. Self-motivated and hardworking;
9. Ability to work collaboratively and independently.
Desirable criteria
1. Research experience in one or more of the following areas: tactile sensing, robot grasping and manipulation, robot control, computer vision, deep learning, or deep reinforcement learning;
2. Hands-on experience with systems integration, including the Robot Operating System (ROS);
3. Enthusiasm to supervise UG/MSc/PhD students.
Please note that this is a PhD-level role, but candidates who have submitted their thesis and are awaiting award of their PhD will be considered. In these circumstances the appointment will be made at Grade 5, spine point 30, with the title of Research Assistant. Upon confirmation of the award of the PhD, the job title will become Research Associate and the salary will increase to Grade 6.
The position is expected to start on 1st November 2022.
Some of the recent research outputs from the project can be found below:
L. Pecyna, S. Dong, S. Luo, “Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
T. Jianu, D.F. Gomes, S. Luo, “Reducing Tactile Sim2Real Domain Gaps via Deep Texture Generation Networks”, IEEE International Conference on Robotics and Automation (ICRA), 2022.