UNIVERSITY OF SYDNEY

Postdoctoral Research Associate in Augmented Reality for the Visually Impaired, Auditory Sensory Augmentation

Location
Sydney, Australia
Salary
$97,043 p.a. - $100,717 p.a. + 17% superannuation
Posted
18 Jul 2022
End of advertisement period
22 Aug 2022
Ref
0094472
Contract Type
Fixed Term
Hours
Full Time

Postdoctoral Research Associate in Augmented Reality for the Visually Impaired, Auditory Sensory Augmentation

  • Join a world-class research team developing an augmented reality device to assist the visually impaired
  • Develop novel auditory sensory augmentation technologies for assistive wearable glasses with machine vision
  • Full time, 2 year fixed term position, Level A, Base Salary $97,043 p.a. - $100,717 p.a. + 17% superannuation

To keep our community safe, please be aware of our COVID safety precautions which form our conditions of entry for all staff, students and visitors coming to campus.

Sponsorship / work rights for Australia

Australian Temporary Residents currently employed at the University of Sydney may be considered for a fixed term contract for the length of their visa, depending on the requirements of the hiring area and the position.

Pre-employment checks

Your employment is conditional upon the completion of all role required pre-employment or background checks in terms satisfactory to the University. Similarly, your ongoing employment is conditional upon the satisfactory maintenance of all relevant clearances and background check requirements. If you do not meet these conditions, the University may take any necessary step, including the termination of your employment.

EEO statement

At the University of Sydney, our shared values include diversity and inclusion and we strive to be a place where everyone can thrive. We are committed to creating a University community which reflects the wider community that we serve. We deliver on this commitment through our people and culture programs, as well as key strategies to increase participation and support the careers of Aboriginal and Torres Strait Islander People, women, people living with a disability, people from culturally and linguistically diverse backgrounds, and those who identify as LGBTIQ. We welcome applications from candidates from all backgrounds.

How to apply

Applications (including a cover letter, CV, and any additional supporting documentation) can be submitted via the Apply button at the top of the page.

If you are a current employee of the University or a contingent worker with access to Workday, please log in to your Workday account and navigate to the Career icon on your Dashboard. Click on USYD Find Jobs and apply.

For a confidential discussion about the role, or if you require reasonable adjustment or support filling out this application, please contact Rebecca Astar or Linden Joseph, Recruitment Operations, by email to recruitment.sea@sydney.edu.au.

© The University of Sydney

The University reserves the right not to proceed with any appointment.

Click to view the Position Description for this role.

Applications Close

Monday 22 August 2022 11:59 PM

About you

The University values courage and creativity; openness and engagement; inclusion and diversity; and respect and integrity. As such, we see the importance of recruiting talent aligned to these values and are looking for a Postdoctoral Research Associate who has: 

  • a PhD (or near completion) in a relevant field
  • previous experience with AR/VR/XR environments
  • knowledge of one or more of: sound synthesis techniques (deep networks and other methods); audio signal processing related to spatial audio; methods and principles of psychophysical experiments
  • experience with one or more programming languages (C, C++, C#, Python)
  • ability to manage others and coordinate multiple project objectives
  • excellent communication and interpersonal skills.

About the opportunity

The University of Sydney’s Computing and Audio Research Laboratory (CARLab) is one of Australia’s leading spatial audio and machine hearing research groups, undertaking fundamental and applied research in audio signal processing, auditory perception and related fields. CARLab is focused on developing and testing new auditory sensory augmentation paradigms. We take a broad view of auditory sensory augmentation as comprising three parts: (1) sensors and machine intelligence extract information for a targeted objective; (2) this information is rendered via the auditory channel as sound; (3) efferent feedback control is enabled via hand/wrist or other sensors. Experiments are run using motion capture and the latest AR/VR/XR equipment.

We have funding available through the Project ARIA CRC-P to support a Postdoctoral Research Associate to undertake fundamental and applied research related to the development, design and deployment of novel imaging technologies. Project ARIA, Augmented Reality in Audio, seeks to endow the visually impaired with a richer sense of their surroundings using a wearable augmented reality device. Building on technologies from robotics, augmented reality, and spatialised audio, ARIA will deliver next-generation perception algorithms with the potential to improve quality of life for millions of people affected by vision impairment worldwide.

There are multiple positions open as part of Project ARIA. This auditory-sensory-augmentation-focused position will advance the auditory sensory augmentation technologies required for the ARIA wearable device to perform reliably and efficiently in a breadth of usage scenarios. Challenges will relate to: microphone array processing on the glasses to extract ambient and voice audio; sound synthesis paradigms (e.g., deep networks and other sound synthesis approaches) to render targeted information as audio; and real-time, individualised spatial audio rendering where appropriate (HRTFs and head-tracking).

Incorporating concepts from audio signal processing and machine learning, auditory psychophysics, and some spatial audio understanding, this role involves the development of auditory sensory augmentation experiments (working with the machine vision team to render sensory information as sound for targeted scenarios) and working with modern AR/VR/XR equipment (headsets, Unity/Unreal programming, and motion capture) to run psychophysical experiments with both sighted and low-vision participants.

In this role you will:

  • work closely with project academics, engineers, and PhD students
  • create a simulation and design environment for developing novel auditory sensory augmentation paradigms
  • characterise novel auditory sensory augmentation paradigms through simulation, lab experiments, and user trials
  • develop novel sound synthesis techniques that interface with machine sensor information
  • contribute as a researcher to project report preparation, presentation at internal workshops, and dissemination at top international conferences and journals.