Autonomous Vehicles Need Superhuman Perception for Success

Michael Milford, Associate Professor at Queensland University of Technology (QUT), is a leading robotics researcher working to improve perception and more in autonomous vehicles, conducting his research at the intersection of robotics, neuroscience and computer vision.

By Leonie Philipp, Re.Work.


For self-driving cars and other smart transport to be successfully integrated into the real world, the safety of passengers and pedestrians must be ensured. In the world of intelligent machines, perception answers the question: what is around me? This situational awareness is paramount for the safe operation of autonomous vehicles in real-world environments.

Scientists working in this field point to robotic perception as fundamental in equipping machines with a semantic understanding of the world, so that they can reliably identify objects and make informed predictions and decisions.

Michael's research models the neural mechanisms in the brain underlying tasks like navigation and perception in order to develop new technologies, with a particular emphasis on challenging application domains where current techniques fail, such as all-weather, any-time positioning for autonomous vehicles.

As the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam draws nearer, we spoke to Michael to gain insight into the recent advancements in robotic perception in autonomous systems and the challenges that lie ahead.

How did you begin your work in autonomous systems?

I’ve always been fascinated by the application of intelligence to autonomous systems, ever since my undergraduate university days. I think working with intelligent systems that are actually deployed and evaluated on embodied systems such as autonomous robots and vehicles provides a crucial “real-world” sanity check for what parts of theory are correct and which need more work. These insights can then close the loop back to the underlying theory and help improve it even further, moving us closer to truly intelligent autonomous systems.

What key factors have enabled recent advancements in the perception of challenging environments?

On the practical side, it's the relatively recent realization by all the major players in this space that autonomous vehicles represent an unprecedented commercial opportunity at huge scale, and the associated influx of resources and talent now working on significant advances for problems like perception in challenging environments.

Sensors like cameras are also rapidly improving to make this problem more tractable. You can now get a consumer camera for a couple of thousand dollars that sees better in the dark than you do, and this technology is still getting better. So some “traditional” perception problems like seeing in the dark are being largely solved by improved sensing technology.

Then there is the software and algorithmic side of things. Humans are very good at dealing with challenging or “corner-case” perceptual situations – and we're now gathering large amounts of data.

Learn more about the future of AI in transport at the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28-29 June, taking place alongside the Machine Intelligence Summit. View more information here.

Confirmed speakers include Pablo Puente Guillen, Researcher, Toyota Motors; Jan Erik Solem, Co-founder & CEO, Mapillary; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; Damian Borth, Director of the Deep Learning Competence Center, DFKI; Julian Togelius, Associate Professor, NYU Tandon School of Engineering; and more.

Tickets are limited for this event. Book your place now.

Opinions expressed in this interview may not represent the views of RE•WORK. Some opinions may even go against the views of RE•WORK, but they are posted to encourage debate and well-rounded knowledge sharing, and to allow alternative views to be presented to our community.


Original. Reposted with permission.

