Alexander Domahidi, CTO and co-founder, Embotech AG, SWITZERLAND
A vehicle’s physical capabilities are crucial to the feasibility and smoothness of any maneuver. Traditional motion planning methods for autonomous driving (AD) neglect most of the physics, making them either conservative or dependent on advanced low-level vehicle controls that are often absent or prohibitively expensive. We demonstrate physics-based motion planning technology that uses numerical optimization to calculate smooth, safe trajectories that can easily be followed by standard low-level vehicle controllers. Building on recent advances in embedded optimization technology, we capture most of the relevant vehicle dynamics while driving on highways or rural roads, significantly extending the performance envelope of autonomous cars.
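The core idea of the abstract — smooth trajectories from numerical optimization — can be illustrated with a minimal sketch: an equality-constrained quadratic program that finds a 1-D position profile minimizing squared acceleration, solved exactly through its KKT system. The problem setup (horizon length, endpoint values, zero-initial-velocity constraint) is purely illustrative and is not Embotech's actual formulation.

```python
import numpy as np

def plan_smooth_profile(n=20, x_start=0.0, x_goal=10.0):
    """Plan positions x_0..x_n minimising the sum of squared second
    differences (a discrete acceleration), subject to fixed start/goal
    positions and zero initial velocity, via the KKT system of the
    equality-constrained QP:  min 1/2 x^T H x  s.t.  A x = b."""
    N = n + 1  # number of knot points
    # Second-difference operator: (D2 @ x)[i] = x[i] - 2 x[i+1] + x[i+2]
    D2 = np.zeros((N - 2, N))
    for i in range(N - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    H = D2.T @ D2                         # quadratic cost Hessian
    # Constraints: x_0 = x_start, x_n = x_goal, x_1 - x_0 = 0 (zero v0)
    A = np.zeros((3, N))
    A[0, 0] = 1.0
    A[1, -1] = 1.0
    A[2, 0], A[2, 1] = -1.0, 1.0
    b = np.array([x_start, x_goal, 0.0])
    # Solve the KKT system [[H, A^T], [A, 0]] [x; lam] = [0; b]
    K = np.block([[H, A.T], [A, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(N), b])
    return np.linalg.solve(K, rhs)[:N]
```

Real planners replace this toy cost and constraint set with vehicle dynamics, road boundaries and obstacle constraints, but the structure — a sparse optimization problem solved in real time — is the same.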
Real-time deep learning for ADAS and autonomous vehicles
Assaf Mushinsky, Chief scientist and co-founder, Brodmann17, ISRAEL
Perception, whether camera- or lidar-based, is the most computationally demanding task in L4/L5 autonomous vehicles. Although the first robot-taxis and luxury vehicles may not be as sensitive to cost, it is important to understand the options for the mass production of autonomous vehicles. The presentation will discuss the improvements possible within deep learning algorithms that will enable the mass production of autonomous vehicles. We will review deep learning frameworks, inference engines – including whether or not to write your own – and neural network optimization. Throughout the presentation we will share measured data and results for every step in the chain.
Autonomous driving and AI: an approach to achieve functional safety
Oliver Bockenbach, Head of functional safety - autonomous driving department, KPIT Technologies GmbH, GERMANY
Applications based on artificial intelligence are improving at a rapid pace, and the accuracy of their inferences is constantly increasing. However, a small percentage of wrong inferences remains, and those mispredictions are very hard to detect. From the perspective of functional safety, errors that cannot be detected are unacceptable. Nevertheless, techniques can be used to avoid and detect mispredictions. This presentation is organized around the following points: the notion of confidence in the inputs to the AD system; and safety mechanisms working on both the spatial surroundings of the vehicle and the temporal sequence of events it perceives.
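One generic way a temporal safety mechanism of the kind mentioned above can work — a sketch under assumed parameters, not KPIT's actual method — is to trust a detection only once it has persisted across several consecutive frames, which suppresses one-off mispredictions at the cost of a short confirmation delay:

```python
from collections import deque

class TemporalConsistencyFilter:
    """Confirm an object only after it appears in at least `min_hits`
    of the last `window` frames. Window and threshold values are
    illustrative placeholders."""

    def __init__(self, window=5, min_hits=3):
        self.window = window
        self.min_hits = min_hits
        self.history = {}  # object id -> deque of recent hit flags

    def update(self, detected_ids, tracked_ids):
        """Record one frame of detections; return the confirmed ids."""
        confirmed = set()
        for oid in tracked_ids:
            hits = self.history.setdefault(oid, deque(maxlen=self.window))
            hits.append(oid in detected_ids)
            if sum(hits) >= self.min_hits:
                confirmed.add(oid)
        return confirmed
```

A single-frame false positive never reaches the confirmed set, while a persistently observed object is confirmed after `min_hits` frames.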
10:30 - 11:15
RGBD-based DNN for road obstacle detection and classification
Shmoolik Mangan, Algorithm development manager, Vayavision, ISRAEL
Mainstream DNN-based detection and classification relies on an object-level sensor-fusion scheme, in which a separate DNN is applied to the output of each sensor individually, followed by object-level fusion. The main disadvantages of this concept are the need to run a separate DNN per sensor and the propagation of each sensor channel's weaknesses. Here we present an alternative scheme in which low-level sensor fusion is used to create a unified HD RGBD 3D model, followed by a single unified DNN. We present results from novel DNN architectures that utilize the RGBD input for robust detection and classification.
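The low-level fusion step can be sketched as stacking a per-pixel depth map (assumed already registered to the camera frame) onto the RGB channels, producing a single four-channel tensor that one unified network consumes. The normalization scheme and missing-return handling below are illustrative assumptions, not Vayavision's pipeline:

```python
import numpy as np

def fuse_rgbd(rgb, depth, max_range=100.0):
    """Stack a camera image (H, W, 3, uint8) and a registered depth map
    (H, W, metres) into one RGBD tensor for a single DNN. All channels
    are scaled to [0, 1]; pixels with no lidar return (depth <= 0) are
    set to 1.0 (treated as maximum range)."""
    rgb = rgb.astype(np.float32) / 255.0
    d = depth.astype(np.float32).copy()
    d[d <= 0] = max_range                   # fill missing returns
    d = np.clip(d / max_range, 0.0, 1.0)
    return np.dstack([rgb, d])              # (H, W, 4) network input
```

The first convolutional layer of the unified DNN then simply takes four input channels instead of three.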
DeepRacing AI: teaching autonomous vehicles to handle edge cases in traffic
Madhur Behl, Assistant professor, computer science, University of Virginia, USA
What will an AV do if another vehicle swerves across multiple lanes without any indication? Or when the car in front brakes without warning? Or when an obstacle appears at the last second in front of the car? How do we ensure that the car drives safely and reliably in situations that don’t happen often in day-to-day driving and are therefore difficult to gather data on? This talk will describe the research being done at the UVA Link Lab, where we teach AVs to deal with edge cases in traffic by being agile.
Native camera imaging on lidar and novel deep learning enablement
Raffi Mardisosian, VP, corporate development, Ouster Inc, USA
The emergence of lidar as a critical 3D sensing modality for autonomous vehicles has created a need for computer vision scientists to develop new algorithms to segment, track and classify point clouds. Progress has been limited by the inability to apply decades of methodologies from camera-based vision, due to the novel data formats and structures that conventional lidars output. Recent breakthroughs in lidar hardware enable camera-like imagery of both ambient and signal data in a rectilinear, camera-like grid. This talk will focus on the resulting implications for deep learning and will feature applications of camera deep learning algorithms on lidar data.
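The bridge from point clouds to camera-style deep learning can be sketched as a spherical projection: each point is binned by azimuth and elevation into a rectilinear range image on which standard 2-D convolutions apply. The sensor geometry below (64 beams, a symmetric ±16.6° vertical field of view) is an illustrative assumption, not a description of any specific Ouster product:

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024, fov_up=16.6, fov_down=-16.6):
    """Project an (N, 3) lidar point cloud (x, y, z in metres) into an
    (h, w) range image indexed by azimuth (columns) and elevation
    (rows), so 2-D CNN layers can be applied directly."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                               # azimuth, [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    v = ((fu - pitch) / (fu - fd) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                        # range channel
    return img
```

Hardware that emits data natively in such a grid skips this projection step entirely, which is the breakthrough the talk refers to; additional channels (intensity, ambient) can be stacked the same way.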
12:45 - 14:00
Wednesday 22 May
Workshop - Amazon Web Services 15:00 - 18:00
Easier productivity, reinforcement learning and faster training with no pre-labeled data
After a brief introduction to AWS, participants will build a reinforcement learning model for autonomous driving with Amazon SageMaker, AWS RoboMaker and AWS DeepRacer. This is a hands-on workshop.
All participants in the workshop must have access to an AWS account and have successfully launched and tested an ml.c5.xlarge Amazon SageMaker notebook instance. Furthermore, we recommend that all participants complete the Amazon SageMaker getting-started tutorial. This will help ensure that the learning objectives are met and will enable participants to experiment with personalized training models on a physical AWS DeepRacer.
Please Note: This conference programme may be subject to change