Naghmana Majed CTO, global automotive, aerospace and defense industry IBM USA
The presentation will outline a solution using blockchain for OTA software updates to vehicles, enabling traceability, auditability and compliance. It will also discuss how we can enable OEMs to move toward an Open Vehicle platform around OTA software updates and new features, and look at how to reduce testing and deployment cycles.
Artificial intelligence in the driver’s seat
Serkan Arslan Director of automotive EMEA NVIDIA GERMANY
Autonomous driving requires new data center infrastructure and in-vehicle supercomputing platforms, with AI used to process data from cameras, radars, lidars and other sensors. Algorithms leveraging structure from motion, sensor fusion and deep learning help perceive the environment, create HD maps, predict traffic behavior and control the vehicle. To ensure AVs are safer than human-driven vehicles, developers need to drive millions if not billions of miles to properly train and test them. Combining photo-realistic simulation and AI enables the industry to safely drive billions of miles in virtual reality – testing an unlimited variety of conditions and scenarios.
Progress on the AUTOSAR adaptive software platform for intelligent vehicles
Dr Günter Reichart Spokesperson AUTOSAR Development Partnership GERMANY
The AUTOSAR community has grown to approximately 250 partner companies since the first AUTOSAR Classic Platform specification was released more than 15 years ago. The AUTOSAR Adaptive Platform is a completely new approach to cope with challenging market trends in the automotive industry, such as internet access in cars, highly automated driving and V2V communication. The platform runs on high-performance computing hardware and supports parallel processing on many-core systems and GPUs. Consequently, it can also support high-bandwidth communication and is able to host AI applications. AUTOSAR has its roots in the automotive industry, with safety and security as the highest priority.
10:30 - 11:15
Challenges of deep learning in the automotive industry
Dr Florian Baumann CTO Automotive & AI Dell EMC GERMANY
After briefly introducing deep learning, the talk will focus on the common workflow of constructing a neural network in terms of the lifecycle of automotive product development: the data collection and acquisition phase, the data annotation phase, quality checks and, finally, constructing the network with its test and validation as a last step. The audience will: understand the common workflow and the basics of constructing a deep-learning-based classifier for automotive product development; become aware of typical challenges/problems and how to avoid and counter them; learn how to test and validate deep-learning-based algorithms for autonomous driving; and understand how many miles must be driven, how many images must be annotated, and the massive investment of effort required for test and validation.
Deep learning for automation and quality control in crowdsourcing applications
Bernd Heisele Principal vision engineer Mighty AI USA
Autonomous vehicles must be trained to detect and avoid road obstacles with incredible accuracy to safely operate on public roads. Efficiently generating precise manual annotations for ground truth training datasets is a top priority for computer vision teams who must label thousands of frames in a timely and cost-effective manner. Although crowdsourcing these annotations delivers necessary scale, the results can be insufficient without proper quality control. In this presentation, we discuss a novel approach to quality control in crowdsourcing applications, which uses a state-of-the-art deep learning model (Faster R-CNN with NASNet) to predict performance and provide real-time feedback to human annotators.
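The core of such a quality check can be sketched in a few lines of Python. This is a simplified illustration, not Mighty AI's implementation: the deep learning model is replaced by a list of precomputed predicted boxes, and the (x1, y1, x2, y2) box format and 0.5 IoU threshold are assumptions made for the sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def review_annotations(human_boxes, model_boxes, threshold=0.5):
    """Flag human annotations that disagree with the model's predictions.

    Each human box is matched to its best-overlapping model box; boxes
    below the IoU threshold are queued for feedback to the annotator.
    """
    flagged = []
    for i, hb in enumerate(human_boxes):
        best = max((iou(hb, mb) for mb in model_boxes), default=0.0)
        if best < threshold:
            flagged.append(i)
    return flagged
```

In a real pipeline, the model boxes would come from an inference pass of the detection model, and the flagged indices would drive the real-time feedback described above.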
Presentation title and synopsis TBC
Dan Cauchy Executive director of Automotive Grade Linux The Linux Foundation USA
12:45 - 14:00
Tuesday 21 May
Afternoon Session 14:00 - 17:45
AI and big data management for autonomous driving
Frank Kraemer Systems architect IBM GERMANY
Advanced driver assistance systems (ADAS) and autonomous driving (AD) are becoming part of all vehicles. All major OEMs and Tier 1 suppliers are implementing and testing AD capabilities. We examine how real-time sensors, big data computing, data storage and data archiving are integrated in today's ADAS/AD systems, providing a fascinating case study of best practices for workflow design, testing and development, and data storage and archiving, applicable to all industries.
Autonomous driving Level 5 – perception and fusion
Davor Andric CTO AI and analytics North and Central Europe DXC Technology GERMANY
The presentation will cover: (1) Analyzer – distributed/parallel processing of automotive formats at the sensor level without data conversion or storage duplication, reducing time to analyze by bringing algorithms to the data; (2) Optimizer – custom annotation and extraction of AI-relevant data sequences; (3) Trainer – a state-of-the-art autonomous driving platform for object recognition, fusion of disparate sensor datasets and formats, and orchestration of AD/ML workloads.
Practical validation of AI safety within the ISO 26262-6 framework
Dr Edward Schwalb Lead scientist MSC Software USA
It is often mentioned that ISO 26262-6 is inadequate for addressing safety validation of autonomous vehicles comprising AI components. As an example, there is a need to validate behavior when faced with unprogrammed objects, e.g. a piece of tissue paper vs. an aircraft on the road; some refer to this as the 'category problem'. In this presentation we review the relevant sections of ISO 26262, including 8.4.2, 8.4.5, 9.4.3, 9.4.4, 9.4.5, 10.4.3, 10.4.4 and 10.4.5. We provide a practical approach to address key aspects of the 'category problem' within the ISO 26262 framework.
15:30 - 16:15
Seeing through clutter: compressive learning for single photon lidar
Puneet Chhabra Co-founder and CTO Headlight AI UK
Sensors are vital for a wide range of key markets, including transportation, utilities, mining, security, construction and manufacturing; and they will remain a fundamental building block of the future. However, they are limited in their applications to ideal conditions. They don’t see effectively when they most need to – in fog, heavy rain or in complete darkness – and it is challenging to process the vast quantity of data they generate. This presentation introduces Headlight AI’s latest development in the real-time processing of multi-return, multi-spectral lidar and radar measurements using compressive deep learning techniques.
Automate and validate AV algorithms with large driving sensor datasets
Nicolas du Lac CEO Intempora FRANCE
With the diversity of scenarios and the unpredictability of perception algorithms, statistical validation becomes necessary as formal proof becomes too complex and unaffordable. The costs of such validation can be tremendous. How do you manage petabytes of recorded sensor datasets, unaltered videos and a growing number of computing nodes? We will introduce an innovative software suite that helps benchmark, test and automate the validation of ADAS and HAD functions. It allows computing resources to be optimized for storage and high-performance computing by taking advantage of software execution performance and the scalability of cloud architectures, while testing your features against relevant data only.
HPC, artificial intelligence, virtualization – a creative combination for ADAS development
Gianluca Vitale Global business segment manager AVL List GmbH AUSTRIA
The automotive industry is undergoing the biggest shift since cars hit the roads. New electrification and autonomous technologies are combined with conventional ones, generating even higher system complexity than before. This increasing model complexity creates a challenge for virtual testing, as the number of test cases increases exponentially. An alternative to brute-force testing is to restrict the procedure to only the most relevant test cases by exploiting artificial intelligence methods. In this presentation, we introduce a strategy that uses cloud-based scaling controlled by an AI algorithm to filter and pinpoint critical scenarios.
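As an illustration of the filtering idea (not AVL's actual algorithm), a criticality heuristic such as time-to-collision can discard benign test cases before expensive simulation runs. The scenario tuples and the 3-second threshold below are invented for the sketch.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until contact at constant closing speed; infinite if the gap
    is opening or constant."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def select_critical_scenarios(scenarios, ttc_limit=3.0):
    """Keep only scenarios whose time-to-collision falls below the limit.

    scenarios: iterable of (name, gap in m, closing speed in m/s) tuples.
    Benign cases are filtered out before any simulation is scheduled.
    """
    critical = []
    for name, gap, closing in scenarios:
        if time_to_collision(gap, closing) < ttc_limit:
            critical.append(name)
    return critical
```

A learned criticality model would replace this hand-written heuristic in the cloud-scale setting the talk describes, but the filtering structure is the same: score every candidate, simulate only the worst.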
Wednesday 22 May
Morning Session 09:00 - 12:45
Motion planning at the physical limits
Alexander Domahidi CTO and co-founder Embotech AG SWITZERLAND
A vehicle’s physical capabilities are crucial for the feasibility and smoothness of any maneuver. Traditional motion planning methods for AD neglect most of the physics, being conservative or requiring advanced low-level vehicle controls that are often not present or are prohibitively expensive. We demonstrate physics-based motion planning technology, using numerical optimization, to calculate smooth and safe trajectories, which can be easily followed by standard low-level vehicle controllers. Based on recent advances in embedded optimization technology, we capture most of the relevant vehicle dynamics while driving on a highway or on rural roads, significantly extending the performance envelope of autonomous cars.
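A minimal sketch of optimization-based trajectory generation, assuming nothing from Embotech's solver: interior waypoints are iteratively relaxed to minimize a weighted sum of a smoothness term and a path-fidelity term, a far simpler stand-in for the constrained numerical optimization described above. The weights and iteration count are arbitrary choices for the example.

```python
def smooth_path(path, w_data=0.1, w_smooth=0.4, iters=500):
    """Iteratively relax interior waypoints toward a smooth curve while
    staying close to the original path; the endpoints stay fixed.

    path: list of [x, y] waypoints. Each sweep nudges every interior
    point toward the midpoint of its neighbours (smoothness) and back
    toward its original position (fidelity).
    """
    new = [list(p) for p in path]
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            for d in range(len(path[0])):
                new[i][d] += (w_data * (path[i][d] - new[i][d])
                              + w_smooth * (new[i - 1][d] + new[i + 1][d]
                                            - 2 * new[i][d]))
    return new
```

A production planner would add vehicle-dynamics constraints (curvature, acceleration, actuator limits) to the objective; this sketch shows only the smoothing core that makes the result easy for a low-level controller to follow.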
Real-time deep learning for ADAS and autonomous vehicles
Assaf Mushinsky Chief scientist and co-founder Brodmann17 ISRAEL
Perception, whether camera- or lidar-based, is the heaviest task in L4/L5 autonomous vehicles. Although the first robot-taxis and luxury vehicles may not be as sensitive to cost, it is important to understand what the options are for the mass production of autonomous vehicles. The presentation will discuss the improvements possible within deep learning algorithms that will enable the mass production of autonomous vehicles. We will review deep learning frameworks, inference engines – including whether or not to write your own – and neural network optimization. Throughout the presentation we will share measured data and results for every step in the chain.
Autonomous driving and AI: an approach to achieve functional safety
Oliver Bockenbach Head of functional safety - autonomous driving department KPIT Technologies GmbH GERMANY
Applications based on artificial intelligence are improving at a rapid pace. The accuracy of the inferences is constantly increasing. However, a small percentage of wrong inferences remains, and those mispredictions are very hard to detect. From the perspective of functional safety, errors that cannot be detected are unacceptable. Nevertheless, techniques can be used to avoid and detect mispredictions. This presentation covers the following points: the notion of confidence in the inputs to the AD system; and safety mechanisms working on the spatial surroundings of the vehicle and the temporal sequence of events perceived by the vehicle.
10:30 - 11:15
RGBD-based DNN for road obstacle detection and classification
Shmoolik Mangan Algorithm development manager Vayavision ISRAEL
Mainstream DNN-based detection and classification is based on an object-level sensor-fusion scheme, where a specific DNN is applied on the output of each sensor individually, followed by object-level fusion. The main disadvantages of this concept are the need to run a separate DNN on each sensor, and the propagation of the weaknesses of each sensor channel. Here we present an alternative scheme, where low-level sensor fusion is used to create a unified HD RGBD 3D model, followed by a single unified DNN. We present results of novel DNN architectures that utilize the RGBD for robust detection and classification.
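The low-level fusion step can be illustrated with a toy sketch. The projection from the lidar frame into image coordinates is assumed to have already happened, and zero is used as the no-return depth placeholder; both are simplifications made for the example, not Vayavision's method.

```python
def fuse_rgbd(rgb, depth_points):
    """Build a unified RGBD model: stack a per-pixel depth channel,
    filled from projected lidar returns, onto an RGB image.

    rgb: H x W image as nested lists of [R, G, B] pixels.
    depth_points: list of (row, col, depth_m) lidar returns already
    projected into image coordinates. Pixels with no return get 0.0.
    Returns an H x W grid of [R, G, B, D] pixels, ready to feed a
    single unified DNN instead of one network per sensor.
    """
    h, w = len(rgb), len(rgb[0])
    depth = [[0.0] * w for _ in range(h)]
    for r, c, d in depth_points:
        if 0 <= r < h and 0 <= c < w:
            depth[r][c] = d
    return [[rgb[r][c] + [depth[r][c]] for c in range(w)] for r in range(h)]
```

A real system would also upsample the sparse depth channel to full HD resolution before the network sees it; the point of the sketch is only that fusion happens at the pixel level, before any inference.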
DeepRacing AI: teaching autonomous vehicles to handle edge cases in traffic
Madhur Behl Assistant professor, computer science University of Virginia USA
What will an AV do if another vehicle swerves across multiple lanes without any indication? Or when the car in front brakes without warning? Or when an obstacle appears at the last second in front of the car? How do we ensure that the car drives safely and reliably in situations that don’t happen often in day-to-day driving and are therefore difficult to gather data on? This talk will describe the research being done at the UVA Link Lab, where we teach AVs to learn how to deal with edge cases in traffic, by being agile.
Native camera imaging on lidar and novel deep learning enablement
Raffi Mardisosian VP, corporate development Ouster Inc USA
The emergence of lidar as a critical 3D sensing modality for autonomous vehicles has created a need for computer vision scientists to develop new algorithms to segment, track and classify point clouds. Progress has been limited by the inability to apply decades of methodologies from camera-based vision, due to the novel data formats and structures that conventional lidars output. Recent breakthroughs in lidar hardware enable camera-like imagery of both ambient and signal data in a rectilinear camera-like grid. This talk will focus on the resulting implications for deep learning, and will feature applications of camera deep learning algorithms on lidar.
12:45 - 14:00
Wednesday 22 May
Workshop - Amazon Web Services 15:00 - 18:00
Easier productivity, reinforcement learning and faster training with no pre-labeled data
After a brief introduction to AWS, participants will build a reinforcement learning model for autonomous driving with Amazon SageMaker, AWS RoboMaker and AWS DeepRacer. This is a hands-on workshop.
All participants in the workshop must have access to an AWS account and have successfully launched and tested an ml.c5.xlarge Amazon SageMaker notebook instance. Furthermore, we recommend that all participants complete the Amazon SageMaker getting-started tutorial beforehand. This will help to ensure that the learning objectives are met and will enable participants to experiment with personalized training models on a physical AWS DeepRacer.
Thursday 23 May
Morning Session 09:00 - 12:45
Software library qualification for ASIL D systems
Dr Oscar Slotosch Board member Validas AG GERMANY
According to ISO 26262, software components and libraries have to be classified as 'unchanged' or 'modified/new' components. New/modified libraries have to be developed according to ISO 26262. However, for typically unchanged libraries, such as C/C++ standard libraries or C runtime libraries, there is a simplification in ISO 26262-8, Clause 12. In the talk we present the testing requirements for software libraries and how they can be qualified. In addition we present the growing Validas qualification kit for C/C++, which already covers about 200 C library functions and can be used to qualify many others.
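The shape of such requirement-based qualification testing can be sketched in Python rather than C. The requirement IDs, test vectors and tolerance handling below are illustrative and not taken from the Validas kit.

```python
import math

def run_qualification(cases):
    """Execute requirement-based test vectors against library functions
    and collect a verdict per case, as a qualification report would.

    cases: iterable of (requirement_id, function, args, expected, tol).
    A raised exception counts as a FAIL, since the qualified library
    must behave as specified for every documented input.
    """
    report = []
    for req_id, func, args, expected, tol in cases:
        try:
            got = func(*args)
            ok = abs(got - expected) <= tol
        except Exception:
            ok = False
        report.append((req_id, "PASS" if ok else "FAIL"))
    return report

# Hypothetical requirement vectors against a stand-in library function.
cases = [
    ("REQ-001", math.sqrt, (4.0,), 2.0, 1e-9),   # nominal input
    ("REQ-002", math.sqrt, (-1.0,), 0.0, 1e-9),  # out-of-domain input
]
```

An actual qualification kit would additionally trace each test vector back to the library's safety manual and documented assumptions of use; the harness above shows only the execute-and-record core.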
Making cameras self-aware for autonomous driving
Moritz Bücker Business unit manager Adasens Automotive GmbH GERMANY
Two fundamental algorithms addressing the issue of making cameras self-aware of their status are proposed: online targetless calibration based on optical flow, and blockage detection based on image quality metrics (e.g. sharpness and saturation). Online calibration is based on the vanishing point theory; soil/blockage detection is based on the extraction of image quality metrics and the identification of discriminative feature vectors by a support-vector machine. The presentation will include videos and real-world examples from the algorithms running in real time.
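As a toy illustration of the blockage-detection side, a sharpness metric can be computed as the mean absolute Laplacian of a grayscale frame. A fixed threshold stands in for the support-vector machine here, and the threshold value is an assumption of the sketch.

```python
def sharpness(gray):
    """Mean absolute 4-neighbour Laplacian over interior pixels.

    gray: H x W grayscale image as nested lists. A soiled or blocked
    lens produces low-contrast frames and therefore a low score.
    """
    h, w = len(gray), len(gray[0])
    total, n = 0.0, 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = (gray[r - 1][c] + gray[r + 1][c] + gray[r][c - 1]
                   + gray[r][c + 1] - 4 * gray[r][c])
            total += abs(lap)
            n += 1
    return total / n if n else 0.0

def is_blocked(gray, min_sharpness=2.0):
    """Crude stand-in for the SVM classifier: flag frames whose
    sharpness falls below a fixed threshold."""
    return sharpness(gray) < min_sharpness
```

In the system described above, several such quality metrics (sharpness, saturation and others) form the feature vector handed to the trained SVM, rather than a single thresholded score.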
An executable requirement model framework for ADAS software development
Alexander Van Bellinghen Research engineer Siemens Industry Software NV BELGIUM
The introduction of automated driving vehicles leads to increased complexity in automotive software. This paper explains how a formal contract-based software design and testing approach, based on an executable requirements model, front-loads the implementation, validation and verification of ADAS/AV software. Requirements are transformed into engineering contracts that are put on top of the software architecture to ensure architecture consistency, drive the software implementation specification (C/Simulink/…) and channel unit or integration testing. This contract-based design methodology, which treats requirements as engineering contracts, will be explained through an adaptive headlight software use case.
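The idea of executable engineering contracts can be sketched with pre/postcondition checks wrapped around a component. The adaptive-headlight function, its 0.1 gain and the ±15° beam limit below are hypothetical, chosen only to mirror the use case mentioned above.

```python
def contract(pre=None, post=None):
    """Wrap a software component with an executable requirement:
    the precondition is checked on the inputs, the postcondition on
    the result, so a contract violation fails loudly during testing."""
    def wrap(func):
        def inner(*args):
            if pre is not None:
                assert pre(*args), "precondition violated"
            result = func(*args)
            if post is not None:
                assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

# Hypothetical adaptive-headlight requirement: any steering angle within
# the mechanical range maps to a beam angle that never exceeds +/-15 deg.
@contract(pre=lambda angle: -720 <= angle <= 720,
          post=lambda beam: -15.0 <= beam <= 15.0)
def beam_angle(steering_deg):
    return max(-15.0, min(15.0, steering_deg * 0.1))
```

The same contracts can then channel unit and integration testing: every test that drives `beam_angle` implicitly checks the requirement, which is the front-loading effect the abstract describes.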
10:30 - 11:15
Coding a dilemma – legal issues on developing AI solutions
Dr Alexander Duisberg Partner Bird & Bird LLP GERMANY
Developing AI solutions poses a multitude of challenges. Starting from the regulatory framework on AV, implementing privacy by design and default, ensuring functional safety, testing the solutions in 'sandbox conditions', addressing contractual and product liability, all the way up to dealing with the critical ethical issues (dilemma situations) requires a conscious, well-prepared approach. Although 'try as you go' has never been the approach of the industry, the legal requirements on process and proper documentation, as well as managing risk, are critical success factors for AV. The presentation sets out the key issues and discusses possible solutions.
Safety argument structures for autonomous systems that use machine learning
James McCloskey Group leader - digital assurance Frazer-Nash Consultancy Ltd UK
Machine learning (ML) is making rapid progress in a variety of applications. It is highly likely to be used in safety-related and possibly safety-critical systems. There is a need to consider how to make safety arguments for systems that exploit AI techniques and, more generally, for autonomous systems that make use of them. This paper presents the work undertaken by a consortium led by Frazer-Nash Consultancy in support of the Defence Science and Technology Laboratory to determine the types of safety argument required.
Warp 'driving' – approaching AI’s speed of light
Kirk Boydston Training data specialist Samasource NETHERLANDS
Does it really require infinitely more training data to get your model to 100%? Getting an algorithm to 99%+ accuracy often feels like approaching the speed of light. Although some applications of AI are fine with sub-100% thresholds, anything less than 100% simply won’t cut it for applications where lives are at stake (e.g. pedestrian detection). This talk will investigate the emerging best practices, derived from 75+ autonomous vehicle projects, for breaking free of the 'subluminal' data barrier, and address questions such as: does a 'warp drive' to 100% accuracy exist, or is there a long, incrementalist slog ahead?
12:45 - 14:00
Lunch and Conference Close
Please Note: This conference programme may be subject to change