16th International Workshop on Boolean Problems

Keynote Speakers


Thursday | Keynote 1 | September 19


Jan Peleska
Universität Bremen

Scary or Promising? Machine Learning in Safety-Critical Control Systems




Abstract:
One of the challenges our society faces today has been caused by a recent change of paradigm in computer science: the advent of powerful, applicable artificial intelligence (AI). This topic is currently discussed nearly everywhere in the media. In this talk, I will focus on a specific "sub-challenge", namely the risks involved in applying machine learning in safety-critical applications and the possibilities to mitigate these risks such that they become socially acceptable. Coping with this challenge is currently of considerable importance, since (1) our society has started to take technical safety for granted, (2) large enterprises have replaced technical specialists in the upper management layers with accountants, controllers and true believers in shareholder value, and (3) autonomous systems (road vehicles, trains, robots, drones, etc.) have become tempting business cases, but cannot be operated without the application of machine learning for safety-critical control components. As it turns out, the specific risk induced by using machine learning (ML) in safety-critical control systems is not really AI-specific or ML-specific. The root cause of these risks lies in the fact that no globally valid logical specification is given of how arbitrary elements of the input space should be transformed into outputs. The expected output is only specified for a training and verification set of data, which represents a tiny fraction of all the elements of the input space. The same problem occurs in increasingly complex applications that do not rely on AI at all: the complexity prevents system designers from creating comprehensive models describing the expected system behaviour. Instead, so-called scenario libraries are created, specifying how the system should behave in certain situations. For systems of this kind, it is necessary to determine the residual risk for uncovered inputs or uncovered scenarios that could occur during real-world operation. Using a trained neural network for obstacle detection in autonomous trains as an example, we will demonstrate how such estimates can be calculated, using a combination of mathematical analysis and statistics. It is shown that the statistical part of this approach can also be used to determine the residual risk of missing scenarios in complex system specifications.
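As a rough illustration of the statistical side of such an argument, the sketch below computes a one-sided Clopper-Pearson upper confidence bound on a residual failure probability from test outcomes. It assumes the residual risk can be modelled as the unknown failure probability of independent, identically distributed test scenarios; the function name and the concrete numbers are illustrative and not taken from the talk.

# Minimal sketch: one-sided Clopper-Pearson upper confidence bound on the
# residual failure probability p, assuming n independent, identically
# distributed test scenarios with k observed failures. This only illustrates
# the general statistical idea, not the specific method presented in the talk.
from scipy.stats import beta

def residual_risk_upper_bound(n: int, k: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the per-scenario failure probability."""
    if k >= n:
        return 1.0
    # Clopper-Pearson (exact binomial) one-sided upper bound.
    return beta.ppf(confidence, k + 1, n - k)

# Example: 10,000 scenarios handled correctly (k = 0) give a 95% upper bound
# of roughly 3.0e-4 on the residual failure probability ("rule of three").
print(residual_risk_upper_bound(10_000, 0))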

CV:
Since 1995, Dr. Peleska has been a professor of computer science (operating systems and distributed systems) at Bremen University in Germany. At the University of Hamburg, he studied mathematics and wrote his doctoral thesis on a topic in the field of differential geometry. From 1984 to 1990 he worked at Philips as a Senior Software Designer and later as a department manager in the field of fault-tolerant systems, distributed systems and database systems. From 1990 to 1994 he was manager of a department at Deutsche System-Technik responsible for the development of safety-critical embedded systems. Since 1994 he has worked as a consultant, specialising in development methods, verification, validation and test of safety-critical systems. His habilitation thesis, focusing on formal methods for the development of dependable systems, was completed in 1995. Together with his wife Cornelia Zahlten, he founded the company Verified Systems International GmbH in 1998, providing tools and services in the field of safety-critical system development, verification, validation and test. His current research interests include formal methods for the development of dependable systems, test automation based on formal methods with applications to embedded real-time systems, verification of security properties, and formal methods in combination with CASE methods. Current industrial applications of his research work focus on the development and verification of avionic software, space mission systems, and railway and automotive control systems.


Friday | Keynote 2 | September 20


Lars Hedrich
Johann Wolfgang Goethe-Universität, Frankfurt am Main

Synthesizing Analog Neural Networks for Low-Power AI




Abstract:
AI edge devices are important for reducing network traffic and the power consumption of whole systems. Edge devices with extremely low power consumption may use a different architecture than GPUs and CPUs. We present an analog CNN inference structure with low power consumption. The structure is automatically generated from NN descriptions at the netlist level and partly at the layout level. Due to the automatic generation, an efficient design space exploration can be performed. As the analog networks suffer from process variations and mismatch, we discuss the verification of their correct functionality.
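To give a flavour of what a design space exploration over such an automatically generated structure might look like, here is a minimal, hypothetical sketch; the parameters (number of parallel analog MACs, weight resolution) and the power and accuracy models are placeholders, not the generator or cost models presented in the talk.

# Hypothetical sketch of a design space exploration loop for an analog CNN
# inference structure. All parameters and cost functions are placeholders.
from itertools import product

def estimate_power_mW(parallel_macs: int, weight_bits: int) -> float:
    # Placeholder power model: more parallel analog MACs and higher weight
    # resolution cost more power.
    return 0.02 * parallel_macs * weight_bits

def estimate_accuracy(parallel_macs: int, weight_bits: int) -> float:
    # Placeholder accuracy model: coarser weights lose accuracy; parallelism
    # is assumed accuracy-neutral here.
    return min(0.99, 0.80 + 0.03 * weight_bits)

candidates = product((8, 16, 32, 64), (3, 4, 5, 6))   # (parallel MACs, weight bits)
feasible = [
    (macs, bits, estimate_power_mW(macs, bits), estimate_accuracy(macs, bits))
    for macs, bits in candidates
    if estimate_accuracy(macs, bits) >= 0.90           # accuracy constraint
]
best = min(feasible, key=lambda c: c[2])               # minimise estimated power
print("selected configuration:", best)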

CV:
Lars Hedrich is a full professor at the Institute of Computer Science, University of Frankfurt, where he heads the design methodology group. He was born in Hanover, Germany, in 1966 and graduated (Dipl.-Ing.) in electrical engineering from the University of Hanover in 1992. He received the Ph.D. degree in 1997 and became an assistant professor at the same university in 2002, before moving to Frankfurt in 2004. His research interests include several areas of analog design automation: symbolic analysis of linear and nonlinear circuits, behavioral modeling, automatic circuit synthesis, formal verification and robust design.


Friday | Keynote 3 | September 20


Christoph Lüth
DFKI, Germany

The Case for Open-Source Hardware





Abstract:
Open-source software has conquered the world, with the Linux kernel running approx. 40% of all computers. Yet, open-source hardware is still decidedly niche: until recently, a completely open-source hardware design flow was not available. In the HEP project, we have developed such a flow, from the model in a hardware description language (HDL) right down to the physical chip (ASIC) produced in a fab in Frankfurt/Oder, and put it into action for a Hardware Security Module (HSM). In this talk, we will explore the design flow, exhibit our specific contributions, and discuss the benefits and drawbacks of the approach.

CV:
Christoph Lüth is vice director of the Cyber-Physical Systems research department at the German Research Centre for Artificial Intelligence (DFKI) in Bremen, and professor of computer science at the University of Bremen. His research covers the whole area of formal methods, from theoretical foundations to tool development and applications in practical areas such as robotics. He has authored or co-authored over eighty peer-reviewed papers and was the principal investigator in several successful research projects in this area.