Critical analyses and fundamental studies seek novel insights that shape the future of our society.
We are building a unique environment of world-class researchers and industrial-scale real-world data openly shared with the scientific community.
The spectacular success of modern control theory realized by reinforcement learning suggests that many real-world problems can be solved via a new paradigm of simulation, or ‘gamification’. Averting climate change, making mass mobility sustainable through smart-city control and large-scale fleet management, building safe self-driving cars, or revolutionizing real-world logistics: with this new paradigm, a simulator (or game) is built that encompasses all the salient features of the problem to be solved. A self-learning AI algorithm then plays the game until it finds a solution. An entire class of hitherto intractable control problems thus becomes accessible, provided one can build a sufficiently expressive simulator.
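As a toy illustration of this paradigm (not an IARAI system), the sketch below builds a deliberately tiny "simulator" — a one-dimensional corridor the agent must traverse — and lets a tabular Q-learning agent play it until a solution emerges. The environment, rewards, and hyperparameters are all invented for illustration; real applications would use far richer simulators and deep RL.

```python
import random

# Toy "simulator": the agent starts at cell 0 and must reach cell 4.
# Actions: 0 = move left, 1 = move right. Reaching the goal yields reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Advance the simulator one step; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == GOAL
    return nxt, (1.0 if done else 0.0), done

# Tabular Q-learning: the self-learning agent "plays the game" repeatedly.
random.seed(0)
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
for _ in range(2000):
    s = random.randrange(N_STATES - 1)   # random start state each episode
    for _ in range(20):                  # cap episode length
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda x: q[s][x])
        s2, r, done = step(s, a)
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
        if done:
            break

# Once training converges, the greedy policy moves right everywhere.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # e.g. [1, 1, 1, 1] after convergence
```

The key point is that the agent never sees the problem directly, only the simulator's observations and rewards; everything it learns is encoded in the value table `q`.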
Recent advances in modern machine learning allow us to exploit big data to study complex challenges that were previously intractable. At IARAI we focus on pressing problems with great impact on society and the planet. We can now, for instance, predict patterns in urban traffic or large-scale rainfall, assess climate change, classify biomedical images, and help discover new drugs.
We train models to learn the features required to make good predictions. These features capture the essence of the object or state examined, and are therefore of interest in their own right. We research adversarial attacks and self-supervised learning, as well as point-cloud and set-based methods for detecting multi-dimensional objects.
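To make the adversarial-attack idea concrete, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a fixed linear classifier. The weights, input, and step size are all invented for illustration (the step size is chosen large enough to flip this toy prediction cleanly); real attacks target trained deep networks under much smaller perturbation budgets.

```python
import math

# Illustrative (untrained) linear classifier and an input it labels positive.
w = [2.0, -1.5, 0.5]   # classifier weights (made up for this sketch)
x = [0.4, 0.1, 0.9]    # input example

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

# For cross-entropy loss with true label y = 1, the gradient of the loss
# w.r.t. the input is (p - y) * w. FGSM perturbs the input in the
# direction of the sign of that gradient to maximize the loss.
p = predict(x)
grad = [(p - 1.0) * wi for wi in w]
eps = 0.6  # large step, chosen so the flip is visible in this toy case
x_adv = [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

print(predict(x))      # confidently positive (> 0.5)
print(predict(x_adv))  # pushed below 0.5: the prediction flips
```

The same one-line gradient-sign perturbation, applied to image pixels of a deep classifier, is what makes adversarial examples both easy to construct and hard to defend against.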
The field of machine learning has undergone a revolution: certain well-known ‘black box’ algorithms have shown their long-suspected potential and now deliver state-of-the-art performance on many benchmark problems. What makes these algorithms ‘black box’ is that we have no rigorous understanding of their success that is also intuitive to humans. As a result, questions at the heart of their application have no clear answers today: What architecture should a neural network have to handle a new data type? How can different networks be combined effectively? How can prior or expert knowledge best be incorporated?
At IARAI, we believe that these fundamental questions must be answered in a theoretically rigorous manner if humanity is to reap the full benefits of AI. White-boxing AI will also make models and results more interpretable, increase learning efficiency and robustness, and inform how we best deal with rare yet important events.