co-PI: Tatiana Tommasi (Politecnico di Torino)
The vast majority of modern AI is computationally heavy at all stages, from algorithm design and development to training and deployment. Designing new algorithms often relies on heuristics and repeated trial-and-error; training requires data processing on a very large scale, and extensive hyperparameter search and optimization is needed to reach optimal final performance. This is true regardless of the hardware the AI runs on, from cloud and HPC systems with large-scale centralized data storage to edge devices with limited compute and memory resources. These two polar-opposite computational frameworks – the infinitely small on the edge and the infinitely large on HPC – are the two emerging computing paradigms of the coming decades. The design and study of new artificial intelligence algorithms, capable of exploiting the intrinsic properties of edge and exascale hardware by construction, is an open and crucial research challenge. We need a new generation of algorithms for decentralized, robust, adaptive and accurate optimization and learning, capable of supporting intelligent systems and autonomous agents with varying degrees of supervision, with applications ranging from industrial robotics to banking, mobility, defense, energy management and healthcare.