TP3: Learning and Reasoning from Individual to Communities to Society

PIs:

  • Fosca Giannotti (Scuola Normale Superiore)
  • Bruno Lepri (Fondazione Bruno Kessler)

Spoke 1: Human-centered AI
Spoke 2: Integrative AI
Spoke 6: Symbiotic AI
Spoke 8: Pervasive AI
Spoke 9: Green-aware AI
Spoke 10: Bio-socio-cognitive AI

Transversal Project 3 coordinates the activities of the WPs aimed at studying methods for the integration of learning and reasoning at multiple scales (individuals, communities, society).

At a small scale, the goal is explainable AI for synergistic human-AI collaboration and "human-in-the-loop" co-evolution of human decision making and machine learning models.

From the human perspective, (X)AI-assisted decision making should empower reasoning, trigger rational thinking through cognitive stimuli for more aware decision making, help discover and cope with bias, and recognize cases that are under-represented in historical data (e.g., cases pertaining to minorities) and therefore require specific attention.

From the machine perspective, novel "Socratic", self-aware ML models should be devised that "know what they don't know": models capable of recognizing when and why a new case belongs to the data distribution underlying the training set or falls outside it, of explaining both their suggestions and the reasons why they prefer to defer a decision to the human, and of modulating or deferring their decisions according to their level of confidence.
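As a purely illustrative sketch of this "defer when unsure" behaviour (not a method prescribed by the project), the following Python snippet pairs a standard classifier with an out-of-distribution detector and abstains whenever the input looks unlike the training data or the predicted probability falls below a hypothetical threshold.

    # Minimal sketch of confidence-based deferral; threshold and models are hypothetical.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import IsolationForest, RandomForestClassifier
    from sklearn.model_selection import train_test_split

    CONF_THRESHOLD = 0.75  # hypothetical confidence cut-off for deferral

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    ood_detector = IsolationForest(random_state=0).fit(X_train)  # flags points unlike the training data

    def predict_or_defer(x):
        """Return (decision, reason): a predicted label, or None with a deferral reason."""
        x = x.reshape(1, -1)
        confidence = clf.predict_proba(x).max()
        in_distribution = ood_detector.predict(x)[0] == 1  # -1 means "looks anomalous"
        if not in_distribution:
            return None, "defer: case appears outside the training distribution"
        if confidence < CONF_THRESHOLD:
            return None, f"defer: confidence {confidence:.2f} below threshold"
        return int(clf.predict(x)[0]), f"predict with confidence {confidence:.2f}"

    decision, reason = predict_or_defer(X_test[0])
    print(decision, "-", reason)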

At a large scale, the goal is modeling complex, large-scale AI social systems made of interacting people and intelligent machines/AI assistants, with a focus on collective emergent phenomena. The network effects of AI and their impact on socio-technical systems (STS) are not sufficiently addressed by AI research, not least because they require a step forward in the trans-disciplinary integration of AI, data science, network science and complex systems with the social sciences. How can we understand and mitigate harmful outcomes? How can we design "social AI / collaborative AI / cooperative AI" mechanisms that help reach agreed collective outcomes, such as sustainable mobility in cities, diversity and pluralism in the public debate, and fair distribution of resources?

As increasingly complex socio-technical systems made of people and intelligent machines emerge, the social dimension of AI becomes more and more evident. In principle, AI could empower communities to face complex societal challenges; alternatively, it can create further vulnerabilities and exacerbate problems such as bias, inequality, polarization, segregation, and the depletion of social goods. The development of social and cooperative AI involves scenarios in which many humans and artificial agents must coordinate and cooperate (possibly in decentralised ways), well beyond the current state of the art of multi-agent systems.
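As a toy, hypothetical illustration of such network effects (not a model developed within the project), the sketch below simulates agents whose opinions shift slightly after each pairwise interaction, and compares two matching policies: random matching versus a similarity-maximizing "recommender". The similarity-based policy tends to preserve clusters of opinion, a crude analogue of polarization; all parameters and dynamics are illustrative only.

    # Toy agent-based sketch: how a matching policy can shape a collective outcome.
    import numpy as np

    def simulate(recommender, n_agents=200, n_steps=5000, step_size=0.1, seed=0):
        rng = np.random.default_rng(seed)
        opinions = rng.uniform(0, 1, n_agents)
        for _ in range(n_steps):
            i = rng.integers(n_agents)
            j = recommender(i, opinions, rng)
            # both agents move slightly toward each other after the interaction
            delta = step_size * (opinions[j] - opinions[i])
            opinions[i] += delta
            opinions[j] -= delta
        return opinions

    def random_peer(i, opinions, rng):
        return (i + rng.integers(1, len(opinions))) % len(opinions)

    def most_similar_peer(i, opinions, rng):
        distances = np.abs(opinions - opinions[i])
        distances[i] = np.inf  # exclude self
        return int(np.argmin(distances))

    print("opinion spread, random matching:    ", np.std(simulate(random_peer)))
    print("opinion spread, similarity matching:", np.std(simulate(most_similar_peer)))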