Spoke 3 of the FAIR project addresses a core challenge shared across modern AI research: real‑world data are often unstructured, noisy, incomplete, limited in quantity, and partially inconsistent. Improving the performance of AI systems therefore requires dedicated methodologies that ensure resilience and robustness when algorithms operate in the wild.
Our research activities focus on:
- developing data augmentation techniques for scenarios where available data are incomplete or not sufficiently representative (a minimal augmentation sketch follows this list);
- designing machine learning and deep learning models that remain robust against external attacks, including those arising from “poisoned” training data (a trimmed‑loss sketch follows this list);
- studying the implications for the design, validation, verification, evolution, and operation of software systems that incorporate learning algorithms and are deployed in real‑world conditions.
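As a minimal sketch of the kind of augmentation the first activity refers to, assuming a small tabular dataset: new samples are generated by adding feature‑scaled Gaussian jitter to existing ones. The function `jitter_augment` and its parameters are illustrative assumptions, not part of the project's codebase.

```python
import numpy as np

def jitter_augment(X, y, n_copies=2, noise_scale=0.05, seed=None):
    """Enlarge a small tabular dataset by adding Gaussian jitter.

    Each feature is perturbed proportionally to its own standard
    deviation, so heterogeneous feature scales are respected.
    (Hypothetical helper for illustration only.)
    """
    rng = np.random.default_rng(seed)
    # Per-feature noise scale, so augmentation respects feature units.
    scale = noise_scale * X.std(axis=0, keepdims=True)
    X_parts, y_parts = [X], [y]
    for _ in range(n_copies):
        X_parts.append(X + rng.normal(0.0, scale, size=X.shape))
        y_parts.append(y)  # labels are assumed unchanged by small jitter
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Example: triple a 100-sample dataset.
X = np.random.default_rng(0).normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
X_aug, y_aug = jitter_augment(X, y, n_copies=2, seed=1)
print(X_aug.shape, y_aug.shape)  # (300, 4) (300,)
```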
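For the second activity, one simple and well‑studied family of defenses against training‑set poisoning is trimmed‑loss estimation: fit the model, discard the samples with the largest residuals (the likely poisoned ones), and refit. The sketch below applies this idea in a linear regression setting; `trimmed_least_squares` and its parameters are hypothetical names for illustration, not the project's method.

```python
import numpy as np

def trimmed_least_squares(X, y, keep_frac=0.9, n_iters=10):
    """Least squares that iteratively ignores the highest-residual
    samples, limiting the influence of poisoned training points."""
    n = len(y)
    k = int(keep_frac * n)   # number of samples trusted per round
    idx = np.arange(n)       # start by trusting everything
    w = None
    for _ in range(n_iters):
        # Fit only on the currently trusted subset.
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        # Re-rank all samples by squared residual and keep the best k.
        residuals = (X @ w - y) ** 2
        idx = np.argsort(residuals)[:k]
    return w

# Example: 10% of targets are poisoned with a large offset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
y[:20] += 10.0               # poisoned labels
w_plain, *_ = np.linalg.lstsq(X, y, rcond=None)
w_robust = trimmed_least_squares(X, y, keep_frac=0.85)
print(np.abs(w_plain - w_true).max(), np.abs(w_robust - w_true).max())
```

On the synthetic example above, the plain fit is pulled toward the poisoned labels while the trimmed fit recovers coefficients close to the true ones, which is the behavior such defenses aim for.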
