Workshop Program
The workshop will take place on 5 July, from 10:00 to 13:00, in the Trilussa room of the Pontificia Università Gregoriana.
You can see the full program here: Open program
Workshop Description
Establishing and upholding trust in AI systems is imperative as machine learning becomes an everyday commodity in our lives. The workshop “Forging Trust in Artificial Intelligence” brings together experts and researchers from diverse subfields to explore how transparency, explainability, fairness, and privacy collectively make machine learning trustworthy. By uniting experts across these pivotal disciplines, the workshop illuminates best practices that not only enhance the trustworthiness of AI but also reinforce its ethical foundations. Building on the success of last year’s workshop, which focused on explainability and security in vision AI, this year’s event shifts its attention to the challenges of managing sequential and multimodal data in NLP and IoT. These areas are at the forefront of human-AI interaction: large language models (LLMs) already shape how we access information and make decisions, while smart IoT systems increasingly define automated processes that directly affect not only industry but also our everyday lives.
This year’s workshop highlights explainability as a cornerstone of trustworthy AI in continuous systems. It tackles the complexities of understanding how neural networks operate, from demystifying the outputs of LLMs to clarifying automated decisions in continuous IoT environments, ultimately providing more tools for improving AI performance. Ensuring fairness and mitigating biases are also central themes, as these systems increasingly influence critical aspects of human life. By focusing on transparency and ethical practices, the ForgAI Workshop seeks to equip participants with actionable insights and methodologies for building AI systems that inspire confidence and align with societal values. Aligned with IJCNN’s mission to explore the latest advancements in neural networks, the workshop deepens the conversation around responsible AI development. It emphasizes that explainability is not just a technical challenge but a fundamental requirement for creating AI systems that serve as reliable and ethical partners in human-AI interactions.
The following list includes (but is not limited to) relevant topics that will be addressed within this workshop:
Explainability in ML and NLP Models
- Explainable Natural Language Processing Models
- Visual Explanations for Language Models
- Explainable Large Language Models
- Interpretable Architectures for Large Language Models
- Techniques for model transparency and interpretability
- Explainability in Multimodal Learning
- Privacy and Security
Explainability in AIoT (Artificial Intelligence in IoT)
- Interpretable Machine Learning Models for IoT Data
- Explainable Anomaly Detection in IoT Networks
- Explainability for Federated Learning in IoT
- Causal Inference and Explainability in IoT Applications
- Human-Centric Explainability in IoT
- Continual Learning and Explainability
- Explainable Predictive Maintenance
Algorithmic Fairness in Machine Learning
- Fairness Evaluation and Metrics in ML
- Ethical AI Development and Deployment
- Bias Mitigation in AI
- Fairness in Multimodal Learning
- Fairness in Collaborative Learning
- Transparency and Fairness in Federated Learning
- Explainable and Fair AI Models
Invited Speakers
Professor Vassilis Christophides, ENSEA/ETIS, France
Short Bio: Vassilis Christophides studied Electrical Engineering at the National Technical University of Athens (NTUA), graduating in 1988; he received his DEA in computer science from the University Paris VI in 1992, and his Ph.D. from the Conservatoire National des Arts et Métiers (CNAM) of Paris in 1996. In September 2020, he joined the École Nationale Supérieure de l’Électronique et de ses Applications (ENSEA), Cergy, as a Full Professor. Previously, he served in the Computer Science Department of the University of Crete for 16 years.
His main research interests span Machine Learning Systems, Data Science and Big Data Computing, Databases and Web Information Systems, as well as Digital Libraries and Scientific Systems. On these topics, he has published over 190 articles in top-tier journals and conferences. His research work has received more than 8,200 citations, with an h-index of 48 according to Google Scholar.
He was a recipient of the 2004 SIGMOD Test of Time Award and of several best paper awards at BDA (2021) and ISWC (2003, 2007, 2009). He has chaired conferences (General Chair of the EDBT/ICDT Conference in 2014; Area or Track Chair at KDD 2024 & 2025, ICDE 2016, SCC 2004, EDBT 2004), served on the program committees of numerous conferences (SIGMOD, VLDB, ICDE, EDBT, WWW, KDD, CIKM, etc.), and acted as a reviewer for several journals (CACM, TODS, TOIS, TOIT, VLDB Journal, TKDE, DPS, etc.). He has also been a keynote or invited speaker at conferences and summer schools (PODS 2003, HDMS 2004, ESWC Summer School 2013, WebST 2016, BDA Summer School 2018, GDR RO/IA Summer School 2023).
Invited Talk Title: Explainable-by-design Data Debugging in Artificial Intelligence of Things (AIoT).
Summary: The emerging Artificial Intelligence of Things (AIoT) aims to enable IoT processes to be executed and actions to be better reciprocated between people and machines through modern data analytics. AIoT has the potential to fundamentally transform the way people perceive and interact with the real world. One of the main challenges in realizing this vision concerns the quality of the data generated by IoT sensors. Traditional AI models are trained under the assumption of finite, closed collections of “perfect” data. However, in IoT data streams, the statistical properties of the serving data may differ from those used for training, while various forms of data imperfections may slip into the labels or the input features. Clearly, such data bugs degrade the performance of models and jeopardize the reliability of their outcomes.
In this presentation, we will introduce an explainable-by-design framework that allows us to detect and explain various types of data bugs encountered in IID and sequence datasets. Our framework essentially analyses the influence of samples on the decision boundary of a model. We argue that mislabeled, anomalous, drifted, or even poisoned samples have different influence signatures compared to “perfect” samples. We then propose several influence-based signals for identifying fine-grained forms of data bugs in IID and sequence datasets. Extensive experiments on various classification tasks demonstrate that our signals are robust across foundation models and models trained from scratch, as well as across different data modalities (image and tabular datasets).
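As a loose illustration of influence-style signals (a hypothetical sketch, not the speakers’ actual framework or code), the snippet below scores training samples by how well their per-example loss gradient aligns with the average gradient over the dataset; on a trained model, strongly misaligned samples often turn out to be mislabeled or anomalous. The toy linear model, data, and names are all assumptions made for the example.

```python
# Hypothetical influence-style data-bug signal: flag samples whose loss
# gradient disagrees with the dataset-wide consensus gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(10, 3)           # toy stand-in; in practice a trained model
X = torch.randn(100, 10)
y = torch.randint(0, 3, (100,))
y[:5] = (y[:5] + 1) % 3            # inject a few label "bugs" for illustration

def per_sample_grad(model, x, t):
    """Gradient of the loss w.r.t. the model parameters for one sample."""
    model.zero_grad()
    loss = F.cross_entropy(model(x.unsqueeze(0)), t.unsqueeze(0))
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

grads = torch.stack([per_sample_grad(model, X[i], y[i]) for i in range(len(X))])
consensus = grads.mean(dim=0)
# Alignment with the average gradient; strongly negative scores mark
# candidate data bugs worth inspecting.
scores = F.cosine_similarity(grads, consensus.unsqueeze(0))
print(scores.argsort()[:10])       # the ten most "suspicious" samples
```

Variants of this idea (e.g., self-influence, or influence on a held-out validation loss) can yield finer-grained signatures that help separate mislabeled from drifted or poisoned samples.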
Professor Anastasios Tefas, Aristotle University of Thessaloniki, Greece
Short Bio: Anastasios Tefas received the B.Sc. in informatics in 1997 and the Ph.D. degree in informatics in 2002, both from the Aristotle University of Thessaloniki, Greece. Since 2022 he has been a Professor at the Department of Informatics, Aristotle University of Thessaloniki. From 2008 to 2022, he served as a Lecturer, Assistant Professor, and Associate Professor at the same university. He is the director of the MSc program on Artificial Intelligence in the Dept. of Informatics. Prof. Tefas has coordinated 16 and participated in 20 research projects financed by national, private, and European funds. He was the Coordinator of the H2020 project OpenDR, “Open Deep Learning Toolkit for Robotics”. He has co-authored 160 journal papers and 300 papers in international conferences, and has contributed 17 chapters to edited books in his area of expertise. He has co-organized more than 15 workshops, tutorials, special sessions, and special issues, and has given more than 20 invited talks. He co-edited the book “Deep Learning for Robot Perception and Cognition”, Elsevier, 2022. Over 13,000 citations have been recorded to his publications, and his h-index is 55 according to Google Scholar. His current research interests include computational intelligence, deep learning, machine learning, data analysis and retrieval, computer vision, autonomous systems, and robotics.
Invited Talk Title: Trustworthiness in AI and Autonomous Systems
Summary: This talk explores the concept of trustworthiness in AI and autonomous systems, with a focus on deep learning-based models and their behavior in safety-critical contexts. We examine key attributes of trustworthy AI, including reliability, robustness, safety, transparency, and accountability, and the particular challenges posed by opaque, data-driven models. A central theme is how these systems handle unprecedented or out-of-distribution (OOD) inputs, which often arise in dynamic real-world environments and in autonomous systems that make decisions and take actions in proximity to humans. We discuss deep learning-based anomaly detection techniques such as autoencoders, variational methods, density estimation, and uncertainty quantification via deep learning models and Monte Carlo dropout. The talk also highlights the need for formal validation, interpretability, and ethical governance to ensure that AI systems not only perform accurately but also adapt, earn, and maintain human trust under uncertainty.
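One of the techniques mentioned above, uncertainty quantification via Monte Carlo dropout, is easy to sketch in code. The model, the number of passes, and the scoring choice below are illustrative assumptions, not the speaker’s implementation: dropout is kept active at inference, and the entropy of the averaged predictions serves as an uncertainty score for flagging potential OOD inputs.

```python
# Minimal Monte Carlo dropout sketch: repeat stochastic forward passes and
# use the entropy of the averaged prediction as an uncertainty score.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=16, hidden=64, classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_passes=30):
    """Average softmax outputs over stochastic passes with dropout enabled."""
    model.train()  # keeps dropout active; freeze batch-norm layers in practice
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    mean = probs.mean(dim=0)
    # Predictive entropy: high values indicate inputs the model is unsure
    # about, a common proxy for out-of-distribution data.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = MLP()                      # untrained toy model, for illustration only
x = torch.randn(8, 16)             # batch standing in for real sensor features
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(uncertainty)                 # threshold these scores to flag OOD inputs
```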
Paper Submission
We accept full or short paper submissions. Full paper submissions (up to 8 pages) will be considered for publication in the IJCNN 2025 proceedings on the IEEE Xplore Digital Library and will be orally presented during the workshop. Full papers must follow the same author guidelines (check here) as the papers of the main conference (double blind) and will be reviewed by three reviewers from our program committee. Short papers and extended abstracts (up to 4 pages) will be considered for oral or poster presentations at the workshop; however, these will not be included in the proceedings. Papers must be submitted through the IJCNN 2025 CMT System, by selecting the workshop name “International Workshop on Forging Trust in Artificial Intelligence 2025” in the Subject Area.
Submission Deadline: 27/03/2025 (extended from 20/03/2025)
Acceptance Notification: 15/04/2025
Organisers
Alexandros Iosifidis, Tampere University, Finland
Nistor Grozavu, Cergy Paris University/ETIS, France
Aikaterini Tzompanaki, Cergy Paris University/ETIS, France
Corina Besliu, Technical University of Moldova, Moldova
Nicoleta Rogovschi, Paris Descartes University, France
Program Committee
Hajer Baazaoui, CY Cergy Paris University/ETIS, France
Georgios Bouloukakis, Télécom SudParis/IP-Paris, France
Luis Galárraga, INRIA/IRISA, France
Apostolos Giannoulidis, Aristotle University of Thessaloniki, Greece
Michele Linardi, CY Cergy Paris University/ETIS, France
Illia Oleksiienko, Aarhus University, Denmark
Marina Papatriantafillou, Chalmers University of Technology, Sweden
Nikolaos Passalis, Aristotle University of Thessaloniki, Greece
Dimitris Sacharidis, Université Libre de Bruxelles, Belgium
Konstantinos Stefanidis, Tampere University, Finland
Sponsors
The workshop is organised with the support of the PANDORA European project and the BEVIAN company.