
Value-Aware AI (VALAWAI)
Contributors

Giulio Prevedello
Research Associate

Pietro Gravino
Researcher

Martina Galletti
Assistant Researcher

Emanuele Brugnoli
Research Associate (Sony CSL - Rome)
— Abstract
Understanding the moral values embedded in user-generated content is essential for building AI systems that interpret and engage with human discourse, especially in polarized, emotionally charged spaces like social media. This project pioneers value-aware AI: systems capable of detecting and reasoning about the moral undercurrents driving online conversations.
We develop cutting-edge recommender systems that foster pro-social behavior while maintaining high levels of engagement, and in doing so we define a theoretical framework for assessing the impact of online actions and interventions. By aligning algorithmic decisions with human values, we seek to build tools that turn online platforms into digital spaces for more constructive conversations. Our ultimate goal is to embed moral reasoning into AI: systems that can decode, respect, and adapt to the fabric of human expression.
— Context
Values are the invisible architecture of human behavior. They shape how we judge, choose, and act, from our consumer habits to our political beliefs. Yet, traditional AI systems often ignore this foundational layer. This research bridges that gap, providing computational tools that can sense and respond to human values in real-world digital environments.
In an era dominated by autonomous systems, from social robots to content moderators, embedding moral awareness is no longer optional. This challenge is particularly pressing on social media, a domain where value-laden discourse, identity signaling, and moral confrontation collide.
Social media platforms such as Twitter and Facebook are not only channels for information sharing; they are also spaces where users express personal beliefs and moral viewpoints. These interactions often reflect clashing value preferences that can drive polarization. By analyzing the moral content within user-generated posts, we aim to better understand the underlying causes of disagreement online. Our work focuses on developing AI tools that can detect and interpret these moral signals, with the goal of supporting more respectful and constructive digital communication.
— Methodology
Our approach is grounded in Moral Foundations Theory (MFT), which posits that human morality spans five key dimensions: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. MFT offers a universal and cross-cultural lens for computational modeling, enabling scalable analysis of moral cues in massive text datasets.
We implement both supervised and unsupervised techniques:
- Supervised deep learning models trained on annotated corpora offer high performance but come with limitations: annotation costs, cultural bias, and generalization issues across contexts.
- Unsupervised, frame-based approaches (notably FrameAxis and Fluid Construction Grammar) are scalable and adaptable across languages and cultures. They allow us to extract moral signals from text without relying on predefined labels; a minimal sketch of the FrameAxis computation follows this list.
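To make the unsupervised path concrete, the sketch below reproduces the core FrameAxis computation: a moral axis is the difference between the mean embeddings of two pole word sets, and a document's bias is the average similarity of its words to that axis. This is a minimal illustration assuming pre-trained word embeddings (e.g. GloVe) loaded into a dict `emb`; the seed words are placeholders, not the project's actual lexicon.

```python
import numpy as np

def axis_vector(vice_words, virtue_words, emb):
    """Moral axis = mean(virtue pole embeddings) - mean(vice pole embeddings)."""
    virtue = np.mean([emb[w] for w in virtue_words if w in emb], axis=0)
    vice = np.mean([emb[w] for w in vice_words if w in emb], axis=0)
    return virtue - vice

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def document_bias(tokens, axis, emb):
    """Average cosine similarity of a document's words to the axis:
    positive leans toward the virtue pole, negative toward the vice pole."""
    sims = [cosine(emb[t], axis) for t in tokens if t in emb]
    return float(np.mean(sims)) if sims else 0.0

# Hypothetical usage with a care/harm axis built from a few seed words:
# care_axis = axis_vector(["harm", "hurt"], ["care", "protect"], emb)
# bias = document_bias("refugees deserve care and protection".split(), care_axis, emb)
```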
These tools form the backbone of a value detection pipeline, suitable for downstream applications that extend beyond the scope of this project.
Moral value detectors:
- A deep learning classifier for MFT dyad detection in tweets (a usage sketch follows this list)
- A semantic frame analysis component, based on Fluid Construction Grammar (FCG), within a pipeline that maps frames to moral values
- Supervised and unsupervised solutions based on FrameAxis methodology
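As a usage sketch for the first detector, the model released by the project (linked in the Links section) can be queried with the Hugging Face transformers library. We assume here that it is a standard sequence-classification checkpoint; the label set and exact output format are whatever the checkpoint defines.

```python
from transformers import pipeline

# Italian-language classifier for moral values in immigration-related tweets,
# released by the project (see the Links section).
classifier = pipeline(
    "text-classification",
    model="brema76/moral_immigration_it",
    top_k=None,  # return a score for every label, not just the argmax
)

# Illustrative tweet: "Migrants deserve care and protection."
result = classifier("I migranti meritano cura e protezione.")
print(result)  # per-label scores, as defined by the checkpoint
```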
Key findings:
- Moral-aware clustering of user accounts outperforms standard approaches by uncovering fine-grained ideological communities (an illustrative sketch follows this list)
- These clusters align with political orientation and are significantly influenced by out-group bias, mirroring findings in social science literature
- Results are visualized via TAIWA, our interactive platform for exploring moral content in social media datasets
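The first finding can be illustrated with a hedged sketch: each account is summarized by a five-dimensional profile of mean MFT scores over its posts (which any of the detectors above could supply) and then clustered. The data, cluster count, and algorithm choice below are illustrative, not those of the study.

```python
import numpy as np
from sklearn.cluster import KMeans

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

# Toy data: rows are users, columns are mean scores per moral foundation.
rng = np.random.default_rng(0)
profiles = rng.random((100, len(FOUNDATIONS)))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

# Each centroid reads as the moral "signature" of a community.
for k, centroid in enumerate(kmeans.cluster_centers_):
    dominant = FOUNDATIONS[int(np.argmax(centroid))]
    print(f"cluster {k}: dominant foundation = {dominant}")
```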
Our work also advances both machine awareness, through integration of moral value detection into recommender simulations, and human awareness, by promoting ethical literacy around the diversity of values expressed in digital public spheres.
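On the machine-awareness side, the essential integration step can be pictured as a value-aware reranking inside the simulation: candidate posts are scored by a blend of predicted engagement and a pro-social moral signal. The sketch below is purely illustrative; the scores, blend weight, and objective are hypothetical, not the project's actual recommender.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float    # predicted interaction probability (hypothetical)
    prosociality: float  # e.g. care/fairness signal from a moral detector

def value_aware_rank(posts: list[Post], alpha: float = 0.3) -> list[Post]:
    """Blend engagement with a pro-social signal; alpha=0 recovers a pure
    engagement ranker."""
    def score(p: Post) -> float:
        return (1 - alpha) * p.engagement + alpha * p.prosociality
    return sorted(posts, key=score, reverse=True)

feed = value_aware_rank([
    Post("outrage bait", engagement=0.9, prosociality=0.1),
    Post("constructive reply", engagement=0.7, prosociality=0.8),
])
print([p.text for p in feed])  # the constructive reply ranks first at alpha=0.3
```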
— Links
Paper links:
https://ceur-ws.org/Vol-3473/paper51.pdf
https://doi.org/10.1007/s41109-024-00643-1
https://aclanthology.org/2025.coling-main.133/
https://doi.org/10.5220/0012595000003636
https://doi.org/10.5220/0012596000003636
https://doi.org/10.1007/978-3-031-85463-7_4
https://doi.org/10.1007/978-3-031-58202-8_5
Project’s GitHub: https://valawai.github.io/docs/
Deep learning model for moral value classification: https://huggingface.co/brema76/moral_immigration_it
Project’s Website: https://valawai.eu/
Twitter account: https://twitter.com/ValawaiEU