Course Description
Artificial Intelligence (AI) is transforming decision-making across nearly every domain of contemporary society, including healthcare, education, finance, environmental governance, urban systems, and public administration. As AI systems increasingly operate across national, cultural, and regulatory boundaries, the key challenge is no longer only technical performance, but the ability to ensure that these systems are trustworthy, transparent, fair, and aligned with human values.
This interdisciplinary course explores how AI can be designed, evaluated, and governed responsibly in a globalized world. The course brings together perspectives from computer science, ethics, law, public policy, sustainability studies, and the social sciences in order to understand the societal implications of algorithmic decision-making. It is designed for students from diverse academic backgrounds and does not require prior technical knowledge.
The course examines the concept of trustworthy AI, a framework increasingly adopted by international organizations, governments, and research institutions to guide the responsible development of intelligent systems. Students will explore how issues such as bias, opacity, accountability, and environmental impact affect the legitimacy of AI technologies, and how different societies respond to these challenges through regulatory, ethical, and institutional approaches.
The course is structured around four thematic modules:
A central component of the course is collaborative, project-based learning. Students will work in multinational and interdisciplinary teams to develop a Trustworthy AI Framework for a selected application domain, such as healthcare diagnostics, smart cities, environmental monitoring, or educational technologies. This project reflects the intercultural and interdisciplinary environment of Venice International University and encourages students to consider how technological solutions must adapt to different social and institutional contexts.
By the end of the course, students will be able to critically evaluate AI systems from ethical, governance, and sustainability perspectives, and to communicate complex technological issues to diverse audiences. The course directly supports the mission of Venice International University by fostering globally minded, socially responsible students capable of engaging with complex challenges in an interconnected world.
Description of the Virtual Component
The preliminary virtual component (July 1 - August 3) will prepare students for the intensive on-campus program by introducing key concepts and fostering early interaction among participants. The virtual phase is designed to support intercultural dialogue, collaborative learning, and conceptual preparation for the group project.
Live Online Session (Zoom - 2 hours)
Asynchronous Activities (Moodle + Slack)
Students will complete a set of preparatory activities before the on-campus session:
To encourage active and informal academic exchange, short discussion meetings will be held as Slack huddles, in which students discuss their reading assignments in small international groups moderated by the instructor. These discussions will allow students to compare perspectives from different cultural, disciplinary, and institutional backgrounds, and will help prepare them for collaborative work during the on-campus program.
The virtual component ensures that students arrive at the Summer Session with a shared conceptual foundation and an established working group, allowing for more effective collaboration during the intensive four-week course.
Learning Outcomes
Upon successful completion of the course, students will be able to:
Teaching and Assessment Methods
Teaching will be based on an interactive and interdisciplinary approach that encourages intercultural dialogue, critical thinking, and collaborative learning. Given the multicultural structure of the VIU Summer Session, students will be intentionally grouped to ensure diversity in nationality, academic background, and level of study, allowing them to benefit from multiple perspectives throughout the course.
Instruction will combine short lectures with discussion-based and practice-oriented activities, including:
Collaborative work will play a central role in the learning process. Students will work in multinational teams to examine the societal implications of AI and to develop a structured framework for evaluating trustworthy AI in a selected application domain.
Assessment Structure
Marking Scheme
Participation in Discussions, Case Analyses, Workshops, and Group Activities (20%)
Midterm In-class Exam (20%)
Individual Policy Brief Assignments (20%)
Final Group Project (Trustworthy AI Framework) (40%)
Participation in Discussions, Case Analyses, Workshops, and Group Activities (20%)
Participation will be evaluated continuously throughout the four-week session and will include:
Students are expected to attend all classes and actively contribute to discussions, practical exercises, and group work.
Midterm In-class Exam (20%)
The midterm exam will be conducted in Week 2 of the course.
The midterm grade will reflect the student’s overall progress at mid-session.
Individual Policy Brief Assignments (20%)
Students will complete two individual policy brief assignments, submitted during the course:
Each policy brief will require students to analyze a real-world issue related to AI, ethics, or governance, and to propose recommendations based on concepts discussed in class.
Final Group Project: Trustworthy AI Framework (40%)
Students will work in multinational and interdisciplinary teams to develop a Trustworthy AI Framework for a selected application domain (e.g., healthcare, smart cities, environmental governance, education, or public policy; project topics will be provided by the instructor).
The final project will include:
Evaluation will consider analytical depth, integration of ethical and governance perspectives, clarity of the framework, originality, and effectiveness of teamwork.
Bibliography
• Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001
• Russell, S. (2019). Human compatible: AI and the problem of control. Penguin UK.
• European Commission (2023). EU AI Act Proposal. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai; full text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
• UNESCO (2022). Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence; full text: https://unesdoc.unesco.org/ark:/48223/pf0000381137_eng
• Rossi, F. (2022). Book review: Atlas of AI, by Kate Crawford (Yale University Press, 2021). Artificial Intelligence. https://doi.org/10.1016/j.artint.2022.103767
• Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
(All readings will be provided in English.)
Last updated: March 16, 2026