Professors

Polat Goktas (Sabancı University)

Schedule


Course Description
Artificial Intelligence (AI) is transforming decision-making across nearly every domain of contemporary society, including healthcare, education, finance, environmental governance, urban systems, and public administration. As AI systems increasingly operate across national, cultural, and regulatory boundaries, the key challenge is no longer only technical performance, but the ability to ensure that these systems are trustworthy, transparent, fair, and aligned with human values.
This interdisciplinary course explores how AI can be designed, evaluated, and governed responsibly in a globalized world. The course brings together perspectives from computer science, ethics, law, public policy, sustainability studies, and the social sciences in order to understand the societal implications of algorithmic decision-making. It is designed for students from diverse academic backgrounds and does not require prior technical knowledge.
The course examines the concept of trustworthy AI, a framework increasingly adopted by international organizations, governments, and research institutions to guide the responsible development of intelligent systems. Students will explore how issues such as bias, opacity, accountability, and environmental impact affect the legitimacy of AI technologies, and how different societies respond to these challenges through regulatory, ethical, and institutional approaches.

The course is structured around four thematic modules:

  1. Understanding AI in Society: Introduction to AI, algorithmic decision-making, and the social implications of automation.
  2. Ethics, Bias, and Human-Centered Design: Fairness, transparency, explainability, epistemic risk, and the role of human oversight in AI systems.
  3. Global Governance of AI: Comparative analysis of regulatory frameworks including the European Union AI Act, United States policy approaches, Asian governance models, and emerging international standards.
  4. Sustainability, Responsibility, and Future Directions: Environmental footprint of AI; digital sustainability; AI in climate governance, healthcare, and public decision-making; and the role of science diplomacy in global cooperation.

A central component of the course is collaborative, project-based learning. Students will work in multinational and interdisciplinary teams to develop a Trustworthy AI Framework for a selected application domain, such as healthcare diagnostics, smart cities, environmental monitoring, or educational technologies. This project reflects the intercultural and interdisciplinary environment of Venice International University and encourages students to consider how technological solutions must adapt to different social and institutional contexts.

By the end of the course, students will be able to critically evaluate AI systems from ethical, governance, and sustainability perspectives, and to communicate complex technological issues to diverse audiences. The course directly supports the mission of Venice International University by fostering globally minded, socially responsible students capable of engaging with complex challenges in an interconnected world.

Description of the Virtual Component
The preliminary virtual component (July 1 - August 3) will prepare students for the intensive on-campus program by introducing key concepts and fostering early interaction among participants. The virtual phase is designed to support intercultural dialogue, collaborative learning, and conceptual preparation for the group project.

Live Online Session (Zoom - 2 hours)

  • Introduction to course objectives, structure, and expectations
  • Overview of global challenges related to Artificial Intelligence
  • Introduction to the concept of trustworthy AI
  • Formation of interdisciplinary and multinational project groups
  • Explanation of assessment methods and final project

Asynchronous Activities (Moodle + Slack)
Students will complete a set of preparatory activities before the on-campus session:

  • Assigned readings on AI ethics, governance, and sustainability
  • Two short recorded lectures introducing core concepts
  • A short written reflection (approximately 500 words):
    • What constitutes trustworthy AI in different cultural and regulatory contexts?
  • Participation in moderated online discussions based on the readings

To encourage active and informal academic exchange, short discussion meetings will be organized using Slack Huddle sessions, where students will discuss their reading assignments in small international groups (moderated by the instructor). These discussions will allow students to compare perspectives from different cultural, disciplinary, and institutional backgrounds, and will help prepare them for collaborative work during the on-campus program.
The virtual component ensures that students arrive at the Summer Session with a shared conceptual foundation and an established working group, allowing for more effective collaboration during the intensive four-week course.


Learning Outcomes
Upon successful completion of the course, students will be able to:

  1. Explain the basic concepts of artificial intelligence and algorithmic decision-making.
  2. Identify ethical risks related to bias, opacity, and lack of accountability.
  3. Compare major international AI governance frameworks.
  4. Evaluate the social and environmental impact of AI technologies.
  5. Develop structured approaches for assessing trustworthy AI.
  6. Work effectively in interdisciplinary and multinational teams.
  7. Communicate complex technological and policy issues clearly.


Teaching and Assessment Methods 
Teaching will be based on an interactive and interdisciplinary approach that encourages intercultural dialogue, critical thinking, and collaborative learning. Given the multicultural structure of the VIU Summer Session, students will be intentionally grouped to ensure diversity in nationality, academic background, and level of study, allowing them to benefit from multiple perspectives throughout the course.

Instruction will combine short lectures with discussion-based and practice-oriented activities, including:

  • Case-based analysis of real-world AI applications
  • Structured debates on ethical, societal, and policy-related challenges
  • Simulation exercises exploring decision-making in complex technological contexts
  • Guided sessions for the development of group projects
  • Continuous feedback sessions to support project progress and learning outcomes

Collaborative work will play a central role in the learning process. Students will work in multinational teams to examine the societal implications of AI and to develop a structured framework for evaluating trustworthy AI in a selected application domain.

Assessment Structure
Marking Scheme
Participation in Discussions, Case Analyses, Workshops, and Group Activities (20%)
Midterm In-class Exam (20%)
Individual Policy Brief Assignments (20%)
Final Group Project (Trustworthy AI Framework) (40%)

Participation in Discussions, Case Analyses, Workshops, and Group Activities (20%)

Participation will be evaluated continuously throughout the four-week session and will include:

  • Active participation in weekly class discussions (10%)
  • Contribution to case-based exercises conducted during Weeks 1-4 (5%)
  • Engagement in collaborative group activities during Weeks 2-4 (5%)

Students are expected to attend all classes and actively contribute to discussions, practical exercises, and group work.

Midterm In-class Exam (20%)
The midterm exam will be conducted in Week 2 of the course.

  • One in-class written exam (20%)
  • Format: short-answer and analytical questions
  • Coverage: core concepts of artificial intelligence, ethics, governance, and sustainability introduced during the first half of the course

The midterm grade will reflect the student’s overall progress at mid-session.

 

Individual Policy Brief Assignments (20%)

Students will complete two individual policy brief assignments, submitted during the course:

  • Policy Brief 1 - Week 2 (10%)
  • Policy Brief 2 - Week 3 (10%)
  • Length: approximately 800-1000 words each

Each policy brief will require students to analyze a real-world issue related to AI, ethics, or governance, and to propose recommendations based on concepts discussed in class.

Final Group Project: Trustworthy AI Framework (40%)
Students will work in multinational and interdisciplinary teams to develop a Trustworthy AI Framework for a selected application domain (e.g., healthcare, smart cities, environmental governance, education, or public policy; project topics will be provided by the instructor).

The final project will include:

  • Group project proposal / concept note - Week 2 (10%)
  • Final written framework report - Week 4 (15%)
  • Final group presentation - Week 4 (15%)

Evaluation will consider analytical depth, integration of ethical and governance perspectives, clarity of the framework, originality, and effectiveness of teamwork.

 

Bibliography
• Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities. Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001
• Russell, S. (2019). Human compatible: AI and the problem of control. Penguin UK.
• European Commission (2023). EU AI Act Proposal. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai and https://eur-lex.europa.eu/eli/reg/2024/1689/oj
• UNESCO (2022). Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence and https://unesdoc.unesco.org/ark:/48223/pf0000381137_eng
• Rossi, F. (2022). Book review: Atlas of AI, by Kate Crawford (Yale University Press, 2021). https://doi.org/10.1016/j.artint.2022.103767
• Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2

(All readings will be provided in English.)

 

 

Last updated: March 16, 2026

 
