Reflecting on AI in society

Artificial intelligence is part of our everyday life, a driver of technical innovation, and increasingly a part of education. Intelligent systems are often assumed to make objective decisions, but numerous examples demonstrate how such systems adopt our stereotypes and (un-)conscious biases. We examine emerging ethical and societal implications within the Interchange Forum for Reflecting on Intelligent Systems (IRIS) and the “Platform for Reflection” of the DFG-funded Cluster of Excellence “Data-Integrated Simulation Science” (EXC 2075). Ongoing projects in this area address the cognitive mechanisms underlying reflection, algorithm aversion and appreciation, automation bias, and human trust.

Modeling effects of using artificial agents in organizations (2023 – present)
An ongoing postdoc project funded by the Cluster of Excellence “Data-Integrated Simulation Science” (EXC 2075) adopts a systems thinking perspective to investigate the dynamics underlying the use of artificial agents in organizational contexts. In collaboration with the Organizational Leadership & Diversity research group at the Max Planck Institute for Intelligent Systems, we focus on effects such as algorithm aversion, algorithm appreciation, automation bias, and human trust.

Cognitive mechanisms of trust in AI (2022 – present)
A dissertation project affiliated with the SimTech Graduate Academy focuses on cognitive factors that build or diminish trust in AI. In particular, we consider aspects such as algorithm aversion and appreciation, cognitive dissonance, and (un-)conscious biases. As a first step, we examined the effects of discursive interactions with a chatbot as part of a public engagement project. After experimentally investigating factors that potentially influence such interactions (e.g., emotions or cognitive biases), system dynamics modeling will serve as a means to formally decompose and model trust in AI.
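
As a purely illustrative sketch (the names, parameters, and dynamics below are assumptions for illustration, not the project's actual model), the following Python snippet shows what a minimal system dynamics treatment of trust could look like: trust is modeled as a stock that builds up through positive interactions with a system and erodes when visible errors occur.

```python
# Hypothetical, minimal system dynamics sketch of trust in AI.
# Trust is a stock in [0, 1]; positive interactions feed an inflow,
# visible system errors trigger an outflow. All parameter values are
# illustrative assumptions.
import random


def simulate_trust(steps=100, dt=1.0, initial_trust=0.5,
                   gain=0.04,       # trust gained per step of successful use
                   error_prob=0.1,  # chance of a visible error per step
                   erosion=0.3):    # share of trust lost after an error
    trust = initial_trust
    trajectory = [trust]
    for _ in range(steps):
        inflow = gain * (1.0 - trust)  # growth toward the ceiling of 1.0
        outflow = erosion * trust if random.random() < error_prob else 0.0
        trust = min(max(trust + dt * (inflow - outflow), 0.0), 1.0)
        trajectory.append(trust)
    return trajectory


if __name__ == "__main__":
    print(simulate_trust(steps=20))
```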

Mechanisms of reflective learning (2019 – 2023)
Reflective learning refers to humans’ ability to introspectively examine their own learning process. It proceeds through sequential learning episodes that iteratively re-evaluate the trustworthiness of the experiences and information gained in each learning step, so that potential future difficulties can be faced more resiliently. Applying Socratic questions in the form of metacognitive prompts has already been shown to result in a more efficient investment of limited cognitive resources and, subsequently, more sustainable performance. We investigate the cognitive mechanisms underlying reflective learning with such prompts, particularly (but not exclusively) in the context of interacting with intelligent systems.

Related publications

  • Ðula, I., Berberena, T., Keplinger, K., & Wirzberger, M. (2023). Conceptualizing responsible adoption of artificial agents in the workplace: A systems thinking perspective. In 20th Conference of the Italian Chapter of AIS (Association for Information Systems) (21). Best Paper Award.
  • Ðula, I., Berberena, T., Keplinger, K., & Wirzberger, M. (2023). Hooked on artificial agents: A simulation study. In International Conference on Data-Integrated Simulation Science (SimTech2023). University of Stuttgart.
  • Ðula, I., Berberena, T., *Keplinger, K., & *Wirzberger, M. (2023). Hooked on artificial agents: A systems thinking perspective. Frontiers in Behavioral Economics, 2, 1223281. https://doi.org/10.3389/frbhe.2023.1223281 (*Authors share last authorship)
  • Becker, F., Wirzberger, M., Pammer-Schindler, V., Srinivas, S., & Lieder, F. (2023). Systematic metacognitive reflection helps people discover far-sighted decision strategies: A process-tracing experiment. Judgment and Decision Making, 18, E15. https://doi.org/10.1017/jdm.2023.16
  • Berberena, T., & Wirzberger, M. (2022). Embedding reflective learning opportunities in teaching about intelligent systems. In G. Lapesa, L. Erhard, C. Runstedler, & S. Kaiser (Eds.), Reflection on intelligent systems: Towards a cross-disciplinary definition. University of Stuttgart. [PDF]
  • Berberena, T., & Wirzberger, M. (2021). Bringing Thiagi to the classroom: Reducing stereotype-threat by promoting reflection in CRT. In EARLI 2021 – Online. [PDF]
  • Berberena, T., & Wirzberger, M. (2021). Unveiling unconscious biases and stereotypes in students: The necessity of self-reflection in Higher Education. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Meeting of the Cognitive Science Society (p. 3488). Cognitive Science Society. [ABSTRACT]