
Progress needs Reflection: Introducing our Interdisciplinary Colloquia

FRIAS provides space for the discussion of the broader impact and ethical implications of scientific innovation.

For better or worse, new technologies shape the world in fundamental ways, so an ongoing debate on their ethical implications is imperative for scientists breaking new ground. This topic is thus a fitting subject for the FRIAS Interdisciplinary Colloquium, offered three to four times per academic year as a space for current research fellows from all disciplines to reflect on issues of general relevance for science.

Such ethical questions are also at the core of the current FRIAS research focus “Responsible AI: Normative Aspects of the Interaction of Humans and Intelligent Systems”. Since October 2018, a team of four at the Freiburg Institute for Advanced Studies (FRIAS) has been researching the social, legal, ethical and technical challenges arising from the interaction between humans and autonomous, intelligent systems. Two members of this group recently sparked a debate on legal and philosophical aspects of a highly controversial technological innovation, the implementation of artificial intelligence in (semi-)autonomous weapon systems.

In her presentation in the weekly Humanities and Social Sciences Colloquium (HUMSS) on December 17, Silja Vöneky, professor of International Law at the University of Freiburg, discussed “Autonomous Weapons and International Law”. Vöneky spelled out the ill-defined regulatory status quo of Lethal Autonomous Weapon Systems: such innovative weapon systems are not covered by the current Laws of War and require a concerted international effort to be governed effectively, but consensus on the degree and kind of regulation has yet to be reached. Her talk raised crucial issues underlying possible legislative approaches: To what degree can, or should, modern warfare rely on lethal weapons controlled mainly or solely by artificial rather than human intelligence? Are semi-autonomous weapons perhaps even suited to reducing the human cost of armed conflicts, making their use a moral duty? Or should they, like biological and chemical weapons before them, be prohibited altogether via international conventions because of their potentially unforeseeable consequences?

In his talk introducing the Interdisciplinary Colloquium on January 14, Oliver Müller, professor of Philosophy at the University of Freiburg, also built on the example of automated warfare to raise ethical questions of a more universal nature. He pursues this objective both as principal investigator in the research focus “Responsible AI” and as part of the Freiburg-PennState Project Group “Philosophy in the Age of the New Wars”. Müller reflected on the problematic implications of the notions of artificial agency and machine ethics: How can (semi-)autonomous technology be programmed to account for ethical principles? And who decides on these principles in the first place? Are moral decisions based on algorithmic preferences perhaps even superior to human decisions driven by emotion and empathy? And who bears ultimate responsibility for the outcome in cases of hybrid agency? Given the rapidly growing importance of artificial intelligence in critical domains such as medicine and health care or autonomous driving, these issues will occupy researchers for years to come.

The colloquium feeds into the broader debate on the reflection of science and technology hosted at FRIAS. This reflection is on its way to being institutionalized in the Freiburg Network on Ethical, Legal and Social Aspects of Science and Technology (FELSA). Dedicated events and formats are planned, so keep an eye on our event schedule or subscribe to our monthly newsletter FRIAS EXPRESS.


2019/02/15