
AI at Umeå University - A Cross-Faculty Collaboration

Time Thursday 27 October, 2022 at 10:30 - 12:00
Place Universitetsklubben

AI and its applications and consequences are of great interest to Umeå University. In 2021, eight seed projects were funded to strengthen and support promising ideas in research and outreach. These interdisciplinary projects across faculty boundaries are meant to serve as a starting point for future collaborations in AI within TAIGA. The projects were allocated SEK 100,000 each and six of them will present their results in this session.

Welcome to hear the following presentations:

10.30: AutogrAIde Hackathons for Interdisciplinary Education and AI for Grading 

Loïs Vanhée, Associate professor and TAIGA co-director

“Can AI be used for grading and assessment? If yes, how?” This was the challenge to which students and professionals across all disciplines and levels proposed solutions in the AutogrAIde hackathon, organized in January 2022. The results are very exciting: besides offering insight into student projects and implicit assumptions about AI and education practices, the approach proved to be a success for organizing university-wide transdisciplinary education around AI topics. Interested? We are recruiting!

10.45: cAIm & ASALL - Exploring human-AI interactions

Mikael Wiberg, Professor at the Department of Informatics

In this talk, I will present two projects, cAIm & ASALL, with a shared focus on human-AI interactions. I will focus on human-AI co-performance, trust, and division of labor, and I will discuss inclusive design and distribution of responsibility in human-AI entanglements.

11.00: What Causes Humans to Abandon Established Beliefs, and When Should They?

Linus Holm, Associate Professor at the Department of Psychology 

In an idealized world, our beliefs would be grounded in evidence and revised when better information becomes available. But we are regularly confronted with information that is ambiguous, only contextually true, or even deliberately falsified, and sorting through these cases is extremely challenging. Sometimes well-established beliefs are rejected in favor of ideas that constitute major scientific breakthroughs, but well-established beliefs may also be rejected in favor of fringe beliefs and conspiracies. For both AI and human intelligence, the core question remains: when should strong beliefs be revised or abandoned in the face of conflicting information, and what happens to the old beliefs? Beliefs should be revised in the face of better information, but the core problem is how an agent can assess new information as better in value and credibility: if the new information conflicts with an established belief, it should subjectively appear incorrect. This core problem of belief revision is well known in knowledge representation but has not produced satisfying models. We argue that our beliefs are not the key basis for evaluating the credibility of new information: rather, we preserve episodic groundings for our beliefs and assess the coherence of new information in that setting. We have used these ideas to derive models of rational knowledge updating and have tested what compels humans to abandon old beliefs in experiments manipulating prior knowledge and prediction error rates. Here we present preliminary results from this work.

11.15: Detecting Inter Presence is Hard: A “Where's Wally” Paradigm Integrating AI, fNIRS, NUNA and Physics

Niclas Kaiser, Associate Professor at the Department of Psychology  

Our main aim was to detect inter presence from physiological measurements of interacting pairs of people. We collected data from three conversation studies in which different variables were manipulated: (1) eye contact and audio/video delay using the NUNA, (2) the extent of explicit verbal communication, and (3) the degree of conflict. We analyzed the data using both new techniques and established methods such as cross-wavelet transforms and convolutional neural networks. While we found some signatures in the data where we expected to find inter presence, both the methods and the experimental protocols need improvement.

11.30: IceLab AI Medicine Hackathon

Martin Rosvall, Professor at the Department of Physics

This talk describes how a pitch event and an off-campus hackathon were set up to spark collaborative research ideas and help turn them into concrete research proposals.

Part of TAIGA

The lectures are held in conjunction with the inaugural conference of TAIGA, Umeå University's new Centre for Transdisciplinary AI.


Organiser: Umeå University
Event type: Lecture
Speakers
Martin Rosvall, Professor
Niclas Kaiser, Associate professor
Linus Holm, Associate professor
Mikael Wiberg, Professor
Loïs Vanhée, Associate professor
Contacts
Anders Steinwall
Annakarin Resoluth
Frank Dignum