#frAIday: Learning and reasoning in AI systems vs. in humans
Friday 19 January 2024, 12:15 - 13:00
Zoom
If we can build smart machines, what would this tell us about natural intelligence, such as human intelligence? Direct comparisons between contemporary AI systems and human cognition reveal significant differences and limitations. While AI systems excel at specific tasks, such as board games, their performance diverges markedly from that of humans in learning, reasoning, and adapting to novel situations. Even young children can generalise concepts and understand language from minimal exposure, whereas contemporary AI systems often require extensive data for effective training and struggle in scenarios with limited data. Moreover, humans can transfer their knowledge and skills to novel situations, something deep neural networks still struggle with. For instance, humans effortlessly identify and interpret rarely occurring phenomena, like snow lines on a highway, recognising them for what they are made of (snow). A self-driving car, by contrast, reliant on its training data, may misinterpret these as regular lane lines, revealing shortcomings in its adaptability to unexpected situations. Such disparities pose challenges to mind-machine analogies, raising the question of whether these distinctions reflect differences in the degree, or in the inherent nature, of the intelligence exhibited by humans versus AI systems.
Nina Poth, a postdoctoral researcher at Humboldt University Berlin, works at the intersection of the philosophy of cognitive science, the philosophy of mind, and epistemology. She is particularly interested in learning, reasoning, and rationality. Her research focuses on understanding how people learn concepts from perception and reason rationally with them. She investigates how this can be explained by Bayesian models of cognition and evaluates the effectiveness of these explanations.