Algorithms and artificial intelligence (AI) are having an ever greater impact on our lives. They decide what content we watch online and what music we listen to, and they even answer our questions. But according to psychologist and behavioral scientist Gerd Gigerenzer, this reliance on AI could change our behavior in ways we don’t fully understand.
Gigerenzer, who is director of the Harding Center for Risk Literacy at the University of Potsdam in Germany, has spent decades researching how people make decisions when faced with uncertainty. In his latest book, “How to Stay Smart in a Smart World,” he examines how algorithms are reshaping our future.
One of Gigerenzer’s key insights is that complex algorithms such as deep neural networks work well in stable, well-defined situations, where they can make better decisions than humans; games like chess are a prime example. In situations that are not stable, however, such as predicting epidemics like the coronavirus, complex algorithms do not outperform human judgment. Gigerenzer refers to this as the “Principle of Stable Worlds” and suggests that, to fully benefit from AI, we would need to make the world more stable.
Even though algorithms can perform some tasks better than humans, Gigerenzer insists that they are still simply calculating machines: they can construct text, but they do not understand language the way humans do. We should therefore not place blind trust in flawed algorithms.
He also warns of the dangers of surveillance by governments and technology companies, explaining that people’s personal data is often handed to advertisers to make money. Overall, Gigerenzer emphasizes the importance of being aware of the limitations and dangers of algorithms and AI: we should strive to understand the technology and use it in ways that benefit us.