To learn more about Hellodarwin: https://go.hellodarwin.com/hypercroissance?utm_source=helloDarwin&utm_medium=podcast&utm_campaign=grants-hypercroissance
AI, opaque models, incomprehensible decisions: have we gone too far with artificial intelligence?
In this episode of Hypercroissance, Jonathan Léveillé (CEO of Openmind Technologies) sits down with Antoine Gagné to discuss a topic that's as crucial as it is unsettling:
➡️ Why even the creators of AI no longer understand their own models.
Based on an article by the founder of Anthropic, they dive into an underestimated but critical issue: interpretability. Behind the promises of productivity and automation lies an uncomfortable truth: we're using tools we no longer fully control.
We cover:
✅ What AI interpretability is and why it's so urgent
✅ The real-world risks of LLMs (large language models) for businesses
✅ Why AI is advancing faster than our ability to regulate it
✅ The role of leadership in this technological revolution
✅ How to integrate AI without losing control of your company
Are you a business leader, marketing director, or operator in a growing company? This episode is a strategic wake-up call you won't want to miss.
Subscribe for more essential discussions on growth, innovation, and leadership in Canada.
To learn more about Openmind Technologies: https://www.openmindt.com/
To learn more about Jonathan Léveillé: https://www.linkedin.com/in/jonathanleveille/
To connect with me on LinkedIn: https://www.linkedin.com/in/antoine-gagn%C3%A9-69a94366/
Our podcast Social Scaling: https://www.youtube.com/@podcastsocialscaling
Our podcast No Pay No Play: https://www.j7media.com/fr/podcast-no-pay-no-play
Follow us on social media:
LinkedIn: https://www.linkedin.com/company/podcast-d-hypercroissance/