In the final episode of our mini-series on "Artificial Intelligence (AI) and Statistics", things get deep – in the truest sense of the word. Our podcast team – consisting of Esther Packullat, Sascha Feth and Jochen Fiedler – looks at large language models such as GPT-4, Llama and Gemini.
We take a look behind the scenes and discuss exciting questions such as:
- What makes these models so powerful?
- How do large language models actually work?
- What is deep learning – and how does it differ from shallow learning?
- How do neural networks work – and why does their topology play a crucial role?
- To what extent can the capabilities of large AI models really be predicted?
- Why is deep learning more than just applied statistics?
- What is emergence – and why does AI sometimes surprise us?
- And finally, the big question: will AI systems one day develop consciousness? And what would that mean?
Our two statistics colleagues agree: development is heading in an exciting direction – and we are only at the beginning.