AI Basics Part 2: Neural Networks, the building blocks of Artificial Intelligence
Neural networks in AI are the topic of this session. In this second of four presentations on Artificial Intelligence and its practical use, Pere Sivecas, Spend Matters Development PM and Software Architect, builds on his earlier explanation of what a neural network is to show how fitting neural networks together creates an AI system. He outlines the workings of generative adversarial networks (GANs), as well as diffusion-based text-to-image models such as Stable Diffusion, and transformers, which power text-based services like Google Translate. Special attention is given to the decoder-only transformers that enable ChatGPT. The session concludes with Pere describing what a large language model (LLM) is, detailing what each of the three words specifically means, and the two main properties that determine an LLM's effectiveness.