How does a Large Language Model like ChatGPT actually work? Well, they are both amazingly simple and exceedingly complex at the same time. Hold on to your butts, this is a deep dive ↓


You can think of a model as something that calculates the probability of an output based on some input. In a language model, this means that given a sequence of words, it calculates the probabilities for the next word in the sequence. Like a fancy autocomplete.
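The "fancy autocomplete" idea can be sketched in a few lines. This is a toy, not a real language model: the probability table below is hand-written purely for illustration, and a real model would compute these probabilities over a huge vocabulary.

```python
import random

# A toy "language model": hand-written probabilities for the next word,
# given the words so far. (These numbers are made up for illustration.)
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
}

def predict_next(sequence):
    probs = NEXT_WORD_PROBS[tuple(sequence)]
    words = list(probs)
    weights = [probs[w] for w in words]
    # Sample a next word in proportion to its probability.
    return random.choices(words, weights=weights)[0]

print(predict_next(["the", "cat"]))  # most often "sat"
```

Sampling rather than always picking the top word is why the same prompt can produce different continuations.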


To understand where these probabilities come from, we need to talk about something called a neural network. This is a network-like structure where numbers are fed into one side and probabilities are spat out the other. They are simpler than you might think.


Imagine we wanted to train a computer to solve the simple problem of recognising symbols on a 3x3 pixel display. We would need a neural net like this:
- an input layer
- two hidden layers
- an output layer


Our input layer consists of 9 nodes called neurons - one for each pixel. Each neuron holds a number from 1 (white) to -1 (black). Our output layer consists of 4 neurons, one for each of the possible symbols. Their values will eventually be probabilities between 0 and 1.
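Turning the 3x3 display into the input layer is just a matter of flattening the grid of pixel values. A minimal sketch, using a hypothetical plus-sign symbol drawn in black on white:

```python
# Each pixel of the 3x3 display becomes one input neuron,
# holding 1 for white and -1 for black.
image = [
    [ 1, -1,  1],
    [-1, -1, -1],
    [ 1, -1,  1],
]  # a plus sign, drawn in black (hypothetical example symbol)

# Flatten the grid into the 9-neuron input layer.
input_layer = [pixel for row in image for pixel in row]
print(len(input_layer))  # 9 input neurons
```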


In between these, we have rows of neurons, called "hidden" layers. For our simple use case we only need two. Each neuron is connected to the neurons in the adjacent layers by a weight, which can have a value between -1 and 1.


When a value is passed from an input neuron to the next layer, it's multiplied by the weight. That neuron then simply adds up all the values it receives, squashes the value...
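That multiply-then-sum-then-squash step is the entire job of a single neuron. A minimal sketch, assuming `tanh` as the squashing function (a common choice that keeps values between -1 and 1, matching the ranges above; the original text doesn't name which function it uses):

```python
import math

def neuron_output(inputs, weights):
    # Multiply each incoming value by its connection weight,
    # add them up, then squash the sum back into (-1, 1) with tanh.
    total = sum(x * w for x, w in zip(inputs, weights))
    return math.tanh(total)

# Hypothetical values: three inputs feeding one hidden neuron.
print(neuron_output([1.0, -1.0, 0.5], [0.2, -0.4, 0.9]))  # ≈ 0.78
```

Stack enough of these simple units into layers and the network can represent surprisingly complex functions.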

