KaliYuga on Nostr:
You first gather a lot of data about a topic that matters to you. Then, by training a neural network on that data, you compress the information embedded in it. The output of training is a much smaller set of numbers (the weights). By plugging these numbers into a mathematical function, you can now ask questions related to your dataset. The answers won't necessarily be perfect, since the data was compressed.
Training a neural network is a two-step mathematical process: the first step is preparing the data (about 80% of the overall effort), and the second is an iterative process that aims to reduce the compression error rate down to an optimal point.
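To make the two steps concrete, here is a minimal sketch (my own illustration, not the author's bank model): a tiny neural network trained by gradient descent on the XOR problem. The data prep is trivial here, and the training loop is the iterative error-reduction step; at the end, the whole input/output mapping is "compressed" into a handful of weights you can query with new inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: prepare the data (trivial for XOR; real projects spend
# most of the effort cleaning and encoding data at this stage).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "compressed" representation: a small set of weights.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 2: iterate, reducing the error a little each pass.
lr = 1.0
losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # backward pass: nudge the weights to shrink the error
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

# "Asking questions": run inputs through the learned weights.
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

The learning rate, layer sizes, and iteration count here are arbitrary choices for the sketch; the point is only that the loop drives the error down over many iterations, and that the final model is nothing more than the weight arrays.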
There are many applications; LLMs like ChatGPT are only one example. I was lucky enough to build credit default prediction models with neural networks at a big bank in the early 90s.