Pre on Nostr
Like the LLM machines, they are stacks of transformer blocks: layers of attention networks interleaved with feed-forward layers.
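As a rough sketch of what that stacking means, here is a minimal transformer block in PyTorch. The dimensions, pre-norm layout, and names are illustrative assumptions, not the architecture of any particular music model:

import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One block: an attention sublayer followed by a feed-forward
    sublayer, each with a residual connection and layer norm.
    (Illustrative sketch; real models vary in norm placement etc.)"""
    def __init__(self, dim=512, heads=8, ff_mult=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Linear(dim * ff_mult, dim),
        )

    def forward(self, x):
        # Self-attention sublayer (pre-norm, residual).
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # Feed-forward sublayer (pre-norm, residual).
        x = x + self.ff(self.norm2(x))
        return x

# A GPT-style model is just a stack of such blocks; between a text
# model and a music model, mainly the tokens being modeled change.
blocks = nn.Sequential(*[TransformerBlock() for _ in range(6)])
x = torch.randn(1, 16, 512)  # (batch, sequence, embedding)
print(blocks(x).shape)       # torch.Size([1, 16, 512])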
Unlike LLMs, the music machines are trained on music, not language.
The umbrella term for all of them is Generative Pre-trained Transformer, GPT, even though OpenAI's own "GPT" is only an LLM, and there are many other kinds of pre-trained transformers that are not trained on language.
Computer jargon is always stupid.
Published at 2024-04-29 10:28:50

Event JSON
{
  "id": "33c2196942fa11c32a989ab63e604e1e0082c8774904a8fbc9852189b29d63b5",
  "pubkey": "0dae2d95f5d2891c27abad36aa4f62d04b6e3fee7f6d54824b58487031371936",
  "created_at": 1714386530,
  "kind": 1,
  "tags": [
    ["e", "8073e7ec8816d8370a8930d7190c327117772d8e046fc6c8a62e84f884fbd6c7", "", "root"],
    ["e", "9ed4100a283aa11d8aee0a8424b364ecc6f132dbe3ba627e70016da03263f76a", "", "reply"],
    ["p", "2b9a828913e945b2315406523c0e578a076bad56ea3c4ea7931392f70d1a8a3c"],
    ["p", "0463223adf38df9a22a7fb07999a638fdd42d8437573e0bf19c43e013b14d673"]
  ],
  "content": "Like the LLM machines, they are layers of transformer networks sandwiched between layers of attention networks.\n\nUnlike LLMs, the music machines are trained on music not language.\n\nThe phrase for describing both/all is Generative Pre-Trained Transformer, GPT, even though \"GPT\" from OpenAI itself is only an LLM, and there are many other types of pre-trained transformers not trained on language.\n\nComputer jargon is always stupid.",
  "sig": "cd596d855373e8a5ca722f96ecb802d3bf5510b1ce4e548e7e2a5781d72f14a69bb734389bed99ce299dd4940878fbdd26be03ac6f208086493c5d210ff9b582"
}