Hello, yes, this is Gregly on Nostr: npub1psdfxfpxz2cwmmnsk60y3nqpn2tqh9n24h4hstvfkwvr6eaek9js499sr7

I suppose one could theoretically train a model entirely on one's own previous work, and use that… but then it wouldn't be an LLM (and I wonder exactly how useful such a neural network would be, since it would essentially regurgitate stuff you'd already done in the past).