Asta [AMP] on Nostr:
npub1mdky0njswyvy6esxh964apx9e6mmwt8am4wuqk3ds5p4xyaav2dqvlaptk In the context of the paper/article, is this as irrelevant/out of context as it seems? Because he seems to not-so-subtly conflate the very basic notion of a successfully trained model with the arguments made in Stochastic Parrots: that is, he almost seems to be saying here that Bender et al. were arguing that an LLM can only output what its training input was, which is an entirely separate and irrelevant thing (and definitely not an argument that was made).
The idea that it can generate text it "didn't see" is pretty basic; hell, a Markov chain can do it. Is this quote more... appropriate, and less bullshit, in context?
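To make the Markov-chain point concrete, here's a minimal sketch (not from the original post; the toy corpus and names are made up): a word-level bigram model trained on a handful of words will happily sample sequences that never occur verbatim in its training text.

import random

# Toy corpus (hypothetical). Bigrams observed here define the chain.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Transition table: word -> list of observed next words.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length=8):
    # Walk the chain, sampling each next word from observed successors.
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = transitions.get(word)
        if not successors:
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

# Can emit e.g. "the dog sat on the mat", which is nowhere in the corpus.
print(generate("the"))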