James Grimmelmann on Nostr:
We say that a model has “memorized” a piece of training data when:
(1) it is possible to reconstruct from the model
(2) a near-exact copy of
(3) a substantial portion of
(4) that specific piece of training data.
We think that this is the most useful definition for legal conversations, and we explain how it relates to other terms in common use, such as “learning,” “extraction,” and “regurgitation.”
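As a rough sketch of how criteria (1) through (4) might be operationalized as an extraction test, the snippet below prompts a model with the opening of a training text and checks whether the continuation is a near-exact copy of a substantial portion of that same text. The `generate` callable, the prompt length, and the similarity thresholds are illustrative assumptions, not anything specified in the post or the paper.

```python
from difflib import SequenceMatcher


def is_memorized(generate, training_text: str,
                 prompt_chars: int = 200,
                 min_copy_chars: int = 500,
                 similarity_threshold: float = 0.95) -> bool:
    """Return True if the model reproduces (1, reconstruction from the model)
    a near-exact copy (2) of a substantial portion (3) of this specific
    training text (4) when prompted with its opening characters."""
    prompt = training_text[:prompt_chars]
    target = training_text[prompt_chars:prompt_chars + min_copy_chars]

    # `generate` is a hypothetical stand-in for any model inference call.
    output = generate(prompt, max_chars=min_copy_chars)

    # "Near-exact" is read here as high character-level similarity;
    # "substantial portion" as at least min_copy_chars of the original.
    similarity = SequenceMatcher(None, output[:len(target)], target).ratio()
    return len(target) >= min_copy_chars and similarity >= similarity_threshold
```

Under this reading, "extraction" is the act of running such a test and succeeding, and "regurgitation" is the model's verbatim or near-verbatim output itself; the thresholds chosen would of course be contestable in any actual legal analysis.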