Matthew Green on Nostr:
The critical point here is that the resulting models are a *massive* privacy threat to users. Your data on its own might not reveal your embarrassing or deeply private preferences. But the model trained on everyone's data might be able to determine exactly what you like. Its existence destroys your privacy.
This is happening today, with many tech firms deploying differential privacy and federated learning to dig deeper into user data and build models of their users' behavior. Not all of it is transparently "evil," but it's hard to argue that any of it is good for users' privacy either.
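To make the point concrete, here is a minimal sketch of one such mechanism, randomized response, a classic local differential privacy technique. The specific mechanisms firms actually deploy differ, and the function names and the 30% figure below are purely illustrative assumptions. The point the sketch shows: each individual's answer is deniable, yet the population-level signal survives the noise, and that signal is exactly what a behavioral model gets trained on.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Local DP via randomized response: report the true answer
    with probability p, otherwise report a fair coin flip."""
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_rate(reports, p: float = 0.75) -> float:
    """Debias the noisy reports to recover the population rate.
    E[observed] = p * true_rate + (1 - p) * 0.5, so invert that."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

# Hypothetical population: 30% of users hold some sensitive preference.
truths = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]

# Any single report is plausibly deniable, but the aggregate comes back
# out at roughly 0.30 -- which is what the model learns about people like you.
print(estimate_rate(reports))
```

Run it and the estimate lands near 0.30 despite every individual answer being noised: the privacy guarantee covers your record, not what the resulting model can infer about users who look like you.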