Léo Ducas on Nostr:
nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqknzsux7p6lzwzdedp3m8c3c92z0swzc0xyy5glvse58txj5e9ztqaufa4k There are too many examples to list, so here is my favorite one: the digital signature switcharoo.
A good basis \(G\) of a lattice \(L\) serves as the secret key, a bad one as the public key. Using the good basis we can find a lattice point \(s\) close to an arbitrary target. This is how we sign a message \(m\). The bad basis suffices to verify that \(s \in L\) and that \(\|s-m\|\) is small.
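Here is a minimal toy sketch of that flow in Python (numpy), with made-up 2D matrices rather than any real scheme's parameters; rows of a basis span the lattice, and signing is Babai's round-off.

```python
import numpy as np

# Toy 2D lattice: rows of a basis span L. G is short and near-orthogonal
# (the secret key); B = U @ G for unimodular U spans the same lattice
# but is long and skewed (the public key).
G = np.array([[5.0, 1.0], [1.0, 4.0]])   # good secret basis
U = np.array([[7.0, 4.0], [5.0, 3.0]])   # det(U) = 1, so L(B) = L(G)
B = U @ G                                 # bad public basis

def sign_naive(m, basis):
    # Babai round-off: write m in the basis, round the coordinates to
    # integers, and map back to get a lattice point s close to m.
    return np.round(m @ np.linalg.inv(basis)) @ basis

def verify(m, s, pub, bound=5.0):
    # Lattice membership is checked with the public basis alone;
    # the bound is tuned to this toy basis.
    coords = s @ np.linalg.inv(pub)
    return np.allclose(coords, np.round(coords)) and np.linalg.norm(s - m) <= bound

m = np.array([13.7, 2.2])                 # stand-in for a hashed message
s = sign_naive(m, G)
print(s, verify(m, s, B))                 # a nearby lattice point, and True
```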
WLOG (thanks to hash functions) we can assume that \(m\) is uniform in \(\mathbb{R}^n / L\).
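For concreteness, a sketch of that hashing step under the same toy setup; the helper name hash_to_torus and the SHA-256-to-coordinates mapping are illustrative choices, not any standard.

```python
import hashlib
import numpy as np

G = np.array([[5.0, 1.0], [1.0, 4.0]])          # same toy basis as above

def hash_to_torus(msg: bytes, basis: np.ndarray) -> np.ndarray:
    # Derive n coordinates in [0, 1) from the digest and map them
    # through a basis of L: a (near-)uniform representative of R^n / L.
    n = basis.shape[0]
    digest = hashlib.sha256(msg).digest()        # for large n, use an XOF instead
    fracs = np.array([int.from_bytes(digest[4 * i:4 * i + 4], "big") / 2**32
                      for i in range(n)])
    return fracs @ basis                         # a point of the fundamental domain

m = hash_to_torus(b"my message", G)
```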
There is a catch. Do that naively, and signing too many messages leaks the secret key. The naive approach simply tiles the space according to the parallelepiped \(P = G \cdot [0,1]^n\). Every message-signature pair reveals \(m - s\), a uniform point in \(P\), and \(P\) can be learned given enough such samples.
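A quick demo of the leak, reusing the toy basis above: the attacker's samples, rewritten in the secret frame, fill the unit cube. The actual learning step (this is Nguyen and Regev's parallelepiped attack) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
G = np.array([[5.0, 1.0], [1.0, 4.0]])           # secret basis, unknown to the attacker

def leaked_sample():
    m = rng.uniform(0.0, 100.0, size=2)          # stand-in for a hashed message
    s = np.round(m @ np.linalg.inv(G)) @ G       # naive Babai signing
    return s - m                                  # observable from (m, s) alone

samples = np.array([leaked_sample() for _ in range(5000)])
# In the secret frame the samples are uniform on [-1/2, 1/2]^2: enough
# of them determine the parallelepiped, and hence the secret basis.
coords = samples @ np.linalg.inv(G)
print(coords.min(axis=0), coords.max(axis=0))    # ~(-0.5, -0.5) and ~(0.5, 0.5)
```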
The fix is to resort to careful rerandomization, using discrete Gaussians. The tiling is replaced by an algorithm that selects a lattice point according to a discrete Gaussian centered at \(m\), i.e. with probability proportional to \(\exp(-\|s-m\|^2 / 2\sigma^2)\).
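A sketch of the basic building block, a 1D discrete Gaussian over \(\mathbb{Z}\) via rejection sampling; real signers (e.g. GPV-style) run this per coordinate inside a randomized variant of Babai's algorithm, which is omitted here.

```python
import math
import random

def sample_dgauss_z(c: float, sigma: float, tail: float = 12.0) -> int:
    # Propose uniformly from a truncated window around the center c,
    # then accept x with probability exp(-(x - c)^2 / (2 sigma^2)).
    lo = math.floor(c - tail * sigma)
    hi = math.ceil(c + tail * sigma)
    while True:
        x = random.randint(lo, hi)
        if random.random() < math.exp(-((x - c) ** 2) / (2.0 * sigma**2)):
            return x

print([sample_dgauss_z(0.3, 4.0) for _ in range(10)])
```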
Using a smoothing argument, we can show that this process, producing \(m\) uniform and then \(s\), is statistically close to something that does not depend on the secret key. Namely, choose \(s\) first, and set \(m = s + e\) for \(e\) a continuous Gaussian. \(e \bmod L\) is close to uniform, and so is \(m \bmod L\).
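And a quick numerical sanity check of that smoothing claim in dimension 1, for the lattice \(\mathbb{Z}\): once \(\sigma\) is comfortably above the smoothing parameter, a continuous Gaussian reduced mod 1 is nearly uniform.

```python
import numpy as np

rng = np.random.default_rng(1)
for sigma in (0.2, 0.5, 1.0, 2.0):
    e = rng.normal(0.0, sigma, size=200_000) % 1.0   # e mod Z
    hist, _ = np.histogram(e, bins=50, range=(0.0, 1.0), density=True)
    # Deviation from the uniform density shrinks rapidly as sigma grows.
    print(f"sigma={sigma}: max deviation from uniform = {abs(hist - 1.0).max():.4f}")
```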