waxwing on Nostr:
Adaptors generalise to proofs of representation (meaning: I have a point V and I claim knowledge of (x1, x2, ...) such that V = x1 G1 + x2 G2 + ...); and that's unsurprising, because sigma protocols can be built over any homomorphism.
For a concrete example, consider a Pedersen commitment C = aG + bH. You can prove knowledge of the opening using exactly the same paradigm as a Schnorr signature, and that's because, just as for proving knowledge of a single DL we have (x + y)G = xG + yG, so here also C1 + C2 commits to the tuple (a1+a2, b1+b2). I wrote this up in a short blog post here: https://reyify.com/blog/homework-answer-advancing-2022
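To make the Pedersen-opening proof concrete, here's a minimal sketch. It swaps the elliptic curve for a small multiplicative subgroup mod a safe prime (so "point addition" is multiplication mod p and "scalar mult" is modular exponentiation) — insecure toy parameters, for illustration only, and the nonces are fixed rather than random so the run is reproducible:

```python
# Sigma-protocol proof of knowledge of a Pedersen commitment opening.
# Toy group: squares mod a safe prime p = 2q + 1 (INSECURE demo parameters).
import hashlib

q = 1019                 # subgroup order (prime)
p = 2 * q + 1            # safe prime, so squares mod p have order q
G = pow(2, 2, p)         # base G (a square, hence in the order-q subgroup)
H_ = pow(3, 2, p)        # base H; its DL w.r.t. G is assumed unknown

def chal(*vals):
    """Fiat-Shamir challenge: hash the transcript down to a scalar mod q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover knows the opening (a, b) of C = aG + bH (written multiplicatively).
a, b = 123, 456
C = (pow(G, a, p) * pow(H_, b, p)) % p

# Commit: random nonces (k1, k2), exactly like the nonce k in Schnorr.
k1, k2 = 77, 88          # fixed here for reproducibility; must be random
R = (pow(G, k1, p) * pow(H_, k2, p)) % p

# Challenge and responses -- one s-value per base.
e = chal(R, C)
s1 = (k1 + e * a) % q
s2 = (k2 + e * b) % q

# Verify: s1*G + s2*H == R + e*C, i.e. G^s1 * H^s2 == R * C^e mod p.
lhs = (pow(G, s1, p) * pow(H_, s2, p)) % p
rhs = (R * pow(C, e, p)) % p
assert lhs == rhs
```

The two s-values respond to the same challenge e, which is exactly the "same paradigm as a Schnorr signature" point: one response per base, verified against the homomorphic combination.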
So, for the general case with N bases, proving representation, the analogue of the one-dimensional adaptor is a tuple; write tuples as (x). Then we have (s)', a list of N scalar s-values.
So you'd write something like this: s_1' G1 + s_2' G2 + s_3' G3 = R + H(R+T, V)V, to correspond with the full signature tuple (s): s_1 G1 + s_2 G2 + s_3 G3 = R + T + H(R+T, V)V.
And, by analogy again, revealing the tuple (s) reveals the tuple (t). Note that this works because for *every* index i (from 1 to N) you can compute t_i = s_i - s_i'.
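The adaptor-tuple construction above can be sketched end to end in the same toy group as before (squares mod a safe prime, multiplicative notation, insecure demo parameters with fixed "random" values):

```python
# Adaptor tuple for a proof of representation over N = 3 bases.
# Toy group: squares mod a safe prime p = 2q + 1 (INSECURE demo parameters).
import hashlib

q = 1019
p = 2 * q + 1
Gs = [pow(g, 2, p) for g in (2, 3, 5)]   # bases G1, G2, G3

def chal(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def commit(scalars):
    """x1*G1 + x2*G2 + x3*G3, written multiplicatively: prod Gi^xi mod p."""
    acc = 1
    for g, x in zip(Gs, scalars):
        acc = acc * pow(g, x, p) % p
    return acc

xs = [11, 22, 33]          # the representation of V
ks = [71, 72, 73]          # nonce tuple (fixed for reproducibility)
ts = [41, 42, 43]          # adaptor secret tuple (t)

V, R, T = commit(xs), commit(ks), commit(ts)
e = chal(R * T % p, V)     # challenge binds R+T and V, as in H(R+T, V)

# Adaptor tuple (s)': s_1' G1 + s_2' G2 + s_3' G3 == R + e*V.
s_adapt = [(k + e * x) % q for k, x in zip(ks, xs)]
assert commit(s_adapt) == R * pow(V, e, p) % p

# Full tuple (s): s_1 G1 + s_2 G2 + s_3 G3 == R + T + e*V.
s_full = [(sp + t) % q for sp, t in zip(s_adapt, ts)]
assert commit(s_full) == R * T * pow(V, e, p) % p

# Revealing (s) next to (s)' reveals (t): t_i = s_i - s_i' for every i.
recovered = [(s - sp) % q for s, sp in zip(s_full, s_adapt)]
assert recovered == ts
```

Both verification equations mirror the ones written out above; the only difference between (s)' and (s) is the componentwise shift by (t).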
Is this useful? I think one specific way of using it is, but I'm still trying to flesh it out. The idea is this:
To construct full sigma protocols (and somehow have them verified on-chain), you need to use proofs of representation; it's not enough to just do DLEQs (as per my previous post a few days ago). In other words, you need to publish points V and claim you know their representations against a set of bases G_i.
It seems like the trick to making this work is to create an adaptor tuple where all the values are equal, i.e. t1 = t2 = t3 = ... in the tuple (t).
Then if you enforce, on chain, that s1 = k1 + t1 + H(R1+T1, V1, msg)·x1 (i.e. the pubkey V1 = x1 G1 is the first component of the proof of representation), it follows that if, after verifying all the adaptors, s1 is published, then t1 is revealed, and since t1 = t2 = ... you "receive" all the s-values in the tuple (s) at the same time.
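A minimal sketch of that equal-components trick, in the same toy group as the sketches above (insecure parameters, multiplicative notation): every component of (t) is the same scalar, so learning the single published value s1 recovers t and with it the entire tuple (s).

```python
# Equal-components adaptor tuple: t1 = t2 = t3 = t, so s1 alone reveals (s).
# Toy group: squares mod a safe prime p = 2q + 1 (INSECURE demo parameters).
import hashlib

q = 1019
p = 2 * q + 1
Gs = [pow(g, 2, p) for g in (2, 3, 5)]

def chal(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def commit(scalars):
    acc = 1
    for g, x in zip(Gs, scalars):
        acc = acc * pow(g, x, p) % p
    return acc

xs = [11, 22, 33]
ks = [71, 72, 73]
t = 42
ts = [t, t, t]             # every component of (t) is the same scalar

V, R, T = commit(xs), commit(ks), commit(ts)
e = chal(R * T % p, V)

s_adapt = [(k + e * x) % q for k, x in zip(ks, xs)]   # tuple (s)', given out
s_full = [(sp + t) % q for sp in s_adapt]             # tuple (s), withheld

# On chain, only s1 is published (as the s-value of an ordinary signature).
s1_published = s_full[0]

# Anyone holding the adaptor tuple (s)' recovers t from s1 alone ...
t_recovered = (s1_published - s_adapt[0]) % q
assert t_recovered == t

# ... and, because t1 = t2 = t3, reconstructs the whole tuple (s) at once.
s_reconstructed = [(sp + t_recovered) % q for sp in s_adapt]
assert s_reconstructed == s_full
```

This is only the algebra of the "receive all s-values at once" step; it doesn't model the on-chain enforcement itself.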
For now this is a very undeveloped set of ideas. One issue is that you are revealing one component of your representation (x1 G1), whereas usually you should reveal none, but my intuition is that this is the least of the problems to address here :)