Olaoluwa Osuntokun [ARCHIVE] on Nostr:
Original date posted: 2016-09-20
Original message:
Hi, y'all
Excellent work!!
As you noted, our Flare paper as currently written only includes a series of
simulations of various topologies/parameters which are then extrapolated to
larger network sizes. The logical next step would be to deploy a
proto-implementation within a live testbed with real latencies, preferential
attachment, etc. I'm thrilled that y'all went ahead and got your hands dirty
to gauge the real-world feasibility of our scheme.
> In the paper, an assumption is made that "there exists a way for any two
> nodes in LN to communicate with each other, perhaps with some prior setup
> (such as a distributed hashtable overlay maintained among LN nodes that
> would store mapping of node identifiers to their current IP addresses)".
In reality, we envisioned that a DHT would act as a substitute for a pure
broadcast network, rather than as a way for individual nodes to communicate
with each other. At the time the paper was written, we envisioned that
information such as the current onion key for each node, identity keys,
channel proofs, etc. would potentially be stored within a DHT.
For communications, we referenced that something akin to HORNET could be
used to allow nodes to communicate with each other without necessarily
knowing the IP address of each node or a node's selected beacons. As y'all
noted, in this scenario, nodes would only be able to directly communicate
with nodes that they have a direct path to.
HORNET was especially attractive as, after its setup phase, there exists a
bi-directional communication link between two nodes. This link could either
be used in a request/response manner to notify a node that it's a selected
beacon and/or to fetch routing tables, or in a more streaming manner between
the sender/receiver to negotiate the details of further payments (additional
R-Hashes, etc) once the link was established. As a substitute for the first
use-case (request/response), the latest design of our Sphinx construction
could be used, with the requesting node providing the backwards path within
the e2e payload. It's worth noting that this substitute reveals the entire
path to the responding node, while a construction based on HORNET still
obfuscates the backwards route from the target node. Additionally, with
HORNET, the backwards route can be distinct from the forwards route.
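To make the trade-off concrete, here is a minimal sketch of that
Sphinx-based substitute, in Python. The structures and names are
hypothetical (not the actual Sphinx wire format, where the per-hop data
would be onion-encrypted layer by layer); the point is only that the
requester supplies the backwards path inside the e2e payload, which is
why the responder ends up seeing the full reply route:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ReplyPath:
        hops: List[str]  # node IDs the responder uses to route its answer

    @dataclass
    class E2EPayload:
        request: bytes        # e.g. a query for routing tables
        reply_path: ReplyPath

    def build_request(query: bytes, backwards_hops: List[str]) -> E2EPayload:
        # Unlike HORNET, the responding node can read the reply path
        # wholesale, so the entire backwards route (and with it the
        # requester's position in the graph) is revealed to the responder.
        return E2EPayload(request=query, reply_path=ReplyPath(backwards_hops))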
> We only focused on static ranking (finding a route formed of open channels
> between two given nodes), so it is very possible (probable?) that a route is
> not actually usable because the channels are not balanced. Basically we made
> the assumption that the network graph is undirected, which is not true and
> is a pretty strong assumption.
Indeed, as channels themselves have fixed capacities, proper path finding
extends beyond simply "shortest path" and begins to wander into the realm of
network flow theory[1]. Assuming nodes are aware of the available channel
capacity and the current fee advertised by each node, then the optimal
(cheapest) path can be discovered by solving for the min-cost flow[2]
within the node's subset of the network graph. Additionally, the cost
function for each edge within the graph can also factor in the absolute HTLC
time delay between each node.
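As a rough illustration of what that looks like in practice, here's a
minimal Python sketch using networkx's min-cost-flow solver. The graph,
capacities, and fees are made up for the example; each directed edge
models one direction of a channel, with 'capacity' as the satoshis
spendable in that direction and 'weight' as a per-unit fee:

    import networkx as nx

    G = nx.DiGraph()
    # Each directed edge is one direction of a payment channel.
    G.add_edge("alice", "bob",   capacity=100_000, weight=2)
    G.add_edge("bob",   "carol", capacity=50_000,  weight=1)
    G.add_edge("alice", "dave",  capacity=80_000,  weight=5)
    G.add_edge("dave",  "carol", capacity=80_000,  weight=1)

    amount = 60_000  # satoshis to deliver from alice to carol
    G.nodes["alice"]["demand"] = -amount  # negative demand: the source
    G.nodes["carol"]["demand"] = amount   # positive demand: the sink

    # Cheapest flow satisfying the demands; note the payment may be
    # split across several paths when no single channel can carry it all.
    flow = nx.min_cost_flow(G)
    for u, targets in flow.items():
        for v, sat in targets.items():
            if sat > 0:
                print(f"{u} -> {v}: {sat} sat")

In this toy graph the bob->carol channel can only carry 50k sat, so the
solver routes the remaining 10k over the pricier path through dave,
which is exactly the multi-path behavior a pure shortest-path search
would miss.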
On a related note, in the past Tadge has suggested that the available
channel capacity that a node wants to advertise should be an input to a
function which derives the advertised fee across the link. One potential
strategy would be to have the advertised fee be inversely proportional to a
metric which captures how balanced the channel is. So if a channel was
mostly unbalanced in one direction, then the advertised fee in that
direction would be relatively "high", while in the opposite (balancing flow)
direction the advertised fee would be proportionally lower. Fully depleted
channels (which should only exist right after channel creation, with no
committed state transitions) would then advertise a fee of "infinity",
allowing nodes to update their path finding accordingly (something Rusty
pointed out the other day). Following this line of thinking further
begins to invoke the concept of "negative fees", which have been discussed a
bit informally in the past.
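One hypothetical shape such a fee function could take (the function name,
parameters, and the exact scaling are illustrative, not anything specified
by the paper or an implementation):

    import math

    def advertised_fee_rate(base_rate_ppm: float,
                            local_sat: int,
                            remote_sat: int) -> float:
        """Sketch of a balance-dependent fee: the fee grows as the
        outbound side of the channel is depleted, and falls for flows
        that would rebalance it.

        local_sat  -- balance spendable in the advertised direction
        remote_sat -- balance on the other side of the channel
        """
        capacity = local_sat + remote_sat
        if local_sat == 0:
            # Fully depleted in this direction: advertise "infinity" so
            # path finding routes around the channel entirely.
            return math.inf
        outbound_ratio = local_sat / capacity  # in (0, 1]
        # Inversely proportional to balance: a well-balanced channel
        # (ratio ~0.5) charges about the base rate; a nearly depleted
        # one charges much more.
        return base_rate_ppm * (0.5 / outbound_ratio)

For example, a balanced 500k/500k channel advertises the base rate, while
a 10k/990k channel advertises 50x the base rate in the depleted direction
and a correspondingly cheap rate in the balancing direction.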
> 3) from a control server, we link nodes to each other following a given
> graph using json-rpc commands
Were the channels created with uniform capacity? Or were the channel values
uniformly distributed, or perhaps drawn from a particular distribution?
> 5) pick 1000 random routes (random sender, random receiver), and for each
> route using json-rpc we (a) asks the receiver for a "payment request" (H +
> routing_table) and then (b) asks the sender to find a route for this payment
> request (without actually making a payment).
Similar to the question above, how were the satoshis requested within the
"payment request" distributed?
-- Laolu
[1]: https://en.wikipedia.org/wiki/Flow_network
[2]: https://en.wikipedia.org/wiki/Minimum-cost_flow_problem