What is Nostr?
Billy Tetrud [ARCHIVE] /
npub1xqc…cnns
2023-06-09 12:47:32
in reply to nevent1q…ym00

Billy Tetrud [ARCHIVE] on Nostr: 📅 Original date posted:2017-09-04 📝 Original message: Thanks for the ...

📅 Original date posted:2017-09-04
📝 Original message:
Thanks for the information Laolu! I'm glad people smarter than me are
working on this : ) . Do you have any idea when Flare might be ready for
real-world use?

~BT

On Mon, Sep 4, 2017 at 1:02 PM, Olaoluwa Osuntokun <laolu32 at gmail.com>
wrote:

> Hi,
>
> Welcome to the mailing list!
>
> Replying here rather than the issue you filed on the lighting-rfc repo as
> I'd say this is a better venue to discuss matters such as this. The
> structure of the mailing list may be a bit strange initially, but once you
> get over that hump, I think you'll find it's rather pleasant to use. Also,
> if you stick around, you'll find that most members of the Bitcoin
> development community use technical mailing lists such as this one to
> share ideas, propose new solutions, analyze existing schemes, etc.
>
> This list has been a bit lower traffic recently than compared to the past
> as many of us have locked ourselves in dungeons coding without pants as
> we're rapidly approaching the first mainnet release of our various
> lightning node software. As a result, replies may have a bit of delay, but
> that's nothing to fret about!
>
> > I propose a protocol where each node knows about its own local network
> > topology, and to find a final route, a transaction originator queries a
> > number of its connections for routes to the intended destination.
>
> For color, I'm one of the co-authors of the Flare paper. What you've
> described in this email is close to the approach, but it utilizes a blind
> all-the-links flood towards the destination, rather than a DFS search
> guided by the distance metric proposed in the paper. One thing to note is
> that Flare utilizes HORNET to allow senders to query their beacon nodes,
> and also nodes beyond the horizon of their beacon nodes in a private
> manner. By using the bi-directional communication circuit, we maintain
> requester anonymity, and don't force those performing search to reveal
> their identity nor intent. Also note that Flare itself also uses an
> initial routing cache for a neighborhood of N nodes.
>
> What you've described here is essentially Dynamic Source Routing (DSR)[1],
> with a mix of components from Fisheye State Routing (FSR) making it a
> hybrid protocol that combines reactive and proactive elements in order to
> achieve its goals.
>
> Our current stop-gap is a pure proactive protocol, meaning that nodes
> gather all link state data and then make forwarding and
> routing decisions based off of that. The clear trade off (as you point
> out), is the increase in table state and bandwidth incurred due to keeping
> up with the announcements. The upside is that the latency observed when
> attempting payments to random sections of the graphs are minimized.
> Additionally, as we have complete information, we can make more
> intelligent path finding decisions, and also ensure that we collect a
> redundant set of routes for each payment. By ensuring that we collect a
> set of routes with high path diversity, we have many options to fall back
> to in the case of a routing failure with one of the earlier routes in our
> set.
>
> However, protocols that have a reactive element for each circuit
> establishment may not necessarily scale better. This is due to the
> computational overhead of circuit establishment. Particularly in your
> DSR+FSR combo as the flood proceeds in all directions. As a result,
> circuit establishment may have latency in the seconds as each random
> payment may potentially need to flood out in the graph in search of the
> destination. Without a sort of distance metric to guide the search, it
> may wastefully explore non-relevant sections, further increasing payment
> latency and overhead for all other nodes. Finally, one aspect to consider
> is how DoS-able schemes like this that require flooding for each circuit
> establishment are.
>
> > When a node wants to construct a route for a transaction:
> > 2. It asks all the channels on the edge of its cached local view for
> > their cheapest path
>
> Simply _asking_ nodes for a path to a destination defeats the point of
> using onion routing at all. If one is prepared to make that tradeoff, then
> far more scalable routing protocols can be explored as at that point, one
> would move to distance vector based algorithms.
>
> Very happy to see that more folks are exploring alternative
> routing/discovery solutions! In the future we'll definitely need to scale
> up the network.
>
> One beauty of the way the system is laid out is that multiple
> heterogeneous routing protocols can be used within the network just as
> within the Internet (eBGP vs iBGP), so different sub-graphs can chose
> protocols that achieve their goals in light of the various tradeoffs. I
> think I'll follow up this post with a general survey of potential
> approaches I've come up with and come across in the literature along with
> their various tradeoffs, and possible paths forward for the network as a
> whole.
>
>
> -- Laolu
>
>
> [1]: https://arxiv.org/pdf/1507.05724.pdf
> [2]: http://www.utdallas.edu/~ksarac/Papers/DSR.pdf
> [3]: http://nrlweb.cs.ucla.edu/publication/download/203/05_
> 75_fisheye-state-routing-in.pdf
>
>
> On Mon, Aug 21, 2017 at 6:09 PM Billy Tetrud <billy.tetrud at gmail.com>
> wrote:
>
>> Hey Guys,
>>
>> I'm testing this mailing list out for the first time, so I'm probably
>> gonna be doing it wrong.
>>
>> I want to talk about route discovery and route generation in the
>> lightning network. It seems there's a couple types of things going on with
>> routing:
>>
>> - Super basic flood-the-network style routing to get things up and
>> running, as I believe is implicitly proposed here: https://github.com/
>> lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md
>> <https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md>;
>> - More involved research projects that may not reach fruition any
>> time soon. Eg this: http://bitfury.com/content/5-white-papers-
>> research/whitepaper_flare_an_approach_to_routing_in_
>> lightning_network_7_7_2016.pdf
>> <http://bitfury.com/content/5-white-papers-research/whitepaper_flare_an_approach_to_routing_in_lightning_network_7_7_2016.pdf>;
>>
>> I'd like to discuss a near-term approach that can replace the basic
>> flood-the-network style route discovery, but isn't so complicated that it
>> needs a ton of study and work. This won't be the end-all solution to route
>> discovery, but should take us a lot further than flood-the-network.
>>
>> I propose a protocol where each node knows about its own local network
>> topology, and to find a final route, a transaction originator queries a
>> number of its connections for routes to the intended destination. By doing
>> this, it means that nodes are *not* receiving or storing the entire network
>> topology, which makes route discovery a lot less taxing on the network (in
>> terms of bandwidth and storage space).
>>
>> To go into more detail...
>>
>> When a node joins the network:
>> 1. it broadcasts its information to all its channels (pretty much as
>> proposed here
>> <https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md>;)
>> announcing its relevant channel information
>> 2. it requests local network topology information from all its channels
>> for information about channels 1 hop beyond its direct connection (ie it
>> will receive information about which addresses those channels are connected
>> to, and their related fee info / etc)
>> 3. it then requests topology information for channels 2 hops beyond, etc
>> until it has filled its cache to its satisfaction (the user can choose some
>> amount of megabytes as its limit of network topology data to store)
>> 4. it also subscribes to topology changes for nodes at those distances
>> (eg if a node has requested information from 3 hops away, it will receive
>> info about any new channels or removed channels that fall within that
>> distance)
>>
>> When a node receives an announcement message from a node joining the
>> network:
>> 1. it will store that node's info in its cache
>> 2. it will also forward that info to any node that's subscribed to
>> topology changes that fall within the relevant distance
>>
>> When a node wants to construct a route for a transaction:
>> 1. It checks to see if it has a path to that node in its cache. If it
>> does, it finds the cost of the cheapest path it has.
>> 2. It asks all the channels on the edge of its cached local view for
>> their cheapest path (however you want to define cheapest), specifying that
>> it only care about paths with a maximum cost of the cheapest path it has
>> already found in its cache. For example, if the node has nodes up to 3 hops
>> away in its cache, it will *only* ask the nodes 3 hops away (it will not
>> ask its direct connections, nor nodes 2 hops away, since it already has
>> enough information to ignore them)
>> 3. When it gets all its responses, it constructs a path
>>
>> When a node receives a path request from a node:
>> 1. It checks its cache for its cheapest cache-only path
>> 2. It asks nodes on the edge of its cached local view for their cheapest
>> path, specifying that it only cares about paths with a maximum cost of
>> either its cheapest cache-only path or the max-cost specified by the
>> requesting node minus the channel cost between it and the requesting node
>> (whichever is cheaper). A node on the edge of its cached local view is
>> *not* asked for route information if the cost to that node exceeds the
>> max-cost this relay node would specify.
>> 3. It reports the results that satisfy the max-cost requirements of the
>> requesting node
>>
>> And that's it. All the route information can be encrypted and signed so
>> relaying nodes can't read the information inside, and so the requesting
>> source node can verify which nodes sent that information.
>>
>> This protocol should keep both node-announcement messages *and* route
>> request messages highly localized.
>>
>> Thoughts?
>>
>> ~BT
>>
>>
>>
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxfoundation.org/pipermail/lightning-dev/attachments/20170904/46ed51f1/attachment.html>;
Author Public Key
npub1xqcwcttsyk0a64d63crrwsxp88pa42np37rw87hrfn4uku78g2aqltcnns