Rusty Russell on Nostr
Thursday was an amazingly productive #CLN day (took a break from child wrangling during school holidays to go into my coworking space).
I've been grinding askrene's min-cost-flow solver against real data: my new toy is a stress test that asks for a route from my node, then asks again with "maxfee" set 1msat less than the returned fee. This makes it prioritize fees more. I repeat this until failure, for each of the 100 nodes with the most channels in my gossip snapshot.
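A rough sketch of that stress loop in Python via pyln-client; the RPC name, parameter names, and reply parsing are assumptions about the askrene interface rather than the exact CLN API, so treat it as pseudocode with imports:

```python
# Stress test sketch: ask for a route, then ask again with maxfee set
# 1msat below the fee of the answer we just got, until the solver fails.
# ASSUMPTIONS: the "getroutes" RPC name, its parameter names, and the
# reply layout here are illustrative, not verbatim CLN.
from pyln.client import LightningRpc, RpcError

rpc = LightningRpc("/run/lightning/lightning-rpc")

def total_fee_msat(reply, amount_msat):
    # Assumed reply shape: each route's first hop carries the msat we send;
    # fee = total sent minus the amount actually delivered.
    sent = sum(route["path"][0]["amount_msat"] for route in reply["routes"])
    return sent - amount_msat

def squeeze(source, destination, amount_msat):
    """Return the cheapest fee found before the solver gives up."""
    maxfee = None                      # no cap on the first request
    best_fee = None
    while True:
        params = {"source": source, "destination": destination,
                  "amount_msat": amount_msat}
        if maxfee is not None:
            params["maxfee_msat"] = maxfee    # assumed parameter name
        try:
            reply = rpc.call("getroutes", params)
        except RpcError:
            return best_fee            # couldn't do better than this
        best_fee = total_fee_msat(reply, amount_msat)
        maxfee = best_fee - 1          # demand 1msat cheaper next time
```

The outer loop of the test would then call squeeze() once per destination in that top-100 list.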
This has been great for tuning the various parameters available. In particular, our linearization approximation for basefee (we need a linear cost function, which basefee breaks, so we approximate) was all wrong. Also, our mixing function (how much to weight fees vs probabilities) was both *complex* and *suboptimal*, so after a few tests I decided to simply multiply the probability factor by 8, which makes them comparable in practice! (This may change: we really want to compare the medians of each to determine the factor, but 8 is simple and reasonable for now.)
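A toy illustration of the two knobs mentioned above: folding basefee into an effective proportional rate so the cost stays linear in the amount, and scaling the probability term against the fee term. The names and exact arithmetic are illustrative; askrene's real cost function differs in detail.

```python
import math

def linearized_fee_ppm(base_fee_msat, proportional_ppm, assumed_amount_msat):
    """The solver needs a cost linear in the flow amount, but a constant
    basefee isn't.  One way to approximate: fold the basefee into an
    effective parts-per-million rate at an assumed amount."""
    return proportional_ppm + (base_fee_msat * 1_000_000) / assumed_amount_msat

def mixed_cost(fee_cost, probability_of_success, prob_factor=8):
    """Combine the fee term with a probability term.  The probability cost
    is the usual -log(P) from probability-aware routing; the change the
    post describes is simply scaling that term by a constant factor (8)."""
    prob_cost = -math.log(probability_of_success)
    return fee_cost + prob_factor * prob_cost
```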
Our algorithm used to run first while ignoring fees entirely, and if that came in under maxfee, just return the result. In practice this is silly: it would sometimes choose the more expensive of two identical paths! So now we start with a 1% fee weighting, to at least have *some* bias.
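A sketch of that control flow: the 1% starting bias is from the post, but the solve() interface and the escalate-the-weight-on-a-miss step are assumptions for illustration.

```python
def route_within_budget(solve, amount_msat, maxfee_msat):
    """Start with a small (1%) fee bias rather than ignoring fees entirely,
    so two otherwise-identical paths already tie-break on cost; if the
    answer exceeds maxfee, re-solve with more weight on fees.
    `solve(amount_msat, fee_weight)` stands in for the min-cost-flow call
    and is assumed to return (routes, total_fee_msat), or None on failure."""
    fee_weight = 0.01                  # never start completely fee-blind
    while fee_weight <= 1.0:
        result = solve(amount_msat, fee_weight)
        if result is None:
            return None
        routes, total_fee_msat = result
        if total_fee_msat <= maxfee_msat:
            return routes
        fee_weight *= 2                # escalation step is an assumption
    return None
```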
All this testing on real data is giving me more confidence than I ever had in our previous efforts, but there's still more to do before next month's release...