mleku on Nostr:
fortunately i was able to get them to understand this topic though. tl;dr - this is mostly machine-generated, already simple data (currently 8 factors, soon to be 9), it's not the kind of data that suits more advanced language comparison engines like VectorDB, and the main things they were worried about were performance at scale and the latency of data updates
i was able to clarify these points, and also explain that "fuzzy matching" is a baseline kind of AI technique built on simpler, one-dimensional data sets that requires complete-graph computation, and that we do the updates in the background so fetching recommendations is instant, taking milliseconds
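to make the background-update idea concrete, here's a minimal sketch (in Go, purely illustrative - the struct names, the 8-factor layout, and the distance formula are my assumptions, not the actual scheme): every pair of profiles is scored in a background pass over the complete graph, and serving a recommendation is just a precomputed lookup.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

const numFactors = 8 // assumed: the post mentions 8 factors, soon 9

// Profile is a hypothetical record reduced to a small fixed factor vector.
type Profile struct {
	ID      string
	Factors [numFactors]float64
}

// similarity compares two factor vectors one dimension at a time and maps the
// distance into (0,1]; higher means more alike. Illustrative formula only.
func similarity(a, b [numFactors]float64) float64 {
	var dist float64
	for i := 0; i < numFactors; i++ {
		d := a[i] - b[i]
		dist += d * d
	}
	return 1 / (1 + math.Sqrt(dist))
}

// recompute walks every pair (the "complete graph") and stores ranked results
// so that serving a recommendation later is just a map read.
func recompute(profiles []Profile) map[string][]string {
	recs := make(map[string][]string)
	for i := range profiles {
		type scored struct {
			id    string
			score float64
		}
		var others []scored
		for j := range profiles {
			if i == j {
				continue
			}
			others = append(others, scored{
				id:    profiles[j].ID,
				score: similarity(profiles[i].Factors, profiles[j].Factors),
			})
		}
		sort.Slice(others, func(x, y int) bool { return others[x].score > others[y].score })
		for _, o := range others {
			recs[profiles[i].ID] = append(recs[profiles[i].ID], o.id)
		}
	}
	return recs
}

func main() {
	profiles := []Profile{
		{ID: "alice", Factors: [numFactors]float64{1, 0, 2, 1, 0, 3, 1, 0}},
		{ID: "bob", Factors: [numFactors]float64{1, 0, 2, 1, 0, 3, 0, 1}},
		{ID: "carol", Factors: [numFactors]float64{5, 4, 0, 0, 2, 0, 1, 3}},
	}
	recs := recompute(profiles) // would run periodically in the background
	fmt.Println(recs["alice"])  // instant lookup of the precomputed ranking
}
```

the point of the sketch is only the shape of the pipeline: the O(n²) pairwise pass happens off the request path, so latency at fetch time doesn't depend on how expensive the comparison is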
haha, i also have to write up the comparison scheme now... i built the algorithm but hadn't written it up in a human-readable form, and this will be good anyway, because it should let me spot any errors in the way i've constructed the comparison calculations