jb55 on Nostr: Went down a rabbit hole on how to do resolution independent text rendering and ho boy ...
Went down a rabbit hole on how to do resolution-independent text rendering and ho boy, I did not know what I was getting myself into.
Sebastian Lague has a 1-hour video of him trying to do this on the GPU and it's batshit insane:
https://youtu.be/SO83KQuuZvg
I was curious how browsers do it when pinch zooming on mobile, and learned that it's kind of tricky: they just generate glyph textures at various sizes in realtime on the CPU while you're zooming.
I found a few comments on this from the guy working on linebender/webrender for Servo (the Rust web browser):
https://news.ycombinator.com/item?id=32113767
The best way to do it on the GPU is actually patented (fuck patenting algorithms 🙄):
https://sluglibrary.com
This algorithm reads the Bézier curve data in the font and rasterizes each pixel in realtime on the GPU, deciding whether the pixel is inside or outside the glyph by solving quadratic equations. That's a pretty interesting rabbit hole on its own, but I digress…
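To get a feel for the math: a glyph outline is a loop of quadratic Bézier segments, and you can test a pixel by casting a horizontal ray and counting how many curves it crosses, which means solving one quadratic per segment. Here's a minimal sketch of that idea (even-odd rule, made-up shape data; Slug and real font rasterizers use the nonzero winding rule plus a lot of optimization, and run this in a fragment shader):

```rust
#[derive(Clone, Copy)]
struct Quad {
    p0: (f32, f32),
    p1: (f32, f32), // control point
    p2: (f32, f32),
}

// Solve y(t) = py for t in [0,1):
// (y0 - 2*y1 + y2) t^2 + 2(y1 - y0) t + (y0 - py) = 0
fn crossings(q: Quad, px: f32, py: f32) -> u32 {
    let (y0, y1, y2) = (q.p0.1, q.p1.1, q.p2.1);
    let a = y0 - 2.0 * y1 + y2;
    let b = 2.0 * (y1 - y0);
    let c = y0 - py;
    let mut ts = Vec::new();
    if a.abs() < 1e-6 {
        // Segment is (vertically) linear in t
        if b.abs() > 1e-6 {
            ts.push(-c / b);
        }
    } else {
        let disc = b * b - 4.0 * a * c;
        if disc >= 0.0 {
            let s = disc.sqrt();
            ts.push((-b + s) / (2.0 * a));
            ts.push((-b - s) / (2.0 * a));
        }
    }
    ts.iter()
        .filter(|&&t| (0.0..1.0).contains(&t))
        .filter(|&&t| {
            // x(t) at the root; only count crossings to the right of the pixel
            let mt = 1.0 - t;
            let x = mt * mt * q.p0.0 + 2.0 * mt * t * q.p1.0 + t * t * q.p2.0;
            x > px
        })
        .count() as u32
}

// Odd number of crossings = pixel is inside the outline (even-odd rule)
fn inside(glyph: &[Quad], px: f32, py: f32) -> bool {
    glyph.iter().map(|&q| crossings(q, px, py)).sum::<u32>() % 2 == 1
}

fn main() {
    // A crude closed blob built from 4 quadratic segments
    let glyph = [
        Quad { p0: (0.0, 0.0), p1: (1.0, -0.5), p2: (2.0, 0.0) },
        Quad { p0: (2.0, 0.0), p1: (2.5, 1.0), p2: (2.0, 2.0) },
        Quad { p0: (2.0, 2.0), p1: (1.0, 2.5), p2: (0.0, 2.0) },
        Quad { p0: (0.0, 2.0), p1: (-0.5, 1.0), p2: (0.0, 0.0) },
    ];
    println!("{}", inside(&glyph, 1.0, 1.0)); // center of the blob → true
    println!("{}", inside(&glyph, 3.0, 1.0)); // well outside → false
}
```

Doing this per pixel per frame is exactly the GPU workload the patented approach optimizes.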
I was looking into this for egui because I wanted fully smooth pinch zooming across the UI in my nostr graph visualizer (zooming in egui is not smooth at the moment):
https://github.com/emilk/egui/issues/4813
To me it seems like the best approach is a combination: scale existing textures while the gesture is in progress, then generate a glyph texture cache in realtime and switch to the crisp glyphs once pinch zooming ends, since solving quadratic equations for every text pixel on every frame would be a lot of GPU work, especially for complex fonts.
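The cache half of that hybrid can be sketched pretty simply: key glyph textures by (glyph, quantized pixel size) so nearby zoom levels share an entry, and rasterize on miss. This is a hypothetical sketch, not egui's actual implementation; all names here are made up, and the rasterizer is a stub:

```rust
use std::collections::HashMap;

type GlyphId = u32;
type Texture = Vec<u8>; // stand-in for a real GPU texture handle

#[derive(Default)]
struct GlyphCache {
    entries: HashMap<(GlyphId, u32), Texture>,
}

impl GlyphCache {
    // Quantize size to quarter-pixel buckets so a tiny zoom delta
    // doesn't force rasterizing a brand-new texture.
    fn key(glyph: GlyphId, px_size: f32) -> (GlyphId, u32) {
        (glyph, (px_size * 4.0).round() as u32)
    }

    fn get_or_rasterize(&mut self, glyph: GlyphId, px_size: f32) -> &Texture {
        let key = Self::key(glyph, px_size);
        self.entries
            .entry(key)
            .or_insert_with(|| rasterize(glyph, px_size))
    }
}

// Placeholder: a real rasterizer would evaluate the font's Bézier
// outlines (on CPU or GPU) into a coverage bitmap at this pixel size.
fn rasterize(_glyph: GlyphId, px_size: f32) -> Texture {
    let dim = px_size.ceil() as usize;
    vec![0u8; dim * dim]
}

fn main() {
    let mut cache = GlyphCache::default();
    cache.get_or_rasterize(65, 16.0);
    cache.get_or_rasterize(65, 16.1); // same quarter-px bucket: cache hit
    cache.get_or_rasterize(65, 24.0); // new size: rasterize again
    println!("{}", cache.entries.len()); // 2 cached textures
}
```

While a pinch gesture is live you'd stretch whatever cached size is closest (slightly blurry but cheap), then populate the exact-size entry when the gesture settles.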
It’s amazing to me that things that seem simple (like high-quality pinch zooming) are way more complicated under the hood, and if we had slower CPUs it probably wouldn’t even work.