Microfractal on Nostr
The paper by K.I. Martin is not really up to date.
It has no glitch-avoidance algorithm, and it uses an older, less stable, and slower method for skipping iterations called series approximation.
In August 2021, a glitch-avoidance check called "rebasing" was found. Without it, precision issues occur in the per-pixel calculation when the precomputed reference orbit has fewer iterations than the pixel needs (because the reference orbit's Z diverged earlier).
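A minimal sketch of how I understand rebasing inside a perturbation loop (the names and the loop structure here are mine, not the code from the repository): whenever the reference orbit runs out, or the full value becomes smaller than the delta, restart at the beginning of the reference orbit with the delta set to the full value.

#include <complex>
#include <vector>

// Sketch only: Zref holds low-precision samples of the high-precision
// reference orbit Z_0, Z_1, ..., dc is the pixel's offset from the
// reference point, dz is the pixel's delta orbit.
int iterate(const std::vector<std::complex<double>>& Zref,
            std::complex<double> dc, int maxIter)
{
    std::complex<double> dz = 0.0;
    std::size_t m = 0;                               // index into Zref
    for (int n = 0; n < maxIter; ++n) {
        dz = 2.0 * Zref[m] * dz + dz * dz + dc;      // perturbed step
        ++m;
        std::complex<double> z = Zref[m] + dz;       // full pixel value
        if (std::norm(z) > 4.0)
            return n;                                // escaped
        // Rebasing: if the reference orbit is exhausted (its Z diverged
        // earlier) or the full value is smaller than the delta, restart
        // at the beginning of the reference orbit with dz = z.
        if (m + 1 == Zref.size() || std::norm(z) < std::norm(dz)) {
            dz = z;
            m = 0;
        }
    }
    return maxIter;                                  // did not escape
}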
Later, a better method for skipping iterations was found, based on a precomputed lookup table (or something similar). The idea: if an iteration does not change the value significantly, why not skip it? This new method replaces the old series approximation and is more efficient in several cases.
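A rough sketch of that lookup-table idea as I understand it; I believe this is the family of linear/bilinear approximation (LA/BLA) methods, but the entry layout, names, and radius heuristic below are my assumptions, not the repository's code. Each table entry covers a block of iterations and stores linear coefficients plus a validity radius; when the current delta is inside the radius, the whole block collapses into one linear step.

#include <complex>
#include <vector>

struct SkipEntry {
    std::complex<double> A, B;  // dz_{n+l} ~= A * dz_n + B * dc
    double r2;                  // squared validity radius for |dz_n|
    int    l;                   // how many iterations this entry skips
};

// One entry per reference iteration: a single perturbed step
//   dz_{n+1} = 2*Z_n*dz_n + dz_n^2 + dc
// is approximately linear (A = 2*Z_n, B = 1) as long as |dz_n| is small
// compared to |Z_n|, so dz_n^2 can be neglected.
std::vector<SkipEntry> buildTable(const std::vector<std::complex<double>>& Zref,
                                  double eps = 1e-6)
{
    std::vector<SkipEntry> table;
    table.reserve(Zref.size());
    for (const auto& Z : Zref) {
        double r = eps * std::abs(Z);        // heuristic validity radius
        table.push_back({2.0 * Z, {1.0, 0.0}, r * r, 1});
    }
    // A real implementation merges neighbouring entries into entries that
    // skip 2, 4, 8, ... iterations at once (with smaller validity radii);
    // that part is omitted here.
    return table;
}

// Inside the pixel loop: if the current dz is inside the entry's radius,
// apply the linear step and jump ahead by entry.l iterations.
bool trySkip(const SkipEntry& e, std::complex<double>& dz,
             std::complex<double> dc, int& n, std::size_t& m)
{
    if (std::norm(dz) >= e.r2)
        return false;            // not valid here, do a normal step instead
    dz = e.A * dz + e.B * dc;
    n += e.l;
    m += e.l;
    return true;
}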
I got my implementation working based on some of the algorithms in this GitHub repository:
https://github.com/ImaginaFractal/Algorithms
The repository comes from the same person who presented both algorithms in 2021.
More specifically, he said:
"Mip: (adaption of) Claude's method, the simplest one.
HInfM: More efficient at high iteration location, Somewhat more complex.
HInfL: Legacy, not recommended."
I used HInfM (HInfMLA.cpp and HInfMLA.h) because it's the fastest to date.