Tomas [ARCHIVE] on Nostr:
📅 Original date posted:2017-04-07
📝 Original message:Hi Eric,
On Fri, Apr 7, 2017, at 21:55, Eric Voskuil via bitcoin-dev wrote:
> Optimization for lower memory platforms then becomes a process of
> reducing the need for paging. This is the purpose of a cache. The seam
> between disk and memory can be filled quite nicely by a small amount
> of cache. On high RAM systems any cache is actually a de-optimization
> but on low RAM systems it can prevent excessive paging. This is
> directly analogous to a CPU cache.
I am not entirely sure I agree with that, or that I understand it
correctly.

If, for example, the data of some application is a set of records that
can be sorted from least frequently used to most frequently used, then
doing just that sort will beat any application-layer cache. Regardless
of the size of the data and the size of RAM, you simply let the OS use
disk caching or memory-map caching to work its magic.
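
To illustrate the point, here is a minimal sketch (not from the original
discussion) of the approach I mean: the application keeps no cache at all,
it just memory-maps a file of fixed-size records that has been laid out
hot-to-cold, and the kernel's page cache keeps whatever happens to be hot
resident. The file name "records.dat" and the Record layout are
hypothetical, and POSIX mmap is assumed:

// Sketch only: map a hot-to-cold sorted record file and let the OS
// page cache do all the caching work.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct Record {              // hypothetical fixed-size record layout
    uint64_t key;
    uint8_t  payload[56];
};

int main() {
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    // Map the whole file read-only; pages are faulted in on demand and
    // evicted by the kernel under memory pressure - no app-level cache.
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    const Record* records = static_cast<const Record*>(base);
    size_t count = st.st_size / sizeof(Record);

    // Because the records are laid out hot-to-cold, touching the hot
    // prefix touches few pages, and the OS keeps exactly those resident.
    uint64_t sum = 0;
    for (size_t i = 0; i < count && i < 1000; ++i)
        sum += records[i].key;
    printf("checksum of hot prefix: %llu\n", (unsigned long long)sum);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}

On a high-RAM machine the whole mapping simply stays resident; on a
low-RAM machine only the frequently touched pages do, with no extra
code in either case.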
In fact, I would argue that an application-layer cache *only* makes
sense if the data model shows a *hard* distinction between often-used
and rarely-used data. If usage frequency is a continuum, caching is
best left to the OS by focusing on proper spatial and temporal locality
of reference in your data, because the OS has much more information
with which to make the right decision.