Thanks for the library! Really great work to see large models running on consumer devices. :D
I'm curious, why get rid of mmap? Is the kernel page cache really that much less efficient than rolling your own?
Memory mapped with mmap() can be evicted from the page cache under pressure since it's backed by the file on disk, whereas memory filled via open()/read() into anonymous buffers has to stay resident (or be swapped), so I don't understand how the README claims you get lower RAM usage without it. With hints like madvise(), I'd expect mmap to do better on both low-end and high-end devices.
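To be concrete, here's a minimal sketch of the kind of loader I mean (not this project's actual code; the filename is made up):

```c
/* Sketch: map a weights file read-only and hint the kernel about
 * access patterns. All names here are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "model.bin";  /* hypothetical weights file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Pages are file-backed, so the kernel can evict them under
     * memory pressure instead of pushing them to swap. */
    void *weights = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (weights == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);  /* the mapping stays valid after close() */

    /* Prefetch aggressively on big machines; on low-RAM devices
     * MADV_SEQUENTIAL would tell the kernel to read ahead and
     * drop pages behind the access cursor. */
    madvise(weights, st.st_size, MADV_WILLNEED);
    /* madvise(weights, st.st_size, MADV_SEQUENTIAL); */

    printf("mapped %lld bytes at %p\n", (long long)st.st_size, weights);
    munmap(weights, st.st_size);
    return 0;
}
```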
When llama.cpp switched to mmap-based weight loading in 2023, they found it to be much faster and simpler, even on low-memory devices. See Justine Tunney's tech report for the motivation (https://justine.lol/mmap/) and the llama.cpp discussion + commit by @jart, ggml-org/llama.cpp#91.