
Description
Starting point: Running an executable that was linked with -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 under IODLR_USE_EXPLICIT_HP=1 LD_PRELOAD=/path/to/liblppreload.so.
Expectation: The code mapping ends up entirely on 2 MiB huge pages.
Actual result:
- The first 2 MiB are still on 4 KiB pages.
- There is a tail of roughly 1.5 MiB that is still on 4 KiB pages. (One way to observe this split is sketched below.)
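For reference, here is one way such a split can be observed (an observation aid only, not part of liblppreload): print the KernelPageSize reported in /proc/self/smaps for every r-xp mapping of the process, so the 4 KiB head/tail pieces show up next to the 2 MiB part.

```c
#include <stdio.h>
#include <string.h>

/* Print each r-xp mapping of the current process together with its
 * KernelPageSize line from /proc/self/smaps. */
int main(void) {
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f) return 1;

    char line[512], range[512] = "";
    int is_code = 0;

    while (fgets(line, sizeof line, f)) {
        char perms[5];
        /* A new mapping starts with a "start-end perms ..." header line. */
        if (sscanf(line, "%*lx-%*lx %4s", perms) == 1) {
            strcpy(range, line);
            is_code = strcmp(perms, "r-xp") == 0;
        } else if (is_code && strncmp(line, "KernelPageSize:", 15) == 0) {
            printf("%s%s", range, line);
        }
    }
    fclose(f);
    return 0;
}
```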
Discussion:
- The missing first 2 MiB is due to the way liblppreload parses the ELF headers: sh_addr begins 20 KiB after the start of the actual 2 MiB-aligned r-xp memory mapping, so the library aligns sh_addr up to the next 2 MiB boundary and thereby skips the entire first 2 MiB of the mapping.
- The missing tail is caused by the linker aligning the segment start addresses to 2 MiB but not the segment size, so the end address gets aligned down. However, in my tests the address space between the end of the code segment and the start of the next segment was not mapped, so the end address could actually have been aligned up instead of down (see the sketch after this list).
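As a back-of-the-envelope illustration of both points (the numbers below are made up apart from the 20 KiB sh_addr offset, and the macro names are not from liblppreload), aligning the start up and the end down loses exactly the two pieces described above:

```c
#include <stdio.h>

#define HPS           (2UL * 1024 * 1024)            /* 2 MiB huge page size */
#define ALIGN_UP(x)   (((x) + HPS - 1) & ~(HPS - 1))
#define ALIGN_DOWN(x) ((x) & ~(HPS - 1))

int main(void) {
    /* Illustrative numbers, not taken from a real binary: the r-xp mapping
     * starts on a 2 MiB boundary, the text (sh_addr) begins 20 KiB into it,
     * and the segment ends 1.5 MiB past the last full 2 MiB boundary. */
    unsigned long map_start  = 2 * HPS;                        /* 2 MiB aligned */
    unsigned long text_start = map_start + 20 * 1024;          /* sh_addr */
    unsigned long text_end   = map_start + 6 * HPS + 1536 * 1024;

    /* Aligning sh_addr up skips the whole first 2 MiB of the mapping;
     * aligning the end down drops the ~1.5 MiB tail. */
    unsigned long hp_start = ALIGN_UP(text_start);
    unsigned long hp_end   = ALIGN_DOWN(text_end);

    printf("lost at head: %lu KiB\n", (hp_start - map_start) / 1024);  /* 2048 */
    printf("lost at tail: %lu KiB\n", (text_end - hp_end) / 1024);     /* 1536 */
    return 0;
}
```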
An alternative to traversing the ELF headers, and arguably a simpler one, is to parse /proc/$pid/smaps and remap complete r-xp mappings to huge pages. That way one can also detect whether it is safe to align the end of a mapping up to the next 2 MiB boundary.
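A rough sketch of that idea (an illustration under the standard /proc/self/maps line format, not the liblppreload implementation, with the actual remapping step left out): walk the mappings in order, and for each r-xp mapping align the end up only when the gap up to the next mapping is unmapped.

```c
#include <stdio.h>
#include <string.h>

#define HPS           (2UL * 1024 * 1024)            /* 2 MiB huge page size */
#define ALIGN_UP(x)   (((x) + HPS - 1) & ~(HPS - 1))
#define ALIGN_DOWN(x) ((x) & ~(HPS - 1))

/* Walk /proc/self/maps and, for every r-xp mapping, compute the 2 MiB-aligned
 * range that could be moved to huge pages.  The end is aligned up instead of
 * down whenever the space up to the next mapping is unmapped. */
int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) return 1;

    char line[512];
    unsigned long prev_start = 0, prev_end = 0;
    int prev_is_code = 0;

    while (fgets(line, sizeof line, f)) {
        unsigned long start, end;
        char perms[5];
        if (sscanf(line, "%lx-%lx %4s", &start, &end, perms) != 3)
            continue;

        if (prev_is_code) {
            unsigned long hp_start = ALIGN_UP(prev_start);     /* usually == prev_start */
            unsigned long hp_end   = ALIGN_UP(prev_end) <= start
                                         ? ALIGN_UP(prev_end)  /* gap unmapped: safe */
                                         : ALIGN_DOWN(prev_end);
            if (hp_end > hp_start)
                printf("could remap %#lx-%#lx to 2 MiB pages\n", hp_start, hp_end);
            /* The actual remap (copy the text aside, munmap, mmap the range
             * with huge pages, copy back, restore protection) is left out. */
        }
        prev_start = start;
        prev_end = end;
        prev_is_code = strcmp(perms, "r-xp") == 0;
    }
    fclose(f);
    return 0;
}
```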