
Conversation

@Ostrzyciel
Member

The new -H:+MLCallCountProfileInference option reduces the binary size considerably, from 63.96 MB to 48.92 MB. As Oracle notes, though, this comes at a small cost to throughput.
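For context, hosted native-image options like this one are passed with the -H: prefix at build time. A minimal sketch of a build invocation (the JAR name and output name below are placeholders, not our actual build setup; the unlock flag is included on the assumption that the option is experimental and can be dropped if it is not):

$ # build the native binary with ML call-count profile inference enabled
$ native-image \
    -H:+UnlockExperimentalVMOptions \
    -H:+MLCallCountProfileInference \
    -jar jelly-cli.jar \
    -o jelly-cli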

GraalVM 24:

$ time jelly-cli rdf from-jelly ~/work/rb/jelly_full/digital-agenda-indicators.jelly > /dev/null

real    0m12,316s
user    0m10,773s
sys     0m1,542s

GraalVM 25:

$ time ./jelly-cli rdf from-jelly ~/work/rb/jelly_full/digital-agenda-indicators.jelly > /dev/null

real    0m14,501s
user    0m12,817s
sys     0m1,680s

I think it's worth it for most users... having a smaller binary should really be our priority. For maximum throughput we recommend the JAR builds anyway.

@Ostrzyciel
Member Author

I also tried it with this option disabled, and while the binary size is the same as in 24, the time is still 14 seconds... so this optimization is not the problem; something else is.

Huh!
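
For reference, a boolean -H: option can typically be toggled off with the minus form, so the comparison build looked roughly like this (same placeholder names as in the sketch above):

$ # build with ML call-count profile inference explicitly disabled
$ native-image \
    -H:+UnlockExperimentalVMOptions \
    -H:-MLCallCountProfileInference \
    -jar jelly-cli.jar \
    -o jelly-cli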

Collaborator

@Karolina-Bogacka left a comment


If the binary is smaller but the processing time has not changed, then the update makes sense to me.
We should probably investigate the processing time difference, though.

@Ostrzyciel marked this pull request as draft on September 17, 2025, 17:41
@Ostrzyciel
Member Author

Yeah, with the same settings, I'm getting about 10% worse throughput on GraalVM 25 vs 24. Ugh, okay...?

Maybe I will just wait for this to settle down a little bit.
