⚡️ Speed up method IntParseTable.from_ParseTable by 18%
#66
📄 18% (0.18x) speedup for `IntParseTable.from_ParseTable` in `python/ccxt/static_dependencies/lark/parsers/lalr_analysis.py`

⏱️ Runtime: 993 microseconds → 839 microseconds (best of 250 runs)

📝 Explanation and details
The optimization achieves an 18% speedup by reducing dictionary lookup overhead and intermediate object allocations in the critical loop that transforms parse table states.
**Key optimizations applied:**

- **Local variable caching:** Stores `Shift` and `state_to_idx` as local variables (`shift`, `state_lookup`) to avoid repeated attribute/global lookups in the hot loop processing each state's lookahead actions.
- **Explicit loop transformation:** Replaces the nested dictionary comprehension that was creating intermediate objects with explicit loops that build the transformed lookahead dictionary (`la_new`) incrementally, reducing memory pressure and improving cache locality.
- **Iterator reuse:** Captures `parse_table.states.items()` once as `states_items` to avoid repeated method calls.
- **Start/end state processing:** Converts dictionary comprehensions to explicit loops for `start_states` and `end_states`, which reduces temporary object creation overhead.

**Why this leads to speedup:**

In CPython, reading a local variable is cheaper than repeated global or attribute lookups, and building each state's lookahead dictionary in place avoids the intermediate objects the nested comprehension allocated, so the per-state loop simply does less work per action. The sketch below illustrates the rewritten loop.
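The following is a minimal, self-contained sketch of that rewrite, assuming the parse table maps opaque state objects to `{token: (action, argument)}` dictionaries as in lark's `lalr_analysis.py`. The names `shift`, `state_lookup`, `states_items`, and `la_new` come from the description above; `from_ParseTable_optimized`, the `SimpleNamespace` stand-in, and the toy table are hypothetical and only illustrate the pattern, not the exact library code.

```python
from types import SimpleNamespace

# Stand-in for lark's Shift action marker (in the real module it is imported
# from the surrounding LALR analysis code).
Shift = object()

def from_ParseTable_optimized(parse_table):
    """Sketch of the optimized state-renumbering loop described above."""
    # Map each (hashable) state object to a small integer index.
    state_to_idx = {s: i for i, s in enumerate(parse_table.states)}

    # Local variable caching: bind names once so the hot loop uses cheap
    # local reads instead of repeated global/attribute lookups.
    shift = Shift
    state_lookup = state_to_idx
    states_items = parse_table.states.items()   # captured once, iterated below

    int_states = {}
    for s, la in states_items:
        # Explicit loop that builds `la_new` incrementally instead of a nested
        # dict comprehension allocating intermediate objects per state.
        la_new = {}
        for token, action in la.items():
            if action[0] is shift:
                la_new[token] = (shift, state_lookup[action[1]])
            else:
                la_new[token] = action
        int_states[state_lookup[s]] = la_new

    # Start/end states: explicit loops rather than dict comprehensions.
    start_states = {}
    for start, s in parse_table.start_states.items():
        start_states[start] = state_lookup[s]
    end_states = {}
    for start, s in parse_table.end_states.items():
        end_states[start] = state_lookup[s]

    return int_states, start_states, end_states

# Toy parse table with two opaque states, purely for illustration.
s0, s1 = object(), object()
toy_table = SimpleNamespace(
    states={s0: {"A": (Shift, s1)}, s1: {"B": ("Reduce", "some_rule")}},
    start_states={"start": s0},
    end_states={"start": s1},
)
print(from_ParseTable_optimized(toy_table))
```

The renumbering itself is unchanged; the explicit-loop form simply trades a little verbosity for fewer temporaries per state.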
**Impact on workloads:**

Based on the function reference, this optimization occurs in the LALR parser construction phase (`compute_lalr1_states`), which is called during grammar compilation. The test results show consistent 4-25% improvements across various parse table sizes, with larger improvements (19-24%) for complex grammars with many states or actions. This benefits any application using Lark parsers, particularly those processing large or complex grammars during initialization.
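For context, a hedged example of the workload this sits on: constructing any LALR parser compiles the grammar and therefore builds the parse table. The snippet below uses the standalone `lark` package's public API (the ccxt copy is vendored under `python/ccxt/static_dependencies`, so its import path differs there), a made-up toy grammar, and a rough `timeit` loop that is not Codeflash's harness and times the whole constructor rather than `from_ParseTable` alone.

```python
import timeit

from lark import Lark   # standalone lark package; ccxt vendors its own copy

# Hypothetical LALR grammar; compiling it runs compute_lalr1_states and hence
# IntParseTable.from_ParseTable.
GRAMMAR = r"""
    start: expr
    expr: expr "+" term | term
    term: term "*" factor | factor
    factor: NUMBER | "(" expr ")"
    NUMBER: /\d+/
    %import common.WS
    %ignore WS
"""

parser = Lark(GRAMMAR, parser="lalr")            # parse-table construction happens here
print(parser.parse("1 + 2 * (3 + 4)").pretty())

# Rough best-of-N timing of whole-parser construction (this covers much more
# than the single method measured above, so the numbers are not comparable).
times = timeit.repeat(lambda: Lark(GRAMMAR, parser="lalr"), repeat=50, number=1)
print(f"best of 50 runs: {min(times) * 1e3:.2f} ms")
```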
✅ Correctness verification report:

🌀 Generated Regression Tests and Runtime
To edit these changes, `git checkout codeflash/optimize-IntParseTable.from_ParseTable-mhx82wtl` and push.