[Enhancement] Use weighted ranking to cap refinement candidates (CF-931) #962
base: main
Conversation
PR Reviewer Guide 🔍: Here are some key observations to aid the review process:
PR Code Suggestions ✨: Explore these optional code suggestions:
⚡️ Codeflash found optimizations for this PR: 115% (1.15x) speedup for
KRRT7 left a comment:
I really like this; just implement the changes that the PR review bot gave you.
I suspect this will help with the long runtimes of our tracer-replay as well, now that I think of it.
PR Type
Enhancement
Description
Rank refinement candidates by runtime and diff size
Add normalization and weighting utilities
Change the refinement request runtime payload to ints
Cap refinements to the top 45% of candidates
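The steps above describe normalizing two signals (runtime and diff size), combining them with weights, and keeping only the top 45% of candidates. A minimal sketch of that flow, assuming min-max normalization and illustrative names (`normalize`, `cap_candidates`, `RUNTIME_WEIGHT`, `REFINEMENT_CAP_FRACTION` are hypothetical, not the PR's actual identifiers):

```python
# Hypothetical sketch of the weighted capping logic; constants and
# field names are illustrative assumptions, not the PR's real code.

REFINEMENT_CAP_FRACTION = 0.45  # keep top 45% of candidates
RUNTIME_WEIGHT = 0.7            # assumed relative weight of runtime
DIFF_WEIGHT = 0.3               # assumed relative weight of diff size


def normalize(values):
    """Min-max normalize a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


def cap_candidates(candidates):
    """Rank candidates by a weighted score of normalized runtime and
    diff size, then keep only the top 45% (lowest score is best)."""
    runtimes = normalize([c["runtime_ns"] for c in candidates])
    diffs = normalize([c["diff_lines"] for c in candidates])
    scored = sorted(
        zip(candidates, runtimes, diffs),
        key=lambda t: RUNTIME_WEIGHT * t[1] + DIFF_WEIGHT * t[2],
    )
    keep = max(1, int(len(scored) * REFINEMENT_CAP_FRACTION))
    return [c for c, _, _ in scored[:keep]]
```

With 10 candidates this keeps the 4 best-scoring ones, which matches the intent of trimming refinement work (and tracer-replay time) to the most promising candidates.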
Diagram Walkthrough
File Walkthrough
aiservice.py — Humanize runtime fields; trim logs (codeflash/api/aiservice.py)
code_utils.py — Utilities for weighting and normalization (codeflash/code_utils/code_utils.py)
models.py — Refiner request runtime type changed to int (codeflash/models/models.py)
function_optimizer.py — Weighted, selective, parallel refinement flow (codeflash/optimization/function_optimizer.py)
config_consts.py — Config constants for weighted refinement capping (codeflash/code_utils/config_consts.py)
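The aiservice.py entry mentions humanizing runtime fields. A minimal sketch of what such a helper could look like, assuming runtimes arrive as integer nanoseconds (per the models.py change); `humanize_runtime` is a hypothetical name, not the PR's actual function:

```python
# Illustrative helper: render an integer nanosecond runtime as a
# human-readable string. Thresholds and formatting are assumptions.

def humanize_runtime(ns: int) -> str:
    """Format a nanosecond runtime with the largest fitting unit."""
    units = [("s", 1_000_000_000), ("ms", 1_000_000), ("us", 1_000), ("ns", 1)]
    for suffix, scale in units:
        if ns >= scale:
            return f"{ns / scale:.2f}{suffix}"
    return "0ns"
```

Keeping the wire payload as a plain int and humanizing only at display time avoids round-tripping formatted strings through the API.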