Detect tests with negative failing realizations #11971
Conversation
CodSpeed Performance Report: Merging #11971 will not alter performance.
Pull Request Overview
This PR adds detection logic to identify when the number of successful realizations exceeds the number of starting realizations, which would indicate an internal error in the experiment run tracking.
- Adds a check after logging experiment results to detect negative failing realizations
- Triggers a `ZeroDivisionError` when this impossible condition is detected
| f"{len(starting_realizations) - num_successful_realizations}" | ||
| ) | ||
| if len(starting_realizations) - num_successful_realizations < 0: | ||
| 1 / 0 |
Copilot AI, Nov 12, 2025
The code intentionally triggers a `ZeroDivisionError` with `1 / 0` to detect a condition that should be impossible. This is a poor error-handling approach. Instead, raise a descriptive exception that explains what went wrong.
Suggested fix:
```python
if len(starting_realizations) - num_successful_realizations < 0:
    raise RuntimeError(
        f"Internal error: Number of successful realizations ({num_successful_realizations}) "
        f"exceeds number of starting realizations ({len(starting_realizations)})"
    )
```
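A side note on the design choice, not part of the suggestion above: an `assert`-based variant would also produce a readable failure, but assertions are stripped when Python runs with `-O`, so an explicit `raise` is the safer choice for a production path. A minimal sketch with standalone values:

```python
starting_realizations = [0, 1, 2]
num_successful_realizations = 2

# Assertion-based variant: clearer than `1 / 0`, but removed under
# `python -O`, so production code should prefer an explicit raise.
assert len(starting_realizations) - num_successful_realizations >= 0, (
    f"Successful realizations ({num_successful_realizations}) exceed "
    f"starting realizations ({len(starting_realizations)})"
)
```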
```python
if len(starting_realizations) - num_successful_realizations < 0:
    1 / 0
```
Copilot AI, Nov 12, 2025
This new detection logic for negative failing realizations lacks test coverage. According to the coding guidelines, all new functional paths or behaviors should be covered with unit tests or integration/UI tests.
The test should:
- Verify that an exception is raised when `num_successful_realizations` exceeds `len(starting_realizations)`
- Use a descriptive test name following the `test_that_` convention, such as `test_that_internal_error_is_raised_when_successful_realizations_exceed_starting_realizations` (a sketch follows below)
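A minimal pytest sketch of such a test, assuming a hypothetical stand-in function (`check_realization_counts` is not a real ert function; the actual check lives inside the run-model code under review):

```python
import pytest


# Hypothetical stand-in for the check added in this PR; the real code
# performs this comparison inside ert's experiment run tracking.
def check_realization_counts(
    starting_realizations: list[int], num_successful_realizations: int
) -> None:
    if len(starting_realizations) - num_successful_realizations < 0:
        raise RuntimeError(
            f"Internal error: Number of successful realizations "
            f"({num_successful_realizations}) exceeds number of starting "
            f"realizations ({len(starting_realizations)})"
        )


def test_that_internal_error_is_raised_when_successful_realizations_exceed_starting_realizations():
    # Two starting realizations but three reported successes: impossible.
    with pytest.raises(RuntimeError, match="Internal error"):
        check_realization_counts([0, 1], 3)
```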
| f"{len(starting_realizations) - num_successful_realizations}" | ||
| ) | ||
| if len(starting_realizations) - num_successful_realizations < 0: | ||
| 1 / 0 |
Copilot AI, Nov 12, 2025
The error message produced by triggering `1 / 0` will be a generic `ZeroDivisionError: division by zero`, which is unclear and unhelpful for users or developers trying to understand what went wrong. The error message should clearly explain that this is an internal inconsistency where the number of successful realizations exceeds the number of starting realizations.
Suggested change:
```diff
-    1 / 0
+    raise ErtRunError(
+        "Internal inconsistency: number of successful realizations "
+        "exceeds the number of starting realizations. "
+        f"Started: {len(starting_realizations)}, "
+        f"Successful: {num_successful_realizations}."
+    )
```
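To illustrate the difference in what a developer actually sees, here is a self-contained sketch (standalone values, not ert code) comparing the two failure modes:

```python
starting_realizations = [0, 1]
num_successful_realizations = 3  # Impossible: more successes than starts

try:
    if len(starting_realizations) - num_successful_realizations < 0:
        1 / 0
except ZeroDivisionError as err:
    print(err)  # "division by zero": says nothing about the real problem

try:
    if len(starting_realizations) - num_successful_realizations < 0:
        raise RuntimeError(
            "Internal inconsistency: number of successful realizations "
            f"({num_successful_realizations}) exceeds the number of starting "
            f"realizations ({len(starting_realizations)})."
        )
except RuntimeError as err:
    print(err)  # Names both counts and the violated invariant
```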
Issue
Resolves #my_issue
Approach
Short description of the approach
(Screenshot of new behavior in GUI if applicable)
(`git rebase -i main --exec 'just rapid-tests'`)

When applicable