
Conversation

@major major (Contributor) commented Dec 16, 2025

Description

Add a basic initial end-to-end (e2e) test for RHEL Lightspeed's rlsapi v1 API.

⚠️ DEPENDS ON #928

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: Claude

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

The existing test scripts should cover this change.

Summary by CodeRabbit

  • Tests
    • Added end-to-end test infrastructure for the RLSAPI v1 infer endpoint.
    • Configured test environments with configurable default model and provider settings.
    • Enhanced automated testing pipeline with environment-specific test configurations.


openshift-ci bot commented Dec 16, 2025

Hi @major. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@coderabbitai coderabbitai bot (Contributor) commented Dec 16, 2025

Walkthrough

Adds end-to-end testing infrastructure for the rlsapi v1 infer endpoint, including a feature file with test scenarios, library and server mode configurations with environment variable-driven inference defaults, Docker Compose environment setup, GitHub Actions workflow steps for provider/model defaults, and test environment hooks for feature-tag-driven configuration switching.

Changes

  • E2E Test Feature
    Files: tests/e2e/features/rlsapi_v1.feature, tests/e2e/test_list.txt
    Adds a feature file for the rlsapi v1 /infer endpoint with a POST scenario verifying HTTP 200 and a JSON response; registers the feature in the test suite.
  • Configuration Files
    Files: tests/e2e/configuration/library-mode/lightspeed-stack-rlsapi.yaml, tests/e2e/configuration/server-mode/lightspeed-stack-rlsapi.yaml, tests/e2e/configuration/library-mode/lightspeed-stack.yaml, tests/e2e/configuration/server-mode/lightspeed-stack.yaml
    Adds library- and server-mode YAML configs with inference defaults driven by environment variables; applies formatting changes to the existing configs.
  • Docker Compose
    Files: docker-compose.yaml, docker-compose-library.yaml
    Adds E2E_DEFAULT_PROVIDER (openai) and E2E_DEFAULT_MODEL (gpt-4o-mini) environment variables to the lightspeed-stack service.
  • GitHub Actions Workflow
    Files: .github/workflows/e2e_tests.yaml
    Adds a step that sets default provider/model environment variables based on the matrix environment (ci, azure, vertexai), with fallback behavior.
  • Test Environment Setup
    Files: tests/e2e/features/environment.py
    Adds RlsapiConfig feature-tag handling with config backup, application, container restart, and restoration logic mirroring the existing config-switch patterns.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Feature tag handling logic in environment.py — verify backup/restore and container restart sequencing
  • YAML configuration files — confirm all required fields for rlsapi v1 endpoint and environment variable interpolation
  • GitHub Actions workflow — validate environment variable fallback logic for all matrix conditions

Possibly related PRs

  • #916 — Implements the rlsapi v1 /infer endpoint being tested in this PR
  • #928 — Implements rlsapi v1 /infer endpoint router registration and inference defaults
  • #613 — Adds per-feature config switching and environment setup hooks in tests/e2e/features/environment.py

Suggested labels

ok-to-test

Suggested reviewers

  • tisnik
  • are-ces
  • radofuchs

Pre-merge checks

✅ Passed checks (3)
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title clearly and specifically describes the main change: adding an initial end-to-end test for the RLSAPI v1 infer endpoint. It accurately reflects the primary objective of the pull request without being vague or misleading.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/endpoints/test_rlsapi_v1_integration.py (1)

181-197: Unused test_id parameter can be removed.

The test_id parameter is declared but never used in the test body. Since pytest.param(..., id="...") already provides test identification in the output, this parameter is redundant.

 async def test_rlsapi_v1_infer_with_context(
     mock_llama_stack: MockAgentFixture,
     mock_authorization: None,
     test_auth: AuthTuple,
     context: RlsapiV1Context,
-    test_id: str,
 ) -> None:

Also remove test_id from the parametrize tuples (lines 149, 153, 161, 165, 176).

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a6fb210 and a54071a.

📒 Files selected for processing (6)
  • examples/lightspeed-stack-rlsapi-cla.yaml (1 hunks)
  • src/app/routers.py (2 hunks)
  • tests/e2e/features/rlsapi_v1.feature (1 hunks)
  • tests/e2e/test_list.txt (1 hunks)
  • tests/integration/endpoints/test_rlsapi_v1_integration.py (1 hunks)
  • tests/unit/app/test_routers.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
tests/e2e/test_list.txt

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain test list in tests/e2e/test_list.txt for end-to-end tests

Files:

  • tests/e2e/test_list.txt
tests/{unit,integration}/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/{unit,integration}/**/*.py: Use pytest for all unit and integration tests; do not use unittest framework
Unit tests must achieve 60% code coverage; integration tests must achieve 10% coverage

Files:

  • tests/unit/app/test_routers.py
  • tests/integration/endpoints/test_rlsapi_v1_integration.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use pytest-mock with AsyncMock objects for mocking in tests

Files:

  • tests/unit/app/test_routers.py
  • tests/integration/endpoints/test_rlsapi_v1_integration.py
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Files:

  • tests/e2e/features/rlsapi_v1.feature
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.py: Use absolute imports for internal modules in LCS project (e.g., from auth import get_auth_dependency)
All modules must start with descriptive docstrings explaining their purpose
Use logger = logging.getLogger(__name__) pattern for module logging
All functions must include complete type annotations for parameters and return types, using modern syntax (str | int) and Optional[Type] or Type | None
All functions must have docstrings with brief descriptions following Google Python docstring conventions
Function names must use snake_case with descriptive, action-oriented names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead of modifying input parameters
Use async def for I/O operations and external API calls
All classes must include descriptive docstrings explaining their purpose following Google Python docstring conventions
Class names must use PascalCase with descriptive names and standard suffixes: Configuration for config classes, Error/Exception for exceptions, Resolver for strategy patterns, Interface for abstract base classes
Abstract classes must use ABC with @abstractmethod decorators
Include complete type annotations for all class attributes in Python classes
Use import logging and module logger pattern with standard log levels: debug, info, warning, error

Files:

  • src/app/routers.py
🧠 Learnings (4)
📚 Learning: 2025-11-24T16:58:04.410Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:58:04.410Z
Learning: Applies to tests/e2e/test_list.txt : Maintain test list in `tests/e2e/test_list.txt` for end-to-end tests

Applied to files:

  • tests/e2e/test_list.txt
📚 Learning: 2025-11-24T16:58:04.410Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:58:04.410Z
Learning: Applies to tests/e2e/features/**/*.feature : Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Applied to files:

  • tests/e2e/test_list.txt
  • tests/e2e/features/rlsapi_v1.feature
📚 Learning: 2025-09-02T11:14:17.117Z
Learnt from: radofuchs
Repo: lightspeed-core/lightspeed-stack PR: 485
File: tests/e2e/features/steps/common_http.py:244-255
Timestamp: 2025-09-02T11:14:17.117Z
Learning: The POST step in tests/e2e/features/steps/common_http.py (`access_rest_api_endpoint_post`) is intentionally designed as a general-purpose HTTP POST method, not specifically for REST API endpoints, so it should not include context.api_prefix in the URL construction.

Applied to files:

  • tests/e2e/features/rlsapi_v1.feature
📚 Learning: 2025-10-29T13:05:22.438Z
Learnt from: luis5tb
Repo: lightspeed-core/lightspeed-stack PR: 727
File: src/app/endpoints/a2a.py:43-43
Timestamp: 2025-10-29T13:05:22.438Z
Learning: In the lightspeed-stack repository, endpoint files in src/app/endpoints/ intentionally use a shared logger name "app.endpoints.handlers" rather than __name__, allowing unified logging configuration across all endpoint handlers (query.py, streaming_query.py, a2a.py).

Applied to files:

  • src/app/routers.py
🧬 Code graph analysis (2)
tests/integration/endpoints/test_rlsapi_v1_integration.py (5)
src/app/endpoints/rlsapi_v1.py (1)
  • infer_endpoint (125-183)
src/configuration.py (3)
  • configuration (73-77)
  • AppConfig (39-181)
  • inference (134-138)
src/models/rlsapi/requests.py (6)
  • RlsapiV1Attachment (8-25)
  • RlsapiV1CLA (69-86)
  • RlsapiV1Context (89-120)
  • RlsapiV1InferRequest (123-200)
  • RlsapiV1SystemInfo (42-66)
  • RlsapiV1Terminal (28-39)
src/models/rlsapi/responses.py (1)
  • RlsapiV1InferResponse (29-53)
src/utils/suid.py (1)
  • check_suid (19-54)
src/app/routers.py (1)
tests/unit/app/test_routers.py (1)
  • include_router (37-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: build-pr
  • GitHub Check: E2E: library mode / vertexai
  • GitHub Check: E2E: server mode / azure
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E: server mode / vertexai
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: library mode / azure
🔇 Additional comments (14)
examples/lightspeed-stack-rlsapi-cla.yaml (1)

1-37: LGTM! Well-documented example configuration.

The configuration file is clearly structured with helpful comments explaining the purpose and usage for CLA deployments. The authorization rules appropriately grant rlsapi_v1_infer to all authenticated users, aligning with the stateless inference use case.

tests/e2e/features/rlsapi_v1.feature (1)

1-15: LGTM! Follows BDD/Gherkin conventions correctly.

The feature file properly uses the behave framework with Gherkin syntax as per coding guidelines. The initial scenario covers the basic happy path for the /v1/infer endpoint.
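
For readers who have not opened the file, a minimal sketch of what such a happy-path scenario could look like in behave/Gherkin follows; the step wording and tag usage below are illustrative assumptions, not the repository's actual step definitions:

Feature: rlsapi v1 infer endpoint

  @RlsapiConfig
  Scenario: Basic POST request to the infer endpoint
    Given the service is started locally
    When I send a POST request to the "/v1/infer" endpoint with a simple question
    Then the status code of the response is 200
    And the body of the response is valid JSON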

tests/e2e/test_list.txt (1)

12-12: LGTM! Test list correctly updated.

The new feature file path is properly added to the e2e test list as per the coding guidelines.

src/app/routers.py (2)

23-24: LGTM! Import follows existing patterns.

The import of rlsapi_v1 correctly follows the established pattern for endpoint imports in this file, using absolute imports as per coding guidelines.


54-56: LGTM! Router registration is correctly placed.

The router is properly registered with the /v1 prefix, consistent with other versioned endpoints in the file. The descriptive comment helps clarify the purpose.

tests/unit/app/test_routers.py (4)

26-27: LGTM! Import added correctly.

The import of rlsapi_v1 mirrors the production code change.


69-69: LGTM! Router count correctly updated.

The count is incremented to 17 to account for the new rlsapi_v1 router.


88-88: LGTM! Router presence assertion added.

Correctly verifies that rlsapi_v1.router is included in the registered routers.


97-97: LGTM! Prefix test updated correctly.

The router count and prefix assertion for /v1 are correctly added for rlsapi_v1.router.

Also applies to: 117-117

tests/integration/endpoints/test_rlsapi_v1_integration.py (5)

1-34: LGTM! Well-structured integration test setup.

The test module follows pytest conventions as per coding guidelines, uses appropriate imports, and clearly documents its purpose. The use of pytest-mock with AsyncMock is correct for testing async endpoints.


37-113: LGTM! Clean fixture design.

The MockAgentFixture NamedTuple and _create_mock_agent helper provide a well-organized approach to mocking Llama Stack components. The configurable output_message parameter enables testing various response scenarios.


222-254: LGTM! Comprehensive error handling test.

Good coverage of the 503 error path when Llama Stack is unavailable. The test properly verifies the status code and that the error detail contains the expected backend name.


295-323: LGTM! Input source combination test validates message construction.

This test effectively verifies that all input sources (question, stdin, attachments, terminal) are properly combined before being sent to the LLM. The assertion on line 322 checks for the "terminal" substring, which covers the terminal output case.


331-349: LGTM! Skip RAG parameter test with appropriate documentation.

The test correctly verifies the skip_rag parameter is accepted, and the NOTE comment appropriately documents that RAG is not implemented in lightspeed-stack rlsapi v1.

Add a basic smoke test for the RLSAPI v1 infer endpoint to verify
the endpoint is reachable and returns a valid JSON response.

Signed-off-by: Major Hayden <major@redhat.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/e2e/configuration/server-mode/lightspeed-stack-rlsapi.yaml (1)

24-25: Environment variable syntax verification needed (see library-mode config).

The same environment variable interpolation syntax concern from tests/e2e/configuration/library-mode/lightspeed-stack-rlsapi.yaml applies here. The verification script provided in that review will check both configurations.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a54071a and 9aebb07.

📒 Files selected for processing (10)
  • .github/workflows/e2e_tests.yaml (3 hunks)
  • docker-compose-library.yaml (1 hunks)
  • docker-compose.yaml (1 hunks)
  • tests/e2e/configuration/library-mode/lightspeed-stack-rlsapi.yaml (1 hunks)
  • tests/e2e/configuration/library-mode/lightspeed-stack.yaml (1 hunks)
  • tests/e2e/configuration/server-mode/lightspeed-stack-rlsapi.yaml (1 hunks)
  • tests/e2e/configuration/server-mode/lightspeed-stack.yaml (1 hunks)
  • tests/e2e/features/environment.py (2 hunks)
  • tests/e2e/features/rlsapi_v1.feature (1 hunks)
  • tests/e2e/test_list.txt (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • tests/e2e/configuration/server-mode/lightspeed-stack.yaml
  • tests/e2e/configuration/library-mode/lightspeed-stack.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/e2e/features/rlsapi_v1.feature
🧰 Additional context used
📓 Path-based instructions (2)
tests/e2e/test_list.txt

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain test list in tests/e2e/test_list.txt for end-to-end tests

Files:

  • tests/e2e/test_list.txt
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use pytest-mock with AsyncMock objects for mocking in tests

Files:

  • tests/e2e/features/environment.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:58:04.410Z
Learning: Applies to tests/e2e/features/**/*.feature : Use behave (BDD) framework with Gherkin feature files for end-to-end tests
📚 Learning: 2025-09-02T11:09:40.404Z
Learnt from: radofuchs
Repo: lightspeed-core/lightspeed-stack PR: 485
File: tests/e2e/features/environment.py:87-95
Timestamp: 2025-09-02T11:09:40.404Z
Learning: In the lightspeed-stack e2e tests, noop authentication tests use the default lightspeed-stack.yaml configuration, while noop-with-token tests use the Authorized tag to trigger a config swap to the specialized noop-with-token configuration file.

Applied to files:

  • tests/e2e/configuration/server-mode/lightspeed-stack-rlsapi.yaml
📚 Learning: 2025-11-24T16:58:04.410Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:58:04.410Z
Learning: Applies to tests/e2e/test_list.txt : Maintain test list in `tests/e2e/test_list.txt` for end-to-end tests

Applied to files:

  • tests/e2e/test_list.txt
📚 Learning: 2025-11-24T16:58:04.410Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:58:04.410Z
Learning: Applies to tests/e2e/features/**/*.feature : Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Applied to files:

  • tests/e2e/test_list.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: E2E: library mode / azure
  • GitHub Check: E2E: server mode / azure
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: server mode / vertexai
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E: library mode / vertexai
  • GitHub Check: build-pr
🔇 Additional comments (6)
docker-compose.yaml (1)

58-60: LGTM!

The environment variables for rlsapi v1 test defaults are properly configured with sensible fallback values and follow Docker Compose conventions.
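
In Docker Compose terms, "sensible fallback values" usually means `${VAR:-default}` interpolation. A minimal sketch of the pattern under review (variable names and defaults taken from this PR; the exact service layout is assumed):

services:
  lightspeed-stack:
    environment:
      # Use the value exported by the CI workflow if present, otherwise fall back
      E2E_DEFAULT_PROVIDER: ${E2E_DEFAULT_PROVIDER:-openai}
      E2E_DEFAULT_MODEL: ${E2E_DEFAULT_MODEL:-gpt-4o-mini}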

tests/e2e/test_list.txt (1)

12-12: LGTM!

The new feature file is properly added to the test list, following the coding guideline to maintain test list in tests/e2e/test_list.txt.

docker-compose-library.yaml (1)

22-24: LGTM!

The environment variables are properly configured and consistent with the server-mode docker-compose.yaml file.

tests/e2e/features/environment.py (2)

174-181: LGTM!

The RlsapiConfig tag handling properly follows the established pattern used for the Authorized tag. The implementation correctly handles mode detection, config backup, switching, and container restart.


196-199: LGTM!

The cleanup logic properly restores the original configuration and removes the backup, maintaining consistency with the existing Authorized tag cleanup pattern.
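
To make the backup/apply/restore pattern concrete, here is a hypothetical sketch of tag-driven config switching in a behave environment.py. The helper functions and file paths are illustrative assumptions, not the repository's actual code:

import os
import shutil
import subprocess


def detect_mode(context) -> str:
    """Hypothetical helper: decide which configuration directory is active."""
    return "library-mode" if getattr(context, "library_mode", False) else "server-mode"


def restart_container(context) -> None:
    """Hypothetical helper: restart lightspeed-stack so it picks up the new config."""
    subprocess.run(["docker", "compose", "restart", "lightspeed-stack"], check=True)


def before_feature(context, feature):
    """Swap in the rlsapi config for features tagged @RlsapiConfig."""
    if "RlsapiConfig" in feature.tags:
        mode = detect_mode(context)
        active = f"tests/e2e/configuration/{mode}/lightspeed-stack.yaml"
        shutil.copy(active, active + ".bak")  # back up the currently active config
        shutil.copy(
            f"tests/e2e/configuration/{mode}/lightspeed-stack-rlsapi.yaml", active
        )  # apply the rlsapi-specific config
        restart_container(context)


def after_feature(context, feature):
    """Restore the original config and drop the backup after the feature runs."""
    if "RlsapiConfig" in feature.tags:
        mode = detect_mode(context)
        active = f"tests/e2e/configuration/{mode}/lightspeed-stack.yaml"
        shutil.copy(active + ".bak", active)  # restore the backed-up config
        os.remove(active + ".bak")
        restart_container(context)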

tests/e2e/configuration/library-mode/lightspeed-stack-rlsapi.yaml (1)

23-24: No action needed. The environment variable syntax ${env.E2E_DEFAULT_PROVIDER:=openai} is the correct and documented syntax for llama-stack's configuration system, which uses bash-inspired parameter expansion for environment variable substitution with fallback defaults.
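
For illustration, the inference defaults in question would look roughly like this in the YAML config. The key names below are assumptions based on the review summary; the `${env.VAR:=default}` expansion is the syntax the comment confirms:

inference:
  default_provider: ${env.E2E_DEFAULT_PROVIDER:=openai}
  default_model: ${env.E2E_DEFAULT_MODEL:=gpt-4o-mini}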

Comment on lines +166 to +189 (in .github/workflows/e2e_tests.yaml)

      - name: Set default model for rlsapi v1 tests
        run: |
          # Set default model/provider for rlsapi v1 endpoint based on environment
          case "${{ matrix.environment }}" in
            ci)
              echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              ;;
            azure)
              echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              ;;
            vertexai)
              echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
              ;;
            *)
              echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
              echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              ;;
          esac
          echo "✅ Set E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER} and E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL}"

⚠️ Potential issue | 🟡 Minor

Fix the echo statement to display the correct values.

The echo statement on Line 188 won't display the correct values because variables written to GITHUB_ENV are not available in the current shell context. The variables $E2E_DEFAULT_PROVIDER and $E2E_DEFAULT_MODEL will expand to empty strings or stale values from previous steps.

Apply this diff to fix the logging:

       case "${{ matrix.environment }}" in
         ci)
           echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini"
           ;;
         azure)
           echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=azure and E2E_DEFAULT_MODEL=gpt-4o-mini"
           ;;
         vertexai)
           echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=google-vertex and E2E_DEFAULT_MODEL=gemini-2.0-flash-exp"
           ;;
         *)
           echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
           echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini (fallback)"
           ;;
       esac
-      echo "✅ Set E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER} and E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting step):

      - name: Set default model for rlsapi v1 tests
        run: |
          # Set default model/provider for rlsapi v1 endpoint based on environment
          case "${{ matrix.environment }}" in
            ci)
              echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini"
              ;;
            azure)
              echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              echo "✅ Set E2E_DEFAULT_PROVIDER=azure and E2E_DEFAULT_MODEL=gpt-4o-mini"
              ;;
            vertexai)
              echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
              echo "✅ Set E2E_DEFAULT_PROVIDER=google-vertex and E2E_DEFAULT_MODEL=gemini-2.0-flash-exp"
              ;;
            *)
              echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
              echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
              echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
              echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini (fallback)"
              ;;
          esac
🤖 Prompt for AI Agents
In .github/workflows/e2e_tests.yaml around lines 166-189, the final echo uses
$E2E_DEFAULT_PROVIDER and $E2E_DEFAULT_MODEL which are written to GITHUB_ENV but
not exported into the current shell, so they will be empty; fix by assigning the
chosen values to shell variables inside each case branch (e.g. set local
E2E_DEFAULT_PROVIDER and E2E_DEFAULT_MODEL), then append those variables to
$GITHUB_ENV, and keep the final echo using the same local variables so the
printed values match what was written to GITHUB_ENV.
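
For reference, a minimal sketch of that shell-variable approach (illustrative only, not a committed change; it keeps a single write/echo path by assigning local variables in each branch):

      - name: Set default model for rlsapi v1 tests
        run: |
          case "${{ matrix.environment }}" in
            ci)       provider=openai;        model=gpt-4o-mini ;;
            azure)    provider=azure;         model=gpt-4o-mini ;;
            vertexai) provider=google-vertex; model=gemini-2.0-flash-exp ;;
            *)
              echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
              provider=openai; model=gpt-4o-mini ;;
          esac
          # Write the chosen values to GITHUB_ENV for later steps, then echo the
          # same local variables so the log matches what was actually written.
          echo "E2E_DEFAULT_PROVIDER=${provider}" >> "$GITHUB_ENV"
          echo "E2E_DEFAULT_MODEL=${model}" >> "$GITHUB_ENV"
          echo "✅ Set E2E_DEFAULT_PROVIDER=${provider} and E2E_DEFAULT_MODEL=${model}"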
