34 changes: 29 additions & 5 deletions .github/workflows/e2e_tests.yaml
@@ -135,9 +135,9 @@ jobs:
run: |
CONFIGS_DIR="tests/e2e/configs"
ENVIRONMENT="$CONFIG_ENVIRONMENT"

echo "Looking for configurations in $CONFIGS_DIR/"

# List available configurations
if [ -d "$CONFIGS_DIR" ]; then
echo "Available configurations:"
@@ -146,12 +146,12 @@ jobs:
echo "Configs directory '$CONFIGS_DIR' not found!"
exit 1
fi

# Determine which config file to use
CONFIG_FILE="$CONFIGS_DIR/run-$ENVIRONMENT.yaml"

echo "Looking for: $CONFIG_FILE"

if [ -f "$CONFIG_FILE" ]; then
echo "✅ Found config for environment: $ENVIRONMENT"
cp "$CONFIG_FILE" run.yaml
@@ -163,6 +163,30 @@ jobs:
exit 1
fi

- name: Set default model for rlsapi v1 tests
run: |
# Set default model/provider for rlsapi v1 endpoint based on environment
case "${{ matrix.environment }}" in
ci)
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
azure)
echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
vertexai)
echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
;;
*)
echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
esac
echo "✅ Set E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER} and E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL}"

Comment on lines +166 to +189
⚠️ Potential issue | 🟡 Minor

Fix the echo statement to display the correct values.

The echo statement on Line 188 won't display the correct values: variables appended to GITHUB_ENV only become available in subsequent steps, not in the shell that writes them. As a result, $E2E_DEFAULT_PROVIDER and $E2E_DEFAULT_MODEL will expand to empty strings or to stale values from previous steps.
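
For context, a minimal sketch (hypothetical step names, not from this PR) of how GITHUB_ENV behaves across steps:

      - name: Write variable
        run: |
          echo "MY_VAR=hello" >> "$GITHUB_ENV"
          echo "same step: '${MY_VAR:-}'"   # empty — not visible in the writing step
      - name: Read variable
        run: |
          echo "next step: '${MY_VAR}'"     # prints 'hello'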

Apply this diff to fix the logging:

       case "${{ matrix.environment }}" in
         ci)
           echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini"
           ;;
         azure)
           echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=azure and E2E_DEFAULT_MODEL=gpt-4o-mini"
           ;;
         vertexai)
           echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=google-vertex and E2E_DEFAULT_MODEL=gemini-2.0-flash-exp"
           ;;
         *)
           echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
           echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
           echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
+          echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini (fallback)"
           ;;
       esac
-      echo "✅ Set E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER} and E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL}"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Set default model for rlsapi v1 tests
run: |
# Set default model/provider for rlsapi v1 endpoint based on environment
case "${{ matrix.environment }}" in
ci)
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
azure)
echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
vertexai)
echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
;;
*)
echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
;;
esac
echo "✅ Set E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER} and E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL}"
- name: Set default model for rlsapi v1 tests
run: |
# Set default model/provider for rlsapi v1 endpoint based on environment
case "${{ matrix.environment }}" in
ci)
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini"
;;
azure)
echo "E2E_DEFAULT_PROVIDER=azure" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
echo "✅ Set E2E_DEFAULT_PROVIDER=azure and E2E_DEFAULT_MODEL=gpt-4o-mini"
;;
vertexai)
echo "E2E_DEFAULT_PROVIDER=google-vertex" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gemini-2.0-flash-exp" >> $GITHUB_ENV
echo "✅ Set E2E_DEFAULT_PROVIDER=google-vertex and E2E_DEFAULT_MODEL=gemini-2.0-flash-exp"
;;
*)
echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
echo "E2E_DEFAULT_PROVIDER=openai" >> $GITHUB_ENV
echo "E2E_DEFAULT_MODEL=gpt-4o-mini" >> $GITHUB_ENV
echo "✅ Set E2E_DEFAULT_PROVIDER=openai and E2E_DEFAULT_MODEL=gpt-4o-mini (fallback)"
;;
esac
🤖 Prompt for AI Agents
In .github/workflows/e2e_tests.yaml around lines 166-189, the final echo uses
$E2E_DEFAULT_PROVIDER and $E2E_DEFAULT_MODEL which are written to GITHUB_ENV but
not exported into the current shell, so they will be empty; fix by assigning the
chosen values to shell variables inside each case branch (e.g. set local
E2E_DEFAULT_PROVIDER and E2E_DEFAULT_MODEL), then append those variables to
$GITHUB_ENV, and keep the final echo using the same local variables so the
printed values match what was written to GITHUB_ENV.
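
A rough sketch of that alternative, keeping the chosen values in local shell variables and mirroring them into GITHUB_ENV (step name and values taken from the PR; the exact structure is an assumption, not a committed change):

      - name: Set default model for rlsapi v1 tests
        run: |
          # Pick values into local shell variables first.
          case "${{ matrix.environment }}" in
            ci)       PROVIDER=openai;        MODEL=gpt-4o-mini ;;
            azure)    PROVIDER=azure;         MODEL=gpt-4o-mini ;;
            vertexai) PROVIDER=google-vertex; MODEL=gemini-2.0-flash-exp ;;
            *)
              echo "⚠️ Unknown environment: ${{ matrix.environment }}, using defaults"
              PROVIDER=openai; MODEL=gpt-4o-mini
              ;;
          esac
          # Persist for later steps and log the same values now.
          echo "E2E_DEFAULT_PROVIDER=${PROVIDER}" >> "$GITHUB_ENV"
          echo "E2E_DEFAULT_MODEL=${MODEL}" >> "$GITHUB_ENV"
          echo "✅ Set E2E_DEFAULT_PROVIDER=${PROVIDER} and E2E_DEFAULT_MODEL=${MODEL}"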

- name: Show final configuration
run: |
echo "=== Configuration Summary ==="
3 changes: 3 additions & 0 deletions docker-compose-library.yaml
@@ -19,6 +19,9 @@ services:
# OpenAI
- OPENAI_API_KEY=${OPENAI_API_KEY}
- E2E_OPENAI_MODEL=${E2E_OPENAI_MODEL:-gpt-4-turbo}
# Default model for rlsapi v1 tests
- E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER:-openai}
- E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL:-gpt-4o-mini}
# Azure
- AZURE_API_KEY=${AZURE_API_KEY:-}
# RHAIIS
3 changes: 3 additions & 0 deletions docker-compose.yaml
@@ -55,6 +55,9 @@ services:
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY}
- AZURE_API_KEY=${AZURE_API_KEY}
# Default model for rlsapi v1 tests
- E2E_DEFAULT_PROVIDER=${E2E_DEFAULT_PROVIDER:-openai}
- E2E_DEFAULT_MODEL=${E2E_DEFAULT_MODEL:-gpt-4o-mini}
depends_on:
llama-stack:
condition: service_healthy
24 changes: 24 additions & 0 deletions tests/e2e/configuration/library-mode/lightspeed-stack-rlsapi.yaml
@@ -0,0 +1,24 @@
name: Lightspeed Core Service (LCS)
service:
host: 0.0.0.0
port: 8080
auth_enabled: false
workers: 1
color_log: true
access_log: true
llama_stack:
# Library mode - embeds llama-stack as library
use_as_library_client: true
library_client_config_path: run.yaml
user_data_collection:
feedback_enabled: true
feedback_storage: "/tmp/data/feedback"
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
inference:
# Configure default model/provider for rlsapi v1 endpoint
# These are set per-environment in the CI workflow
default_provider: ${env.E2E_DEFAULT_PROVIDER:=openai}
default_model: ${env.E2E_DEFAULT_MODEL:=gpt-4o-mini}
2 changes: 1 addition & 1 deletion tests/e2e/configuration/library-mode/lightspeed-stack.yaml
@@ -16,4 +16,4 @@ user_data_collection:
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
module: "noop"
25 changes: 25 additions & 0 deletions tests/e2e/configuration/server-mode/lightspeed-stack-rlsapi.yaml
@@ -0,0 +1,25 @@
name: Lightspeed Core Service (LCS)
service:
host: 0.0.0.0
port: 8080
auth_enabled: false
workers: 1
color_log: true
access_log: true
llama_stack:
# Server mode - connects to separate llama-stack service
use_as_library_client: false
url: http://llama-stack:8321
api_key: xyzzy
user_data_collection:
feedback_enabled: true
feedback_storage: "/tmp/data/feedback"
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
inference:
# Configure default model/provider for rlsapi v1 endpoint
# These are set per-environment in the CI workflow
default_provider: ${env.E2E_DEFAULT_PROVIDER:=openai}
default_model: ${env.E2E_DEFAULT_MODEL:=gpt-4o-mini}
2 changes: 1 addition & 1 deletion tests/e2e/configuration/server-mode/lightspeed-stack.yaml
@@ -17,4 +17,4 @@ user_data_collection:
transcripts_enabled: true
transcripts_storage: "/tmp/data/transcripts"
authentication:
module: "noop"
module: "noop"
14 changes: 14 additions & 0 deletions tests/e2e/features/environment.py
@@ -171,6 +171,15 @@ def before_feature(context: Context, feature: Feature) -> None:
switch_config(context.feature_config)
restart_container("lightspeed-stack")

if "RlsapiConfig" in feature.tags:
mode_dir = "library-mode" if context.is_library_mode else "server-mode"
context.feature_config = (
f"tests/e2e/configuration/{mode_dir}/lightspeed-stack-rlsapi.yaml"
)
context.default_config_backup = create_config_backup("lightspeed-stack.yaml")
switch_config(context.feature_config)
restart_container("lightspeed-stack")

if "Feedback" in feature.tags:
context.hostname = os.getenv("E2E_LSC_HOSTNAME", "localhost")
context.port = os.getenv("E2E_LSC_PORT", "8080")
@@ -184,6 +193,11 @@ def after_feature(context: Context, feature: Feature) -> None:
restart_container("lightspeed-stack")
remove_config_backup(context.default_config_backup)

if "RlsapiConfig" in feature.tags:
switch_config(context.default_config_backup)
restart_container("lightspeed-stack")
remove_config_backup(context.default_config_backup)

if "Feedback" in feature.tags:
for conversation_id in context.feedback_conversations:
url = f"http://{context.hostname}:{context.port}/v1/conversations/{conversation_id}"
16 changes: 16 additions & 0 deletions tests/e2e/features/rlsapi_v1.feature
@@ -0,0 +1,16 @@
@RlsapiConfig
Feature: RLSAPI v1 infer endpoint
Basic tests for the RLSAPI v1 inference endpoint.

Background:
Given The service is started locally
And REST API service prefix is /v1

Scenario: Verify RLSAPI v1 infer endpoint returns 200
Given The system is in default state
When I access REST API endpoint "infer" using HTTP POST method
"""
{"question": "Say hello"}
"""
Then The status code of the response is 200
And Content type of response should be set to "application/json"
1 change: 1 addition & 0 deletions tests/e2e/test_list.txt
@@ -9,3 +9,4 @@ features/info.feature
features/query.feature
features/streaming_query.feature
features/rest_api.feature
features/rlsapi_v1.feature