Message Format Compatibility Fix #40
Conversation
Walkthrough

The updates introduce CORS support and enhanced logging in the OpenAI-compatible API route, add a helper for message transformation, and improve robustness in UI components by using optional chaining. The Dockerfile now installs openssl.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Middleware
    participant API Route
    Client->>Middleware: Send HTTP request
    Middleware->>Middleware: Log method, URL, headers
    Middleware->>API Route: Forward request
    API Route->>API Route: Log request, transform messages (POST)
    API Route->>API Route: Add CORS headers to response
    API Route-->>Client: Respond with data (with CORS headers)
```
Actionable comments posted: 0
🔭 Outside diff range comments (2)
src/app/[...openai]/route.ts (2)
282-285: Fix: Use transformed messages in non-streaming response.

The non-streaming response uses the original `messages` instead of the transformed `modifiedMessages`, which defeats the purpose of the message transformation. Apply this fix:

```diff
 const result = await generateText({
   model: aiModel,
-  messages,
+  messages: modifiedMessages,
 });
```
292-294: Fix: Inconsistent cost calculation formula.

The cost calculation formula differs between streaming and non-streaming responses. The streaming version divides by 1,000,000 (lines 219-221), but the non-streaming version doesn't. Apply this fix for consistency:

```diff
 const modelCost = await getModelCost(provider, model);
-const inputCost = inputTokens * modelCost.inputTokenCost;
-const outputCost = outputTokens * modelCost.outputTokenCost;
+const inputCost = (inputTokens / 1000000) * modelCost.inputTokenCost;
+const outputCost = (outputTokens / 1000000) * modelCost.outputTokenCost;
 const totalCost = inputCost + outputCost;
```
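To make the per-million-token math concrete, here is a small standalone sketch; the `inputTokenCost` and `outputTokenCost` rates are invented example values, not the real pricing returned by `getModelCost`:

```typescript
// Hypothetical per-million-token rates in USD; real values would come
// from getModelCost(provider, model).
const inputTokenCost = 3.0;   // USD per 1M input tokens (example value)
const outputTokenCost = 15.0; // USD per 1M output tokens (example value)

// Both sides of the formula divide token counts by 1,000,000 so the
// rates can be expressed per million tokens, matching the streaming path.
function computeCost(inputTokens: number, outputTokens: number): number {
  const inputCost = (inputTokens / 1_000_000) * inputTokenCost;
  const outputCost = (outputTokens / 1_000_000) * outputTokenCost;
  return inputCost + outputCost;
}
```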
🧹 Nitpick comments (1)
src/middleware.ts (1)
4-13: Consider production-ready logging practices.

The middleware correctly logs request information, but consider these improvements:

- Use a proper logging library instead of `console.log` for production
- Be cautious about logging sensitive headers (Authorization, API keys)
- Consider adding log levels to control verbosity in different environments

```diff
 export function middleware(request: NextRequest) {
-  console.log("=== INCOMING REQUEST ===");
-  console.log("Method:", request.method);
-  console.log("URL:", request.url);
-  console.log("Path:", new URL(request.url).pathname);
-  console.log("Headers:", Object.fromEntries(request.headers));
-  console.log("=======================");
+  if (process.env.NODE_ENV === 'development') {
+    console.log("=== INCOMING REQUEST ===");
+    console.log("Method:", request.method);
+    console.log("URL:", request.url);
+    console.log("Path:", new URL(request.url).pathname);
+    // Filter out sensitive headers
+    const safeHeaders = Object.fromEntries(
+      [...request.headers].filter(([key]) =>
+        !key.toLowerCase().includes('authorization') &&
+        !key.toLowerCase().includes('api-key')
+      )
+    );
+    console.log("Headers:", safeHeaders);
+    console.log("=======================");
+  }
   return NextResponse.next();
 }
```
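The header-filtering idea can also be factored into a standalone helper that is easy to unit-test; this sketch assumes `authorization` and `api-key` are the only sensitive substrings worth redacting, which a real deployment may want to extend:

```typescript
// Drops any header whose name contains a sensitive substring before logging.
// The sensitive list here is an assumption; extend it (e.g. "cookie") as needed.
function redactSensitiveHeaders(
  headers: [string, string][]
): Record<string, string> {
  const sensitive = ["authorization", "api-key"];
  return Object.fromEntries(
    headers.filter(
      ([key]) => !sensitive.some((s) => key.toLowerCase().includes(s))
    )
  );
}
```

In the middleware this would be called as `redactSensitiveHeaders([...request.headers])` before the `console.log`.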
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- Dockerfile (2 hunks)
- package.json (1 hunks)
- src/app/[...openai]/route.ts (8 hunks)
- src/app/page.tsx (1 hunks)
- src/components/LogsList.tsx (2 hunks)
- src/middleware.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/app/[...openai]/route.ts (1)
src/lib/db.ts (1)
insertLog (4-8)
🔇 Additional comments (14)
package.json (1)
20-28: LGTM! Proper pnpm configuration for native dependencies.

The `onlyBuiltDependencies` configuration is correctly set up to handle packages that require building from source, particularly important for native modules like `sqlite3` and Prisma components in containerized environments.

Dockerfile (1)

5-5: LGTM! OpenSSL installation for secure operations.

Adding `openssl` to the Alpine Linux packages is appropriate for an API application that likely needs SSL/TLS functionality for secure communications.

src/components/LogsList.tsx (3)
45-46: LGTM! Safe property access with proper fallbacks.

The optional chaining prevents TypeErrors when `log.metadata` is undefined, with appropriate fallback values for tokens and cost calculations.

48-48: LGTM! Safe array access for message handling.

The optional chaining on `log.body?.messages` prevents runtime errors when the body or messages array is undefined.

54-54: LGTM! Consistent fallback handling for provider and model.

The fallback values ("other", "unknown") provide graceful degradation when metadata properties are missing, ensuring the UI remains functional.

Also applies to: 76-82
src/app/[...openai]/route.ts (8)
96-107: LGTM! Comprehensive request logging.

The detailed logging of POST requests including headers and body provides good observability for debugging message format issues.

133-134: LGTM! Proper message transformation applied.

The transformed messages are correctly used in the streaming flow, addressing the message format compatibility issue mentioned in the PR objectives.

275-277: LGTM! Proper CORS headers for streaming responses.

The CORS headers are correctly configured to allow cross-origin requests with appropriate methods and headers.

308-315: LGTM! Consistent CORS headers in JSON response.

The CORS headers are properly applied to the non-streaming JSON response, ensuring cross-origin compatibility.

340-344: LGTM! Enhanced GET request logging.

The comprehensive logging of GET requests improves debugging capabilities and aligns with the overall logging strategy.

346-347: LGTM! Support for both model endpoint variants.

Supporting both "models" and "v1/models" endpoints improves API compatibility with different client implementations.

507-527: LGTM! Proper CORS preflight handling.

The OPTIONS handler correctly implements CORS preflight responses with appropriate headers and status code (204).
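For reference, a minimal sketch of such a preflight handler follows; the exact header values from the PR are not reproduced here, so the allowed origins, methods, and headers below are typical assumptions rather than the project's actual configuration:

```typescript
// Hedged sketch of a CORS preflight handler in the shape of a Next.js
// route-handler export. A 204 with the Access-Control-* headers tells the
// browser the subsequent real request is permitted.
export function OPTIONS(): Response {
  return new Response(null, {
    status: 204,
    headers: {
      // "*" and these method/header lists are assumed values, not the PR's.
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}
```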
21-46: Verify transformCursorMessages aligns with AI SDK message schema

The current implementation in `src/app/[...openai]/route.ts` (lines 21-46):

- Converts Cursor messages with `role === "tool"` into assistant messages: `{ role: "assistant", content: message.content || "", tool_call_id: message.tool_call_id, name: message.name }`
- Converts assistant messages carrying `tool_calls` into: `{ role: "assistant", content: message.content || "", tool_calls: message.tool_calls }`
- Passes all other messages through unchanged.
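The behavior described above can be sketched as follows; the `Message` type and its field names (`tool_call_id`, `name`, `tool_calls`) mirror the review's description of the Cursor shape and are not verified against the AI SDK schema:

```typescript
// Assumed message shape based on the review's description, not the SDK's
// actual types.
type Message = {
  role: string;
  content?: string;
  tool_call_id?: string;
  name?: string;
  tool_calls?: unknown[];
};

function transformCursorMessages(messages: Message[]): Message[] {
  return messages.map((message) => {
    // Cursor "tool" messages become assistant messages carrying the tool id.
    if (message.role === "tool") {
      return {
        role: "assistant",
        content: message.content || "",
        tool_call_id: message.tool_call_id,
        name: message.name,
      };
    }
    // Assistant messages with tool_calls keep the tool_calls array.
    if (message.role === "assistant" && message.tool_calls) {
      return {
        role: "assistant",
        content: message.content || "",
        tool_calls: message.tool_calls,
      };
    }
    // Everything else passes through unchanged.
    return message;
  });
}
```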
Before merging, please confirm:

- Does the AI SDK expect a field named `tool_call_id` (singular) on assistant messages, or should it be wrapped in a `tool_calls` array/object?
- Are there other Cursor roles or message shapes (e.g. `user`, `system`, embedded metadata) that need explicit handling or normalization?
- If the SDK schema differs, update `transformCursorMessages` to produce the exact structure (field names and nesting) required by the downstream SDK.

src/app/page.tsx (1)
74-74: LGTM! Good defensive programming practice.

Adding optional chaining when accessing the `find` method prevents a potential TypeError if `configData` is null or undefined. This aligns well with the PR objective of fixing TypeError issues in the UI, and the existing logic already handles falsy values for `defaultConfig` appropriately.
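A tiny illustration of that guard, where the `Config` shape and the `pickDefault` helper are hypothetical stand-ins for the component's actual state rather than code from the PR:

```typescript
// Hypothetical config shape; the real component's data may differ.
type Config = { id: string; isDefault: boolean };

// configData may be null/undefined before a fetch resolves; optional
// chaining keeps the .find lookup from throwing a TypeError.
function pickDefault(configData?: Config[] | null): Config | null {
  return configData?.find((c) => c.isDefault) ?? null;
}
```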
Combines #36 and also fixes #39.

- UI TypeError: A TypeError occurred in the UI due to accessing undefined properties.
- Message Format Compatibility: There was a need to transform messages from Cursor's format to the AI SDK's expected format.