Conversation
@kening-fiddler kening-fiddler commented Jul 11, 2025

Combines #36 and also fixes #39

  • UI TypeError: A TypeError occurred in the UI when accessing properties of undefined objects.

  • Message Format Compatibility: Messages in Cursor's format needed to be transformed into the format the AI SDK expects.
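The second bullet can be sketched as follows. This is a minimal, hypothetical version of the transformation; the type and field names are assumptions based on the review discussion below, not the PR's actual code:

```typescript
// Hypothetical shape of an incoming Cursor-style message (assumption).
type CursorMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content?: string;
  tool_call_id?: string;
  name?: string;
  tool_calls?: unknown[];
};

// Sketch: normalize Cursor's tool/assistant messages into a shape closer to
// what the AI SDK expects, with an empty-string fallback for missing content.
function transformCursorMessages(messages: CursorMessage[]): CursorMessage[] {
  return messages.map((message) => {
    if (message.role === "tool") {
      // Re-emit tool results as assistant messages.
      return {
        role: "assistant" as const,
        content: message.content ?? "",
        tool_call_id: message.tool_call_id,
        name: message.name,
      };
    }
    if (message.role === "assistant" && message.tool_calls) {
      return {
        role: "assistant" as const,
        content: message.content ?? "",
        tool_calls: message.tool_calls,
      };
    }
    // All other messages pass through unchanged.
    return message;
  });
}
```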


coderabbitai bot commented Jul 11, 2025

Walkthrough

The updates introduce CORS support and enhanced logging in the OpenAI-compatible API route, add a helper for message transformation, and improve robustness in UI components by using optional chaining. The Dockerfile now installs openssl, and the package.json includes a pnpm configuration for built dependencies. Middleware for request logging is also added.

Changes

File(s) and change summary:

  • Dockerfile: Added openssl to the list of installed packages alongside libc6-compat.
  • package.json: Introduced a pnpm section with onlyBuiltDependencies listing specific dependencies.
  • src/app/[...openai]/route.ts: Added a message transformation helper, CORS headers, enhanced logging, an OPTIONS handler, and expanded GET endpoint support.
  • src/app/page.tsx: Applied optional chaining to prevent errors when accessing config data.
  • src/components/LogsList.tsx: Used optional chaining and fallback values for safer access to log metadata and messages.
  • src/middleware.ts: Introduced middleware for logging HTTP request details; static/image routes are excluded from processing.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Middleware
    participant API Route

    Client->>Middleware: Send HTTP request
    Middleware->>Middleware: Log method, URL, headers
    Middleware->>API Route: Forward request

    API Route->>API Route: Log request, transform messages (POST)
    API Route->>API Route: Add CORS headers to response
    API Route-->>Client: Respond with data (with CORS headers)
```

Poem

🐇
A hop and a skip, new code in the mix,
CORS now unlocked with a middleware fix!
Optional chains keep errors away,
Logging requests as they come our way.
With Docker and pnpm both up to date,
This bunny says, “Ship it!”—the changes are great!



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (2)
src/app/[...openai]/route.ts (2)

282-285: Fix: Use transformed messages in non-streaming response.

The non-streaming response uses the original messages instead of the transformed modifiedMessages, which defeats the purpose of the message transformation.

Apply this fix:

const result = await generateText({
  model: aiModel,
- messages,
+ messages: modifiedMessages,
});

292-294: Fix: Inconsistent cost calculation formula.

The cost calculation formula differs between the streaming and non-streaming responses: the streaming version divides token counts by 1,000,000 (lines 219-221), but the non-streaming version does not.

Apply this fix for consistency:

const modelCost = await getModelCost(provider, model);
- const inputCost = inputTokens * modelCost.inputTokenCost;
- const outputCost = outputTokens * modelCost.outputTokenCost;
+ const inputCost = (inputTokens / 1000000) * modelCost.inputTokenCost;
+ const outputCost = (outputTokens / 1000000) * modelCost.outputTokenCost;
const totalCost = inputCost + outputCost;
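Assuming per-million-token pricing (the rates below are made up for illustration, not real model costs), the corrected formula works out as:

```typescript
// Illustrative per-million-token rates in USD (hypothetical values).
const modelCost = { inputTokenCost: 3.0, outputTokenCost: 15.0 };

// Compute cost by scaling raw token counts down to millions of tokens,
// matching the streaming path's formula.
function computeCost(inputTokens: number, outputTokens: number) {
  const inputCost = (inputTokens / 1_000_000) * modelCost.inputTokenCost;
  const outputCost = (outputTokens / 1_000_000) * modelCost.outputTokenCost;
  return { inputCost, outputCost, totalCost: inputCost + outputCost };
}
```

For example, 500,000 input tokens and 200,000 output tokens at these rates cost 1.5 + 3.0 = 4.5 USD; without the division, the same request would be billed a million times too high.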
🧹 Nitpick comments (1)
src/middleware.ts (1)

4-13: Consider production-ready logging practices.

The middleware correctly logs request information, but consider these improvements:

  • Use a proper logging library instead of console.log for production
  • Be cautious about logging sensitive headers (Authorization, API keys)
  • Consider adding log levels to control verbosity in different environments
export function middleware(request: NextRequest) {
-  console.log("=== INCOMING REQUEST ===");
-  console.log("Method:", request.method);
-  console.log("URL:", request.url);
-  console.log("Path:", new URL(request.url).pathname);
-  console.log("Headers:", Object.fromEntries(request.headers));
-  console.log("=======================");
+  if (process.env.NODE_ENV === 'development') {
+    console.log("=== INCOMING REQUEST ===");
+    console.log("Method:", request.method);
+    console.log("URL:", request.url);
+    console.log("Path:", new URL(request.url).pathname);
+    // Filter out sensitive headers
+    const safeHeaders = Object.fromEntries(
+      [...request.headers].filter(([key]) => 
+        !key.toLowerCase().includes('authorization') && 
+        !key.toLowerCase().includes('api-key')
+      )
+    );
+    console.log("Headers:", safeHeaders);
+    console.log("=======================");
+  }
  
  return NextResponse.next();
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6577121 and c20b7e4.

📒 Files selected for processing (6)
  • Dockerfile (2 hunks)
  • package.json (1 hunks)
  • src/app/[...openai]/route.ts (8 hunks)
  • src/app/page.tsx (1 hunks)
  • src/components/LogsList.tsx (2 hunks)
  • src/middleware.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/app/[...openai]/route.ts (1)
src/lib/db.ts (1)
  • insertLog (4-8)
🔇 Additional comments (14)
package.json (1)

20-28: LGTM! Proper pnpm configuration for native dependencies.

The onlyBuiltDependencies configuration is correctly set up to handle packages that require building from source, particularly important for native modules like sqlite3 and Prisma components in containerized environments.

Dockerfile (1)

5-5: LGTM! OpenSSL installation for secure operations.

Adding openssl to the Alpine Linux packages is appropriate for an API application that likely needs SSL/TLS functionality for secure communications.

src/components/LogsList.tsx (3)

45-46: LGTM! Safe property access with proper fallbacks.

The optional chaining prevents TypeErrors when log.metadata is undefined, with appropriate fallback values for tokens and cost calculations.


48-48: LGTM! Safe array access for message handling.

The optional chaining on log.body?.messages prevents runtime errors when the body or messages array is undefined.


54-54: LGTM! Consistent fallback handling for provider and model.

The fallback values ("other", "unknown") provide graceful degradation when metadata properties are missing, ensuring the UI remains functional.

Also applies to: 76-82
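The optional-chaining pattern described in these comments can be illustrated with a hypothetical log shape (the real LogsList types live in the PR and are not reproduced here):

```typescript
// Hypothetical log shape (assumption for illustration only).
type Log = {
  metadata?: { tokens?: number; cost?: number; provider?: string; model?: string };
  body?: { messages?: { role: string; content: string }[] };
};

// Safe accessors mirroring the pattern: optional chaining plus fallbacks so
// missing metadata or messages never throw a TypeError.
function summarizeLog(log: Log) {
  return {
    tokens: log.metadata?.tokens ?? 0,
    cost: log.metadata?.cost ?? 0,
    provider: log.metadata?.provider ?? "other",
    model: log.metadata?.model ?? "unknown",
    messageCount: log.body?.messages?.length ?? 0,
  };
}
```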

src/app/[...openai]/route.ts (8)

96-107: LGTM! Comprehensive request logging.

The detailed logging of POST requests including headers and body provides good observability for debugging message format issues.


133-134: LGTM! Proper message transformation applied.

The transformed messages are correctly used in the streaming flow, addressing the message format compatibility issue mentioned in the PR objectives.


275-277: LGTM! Proper CORS headers for streaming responses.

The CORS headers are correctly configured to allow cross-origin requests with appropriate methods and headers.


308-315: LGTM! Consistent CORS headers in JSON response.

The CORS headers are properly applied to the non-streaming JSON response, ensuring cross-origin compatibility.


340-344: LGTM! Enhanced GET request logging.

The comprehensive logging of GET requests improves debugging capabilities and aligns with the overall logging strategy.


346-347: LGTM! Support for both model endpoint variants.

Supporting both "models" and "v1/models" endpoints improves API compatibility with different client implementations.


507-527: LGTM! Proper CORS preflight handling.

The OPTIONS handler correctly implements CORS preflight responses with appropriate headers and status code (204).
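A minimal preflight handler matching this description might look like the following sketch. The exact headers and origin policy are assumptions, and the PR's actual handler may differ; in production, a specific allowed origin is usually preferable to "*":

```typescript
// Sketch of a CORS preflight handler in the Next.js route-handler style:
// respond to OPTIONS with 204 No Content and the CORS headers.
export async function OPTIONS() {
  return new Response(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}
```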


21-46: Verify transformCursorMessages aligns with AI SDK message schema

The current implementation in src/app/[...openai]/route.ts (lines 21–46):

  • Converts Cursor messages with role === "tool" into assistant messages:
    { role: "assistant", content: message.content || "", tool_call_id: message.tool_call_id, name: message.name }
  • Converts assistant messages carrying tool_calls into:
    { role: "assistant", content: message.content || "", tool_calls: message.tool_calls }
  • Passes all other messages through unchanged.

Before merging, please confirm:

  • Does the AI SDK expect a field named tool_call_id (singular) on assistant messages, or should it be wrapped in a tool_calls array/object?
  • Are there other Cursor roles or message shapes (e.g. user, system, embedded metadata) that need explicit handling or normalization?
  • If the SDK schema differs, update transformCursorMessages to produce the exact structure (field names and nesting) required by the downstream SDK.
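As one plausible answer to the first question: under the Vercel AI SDK's CoreMessage types (the field names below are an assumption and should be verified against the installed ai package version), tool results travel on a dedicated tool role rather than as an assistant message with a singular tool_call_id field:

```typescript
// Assumed CoreMessage-style shapes; verify against the installed AI SDK.
type ToolResultPart = {
  type: "tool-result";
  toolCallId: string;
  toolName: string;
  result: unknown;
};

type CoreToolMessage = { role: "tool"; content: ToolResultPart[] };

// Under this schema, a Cursor tool message would map to a CoreToolMessage,
// not to an assistant message carrying tool_call_id directly.
function toCoreToolMessage(m: {
  tool_call_id?: string;
  name?: string;
  content?: string;
}): CoreToolMessage {
  return {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: m.tool_call_id ?? "",
        toolName: m.name ?? "",
        result: m.content ?? "",
      },
    ],
  };
}
```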
src/app/page.tsx (1)

74-74: LGTM! Good defensive programming practice.

Adding optional chaining when accessing the find method prevents potential TypeError if configData is null or undefined. This aligns well with the PR objective of fixing TypeError issues in the UI, and the existing logic already handles falsy values for defaultConfig appropriately.



Development

Successfully merging this pull request may close these issues.

Invalid prompt: messages must be an array of CoreMessage

1 participant