Feature/gateway unified analysis #10
Conversation
❌ Deploy Preview for crowncode-by-rthur failed.
Pull request overview
This pull request transforms the Netlify analyze function from returning mock responses to acting as a proxy that forwards multipart requests to an external inference service. It also adds OAuth-related environment variables to support future integration with Spotify and YouTube services.
Key changes:
- Replaced the mock response logic in the Netlify function with a proxy implementation that forwards requests to an external inference API
- Added environment variables for Spotify and YouTube OAuth flows, plus an INFERENCE_API_URL for the external service
- Removed the unused fs import from the Netlify function
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| netlify/functions/analyze.js | Converted from mock response generator to proxy that forwards multipart/form-data requests to external inference service |
| platform/.env.example | Added OAuth redirect URIs for Spotify and YouTube, plus INFERENCE_API_URL configuration |
```js
const response = await fetch(target, {
  method: 'POST',
  headers: { 'content-type': contentType },
  body: bodyBuffer
})
```
Copilot AI, Dec 18, 2025
The fetch request has no timeout configured, which means it could hang indefinitely if the inference service is slow or unresponsive. This will block the Netlify function and potentially cause user-facing timeouts. Consider adding an AbortSignal with a timeout (e.g., 30 seconds for analysis operations) to ensure the request fails gracefully if the backend takes too long.
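A minimal sketch of the suggested timeout, using AbortSignal.timeout (available in Node 17.3+); ANALYSIS_TIMEOUT_MS and forwardWithTimeout are illustrative names, not from the PR:

```javascript
// Sketch: abort the proxied fetch if the inference service takes too long.
// The 30-second value mirrors the review comment's suggestion.
const ANALYSIS_TIMEOUT_MS = 30_000

async function forwardWithTimeout (target, contentType, bodyBuffer) {
  // AbortSignal.timeout() aborts the request after the given number of
  // milliseconds; fetch then rejects with an error named 'TimeoutError'.
  return fetch(target, {
    method: 'POST',
    headers: { 'content-type': contentType },
    body: bodyBuffer,
    signal: AbortSignal.timeout(ANALYSIS_TIMEOUT_MS)
  })
}
```

In the function's catch block, checking `error.name === 'TimeoutError'` would distinguish a timeout from other network failures, so the proxy could return 504 instead of 502 in that case.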
```
SPOTIFY_REDIRECT_URI=http://localhost:3000/api/auth/spotify/callback
YOUTUBE_OAUTH_CLIENT_ID=your-youtube-oauth-client-id
YOUTUBE_OAUTH_CLIENT_SECRET=your-youtube-oauth-client-secret
YOUTUBE_OAUTH_REDIRECT_URI=http://localhost:3000/ai-music-detection
```
Copilot AI, Dec 18, 2025
INFERENCE_API_URL appears to be a duplicate of NEXT_PUBLIC_API_URL based on the default value (both point to http://localhost:8000). The Netlify function falls back to NEXT_PUBLIC_API_URL if INFERENCE_API_URL is not set (line 9 of analyze.js). Consider either removing INFERENCE_API_URL if it's truly redundant, or documenting why two separate environment variables are needed (e.g., if they should point to different services in production).
Suggested change:

```diff
 YOUTUBE_OAUTH_REDIRECT_URI=http://localhost:3000/ai-music-detection
+# INFERENCE_API_URL is kept separate from NEXT_PUBLIC_API_URL so that
+# AI inference can be hosted on a different service/URL in production.
+# In local development both services run on the same backend.
```
```js
const target = targetBase.endsWith('/api/analyze')
  ? targetBase
  : `${targetBase.replace(/\/$/, '')}/api/analyze`
```
Copilot AI, Dec 18, 2025
The Netlify function forwards requests to '/api/analyze' on the inference service, but the current backend implementation in this repository only exposes '/api/youtube/analyze' (see backend/app/routes/youtube.py). Either the backend needs to be updated to add a unified '/api/analyze' endpoint that handles multiple source types (file, youtube, etc.), or the documentation should clarify that INFERENCE_API_URL should point to a different external service that implements this unified endpoint. Without this endpoint, the proxy will return 404 errors.
Suggested change:

```diff
-const target = targetBase.endsWith('/api/analyze')
-  ? targetBase
-  : `${targetBase.replace(/\/$/, '')}/api/analyze`
+const target = /\/api\/(analyze|youtube\/analyze)$/.test(targetBase)
+  ? targetBase
+  : `${targetBase.replace(/\/$/, '')}/api/youtube/analyze`
```
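The suggested target resolution can be exercised as a standalone helper; resolveTarget is an illustrative name, not from the PR:

```javascript
// Sketch of the suggested resolution: reuse the base URL when it already
// ends in a known analyze endpoint, otherwise append the route the
// backend actually exposes today (/api/youtube/analyze).
function resolveTarget (targetBase) {
  return /\/api\/(analyze|youtube\/analyze)$/.test(targetBase)
    ? targetBase
    : `${targetBase.replace(/\/$/, '')}/api/youtube/analyze`
}

console.log(resolveTarget('http://localhost:8000'))
// → http://localhost:8000/api/youtube/analyze
console.log(resolveTarget('http://localhost:8000/api/analyze'))
// → http://localhost:8000/api/analyze
```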
```js
const response = await fetch(target, {
  method: 'POST',
  headers: { 'content-type': contentType },
  body: bodyBuffer
})

const text = await response.text()
return {
  statusCode: response.status,
  body: text,
  headers: {
    'Content-Type': response.headers.get('content-type') || 'application/json'
  }
```
Copilot AI, Dec 18, 2025
The proxy only forwards the Content-Type header to the inference service, but other important headers from the original request (such as Accept, User-Agent, or custom headers) are not being forwarded. While Content-Type is the most critical for multipart uploads, consider whether other headers should be preserved. Additionally, headers from the backend response (other than Content-Type) are not being forwarded back to the client, which could include important metadata like rate limit information or custom error details.
Suggested change:

```diff
-const response = await fetch(target, {
-  method: 'POST',
-  headers: { 'content-type': contentType },
-  body: bodyBuffer
-})
-const text = await response.text()
-return {
-  statusCode: response.status,
-  body: text,
-  headers: {
-    'Content-Type': response.headers.get('content-type') || 'application/json'
-  }
+// Forward most incoming headers to the backend, excluding hop-by-hop / problematic ones.
+const incomingHeaders = event.headers || {}
+const outgoingHeaders = {}
+const headerBlacklist = new Set(['host', 'connection', 'content-length', 'transfer-encoding', 'accept-encoding'])
+for (const [name, value] of Object.entries(incomingHeaders)) {
+  if (typeof value === 'undefined') continue
+  const lowerName = name.toLowerCase()
+  if (headerBlacklist.has(lowerName)) continue
+  // We'll override content-type below with the validated value.
+  if (lowerName === 'content-type') continue
+  outgoingHeaders[name] = value
+}
+// Ensure the validated multipart content-type is sent to the backend.
+outgoingHeaders['content-type'] = contentType
+const response = await fetch(target, {
+  method: 'POST',
+  headers: outgoingHeaders,
+  body: bodyBuffer
+})
+const text = await response.text()
+// Forward most backend response headers back to the client.
+const responseHeaders = {}
+const responseHeaderBlacklist = new Set(['connection', 'content-length', 'transfer-encoding'])
+response.headers.forEach((value, name) => {
+  const lowerName = name.toLowerCase()
+  if (responseHeaderBlacklist.has(lowerName)) return
+  responseHeaders[name] = value
+})
+if (!Object.keys(responseHeaders).some((h) => h.toLowerCase() === 'content-type')) {
+  responseHeaders['Content-Type'] = 'application/json'
+}
+return {
+  statusCode: response.status,
+  body: text,
+  headers: responseHeaders
```
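The request-header filtering step can be exercised in isolation. This is a sketch assuming event.headers is a plain object, as in Netlify function events; buildOutgoingHeaders is an illustrative helper name:

```javascript
// Headers that must not be blindly forwarded to the backend: hop-by-hop
// headers and ones fetch computes itself (host, content-length, etc.).
const HEADER_BLACKLIST = new Set([
  'host', 'connection', 'content-length', 'transfer-encoding', 'accept-encoding'
])

function buildOutgoingHeaders (incomingHeaders, contentType) {
  const outgoing = {}
  for (const [name, value] of Object.entries(incomingHeaders || {})) {
    if (value === undefined) continue
    const lower = name.toLowerCase()
    // Drop blacklisted headers; content-type is re-added below so the
    // validated multipart value always wins over the raw incoming one.
    if (HEADER_BLACKLIST.has(lower) || lower === 'content-type') continue
    outgoing[name] = value
  }
  outgoing['content-type'] = contentType
  return outgoing
}
```

A usage sketch: `buildOutgoingHeaders({ Host: 'example.org', Accept: 'application/json' }, 'multipart/form-data; boundary=abc')` keeps Accept, drops Host, and forces the validated content-type.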
```js
} catch (error) {
  return {
    statusCode: 502,
    body: JSON.stringify({
      errors: ['backend_unreachable'],
      message: 'Failed to reach inference service'
    })
  }
```
Copilot AI, Dec 18, 2025
The catch block doesn't log the error details, making debugging difficult when the inference service fails. The error message returned to the client is generic ('Failed to reach inference service'), which doesn't help distinguish between network errors, DNS failures, connection refused, or other issues. Consider logging the error (e.g., console.error(error)) for debugging purposes while keeping the client-facing message generic for security.
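A sketch of the suggested fix: log server-side, keep the client response generic. The error shape mirrors the PR's 502 response; backendUnreachableResponse is an illustrative helper name:

```javascript
// Sketch: record the underlying failure (network error, DNS failure,
// connection refused, ...) in the Netlify function logs for debugging,
// while returning only a generic message to the client.
function backendUnreachableResponse (error) {
  console.error('Inference proxy failed:', error)
  return {
    statusCode: 502,
    body: JSON.stringify({
      errors: ['backend_unreachable'],
      message: 'Failed to reach inference service'
    })
  }
}
```

The catch block in analyze.js would then become `catch (error) { return backendUnreachableResponse(error) }`.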
```diff
-return {
-  statusCode: 200,
-  body: JSON.stringify({ result, warnings: ['gateway_mock_preview'] })
+const bodyBuffer = Buffer.from(event.body || '', event.isBase64Encoded ? 'base64' : 'utf8')
```
Copilot AI, Dec 18, 2025
The Buffer encoding logic may corrupt binary data in multipart/form-data requests. When event.isBase64Encoded is false, the code treats the body as UTF-8 text, but multipart form data containing binary files (like audio files) should be treated as binary. This will corrupt the audio file data when forwarding to the inference service. Netlify automatically sets isBase64Encoded to true for binary content, but the utf8 fallback is problematic if that detection fails. Consider using 'binary' or 'latin1' encoding instead of 'utf8' for the non-base64 case, or always treat multipart bodies as binary.
Suggested change:

```diff
-const bodyBuffer = Buffer.from(event.body || '', event.isBase64Encoded ? 'base64' : 'utf8')
+const bodyBuffer = Buffer.from(
+  event.body || '',
+  event.isBase64Encoded ? 'base64' : 'binary'
+)
```
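The corruption the comment describes is easy to demonstrate: decoding arbitrary bytes as UTF-8 is lossy, while 'latin1' (for which 'binary' is an alias in Node) round-trips every byte value:

```javascript
// Arbitrary bytes that do not form valid UTF-8 (e.g. the start of a
// binary file): 0xff and lone continuation bytes are invalid sequences.
const binaryBytes = Buffer.from([0xff, 0xd8, 0x00, 0x9c])

// Round-trip through each encoding and compare with the original bytes.
const asUtf8 = Buffer.from(binaryBytes.toString('utf8'), 'utf8')
const asLatin1 = Buffer.from(binaryBytes.toString('latin1'), 'latin1')

console.log(binaryBytes.equals(asUtf8))   // false: invalid sequences became U+FFFD
console.log(binaryBytes.equals(asLatin1)) // true: every byte preserved
```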
This pull request updates the Netlify analyze function to act as a proxy that forwards multipart requests to an external inference service, instead of returning a mock response. It also adds new environment variables to support integration with external services such as Spotify, YouTube, and the inference API.

Netlify function enhancements:
- Updated netlify/functions/analyze.js with logic to forward multipart POST requests to an external inference service, returning the actual service response to the client. This includes error handling for missing configuration and backend unavailability.

Environment variable additions:
- Added SPOTIFY_REDIRECT_URI, YOUTUBE_OAUTH_CLIENT_ID, YOUTUBE_OAUTH_CLIENT_SECRET, YOUTUBE_OAUTH_REDIRECT_URI, and INFERENCE_API_URL to platform/.env.example to support OAuth flows and inference service configuration.