76 changes: 38 additions & 38 deletions netlify/functions/analyze.js
@@ -1,49 +1,49 @@
-// Unified analysis gateway (mock/preview). Real integrations (yt-dlp/model) should be wired to the providers.
-const { promises: fs } = require('fs')
+// Netlify proxy for unified analysis. It forwards multipart requests to an external inference service.
+// Heavy work (yt-dlp, ffmpeg, model) should live in the external service, not on Netlify.

 exports.handler = async (event) => {
   if (event.httpMethod !== 'POST') {
     return { statusCode: 405, body: 'Method Not Allowed' }
   }

-  // NOTE: Netlify Functions cannot parse multipart by default; here we return a fixed preview response.
-  // In real usage, move this to a full server or add multipart parsing + external service calls.
-  const now = Date.now()
-  const seed = (now % 1000) / 1000
-  const isAIGenerated = seed > 0.5
-  const confidence = Number((0.55 + seed * 0.35).toFixed(3))
+  const targetBase = process.env.INFERENCE_API_URL || process.env.NEXT_PUBLIC_API_URL
+  if (!targetBase) {
+    return { statusCode: 500, body: JSON.stringify({ errors: ['backend_not_configured'] }) }
+  }

-  const result = {
-    isAIGenerated,
-    confidence,
-    processingTime: 0.4,
-    modelVersion: 'gateway-preview-v1',
-    decisionSource: 'preview',
-    source: {
-      kind: 'file',
-      fileName: 'preview',
-      fileSizeBytes: 0,
-      mimeType: 'application/octet-stream'
-    },
-    features: {
-      spectralRegularity: Number(((seed + 0.17) % 1).toFixed(3)),
-      temporalPatterns: Number(((seed + 0.43) % 1).toFixed(3)),
-      harmonicStructure: Number(((seed + 0.71) % 1).toFixed(3)),
-      artificialIndicators: [
-        'Preview-only decision based on fingerprint.',
-        'No model inference was available at request time.'
-      ]
-    },
-    audioInfo: {
-      duration: 0,
-      sampleRate: 44100,
-      bitrate: 192,
-      format: 'PREVIEW'
-    }
-  }
+  const target = targetBase.endsWith('/api/analyze')
+    ? targetBase
+    : `${targetBase.replace(/\/$/, '')}/api/analyze`
Comment on lines +14 to +16
Copilot AI Dec 18, 2025
The Netlify function forwards requests to '/api/analyze' on the inference service, but the current backend implementation in this repository only exposes '/api/youtube/analyze' (see backend/app/routes/youtube.py). Either the backend needs to be updated to add a unified '/api/analyze' endpoint that handles multiple source types (file, youtube, etc.), or the documentation should clarify that INFERENCE_API_URL should point to a different external service that implements this unified endpoint. Without this endpoint, the proxy will return 404 errors.

Suggested change:
-  const target = targetBase.endsWith('/api/analyze')
-    ? targetBase
-    : `${targetBase.replace(/\/$/, '')}/api/analyze`
+  const target = /\/api\/(analyze|youtube\/analyze)$/.test(targetBase)
+    ? targetBase
+    : `${targetBase.replace(/\/$/, '')}/api/youtube/analyze`


+  const contentType = event.headers['content-type'] || event.headers['Content-Type']
+  if (!contentType || !contentType.toLowerCase().includes('multipart/form-data')) {
+    return { statusCode: 400, body: JSON.stringify({ errors: ['unsupported_media_type'] }) }
+  }

-  return {
-    statusCode: 200,
-    body: JSON.stringify({ result, warnings: ['gateway_mock_preview'] })
+  const bodyBuffer = Buffer.from(event.body || '', event.isBase64Encoded ? 'base64' : 'utf8')
Copilot AI Dec 18, 2025

The Buffer encoding logic may corrupt binary data in multipart/form-data requests. When event.isBase64Encoded is false, the code treats the body as UTF-8 text, but multipart form data containing binary files (like audio files) should be treated as binary. This will corrupt the audio file data when forwarding to the inference service. Netlify automatically sets isBase64Encoded to true for binary content, but the utf8 fallback is problematic if that detection fails. Consider using 'binary' or 'latin1' encoding instead of 'utf8' for the non-base64 case, or always treat multipart bodies as binary.

Suggested change:
-  const bodyBuffer = Buffer.from(event.body || '', event.isBase64Encoded ? 'base64' : 'utf8')
+  const bodyBuffer = Buffer.from(
+    event.body || '',
+    event.isBase64Encoded ? 'base64' : 'binary'
+  )


+  try {
+    const response = await fetch(target, {
+      method: 'POST',
+      headers: { 'content-type': contentType },
+      body: bodyBuffer
+    })
Comment on lines +26 to +30
Copilot AI Dec 18, 2025

The fetch request has no timeout configured, which means it could hang indefinitely if the inference service is slow or unresponsive. This will block the Netlify function and potentially cause user-facing timeouts. Consider adding an AbortSignal with a timeout (e.g., 30 seconds for analysis operations) to ensure the request fails gracefully if the backend takes too long.

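A minimal sketch of such a timeout, assuming the function runs on Node 18+ (where global fetch and AbortSignal.timeout are available); the 30-second budget and the 'backend_timeout' error code are illustrative, not taken from this PR:

try {
  const response = await fetch(target, {
    method: 'POST',
    headers: { 'content-type': contentType },
    body: bodyBuffer,
    // Abort the proxied request if the inference service has not answered within 30s.
    signal: AbortSignal.timeout(30_000)
  })
  // ... handle the response as before ...
} catch (error) {
  // AbortSignal.timeout rejects with a DOMException named 'TimeoutError'.
  const timedOut = error && error.name === 'TimeoutError'
  return {
    statusCode: timedOut ? 504 : 502,
    body: JSON.stringify({ errors: [timedOut ? 'backend_timeout' : 'backend_unreachable'] })
  }
}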

+    const text = await response.text()
+    return {
+      statusCode: response.status,
+      body: text,
+      headers: {
+        'Content-Type': response.headers.get('content-type') || 'application/json'
+      }
Comment on lines +26 to +38
Copilot AI Dec 18, 2025

The proxy only forwards the Content-Type header to the inference service, but other important headers from the original request (such as Accept, User-Agent, or custom headers) are not being forwarded. While Content-Type is the most critical for multipart uploads, consider whether other headers should be preserved. Additionally, headers from the backend response (other than Content-Type) are not being forwarded back to the client, which could include important metadata like rate limit information or custom error details.

Suggested change:
-    const response = await fetch(target, {
-      method: 'POST',
-      headers: { 'content-type': contentType },
-      body: bodyBuffer
-    })
-    const text = await response.text()
-    return {
-      statusCode: response.status,
-      body: text,
-      headers: {
-        'Content-Type': response.headers.get('content-type') || 'application/json'
-      }
+    // Forward most incoming headers to the backend, excluding hop-by-hop / problematic ones.
+    const incomingHeaders = event.headers || {}
+    const outgoingHeaders = {}
+    const headerBlacklist = new Set(['host', 'connection', 'content-length', 'transfer-encoding', 'accept-encoding'])
+    for (const [name, value] of Object.entries(incomingHeaders)) {
+      if (typeof value === 'undefined') continue
+      const lowerName = name.toLowerCase()
+      if (headerBlacklist.has(lowerName)) continue
+      // We'll override content-type below with the validated value.
+      if (lowerName === 'content-type') continue
+      outgoingHeaders[name] = value
+    }
+    // Ensure the validated multipart content-type is sent to the backend.
+    outgoingHeaders['content-type'] = contentType
+    const response = await fetch(target, {
+      method: 'POST',
+      headers: outgoingHeaders,
+      body: bodyBuffer
+    })
+    const text = await response.text()
+    // Forward most backend response headers back to the client.
+    const responseHeaders = {}
+    const responseHeaderBlacklist = new Set(['connection', 'content-length', 'transfer-encoding'])
+    response.headers.forEach((value, name) => {
+      const lowerName = name.toLowerCase()
+      if (responseHeaderBlacklist.has(lowerName)) return
+      responseHeaders[name] = value
+    })
+    if (!Object.keys(responseHeaders).some((h) => h.toLowerCase() === 'content-type')) {
+      responseHeaders['Content-Type'] = 'application/json'
+    }
+    return {
+      statusCode: response.status,
+      body: text,
+      headers: responseHeaders

+    }
+  } catch (error) {
+    return {
+      statusCode: 502,
+      body: JSON.stringify({
+        errors: ['backend_unreachable'],
+        message: 'Failed to reach inference service'
+      })
+    }
Comment on lines +40 to +47
Copilot AI Dec 18, 2025

The catch block doesn't log the error details, making debugging difficult when the inference service fails. The error message returned to the client is generic ('Failed to reach inference service'), which doesn't help distinguish between network errors, DNS failures, connection refused, or other issues. Consider logging the error (e.g., console.error(error)) for debugging purposes while keeping the client-facing message generic for security.

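A sketch of the suggested logging, assuming Netlify's standard behavior of surfacing console output in the function logs; the client-facing body stays generic:

} catch (error) {
  // Log the underlying cause (DNS failure, ECONNREFUSED, abort, ...) server-side for debugging.
  console.error('analyze proxy: failed to reach inference service', error)
  return {
    statusCode: 502,
    body: JSON.stringify({
      errors: ['backend_unreachable'],
      message: 'Failed to reach inference service'
    })
  }
}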
   }
 }
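For context, a hypothetical client-side call to the deployed function (the 'file' field name and the audioFile variable are assumptions; the external inference service defines the form fields it actually expects):

// Build a multipart body and POST it to the Netlify function endpoint.
// fetch sets the multipart/form-data content type (with boundary) automatically.
const form = new FormData()
form.append('file', audioFile) // audioFile: a File taken from an <input type="file">
const res = await fetch('/.netlify/functions/analyze', { method: 'POST', body: form })
const data = await res.json()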
5 changes: 5 additions & 0 deletions platform/.env.example
@@ -43,6 +43,11 @@ UPLOAD_DIR=/tmp/uploads
 GITHUB_TOKEN=your-github-token
 SPOTIFY_CLIENT_ID=your-spotify-client-id
 SPOTIFY_CLIENT_SECRET=your-spotify-client-secret
+SPOTIFY_REDIRECT_URI=http://localhost:3000/api/auth/spotify/callback
+YOUTUBE_OAUTH_CLIENT_ID=your-youtube-oauth-client-id
+YOUTUBE_OAUTH_CLIENT_SECRET=your-youtube-oauth-client-secret
+YOUTUBE_OAUTH_REDIRECT_URI=http://localhost:3000/ai-music-detection
Copilot AI Dec 18, 2025
INFERENCE_API_URL appears to be a duplicate of NEXT_PUBLIC_API_URL based on the default value (both point to http://localhost:8000). The Netlify function falls back to NEXT_PUBLIC_API_URL if INFERENCE_API_URL is not set (line 9 of analyze.js). Consider either removing INFERENCE_API_URL if it's truly redundant, or documenting why two separate environment variables are needed (e.g., if they should point to different services in production).

Suggested change:
-YOUTUBE_OAUTH_REDIRECT_URI=http://localhost:3000/ai-music-detection
+YOUTUBE_OAUTH_REDIRECT_URI=http://localhost:3000/ai-music-detection
+# INFERENCE_API_URL is kept separate from NEXT_PUBLIC_API_URL so that
+# AI inference can be hosted on a different service/URL in production.
+# In local development both services run on the same backend.

+INFERENCE_API_URL=http://localhost:8000

 # Analytics (optional)
 NEXT_PUBLIC_GA_TRACKING_ID=G-XXXXXXXXXX