
Conversation

@LifeRIP LifeRIP commented Jun 22, 2025

This pull request introduces a moderation feature to the chat application, including new services, handlers, and API endpoints to manage reported and banned users. The changes also update the API documentation to reflect the new functionality.

Moderation Feature Integration:

  • Wired up the new moderation service (NewModerationService), handler (NewModerationHandler), and report repository (NewReportRepository) in cmd/api/main.go.

API Endpoints for Moderation:

  • Introduced a GET /chat/rooms/{roomId}/banned-users endpoint to retrieve the list of banned users in a chat room.
  • Added a POST /chat/rooms/{roomId}/clear-reports endpoint to clear all reports for a specific user in a chat room.
  • Introduced a POST /chat/rooms/{roomId}/report endpoint to report inappropriate messages in a chat room (route registration is sketched below).
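
For orientation, route registration for these endpoints might look like the sketch below. The PR summary does not name the router library, so gorilla/mux and the handler method names here are assumptions made purely for illustration.

package routes

import (
    "net/http"

    "github.com/gorilla/mux"
)

// Illustrative only: the router library and the handler's shape are
// assumptions; the PR's internal/routes/router.go may differ.
type ModerationHandler struct{}

func (h *ModerationHandler) ReportMessage(w http.ResponseWriter, r *http.Request)  {}
func (h *ModerationHandler) GetBannedUsers(w http.ResponseWriter, r *http.Request) {}
func (h *ModerationHandler) ClearReports(w http.ResponseWriter, r *http.Request)   {}

func RegisterModerationRoutes(r *mux.Router, h *ModerationHandler) {
    // POST /chat/rooms/{roomId}/report: report an inappropriate message
    r.HandleFunc("/chat/rooms/{roomId}/report", h.ReportMessage).Methods(http.MethodPost)
    // GET /chat/rooms/{roomId}/banned-users: list banned users in the room
    r.HandleFunc("/chat/rooms/{roomId}/banned-users", h.GetBannedUsers).Methods(http.MethodGet)
    // POST /chat/rooms/{roomId}/clear-reports: clear all reports for a user
    r.HandleFunc("/chat/rooms/{roomId}/clear-reports", h.ClearReports).Methods(http.MethodPost)
}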

API Documentation Updates:

  • Updated docs/docs.go, docs/swagger.json, and docs/swagger.yaml to include the new endpoints and their details, such as descriptions, parameters, and response schemas.

Data Models for Moderation:

  • Added new models:
    • BannedUserResponse and BannedUsersResponse to represent banned users.
    • ClearReportRequest for clearing user reports.
    • ReportRequest for reporting messages.
  • Updated the Room model to include a reportedUsers field mapping user IDs to report counts (see the model sketch below).
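
A minimal sketch of how these models might be declared, assuming JSON-tagged Go structs; the exact field names and tags are not given in this summary and are illustrative:

package models

// ReportRequest reports an inappropriate message in a room.
type ReportRequest struct {
    MessageID  string `json:"messageId"`
    ReportedID string `json:"reportedUserId"`
    Reason     string `json:"reason,omitempty"`
}

// ClearReportRequest clears all reports for a user in a room.
type ClearReportRequest struct {
    UserID string `json:"userId"`
}

// BannedUserResponse represents a single banned user.
type BannedUserResponse struct {
    UserID      string `json:"userId"`
    ReportCount int    `json:"reportCount"`
}

// BannedUsersResponse wraps the list returned by
// GET /chat/rooms/{roomId}/banned-users.
type BannedUsersResponse struct {
    BannedUsers []BannedUserResponse `json:"bannedUsers"`
}

// Room (excerpt): reportedUsers maps a user ID to its report count.
type Room struct {
    ID            string         `json:"id"`
    ReportedUsers map[string]int `json:"reportedUsers"`
}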

LifeRIP added 2 commits June 21, 2025 22:31
- Introduces content moderation capabilities, including reporting inappropriate messages, banning users based on report thresholds, and clearing user reports in chat rooms.
- Adds new handlers, services, and repositories to manage these features.
- Updates WebSocket logic to prevent banned users from sending messages and modifies room models to track reported users.
- Enhances API with moderation-specific endpoints and integrates moderation checks into existing workflows.
- Introduces success confirmation messages when users join rooms or direct chats via WebSocket.
- Adds a check to prevent users from reporting the same message multiple times (sketched below).
- Also updates the moderation logic to enforce the maximum report threshold dynamically.
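
The duplicate-report check mentioned in the last two commit notes could take roughly the following shape; the repository methods and the ErrAlreadyReported sentinel are hypothetical names, not the PR's actual API.

package services

import (
    "errors"
    "fmt"
)

// Illustrative repository surface; the PR's actual ReportRepository
// methods may differ.
type ReportRepository interface {
    FindReport(roomID, messageID, reporterID string) (*Report, error)
    CreateReport(roomID, messageID, reporterID string) error
}

type Report struct {
    RoomID, MessageID, ReporterID string
}

// Hypothetical sentinel returned on a duplicate report.
var ErrAlreadyReported = errors.New("message already reported by this user")

type ModerationService struct {
    reportRepo ReportRepository
}

// ReportMessage rejects a duplicate report before recording a new one.
func (s *ModerationService) ReportMessage(roomID, messageID, reporterID string) error {
    existing, err := s.reportRepo.FindReport(roomID, messageID, reporterID)
    if err != nil {
        return fmt.Errorf("looking up existing report: %w", err)
    }
    if existing != nil {
        return ErrAlreadyReported
    }
    return s.reportRepo.CreateReport(roomID, messageID, reporterID)
}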
@LifeRIP LifeRIP requested review from Copilot and removed request for Copilot June 22, 2025 18:32
@LifeRIP LifeRIP merged commit 4ddb29a into develop Jun 22, 2025

Copilot AI left a comment


Pull Request Overview

This PR adds moderation capabilities to the chat application by introducing reporting, banning, and report-clearing functionality.

  • Added a ModerationService with methods to report messages, fetch banned users, clear reports, and enforce report-based bans.
  • Introduced ModerationHandler and new API routes (/report, /banned-users, /clear-reports) with corresponding request/response models.
  • Updated WebSocket logic to prevent banned users from sending messages and notify them of join successes, and extended documentation (swagger.yaml, swagger.json, docs.go) with these endpoints.

Reviewed Changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 4 comments.

Summary per file:

  internal/services/room_service.go: Added IsUserAdminOrOwner; the contains helper is left unused
  internal/services/moderation_service.go: New moderation logic for reporting, banning, and clearing reports
  internal/routes/router.go: Registered the moderation endpoints in the router
  internal/repositories/report_repository.go: New repository methods for report CRUD operations
  internal/repositories/message_repository.go: Added a GetMessageByID method
  internal/pkg/websocket/websocket.go: Enforced ban checks in the WebSocket flow; added a success message type
  internal/pkg/websocket/hub.go: Injected the ReportRepository into the WebSocket hub
  internal/models/room.go: Added the ReportedUsers map to the Room model
  internal/models/report.go: Defined the Report, ReportRequest, ClearReportRequest, and ban response models
  internal/handlers/moderation_handler.go: Implemented HTTP handlers for the report, banned-users, and clear-reports endpoints
  docs/swagger.yaml, docs/swagger.json, docs/docs.go: Extended the API docs with the moderation endpoints and models
  cmd/api/main.go: Wired the new moderation service and handler into the application
Comments suppressed due to low confidence (2)

internal/handlers/moderation_handler.go:77

  • Domain-specific errors (e.g. "message not found" or "already reported") should map to appropriate HTTP status codes (404 or 400) instead of always returning 500.
		http.Error(w, err.Error(), http.StatusInternalServerError)
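
One conventional way to do this is to define sentinel errors in the service layer (wrapped with %w, as a later comment also suggests) and branch on errors.Is in the handler. The sentinel names below are hypothetical:

package handlers

import (
    "errors"
    "net/http"
)

// Hypothetical sentinels; the service layer would define these and wrap
// underlying errors with %w so they survive the call chain.
var (
    ErrMessageNotFound = errors.New("message not found")
    ErrAlreadyReported = errors.New("message already reported")
)

// writeServiceError maps known domain errors to 404/400 and falls back to 500.
func writeServiceError(w http.ResponseWriter, err error) {
    switch {
    case errors.Is(err, ErrMessageNotFound):
        http.Error(w, err.Error(), http.StatusNotFound)
    case errors.Is(err, ErrAlreadyReported):
        http.Error(w, err.Error(), http.StatusBadRequest)
    default:
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}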

docs/swagger.yaml:526

  • The summary key is declared twice under this path (once at line 494 and again here), creating redundancy; remove the extra declaration to maintain clarity.
      summary: Get banned users in a room

}

// Helper to check whether a slice contains a value
func contains(slice []string, value string) bool {

Copilot AI Jun 22, 2025


[nitpick] The contains helper is defined but not used; consider either using it in IsUserAdminOrOwner to check admin membership or removing it to keep the codebase clean.
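
If the helper is kept in use, note that since Go 1.21 the standard library provides slices.Contains, so the hand-rolled version can be dropped either way. A sketch assuming admin IDs are stored as a []string:

package services

import "slices"

// Sketch assuming the room stores its admin user IDs as a []string;
// slices.Contains (Go 1.21+) replaces the hand-rolled contains helper.
func isUserAdminOrOwner(ownerID string, adminIDs []string, userID string) bool {
    return userID == ownerID || slices.Contains(adminIDs, userID)
}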

}

} // Verificar si el usuario es parte de la sala antes de enviar el mensaje

Copilot AI Jun 22, 2025


[nitpick] Inline comment after a closing brace can reduce readability; consider moving the comment above the } or translating it consistently into English.

Suggested change
} // Verificar si el usuario es parte de la sala antes de enviar el mensaje
// Check if the user is part of the room before sending the message
}

continue
}

// Check if the user is banned from sending messages due to reports

Copilot AI Jun 22, 2025


This ban-enforcement logic duplicates service behavior; extract it into a shared method (e.g. CanUserSendMessageInRoom) to avoid duplication and ease testing.
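
A possible shape for such a shared method, assuming bans are derived from the Room's reportedUsers counts against a configurable threshold (the field and method names here are assumptions):

package services

// Sketch of the suggested shared method. The maxReports field and the
// reportedUsers lookup mirror the PR's threshold-based bans; exact names
// are assumptions.
type Room struct {
    ReportedUsers map[string]int
}

type ModerationService struct {
    maxReports int // configurable report threshold
}

// CanUserSendMessageInRoom returns false once a user's report count in the
// room reaches the ban threshold; both the WebSocket hub and the HTTP
// handlers could call this instead of duplicating the check.
func (s *ModerationService) CanUserSendMessageInRoom(room *Room, userID string) bool {
    return room.ReportedUsers[userID] < s.maxReports
}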

// Validate that the message exists
message, err := s.messageRepo.GetMessageByID(roomID, messageID)
if err != nil {
return fmt.Errorf("message not found: %v", err)

Copilot AI Jun 22, 2025


Use fmt.Errorf("message not found: %w", err) to wrap the original error and preserve its type for inspection.

Suggested change
return fmt.Errorf("message not found: %v", err)
return fmt.Errorf("message not found: %w", err)
