System Architecture
The Gova Backend is a high-performance, event-driven system designed to handle real-time content moderation, community trend analysis, and automated escalation. It leverages a modern stack consisting of FastAPI for the REST interface, Kafka for asynchronous event processing, and PostgreSQL with PgVector for storing user profiles and behavioral trends.
High-Level Component Overview
The architecture is divided into four primary layers:
- API Layer (FastAPI): Orchestrates user management, Discord integration, and the manual approval workflow for moderation actions.
- Messaging Layer (Kafka): Decouples long-running moderation tasks and status updates from the request-response cycle.
- Intelligence Engine: Utilizes LLM-powered agents to evaluate message context against server-specific guidelines.
- Integration Layer: Manages external communication with Discord (via Webhooks/Bot API), Stripe (Payments), and Brevo (Email).
1. API Layer & Routing
The API is built using FastAPI and organized into functional domains. Most endpoints require a JWT for authentication; its payload includes the user's pricing tier and verification status.
Core Modules
| Module | Description |
| :--- | :--- |
| auth | Handles registration, login, and Discord OAuth2 flows for linking identities. |
| moderators | Manages the lifecycle of AI moderators, including configuration (conf) and status. |
| actions | The execution engine for moderation tasks (Kick, Timeout, Reply). |
| connections | Retrieves Discord guild and channel metadata for configuration. |
| payments | Manages Stripe subscriptions and pricing tier upgrades. |
Authentication & Security
The system uses JWTs for session management. Sensitive data, such as Discord OAuth tokens, is encrypted at rest by an EncryptionService before being stored in PostgreSQL.
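The JWT payload described above might look like the following. This is an illustrative sketch using only the standard library to show an HS256-style sign/verify round trip; the claim names (`pricing_tier`, `is_verified`) are assumptions, since the text only says the payload carries the pricing tier and verification status. A real deployment would use a vetted library such as PyJWT.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # URL-safe base64 without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    # Minimal HS256-style token: header.payload.signature
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical claim names; the source only specifies tier and verification status.
claims = {"sub": "user_123", "pricing_tier": "pro", "is_verified": True}
token = sign_jwt(claims, "server-secret")
assert verify_jwt(token, "server-secret") == claims
```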
2. The Intelligence Engine (LLM Agents)
At the heart of Gova is the ReviewAgent. Unlike simple keyword filters, this agent uses a Large Language Model to understand the nuance of a conversation based on the following inputs:
- Server Summary: A high-level description of the community's purpose.
- Server Guidelines: The specific rules the moderator is tasked to enforce.
- Message Context: The current message and its metadata.
- Channel Summary: Recent history to provide conversational context.
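The four inputs above can be bundled into a single context object before being rendered into an LLM prompt. The class and field names below are illustrative, not Gova's actual internals:

```python
from dataclasses import dataclass

@dataclass
class ReviewContext:
    server_summary: str      # high-level description of the community's purpose
    server_guidelines: str   # the rules the moderator is tasked to enforce
    message: str             # the message under evaluation
    channel_summary: str     # recent history for conversational context

    def to_prompt(self) -> str:
        # Render the four inputs into a single prompt for the review agent.
        return (
            f"Community purpose: {self.server_summary}\n"
            f"Guidelines:\n{self.server_guidelines}\n"
            f"Recent channel activity: {self.channel_summary}\n"
            f"Evaluate this message: {self.message}"
        )

ctx = ReviewContext(
    server_summary="A retro gaming community.",
    server_guidelines="1. No hate speech. 2. No spam.",
    message="Check out my store! (link)",
    channel_summary="Users discussing speedrun strategies.",
)
assert "No hate speech" in ctx.to_prompt()
```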
Agent Output Schema
The agent evaluates messages and returns a structured ReviewAgentOutput:
```python
{
    "action": {
        "type": "TIMEOUT",          # e.g. REPLY, TIMEOUT, KICK
        "params": {"duration": 3600}
    },
    "severity_score": 0.85,         # float between 0 and 1
    "reason": "User violated community guidelines regarding hate speech."
}
```
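A structured output like this is typically validated before any action fires. The sketch below models the schema with stdlib dataclasses, enforcing the 0-to-1 bound on `severity_score`; the field names follow the example above, but Gova may well use a different validation layer (e.g. Pydantic):

```python
from dataclasses import dataclass, field

@dataclass
class ModAction:
    type: str                       # e.g. REPLY, TIMEOUT, KICK
    params: dict = field(default_factory=dict)

@dataclass
class ReviewAgentOutput:
    action: ModAction
    severity_score: float           # must lie in [0, 1]
    reason: str

    def __post_init__(self):
        if not 0.0 <= self.severity_score <= 1.0:
            raise ValueError("severity_score must be between 0 and 1")

raw = {
    "action": {"type": "TIMEOUT", "params": {"duration": 3600}},
    "severity_score": 0.85,
    "reason": "User violated community guidelines regarding hate speech.",
}
out = ReviewAgentOutput(
    action=ModAction(**raw["action"]),
    severity_score=raw["severity_score"],
    reason=raw["reason"],
)
assert out.action.params["duration"] == 3600
```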
3. Event-Driven Workflow
Gova uses Kafka to manage moderator states and process events without blocking the API.
- Event Trigger: When a user starts or updates a moderator via the API, a `StartModeratorEvent` or `UpdateConfigModeratorEvent` is published to the `KAFKA_MODERATOR_EVENTS_TOPIC`.
- Execution: Background runners (consumers) listen for these events to initialize or modify the AI moderator instances.
- Action Pipeline:
  - If the AI determines an action is necessary, it creates an `ActionEvent` in the database.
  - If the moderator is configured for "Manual Approval," the action remains in `AWAITING_APPROVAL` status until a human moderator hits the `/actions/{action_id}/approve` endpoint.
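The event-trigger step can be sketched as follows. The event class names and topic constant come from the text, but the topic string and payload shape are assumptions, and the producer here is an in-memory stand-in for illustration (a real deployment would use a Kafka client such as `confluent-kafka` or `aiokafka`):

```python
import json
from dataclasses import asdict, dataclass

# Assumed topic string; only the constant's name appears in the source.
KAFKA_MODERATOR_EVENTS_TOPIC = "moderator-events"

@dataclass
class StartModeratorEvent:
    moderator_id: str
    guild_id: str

class InMemoryProducer:
    """Stand-in for a real Kafka producer, for illustration only."""
    def __init__(self):
        self.sent = []

    def send(self, topic: str, value: bytes):
        self.sent.append((topic, value))

def publish_event(producer, event) -> None:
    # Serialize the event to JSON, tagging it with its class name,
    # and publish it to the moderator events topic.
    payload = json.dumps({"type": type(event).__name__, **asdict(event)}).encode()
    producer.send(KAFKA_MODERATOR_EVENTS_TOPIC, payload)

producer = InMemoryProducer()
publish_event(producer, StartModeratorEvent(moderator_id="m1", guild_id="g1"))
topic, raw = producer.sent[0]
assert json.loads(raw)["type"] == "StartModeratorEvent"
```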
4. Database Schema & Persistence
The system uses PostgreSQL as its primary data store. Key tables include:
- Users: Stores credentials, pricing tiers, and encrypted OAuth payloads.
- Moderators: Stores the AI configuration and state for each community.
- EvaluationEvents: Records every message analyzed by the LLM, including the `severity_score` and `behavior_score`.
- ActionEvents: Tracks the lifecycle of a moderation action from detection to execution (or rejection).
Vector Capabilities
The inclusion of PgVector allows Gova to perform similarity searches across message contexts. This is used to build comprehensive user profiles and detect recurring behavioral patterns across a community over time.
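A pgvector similarity lookup of the kind described above might look like this. The table and column names are hypothetical, and the query is built as a parameterized string (psycopg-style placeholders) rather than executed; pgvector's `<->` operator computes L2 distance between embeddings:

```python
# Hypothetical schema: message_embeddings(message_id, user_id, embedding vector(1536))
def similar_messages_query(limit: int = 5) -> str:
    # Return the N stored messages whose embeddings are closest to a
    # query vector, ordered by L2 distance (pgvector's "<->" operator).
    return (
        "SELECT message_id, user_id, embedding <-> %(query_vec)s AS distance "
        "FROM message_embeddings "
        "ORDER BY embedding <-> %(query_vec)s "
        f"LIMIT {int(limit)}"
    )

query = similar_messages_query(limit=10)
assert "<->" in query and query.endswith("LIMIT 10")
```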
5. Deployment & Runners
The backend is designed to run in a distributed environment using Docker. The system is started via "Runners," which provide a clean interface for launching the FastAPI server or background workers.
```python
# Example: Starting the API via the APIRunner
from runners.api_runner import APIRunner

runner = APIRunner(host="0.0.0.0", port=8000, reload=True)
runner.run()
```
Required Infrastructure
- Kafka: For event orchestration.
- PostgreSQL (with pgvector extension): For relational data and vector embeddings.
- Redis (Optional): Often used for caching or rate limiting within the FastAPI ecosystem.