
AI Moderation

Automatic detection of inappropriate content using artificial intelligence (OpenAI).

What does it detect?

| Category | Description | Action |
| --- | --- | --- |
| CSAM (minors) | Child sexual abuse material | Permanent ban (not configurable) |
| NSFW | Explicit sexual content | Configurable |
| Scam/Phishing | Scams, fake investments | Configurable |
| Violence | Graphic violent content | Configurable |
| Harassment | Threats, harassment | Configurable |
| Self-harm | Self-harm content | Configurable |

How does it work?

  1. The user sends a message (text or photo)
  2. The message is delivered normally (without delay)
  3. In the background, the AI analyses the content
  4. If prohibited content is detected: the message is deleted and the action is triggered

Important: The analysis is asynchronous — the message may be visible for 1-2 seconds before being deleted.

Configuration

  1. /config → Select group → 🤖 AI Moderation
  2. Enable/Disable — Toggle on/off
  3. Action — For non-CSAM categories: delete, warn, mute, kick, ban
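The per-category action lookup could look like the sketch below. The category keys, the default action, and `resolve_action` itself are assumptions for illustration; only the action names (delete, warn, mute, kick, ban) and the CSAM override come from this page.

```python
# Valid actions for non-CSAM categories, as listed in the docs.
ACTIONS = ("delete", "warn", "mute", "kick", "ban")

def resolve_action(category: str, group_config: dict) -> str:
    """Return the action configured for a category in one group."""
    if category == "csam":
        return "ban"  # zero tolerance: never configurable
    action = group_config.get(category, "delete")  # assumed default
    return action if action in ACTIONS else "delete"

print(resolve_action("nsfw", {"nsfw": "mute"}))  # honours the config
print(resolve_action("csam", {"csam": "warn"}))  # config is ignored
```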

Zero tolerance for CSAM

CSAM detection (child sexual abuse material) has a very low threshold and always results in a permanent ban. This action is NOT configurable — it is a mandatory security measure.
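A "very low threshold" simply means the category triggers at a much lower confidence score than the others. The numbers below are illustrative only; the real thresholds are not documented here.

```python
# Illustrative thresholds: CSAM flags at far lower confidence.
THRESHOLDS = {"csam": 0.1, "default": 0.7}  # assumed values

def is_flagged(category: str, score: float) -> bool:
    """Flag when the classifier's confidence meets the threshold."""
    return score >= THRESHOLDS.get(category, THRESHOLDS["default"])

print(is_flagged("csam", 0.2))  # flagged despite low confidence
print(is_flagged("nsfw", 0.2))  # below the ordinary threshold
```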

Notes

  • Administrators are exempt
  • Analyses text and images (photos)
  • Does not analyse videos, GIFs or stickers
  • If the AI is unavailable, messages pass normally (fail-open)
  • Admins receive a private notification when the AI takes action