Models
Image Deepfake Detection
The AI-ForensiX platform provides multi-modal deepfake detection across images, videos, audio, and text.
Each modality returns structured responses that include:
- Prediction labels
- Confidence scores
- Explainability (XAI) heatmaps
- Manipulation source classification
This page documents the Image Deepfake Detection response schema.
Response Schema
ImageDeepfakeDetectionResult
| Field | Type | Description |
|---|---|---|
| label | string ("real" \| "fake") | Classification label indicating whether the image is authentic or manipulated. |
| score | number (0.0 – 1.0) | Confidence score representing the probability of the predicted class. |
| heatmap_url | string (URL), optional | URL to the explainability heatmap highlighting the regions identified as manipulated. |
| source | string ("real" \| "face_swap" \| "face_edit" \| "ai_generated") | Type of manipulation detected, or "real" for authentic images. |
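For reference, the schema above maps directly to a client-side type. The following TypeScript sketch mirrors the documented fields; the type names are illustrative and not part of an official SDK.

```typescript
// Illustrative types for the Image Deepfake Detection response.
// Field names follow the documented schema; the type names themselves are not official.

type ManipulationSource = "real" | "face_swap" | "face_edit" | "ai_generated";

interface ImageDeepfakeDetectionResult {
  /** Classification label: authentic ("real") or manipulated ("fake"). */
  label: "real" | "fake";
  /** Confidence score for the predicted class, in the range 0.0 – 1.0. */
  score: number;
  /** Optional URL to the explainability (XAI) heatmap. */
  heatmap_url?: string;
  /** Manipulation source classification (see table below). */
  source: ManipulationSource;
}
```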
Source Classification Explanation
| Value | Meaning |
|---|---|
| real | Image is authentic and unaltered. |
| face_swap | Face swap manipulation detected (e.g., DeepFake-style). |
| face_edit | Facial attribute editing / modification detected. |
| ai_generated | Image generated by AI (GANs, diffusion models, etc.). |
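Reusing the ManipulationSource union from the sketch above, a client might map each source value to a display string. This is a minimal sketch; the constant and function names are hypothetical.

```typescript
// Hypothetical mapping from the "source" field to human-readable descriptions,
// taken verbatim from the table above.
const SOURCE_DESCRIPTIONS: Record<ManipulationSource, string> = {
  real: "Image is authentic and unaltered.",
  face_swap: "Face swap manipulation detected (e.g., DeepFake-style).",
  face_edit: "Facial attribute editing / modification detected.",
  ai_generated: "Image generated by AI (GANs, diffusion models, etc.).",
};

function describeSource(source: ManipulationSource): string {
  return SOURCE_DESCRIPTIONS[source];
}
```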
Example Responses
Listing: Fake Image Detection Example
{
  "label": "fake",
  "score": 0.967,
  "heatmap_url": "https://forensiX.com/.png",
  "source": "face_swap"
}

Listing: Real Image Detection Example
{
  "label": "real",
  "score": 0.982,
  "source": "real"
}
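To illustrate how a client might consume these responses, the sketch below summarizes a parsed result using the ImageDeepfakeDetectionResult interface from earlier. The function name and output format are assumptions, not platform behaviour.

```typescript
// Hypothetical helper that turns a parsed detection result into a summary line.
function summarizeResult(result: ImageDeepfakeDetectionResult): string {
  const confidence = (result.score * 100).toFixed(1);
  if (result.label === "fake") {
    // Include the XAI heatmap link when the platform returns one.
    const heatmap = result.heatmap_url ? ` Heatmap: ${result.heatmap_url}` : "";
    return `Manipulated image (${result.source}) at ${confidence}% confidence.${heatmap}`;
  }
  return `Authentic image at ${confidence}% confidence.`;
}

// Applied to the fake-image listing above, this would produce:
// "Manipulated image (face_swap) at 96.7% confidence. Heatmap: https://forensiX.com/.png"
```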