AI Chat

Interactive AI assistant specialized in digital forensics and incident response. Ask questions about forensic artifacts, pipe in tool output for interpretation, get MITRE ATT&CK mapping, and receive structured incident response guidance. Powered by Claude with a domain-restricted system prompt engineered for DFIR workflows.

POST /api/v1/ai/chat

Authentication

API key with ai:chat or api:full permission

Credits

Dynamic pricing — ~1 credit per 2K weighted tokens (output weighted 5×)

Plans

Starter, Professional, Enterprise

DFIR-Only Scope & Dynamic Credits

The AI assistant is strictly scoped to digital forensics and incident response topics. Off-topic questions (weather, cooking, general programming, etc.) will be declined. Credits are deducted after the response stream completes, based on actual token usage. The formula is: credits = max(1, ceil((input_tokens + output_tokens × 5) / divisor)) where divisor is 2,000 for both Haiku and Sonnet.

Request Body

Send a JSON object with the conversation messages and optional configuration. For multi-turn conversations, include the full message history (alternating user/assistant messages).

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| messages | array | Yes | Array of message objects. Each has role ("user" or "assistant") and content (string). Send full conversation history for multi-turn. |
| model | string | No | AI model to use: "haiku" (fast, lower cost) or "sonnet" (thorough, default). Affects response quality and credit cost. |
| context | string | No | Additional context such as piped tool output or log data. Automatically wrapped in context tags by the CLI. |
| stream | boolean | No | Enable SSE streaming (default: true). When false, returns a single JSON response after completion. |
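For multi-turn conversations the client is responsible for resending the full history. A minimal sketch of that bookkeeping in Python — the field names match the schema above, but the helper itself is illustrative, not part of any official SDK:

```python
# Hypothetical helper for building the multi-turn "messages" array.
# Field names ("role", "content", "model", "stream") match the request schema.

def build_request(history, user_message, model="sonnet", stream=True):
    """Append the new user turn to the prior history and build the body."""
    messages = list(history) + [{"role": "user", "content": user_message}]
    return {"messages": messages, "model": model, "stream": stream}

# First turn: empty history.
body = build_request([], "What Windows event IDs indicate lateral movement?")

# Follow-up turn: append the assistant's reply to the history first,
# so the messages alternate user/assistant as the API expects.
history = body["messages"] + [{"role": "assistant", "content": "Event ID 4624 ..."}]
followup = build_request(history, "Which of those apply specifically to RDP?")
```

The same pattern works for the non-streaming mode; only the stream flag changes.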

Example Request Body

{
  "messages": [
    {
      "role": "user",
      "content": "What Windows event IDs indicate lateral movement?"
    }
  ],
  "model": "sonnet",
  "stream": true
}

Example with Piped Context

{
  "messages": [
    {
      "role": "user",
      "content": "<piped_input>\nPID  PPID  ImageFileName  Offset(V)\n4    0     System         0xfa8000c5e040\n296  4     smss.exe       0xfa800184e900\n3688 1520  svchost.exe    0xfa80025f1060\n</piped_input>\n\nAnalyze these processes for anomalies."
    }
  ],
  "model": "sonnet",
  "stream": true
}

Streaming Response (SSE)

The response is a Server-Sent Events stream. Each event is a data: line containing a JSON object with a type field. The stream ends with data: [DONE].

| Event Type | Description |
|------------|-------------|
| message_start | Stream started. Contains model identifier. |
| content_delta | Incremental text chunk. The text field contains the new content to append. |
| content_stop | Content generation finished. No more content_delta events will follow. |
| done | Stream complete. Contains usage (token counts, credits_used) and meta (request_id, credits_remaining). |
| error | An error occurred. Contains error field with the error message. Stream ends after this event. |

Example SSE Stream

data: {"type":"message_start","model":"sonnet"}

data: {"type":"content_delta","text":"## Summary\n\n"}

data: {"type":"content_delta","text":"Lateral movement on Windows "}

data: {"type":"content_delta","text":"leaves artifacts across multiple evidence sources."}

data: {"type":"content_stop"}

data: {"type":"done","usage":{"input_tokens":1842,"output_tokens":387,"credits_used":3,"model":"sonnet"},"meta":{"request_id":"req_abc123","credits_used":3,"credits_remaining":497,"processing_time_ms":2341}}

data: [DONE]
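A client reassembles the response by concatenating the text field of each content_delta event. A minimal Python sketch, assuming only the event shapes shown above (no official SDK is implied):

```python
import json

def iter_events(lines):
    """Parse SSE 'data:' lines into event dicts; stop at the [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)

def collect_text(lines):
    """Concatenate content_delta chunks into the full response text."""
    parts = []
    for event in iter_events(lines):
        if event.get("type") == "content_delta":
            parts.append(event["text"])
        elif event.get("type") == "error":
            raise RuntimeError(event.get("error"))
    return "".join(parts)
```

Feeding the example stream above through collect_text yields the assembled markdown answer; the done event carries the usage block for credit accounting.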

Credit Pricing

Unlike other endpoints with fixed credit costs, AI Chat uses dynamic pricing based on actual token usage. Output tokens are weighted 5× because they cost more to generate. Credits are calculated after the response completes.

weighted_tokens = input_tokens + (output_tokens × 5)

credits = max(1, ceil(weighted_tokens / divisor))

Divisor: 2,000 for both Haiku and Sonnet. Sonnet is the more capable model at the same credit rate.

| Scenario | Input | Output | Weighted | Credits |
|----------|-------|--------|----------|---------|
| Quick question (Haiku) | 2,000 | 300 | 3,500 | 2 |
| Contextual Q&A (Sonnet) | 6,500 | 800 | 10,500 | 6 |
| Piped log analysis (Sonnet) | 15,000 | 2,000 | 25,000 | 13 |
| Deep analysis (Sonnet) | 30,000 | 4,000 | 50,000 | 25 |
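The formula can be checked directly. A small Python sketch that reproduces the scenario table above:

```python
import math

DIVISOR = 2000  # same for Haiku and Sonnet, per the pricing rules above

def credits_for(input_tokens, output_tokens):
    """Dynamic credit cost: output tokens weighted 5x, minimum 1 credit."""
    weighted = input_tokens + output_tokens * 5
    return max(1, math.ceil(weighted / DIVISOR))

# Reproduces the scenario table:
assert credits_for(2_000, 300) == 2      # quick question
assert credits_for(6_500, 800) == 6      # contextual Q&A
assert credits_for(15_000, 2_000) == 13  # piped log analysis
assert credits_for(30_000, 4_000) == 25  # deep analysis
```

Note the minimum of 1 credit: even a trivial exchange is billed at least one credit.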

Code Examples

cURL — human-readable output

Streams the response text live to your terminal. Requires jq (e.g. brew install jq on macOS).

# Human-readable output — streams live (recommended)
curl -sN -X POST https://api.dfir-lab.ch/v1/ai/chat \
  -H "Authorization: Bearer sk-dfir-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What artifacts indicate lateral movement on Windows?"
      }
    ],
    "model": "sonnet",
    "stream": true
  }' | jq --unbuffered -Rj '
    if startswith("data: {") then
      (.[6:] | fromjson | select(.type == "content_delta") | .text // empty)
    else empty end'

Example Output

## Key Windows Artifacts for Lateral Movement Detection

### Authentication and Session Artifacts

**Windows Event Logs**
```
Security.evtx:
- Event ID 4624 (Logon) - Type 3 (Network), Type 10 (RemoteInteractive)
- Event ID 4625 (Failed Logon)
- Event ID 4648 (Explicit Credential Use)
- Event ID 4672 (Special Privileges Assigned to New Logon)
- Event ID 4768/4769 (Kerberos TGT/TGS Request)
- Event ID 4776 (NTLM Authentication)
```

**Registry Keys**
```
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares
HKLM\SECURITY\Policy\Secrets (cached credentials)
```

### Remote Execution Artifacts

**PSExec/Service-Based Execution**
- System Event Log: Event ID 7045 (Service Installation)
- Registry: HKLM\SYSTEM\CurrentControlSet\Services\*
- Files: C:\Windows\PSEXESVC.exe, C:\Windows\*.exe (dropped executables)
- Prefetch: PSEXESVC.EXE-*.pf

**WMI/WMIC**
- Microsoft-Windows-WMI-Activity/Operational.evtx: Event ID 5857, 5860, 5861
- Registry: HKLM\SOFTWARE\Microsoft\WBEM\CIMOM
- Files: C:\Windows\System32\wbem\Repository\ (WMI repository)

-- more --

cURL — raw SSE stream

Shows the raw Server-Sent Events stream. Use this when building your own client that parses content_delta events programmatically.

# Raw SSE stream (for programmatic use)
curl -sN -X POST https://api.dfir-lab.ch/v1/ai/chat \
  -H "Authorization: Bearer sk-dfir-your-key-here" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What artifacts indicate lateral movement on Windows?"
      }
    ],
    "model": "sonnet",
    "stream": true
  }'

dfir-cli

# One-shot question
dfir-cli ai "What Windows event IDs indicate lateral movement?"

# Pipe tool output for analysis
vol.py -f memory.dmp windows.pslist | dfir-cli ai "Analyze for anomalies"

# Pipe dfir-cli results for deeper analysis
dfir-cli enrichment lookup --ip 1.2.3.4 -j | dfir-cli ai "Explain these results"

# Interactive chat session
dfir-cli ai chat

# Use a specific model
dfir-cli ai --model haiku "Quick: what is shimcache?"

Error Responses

| Status | Description |
|--------|-------------|
| 400 | Invalid request body or missing required fields. |
| 401 | Missing or invalid API key. |
| 402 | Insufficient credits. Top up at the billing page. |
| 403 | Plan does not include AI features (Starter or above required), or the API key lacks the ai:chat permission. |
| 429 | Rate limit exceeded. Retry after the Retry-After period. |
| 500 | Internal server error. The AI model may be temporarily unavailable. |
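For 429 responses, a well-behaved client waits for the Retry-After period before retrying. A minimal sketch of that backoff logic in Python — the delta-seconds header form is standard HTTP, but the exponential-backoff fallback is an assumption of this sketch, not documented server behavior:

```python
def retry_delay(retry_after_header, attempt, fallback_base=1.0, cap=60.0):
    """Seconds to wait before retrying after a 429 response.

    Prefers the server's Retry-After header (delta-seconds form);
    falls back to capped exponential backoff when the header is
    absent or unparseable. The HTTP-date form is not handled here.
    """
    if retry_after_header is not None:
        try:
            return max(0.0, float(retry_after_header))
        except ValueError:
            pass  # e.g. an HTTP-date value; fall through to backoff
    return min(cap, fallback_base * (2 ** attempt))
```

For example, retry_delay("30", 0) honors the server's 30-second hint, while retry_delay(None, 3) backs off for 8 seconds.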