- Powerful CLI for batch processing and automation
- Modern async API for seamless integration
- Interactive UI for easy classification tasks
Understanding the three prompting techniques with schema examples
- **Zero-Shot**: Direct classification without any examples. The model relies solely on class descriptions to make decisions.
- **Few-Shot**: Classification with demonstration examples. The model learns from provided query–classification pairs.
- **Chain-of-Thought**: Reasoning-based classification. The model explains its thinking process before making a decision.
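The three techniques differ only in how the prompt is assembled. The sketch below illustrates that difference; the function names and template wording are illustrative assumptions, not the tool's actual prompts.

```python
# Illustrative prompt builders for the three techniques.
# CLASSES, the templates, and the function names are assumptions.

CLASSES = {"positive": "Expresses approval", "negative": "Expresses disapproval"}

def zero_shot(query: str) -> str:
    """Zero-shot: only class descriptions, no examples."""
    desc = "\n".join(f"- {name}: {d}" for name, d in CLASSES.items())
    return f"Classes:\n{desc}\n\nClassify: {query}\nAnswer with one class name."

def few_shot(query: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: demonstration query/classification pairs precede the query."""
    demos = "\n".join(f"Query: {q}\nClass: {c}" for q, c in examples)
    return f"{demos}\nQuery: {query}\nClass:"

def chain_of_thought(query: str) -> str:
    """Chain-of-thought: ask the model to reason before deciding."""
    return (f"Classify: {query}\n"
            "Think step by step about which class fits, "
            "then state your final answer as 'Class: <name>'.")
```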
Enhanced classification with confidence scores, reasoning explanations, and native reasoning modes
- **Confidence Score**: Every classification automatically includes a confidence score (0.0 to 1.0) indicating the model's certainty.
- **Reasoning Output**: Optional explanations from the LLM about why it made each classification decision. Enable with the `--enable-reasoning` flag.
- **Native Reasoning**: For reasoning-capable models, enable deeper analysis with `--reasoning-effort` (low/medium/high).
| Feature | Always Included | Parameter | Output Field |
|---|---|---|---|
| Confidence Score | ✓ Yes | - | `confidence` |
| Reasoning Output | Optional | `--enable-reasoning` | `reasoning` |
| Native Reasoning | Optional | `--reasoning-effort` | `reasoning_content` |
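A result record might look like the sketch below. Only the field names `confidence`, `reasoning`, and `reasoning_content` come from the table above; the surrounding record shape (`query`, `classification`) is an assumption for illustration.

```python
import json

# Hypothetical single-result record; the shape around the three documented
# fields is assumed, not taken from the tool's actual output.
raw = """{
  "query": "Great battery life",
  "classification": "positive",
  "confidence": 0.92,
  "reasoning": "Mentions a product attribute favourably.",
  "reasoning_content": "Step 1: identify sentiment cues..."
}"""

record = json.loads(raw)
assert 0.0 <= record["confidence"] <= 1.0   # confidence is always present

# reasoning / reasoning_content appear only when the optional flags are set:
explanation = record.get("reasoning")        # from --enable-reasoning
native = record.get("reasoning_content")     # from --reasoning-effort
```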
The ATAP AI Annotator leverages LiteLLM's unified API to provide seamless access to multiple LLM providers.
OpenAI • Anthropic • Google • AWS Bedrock • Azure • Cohere • Ollama (Local) • HuggingFace • and many more
One API interface • Automatic cost tracking • Token counting • Dynamic model discovery
🔒 Privacy-First Option: Use Ollama for completely local processing of sensitive data
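With LiteLLM, the provider is selected by a prefix on the model string while the call itself stays the same. A minimal sketch, assuming typical LiteLLM model identifiers (check each provider's documentation for current model names):

```python
# One call shape for every provider; the prefix on the model string
# (e.g. "ollama/", "bedrock/") routes the request. Model names below
# are examples only.

MODELS = {
    "OpenAI": "gpt-4o-mini",
    "Anthropic": "claude-3-5-haiku-20241022",
    "Ollama (local, privacy-first)": "ollama/llama3",
    "AWS Bedrock": "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
}

def classify(model: str, text: str):
    """Same interface for all providers. Hosted providers need an API key;
    Ollama runs entirely on your own machine."""
    import litellm  # imported lazily so the table above works without it
    return litellm.completion(
        model=model,
        messages=[{"role": "user", "content": f"Classify: {text}"}],
    )
```

Swapping providers is then a one-line change to the model string, e.g. from `"gpt-4o-mini"` to `"ollama/llama3"` for fully local processing.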
- CLI / REST API / WebApp
- CSV, XLSX, JSON input
- Async processing
- Rate limiting
- 100+ LLM providers
- Unified API interface
- JSON, CSV output
- Cost reports
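Async processing with rate limiting can be sketched with a semaphore that caps how many requests are in flight at once. The classifier below is a deterministic stand-in for the real LLM call:

```python
import asyncio

async def classify(text: str, sem: asyncio.Semaphore) -> str:
    """Stand-in classifier; the real tool awaits an LLM call here."""
    async with sem:                # rate limiting: at most N requests in flight
        await asyncio.sleep(0)     # placeholder for network latency
        return "positive" if "good" in text else "negative"

async def classify_batch(texts: list[str], max_concurrency: int = 5) -> list[str]:
    """Fan out the whole batch concurrently, bounded by the semaphore."""
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(classify(t, sem) for t in texts))

results = asyncio.run(classify_batch(["good value", "broke in a week"]))
```

Raising `max_concurrency` trades throughput against provider rate limits; real deployments usually also add retry with backoff.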