LLM & Service Configuration Options

Complete guide to LLM configuration and environment variables for Reducto

On this page
- OCR service configuration
  - Textract region configuration
    - Format
    - Use cases
    - Examples
- LLM provider environment variables
  - LiteLLM proxy
  - OpenAI
  - Azure OpenAI
  - Anthropic
  - Google
  - Gemini
  - AWS Bedrock
- AI usage tracking
  - How AI usage tracking works
  - Available via /parse API
  - Usage information structure
    - Field descriptions
  - Enabling AI usage tracking
  - Tracked AI operations
  - Model name standardization
  - Possible model identifiers
    - OpenAI models
    - Anthropic models
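The LLM provider sections above each correspond to a set of credential environment variables. As a hedged illustration, the snippet below uses the variable names that the providers' own SDKs read by default (OPENAI_API_KEY, ANTHROPIC_API_KEY, and so on); the exact names your Reducto on-premise deployment expects may differ, so treat this as a sketch and confirm against the configuration reference for your version.

```shell
# Illustrative only: standard credential environment variables read by each
# provider's official SDK. Reducto's on-prem deployment may expect different
# names — check the configuration reference for your release.

# OpenAI
export OPENAI_API_KEY="sk-..."

# Azure OpenAI (needs both an endpoint and a key)
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google Gemini
export GEMINI_API_KEY="..."

# AWS Bedrock (standard AWS credential chain and region)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```

If you route all model traffic through a LiteLLM proxy instead, only the proxy typically needs these provider credentials; the application then points at the proxy's base URL with a single key.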