Base URL
https://tokencounteronline.com/api
Authentication
No authentication required. This is a free public API, rate limited by IP address.
Rate Limits
- Limit: 60 requests per hour per IP
- Window: Sliding 1-hour window
- Headers: Rate-limit state is returned on every response:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1735306800
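A client can track its remaining quota by reading these headers. The sketch below parses them from a dict-like headers object; the header names come from this page, while the helper name and the seconds-until-reset calculation are illustrative.

```python
import time

def parse_rate_limit(headers):
    """Extract rate-limit state from a response's headers (dict-like)."""
    limit = int(headers["X-RateLimit-Limit"])
    remaining = int(headers["X-RateLimit-Remaining"])
    reset_at = int(headers["X-RateLimit-Reset"])  # Unix timestamp
    return {
        "limit": limit,
        "remaining": remaining,
        # Clamp to 0 in case the reset moment has already passed.
        "seconds_until_reset": max(0, reset_at - int(time.time())),
    }
```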
POST
/api/count
Counts tokens for the given text and model.
Request Body
{
"text": "Your text here to count tokens...",
"model": "openai"
}
Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Text to count (max 100,000 chars) |
| model | string | No | Model ID (default: "generic") |
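Since the server rejects oversized or malformed input, it can be useful to validate client-side before sending. The sketch below is based only on the constraints in the table above (100,000-character limit, default model "generic"); the function name and the error messages are illustrative, and the model list mirrors the Supported Models section.

```python
MAX_CHARS = 100_000  # documented limit for the "text" field
KNOWN_MODELS = {"generic", "openai", "claude", "gemini", "llama", "mistral"}

def build_count_payload(text, model="generic"):
    """Validate inputs client-side and build the /api/count request body."""
    if not isinstance(text, str) or not text:
        raise ValueError("text must be a non-empty string")
    if len(text) > MAX_CHARS:
        raise ValueError(f"text exceeds the {MAX_CHARS}-character limit")
    if model not in KNOWN_MODELS:
        raise ValueError(f"unknown model id: {model!r}")
    return {"text": text, "model": model}
```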
Success Response (200)
{
"success": true,
"data": {
"tokens": 1234,
"words": 890,
"characters": 5432,
"model": "openai",
"encoding": "o200k_base"
},
"rateLimit": {
"limit": 60,
"remaining": 58,
"reset": 3540
}
}
Error Response (429)
{
"success": false,
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Rate limit exceeded. Try again in 59 minutes.",
"retryAfter": 3540
}
}
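When a 429 arrives, the error body's `retryAfter` field tells the client how long to wait. Below is a minimal retry sketch: the error-envelope shape (`error.code`, `error.retryAfter`) and the endpoint URL come from this page, while the function names, the default delay, and the retry count are illustrative choices.

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://tokencounteronline.com/api/count"

def retry_delay(error_body, default=60):
    """Seconds to wait after a 429, read from the documented error envelope."""
    err = error_body.get("error", {})
    if err.get("code") == "RATE_LIMIT_EXCEEDED":
        return err.get("retryAfter", default)
    return None  # not a rate-limit error; don't retry

def count_tokens(text, model="generic", max_attempts=2):
    """POST to /api/count, sleeping once on 429 before giving up."""
    data = json.dumps({"text": text, "model": model}).encode()
    for attempt in range(max_attempts):
        req = urllib.request.Request(
            API_URL, data=data, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["data"]
        except urllib.error.HTTPError as exc:
            delay = retry_delay(json.load(exc)) if exc.code == 429 else None
            if delay is None or attempt + 1 == max_attempts:
                raise
            time.sleep(delay)
```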
GET
/api/models
Returns the list of all supported models.
Response (200)
{
"success": true,
"data": {
"models": [
{"id": "generic", "name": "Generic", "encoding": "approximation", "description": "Fast character/word-based approximation"},
{"id": "openai", "name": "ChatGPT (OpenAI)", "encoding": "o200k_base", "description": "GPT-4o, o1, o3, GPT-4, GPT-3.5"},
{"id": "claude", "name": "Claude (Anthropic)", "encoding": "sentencepiece", "description": "Claude 3.5, Claude 3, Claude 2"},
{"id": "gemini", "name": "Gemini (Google)", "encoding": "sentencepiece", "description": "Gemini Pro, Flash, Ultra"},
{"id": "llama", "name": "LLaMA (Meta)", "encoding": "sentencepiece", "description": "LLaMA 3.x, LLaMA 2"},
{"id": "mistral", "name": "Mistral", "encoding": "sentencepiece", "description": "Mistral AI models"}
]
}
}
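A client might use this payload to pick a model or inspect which tokenizer backs it. The small helper below groups model IDs by encoding; the response shape is taken from the sample above, but the function itself is an illustrative sketch.

```python
def models_by_encoding(models_response):
    """Group model IDs from the /api/models payload by tokenizer encoding."""
    grouped = {}
    for m in models_response["data"]["models"]:
        grouped.setdefault(m["encoding"], []).append(m["id"])
    return grouped
```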
Supported Models
| Model ID | Name | Encoding |
|---|---|---|
| generic | Generic | approximation |
| openai | ChatGPT (OpenAI) | o200k_base |
| claude | Claude (Anthropic) | sentencepiece |
| gemini | Gemini (Google) | sentencepiece |
| llama | LLaMA (Meta) | sentencepiece |
| mistral | Mistral | sentencepiece |
Code Examples
cURL
curl -X POST https://tokencounteronline.com/api/count \
-H "Content-Type: application/json" \
-d '{"text": "Hello, how are you today?", "model": "openai"}'
JavaScript (Fetch)
const response = await fetch('https://tokencounteronline.com/api/count', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
text: 'Hello, how are you today?',
model: 'openai'
})
});
const data = await response.json();
console.log(`Tokens: ${data.data.tokens}`);
Python
import requests
response = requests.post(
'https://tokencounteronline.com/api/count',
json={
'text': 'Hello, how are you today?',
'model': 'openai'
}
)
data = response.json()
print(f"Tokens: {data['data']['tokens']}")
Node.js (v18+, which ships a built-in fetch)
const response = await fetch('https://tokencounteronline.com/api/count', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
text: 'Hello, how are you today?',
model: 'openai'
})
});
const data = await response.json();
console.log(`Tokens: ${data.data.tokens}`);