EOSAI API Reference
Complete reference documentation for the EOSAI API. Build applications with EOS methodology intelligence built-in.
Authentication
All API requests require authentication via API key. Include your key in the request header:
Bearer Token (Recommended)
Authorization: Bearer YOUR_API_KEY
X-API-Key Header
X-API-Key: YOUR_API_KEY
Chat Request Parameters
Complete list of parameters for the /v1/chat endpoint:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects with role (system/user/assistant) and content |
| model | string | No | eosai-v1 | Model to use for completion |
| stream | boolean | No | false | If true, returns a stream of server-sent events |
| temperature | number | No | 0.7 | Sampling temperature (0-2). Higher values make output more random |
| max_tokens | integer | No | 4096 | Maximum number of tokens to generate (1-16384) |
| top_p | number | No | - | Nucleus sampling parameter (0-1) |
| frequency_penalty | number | No | - | Penalty for token frequency (-2 to 2) |
| presence_penalty | number | No | - | Penalty for token presence (-2 to 2) |
| stop | string or string[] | No | - | Sequences where the API will stop generating |
| include_eos_context | boolean | No | true | Whether to include EOS knowledge base context |
| eos_namespace | string | No | eos-implementer | EOS knowledge namespace to search for context |
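The parameters above map directly onto the JSON request body. As a minimal sketch (the `build_chat_request` helper is illustrative, not part of any SDK), a request can be assembled like this:

```python
import json
import os

API_URL = "https://eosbot.ai/api/v1/chat"

def build_chat_request(messages, **params):
    """Assemble headers and JSON body for POST /v1/chat.

    Any keyword argument (temperature, max_tokens, stop, ...) is passed
    through as a request parameter; unset ones fall back to the API defaults.
    """
    headers = {
        "Authorization": f"Bearer {os.environ.get('EOSAI_API_KEY', 'YOUR_API_KEY')}",
        "Content-Type": "application/json",
    }
    body = {"model": "eosai-v1", "messages": messages, **params}
    return headers, body

headers, body = build_chat_request(
    [{"role": "user", "content": "What is a Level 10 Meeting?"}],
    temperature=0.7,
    max_tokens=512,
    include_eos_context=True,
)
print(json.dumps(body, indent=2))
```

Send it with `requests.post(API_URL, headers=headers, json=body, timeout=60)`. With `stream: true`, the response arrives as server-sent events instead of a single JSON object.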
Chat Completions
Generate AI responses with EOS methodology knowledge built-in.
Conversations
Persistent multi-turn conversations with automatic history management.
Document Analysis
Analyze documents and answer questions about their content.
Embeddings
Generate vector embeddings for semantic search and similarity.
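Once embedding vectors are retrieved, similarity is typically scored as the cosine of the angle between them. A minimal sketch, independent of any EOSAI endpoint details (which this page does not list):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    dot(a, b) / (|a| * |b|). Returns 1.0 for identical
    directions, 0.0 for orthogonal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```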
Models
Discover available models and EOS knowledge namespaces.
Usage
Monitor API usage and rate limits for your API key.
Error Codes
All errors follow a standard format with a message, type, code, and optional parameter field.
{
"error": {
"message": "Invalid API key",
"type": "authentication_error",
"code": "invalid_api_key",
"param": null
}
}

| Status | Type | Code | Description |
|---|---|---|---|
| 400 | invalid_request_error | invalid_json | Request body is not valid JSON |
| 400 | invalid_request_error | invalid_param | A request parameter is invalid |
| 400 | invalid_request_error | model_not_found | The requested model does not exist |
| 400 | invalid_request_error | message_too_long | Message content exceeds maximum length |
| 401 | authentication_error | missing_api_key | No API key provided in request |
| 401 | authentication_error | invalid_api_key | API key is invalid or expired |
| 401 | authentication_error | invalid_api_key_format | API key format is invalid |
| 403 | permission_error | insufficient_scope | API key lacks required permissions |
| 403 | permission_error | model_not_allowed | API key cannot access this model |
| 404 | invalid_request_error | not_found | Resource not found |
| 429 | rate_limit_error | rate_limit_exceeded | Too many requests |
| 500 | server_error | internal_error | Internal server error |
| 503 | server_error | service_unavailable | Service temporarily unavailable |
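The table suggests a simple client-side policy: 429 and 5xx errors are transient and worth retrying, while 4xx errors indicate a problem with the request itself and will fail again unchanged. A small helper (ours, not part of any SDK) makes that rule explicit:

```python
def is_retryable(status: int) -> bool:
    """Return True for errors the table above marks as transient:
    rate limits (429) and server errors (500, 503)."""
    return status == 429 or status >= 500
```

The retry loops in the next section apply exactly this rule before backing off.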
Error Handling Best Practices
Always implement proper error handling with retry logic for rate limits and server errors. Here are production-ready examples:
JavaScript / TypeScript
// Robust error handling with retry logic
async function callEOSAI(messages, options = {}) {
const { maxRetries = 3, baseDelay = 1000 } = options;
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
const response = await fetch('https://eosbot.ai/api/v1/chat', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.EOSAI_API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ messages, model: 'eosai-v1' }),
});
// Handle rate limiting
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const delay = retryAfter
? parseInt(retryAfter) * 1000
: baseDelay * Math.pow(2, attempt);
console.log(`Rate limited. Retrying in ${delay}ms...`);
await new Promise(r => setTimeout(r, delay));
continue;
}
// Handle server errors with retry
if (response.status >= 500) {
const delay = baseDelay * Math.pow(2, attempt);
console.log(`Server error. Retrying in ${delay}ms...`);
await new Promise(r => setTimeout(r, delay));
continue;
}
// Parse response
const data = await response.json();
// Handle API errors
if (!response.ok) {
const error = data.error;
throw new Error(`[${error.code}] ${error.message}`);
}
return data;
} catch (error) {
if (attempt === maxRetries - 1) throw error;
}
}
throw new Error('Max retries exceeded');
}
// Usage
try {
const result = await callEOSAI([
{ role: 'user', content: 'What is a Level 10 Meeting?' }
]);
console.log(result.choices[0].message.content);
} catch (error) {
console.error('API Error:', error.message);
}

Python
import os
import requests
import time
class EOSAIError(Exception):
def __init__(self, message: str, code: str, status: int):
self.message = message
self.code = code
self.status = status
super().__init__(f"[{code}] {message}")
def call_eosai(messages: list, max_retries: int = 3, base_delay: float = 1.0) -> dict:
"""Call EOSAI API with automatic retry and error handling."""
for attempt in range(max_retries):
try:
response = requests.post(
'https://eosbot.ai/api/v1/chat',
headers={
'Authorization': f'Bearer {os.environ["EOSAI_API_KEY"]}',
'Content-Type': 'application/json',
},
json={'messages': messages, 'model': 'eosai-v1'},
timeout=60
)
# Handle rate limiting
if response.status_code == 429:
retry_after = response.headers.get('Retry-After')
delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
print(f"Rate limited. Retrying in {delay}s...")
time.sleep(delay)
continue
# Handle server errors with retry
if response.status_code >= 500:
delay = base_delay * (2 ** attempt)
print(f"Server error. Retrying in {delay}s...")
time.sleep(delay)
continue
data = response.json()
# Handle API errors
if not response.ok:
error = data.get('error', {})
raise EOSAIError(
error.get('message', 'Unknown error'),
error.get('code', 'unknown'),
response.status_code
)
return data
except requests.exceptions.RequestException as e:
if attempt == max_retries - 1:
raise
delay = base_delay * (2 ** attempt)
print(f"Request failed: {e}. Retrying in {delay}s...")
time.sleep(delay)
raise Exception("Max retries exceeded")
# Usage
try:
result = call_eosai([
{'role': 'user', 'content': 'What is a Level 10 Meeting?'}
])
print(result['choices'][0]['message']['content'])
except EOSAIError as e:
print(f"API Error ({e.status}): {e.message}")
except Exception as e:
print(f"Error: {e}")

Key Best Practices
1. Always use exponential backoff for retries (double the delay each attempt)
2. Check the Retry-After header on 429 responses for the exact wait time
3. Set a maximum retry limit (3-5 attempts) to avoid infinite loops
4. Log error codes and messages for debugging
5. Use timeouts to prevent hanging requests (60s recommended)
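Points 1-3 can be condensed into a single delay function. This is a sketch; the full-jitter strategy is our choice, not an API requirement:

```python
import random
from typing import Optional

def backoff_delay(attempt: int, base: float = 1.0,
                  retry_after: Optional[str] = None) -> float:
    """Delay in seconds before retry number `attempt` (0-based).

    Honors a Retry-After header value when the server provides one;
    otherwise uses exponential backoff (base * 2**attempt) with
    full jitter to avoid synchronized retry storms.
    """
    if retry_after is not None:
        return float(retry_after)
    return random.uniform(0, base * (2 ** attempt))
```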
Rate Limits
API keys have per-minute and daily request limits. Rate limit headers are included in all responses.
Default Limits
- Requests per minute: 60
- Requests per day: 1,000
Response Headers
- X-RateLimit-Limit-RPM
- X-RateLimit-Remaining-RPM
- X-RateLimit-Limit-RPD
- X-RateLimit-Remaining-RPD
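A client can read these headers from each response to throttle itself before ever hitting a 429. The header names are taken from the list above; the `remaining_quota` helper is illustrative:

```python
def remaining_quota(headers: dict) -> dict:
    """Extract per-minute (RPM) and per-day (RPD) limits and
    remaining request counts from rate-limit response headers."""
    return {
        "rpm_limit": int(headers.get("X-RateLimit-Limit-RPM", 0)),
        "rpm_remaining": int(headers.get("X-RateLimit-Remaining-RPM", 0)),
        "rpd_limit": int(headers.get("X-RateLimit-Limit-RPD", 0)),
        "rpd_remaining": int(headers.get("X-RateLimit-Remaining-RPD", 0)),
    }

quota = remaining_quota({
    "X-RateLimit-Limit-RPM": "60",
    "X-RateLimit-Remaining-RPM": "59",
    "X-RateLimit-Limit-RPD": "1000",
    "X-RateLimit-Remaining-RPD": "998",
})
print(quota)
```

With `requests`, the headers are available as `response.headers`, which can be passed to the helper directly.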