
EOSAI API Reference

Complete reference documentation for the EOSAI API. Build applications with EOS methodology intelligence built-in.

Authentication

All API requests require authentication via API key. Include your key in the request header:

Bearer Token (Recommended)

Authorization: Bearer YOUR_API_KEY

X-API-Key Header

X-API-Key: YOUR_API_KEY
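Either scheme can be wrapped in a small helper so the rest of your code never builds headers by hand. A minimal sketch in Python (the `auth_headers` name is illustrative, not part of any official SDK):

```python
def auth_headers(api_key: str, use_bearer: bool = True) -> dict:
    """Build request headers for the EOSAI API using either auth scheme.

    Bearer token is the recommended scheme; X-API-Key is the alternative.
    """
    if use_bearer:
        return {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
    return {
        "X-API-Key": api_key,
        "Content-Type": "application/json",
    }
```

Load the key from an environment variable (as the examples below do) rather than hard-coding it in source.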

Chat Request Parameters

Complete list of parameters for the /v1/chat endpoint:

Parameter           | Type               | Required | Default         | Description
------------------- | ------------------ | -------- | --------------- | -----------
messages            | array              | Yes      | -               | Array of message objects with role (system/user/assistant) and content
model               | string             | No       | eosai-v1        | Model to use for completion
stream              | boolean            | No       | false           | If true, returns a stream of server-sent events
temperature         | number             | No       | 0.7             | Sampling temperature (0-2). Higher values make output more random
max_tokens          | integer            | No       | 4096            | Maximum number of tokens to generate (1-16384)
top_p               | number             | No       | -               | Nucleus sampling parameter (0-1)
frequency_penalty   | number             | No       | -               | Penalty for token frequency (-2 to 2)
presence_penalty    | number             | No       | -               | Penalty for token presence (-2 to 2)
stop                | string or string[] | No       | -               | Sequences where the API will stop generating
include_eos_context | boolean            | No       | true            | Whether to include EOS knowledge base context
eos_namespace       | string             | No       | eos-implementer | EOS knowledge namespace to search for context
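The documented defaults and ranges can be enforced client-side before a request is sent, which turns out-of-range values into immediate errors instead of 400 responses. The `build_chat_request` helper below is a hypothetical convenience, not part of an official SDK:

```python
def build_chat_request(messages, model="eosai-v1", temperature=0.7,
                       max_tokens=4096, stream=False,
                       include_eos_context=True,
                       eos_namespace="eos-implementer", **optional):
    """Assemble a /v1/chat request body, enforcing the documented ranges."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if not 1 <= max_tokens <= 16384:
        raise ValueError("max_tokens must be between 1 and 16384")
    body = {
        "messages": messages,
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": stream,
        "include_eos_context": include_eos_context,
        "eos_namespace": eos_namespace,
    }
    # Optional tuning parameters: top_p, frequency_penalty,
    # presence_penalty, stop
    body.update(optional)
    return body
```

Pass the returned dict as the JSON body of a POST to /v1/chat.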

Chat Completions

Generate AI responses with EOS methodology knowledge built-in.

Conversations

Persistent multi-turn conversations with automatic history management.

Document Analysis

Analyze documents and answer questions about their content.

Embeddings

Generate vector embeddings for semantic search and similarity.

Models

Discover available models and EOS knowledge namespaces.

Usage

Monitor API usage and rate limits for your API key.

Error Codes

All errors follow a standard format with a message, type, code, and optional parameter field.

{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key",
    "param": null
  }
}
Status | Type                  | Code                   | Description
------ | --------------------- | ---------------------- | -----------
400    | invalid_request_error | invalid_json           | Request body is not valid JSON
400    | invalid_request_error | invalid_param          | A request parameter is invalid
400    | invalid_request_error | model_not_found        | The requested model does not exist
400    | invalid_request_error | message_too_long       | Message content exceeds maximum length
401    | authentication_error  | missing_api_key        | No API key provided in request
401    | authentication_error  | invalid_api_key        | API key is invalid or expired
401    | authentication_error  | invalid_api_key_format | API key format is invalid
403    | permission_error      | insufficient_scope     | API key lacks required permissions
403    | permission_error      | model_not_allowed      | API key cannot access this model
404    | invalid_request_error | not_found              | Resource not found
429    | rate_limit_error      | rate_limit_exceeded    | Too many requests
500    | server_error          | internal_error         | Internal server error
503    | server_error          | service_unavailable    | Service temporarily unavailable

Error Handling Best Practices

Always implement proper error handling with retry logic for rate limits and server errors. Here are production-ready examples:

JavaScript / TypeScript

// Robust error handling with retry logic
async function callEOSAI(messages, options = {}) {
  const { maxRetries = 3, baseDelay = 1000 } = options;
  
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://eosbot.ai/api/v1/chat', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${process.env.EOSAI_API_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ messages, model: 'eosai-v1' }),
      });

      // Handle rate limiting
      if (response.status === 429) {
        const retryAfter = response.headers.get('Retry-After');
        const delay = retryAfter
          ? parseInt(retryAfter, 10) * 1000
          : baseDelay * Math.pow(2, attempt);
        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise(r => setTimeout(r, delay));
        continue;
      }

      // Handle server errors with retry
      if (response.status >= 500) {
        const delay = baseDelay * Math.pow(2, attempt);
        console.log(`Server error. Retrying in ${delay}ms...`);
        await new Promise(r => setTimeout(r, delay));
        continue;
      }

      // Parse response
      const data = await response.json();

      // Client errors (4xx) are permanent -- surface them without retrying
      if (!response.ok) {
        const error = new Error(`[${data.error.code}] ${data.error.message}`);
        error.retryable = false;
        throw error;
      }

      return data;
    } catch (error) {
      if (error.retryable === false) throw error;
      if (attempt === maxRetries - 1) throw error;
      // Back off before retrying network failures
      await new Promise(r => setTimeout(r, baseDelay * Math.pow(2, attempt)));
    }
  }

  throw new Error('Max retries exceeded');
}

// Usage
try {
  const result = await callEOSAI([
    { role: 'user', content: 'What is a Level 10 Meeting?' }
  ]);
  console.log(result.choices[0].message.content);
} catch (error) {
  console.error('API Error:', error.message);
}

Python

import os
import time

import requests

class EOSAIError(Exception):
    def __init__(self, message: str, code: str, status: int):
        self.message = message
        self.code = code
        self.status = status
        super().__init__(f"[{code}] {message}")

def call_eosai(messages: list, max_retries: int = 3, base_delay: float = 1.0) -> dict:
    """Call EOSAI API with automatic retry and error handling."""
    
    for attempt in range(max_retries):
        try:
            response = requests.post(
                'https://eosbot.ai/api/v1/chat',
                headers={
                    'Authorization': f'Bearer {os.environ["EOSAI_API_KEY"]}',
                    'Content-Type': 'application/json',
                },
                json={'messages': messages, 'model': 'eosai-v1'},
                timeout=60
            )
            
            # Handle rate limiting
            if response.status_code == 429:
                retry_after = response.headers.get('Retry-After')
                delay = int(retry_after) if retry_after else base_delay * (2 ** attempt)
                print(f"Rate limited. Retrying in {delay}s...")
                time.sleep(delay)
                continue
            
            # Handle server errors with retry
            if response.status_code >= 500:
                delay = base_delay * (2 ** attempt)
                print(f"Server error. Retrying in {delay}s...")
                time.sleep(delay)
                continue
            
            data = response.json()
            
            # Handle API errors
            if not response.ok:
                error = data.get('error', {})
                raise EOSAIError(
                    error.get('message', 'Unknown error'),
                    error.get('code', 'unknown'),
                    response.status_code
                )
            
            return data
            
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Request failed: {e}. Retrying in {delay}s...")
            time.sleep(delay)
    
    raise Exception("Max retries exceeded")

# Usage
try:
    result = call_eosai([
        {'role': 'user', 'content': 'What is a Level 10 Meeting?'}
    ])
    print(result['choices'][0]['message']['content'])
except EOSAIError as e:
    print(f"API Error ({e.status}): {e.message}")
except Exception as e:
    print(f"Error: {e}")

Key Best Practices

  1. Always use exponential backoff for retries (double the delay each attempt)
  2. Check the Retry-After header on 429 responses for the exact wait time
  3. Set a maximum retry limit (3-5 attempts) to avoid infinite loops
  4. Log error codes and messages for debugging
  5. Use timeouts to prevent hanging requests (60s recommended)
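The delay calculation from these practices can be factored into a single helper. This sketch honors Retry-After when present and otherwise applies exponential backoff with full jitter and a cap; the jitter and cap are additions beyond the examples above, which use plain doubling:

```python
import random


def backoff_delay(attempt, base=1.0, cap=30.0, retry_after=None):
    """Seconds to wait before retry number `attempt` (0-indexed).

    Honors a server-supplied Retry-After value when given; otherwise
    uses exponential backoff with full jitter, capped at `cap` seconds.
    """
    if retry_after is not None:
        return float(retry_after)
    return min(cap, random.uniform(0, base * (2 ** attempt)))
```

Jitter spreads out retries from many clients so they do not hammer the server in synchronized waves.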

Rate Limits

API keys have per-minute and daily request limits. Rate limit headers are included in all responses.

Default Limits

  • Requests per minute: 60
  • Requests per day: 1,000

Response Headers

  • X-RateLimit-Limit-RPM: your per-minute request limit
  • X-RateLimit-Remaining-RPM: requests remaining in the current minute
  • X-RateLimit-Limit-RPD: your per-day request limit
  • X-RateLimit-Remaining-RPD: requests remaining in the current day
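A small helper can parse these headers into integers for proactive throttling, so your client can slow down before hitting a 429. The header names are taken from the list above; the function name is illustrative:

```python
def remaining_quota(headers):
    """Parse rate-limit response headers into integers.

    Missing headers default to 0 so callers can treat the result
    as 'no quota known'.
    """
    return {
        "rpm_limit": int(headers.get("X-RateLimit-Limit-RPM", 0)),
        "rpm_remaining": int(headers.get("X-RateLimit-Remaining-RPM", 0)),
        "rpd_limit": int(headers.get("X-RateLimit-Limit-RPD", 0)),
        "rpd_remaining": int(headers.get("X-RateLimit-Remaining-RPD", 0)),
    }
```

For example, pause sending when `rpm_remaining` drops to 0 instead of waiting for a rate_limit_exceeded error.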