Agent integration guide
This is a compact API reference designed for LLM agent context windows. If you are building an integration using an AI coding assistant or agent framework, this page contains everything it needs.
What is Lexey?
Lexey is a customer support agent platform. It exposes two REST APIs: a Customer API (send support messages, receive AI responses) and a Management API (configure the support agent via natural language).
Both APIs use the same pattern: create a conversation, then send messages within it.
Authentication
Pass an API key in the Authorization header on every request:
Authorization: Bearer <key>
- Customer keys have prefix `lxc_` and can only access `/api/v1/customer/*`
- Management keys have prefix `lxm_` and can only access `/api/v1/management/*`
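The key-prefix routing can be sketched in Python. This is a minimal sketch, not an official client; `BASE_URL` is a placeholder for your Lexey host:

```python
# Sketch of the auth pattern. BASE_URL is a placeholder, not a real
# Lexey endpoint; substitute your deployment's host.
BASE_URL = "https://api.example.com"


def auth_headers(api_key: str) -> dict:
    """Authorization header expected by both APIs."""
    return {"Authorization": f"Bearer {api_key}"}


def api_root(api_key: str) -> str:
    """Pick the API root that matches the key's prefix."""
    if api_key.startswith("lxc_"):
        return f"{BASE_URL}/api/v1/customer"
    if api_key.startswith("lxm_"):
        return f"{BASE_URL}/api/v1/management"
    raise ValueError("unrecognized API key prefix")
```

Checking the prefix up front avoids a round trip that would end in a 403 (wrong key type).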
Endpoints
Create a conversation:
```
POST /api/v1/{customer|management}/conversations
→ 201 {"conversationId": "<uuid>"}
```
No request body required. You must create a conversation before sending messages.
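A stdlib-only sketch of this call (the host in `root` is an assumption; see the auth section for key prefixes):

```python
import json
import urllib.request


def create_conversation(root: str, api_key: str) -> str:
    """POST /conversations and return the new conversationId."""
    req = urllib.request.Request(
        f"{root}/conversations",
        method="POST",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:  # expect 201
        return parse_conversation_id(resp.read().decode())


def parse_conversation_id(body: str) -> str:
    """Extract conversationId from the 201 response body."""
    return json.loads(body)["conversationId"]
```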
Send a message (streamed response):
```
POST /api/v1/{customer|management}/conversations/<id>/messages
Body: {"message": "<text>"}
→ 200 text/event-stream (SSE)
```
The response is an SSE stream, NOT a JSON response. See the Streaming guide for details.
Get message history:
```
GET /api/v1/{customer|management}/conversations/<id>/messages
→ 200 {"messages": [{"role": "user"|"assistant", "content": "...", "createdAt": "..."}]}
```
Reading SSE streams
Pseudocode:
```
full_text = ""
for each event (separated by blank lines):
    find the line starting with "data: "
    data = everything after "data: "
    if data == "[DONE]": break
    parse data as JSON
    if json.type == "delta": full_text += json.text
    if json.type == "error": handle error
return full_text
```
Pre-stream errors (status 400, 401, 402, 403, 404, 429) return JSON, not SSE. Always check the HTTP status code first.
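The pseudocode above can be sketched as a pure Python function over already-decoded lines, so it is easy to test against a canned stream. The `message` field on error events is an assumption; only `type`, `text`, and the `[DONE]` sentinel come from this guide:

```python
import json


def read_sse(lines) -> str:
    """Collect delta text from an SSE stream, given decoded lines."""
    full_text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank event separators and other fields
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        event = json.loads(data)
        if event.get("type") == "delta":
            full_text.append(event["text"])
        elif event.get("type") == "error":
            # "message" key is an assumed payload field
            raise RuntimeError(event.get("message", "stream error"))
    return "".join(full_text)
```

Remember to check the HTTP status before handing the body to a reader like this: pre-stream errors arrive as plain JSON, not SSE.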
Workflow examples
Customer: ask a support question
```
1. POST /conversations → get conversationId
2. POST /conversations/{id}/messages {"message": "What are your hours?"}
   → read SSE stream → collect full_text
3. POST /conversations/{id}/messages {"message": "Are you open weekends?"}
   → read SSE stream → collect full_text
```
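The steps above can be sketched with the HTTP transport injected as a function, keeping the flow visible without committing to a client library. `post` is a stand-in the caller supplies: it creates the conversation or sends a message and returns the collected response:

```python
def ask_support(post, questions):
    """Run a multi-turn support exchange in one conversation.

    `post(path, body)` is a caller-supplied transport: for the create
    call it returns the parsed JSON dict; for message sends it returns
    the SSE stream already collected into full_text.
    """
    conv = post("/conversations", None)["conversationId"]
    answers = []
    for question in questions:
        # Related questions reuse one conversation so the agent
        # retains context across turns.
        answers.append(post(f"/conversations/{conv}/messages",
                            {"message": question}))
    return answers
```

Injecting the transport also makes the flow trivial to exercise with a fake `post` in tests.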
Management: configure the agent
```
1. POST /conversations → get conversationId
2. POST /conversations/{id}/messages {"message": "Set business context: We are a pet store..."}
   → read SSE stream → agent confirms
3. POST /conversations/{id}/messages {"message": "Add a knowledge article about returns..."}
   → read SSE stream → agent confirms
```
Error codes
| Status | Meaning | Action |
|---|---|---|
| 400 | Invalid request | Fix request body |
| 401 | Invalid API key | Check key |
| 402 | No subscription / credits | Subscribe or add credits |
| 403 | Wrong key type | Use lxc_ for customer, lxm_ for management |
| 404 | Conversation not found | Check ID |
| 429 | Message limit (100) | Create new conversation |
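One way to act on the table above is a small status-to-action dispatch. The action names here are illustrative, not part of the API:

```python
# Map pre-stream HTTP status codes to the recovery actions in the
# table above. Action strings are illustrative placeholders.
def next_action(status: int) -> str:
    if status == 404:
        return "check-conversation-id"
    if status == 429:
        return "create-new-conversation"  # 100-message limit reached
    if status in (400, 401, 402, 403):
        return "fix-request-or-credentials"
    return "proceed"
```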
Rules for agents
- Always create a conversation first — you cannot send messages without a conversationId.
- Send message responses are SSE streams — do NOT try to parse as JSON.
- Check HTTP status before reading the stream — errors return JSON, not SSE.
- Use one conversation for related messages — the agent retains context.
- Start a new conversation when switching topics — this gives a clean context.
- Management messages may take longer — the agent executes tools during the response.
- Collect the full response before acting on it — don't parse partial streams.