
Agent integration guide

This is a compact API reference designed for LLM agent context windows. If you are building an integration using an AI coding assistant or agent framework, this page contains everything it needs.

What is Lexey?

Lexey is a customer support agent platform. It exposes two REST APIs: a Customer API (send support messages, receive AI responses) and a Management API (configure the support agent via natural language).

Both APIs use the same pattern: create a conversation, then send messages within it.

Authentication

Pass an API key in the Authorization header on every request:

Authorization: Bearer <key>
  • Customer keys have prefix lxc_ and can only access /api/v1/customer/*
  • Management keys have prefix lxm_ and can only access /api/v1/management/*
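A minimal sketch of building the auth header and enforcing the key/path pairing client-side. The helper names (`auth_headers`, `allowed_prefix`) are illustrative, not part of the API:

```python
def auth_headers(api_key: str) -> dict:
    """Build the Authorization header Lexey expects on every request."""
    return {"Authorization": f"Bearer {api_key}"}

def allowed_prefix(api_key: str) -> str:
    """Map a key's prefix to the path prefix it may call.

    lxc_ keys -> /api/v1/customer/*, lxm_ keys -> /api/v1/management/*.
    Rejecting a mismatched key up front avoids a 403 round trip.
    """
    if api_key.startswith("lxc_"):
        return "/api/v1/customer/"
    if api_key.startswith("lxm_"):
        return "/api/v1/management/"
    raise ValueError("unrecognised API key prefix")
```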

Endpoints

Create a conversation:

POST /api/v1/{customer|management}/conversations
→ 201 {"conversationId": "<uuid>"}

No request body required. You must create a conversation before sending messages.

Send a message (streamed response):

POST /api/v1/{customer|management}/conversations/<id>/messages
Body: {"message": "<text>"}
→ 200 text/event-stream (SSE)

The response is an SSE stream, NOT a JSON response. See the Streaming guide for details.

Get message history:

GET /api/v1/{customer|management}/conversations/<id>/messages
→ 200 {"messages": [{"role": "user"|"assistant", "content": "...", "createdAt": "..."}]}
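A small helper for working with the documented history shape, for example to recover the latest reply. The function name is illustrative:

```python
from typing import Optional

def last_assistant_message(history: dict) -> Optional[str]:
    """Return the newest assistant reply from a GET .../messages payload.

    Expects the documented shape:
    {"messages": [{"role": ..., "content": ..., "createdAt": ...}, ...]}
    with messages in chronological order.
    """
    for msg in reversed(history.get("messages", [])):
        if msg.get("role") == "assistant":
            return msg["content"]
    return None
```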

Reading SSE streams

Pseudocode:

full_text = ""
for each event (separated by blank lines):
    find the line starting with "data: "
    data = everything after "data: "
    if data == "[DONE]": break
    parse data as JSON
    if json.type == "delta": full_text += json.text
    if json.type == "error": handle error
return full_text
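The pseudocode above can be sketched as a concrete parser over a buffered stream body. The `delta`/`text` fields follow the pseudocode; the `message` field on error events is an assumption, so treat it as illustrative:

```python
import json

def read_sse(raw: str) -> str:
    """Accumulate delta text from a Lexey SSE body.

    `raw` is the full text/event-stream payload; events are separated by
    blank lines and each carries one "data: " line.
    """
    full_text = ""
    for event in raw.split("\n\n"):
        for line in event.splitlines():
            if not line.startswith("data: "):
                continue
            data = line[len("data: "):]
            if data == "[DONE]":
                return full_text
            payload = json.loads(data)
            if payload.get("type") == "delta":
                full_text += payload.get("text", "")
            elif payload.get("type") == "error":
                # Field name "message" is assumed, not documented above.
                raise RuntimeError(payload.get("message", "stream error"))
    return full_text
```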

Pre-stream errors (status 400, 401, 402, 403, 404, 429) return JSON, not SSE. Always check the HTTP status code first.
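A sketch of that status check, run before touching the body. The function name and error type are illustrative:

```python
import json

PRE_STREAM_ERRORS = {400, 401, 402, 403, 404, 429}

def check_response(status: int, content_type: str, body: bytes) -> None:
    """Raise on pre-stream errors; only 200 text/event-stream may be streamed."""
    if status in PRE_STREAM_ERRORS:
        # Error bodies are JSON, not SSE -- safe to parse directly.
        detail = json.loads(body.decode("utf-8"))
        raise RuntimeError(f"Lexey error {status}: {detail}")
    if status != 200 or "text/event-stream" not in content_type:
        raise RuntimeError(f"unexpected response: {status} {content_type}")
```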

Workflow examples

Customer: ask a support question

1. POST /conversations → get conversationId
2. POST /conversations/{id}/messages {"message": "What are your hours?"}
   → read SSE stream → collect full_text
3. POST /conversations/{id}/messages {"message": "Are you open weekends?"}
   → read SSE stream → collect full_text

Management: configure the agent

1. POST /conversations → get conversationId
2. POST /conversations/{id}/messages {"message": "Set business context: We are a pet store..."}
   → read SSE stream → agent confirms
3. POST /conversations/{id}/messages {"message": "Add a knowledge article about returns..."}
   → read SSE stream → agent confirms
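Both workflows follow the same create-then-send loop, sketched here with an injectable `post` callable so it works with any HTTP client (and can be exercised without a network). `run_conversation` and the `post` signature are illustrative, not part of the API:

```python
import json

def run_conversation(post, messages):
    """Create one conversation, then send each message in order.

    `post(path, body)` performs an authenticated POST and returns
    (status, response_text): JSON text for the create call, the full
    SSE body for message calls. Returns collected assistant replies,
    one per message sent.
    """
    status, body = post("/conversations", None)
    assert status == 201, f"create failed: {status}"
    conversation_id = json.loads(body)["conversationId"]

    replies = []
    for message in messages:
        status, body = post(f"/conversations/{conversation_id}/messages",
                            json.dumps({"message": message}))
        assert status == 200, f"send failed: {status}"
        # Accumulate delta events from the SSE body (see "Reading SSE streams").
        full_text = ""
        for event in body.split("\n\n"):
            for line in event.splitlines():
                if line.startswith("data: "):
                    data = line[len("data: "):]
                    if data == "[DONE]":
                        break
                    payload = json.loads(data)
                    if payload.get("type") == "delta":
                        full_text += payload.get("text", "")
        replies.append(full_text)
    return replies
```

Injecting `post` keeps the workflow logic separate from transport concerns (auth headers, retries, base URL), which live in the HTTP client you supply.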

Error codes

Status  Meaning                     Action
400     Invalid request             Fix the request body
401     Invalid API key             Check the key
402     No subscription / credits   Subscribe or add credits
403     Wrong key type              Use lxc_ for customer, lxm_ for management
404     Conversation not found      Check the conversation ID
429     Message limit reached (100) Create a new conversation

Rules for agents

  1. Always create a conversation first — you cannot send messages without a conversationId.
  2. Send message responses are SSE streams — do NOT try to parse as JSON.
  3. Check HTTP status before reading the stream — errors return JSON, not SSE.
  4. Use one conversation for related messages — the agent retains context.
  5. Start a new conversation when switching topics — this gives a clean context.
  6. Management messages may take longer — the agent executes tools during the response.
  7. Collect the full response before acting on it — don't parse partial streams.
