
# Agents

An `Agent` is the top-level orchestrator in TnsAI. It owns an LLM client, one or more roles, a memory store, and an event system. Agents handle the full chat loop: receiving a message, consulting their roles for available actions, calling the LLM, executing tool calls, and returning a response.
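One turn of that loop can be sketched end to end. Everything below is an illustrative stand-in (the `LlmReply` record, the scripted LLM, and the `search` tool are hypothetical), not TnsAI's actual internals: the LLM either requests a tool call, whose result is appended to the history, or produces the final answer.

```java
import java.util.*;
import java.util.function.*;

// Stand-in sketch of one agent chat turn: the "LLM" first requests a tool
// call, its result is fed back into the history, then the LLM answers.
public class AgentLoopSketch {
    record LlmReply(String text, String toolToCall, String toolArg) {}

    public static void main(String[] args) {
        Map<String, Function<String, String>> tools =
                Map.of("search", q -> "results for: " + q);

        // Scripted stand-in LLM: tool call first, final answer second.
        Deque<LlmReply> script = new ArrayDeque<>(List.of(
                new LlmReply(null, "search", "BDI architecture"),
                new LlmReply("BDI = beliefs, desires, intentions.", null, null)));
        Function<List<String>, LlmReply> llm = history -> script.pop();

        List<String> history = new ArrayList<>(List.of("user: What is BDI architecture?"));
        String response = null;
        while (response == null) {                 // the agent loop
            LlmReply reply = llm.apply(history);
            if (reply.toolToCall() != null) {      // execute the requested tool call
                String result = tools.get(reply.toolToCall()).apply(reply.toolArg());
                history.add("tool: " + result);    // feed the result back in
            } else {
                response = reply.text();           // final answer ends the loop
            }
        }
        System.out.println(response);
    }
}
```

The point of the sketch is the shape of the loop: tool calls keep the loop running, and only a plain-text reply terminates it.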

## Quick Start

The fastest way to create an agent is with `AgentBuilder`:

```java
Agent agent = AgentBuilder.create()
    .llm(LLMClientFactory.create("openai", "gpt-4o", 0.7f))
    .role(RoleBuilder.create()
        .name("Assistant")
        .goal("Help users with their questions")
        .build())
    .build();

String response = agent.chat("What is BDI architecture?");
```

For more control, extend the `Agent` class directly:

```java
@AgentSpec(name = "ResearchAgent", description = "Conducts research")
public class ResearchAgent extends Agent {

    @Override
    protected LLMClient getLLM() {
        return LLMClientFactory.create("anthropic", "claude-sonnet-4-20250514", 0.7f);
    }

    @Override
    protected List<Role> getRoles() {
        return List.of(Role.create(ResearchRole.class));
    }
}
```

## Creating Agents

There are two ways to create an agent: programmatically with `AgentBuilder`, or declaratively by extending the `Agent` class and using annotations. Use the builder when you want quick, inline setup. Use annotations when you want a reusable agent class with its configuration baked in.

### With AgentBuilder (programmatic)

`AgentBuilder` lets you configure an agent in a single fluent chain. This is the best approach for simple agents or when you want to assemble an agent dynamically at runtime.

```java
Agent agent = AgentBuilder.create()
    .id("agent-001")
    .llm(new OpenAIClient("gpt-4o"))
    .role(myRole)
    .roles(List.of(role1, role2))
    .tool(new BraveSearchTool())
    .tools(List.of(tool1, tool2))
    .memoryStore(new InMemoryStore())
    .maxContextTokens(8192)
    .build();
```

### With Annotations (declarative)

If you prefer a class-per-agent design, extend `Agent` and use the `@AgentSpec` and `@LLMSpec` annotations. This keeps configuration next to the code and makes agents easy to discover in your project.

```java
@AgentSpec(name = "Analyst", description = "Data analysis agent")
@LLMSpec(provider = "openai", model = "gpt-4o", temperature = 0.3f)
public class AnalystAgent extends Agent {

    @Override
    protected List<Role> getRoles() {
        return List.of(Role.create(AnalystRole.class));
    }
}
```

## Chat Methods

Once you have an agent, you interact with it through chat methods. TnsAI provides several variants depending on whether you need conversation history, streaming output, or visibility into tool calls happening inside the agent loop.

```java
// Simple chat — single turn, uses conversation history
String response = agent.chat("Explain quantum computing");

// Chat without history
String translation = agent.chat("Translate this to French", false);

// Streaming — returns tokens as they arrive
Stream<String> tokens = agent.streamChat("Write a poem about Java");
tokens.forEach(System.out::print);

// Event-driven chat — full visibility into the agent loop
String report = agent.chatWithEvents("Research AI safety", event -> {
    switch (event) {
        case ToolCallStartEvent e -> System.out.println("Calling: " + e.toolName());
        case ToolCallEndEvent e -> System.out.println("Result: " + e.result());
        case ErrorEvent e -> System.err.println("Error: " + e.message());
        default -> {}
    }
});
```
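The event-handler mechanics can be exercised in isolation. The sealed hierarchy below mirrors the event names from the example above, but the local types are stand-ins, not TnsAI's; with a sealed hierarchy the switch is exhaustive, so no `default` arm is needed.

```java
import java.util.function.Consumer;

// Self-contained sketch of the event-handler pattern: a sealed event
// hierarchy dispatched through a pattern-matching switch (Java 21+).
public class EventDemo {
    sealed interface AgentEvent permits ToolCallStartEvent, ToolCallEndEvent, ErrorEvent {}
    record ToolCallStartEvent(String toolName) implements AgentEvent {}
    record ToolCallEndEvent(String result) implements AgentEvent {}
    record ErrorEvent(String message) implements AgentEvent {}

    public static void main(String[] args) {
        Consumer<AgentEvent> handler = event -> {
            switch (event) {  // exhaustive over the sealed hierarchy
                case ToolCallStartEvent e -> System.out.println("Calling: " + e.toolName());
                case ToolCallEndEvent e -> System.out.println("Result: " + e.result());
                case ErrorEvent e -> System.err.println("Error: " + e.message());
            }
        };
        handler.accept(new ToolCallStartEvent("search"));
        handler.accept(new ToolCallEndEvent("3 documents"));
    }
}
```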

## Memory Management

Agents automatically track conversation history so the LLM has context across turns. You can also inspect, modify, or prune this history directly when you need to manage token usage or reset a conversation.

```java
// Get conversation history
List<Map<String, Object>> history = agent.getConversationHistory();

// Clear all history
agent.clearHistory();

// Add a message manually
agent.addToHistory("user", "Remember this context");

// Prune memory to fit within a token limit (removes oldest messages first)
agent.getMemoryStore().prune(4096);
```
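As a rough illustration of what oldest-first pruning does, here is a self-contained sketch. The chars-divided-by-four token estimate and the `Deque<String>` history are simplifying assumptions, not TnsAI's actual accounting:

```java
import java.util.*;

// Illustrative oldest-first pruning: drop messages from the front of the
// history until the estimated token total fits the budget.
public class PruneSketch {
    // Crude stand-in token estimate: roughly 4 characters per token.
    static int estimateTokens(String msg) { return Math.max(1, msg.length() / 4); }

    static void prune(Deque<String> history, int maxTokens) {
        int total = history.stream().mapToInt(PruneSketch::estimateTokens).sum();
        while (total > maxTokens && !history.isEmpty()) {
            total -= estimateTokens(history.removeFirst()); // oldest message goes first
        }
    }

    public static void main(String[] args) {
        Deque<String> history = new ArrayDeque<>(List.of(
                "old: " + "x".repeat(40),
                "newer message",
                "newest message"));
        prune(history, 10);
        System.out.println(history); // the oldest message has been dropped
    }
}
```

The recent messages survive because pruning always removes from the oldest end, preserving the context closest to the current turn.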

## Lifecycle

Agents have a start/stop lifecycle. Call `start()` to initialize the agent and `shutdown()` to release its resources. You can check whether an agent is running or inspect its health state at any time.

```java
agent.start();
boolean running = agent.isRunning();
AgentHealthState health = agent.getHealthState();
agent.shutdown();
```

## Configuration Summary

This table lists every property you can set on an agent through `AgentBuilder`. Only `llm` and at least one role are required; everything else has sensible defaults.

| Property | Builder Method | Default | Description |
|---|---|---|---|
| ID | `.id(String)` | Auto-generated | Unique agent identifier |
| LLM | `.llm(LLMClient)` | Required | Language model client |
| Roles | `.role(Role)` | Required | Agent roles |
| Tools | `.tool(Tool)` | Empty | External tools |
| Memory | `.memoryStore(MemoryStore)` | `InMemoryStore` | Conversation memory |
| Context limit | `.maxContextTokens(int)` | Provider default | Max context window |
| Knowledge base | `.knowledgeBase(KnowledgeBase)` | None | RAG knowledge source |
| Prompt strategy | `.promptStrategy(PromptStrategy)` | Default | Prompt enhancement |
| Reasoning | `.reasoningStrategy(String)` | None | Reasoning strategy name |

## SPI Extension Points

The Core module defines SPI interfaces that other modules implement. Extensions are discovered automatically via `ServiceLoader`:

| SPI Interface | Purpose | Implementing Module |
|---|---|---|
| `MessageBroker` | Agent communication routing | Coordination |
| `ToolRegistry` | Tool discovery and registration | Tools |
| `ResilienceStrategy` | Resilience pattern implementations | Quality |
| `CognitiveModel` | Cognitive processing models | Intelligence |
| `CheckpointerFactory` | State checkpointing | Custom |
| `CheckpointerProvider` | Checkpoint storage backends | Custom |

Register an SPI implementation by adding a file to `META-INF/services/`:

```
# META-INF/services/com.tnsai.spi.ToolRegistry
com.example.MyCustomToolRegistry
```
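Discovery itself is plain `java.util.ServiceLoader`. The sketch below is self-contained: the local `ToolRegistry` interface merely stands in for `com.tnsai.spi.ToolRegistry`, and without a matching `META-INF/services` entry on the classpath the loader simply finds nothing:

```java
import java.util.ServiceLoader;

// Stand-in SPI interface for illustration; the real one lives in com.tnsai.spi.
interface ToolRegistry { String name(); }

public class SpiDemo {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services/<interface FQN> files on the
        // classpath and instantiates each listed provider class.
        ServiceLoader<ToolRegistry> loader = ServiceLoader.load(ToolRegistry.class);
        int found = 0;
        for (ToolRegistry registry : loader) {
            System.out.println("Discovered: " + registry.name());
            found++;
        }
        // With no provider-configuration file present, zero providers load.
        System.out.println("Implementations found: " + found);
    }
}
```

Note that providers must have a public no-argument constructor so `ServiceLoader` can instantiate them.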
