TnsAI

SCOP Bridge

The Integration module connects TnsAI with the SCOP (Self-Constructing Object Program) framework using reflection-based discovery. No compile-time dependency on SCOP is required -- detection happens at runtime. It also provides an HTTP fallback transport for 11 LLM providers when Core's native SPI implementations are unavailable.

Quick Start

The SCOPBridge is a singleton facade that handles everything: building system prompts from annotations, executing actions, and resolving LLM clients. Here is a minimal example showing the three main operations.

SCOPBridge bridge = SCOPBridge.getInstance();

// Build a system prompt from TnsAI annotations
String systemPrompt = bridge.buildSystemPromptFromAnnotations(roleObject);

// Execute a TnsAI action
ActionExecutionResult result = bridge.executeAction(roleObject, "searchPapers", params);
if (result.isSuccess()) {
    System.out.println(result.getResult());
}

// Resolve an LLM client (prefers Core SPI, falls back to HTTP)
Optional<LLMClient> client = bridge.resolveLLMClient(roleObject);

SCOPBridge

The SCOPBridge is the main entry point for all SCOP integration. It acts as a facade that delegates to specialized helpers for prompt building, LLM communication, and action execution.

  • SystemPromptBuilder -- Annotation-to-markdown prompt conversion
  • LLMDispatcher -- Provider-specific HTTP transport (fallback)
  • BridgeLLMClient -- LLMClient adapter over HTTP fallback
  • LLMConfiguration -- Resolved LLM config (provider, model, endpoint, keys)
  • ActionExecutionResult -- Action execution result wrapper

Factory Methods

The bridge is created as a singleton. You can optionally configure the HTTP read timeout for LLM calls.

// Default read timeout (300 seconds)
SCOPBridge bridge = SCOPBridge.getInstance();

// Custom read timeout
SCOPBridge bridge = SCOPBridge.getInstance(120);  // 120 seconds

On construction, the bridge attempts to discover Core's LLMClientProvider SPI via ServiceLoader. If found, all LLM calls route through real provider implementations (with streaming, retry, etc.). Otherwise, the HTTP fallback path is used.
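
The discovery step can be sketched with the standard ServiceLoader mechanism. The interface below is a stand-in, not Core's real LLMClientProvider SPI; the point is only how presence-on-classpath flips the transport choice.

```java
import java.util.ServiceLoader;

// Stand-in for Core's LLMClientProvider SPI -- illustrative only.
interface LLMClientProvider { }

public class SpiDiscoveryDemo {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services for registered implementations.
        ServiceLoader<LLMClientProvider> loader = ServiceLoader.load(LLMClientProvider.class);
        boolean spiFound = loader.iterator().hasNext();

        // No provider is registered in this sketch, so the fallback path is taken.
        String transport = spiFound ? "native SPI" : "HTTP fallback";
        System.out.println(transport);
    }
}
```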

System Prompt Building

The bridge can automatically generate a structured system prompt by reading TnsAI annotations from your role class. This means you define the agent's identity, capabilities, and behavior through annotations, and the bridge converts them into a prompt the LLM can understand.

String prompt = bridge.buildSystemPromptFromAnnotations(myRole);

Extracted annotations:

  • @RoleSpec -- Role identity, description, responsibilities
  • @Communication -- Communication style guidance
  • @State -- Current state fields for context
  • @ActionSpec -- Available actions with descriptions and parameters
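
As a sketch of what the prompt builder reads, here is a minimal annotated role using stand-in annotations; the real TnsAI annotations carry more attributes than the two shown, and the attribute names here are illustrative.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in annotations for the sketch -- not the real TnsAI definitions.
@Retention(RetentionPolicy.RUNTIME) @interface RoleSpec { String name(); String description(); }
@Retention(RetentionPolicy.RUNTIME) @interface ActionSpec { String description(); }

@RoleSpec(name = "ResearchAssistant", description = "Finds and summarizes papers")
class ResearchRole {
    @ActionSpec(description = "Search the paper index by keyword")
    public String searchPapers(String query) { return "results for " + query; }
}

public class PromptSketch {
    public static void main(String[] args) {
        // The builder reads annotations reflectively, roughly like this:
        RoleSpec spec = ResearchRole.class.getAnnotation(RoleSpec.class);
        System.out.println("# Role: " + spec.name() + "\n" + spec.description());
    }
}
```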

Action Execution

The bridge routes action calls through the TnsAI ActionExecutor pipeline, which dispatches to the appropriate executor based on the action type. The result includes the return value, pre/post narration text, and error details if something went wrong.

Map<String, Object> params = Map.of("query", "sales trends", "limit", 10);
ActionExecutionResult result = bridge.executeAction(role, "analyzeData", params);

if (result.isSuccess()) {
    Object data = result.getResult();
    String pre = result.getPreNarration();    // Before-action narration
    String post = result.getPostNarration();  // After-action narration
    String display = result.getNarration();   // Smart: postNarration on success, errorMessage on failure
} else {
    String error = result.getErrorMessage();
    Exception cause = result.getException();
}

ActionExecutionResult

The ActionExecutionResult wraps the outcome of an action execution, providing a consistent API for both success and failure cases.

// Factory methods
ActionExecutionResult.success(resultObject, preNarration, postNarration);
ActionExecutionResult.error(errorMessage, exception);

// Accessors
result.isSuccess();
result.getResult();           // The action return value
result.getPreNarration();     // Empty string if null
result.getPostNarration();    // Empty string if null
result.getNarration();        // Smart display text
result.getErrorMessage();
result.getException();

LLM Configuration

The LLMConfiguration class holds everything needed to connect to an LLM provider: the provider name, model, temperature, max tokens, endpoint URL, and which environment variable holds the API key. The bridge resolves this configuration automatically from your annotations.

Resolution Hierarchy

When multiple annotations specify LLM configuration, the bridge picks the most specific one. Role-level settings override agent-level, which overrides playground-level.

  1. Role-level @RoleSpec(llm = @LLMSpec(...)) -- Highest priority
  2. Agent-level @AgentSpec(llm = @LLMSpec(...)) -- Fallback
  3. Playground-level @AgentSpec(llm = @LLMSpec(...)) -- Lowest priority

Optional<LLMClient> client = bridge.resolveLLMClient(roleObject);
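
The most-specific-wins rule amounts to a first-non-null chain. The sketch below models each level as a plain string for brevity; the real bridge reads these values from the annotations, not strings.

```java
import java.util.Optional;
import java.util.stream.Stream;

public class LlmConfigResolution {
    // First non-null entry wins: role > agent > playground.
    static Optional<String> resolve(String roleLlm, String agentLlm, String playgroundLlm) {
        return Stream.of(roleLlm, agentLlm, playgroundLlm)
                .filter(v -> v != null)
                .findFirst();
    }

    public static void main(String[] args) {
        // Role level is unset, so the agent-level config is chosen.
        System.out.println(resolve(null, "gpt-4o@agent", "llama3@playground").orElse("none"));
    }
}
```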

Supported Providers (11)

The bridge supports 11 LLM providers out of the box. Most use the OpenAI-compatible API format, while Anthropic and Gemini have custom formatting.

Provider -- API Format -- Default Endpoint -- API Key Env Var

  • OLLAMA -- OpenAI-compatible -- http://localhost:11434/v1/chat/completions -- OLLAMA_API_KEY
  • OPENAI -- Native -- https://api.openai.com/v1/chat/completions -- OPENAI_API_KEY
  • ANTHROPIC -- Native (tool format conversion) -- https://api.anthropic.com/v1/messages -- ANTHROPIC_API_KEY
  • GEMINI -- Native -- https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent -- GEMINI_API_KEY
  • AZURE_OPENAI -- OpenAI-compatible -- User-configured -- AZURE_OPENAI_API_KEY
  • BEDROCK -- AWS-specific -- User-configured -- AWS_ACCESS_KEY_ID
  • GROQ -- OpenAI-compatible -- https://api.groq.com/openai/v1/chat/completions -- GROQ_API_KEY
  • TOGETHER -- OpenAI-compatible -- https://api.together.xyz/v1/chat/completions -- TOGETHER_API_KEY
  • MISTRAL -- OpenAI-compatible -- https://api.mistral.ai/v1/chat/completions -- MISTRAL_API_KEY
  • DEEPSEEK -- OpenAI-compatible -- https://api.deepseek.com/chat/completions -- DEEPSEEK_API_KEY
  • CUSTOM -- OpenAI-compatible -- User-configured -- CUSTOM_LLM_API_KEY

OpenAI-compatible providers (OLLAMA, OPENAI, GROQ, TOGETHER, MISTRAL, DEEPSEEK, AZURE_OPENAI, CUSTOM) share the same request/response format. Anthropic and Gemini have provider-specific formatting.
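
For reference, the shared wire format looks roughly like the body below; the field names follow the public chat-completions format, and the model and message values are illustrative.

```java
public class OpenAiCompatibleBody {
    public static void main(String[] args) {
        // Minimal chat-completions request body shared by the
        // OpenAI-compatible providers listed above.
        String body = "{"
                + "\"model\":\"gpt-4o\","
                + "\"temperature\":0.7,"
                + "\"messages\":["
                + "{\"role\":\"system\",\"content\":\"You are a helpful agent.\"},"
                + "{\"role\":\"user\",\"content\":\"hello\"}"
                + "]}";
        System.out.println(body);
    }
}
```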

Endpoint Resolution

The bridge resolves the API endpoint using a three-step fallback chain, so you can override endpoints per-role, per-environment, or rely on provider defaults.

  1. Custom endpoint from @LLMSpec.endpoint() (if non-empty)
  2. Base URL from environment variable (e.g., OPENAI_BASE_URL) + provider chat path
  3. Default endpoint from the provider table above

LLMConfiguration config = new LLMConfiguration(
    "OPENAI", "gpt-4o", 0.7f, 4096, "", "OPENAI_API_KEY");

String apiUrl = config.resolveApiUrl();  // Checks custom -> env var -> default
String apiKey = config.resolveApiKey();  // Checks custom env -> default env
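
The three-step chain can be sketched as a standalone method; the method name mirrors resolveApiUrl(), but the parameters and the proxy URL below are illustrative, not the real LLMConfiguration API.

```java
public class EndpointFallback {
    static String resolveApiUrl(String customEndpoint, String envBaseUrl, String defaultEndpoint) {
        // 1. Explicit endpoint from @LLMSpec.endpoint() wins if non-empty.
        if (customEndpoint != null && !customEndpoint.isEmpty()) return customEndpoint;
        // 2. Otherwise a base URL from the environment, plus the chat path.
        if (envBaseUrl != null && !envBaseUrl.isEmpty()) return envBaseUrl + "/chat/completions";
        // 3. Finally, the provider's built-in default.
        return defaultEndpoint;
    }

    public static void main(String[] args) {
        // No custom endpoint set, so the environment base URL wins.
        String url = resolveApiUrl("", "https://proxy.internal/v1",
                "https://api.openai.com/v1/chat/completions");
        System.out.println(url);
    }
}
```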

LLMDispatcher (HTTP Fallback)

The LLMDispatcher is the fallback LLM transport used when TnsAI Core's native provider SPI is not on the classpath. It communicates with all 11 LLM providers over HTTP: it builds provider-specific request bodies, sends them via OkHttpClient, and parses the responses.

LLMResponse response = dispatcher.sendToLLM(config, conversationArray, toolsArray);

// Response types
response.getContent();       // Text content
response.getToolCalls();     // Tool call requests (if any)
response.getErrorMessage();  // Error message (if failed)

The dispatcher handles:

  • OpenAI-compatible request/response format for 8 providers
  • Anthropic-specific message format and tool schema conversion
  • Gemini-specific content format
  • Azure OpenAI endpoint path construction
  • API key header injection (Authorization: Bearer for most, x-api-key for Anthropic)
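
The header injection rule in the last bullet can be sketched as a simple branch; the real dispatcher builds these headers on OkHttp's Request.Builder and covers more provider quirks than shown.

```java
public class AuthHeaderSketch {
    // Anthropic expects x-api-key; most OpenAI-compatible APIs take a Bearer token.
    static String[] authHeader(String provider, String apiKey) {
        if ("ANTHROPIC".equals(provider)) {
            return new String[] {"x-api-key", apiKey};
        }
        return new String[] {"Authorization", "Bearer " + apiKey};
    }

    public static void main(String[] args) {
        String[] h = authHeader("ANTHROPIC", "sk-test");
        System.out.println(h[0] + ": " + h[1]);
    }
}
```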

BridgeLLMClient (Adapter Pattern)

The BridgeLLMClient wraps the HTTP-based LLMDispatcher behind the standard LLMClient interface. This adapter pattern means the rest of TnsAI does not need to know whether it is talking to a native SPI provider or the HTTP fallback -- the API is identical.

// Created internally by SCOPBridge when resolving LLM clients
BridgeLLMClient client = new BridgeLLMClient(config, dispatcher);

// Implements LLMClient
client.getModel();        // From LLMConfiguration
client.getTemperature();  // From LLMConfiguration
client.getMaxTokens();    // Optional<Integer> from LLMConfiguration

// Standard chat interface
ChatResponse response = client.chat(
    message,
    Optional.of(systemPrompt),
    Optional.of(history),
    Optional.of(tools)
);

The adapter translates between Core's ChatResponse format and the LLMResponse format used by the dispatcher.
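
The translation step can be sketched with minimal stand-in types; the real ChatResponse and LLMResponse carry more fields (tool calls, error details) than the single text field modeled here.

```java
import java.util.Optional;

// Stand-ins for the two response types -- illustrative only.
record LLMResponse(String content) { }
record ChatResponse(String text) { }

public class AdapterSketch {
    // The adapter's job: map the dispatcher's response type onto the
    // LLMClient-facing type so callers never see the HTTP layer.
    static ChatResponse toChatResponse(LLMResponse r) {
        return new ChatResponse(Optional.ofNullable(r.content()).orElse(""));
    }

    public static void main(String[] args) {
        System.out.println(toChatResponse(new LLMResponse("hi")).text());
    }
}
```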

Design Patterns

The integration module uses several well-known design patterns to keep the code modular and testable. This table summarizes them for contributors and architects.

  • Facade -- SCOPBridge delegates to SystemPromptBuilder, LLMDispatcher, BridgeLLMClient
  • Adapter -- BridgeLLMClient adapts LLMDispatcher to the LLMClient interface
  • SPI Discovery -- Prefers Core's LLMClientProvider when available on classpath
  • Reflection -- Detects SCOP classes at runtime without compile-time dependency
  • Configuration Hierarchy -- Role -> Agent -> Playground annotation resolution
