SCOP Bridge
The Integration module connects TnsAI with the SCOP (Self-Constructing Object Program) framework using reflection-based discovery. No compile-time dependency on SCOP is required -- detection happens at runtime. It also provides an HTTP fallback transport for 11 LLM providers when Core's native SPI implementations are unavailable.
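As a rough illustration of how runtime detection without a compile-time dependency can work, here is a minimal sketch; the class name probed for is hypothetical and not necessarily the one the bridge actually checks.
// Sketch of reflection-based detection. "com.scop.Kernel" is a hypothetical
// class name used only for illustration.
boolean scopAvailable;
try {
    Class.forName("com.scop.Kernel");
    scopAvailable = true;
} catch (ClassNotFoundException e) {
    scopAvailable = false; // SCOP absent: integration features stay disabled
}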
Quick Start
The SCOPBridge is a singleton facade that handles everything: building system prompts from annotations, executing actions, and resolving LLM clients. Here is a minimal example showing the three main operations.
SCOPBridge bridge = SCOPBridge.getInstance();
// Build a system prompt from TnsAI annotations
String systemPrompt = bridge.buildSystemPromptFromAnnotations(roleObject);
// Execute a TnsAI action
ActionExecutionResult result = bridge.executeAction(roleObject, "searchPapers", params);
if (result.isSuccess()) {
System.out.println(result.getResult());
}
// Resolve an LLM client (prefers Core SPI, falls back to HTTP)
Optional<LLMClient> client = bridge.resolveLLMClient(roleObject);
SCOPBridge
The SCOPBridge is the main entry point for all SCOP integration. It acts as a facade that delegates to specialized helpers for prompt building, LLM communication, and action execution.
| Component | Responsibility |
|---|---|
| SystemPromptBuilder | Annotation-to-markdown prompt conversion |
| LLMDispatcher | Provider-specific HTTP transport (fallback) |
| BridgeLLMClient | LLMClient adapter over HTTP fallback |
| LLMConfiguration | Resolved LLM config (provider, model, endpoint, keys) |
| ActionExecutionResult | Action execution result wrapper |
Factory Methods
The bridge is created as a singleton. You can optionally configure the HTTP read timeout for LLM calls.
// Default read timeout (300 seconds)
SCOPBridge bridge = SCOPBridge.getInstance();
// Custom read timeout
SCOPBridge bridge = SCOPBridge.getInstance(120); // 120 seconds
On construction, the bridge attempts to discover Core's LLMClientProvider SPI via ServiceLoader. If found, all LLM calls route through the native provider implementations (with streaming, retry, etc.). Otherwise, the HTTP fallback path is used.
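A minimal sketch of the ServiceLoader lookup described above; the package of LLMClientProvider is not shown in this document, so treat the call site as illustrative.
// SPI discovery via java.util.ServiceLoader (illustrative call site).
java.util.Optional<LLMClientProvider> spi =
        java.util.ServiceLoader.load(LLMClientProvider.class).findFirst();
// Present -> LLM calls go through the native provider implementations.
// Absent  -> the bridge falls back to the HTTP-based LLMDispatcher.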
System Prompt Building
The bridge can automatically generate a structured system prompt by reading TnsAI annotations from your role class. This means you define the agent's identity, capabilities, and behavior through annotations, and the bridge converts them into a prompt the LLM can understand.
String prompt = bridge.buildSystemPromptFromAnnotations(myRole);
Extracted annotations:
- @RoleSpec -- Role identity, description, responsibilities
- @Communication -- Communication style guidance
- @State -- Current state fields for context
- @ActionSpec -- Available actions with descriptions and parameters
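A hedged sketch of a role class the prompt builder could read; the annotation attribute names (such as description) and the action method shape are assumptions for illustration only.
// Illustrative only: attribute names such as description() are assumed.
@RoleSpec(description = "Research assistant that finds and summarizes papers")
public class ResearchRole {

    @ActionSpec(description = "Search the paper index for a query")
    public String searchPapers(String query, int limit) {
        return "results for: " + query; // placeholder domain logic
    }
}
// bridge.buildSystemPromptFromAnnotations(new ResearchRole()) then produces a
// prompt covering the role identity and its available actions.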
Action Execution
The bridge routes action calls through the TnsAI ActionExecutor pipeline, which dispatches to the appropriate executor based on the action type. The result includes the return value, pre/post narration text, and error details if something went wrong.
Map<String, Object> params = Map.of("query", "sales trends", "limit", 10);
ActionExecutionResult result = bridge.executeAction(role, "analyzeData", params);
if (result.isSuccess()) {
Object data = result.getResult();
String pre = result.getPreNarration(); // Before-action narration
String post = result.getPostNarration(); // After-action narration
String display = result.getNarration(); // Smart: postNarration on success, errorMessage on failure
} else {
String error = result.getErrorMessage();
Exception cause = result.getException();
}
ActionExecutionResult
The ActionExecutionResult wraps the outcome of an action execution, providing a consistent API for both success and failure cases.
// Factory methods
ActionExecutionResult.success(resultObject, preNarration, postNarration);
ActionExecutionResult.error(errorMessage, exception);
// Accessors
result.isSuccess();
result.getResult(); // The action return value
result.getPreNarration(); // Empty string if null
result.getPostNarration(); // Empty string if null
result.getNarration(); // Smart display text
result.getErrorMessage();
result.getException();
LLM Configuration
The LLMConfiguration class holds everything needed to connect to an LLM provider: the provider name, model, temperature, max tokens, endpoint URL, and which environment variable holds the API key. The bridge resolves this configuration automatically from your annotations.
Resolution Hierarchy
When multiple annotations specify LLM configuration, the bridge picks the most specific one. Role-level settings override agent-level, which overrides playground-level.
- Role-level: @RoleSpec(llm = @LLMSpec(...)) -- Highest priority
- Agent-level: @AgentSpec(llm = @LLMSpec(...)) -- Fallback
- Playground-level: @AgentSpec(llm = @LLMSpec(...)) -- Lowest priority
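A small sketch of the override behavior; the @LLMSpec attribute names (provider, model) are illustrative and not confirmed by this document.
// Agent-level default (illustrative attribute names).
@AgentSpec(llm = @LLMSpec(provider = "OLLAMA", model = "llama3"))
public class AnalystAgent { }

// Role-level setting wins for this role (highest priority).
@RoleSpec(llm = @LLMSpec(provider = "OPENAI", model = "gpt-4o"))
public class AnalystRole { }
Resolving a client for an instance of AnalystRole would then pick up the role-level configuration: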
Optional<LLMClient> client = bridge.resolveLLMClient(roleObject);
Supported Providers (11)
The bridge supports 11 LLM providers out of the box. Most use the OpenAI-compatible API format, while Anthropic and Gemini have custom formatting.
| Provider | API Format | Default Endpoint | API Key Env Var |
|---|---|---|---|
| OLLAMA | OpenAI-compatible | http://localhost:11434/v1/chat/completions | OLLAMA_API_KEY |
| OPENAI | Native | https://api.openai.com/v1/chat/completions | OPENAI_API_KEY |
| ANTHROPIC | Native (tool format conversion) | https://api.anthropic.com/v1/messages | ANTHROPIC_API_KEY |
| GEMINI | Native | https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent | GEMINI_API_KEY |
| AZURE_OPENAI | OpenAI-compatible | User-configured | AZURE_OPENAI_API_KEY |
| BEDROCK | AWS-specific | User-configured | AWS_ACCESS_KEY_ID |
| GROQ | OpenAI-compatible | https://api.groq.com/openai/v1/chat/completions | GROQ_API_KEY |
| TOGETHER | OpenAI-compatible | https://api.together.xyz/v1/chat/completions | TOGETHER_API_KEY |
| MISTRAL | OpenAI-compatible | https://api.mistral.ai/v1/chat/completions | MISTRAL_API_KEY |
| DEEPSEEK | OpenAI-compatible | https://api.deepseek.com/chat/completions | DEEPSEEK_API_KEY |
| CUSTOM | OpenAI-compatible | User-configured | CUSTOM_LLM_API_KEY |
OpenAI-compatible providers (OLLAMA, OPENAI, GROQ, TOGETHER, MISTRAL, DEEPSEEK, AZURE_OPENAI, CUSTOM) share the same request/response format. Anthropic and Gemini have provider-specific formatting.
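For orientation, a rough sketch of the shared OpenAI-compatible chat request body; the values are illustrative and the exact JSON the dispatcher emits is not documented here.
// Shape of an OpenAI-compatible chat request (illustrative values).
String requestBody = """
    {
      "model": "gpt-4o",
      "temperature": 0.7,
      "max_tokens": 4096,
      "messages": [
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Summarize recent sales trends."}
      ]
    }
    """;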
Endpoint Resolution
The bridge resolves the API endpoint using a three-step fallback chain, so you can override endpoints per-role, per-environment, or rely on provider defaults.
- Custom endpoint from @LLMSpec.endpoint() (if non-empty)
- Base URL from environment variable (e.g., OPENAI_BASE_URL) + provider chat path
- Default endpoint from the provider table above
LLMConfiguration config = new LLMConfiguration(
"OPENAI", "gpt-4o", 0.7f, 4096, "", "OPENAI_API_KEY");
String apiUrl = config.resolveApiUrl(); // Checks custom -> env var -> default
String apiKey = config.resolveApiKey(); // Checks custom env -> default env
LLMDispatcher (HTTP Fallback)
The LLMDispatcher is the fallback LLM transport used when TnsAI Core's native provider SPI is not on the classpath. It communicates with all 11 LLM providers over HTTP via OkHttp, building provider-specific request bodies and parsing the responses.
LLMResponse response = dispatcher.sendToLLM(config, conversationArray, toolsArray);
// Response types
response.getContent(); // Text content
response.getToolCalls(); // Tool call requests (if any)
response.getErrorMessage(); // Error message (if failed)
The dispatcher handles:
- OpenAI-compatible request/response format for 8 providers
- Anthropic-specific message format and tool schema conversion
- Gemini-specific content format
- Azure OpenAI endpoint path construction
- API key header injection (Authorization: Bearer for most, x-api-key for Anthropic)
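A hedged sketch of the header selection, using OkHttp as the document states; the getProvider() accessor on LLMConfiguration is assumed for illustration.
// Header injection per provider family (assumed logic, illustrative only).
okhttp3.Request.Builder rb = new okhttp3.Request.Builder()
        .url(config.resolveApiUrl());
if ("ANTHROPIC".equals(config.getProvider())) {   // getProvider() is assumed
    rb.header("x-api-key", config.resolveApiKey());
} else {
    rb.header("Authorization", "Bearer " + config.resolveApiKey());
}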
BridgeLLMClient (Adapter Pattern)
The BridgeLLMClient wraps the HTTP-based LLMDispatcher behind the standard LLMClient interface. This adapter pattern means the rest of TnsAI does not need to know whether it is talking to a native SPI provider or the HTTP fallback -- the API is identical.
// Created internally by SCOPBridge when resolving LLM clients
BridgeLLMClient client = new BridgeLLMClient(config, dispatcher);
// Implements LLMClient
client.getModel(); // From LLMConfiguration
client.getTemperature(); // From LLMConfiguration
client.getMaxTokens(); // Optional<Integer> from LLMConfiguration
// Standard chat interface
ChatResponse response = client.chat(
message,
Optional.of(systemPrompt),
Optional.of(history),
Optional.of(tools)
);
The adapter translates between Core's ChatResponse format and the LLMResponse format used by the dispatcher.
Design Patterns
The integration module uses several well-known design patterns to keep the code modular and testable. This table summarizes them for contributors and architects.
| Pattern | Usage |
|---|---|
| Facade | SCOPBridge delegates to SystemPromptBuilder, LLMDispatcher, BridgeLLMClient |
| Adapter | BridgeLLMClient adapts LLMDispatcher to the LLMClient interface |
| SPI Discovery | Prefers Core's LLMClientProvider when available on classpath |
| Reflection | Detects SCOP classes at runtime without compile-time dependency |
| Configuration Hierarchy | Role -> Agent -> Playground annotation resolution |
WebSocket Protocol
The Server module is a Javalin-based backend that bridges frontends (CLI, IDE, web) to the TnsAI agent framework via WebSocket. It provides multi-agent sessions, real-time streaming, risk-based tool approval, hybrid RAG search, and an audit trail.
Evaluation
The Evaluation module provides a three-layer system for measuring agent quality: evaluators that score responses, a benchmark engine that runs test datasets, and reporting tools for quality gates, trend analysis, and regression detection.