Custom Tools
This guide covers creating, annotating, and registering custom tools for the TnsAI framework.
For the full list of 152 built-in tools, see the Tool Catalog.
Creating Custom Tools
Every custom tool extends AbstractTool and overrides a single method: doExecute(String query). The framework handles parameter validation, caching, rate limiting, and metrics around your implementation automatically.
Basic Tool
The simplest custom tool needs only a @ToolSpec annotation (which tells the LLM what the tool does) and a doExecute method containing your logic:
```java
@ToolSpec(
    name = "my_tool",
    description = "Does something useful",
    category = ToolSpec.Category.UTILITY,
    keywords = {"custom", "example"},
    priority = 70,
    idempotent = true,
    latency = ToolSpec.Latency.FAST
)
public class MyTool extends AbstractTool {
    @Override
    protected String doExecute(String query) throws Exception {
        // Your implementation here
        return "Result for: " + query;
    }
}
```

Tool with API Integration
When your tool calls an external API, you can use the built-in httpGet and requireApiKey helper methods from AbstractTool. The @Resilience annotation adds automatic retry and timeout behavior so you don't have to write that logic yourself.
```java
@ToolSpec(
    name = "weather_lookup",
    description = "Get current weather for a city",
    category = ToolSpec.Category.DATA,
    priority = 60,
    latency = ToolSpec.Latency.MEDIUM
)
@Resilience(retry = @Resilience.Retry(maxAttempts = 2), timeout = 10000)
public class WeatherTool extends AbstractTool {
    private static final String API_KEY_ENV = "WEATHER_API_KEY";

    @Override
    protected String doExecute(String query) throws Exception {
        String apiKey = requireApiKey(API_KEY_ENV);
        String url = "https://api.weather.com/v1/current?q="
            + URLEncoder.encode(query, StandardCharsets.UTF_8)
            + "&key=" + apiKey;
        String json = httpGet(url);
        Map<String, Object> data = parseJson(json);
        return formatWeatherResponse(data);
    }
}
```

Tool with Structured Input
If your tool accepts multiple named parameters instead of a single string query, override buildSchema() to define the expected input shape. The LLM will then send a JSON object matching your schema, which you parse in doExecute.
```java
@ToolSpec(
    name = "file_search",
    description = "Search files by name pattern",
    category = ToolSpec.Category.CODE
)
public class FileSearchTool extends AbstractTool {
    @Override
    protected ToolSchema buildSchema() {
        return ToolSchema.builder()
            .property("pattern", "string", "File name pattern (glob)")
            .property("directory", "string", "Directory to search in")
            .required(List.of("pattern"))
            .build();
    }

    @Override
    protected String doExecute(String input) throws Exception {
        // parseJson returns Map<String, Object> (see AbstractTool Utilities)
        Map<String, Object> params = parseJson(input);
        String pattern = (String) params.get("pattern");
        String dir = (String) params.getOrDefault("directory", ".");
        // Search implementation goes here; return the formatted matches
        return "Matches for " + pattern + " in " + dir;
    }
}
```

Tool Requiring Confirmation
For destructive or sensitive operations like deleting files or sending payments, set requiresConfirmation = true in the @ToolSpec annotation. This tells the agent to request user approval before executing the tool.
```java
@ToolSpec(
    name = "file_delete",
    description = "Delete a file from the filesystem",
    category = ToolSpec.Category.CODE,
    requiresConfirmation = true  // Agent must get approval before execution
)
@Security(audit = Security.AuditLevel.STANDARD)
public class FileDeleteTool extends AbstractTool {
    @Override
    protected String doExecute(String query) throws Exception {
        Path path = Path.of(query);
        Files.delete(path);
        return "Deleted: " + path;
    }
}
```

Tool Annotations
TnsAI uses annotations to configure tool behavior declaratively. Instead of writing boilerplate code for retries, caching, or rate limiting, you add an annotation and the framework handles it.
@ToolSpec
This is the primary annotation every tool needs. It declares the tool's identity and metadata that the LLM uses to decide when to call the tool.
```java
@ToolSpec(
    name = "tool_name",                // Unique identifier used by LLM
    description = "...",               // Shown to LLM for tool selection
    category = ToolSpec.Category.SEARCH,
    keywords = {"web", "search"},      // For tool discovery
    priority = 80,                     // Higher = preferred (0-100)
    idempotent = true,                 // Safe to retry
    parallelizable = true,             // Can run in parallel
    latency = ToolSpec.Latency.MEDIUM, // FAST / MEDIUM / SLOW
    requiresConfirmation = false       // Needs user approval
)
```

@Resilience
Adds automatic retry and timeout behavior to your tool. If a call fails due to a transient error (like a network timeout), the framework retries it according to the configured backoff strategy:

```java
@Resilience(
    retry = @Resilience.Retry(maxAttempts = 3, backoff = EXPONENTIAL),
    timeout = 15000  // milliseconds
)
```

@RateLimit
Throttles tool execution so your tool is not called too frequently, which is important for staying within external API rate limits. The framework queues or rejects excess calls:

```java
@RateLimit(burstSize = 10, timeout = 1, timeoutUnit = TimeUnit.MINUTES)
```

@Cached
Caches tool results so that identical queries return instantly from cache instead of re-executing, saving time and API costs for tools that return the same result for the same input:

```java
@Cached(ttl = 1, unit = TimeUnit.HOURS, maxSize = 100)
```

@Security
Controls audit logging for tool executions. Use it on sensitive tools (e.g., database writes, payment processing) so that every call is recorded for compliance and debugging:

```java
@Security(audit = Security.AuditLevel.STANDARD)
```

Tool Execution Flow
When a tool is called, AbstractTool runs this pipeline:
1. Parameter validation -- Validate input against schema constraints
2. Contract preconditions -- Check tool-specific preconditions
3. Cache lookup -- Return cached result if available (`@Cached`)
4. Rate limit -- Acquire rate limit permit (`@RateLimit`)
5. Security check -- Pre-execution security audit
6. Execute with resilience -- Run `doExecute()` with retry/timeout (`@Resilience`)
7. Contract postconditions -- Validate output
8. Security audit -- Post-execution logging
9. Cache store -- Store result for future use
10. Metrics -- Record execution time, success/failure
11. Listener notification -- Notify registered listeners
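The pipeline above follows the classic template-method shape: fixed steps wrap a single overridable doExecute(). A minimal, self-contained sketch (the helper names are illustrative, not the framework's actual internals; the contract, retry, and listener steps are omitted for brevity):

```java
import java.util.function.Function;

public class PipelineSketch {
    // Simplified stand-in for AbstractTool.execute(): fixed steps
    // surround the tool-specific doExecute logic.
    public static String execute(String input, Function<String, String> doExecute) {
        validate(input);                        // parameter validation
        String cached = cacheLookup(input);     // cache lookup
        if (cached != null) return cached;
        acquireRateLimitPermit();               // rate limit
        String result = doExecute.apply(input); // execute (retry/timeout omitted)
        cacheStore(input, result);              // cache store
        recordMetrics();                        // metrics
        return result;
    }

    static void validate(String in) {
        if (in == null || in.isBlank()) throw new IllegalArgumentException("empty input");
    }
    static String cacheLookup(String in) { return null; } // always a miss in this sketch
    static void acquireRateLimitPermit() { }              // no-op placeholder
    static void cacheStore(String in, String out) { }     // no-op placeholder
    static void recordMetrics() { }                       // no-op placeholder

    public static void main(String[] args) {
        System.out.println(execute("hello", q -> "Result for: " + q));
    }
}
```

The framework's real pipeline adds the contract, security, and listener steps around the same core shape.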
Registering Custom Tools via SPI
The Java SPI (Service Provider Interface) pattern lets the framework automatically discover and load your custom tools at startup, without any manual registration code. To make your tools auto-discoverable by the framework:
1. Implement `ToolProvider`:

```java
public class MyToolProvider implements ToolProvider {
    @Override
    public List<Tool> getTools() {
        return List.of(
            new WeatherTool(),
            new FileSearchTool()
        );
    }
}
```

2. Register the provider in `META-INF/services/com.tnsai.tools.spi.ToolProvider`:

```
com.example.MyToolProvider
```

3. Your tools are now available to all agents automatically.
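Under the hood, SPI discovery of this kind is typically built on java.util.ServiceLoader. The following self-contained sketch uses a local stand-in interface to show the mechanism; because no META-INF/services entry exists in a standalone file, discovery finds nothing here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class SpiDiscoverySketch {
    // Local stand-in for com.tnsai.tools.spi.ToolProvider.
    public interface ToolProvider {
        List<String> toolNames();
    }

    // Roughly what a framework does at startup: iterate every provider
    // declared in META-INF/services and collect its tools.
    public static List<String> discover() {
        List<String> names = new ArrayList<>();
        for (ToolProvider provider : ServiceLoader.load(ToolProvider.class)) {
            names.addAll(provider.toolNames());
        }
        return names;
    }

    public static void main(String[] args) {
        // No provider-configuration file is on the classpath in this
        // standalone sketch, so discovery yields an empty list.
        System.out.println(discover());
    }
}
```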
AbstractTool Utilities
AbstractTool provides shared infrastructure so you don't have to set up HTTP clients or JSON parsing:
| Method | Description |
|---|---|
| `httpGet(url)` | HTTP GET request |
| `httpPostJson(url, body)` | HTTP POST with JSON body |
| `httpPostForm(url, params)` | HTTP POST with form data |
| `parseJson(json)` | Parse JSON to `Map<String, Object>` |
| `toJson(object)` | Serialize object to JSON string |
| `requireApiKey(envVar)` | Get API key or throw |
| `requireNonEmpty(value, name)` | Validate non-empty string |
| `cleanHTML(html)` | Strip HTML tags |
| `truncate(text, maxLength)` | Truncate with ellipsis |
All HTTP calls use a shared OkHttpClient with connection pooling (5 connections, 5-minute keep-alive).
Tool Discovery
When an agent has many tools available, it may not know which one fits a given task. ToolSearchTool is a meta-tool that lets agents dynamically find other tools at runtime by calling tool_search with a natural-language query, a category filter, or a direct info lookup.
Query Formats
You can search for tools using natural language, filter by category, look up a specific tool by name, or list everything at once.
| Format | Example | Behavior |
|---|---|---|
| Natural language | `"web search"` | Fuzzy search across names, descriptions, and keywords |
| `category:<name>` | `"category:search"` | List all tools in a category |
| `info:<tool>` | `"info:brave_search"` | Detailed info for one tool (description, keywords, examples, prompt) |
| `list` or `all` | `"list"` | List every registered tool grouped by category |
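The dispatch between these formats can be sketched with plain string checks. This is only an illustration of the table above, not ToolSearchTool's actual parser:

```java
public class QueryFormatSketch {
    // Classify a tool_search query into one of the documented formats
    // (hypothetical helper for illustration).
    public static String classify(String query) {
        String q = query.trim();
        if (q.equalsIgnoreCase("list") || q.equalsIgnoreCase("all")) return "LIST";
        if (q.startsWith("category:")) return "CATEGORY:" + q.substring("category:".length());
        if (q.startsWith("info:")) return "INFO:" + q.substring("info:".length());
        return "FUZZY:" + q; // natural-language fuzzy search
    }

    public static void main(String[] args) {
        System.out.println(classify("category:search")); // CATEGORY:search
        System.out.println(classify("web search"));      // FUZZY:web search
    }
}
```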
Registration
By default, ToolSearchTool lazily populates its registry from the global ToolRegistry on first search. You can also register tools explicitly:
```java
ToolSearchTool search = new ToolSearchTool();
search.register(myCustomTool);
search.registerAll(List.of(toolA, toolB));
```

Search Strategy
The search strategy controls how tool names, descriptions, and keywords are matched against a query. The default search strategy is CompositeSearchStrategy.standard(). Results are ranked by relevance score and capped at defaultMaxResults (default 5). Each result includes the tool name, category, match percentage, short description, and an example if available.
Customization
You can provide your own search strategy and adjust the maximum number of results returned per query.
```java
// Custom search strategy and max results
ToolSearchTool search = new ToolSearchTool(CompositeSearchStrategy.standard(), 10);
```

Health Monitoring
In production, you need to know if a tool is working reliably or starting to fail. ToolHealthRegistry and ToolHealthIndicator provide health checks for registered tools based on execution metrics like success rate and latency.
Status Levels
Each tool is classified into one of four health statuses based on its recent performance metrics.
| Status | Condition |
|---|---|
| `UP` | Success rate >= 95% and P95 latency within threshold |
| `DEGRADED` | Success rate >= 80% but below 95%, or P95 latency exceeds threshold |
| `DOWN` | Success rate < 80% |
| `UNKNOWN` | No executions recorded yet |
Default thresholds: 95% UP, 80% DOWN, 5000ms latency. All are configurable per tool.
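The classification rules above can be expressed as a small pure function. This sketch uses the documented default behavior; the real ToolHealthIndicator may differ in detail:

```java
public class HealthStatusSketch {
    enum Status { UP, DEGRADED, DOWN, UNKNOWN }

    // Classify a tool from its recent metrics, per the status table:
    // no data -> UNKNOWN; success < 80% -> DOWN; >= 95% and fast -> UP;
    // everything in between -> DEGRADED.
    public static Status classify(long executions, double successRate,
                                  long p95LatencyMs, long latencyThresholdMs) {
        if (executions == 0) return Status.UNKNOWN;
        if (successRate < 0.80) return Status.DOWN;
        if (successRate >= 0.95 && p95LatencyMs <= latencyThresholdMs) return Status.UP;
        return Status.DEGRADED; // 80-95% success, or latency over threshold
    }

    public static void main(String[] args) {
        System.out.println(classify(100, 0.99, 1200, 5000)); // UP
        System.out.println(classify(100, 0.85, 1200, 5000)); // DEGRADED
        System.out.println(classify(100, 0.99, 9000, 5000)); // DEGRADED (slow)
        System.out.println(classify(0, 0.0, 0, 5000));       // UNKNOWN
    }
}
```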
Usage
Register your tools with the health registry, then query their status individually or check the overall system health.
```java
ToolHealthRegistry healthRegistry = new ToolHealthRegistry();

// Register with default thresholds
healthRegistry.register(mySearchTool);
healthRegistry.registerAll(List.of(toolA, toolB));

// Register with custom thresholds
healthRegistry.register(myApiTool,
    0.90,   // successUpThreshold
    0.70,   // successDownThreshold
    10_000  // latencyThresholdMs
);

// Check individual tool health
ToolHealthIndicator.Health health = healthRegistry.check("brave_search");
health.status();   // UP, DEGRADED, DOWN, or UNKNOWN
health.toolName(); // "brave_search"
health.stats();    // ToolMetrics.ToolStats with execution details
health.reason();   // null when UP; explanation string otherwise

// Check all tools
Map<String, ToolHealthIndicator.Health> all = healthRegistry.checkAll();

// Filter by status
Map<String, ToolHealthIndicator.Health> down = healthRegistry.getByStatus(Status.DOWN);

// Overall system health (worst status across all tools)
ToolHealthIndicator.Status overall = healthRegistry.getOverallStatus();
```

Batch Execution
When you need to run several tools at once (for example, searching multiple sources simultaneously), BatchExecutor handles the orchestration. It runs multiple tools in a single batch, automatically parallelizing tools that declare isParallelizable() == true and running the rest sequentially. When a ToolDependencyGraph is provided, execution follows topological order.
Basic Usage
Create a BatchExecutor, pass it a map of tools to their input queries, and inspect the results. The executor automatically determines which tools can run in parallel.
```java
try (BatchExecutor executor = new BatchExecutor()) {
    Map<Tool, String> requests = new LinkedHashMap<>();
    requests.put(searchTool, "java frameworks");
    requests.put(calculatorTool, "2 + 2");

    BatchExecutor.BatchResult result = executor.execute(requests);

    System.out.println("Success rate: " + result.successRate());
    System.out.println("Duration: " + result.totalDuration());

    result.results().forEach((name, sr) -> {
        if (sr.success()) {
            System.out.println(name + ": " + sr.result());
        } else {
            System.out.println(name + " FAILED: " + sr.error());
        }
    });
}
```

Configuration
You can adjust the thread pool size and the overall timeout for the batch.
```java
// Custom parallelism (default 4) and timeout (default 60s)
BatchExecutor executor = new BatchExecutor(8, 120_000);
```

BatchExecutor implements AutoCloseable -- use try-with-resources to shut down the thread pool cleanly.
Progress Listener
To track batch progress in real time (for example, updating a UI or logging), provide a BatchListener that receives callbacks as each tool starts and completes.
```java
executor.execute(requests, null, new BatchExecutor.BatchListener() {
    @Override
    public void onBatchStart(int totalTools) { ... }

    @Override
    public void onToolStart(String toolName) { ... }

    @Override
    public void onToolComplete(String toolName, boolean success,
                               int completed, int total) { ... }

    @Override
    public void onBatchComplete(long successes, long failures,
                                Duration totalDuration) { ... }
});
```

ToolDependencyGraph
When some tools depend on the output of other tools (for example, "analyze_data" must run after "fetch_data"), you need to declare those dependencies. ToolDependencyGraph models directed acyclic dependencies between tools. BatchExecutor uses it to determine execution order.
Declaring dependencies:
```java
ToolDependencyGraph graph = new ToolDependencyGraph();

// Programmatic
graph.addDependency("analyze_data", "fetch_data");
graph.addDependency("generate_report", "analyze_data");

// Annotation-driven (reads @ToolSpec(dependsOn = ...))
graph.register(myTool);

// Auto-discover from global ToolRegistry
graph.autoDiscoverFromRegistry();
```

Adding a dependency that would create a cycle throws IllegalArgumentException.
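The leveling and cycle detection a dependency graph like this performs can be illustrated with a small Kahn-style algorithm over plain maps (a standalone sketch, not ToolDependencyGraph's actual code):

```java
import java.util.*;

public class DependencyLevelsSketch {
    // deps maps each tool to the set of tools it depends on.
    // Each returned level contains tools whose dependencies are all in
    // earlier levels, so every level can run in parallel.
    public static List<Set<String>> executionLevels(Map<String, Set<String>> deps) {
        Map<String, Set<String>> remaining = new HashMap<>();
        deps.forEach((k, v) -> remaining.put(k, new HashSet<>(v)));
        // ensure every referenced tool appears as a node
        deps.values().forEach(ds -> ds.forEach(d -> remaining.putIfAbsent(d, new HashSet<>())));

        List<Set<String>> levels = new ArrayList<>();
        Set<String> done = new HashSet<>();
        while (!remaining.isEmpty()) {
            Set<String> ready = new TreeSet<>();
            for (var e : remaining.entrySet()) {
                if (done.containsAll(e.getValue())) ready.add(e.getKey());
            }
            // nothing is ready but nodes remain => a dependency cycle
            if (ready.isEmpty()) throw new IllegalArgumentException("cycle detected");
            ready.forEach(remaining::remove);
            done.addAll(ready);
            levels.add(ready);
        }
        return levels;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
            "fetch_data", Set.of(),
            "analyze_data", Set.of("fetch_data"),
            "generate_report", Set.of("analyze_data"));
        System.out.println(executionLevels(deps));
    }
}
```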
Querying the graph:
```java
// Topological order (dependencies first)
List<String> order = graph.topologicalOrder();
// -> [fetch_data, analyze_data, generate_report]

// Direct dependencies
Set<String> deps = graph.getDependencies("analyze_data");

// Transitive closure
Set<String> allDeps = graph.getTransitiveDependencies("generate_report");

// Execution levels for parallel scheduling
List<Set<String>> levels = graph.getExecutionLevels();
// -> [{fetch_data}, {analyze_data}, {generate_report}]
// Each level can run concurrently; all dependencies are in earlier levels.
```

Visualization. Export the graph in DOT format for Graphviz rendering:
```java
String dot = graph.toDotFormat();
// Render: dot -Tpng graph.dot -o graph.png
```

Combining with BatchExecutor:
```java
BatchExecutor.BatchResult result = executor.execute(requests, graph);
```

Tool Manifest
A tool manifest is a machine-readable JSON document listing every registered tool and its full metadata. This is useful for documentation generation, tooling dashboards, and auditing which tools are available in your system. ToolManifestGenerator produces the manifest, capturing @ToolSpec, @Contract, @Resilience, and @Security annotations in a single structured document.
Generating a Manifest
You can generate a manifest from all SPI-discovered tools or from a specific collection, then serialize it to JSON.
```java
// From the global ToolRegistry (all SPI-discovered tools)
ToolManifest manifest = ToolManifestGenerator.generate();

// From an explicit tool collection
ToolManifest manifest = ToolManifestGenerator.generate(myTools);

// Serialize to JSON
String json = ToolManifestGenerator.toJson(manifest);

// Write to file
try (Writer writer = new FileWriter("manifest.json")) {
    ToolManifestGenerator.writeTo(manifest, writer);
}
```

Manifest Contents
Each tool entry in the manifest captures everything the framework knows about that tool, pulled from annotations and the Tool interface.
| Field | Source |
|---|---|
| `name`, `description` | `Tool` interface |
| `category`, `version`, `keywords`, `latency`, `dependsOn` | `@ToolSpec` |
| `priority`, `idempotent`, `parallelizable`, `requiresConfirmation` | `Tool` interface / `@ToolSpec` |
| `contract` (preconditions, postconditions, invariants) | `@Contract` |
| `resilience` (maxAttempts, backoffMs, timeout, circuitBreaker) | `@Resilience` |
| `security` (approvalRequired, audit, sensitive, allowedCallers, maskFields) | `@Security` |
Tool entries are sorted alphabetically by name. The manifest also records a schemaVersion and generation timestamp.
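Assuming the fields in the table above, a generated manifest might look roughly like this. The exact field names and nesting are illustrative, not a guaranteed schema:

```json
{
  "schemaVersion": "1.0",
  "generatedAt": "2025-01-01T00:00:00Z",
  "tools": [
    {
      "name": "weather_lookup",
      "description": "Get current weather for a city",
      "category": "DATA",
      "priority": 60,
      "idempotent": false,
      "parallelizable": true,
      "requiresConfirmation": false,
      "latency": "MEDIUM",
      "resilience": { "maxAttempts": 2, "timeout": 10000 }
    }
  ]
}
```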
Enhancement Pipeline
Instead of manually wrapping your tool with retry, rate limiting, and caching logic, annotate it and let ToolEnhancer do the work. ToolEnhancer reads the @Retry, @RateLimit, and @Cached annotations from a tool class and applies the corresponding wrapper layers automatically.
Wrapper Order
The order in which wrappers are applied matters for correctness. Wrappers are applied from innermost to outermost:
```
CachedTool(RateLimitedTool(RetryableTool(YourTool)))
```

This means:
- Cache hits bypass rate limiting entirely
- Rate limiting is checked before retries
- Retries happen closest to the tool
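The ordering can be demonstrated with plain function composition. Each layer records when it is entered, confirming that the outermost wrapper (the cache) sees the call first (hypothetical names, standalone sketch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class WrapperOrderSketch {
    // Trace which layer sees a call first when wrappers are composed as
    // Cached(RateLimited(Retryable(tool))).
    public static List<String> callOrder() {
        List<String> trace = new ArrayList<>();
        UnaryOperator<String> tool      = q -> { trace.add("tool");      return "result"; };
        UnaryOperator<String> retry     = q -> { trace.add("retry");     return tool.apply(q); };
        UnaryOperator<String> rateLimit = q -> { trace.add("rateLimit"); return retry.apply(q); };
        UnaryOperator<String> cached    = q -> { trace.add("cache");     return rateLimit.apply(q); };
        cached.apply("query"); // the outermost layer is entered first
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(callOrder()); // [cache, rateLimit, retry, tool]
    }
}
```

A cache hit would return before rateLimit is ever entered, which is why cache hits bypass rate limiting entirely.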
Annotating a Tool
Add the @Retry, @RateLimit, and/or @Cached annotations directly on your tool class. You only need to include the annotations you want -- they are all optional.
```java
@Retry(maxAttempts = 3, backoff = Retry.Backoff.EXPONENTIAL,
       initialDelay = 500, multiplier = 2.0, jitter = true)
@RateLimit(requests = 10, per = 1, unit = TimeUnit.SECONDS,
           burstSize = 15, throwOnLimit = true)
@Cached(ttl = 5, unit = TimeUnit.MINUTES, maxSize = 500,
        cacheErrors = false)
public class MySearchTool extends AbstractTool {
    @Override
    protected String doExecute(String query) throws Exception {
        return httpGet("https://api.example.com/search?q=" + query);
    }
}
```

Applying Enhancements
Call ToolEnhancer.enhance() to wrap your tool with all declared annotation-driven behaviors. If no enhancement annotations are present, the original tool is returned unchanged.
```java
Tool raw = new MySearchTool();
Tool enhanced = ToolEnhancer.enhance(raw);
// enhanced is now: CachedTool(RateLimitedTool(RetryableTool(MySearchTool)))
```
Checking for Annotations
You can check whether a tool class has any enhancement annotations before calling enhance().
```java
boolean hasAny = ToolEnhancer.hasEnhancements(MySearchTool.class);
```

Retry Details
The @Retry annotation gives you fine-grained control over how failed tool executions are retried, including backoff timing and which exceptions to retry on.
- `maxAttempts` -- maximum retry count
- `backoff` -- `FIXED` or `EXPONENTIAL`
- `initialDelay`, `multiplier`, `maxDelay` -- backoff timing
- `jitter` -- add randomness to avoid thundering herd
- `retryOn` / `noRetryOn` -- exception class filters (`noRetryOn` takes precedence)
- `retryOnResponse` -- regex patterns matched against the response string
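Ignoring jitter, the delay before attempt n under exponential backoff is initialDelay * multiplier^(n-1), capped at maxDelay. A deterministic sketch of that arithmetic (the framework's actual timing may differ):

```java
public class BackoffSketch {
    // Delay in milliseconds before attempt n (1-based). Jitter is omitted
    // here so the result is deterministic.
    public static long delayMs(int attempt, long initialDelay, double multiplier, long maxDelay) {
        double delay = initialDelay * Math.pow(multiplier, attempt - 1);
        return Math.min((long) delay, maxDelay);
    }

    public static void main(String[] args) {
        // initialDelay=500, multiplier=2.0, maxDelay=5000
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + ": "
                + delayMs(attempt, 500, 2.0, 5000) + " ms");
        }
        // 500, 1000, 2000, 4000, 5000 (capped)
    }
}
```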
Rate Limit Details
The @RateLimit annotation protects external APIs from being overwhelmed by too many calls. It converts requests / per / unit into a requests-per-second rate internally. The burstSize allows short spikes. When throwOnLimit is true, exceeding the limit throws instead of blocking.
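The conversion is straightforward arithmetic: for example, 30 requests per 1 MINUTE is 0.5 requests per second. A sketch of the math (not the framework's internal code):

```java
import java.util.concurrent.TimeUnit;

public class RatePerSecondSketch {
    // Convert requests / per / unit into a requests-per-second rate,
    // as described above.
    public static double requestsPerSecond(int requests, long per, TimeUnit unit) {
        return (double) requests / unit.toSeconds(per);
    }

    public static void main(String[] args) {
        System.out.println(requestsPerSecond(10, 1, TimeUnit.SECONDS)); // 10.0
        System.out.println(requestsPerSecond(30, 1, TimeUnit.MINUTES)); // 0.5
    }
}
```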
Cache Details
The @Cached annotation avoids redundant work by storing previous results in an in-memory cache. It wraps the tool with CachedTool. The ttl and maxSize control eviction. Set cacheErrors = true to also cache error responses (default false).
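The combination of TTL expiry and size-based eviction can be sketched with a LinkedHashMap in access order (a minimal standalone illustration, not CachedTool itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TtlCacheSketch<V> {
    private final long ttlMs;
    private final int maxSize;
    private final Map<String, Entry<V>> entries;

    record Entry<V>(V value, long expiresAt) {}

    public TtlCacheSketch(long ttlMs, int maxSize) {
        this.ttlMs = ttlMs;
        this.maxSize = maxSize;
        // access-order LinkedHashMap evicts the least recently used entry
        // once the cache exceeds maxSize
        this.entries = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Entry<V>> eldest) {
                return size() > TtlCacheSketch.this.maxSize;
            }
        };
    }

    public void put(String key, V value) {
        entries.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMs));
    }

    public V get(String key) {
        Entry<V> e = entries.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt()) {
            entries.remove(key); // TTL expired: evict and miss
            return null;
        }
        return e.value();
    }
}
```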
Tool Catalog
The Tools module provides 152 ready-to-use tools across 37 categories. All tools extend `AbstractTool`, are discoverable via SPI, and can be registered with any agent using `AgentBuilder.tool()`.
Advanced Patterns
The TnsAI.Coordination module provides advanced coordination patterns for production multi-agent systems.