Planning
Goal-oriented planning for AI agents. TnsAI provides three planner implementations: annotation-driven backward chaining, utility-based scoring, and LLM-powered dynamic planning with human-in-the-loop approval and adaptive replanning.
Planner Interface
Every planner in TnsAI implements this interface, which defines how to generate action plans from the current world state. You can use one of the three built-in planners or register your own via META-INF/services/com.tnsai.planning.Planner.
```java
public interface Planner {
    List<PlanningAction> plan(Map<String, Object> state);
    List<PlanningAction> plan(Map<String, Object> state, boolean useChaining);
    List<PlanningGoal> getGoals();
    List<PlanningAction> getActions();
    List<PlanningGoal> findUnsatisfiedGoals(Map<String, Object> state);
    List<PlanningGoal> findSatisfiedGoals(Map<String, Object> state);
    boolean isGoalSatisfied(String goalName, Map<String, Object> state);
    List<PlanningAction> findActionsForGoal(PlanningGoal goal, Map<String, Object> state);
    List<PlanningAction> findApplicableActions(Map<String, Object> state);
    Map<String, Object> applyEffects(PlanningAction action, Map<String, Object> state);
}
```

PlanningGoal
Goals define what the agent wants to achieve. Created from @Goal annotations or programmatically.
```java
// Simple goal with defaults
PlanningGoal goal = PlanningGoal.of("survive", "health > 0");

// Goal with priority
PlanningGoal urgent = PlanningGoal.of("heal", "health > 50", Priority.HIGH);

// Full constructor
PlanningGoal full = new PlanningGoal(
    "survive",                // name
    "health > 0",             // condition expression
    Priority.CRITICAL,        // priority
    "Keep health above zero", // description
    true,                     // persistent (re-evaluate after achievement)
    100                       // deadline in ticks (-1 for none)
);
```

| Field | Type | Description |
|---|---|---|
| name | String | Unique goal identifier |
| condition | String | Boolean expression evaluated against state |
| priority | Priority | Determines planning order (higher = first) |
| persistent | boolean | Re-evaluate after achievement |
| deadline | int | Ticks until expiry (-1 = none) |
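To make the condition field concrete, the sketch below checks a condition string such as "health > 0" against a state map. This is an illustrative toy evaluator, not the library's actual expression engine; it assumes the simple "variable operator integer" shape used in the examples above.

```java
import java.util.Map;

// Toy evaluator for goal conditions of the form "<var> <op> <int>".
// The library's real expression language may support much more.
class ConditionSketch {
    static boolean isSatisfied(String condition, Map<String, Object> state) {
        String[] parts = condition.trim().split("\\s+"); // e.g. ["health", ">", "0"]
        int actual = ((Number) state.getOrDefault(parts[0], 0)).intValue();
        int expected = Integer.parseInt(parts[2]);
        return switch (parts[1]) {
            case ">"  -> actual > expected;
            case ">=" -> actual >= expected;
            case "<"  -> actual < expected;
            case "<=" -> actual <= expected;
            case "==" -> actual == expected;
            default   -> throw new IllegalArgumentException("op: " + parts[1]);
        };
    }
}
```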
PlanningAction
Actions represent things the agent can do, with preconditions, postconditions, and utility fields.
```java
// Simple factory
PlanningAction heal = PlanningAction.of(
    "heal", "health < 50", "health = 100", "survive");

// Builder with utility fields
PlanningAction search = PlanningAction.builder("search")
    .description("Search for resources")
    .precondition("energy > 10")
    .postcondition("resources = resources + 5")
    .fulfills("gather")
    .cost(10)
    .value(50)
    .weight(1.5f)
    .tags("exploration", "gathering")
    .build();

search.utility();               // 40 (value - cost)
search.weightedUtility();       // 60.0 (utility * weight)
search.fulfillsGoal("gather");  // true
search.hasTag("exploration");   // true
search.hasPrecondition();       // true
search.hasPostcondition();      // true
```

| Field | Type | Default | Description |
|---|---|---|---|
| name | String | required | Action identifier |
| precondition | String | "" | Condition that must be true before execution |
| postcondition | String | "" | State changes after execution |
| fulfills | Set<String> | {} | Goal names this action helps achieve |
| method | Method | null | Java method to invoke (null for simulation) |
| cost | int | 1 | Execution cost for utility calculation |
| value | int | 1 | Expected value for utility calculation |
| weight | float | 1.0 | Multiplier for utility score |
| tags | Set<String> | {} | Tags for filtering/grouping |
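The utility arithmetic shown above (utility = value - cost, weightedUtility = utility * weight) can be captured in a few lines. This is a standalone illustrative record, not the library's PlanningAction class:

```java
// Minimal stand-in for PlanningAction's utility fields, for illustration only.
record UtilitySketch(int cost, int value, float weight) {
    int utility() { return value - cost; }               // expected gain minus cost
    float weightedUtility() { return utility() * weight; } // scaled by weight
}
```

With cost=10, value=50, weight=1.5f this reproduces the 40 and 60.0 results from the builder example.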
BackwardChainingPlanner
Starts from unsatisfied goals and works backward to find action sequences. Handles multi-step plans where one action's postcondition enables another's precondition.
Annotation-Driven Setup
The easiest way to define goals, actions, and state is with annotations on a Java class. The planner reads @Goal, @ActionSpec, and @State annotations at construction time and builds the planning model automatically.
```java
@RoleSpec(
    name = "combat-medic",
    goals = {
        @Goal(name = "survive", condition = "health > 0", priority = Priority.CRITICAL),
        @Goal(name = "heal-team", condition = "teamHealth > 50", priority = Priority.HIGH),
        @Goal(name = "gather", condition = "supplies > 10", priority = Priority.NORMAL)
    }
)
public class CombatMedic {

    @State(name = "health")
    private int health = 100;

    @State(name = "supplies")
    private int supplies = 5;

    @State(name = "teamHealth")
    private int teamHealth = 30;

    @ActionSpec(
        description = "Use medkit to heal a teammate",
        precondition = "supplies > 0",
        postcondition = "teamHealth = 80, supplies = supplies - 1",
        fulfills = {"heal-team"}
    )
    public void healTeammate() { /* ... */ }

    @ActionSpec(
        description = "Search area for supplies",
        precondition = "health > 20",
        postcondition = "supplies = supplies + 3",
        fulfills = {"gather"}
    )
    public void searchForSupplies() { /* ... */ }
}
```

Using the Planner
Once you have a planner, call plan(state) with the current world state to get an ordered list of actions. The backward chaining algorithm figures out which actions to execute and in what order to satisfy unsatisfied goals.
```java
// Create from annotated class
Planner planner = new BackwardChainingPlanner(CombatMedic.class);

// Extract current state from @State fields
Map<String, Object> state = BackwardChainingPlanner.extractState(medicInstance);
// state = {health=100, supplies=5, teamHealth=30}

// Generate plan (backward chaining enabled by default)
List<PlanningAction> plan = planner.plan(state);
// Result: [searchForSupplies, healTeammate]
// Because: supplies are needed first, then heal

// Query goals
planner.findUnsatisfiedGoals(state);       // [heal-team, gather]
planner.isGoalSatisfied("survive", state); // true (health=100 > 0)

// Custom max depth
Planner depthLimited = new BackwardChainingPlanner(CombatMedic.class, 5);

// Programmatic setup
Planner programmatic = new BackwardChainingPlanner(goals, actions, 8);
```

The backward chaining algorithm recurses up to maxDepth (default 10) and tracks visited actions to prevent infinite cycles.
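The core idea of backward chaining can be sketched with a small self-contained model. The types below (`Action` as a precondition plus a state-transforming effect, a goal as a predicate) are illustrative stand-ins, not the library's internals:

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Illustrative model: an action has a precondition over the state and an
// effect that produces the successor state.
record Action(String name,
              Predicate<Map<String, Object>> precondition,
              UnaryOperator<Map<String, Object>> effect) {}

class BackwardChainer {
    // Returns an ordered action list that satisfies the goal, or an empty
    // list if none is found within maxDepth. If an action achieves the goal
    // but its precondition is unmet, recurse with the precondition as a
    // sub-goal, so enabling actions land before the action they unblock.
    static List<Action> plan(Predicate<Map<String, Object>> goal,
                             List<Action> actions,
                             Map<String, Object> state,
                             int maxDepth) {
        if (goal.test(state)) return List.of(); // already satisfied
        if (maxDepth == 0) return List.of();    // depth limit reached
        for (Action a : actions) {
            if (!goal.test(a.effect().apply(state))) continue; // a doesn't help
            if (a.precondition().test(state)) return List.of(a);
            List<Action> enablers = plan(a.precondition(), actions, state, maxDepth - 1);
            if (!enablers.isEmpty()) {
                List<Action> seq = new ArrayList<>(enablers);
                seq.add(a);
                return seq;
            }
        }
        return List.of();
    }
}
```

With the medic scenario and zero supplies, this yields the search-then-heal ordering described above.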
UtilityAIPlanner
Greedily selects the action with the highest utility score. Unlike backward chaining (goal-directed), utility AI is reactive -- it picks the best action at each step.
Considerations
Considerations are scoring functions that evaluate how desirable each action is given the current state. The planner multiplies all consideration scores together to produce a final utility value for each action, then picks the highest one.
| Factory Method | Description |
|---|---|
| Consideration.cost() | Lower cost = higher score (inverse, normalized to 100) |
| Consideration.cost(weight) | Weighted cost consideration |
| Consideration.value() | Higher value = higher score (normalized to 100) |
| Consideration.value(weight) | Weighted value consideration |
| Consideration.utility() | value - cost, normalized |
| Consideration.preconditionSatisfied() | 1.0 if met, 0.0 if not |
| Consideration.hasTag(tag) | 1.0 if action has tag |
| Consideration.combine(...) | Weighted average of multiple |
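The multiply-then-pick-the-best scheme can be sketched as follows. This is an illustrative model using plain strings for actions, not the planner's PlanningAction scoring; note how a single zero score vetoes an action entirely, which is why preconditionSatisfied() returning 0.0 removes an action from contention:

```java
import java.util.*;
import java.util.function.BiFunction;

// Illustrative utility-AI selection: each consideration scores an action,
// scores multiply into a final utility, and the highest-scoring action wins.
class UtilitySelectionSketch {
    interface Consideration extends BiFunction<String, Map<String, Object>, Float> {}

    static Optional<String> selectBest(List<String> actions,
                                       List<Consideration> considerations,
                                       Map<String, Object> state) {
        String best = null;
        float bestScore = 0f;
        for (String action : actions) {
            float score = 1f;
            for (Consideration c : considerations) {
                score *= c.apply(action, state); // any zero score vetoes the action
            }
            if (score > bestScore) { bestScore = score; best = action; }
        }
        return Optional.ofNullable(best);
    }
}
```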
```java
// Custom consideration
Consideration urgency = (action, state) -> {
    Integer priority = (Integer) state.get("taskPriority");
    return priority != null ? priority / 10.0f : 0.5f;
};
```

Builder Pattern
You can build a UtilityAIPlanner programmatically by adding goals, actions, and considerations. The planner evaluates all actions against the considerations and selects the one with the highest combined score.
```java
UtilityAIPlanner planner = UtilityAIPlanner.builder()
    .goal(PlanningGoal.of("optimize", "efficiency > 80"))
    .action(PlanningAction.builder("cacheResults")
        .cost(5).value(40).fulfills("optimize").build())
    .action(PlanningAction.builder("parallelProcess")
        .cost(20).value(80).fulfills("optimize").build())
    .consideration(Consideration.cost(0.3f))
    .consideration(Consideration.value(0.5f))
    .consideration(Consideration.preconditionSatisfied())
    .build();

Optional<PlanningAction> best = planner.selectBestAction(state);
List<PlanningAction> ranked = planner.getActionsByUtility(state);
float score = planner.calculateUtility(action, state);
```

Annotation-Driven with @Utility
Instead of building programmatically, you can annotate actions with @Utility to set their cost, value, and weight directly in the class definition. The planner reads these at construction time.
```java
@ActionSpec(
    description = "Cache query results",
    precondition = "cacheSize < maxCache",
    postcondition = "cacheHitRate = 0.8",
    fulfills = {"performance"},
    utility = @Utility(cost = 5, value = 40, weight = 1.2f, tags = {"cache"})
)
public void cacheResults() { /* ... */ }

UtilityAIPlanner planner = new UtilityAIPlanner(MyRole.class);
```

LLMDynamicPlanner
Uses an LLM to decompose natural-language goals into executable step sequences. Suitable for open-ended tasks where actions cannot be predefined.
```java
LLMDynamicPlanner planner = LLMDynamicPlanner.builder()
    .llm(client)
    .capability(CapabilityDescriptor.of("search", "Search the web for information"))
    .capability(CapabilityDescriptor.of("write_file", "Write content to a file"))
    .capability(CapabilityDescriptor.of("run_tests", "Execute test suite"))
    .additionalContext("Project uses Java 21 with Maven")
    .temperature(0.2f)
    .build();

LLMPlan plan = planner.generatePlan("Create a summary of recent AI news");
System.out.println(plan.toDisplayString());
// Plan for: Create a summary of recent AI news
// Steps:
//   1. [search] Find recent AI news articles
//   2. [write_file] Write summary to output.md

for (LLMPlanStep step : plan.steps()) {
    System.out.printf("[%s] %s (args: %s)%n",
        step.actionName(), step.description(), step.arguments());
}
```

LLMPlan
An immutable data structure representing the generated plan. It supports non-destructive modifications (removing steps, reordering) that return a new plan, which is useful for human-in-the-loop approval workflows where reviewers may want to adjust the plan before execution.
```java
plan.size();            // Number of steps
plan.isEmpty();         // True if no steps
plan.goal();            // Original goal string
plan.reasoning();       // LLM's overall strategy
plan.withoutStep(2);    // New plan without step at index 2
plan.withReorderedSteps(List.of(0, 2, 1)); // New plan with reordered steps
plan.remainingFrom(3);  // New plan with steps from index 3 onward
plan.toDisplayString(); // Human-readable format
```

LLMPlanStep
Each step in an LLM-generated plan maps to one of the declared capabilities. It includes the action to execute, a human-readable description, optional arguments, and the LLM's reasoning for why this step is needed.
```java
LLMPlanStep step = LLMPlanStep.of(0, "search", "Find recent articles");
step.stepIndex();    // 0
step.actionName();   // "search"
step.description();  // "Find recent articles"
step.arguments();    // Map<String, Object>
step.reasoning();    // Why this step is needed
```

PlanApprovalGate
Human-in-the-loop approval between plan generation and execution.
```java
PlanApprovalGate gate = PlanApprovalGate.builder()
    .reviewCallback(plan -> {
        System.out.println(plan.toDisplayString());
        System.out.print("Approve? (y/n): ");
        String input = scanner.nextLine();
        if ("y".equals(input)) return ApprovalDecision.approve();
        return ApprovalDecision.reject("User declined");
    })
    .autoApproveEmpty(true)
    .build();

Optional<LLMPlan> approved = gate.review(generatedPlan);
approved.ifPresent(plan -> engine.executePlan(plan));

// Generate + review in one call
Optional<LLMPlan> result = gate.generateAndReview(planner, "Deploy the app");

// Auto-approve for testing
PlanApprovalGate autoGate = PlanApprovalGate.autoApprove();
```

ApprovalDecision
The reviewer's response to a proposed plan. Decisions can accept, reject, or modify the plan by removing or reordering steps.
| Factory | Description |
|---|---|
| ApprovalDecision.approve() | Accept plan as-is |
| ApprovalDecision.reject(reason) | Reject with reason |
| ApprovalDecision.removeSteps(List<Integer>) | Accept with steps removed |
| ApprovalDecision.reorder(List<Integer>) | Accept with reordered steps |
| ApprovalDecision.modify(removed, newOrder) | Accept with both modifications |
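Because plans are immutable, a removal or reorder decision produces a new plan rather than mutating the reviewed one. The sketch below illustrates that non-destructive style with plain lists of step labels; the real types are LLMPlan and LLMPlanStep:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative non-destructive plan edits over step labels.
class PlanEditSketch {
    static List<String> withoutStep(List<String> steps, int index) {
        List<String> copy = new ArrayList<>(steps);
        copy.remove(index);
        return List.copyOf(copy); // the original list is left untouched
    }

    static List<String> reordered(List<String> steps, List<Integer> order) {
        return order.stream().map(steps::get).toList(); // new list in new order
    }
}
```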
AdaptiveReplanEngine
Executes LLM-generated plans with automatic replanning on step failure.
```java
AdaptiveReplanEngine engine = AdaptiveReplanEngine.builder()
    .llm(client)
    .planner(planner)
    .stepExecutor(step -> {
        try {
            String output = myToolRunner.run(step.actionName(), step.arguments());
            return StepExecutionResult.success(output);
        } catch (Exception e) {
            return StepExecutionResult.failure(e.getMessage());
        }
    })
    .maxReplanAttempts(3)
    .build();

PlanExecutionResult result = engine.execute("Deploy the application");
System.out.println(result.success());
System.out.println(result.replanCount());

// Execute an existing plan
PlanExecutionResult planResult = engine.executePlan(approvedPlan, currentState);
```

Replanning Flow
When a step fails, the engine does not simply stop. Instead, it asks the LLM to create a revised plan that accounts for the failure, then continues execution. This makes plans resilient to unexpected errors.
- Execute steps sequentially via StepExecutor
- On failure: collect completed steps, error details, remaining steps
- Call LLM with failure context to generate a revised plan
- Continue execution with revised plan
- Repeat up to maxReplanAttempts times
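The loop above can be sketched in isolation. The functional stand-ins here are hypothetical: `executor` runs one step and returns null on success or an error message on failure, and `replanner` stands in for the LLM call that turns a failure description into a revised list of remaining steps:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

// Illustrative adaptive-replan loop over step labels.
class ReplanLoopSketch {
    static boolean execute(List<String> steps,
                           Function<String, String> executor,
                           Function<String, List<String>> replanner,
                           int maxReplanAttempts) {
        int attempts = 0;
        Deque<String> pending = new ArrayDeque<>(steps);
        while (!pending.isEmpty()) {
            String step = pending.poll();
            String error = executor.apply(step);
            if (error == null) continue;                 // step succeeded
            if (attempts++ >= maxReplanAttempts) return false; // give up
            // Ask for a revised plan that accounts for the failure,
            // then continue with the revised remaining steps.
            pending = new ArrayDeque<>(replanner.apply(step + ": " + error));
        }
        return true;
    }
}
```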
Full Pipeline Example
This shows the recommended end-to-end workflow: the LLM generates a plan, a human reviews and approves it, and the adaptive engine executes it with automatic replanning on failure.
```java
// 1. Create planner with capabilities
LLMDynamicPlanner planner = LLMDynamicPlanner.builder()
    .llm(client).capabilities(capabilities).build();

// 2. Set up approval gate
PlanApprovalGate gate = PlanApprovalGate.builder()
    .reviewCallback(myReviewUI::showPlan).build();

// 3. Set up execution engine
AdaptiveReplanEngine engine = AdaptiveReplanEngine.builder()
    .llm(client).planner(planner)
    .stepExecutor(myExecutor).maxReplanAttempts(3).build();

// 4. Generate, approve, execute
Optional<LLMPlan> approved = gate.generateAndReview(planner, goal, state);
approved.ifPresent(plan -> {
    PlanExecutionResult result = engine.executePlan(plan, state);
    if (result.success()) {
        System.out.println("Goal achieved!");
    }
});
```

Learning and Refinement
Feedback-driven learning, normative constraint enforcement, iterative refinement loops, prompt optimization, and structured output validation. These components enable agents to improve over time and produce higher-quality outputs.
RAG Strategy SPI
TnsAI.Intelligence provides a pluggable Retrieval-Augmented Generation (RAG) framework with three built-in strategies and a composable pipeline. Package: `com.tnsai.intelligence.rag`.