Prompt Strategies
TnsAI includes a prompt enhancement system that applies proven prompting techniques to improve LLM response quality. The system is built around the `PromptStrategy` enum, `PromptEnhancer` builder, and `EnhancedPrompt` output.
Package: com.tnsai.prompt.strategy
PromptStrategy Enum
Prompt strategies are research-backed techniques that improve LLM response quality by structuring how the model approaches a problem. Instead of writing complex system prompts by hand, you pick one or more strategies and the framework generates the right instructions automatically. Twelve predefined strategies are available, based on techniques from OpenAI, Anthropic, and Google AI research.
| Strategy | Description | Multi-Pass | Post-Processing |
|---|---|---|---|
| CHAIN_OF_THOUGHT | Step-by-step reasoning before final answer. Best for math, logic, multi-step analysis. | No | No |
| CHAIN_OF_VERIFICATION | Self-verification with generated questions. Initial answer, then verify, then refine. Reported 60% to 92% accuracy improvement on complex queries. | Yes | Yes |
| CONFIDENCE_WEIGHTED | Includes confidence score (0--100%), key assumptions, and alternatives when confidence is below threshold. | No | No |
| STRUCTURED_THINKING | Four-phase protocol: UNDERSTAND, ANALYZE, STRATEGIZE, EXECUTE. | No | No |
| MULTI_PERSPECTIVE | Examines the problem from Technical, Business, User Experience, and Risk perspectives, then synthesizes a balanced recommendation. | No | No |
| CONSTRAINT_FIRST | Separates hard constraints (must satisfy) from soft preferences (nice to have) before proceeding. | No | No |
| ITERATIVE_REFINEMENT | Multi-pass generation: Draft, Critique, Refine, Review. | Yes | Yes |
| CONTEXT_BOUNDARIES | Clear separation of Context, Focus, Task, and Constraints. Explicitly flags insufficient information. | No | No |
| FEW_SHOT_EXAMPLES | Learns from positive and negative examples to guide response format and quality. | No | No |
| META_PROMPTING | AI designs the optimal prompt for the task first, then responds to it. | Yes | No |
| SIX_PART_ANATOMY | Comprehensive structure: Role, Objective, Request, Process, Output, Stop Condition. | No | No |
| ATOM_OF_THOUGHT | Decomposes problems into independent atoms solved in parallel, then synthesizes. Unlike CoT, errors are isolated per atom. +30--40% accuracy on complex reasoning, +20--30% token usage. Best for 70B+ parameter models. (since 2.10.7) | Yes | No |
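The parallel decomposition that ATOM_OF_THOUGHT describes can be sketched in plain Java. The `solveAtom` function and the three sub-questions below are hypothetical stand-ins for independent LLM calls, not part of the TnsAI API; the point is that each atom is an isolated task whose failure cannot cascade into the others.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AtomOfThoughtSketch {
    // Hypothetical stand-in for one independent LLM call per atom.
    static String solveAtom(String atom) {
        return "Answer to [" + atom + "]";
    }

    public static void main(String[] args) {
        // Decompose the problem into independent sub-questions (atoms).
        List<String> atoms = List.of(
                "What are the scaling limits?",
                "What is the operational cost?",
                "What are the failure modes?");

        // Solve each atom in parallel; an error in one does not poison the rest.
        List<CompletableFuture<String>> futures = atoms.stream()
                .map(a -> CompletableFuture.supplyAsync(() -> solveAtom(a)))
                .collect(Collectors.toList());

        // Synthesize: join the partial answers into one final response.
        String synthesis = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.joining("\n"));

        System.out.println(synthesis);
    }
}
```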
Strategy Methods
Each strategy enum value provides these methods to retrieve its system instruction text and to check whether it requires multiple LLM passes or post-processing.
| Method | Return Type | Description |
|---|---|---|
| getSystemInstruction() | String | The full system instruction text injected for this strategy |
| getDescription() | String | Short human-readable description |
| requiresPostProcessing() | boolean | true for CHAIN_OF_VERIFICATION and ITERATIVE_REFINEMENT |
| isMultiPass() | boolean | true for CHAIN_OF_VERIFICATION, ITERATIVE_REFINEMENT, META_PROMPTING, and ATOM_OF_THOUGHT |
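These flags tell calling code how to dispatch a strategy: a multi-pass strategy needs an orchestration loop instead of a single chat call, and a post-processing strategy needs a final cleanup step. The sketch below uses a stand-in enum that mirrors the documented flags; the real enum lives in com.tnsai.prompt.strategy.PromptStrategy.

```java
public class StrategyMetadataSketch {
    // Stand-in mirroring the documented flags, not the real PromptStrategy.
    enum Strategy {
        CHAIN_OF_THOUGHT(false, false),
        CHAIN_OF_VERIFICATION(true, true),
        META_PROMPTING(true, false);

        private final boolean multiPass;
        private final boolean postProcessing;

        Strategy(boolean multiPass, boolean postProcessing) {
            this.multiPass = multiPass;
            this.postProcessing = postProcessing;
        }

        boolean isMultiPass() { return multiPass; }
        boolean requiresPostProcessing() { return postProcessing; }
    }

    public static void main(String[] args) {
        for (Strategy s : Strategy.values()) {
            // Multi-pass: run an orchestration loop; post-processing: clean up after.
            System.out.printf("%s multiPass=%b postProcess=%b%n",
                    s, s.isMultiPass(), s.requiresPostProcessing());
        }
    }
}
```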
PromptEnhancer Builder
PromptEnhancer is where you assemble your prompting configuration. It is a fluent builder that lets you combine multiple strategies, set a role and objective, define constraints, provide few-shot examples, and choose an output format. When you call .enhance(), it compiles everything into an EnhancedPrompt ready to send to the LLM.
Quick Construction
If you only need a single strategy with no extra configuration, use the shorthand factory method.
// Single strategy
PromptEnhancer enhancer = PromptEnhancer.withStrategy(PromptStrategy.CHAIN_OF_THOUGHT);
EnhancedPrompt prompt = enhancer.enhance("Solve: 2x + 5 = 15");
Full Builder API
For more control, use the builder. You can combine multiple strategies, set a role and objective, add constraints and preferences, provide positive and negative examples, define process steps, and choose an output format.
PromptEnhancer enhancer = PromptEnhancer.builder()
.strategy(PromptStrategy.CHAIN_OF_THOUGHT)
.strategy(PromptStrategy.CONFIDENCE_WEIGHTED)
.role("Expert mathematician")
.objective("Solve algebra problems accurately")
.constraint("Show all work")
.constraint("Verify answer by substitution")
.constraints(List.of("Use standard notation", "Simplify"))
.softPreference("Explain in simple terms")
.softPreferences(List.of("Use examples", "Keep it concise"))
.positiveExample("2x = 10", "x = 5", "Correct division by 2")
.positiveExample("3x + 1 = 7", "x = 2")
.negativeExample("2x = 10", "x = 10", "Forgot to divide")
.processStep("Parse the equation")
.processStep("Isolate the variable")
.process(List.of("Simplify", "Verify"))
.outputFormat(OutputFormat.STRUCTURED)
.stopCondition("Stop after verification pass")
.verificationQuestions(5) // CoVe: number of verification questions
.refinementPasses(3) // Iterative Refinement: number of passes
.confidenceThreshold(0.7f) // Confidence Weighted: threshold for alternatives
.build();
Builder Methods
The table below lists all builder methods. Every setter is chainable and can be called in any order.
| Method | Parameter | Description |
|---|---|---|
| strategy(PromptStrategy) | strategy | Adds a prompting strategy (chainable, multiple allowed) |
| role(String) | role | Sets the role/persona (e.g., "Expert researcher") |
| objective(String) | objective | Sets the high-level goal |
| constraint(String) | constraint | Adds a single hard constraint |
| constraints(List<String>) | constraints | Adds multiple hard constraints |
| softPreference(String) | preference | Adds a single soft preference |
| softPreferences(List<String>) | preferences | Adds multiple soft preferences |
| positiveExample(String, String) | input, output | Adds a good example |
| positiveExample(String, String, String) | input, output, explanation | Adds a good example with reasoning |
| negativeExample(String, String, String) | input, badOutput, whyBad | Adds a bad example with explanation |
| processStep(String) | step | Adds a single numbered process step |
| process(List<String>) | steps | Adds multiple process steps |
| outputFormat(OutputFormat) | format | Sets the response output format |
| stopCondition(String) | condition | Sets the completion/stop condition |
| verificationQuestions(int) | count | Number of verification questions (for CHAIN_OF_VERIFICATION) |
| refinementPasses(int) | passes | Number of refinement passes (for ITERATIVE_REFINEMENT) |
| confidenceThreshold(float) | threshold | Threshold for showing alternatives (for CONFIDENCE_WEIGHTED) |
OutputFormat Enum
This enum tells the LLM what format to use in its response. The framework appends the corresponding instruction to the system prompt so the model produces output in the desired structure.
The enum is nested as PromptEnhancer.OutputFormat.
| Value | Instruction |
|---|---|
| TEXT | Provide your response as plain text. |
| JSON | Provide your response as valid JSON. |
| MARKDOWN | Format your response using Markdown. |
| MARKDOWN_TABLE | Format your response as a Markdown table. |
| BULLET_POINTS | Use bullet points for your response. |
| NUMBERED_LIST | Use a numbered list for your response. |
| STRUCTURED | Use clear sections with headers for your response. |
| CODE | Provide code with comments explaining each section. |
| COMPARISON_TABLE | Create a comparison table with pros/cons. |
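A rough sketch of the appending behavior described above; the exact assembly logic and separator inside the framework are assumptions, but the idea is simply that the chosen instruction string is added to the end of the generated system prompt.

```java
public class OutputFormatSketch {
    // Hypothetical assembly: append the format instruction to the system prompt.
    static String withFormatInstruction(String systemPrompt, String formatInstruction) {
        return systemPrompt + "\n\n" + formatInstruction;
    }

    public static void main(String[] args) {
        String base = "Think step by step before giving your final answer.";
        // The instruction text here is the documented one for JSON.
        System.out.println(withFormatInstruction(base, "Provide your response as valid JSON."));
    }
}
```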
Example Record
When using the FEW_SHOT_EXAMPLES strategy, you teach the LLM by showing it input/output pairs. The Example record holds one such pair, with an optional explanation of why the output is correct (or incorrect for negative examples).
// record Example(String input, String output, String explanation)
new PromptEnhancer.Example("2x = 10", "x = 5", "Divided both sides by 2");
new PromptEnhancer.Example("2x = 10", "x = 5"); // explanation is optional (null)
Enhancer Instance Methods
Once built, the PromptEnhancer instance provides these methods. The main one is enhance(), which takes a user message and returns a fully assembled EnhancedPrompt.
| Method | Return Type | Description |
|---|---|---|
| enhance(String userMessage) | EnhancedPrompt | Applies all configured strategies and produces the enhanced prompt |
| getStrategies() | List<PromptStrategy> | Returns the configured strategies |
| requiresMultiPass() | boolean | true if any strategy is multi-pass |
EnhancedPrompt
EnhancedPrompt is what you get after calling enhance(). It holds the assembled system prompt (with all strategy instructions baked in), the original user message, and metadata about which strategies are active. Pass it directly to your LLMClient to make the enhanced call.
Methods
These methods let you access the prompt content and check which capabilities the enhanced prompt expects from the LLM response.
| Method | Return Type | Description |
|---|---|---|
| getSystemPrompt() | String | The full system prompt with all strategy instructions |
| getUserMessage() | String | The original user message |
| getSystemPromptOptional() | Optional<String> | System prompt wrapped in Optional for LLMClient convenience |
| getStrategies() | List<PromptStrategy> | List of applied strategies |
| getOutputFormat() | Optional<OutputFormat> | The output format, if specified |
| requiresMultiPass() | boolean | true if any applied strategy is multi-pass |
| requiresPostProcessing() | boolean | true if any strategy needs post-processing |
| expectsConfidenceScore() | boolean | true if CONFIDENCE_WEIGHTED is applied |
| expectsStructuredThinking() | boolean | true if STRUCTURED_THINKING is applied |
| expectsVerification() | boolean | true if CHAIN_OF_VERIFICATION is applied |
| getCombinedPrompt() | String | System prompt + user message in a single string (for models without separate system prompt support) |
| getEstimatedOverhead() | int | Estimated additional tokens from enhancement (~4 chars/token) |
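The last two accessors can be approximated in plain Java. The joining separator and the 4-characters-per-token heuristic below are assumptions based on the descriptions above, not the framework's actual implementation.

```java
public class EnhancedPromptMath {
    // ~4 characters per token, as stated for getEstimatedOverhead().
    static int estimateTokens(String text) {
        return text.length() / 4;
    }

    // Hypothetical combination for models without separate system prompt support.
    static String combine(String systemPrompt, String userMessage) {
        return systemPrompt + "\n\n" + userMessage;
    }

    public static void main(String[] args) {
        String system = "You are an expert mathematician. Show all reasoning steps.";
        String user = "Solve: 2x + 5 = 15";
        System.out.println(combine(system, user));
        System.out.println("Estimated overhead tokens: " + estimateTokens(system));
    }
}
```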
Using EnhancedPrompt with LLMClient
Here is how to pass the enhanced prompt to your LLM client. Most providers accept a separate system prompt; for those that do not, use getCombinedPrompt() to get a single string.
EnhancedPrompt prompt = enhancer.enhance("What causes inflation?");
// With separate system prompt support
ChatResponse response = client.chat(
prompt.getUserMessage(),
prompt.getSystemPromptOptional(),
Optional.empty(),
Optional.empty()
);
// Without system prompt support
String combined = prompt.getCombinedPrompt();
Integration with AgentBuilder
You can wire prompt strategies directly into an agent through the AgentBuilder, so every message the agent processes is automatically enhanced. There are three approaches: adding individual strategies, adding a list of strategies, or providing a fully configured PromptEnhancer.
// Add individual strategies
Agent agent = AgentBuilder.create()
.llm(llmClient)
.role(myRole)
.promptStrategy(PromptStrategy.CHAIN_OF_THOUGHT)
.promptStrategy(PromptStrategy.CONFIDENCE_WEIGHTED)
.build();
// Add multiple strategies at once
Agent agent = AgentBuilder.create()
.llm(llmClient)
.promptStrategies(List.of(
PromptStrategy.STRUCTURED_THINKING,
PromptStrategy.MULTI_PERSPECTIVE
))
.build();
// Use a fully configured PromptEnhancer
PromptEnhancer enhancer = PromptEnhancer.builder()
.role("Expert researcher")
.objective("Provide accurate information")
.strategy(PromptStrategy.CHAIN_OF_VERIFICATION)
.constraint("Always cite sources")
.positiveExample("Question", "Good answer", "Why it's good")
.build();
Agent agent = AgentBuilder.create()
.llm(llmClient)
.promptEnhancer(enhancer)
.build();
| AgentBuilder Method | Description |
|---|---|
| .promptStrategy(PromptStrategy) | Adds a single strategy (since 2.10.0) |
| .promptStrategies(List<PromptStrategy>) | Adds multiple strategies at once (since 2.10.0) |
| .promptEnhancer(PromptEnhancer) | Sets a fully configured enhancer (since 2.10.0) |
Code Examples
These examples show how to apply different strategies to real-world use cases. Each one demonstrates a different prompting technique suited to the task at hand.
Chain-of-Thought for Math
Chain-of-Thought prompting asks the model to show its step-by-step reasoning before giving a final answer. This significantly improves accuracy on math, logic, and multi-step analysis tasks.
PromptEnhancer enhancer = PromptEnhancer.builder()
.strategy(PromptStrategy.CHAIN_OF_THOUGHT)
.role("Mathematics tutor")
.outputFormat(OutputFormat.STRUCTURED)
.build();
EnhancedPrompt prompt = enhancer.enhance(
"A train travels 120km in 2 hours. It then travels 180km in 3 hours. " +
"What is its average speed for the entire journey?"
);
Chain-of-Verification for Fact Checking
Chain-of-Verification (CoVe) makes the model generate an initial answer, then create verification questions to check its own claims, and finally refine the answer based on what it finds. This is a multi-pass strategy that dramatically reduces factual errors.
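Generically, the multi-pass flow looks like draft, verify, refine, each as a separate LLM call. The `runCoVe` method and the `UnaryOperator` stand-in below are hypothetical orchestration for illustration, not the framework's actual post-processor.

```java
import java.util.function.UnaryOperator;

public class CoVeFlowSketch {
    // Draft -> verify -> refine, each as a separate LLM pass.
    static String runCoVe(UnaryOperator<String> llm, String question) {
        String draft = llm.apply("Answer: " + question);                        // pass 1: initial answer
        String checks = llm.apply("List verification questions for: " + draft); // pass 2: self-check
        return llm.apply("Refine " + draft + " using: " + checks);              // pass 3: refined answer
    }

    public static void main(String[] args) {
        // Deterministic stand-in for a real LLM client.
        UnaryOperator<String> fakeLlm = prompt -> "RESPONSE[" + prompt + "]";
        System.out.println(runCoVe(fakeLlm, "What were the causes of World War I?"));
    }
}
```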
PromptEnhancer enhancer = PromptEnhancer.builder()
.strategy(PromptStrategy.CHAIN_OF_VERIFICATION)
.verificationQuestions(5)
.constraint("Each claim must be independently verifiable")
.build();
EnhancedPrompt prompt = enhancer.enhance("What were the causes of World War I?");
if (prompt.requiresMultiPass()) {
// Handle multi-pass verification flow
}
Atom-of-Thought for Complex Reasoning
Atom-of-Thought decomposes a complex problem into independent "atoms" that can be solved in parallel, then synthesizes the results. Unlike Chain-of-Thought, errors in one atom do not cascade to others. This works best with large models (70B+ parameters) and is combined here with confidence scoring.
PromptEnhancer enhancer = PromptEnhancer.builder()
.strategy(PromptStrategy.ATOM_OF_THOUGHT)
.strategy(PromptStrategy.CONFIDENCE_WEIGHTED)
.confidenceThreshold(0.7f)
.objective("Analyze system architecture tradeoffs")
.build();
EnhancedPrompt prompt = enhancer.enhance(
"Compare microservices vs monolith for a 10-person startup " +
"building a real-time analytics platform"
);
// Estimated overhead: prompt.getEstimatedOverhead() tokens
Few-Shot with Examples
Few-shot prompting teaches the model by example. You provide a few input/output pairs (both good and bad), and the model learns the expected format and quality from them. This is especially effective for classification, formatting, and style-matching tasks.
PromptEnhancer enhancer = PromptEnhancer.builder()
.strategy(PromptStrategy.FEW_SHOT_EXAMPLES)
.positiveExample(
"The food was great",
"Sentiment: POSITIVE (0.95)",
"Clear positive language"
)
.positiveExample(
"Terrible service, never again",
"Sentiment: NEGATIVE (0.98)",
"Strong negative indicators"
)
.negativeExample(
"The food was great",
"positive",
"Missing confidence score and proper format"
)
.outputFormat(OutputFormat.TEXT)
.build();
EnhancedPrompt prompt = enhancer.enhance("The product works but could be better");
Output Parsing & Serialization
TnsAI provides type-safe output parsing for converting raw LLM responses into structured Java objects, and a multi-format serialization system for producing structured output.
Resilience
TnsAI.Core provides a declarative resilience framework built on top of Resilience4j. The `@Resilience` annotation configures retry, circuit breaker, rate limiting, bulkhead isolation, timeout, and fallback policies for actions and roles. The `ResilienceExecutor` applies these policies in a layered pipeline and tracks terminal failures in a dead-letter queue.