Prompt Strategies

TnsAI includes a prompt enhancement system that applies proven prompting techniques to improve LLM response quality. The system is built around the `PromptStrategy` enum, `PromptEnhancer` builder, and `EnhancedPrompt` output.

Package: com.tnsai.prompt.strategy

PromptStrategy Enum

Prompt strategies are research-backed techniques that improve LLM response quality by structuring how the model approaches a problem. Instead of writing complex system prompts by hand, you pick one or more strategies and the framework generates the right instructions automatically. Twelve predefined strategies are available, based on techniques from OpenAI, Anthropic, and Google AI research.

| Strategy | Description | Multi-Pass | Post-Processing |
|---|---|---|---|
| CHAIN_OF_THOUGHT | Step-by-step reasoning before the final answer. Best for math, logic, multi-step analysis. | No | No |
| CHAIN_OF_VERIFICATION | Self-verification with generated questions: initial answer, then verify, then refine. Reported 60% to 92% accuracy improvement on complex queries. | Yes | Yes |
| CONFIDENCE_WEIGHTED | Includes a confidence score (0–100%), key assumptions, and alternatives when confidence is below the threshold. | No | No |
| STRUCTURED_THINKING | Four-phase protocol: UNDERSTAND, ANALYZE, STRATEGIZE, EXECUTE. | No | No |
| MULTI_PERSPECTIVE | Examines the problem from Technical, Business, User Experience, and Risk perspectives, then synthesizes a balanced recommendation. | No | No |
| CONSTRAINT_FIRST | Separates hard constraints (must satisfy) from soft preferences (nice to have) before proceeding. | No | No |
| ITERATIVE_REFINEMENT | Multi-pass generation: Draft, Critique, Refine, Review. | Yes | Yes |
| CONTEXT_BOUNDARIES | Clear separation of Context, Focus, Task, and Constraints. Explicitly flags insufficient information. | No | No |
| FEW_SHOT_EXAMPLES | Learns from positive and negative examples to guide response format and quality. | No | No |
| META_PROMPTING | The AI designs the optimal prompt for the task first, then responds to it. | Yes | No |
| SIX_PART_ANATOMY | Comprehensive structure: Role, Objective, Request, Process, Output, Stop Condition. | No | No |
| ATOM_OF_THOUGHT | Decomposes problems into independent atoms solved in parallel, then synthesizes. Unlike CoT, errors are isolated per atom. +30–40% accuracy on complex reasoning, +20–30% token usage. Best for 70B+ parameter models. (since 2.10.7) | Yes | No |

Strategy Methods

Each strategy enum value provides these methods to retrieve its system instruction text and to check whether it requires multiple LLM passes or post-processing.

| Method | Return Type | Description |
|---|---|---|
| getSystemInstruction() | String | The full system instruction text injected for this strategy |
| getDescription() | String | Short human-readable description |
| requiresPostProcessing() | boolean | true for CHAIN_OF_VERIFICATION and ITERATIVE_REFINEMENT |
| isMultiPass() | boolean | true for CHAIN_OF_VERIFICATION, ITERATIVE_REFINEMENT, META_PROMPTING, and ATOM_OF_THOUGHT |
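
As a quick illustration, you can inspect a strategy's metadata before deciding how to execute it (a sketch using the enum values and methods documented above):

```java
import com.tnsai.prompt.strategy.PromptStrategy;

// Inspect strategy metadata to pick an execution path
PromptStrategy strategy = PromptStrategy.CHAIN_OF_VERIFICATION;

System.out.println(strategy.getDescription());
System.out.println(strategy.getSystemInstruction());

if (strategy.isMultiPass()) {
    // This strategy needs multiple LLM calls
}
if (strategy.requiresPostProcessing()) {
    // ...and post-processing of intermediate output
}
```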

PromptEnhancer Builder

PromptEnhancer is where you assemble your prompting configuration. It is a fluent builder that lets you combine multiple strategies, set a role and objective, define constraints, provide few-shot examples, and choose an output format. When you call .enhance(), it compiles everything into an EnhancedPrompt ready to send to the LLM.

Quick Construction

If you only need a single strategy with no extra configuration, use the shorthand factory method.

// Single strategy
PromptEnhancer enhancer = PromptEnhancer.withStrategy(PromptStrategy.CHAIN_OF_THOUGHT);
EnhancedPrompt prompt = enhancer.enhance("Solve: 2x + 5 = 15");

Full Builder API

For more control, use the builder. You can combine multiple strategies, set a role and objective, add constraints and preferences, provide positive and negative examples, define process steps, and choose an output format.

PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.CHAIN_OF_THOUGHT)
    .strategy(PromptStrategy.CONFIDENCE_WEIGHTED)
    .role("Expert mathematician")
    .objective("Solve algebra problems accurately")
    .constraint("Show all work")
    .constraint("Verify answer by substitution")
    .constraints(List.of("Use standard notation", "Simplify"))
    .softPreference("Explain in simple terms")
    .softPreferences(List.of("Use examples", "Keep it concise"))
    .positiveExample("2x = 10", "x = 5", "Correct division by 2")
    .positiveExample("3x + 1 = 7", "x = 2")
    .negativeExample("2x = 10", "x = 10", "Forgot to divide")
    .processStep("Parse the equation")
    .processStep("Isolate the variable")
    .process(List.of("Simplify", "Verify"))
    .outputFormat(OutputFormat.STRUCTURED)
    .stopCondition("Stop after verification pass")
    .verificationQuestions(5)       // CoVe: number of verification questions
    .refinementPasses(3)            // Iterative Refinement: number of passes
    .confidenceThreshold(0.7f)      // Confidence Weighted: threshold for alternatives
    .build();

Builder Methods

The complete list of builder methods. All setter methods are chainable and can be called in any order.

| Method | Parameter | Description |
|---|---|---|
| strategy(PromptStrategy) | strategy | Adds a prompting strategy (chainable, multiple allowed) |
| role(String) | role | Sets the role/persona (e.g., "Expert researcher") |
| objective(String) | objective | Sets the high-level goal |
| constraint(String) | constraint | Adds a single hard constraint |
| constraints(List<String>) | constraints | Adds multiple hard constraints |
| softPreference(String) | preference | Adds a single soft preference |
| softPreferences(List<String>) | preferences | Adds multiple soft preferences |
| positiveExample(String, String) | input, output | Adds a good example |
| positiveExample(String, String, String) | input, output, explanation | Adds a good example with reasoning |
| negativeExample(String, String, String) | input, badOutput, whyBad | Adds a bad example with explanation |
| processStep(String) | step | Adds a single numbered process step |
| process(List<String>) | steps | Adds multiple process steps |
| outputFormat(OutputFormat) | format | Sets the response output format |
| stopCondition(String) | condition | Sets the completion/stop condition |
| verificationQuestions(int) | count | Number of verification questions (for CHAIN_OF_VERIFICATION) |
| refinementPasses(int) | passes | Number of refinement passes (for ITERATIVE_REFINEMENT) |
| confidenceThreshold(float) | threshold | Threshold for showing alternatives (for CONFIDENCE_WEIGHTED) |

OutputFormat Enum

This enum, nested as PromptEnhancer.OutputFormat, tells the LLM what format to use in its response. The framework appends the corresponding instruction to the system prompt so the model produces output in the desired structure.

| Value | Instruction |
|---|---|
| TEXT | Provide your response as plain text. |
| JSON | Provide your response as valid JSON. |
| MARKDOWN | Format your response using Markdown. |
| MARKDOWN_TABLE | Format your response as a Markdown table. |
| BULLET_POINTS | Use bullet points for your response. |
| NUMBERED_LIST | Use a numbered list for your response. |
| STRUCTURED | Use clear sections with headers for your response. |
| CODE | Provide code with comments explaining each section. |
| COMPARISON_TABLE | Create a comparison table with pros/cons. |
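
For example, selecting JSON output appends the matching instruction to the generated system prompt (a sketch reusing the builder API shown earlier):

```java
PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.CHAIN_OF_THOUGHT)
    .outputFormat(OutputFormat.JSON)
    .build();

EnhancedPrompt prompt = enhancer.enhance("List three causes of inflation");
// The generated system prompt now includes "Provide your response as valid JSON."
```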

Example Record

When using the FEW_SHOT_EXAMPLES strategy, you teach the LLM by showing it input/output pairs. The Example record holds one such pair, with an optional explanation of why the output is correct (or incorrect for negative examples).

// record Example(String input, String output, String explanation)
new PromptEnhancer.Example("2x = 10", "x = 5", "Divided both sides by 2");
new PromptEnhancer.Example("2x = 10", "x = 5");  // explanation is optional (null)

Enhancer Instance Methods

Once built, the PromptEnhancer instance provides these methods. The main one is enhance(), which takes a user message and returns a fully assembled EnhancedPrompt.

| Method | Return Type | Description |
|---|---|---|
| enhance(String userMessage) | EnhancedPrompt | Applies all configured strategies and produces the enhanced prompt |
| getStrategies() | List<PromptStrategy> | Returns the configured strategies |
| requiresMultiPass() | boolean | true if any strategy is multi-pass |
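
A short sketch of these instance methods, assuming an `enhancer` built as in the examples above:

```java
EnhancedPrompt prompt = enhancer.enhance("Summarize the quarterly report");

System.out.println(enhancer.getStrategies());  // the configured strategies

if (enhancer.requiresMultiPass()) {
    // e.g., CHAIN_OF_VERIFICATION or ITERATIVE_REFINEMENT is configured,
    // so plan for multiple LLM calls
}
```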

EnhancedPrompt

EnhancedPrompt is what you get after calling enhance(). It holds the assembled system prompt (with all strategy instructions baked in), the original user message, and metadata about which strategies are active. Pass it directly to your LLMClient to make the enhanced call.

Methods

These methods let you access the prompt content and check which capabilities the enhanced prompt expects from the LLM response.

| Method | Return Type | Description |
|---|---|---|
| getSystemPrompt() | String | The full system prompt with all strategy instructions |
| getUserMessage() | String | The original user message |
| getSystemPromptOptional() | Optional<String> | System prompt wrapped in an Optional for LLMClient convenience |
| getStrategies() | List<PromptStrategy> | List of applied strategies |
| getOutputFormat() | Optional<OutputFormat> | The output format, if specified |
| requiresMultiPass() | boolean | true if any applied strategy is multi-pass |
| requiresPostProcessing() | boolean | true if any strategy needs post-processing |
| expectsConfidenceScore() | boolean | true if CONFIDENCE_WEIGHTED is applied |
| expectsStructuredThinking() | boolean | true if STRUCTURED_THINKING is applied |
| expectsVerification() | boolean | true if CHAIN_OF_VERIFICATION is applied |
| getCombinedPrompt() | String | System prompt + user message in a single string (for models without separate system prompt support) |
| getEstimatedOverhead() | int | Estimated additional tokens from enhancement (~4 chars/token) |
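
The ~4 chars/token heuristic noted for getEstimatedOverhead() can be sketched in isolation; this is an illustration of the stated heuristic, not the framework's actual implementation:

```java
public class OverheadDemo {
    // Assumed heuristic from the table above: roughly 4 characters per token
    static int estimateTokens(String text) {
        return text.length() / 4;
    }

    public static void main(String[] args) {
        String instruction = "Think step by step before giving your final answer.";
        System.out.println(estimateTokens(instruction));
    }
}
```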

Using EnhancedPrompt with LLMClient

Here is how to pass the enhanced prompt to your LLM client. Most providers accept a separate system prompt; for those that do not, use getCombinedPrompt() to get a single string.

EnhancedPrompt prompt = enhancer.enhance("What causes inflation?");

// With separate system prompt support
ChatResponse response = client.chat(
    prompt.getUserMessage(),
    prompt.getSystemPromptOptional(),
    Optional.empty(),
    Optional.empty()
);

// Without system prompt support
String combined = prompt.getCombinedPrompt();

Integration with AgentBuilder

You can wire prompt strategies directly into an agent through the AgentBuilder, so every message the agent processes is automatically enhanced. There are three approaches: adding individual strategies, adding a list of strategies, or providing a fully configured PromptEnhancer.

// Add individual strategies
Agent agent = AgentBuilder.create()
    .llm(llmClient)
    .role(myRole)
    .promptStrategy(PromptStrategy.CHAIN_OF_THOUGHT)
    .promptStrategy(PromptStrategy.CONFIDENCE_WEIGHTED)
    .build();

// Add multiple strategies at once
Agent agent = AgentBuilder.create()
    .llm(llmClient)
    .promptStrategies(List.of(
        PromptStrategy.STRUCTURED_THINKING,
        PromptStrategy.MULTI_PERSPECTIVE
    ))
    .build();

// Use a fully configured PromptEnhancer
PromptEnhancer enhancer = PromptEnhancer.builder()
    .role("Expert researcher")
    .objective("Provide accurate information")
    .strategy(PromptStrategy.CHAIN_OF_VERIFICATION)
    .constraint("Always cite sources")
    .positiveExample("Question", "Good answer", "Why it's good")
    .build();

Agent agent = AgentBuilder.create()
    .llm(llmClient)
    .promptEnhancer(enhancer)
    .build();

| AgentBuilder Method | Description |
|---|---|
| .promptStrategy(PromptStrategy) | Adds a single strategy (since 2.10.0) |
| .promptStrategies(List<PromptStrategy>) | Adds multiple strategies at once (since 2.10.0) |
| .promptEnhancer(PromptEnhancer) | Sets a fully configured enhancer (since 2.10.0) |

Code Examples

These examples show how to apply different strategies to real-world use cases. Each one demonstrates a different prompting technique suited to the task at hand.

Chain-of-Thought for Math

Chain-of-Thought prompting asks the model to show its step-by-step reasoning before giving a final answer. This significantly improves accuracy on math, logic, and multi-step analysis tasks.

PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.CHAIN_OF_THOUGHT)
    .role("Mathematics tutor")
    .outputFormat(OutputFormat.STRUCTURED)
    .build();

EnhancedPrompt prompt = enhancer.enhance(
    "A train travels 120km in 2 hours. It then travels 180km in 3 hours. " +
    "What is its average speed for the entire journey?"
);

Chain-of-Verification for Fact Checking

Chain-of-Verification (CoVe) makes the model generate an initial answer, then create verification questions to check its own claims, and finally refine the answer based on what it finds. This is a multi-pass strategy that dramatically reduces factual errors.

PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.CHAIN_OF_VERIFICATION)
    .verificationQuestions(5)
    .constraint("Each claim must be independently verifiable")
    .build();

EnhancedPrompt prompt = enhancer.enhance("What were the causes of World War I?");

if (prompt.requiresMultiPass()) {
    // Handle multi-pass verification flow
}

Atom-of-Thought for Complex Reasoning

Atom-of-Thought decomposes a complex problem into independent "atoms" that can be solved in parallel, then synthesizes the results. Unlike Chain-of-Thought, errors in one atom do not cascade to others. This works best with large models (70B+ parameters) and is combined here with confidence scoring.

PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.ATOM_OF_THOUGHT)
    .strategy(PromptStrategy.CONFIDENCE_WEIGHTED)
    .confidenceThreshold(0.7f)
    .objective("Analyze system architecture tradeoffs")
    .build();

EnhancedPrompt prompt = enhancer.enhance(
    "Compare microservices vs monolith for a 10-person startup " +
    "building a real-time analytics platform"
);
// Estimated overhead: prompt.getEstimatedOverhead() tokens

Few-Shot with Examples

Few-shot prompting teaches the model by example. You provide a few input/output pairs (both good and bad), and the model learns the expected format and quality from them. This is especially effective for classification, formatting, and style-matching tasks.

PromptEnhancer enhancer = PromptEnhancer.builder()
    .strategy(PromptStrategy.FEW_SHOT_EXAMPLES)
    .positiveExample(
        "The food was great",
        "Sentiment: POSITIVE (0.95)",
        "Clear positive language"
    )
    .positiveExample(
        "Terrible service, never again",
        "Sentiment: NEGATIVE (0.98)",
        "Strong negative indicators"
    )
    .negativeExample(
        "The food was great",
        "positive",
        "Missing confidence score and proper format"
    )
    .outputFormat(OutputFormat.TEXT)
    .build();

EnhancedPrompt prompt = enhancer.enhance("The product works but could be better");
