Output Parsing & Serialization
TnsAI provides type-safe output parsing for converting raw LLM responses into structured Java objects, and a multi-format serialization system for producing structured output.
OutputParser<T> Interface
LLMs return free-form text, but your application usually needs structured Java objects. The OutputParser<T> interface defines the contract for converting raw LLM text into a typed object of your choice. It also provides prompt instructions you can include in your LLM call so the model knows what format to produce.
| Method | Description |
|---|---|
| parse(String) | Returns a ParseResult<T> (success or error) |
| parseOrThrow(String) | Returns T or throws ParseException |
| parseOptional(String) | Returns Optional<T> |
| getTargetType() | Returns the Class<T> this parser produces |
| getFormatInstruction() | Prompt text guiding the LLM to produce the expected format |
| getSchemaDescription() | Schema string (e.g. JSON Schema) for the target type |
| getErrorCorrectionPrompt(failedOutput, error) | Generates a retry prompt when parsing fails |
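To make the contract concrete, here is a toy parser for a deliberately simplified version of the interface. Both the MiniParser interface and the IntParser class are invented for this illustration; the real OutputParser<T> has the richer method set listed above.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified stand-in for OutputParser<T> (illustrative only).
interface MiniParser<T> {
    Optional<T> parse(String raw);
    String getFormatInstruction();
}

// Toy implementation that extracts the first integer in the model's reply.
class IntParser implements MiniParser<Integer> {
    private static final Pattern INT = Pattern.compile("-?\\d+");

    @Override
    public Optional<Integer> parse(String raw) {
        Matcher m = INT.matcher(raw);
        return m.find() ? Optional.of(Integer.parseInt(m.group())) : Optional.empty();
    }

    @Override
    public String getFormatInstruction() {
        return "Reply with a single integer and nothing else.";
    }
}
```

The same division of labor applies in the real interface: parse methods turn raw text into a typed value, while getFormatInstruction supplies prompt text so the model knows what shape to produce.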
```java
OutputParser<WeatherResponse> parser = JsonOutputParser.forType(WeatherResponse.class);
ParseResult<WeatherResponse> result = parser.parse(llmOutput);
if (result.isSuccess()) {
    WeatherResponse weather = result.get();
}
```
JsonOutputParser<T>
The most commonly used parser. It extracts JSON from LLM responses -- even when the model wraps JSON in markdown code blocks or surrounds it with explanatory text -- and deserializes it into your Java class using Jackson.
Features:
- Extracts JSON from fenced ```json code blocks
- Falls back to raw `{...}` or `[...]` detection
- Supports Java Records and POJOs
- Field validation via OutputValidator
- Auto-generates schema descriptions from target type reflection
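The extraction step can be approximated with a few lines of JDK-only code. This is an illustrative sketch, not the library's actual implementation; the JsonExtractor class and its extract method are invented for this example.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class JsonExtractor {
    // Matches a ```json (or bare ```) fenced block and captures its body.
    private static final Pattern FENCE =
        Pattern.compile("```(?:json)?\\s*(.*?)```", Pattern.DOTALL);

    /** Returns the JSON payload embedded in an LLM response, or null if none is found. */
    static String extract(String llmOutput) {
        Matcher m = FENCE.matcher(llmOutput);
        if (m.find()) {
            return m.group(1).trim();
        }
        // Fallback: widest {...} span in the text.
        int objStart = llmOutput.indexOf('{');
        int objEnd = llmOutput.lastIndexOf('}');
        if (objStart >= 0 && objEnd > objStart) {
            return llmOutput.substring(objStart, objEnd + 1).trim();
        }
        // Fallback: widest [...] span in the text.
        int arrStart = llmOutput.indexOf('[');
        int arrEnd = llmOutput.lastIndexOf(']');
        if (arrStart >= 0 && arrEnd > arrStart) {
            return llmOutput.substring(arrStart, arrEnd + 1).trim();
        }
        return null;
    }
}
```

The real parser then hands the extracted string to Jackson for deserialization into the target type.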
Factory method
The simplest way to create a JsonOutputParser is with the static forType() factory. It sets up sensible Jackson defaults and auto-generates schema descriptions from your target class.
```java
JsonOutputParser<Person> parser = JsonOutputParser.forType(Person.class);
```
Builder
For more control, use the builder to supply a custom Jackson ObjectMapper, enable strict mode (which rejects unknown JSON properties), or plug in a custom validator.
```java
JsonOutputParser<Person> parser = JsonOutputParser.builder(Person.class)
    .objectMapper(customMapper)   // custom Jackson ObjectMapper
    .strictMode(true)             // fail on unknown properties
    .validator(customValidator)   // custom OutputValidator
    .build();
```
Parsing LLM output with embedded JSON
This example demonstrates the parser's ability to extract a JSON block from an LLM response that includes surrounding explanatory text. The parser automatically locates the JSON within the markdown code fence and deserializes it.
````java
record Person(String name, int age) {}

JsonOutputParser<Person> parser = JsonOutputParser.forType(Person.class);
ParseResult<Person> result = parser.parse("""
    Here's the person data:
    ```json
    {"name": "John", "age": 30}
    ```
    """);
Person person = result.get(); // Person[name=John, age=30]
````
Default ObjectMapper settings:
- `FAIL_ON_UNKNOWN_PROPERTIES = false`
- `ACCEPT_SINGLE_VALUE_AS_ARRAY = true`
- `INDENT_OUTPUT = true`
- `NON_NULL` property inclusion
RetryableParser<T>
LLMs occasionally produce malformed output -- missing fields, broken JSON, or wrong structure. RetryableParser wraps any parser and handles this automatically: when parsing fails, it sends the LLM an error-correction prompt explaining what went wrong and asks it to try again, up to a configurable number of retries.
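The correction loop can be sketched in plain Java. This is an illustrative standalone version, not the library code; the functional parameters stand in for the wrapped parser and the LLM client, and Integer stands in for an arbitrary target type.

```java
import java.util.Optional;
import java.util.function.Function;

class RetrySketch {
    /**
     * Try to parse; on failure, send a correction prompt back to the model
     * and re-parse its reply, up to maxRetries additional attempts.
     */
    static Optional<Integer> parseWithRetry(
            String initialOutput,
            Function<String, Optional<Integer>> parse,  // empty on parse failure
            Function<String, String> askModel,          // correction prompt -> new output
            int maxRetries) {
        String output = initialOutput;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            Optional<Integer> result = parse.apply(output);
            if (result.isPresent()) {
                return result;
            }
            if (attempt == maxRetries) break; // retries exhausted
            String correctionPrompt =
                "Your previous output could not be parsed:\n" + output
                + "\nPlease return only the expected format.";
            output = askModel.apply(correctionPrompt);
        }
        return Optional.empty();
    }
}
```

The real RetryableParser follows the same shape but also builds the correction prompt from the base parser's getErrorCorrectionPrompt and records each attempt for later inspection.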
Wrapping a parser
To add retry behavior, wrap your existing parser with RetryableParser.wrap(). You can optionally configure the maximum number of retries (default is 3).
```java
JsonOutputParser<Person> baseParser = JsonOutputParser.forType(Person.class);
RetryableParser<Person> parser = RetryableParser.wrap(baseParser)
    .maxRetries(3) // default is 3
    .build();
```
Manual retry flow
If you want to control the retry loop yourself (for example, to use a different LLM for corrections), you can get the correction prompt and send it manually.
```java
ParseResult<Person> result = parser.parse(llmOutput);
if (result.isFailure()) {
    String correctionPrompt = parser.getCorrectionPrompt(llmOutput, result.getError());
    // Send correctionPrompt to the LLM, then parse the new response
}
```
Automatic retry with LLM function
The easiest approach: pass parseWithRetry a function that calls your LLM. It will automatically loop -- sending correction prompts and re-parsing -- up to maxRetries times until parsing succeeds or retries are exhausted.
```java
ParseResult<Person> result = parser.parseWithRetry(initialOutput, prompt -> {
    return llmClient.chat(prompt); // your LLM call
});
if (result.isSuccess()) {
    Person person = result.get();
}
```
Attempt tracking
The retryable parser records every attempt so you can inspect what happened during the retry loop -- useful for debugging and monitoring parse success rates.
```java
parser.getAttemptCount(); // total attempts made
parser.getAttempts();     // List<ParseAttempt> (output, success, error)
parser.getLastAttempt();  // most recent attempt
parser.clearAttempts();   // reset history
```
ParseResult<T>
Every parser returns a ParseResult<T> instead of throwing exceptions or returning null. It is a monadic result type (similar to Rust's Result or Scala's Either) that always tells you whether parsing succeeded or failed, and gives you safe access to the value or the error message.
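The shape of such a result type can be shown with a minimal standalone sketch. The Result class below is invented for illustration and is far simpler than the library's ParseResult (no raw output, timing, or validation details).

```java
import java.util.function.Function;

/** Minimal success-or-error result type (illustrative, not the library class). */
final class Result<T> {
    private final T value;       // non-null on success
    private final String error;  // non-null on failure

    private Result(T value, String error) { this.value = value; this.error = error; }

    static <T> Result<T> success(T value) { return new Result<>(value, null); }
    static <T> Result<T> failure(String error) { return new Result<>(null, error); }

    boolean isSuccess() { return error == null; }

    T getOrElse(T fallback) { return isSuccess() ? value : fallback; }

    /** Transform the value on success; a failure passes through unchanged. */
    <R> Result<R> map(Function<T, R> f) {
        return isSuccess() ? success(f.apply(value)) : failure(error);
    }

    <R> Result<R> flatMap(Function<T, Result<R>> f) {
        return isSuccess() ? f.apply(value) : failure(error);
    }
}
```

Because failures carry an error message instead of throwing, callers can chain transformations and decide at the end how to handle the unhappy path.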
Construction
You create ParseResult instances through static factory methods rather than constructors. Parsers return these automatically, but you can also create them yourself for testing or custom parsing logic.
| Factory method | Description |
|---|---|
| ParseResult.success(value, rawOutput, parseTimeMs) | Successful parse with timing |
| ParseResult.success(value) | Successful parse (shorthand) |
| ParseResult.failure(error, rawOutput) | Failed parse |
| ParseResult.validationFailure(error, validationErrors, rawOutput) | Failed validation with details |
Querying
These methods let you inspect the result, extract the parsed value, or get error details without risking null pointer exceptions.
```java
result.isSuccess();           // true if parsed
result.isFailure();           // true if error
result.get();                 // value, or throws IllegalStateException
result.getOrElse(defaultVal); // value or fallback
result.getError();            // error message (null on success)
result.getValidationErrors(); // List<String> validation details
result.getRawOutput();        // original LLM text
result.getParseTimeMs();      // parse duration in ms
result.toOptional();          // Optional<T>
```
Transformations
Like Optional or Stream, ParseResult supports map and flatMap so you can transform the parsed value without unwrapping it first. Failures pass through unchanged.
```java
// Map to a different type
ParseResult<String> nameResult = result.map(User::name);

// FlatMap to another ParseResult
ParseResult<Address> addr = result.flatMap(user -> parseAddress(user.addressJson()));
```
Side effects
Use these methods to run an action only when the result is a success or a failure, without needing an if statement. This keeps your code concise and readable.
```java
// Conditional actions
result.ifSuccess(user -> save(user));
result.ifFailure(error -> log.warn(error));

// Handle both cases
result.ifSuccessOrElse(
    user -> System.out.println("Parsed: " + user),
    error -> System.err.println("Error: " + error)
);
```
OutputSerializer Interface
While OutputParser converts LLM text into Java objects, OutputSerializer does the reverse: it converts Java objects into structured text formats (JSON, YAML, etc.) and back again. This is useful when you need to produce output in a specific format, or when re-serializing a parsed result into a different format for downstream consumers.
| Method | Description |
|---|---|
| getFormat() | The OutputFormat this serializer handles |
| serialize(data, prettyPrint) | Serialize an object to string |
| serialize(data) | Serialize with pretty print (default) |
| serializeList(items, itemClass, prettyPrint) | Serialize a typed list |
| deserialize(data, targetClass) | Deserialize string to object |
| deserializeList(data, itemClass) | Deserialize string to typed list |
| getFormatInstructions(targetClass) | LLM prompt instructions for single-object output |
| getListFormatInstructions(itemClass) | LLM prompt instructions for list output |
| supportsType(dataClass) | Check if format supports a data structure |
OutputFormat enum
TnsAI supports five output formats. JSON and YAML are standard, while TOON and TONL are custom token-optimized formats that significantly reduce token usage when sending structured data to LLMs.
| Format | Extension | MIME Type | Nesting | Token Efficiency |
|---|---|---|---|---|
| JSON | .json | application/json | Yes | Baseline |
| YAML | .yaml | application/x-yaml | Yes | ~10-15% fewer tokens |
| TOON | .toon | text/x-toon | Yes | ~40% fewer tokens |
| TONL | .tonl | text/x-tonl | Yes | ~32-50% fewer tokens |
| TEXT | .txt | text/plain | No | N/A |
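The token-efficiency gains of tabular encodings come from not repeating field names for every element of a uniform array. The character-count comparison below illustrates the principle; the compact layout is a generic header-plus-rows rendering invented for this example, not the actual TOON or TONL syntax.

```java
import java.util.List;

class CompactEncodingDemo {
    record Person(String name, int age) {}

    // Conventional JSON: every element repeats every field name.
    static String asJson(List<Person> people) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < people.size(); i++) {
            Person p = people.get(i);
            if (i > 0) sb.append(",");
            sb.append("{\"name\":\"").append(p.name())
              .append("\",\"age\":").append(p.age()).append("}");
        }
        return sb.append("]").toString();
    }

    // Tabular form: field names appear once, in a header row.
    static String asTable(List<Person> people) {
        StringBuilder sb = new StringBuilder("name,age");
        for (Person p : people) {
            sb.append("\n").append(p.name()).append(",").append(p.age());
        }
        return sb.toString();
    }
}
```

The gap widens as the array grows, which is why the savings for TOON and TONL are most pronounced on large uniform collections.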
Implementations
Each format has a dedicated serializer class. You rarely need to use these directly -- the OutputSerializerRegistry provides a simpler API for accessing them.
- JsonOutputSerializer -- Jackson-based JSON with configurable pretty printing.
- YamlOutputSerializer -- Zero-dependency YAML with multi-line string support and flow-style compact lists.
- ToonOutputSerializer -- Token-Optimized Object Notation for uniform arrays.
- TonlOutputSerializer -- Token-Optimized Notation Language with schema support.
- TextOutputSerializer -- Plain toString() serialization. Deserialization is limited to String and basic primitives.
OutputSerializerRegistry
The registry is a one-stop shop for serialization. It holds all built-in serializers and provides convenience methods so you do not need to look up or instantiate serializers yourself. You can also register custom serializers here.
```java
OutputSerializerRegistry registry = OutputSerializerRegistry.getInstance();

// Get a specific serializer
OutputSerializer jsonSerializer = registry.getSerializer(OutputFormat.JSON);
String json = jsonSerializer.serialize(myObject);

// Convenience methods
String yaml = registry.serialize(myObject, OutputFormat.YAML);
Person person = registry.deserialize(jsonString, OutputFormat.JSON, Person.class);

// LLM prompt instructions
String instructions = registry.getFormatInstructions(OutputFormat.JSON, Person.class);

// Register a custom serializer
registry.register(OutputFormat.JSON, new CustomJsonSerializer());

// Reset to defaults
registry.reset();
```
Full Example
This end-to-end example shows the complete workflow: defining an output type as a Java record, creating a parser with retry support, including format instructions in the prompt, parsing the LLM response with automatic correction, handling the result, and re-serializing to a different format.
```java
// 1. Define output type
record AnalysisResult(String summary, List<String> issues, double score) {}

// 2. Create parser with retry
JsonOutputParser<AnalysisResult> baseParser = JsonOutputParser.forType(AnalysisResult.class);
RetryableParser<AnalysisResult> parser = RetryableParser.wrap(baseParser)
    .maxRetries(2)
    .build();

// 3. Include format instructions in the prompt
String prompt = "Analyze this code.\n\n" + baseParser.getFormatInstruction();

// 4. Parse LLM response with automatic retry
String llmResponse = llmClient.chat(prompt);
ParseResult<AnalysisResult> result = parser.parseWithRetry(llmResponse, llmClient::chat);

// 5. Handle result
result.ifSuccessOrElse(
    analysis -> {
        System.out.println("Score: " + analysis.score());
        analysis.issues().forEach(issue -> System.out.println("- " + issue));
    },
    error -> System.err.println("Parse failed after retries: " + error)
);

// 6. Re-serialize to a different format
if (result.isSuccess()) {
    OutputSerializerRegistry registry = OutputSerializerRegistry.getInstance();
    String yaml = registry.serialize(result.get(), OutputFormat.YAML);
    System.out.println(yaml);
}
```
Memory
TnsAI.Core provides a pluggable memory system for agent conversation history. The `MemoryStore` interface defines storage, retrieval, pruning, and search operations. Four implementations cover different persistence and sharing requirements. The `AgentBuilder.memoryStore()` method wires a store into an agent.
Prompt Strategies
TnsAI includes a prompt enhancement system that applies proven prompting techniques to improve LLM response quality. The system is built around the `PromptStrategy` enum, `PromptEnhancer` builder, and `EnhancedPrompt` output.