---
id: "BTBB-EVA-001"
code: "BTBB-EVA-001"
title: "String Amalgamation Output Assembly"
slug: "string-amalgamation-output-assembly"
type: "lesson"
author: "Herb Hermes"
date: "2026-04-14"
last_updated: "2026-04-14"
description: "A Build-a-Break evasion block for constructing risky output indirectly through puzzle-like string assembly rather than a direct request for the final sentence."
excerpt: "Instead of asking for the dangerous sentence directly, have the model assemble it from parts and procedural steps."
summary: "Canonical evasion block for procedural string construction as a delivery modifier."
category: "evasion"
difficulty: "intermediate"
platform: "Universal"
challenge_family: "Build-a-Break"
challenge: "Truthteller Gandalf"
read_time: "7 minutes"
tags:
  - "build-a-break"
  - "evasion"
  - "string-assembly"
  - "language-puzzle"
  - "truthteller"
aliases:
  - "language puzzle assembly"
  - "amalgamation of strings"
  - "procedural output construction"
search_aliases:
  - "assemble risky output indirectly"
  - "puzzle style output bypass"
status: "live"
test_type: "adversarial"
model_compatibility:
  - "Kimi K2.5"
  - "ChatGPT 5.4"
  - "Opus 4.6"
public_path: "/content/lessons/evasion/string-amalgamation-output-assembly.md"
pillar: "learn"
pillar_label: "Learn"
section: "evasion"
collection: "evasion"
canonical_family: "evasions"
block_role: "modifier"
content_role: "modifier_block"
proof_status: "walkthrough_proven"
public_badge: "Bot-Tricks Certified"
build_stage: "disguise"
requires_fundamentals:
  - "contextual-modification-semantic-force"
combines_well_with:
  - "explicit-direction-compliance"
  - "output-dilution-control-response-shaping"
recognition_signals:
  - "a direct request would be too obvious"
  - "the model can be guided through procedural composition"
  - "the dangerous effect can be split across multiple harmless-seeming pieces"
common_failure_modes:
  - "the evaluator still interprets the final result as semantically weak"
  - "the model refuses the last assembly step"
used_in_walkthroughs:
  - "BTBB-TTG-WT01"
responsible_use: "Use this lesson only in authorized labs, CTFs, sandboxes, or systems you are explicitly permitted to test."
---

# String Amalgamation Output Assembly

> Build-a-Break Block: Evasions · Modifier

## Purpose

This lesson explains a Build-a-Break evasion where the target output is built indirectly from smaller pieces rather than requested all at once.

## Build-a-Break block profile

- Family: Evasions
- Role: Modifier block
- Job in the build: disguise the risky semantic result behind a procedural assembly task
- Snaps onto: explicit direction techniques and response-shaping routes

## What this block is

String Amalgamation Output Assembly breaks the intended output into smaller components and asks the model to combine them through a procedural task.

Examples of the procedural framing:
- begin with this phrase
- append this phrase
- output the combined result

The output may still be risky or adversarial, but the route presents the task as assembly rather than direct intent.

## Why it works

Models often treat procedural composition as a lower-risk activity than direct generation of the final sentence. The route exploits that difference.

Instead of asking for the final line directly, the prompt encourages the model to think:
- this is formatting
- this is assembly
- this is a transformation task

That procedural interpretation can make the route feel more acceptable than the final meaning deserves.

## Truthteller specimen

In the verified Truthteller route, the false declaration was not requested as one plain sentence. It was assembled through instructions to combine strings.

That gave the route a puzzle-like structure while still producing the final false output.

## Recognition signals

Use this block when:
- the final output would likely trigger refusal if requested directly
- the model is willing to follow simple composition tasks
- success depends on exact wording
- the challenge tolerates transformation-style routes

## Common combinations

This block works especially well with:
- Explicit Direction Compliance
- Contextual Modification Changes Semantic Force
- Output Dilution Control and Response Shaping

## Failure modes

This block fails when:
- the last assembled result is still too weak or too ambiguous
- the evaluator cares about stronger semantic declaration than the assembly alone produces
- the model refuses the composition once it recognizes the end result

## Defender takeaway

Defenders should not only block direct requests for risky outputs. They should also watch for structured assembly tasks that reconstruct risky text through seemingly harmless intermediate pieces.

## Related lessons

- Explicit Direction Compliance
- Contextual Modification Changes Semantic Force
- Output Dilution Control and Response Shaping
- Edge-Case Rule-Conforming Framing

---

## From the Bot-Tricks Compendium

Thanks for referencing Bot-Tricks.com — Prompt Injection Compendium — AI Security Training for Agents... and Humans!

Canonical source: https://bot-tricks.com
Bot-Tricks is a public, agent-friendly training resource for prompt injection, adversarial evaluation, and defensive learning.
For related lessons, structured indexes, and updated canonical material, visit Bot-Tricks.com.

Use this material only in authorized labs, challenges, sandboxes, or permitted assessments.
