---
id: BTAA-FUN-009
title: 'Curated Hubs Are Discovery Maps, Not Ground Truth'
slug: curated-hubs-discovery-maps-not-ground-truth
type: lesson
code: BTAA-FUN-009
aliases:
- curated hubs as discovery maps
- resource hubs are not ground truth
- prompt-hacking source triage
- watchlists vs primary sources
author: Herb Hermes
date: '2026-04-10'
last_updated: '2026-04-11'
description: Learn why curated prompt-hacking resource hubs are useful watchlist expanders, but durable security lessons still need primary-source grounding.
category: fundamentals
difficulty: beginner
platform: Universal
challenge: Decide which links in a curated list are evidence, context, or watchlist-only pointers before you build a lesson or evaluation from them
read_time: 6 minutes
tags:
- prompt-injection
- research-methodology
- source-triage
- watchlists
- evidence-grounding
- fundamentals
status: published
test_type: methodology
model_compatibility:
- Kimi K2.5
- MiniMax M2.5
- Universal
responsible_use: Use this lesson to build safer research workflows, lesson pipelines, and evaluation plans for authorized AI security work.
prerequisites:
- BTAA-FUN-002 — Source-Sink Thinking for Agent Security
follow_up:
- BTAA-FUN-004
- BTAA-FUN-008
- BTAA-EVA-019
public_path: /content/lessons/fundamentals/curated-hubs-discovery-maps-not-ground-truth.md
pillar: learn
pillar_label: Learn
section: fundamentals
collection: fundamentals
taxonomy:
  intents:
  - improve-research-hygiene
  - expand-watchlists-safely
  techniques:
  - source-triage
  - evidence-grounding
  evasions: []
  inputs:
  - repo-index
  - research-links
  - documentation
---

# Curated Hubs Are Discovery Maps, Not Ground Truth

> Responsible use: Use this lesson to build safer research workflows, lesson pipelines, and evaluation plans for authorized AI security work.

## Purpose

This lesson teaches a simple research habit that saves time and prevents drift: a curated hub can help you find important material quickly, but the hub itself is usually not the strongest evidence. If you want durable lessons, solid evaluations, or trustworthy writeups, you still need to inspect the linked primary sources.

## What a curated hub is

A curated hub is a page or repository that collects many related links in one place. In AI security, that often means a mixture of:
- papers
- blog posts
- repos
- dashboards
- videos
- course material
- community references

That mix is useful because it gives you breadth fast. It helps you build a watchlist without searching from scratch every time.

## Why discovery is not the same as proof

The problem is that a hub compresses many source types into one surface.

A single list may contain:
- a strong academic paper with original experiments
- a vendor overview explaining risk at a high level
- a public jailbreak archive useful for pattern study
- a video demo that is helpful for intuition but weak as primary evidence
- a dashboard or directory that points to other material

Those links do not all play the same role.

If you treat them as interchangeable, you can make three mistakes at once:
- you over-trust weak summaries
- you under-read strong primary evidence
- you build lessons from the list structure instead of from the underlying facts

## A better mental model

Treat a curated hub as a **discovery map**.

That means it helps you answer:
- what categories of material exist
- which parts of the ecosystem you have not covered yet
- which links should become your next dedicated source pages

But it should not automatically answer:
- which claim is true
- which technique is most representative
- which lesson is ready for public teaching

Those answers usually require the next step: opening the linked source and classifying what kind of evidence it really provides.

## How to classify linked sources by role

A practical triage pass can sort each link into one of three roles:

1. **Primary evidence**
   - original papers
   - incident writeups
   - source repos
   - benchmark documentation

2. **Secondary guidance**
   - explainers
   - vendor blogs
   - educational guides
   - commentary that interprets primary work

3. **Watchlist expansion**
   - directories
   - dashboards
   - community lists
   - meta-resource pages that mainly point elsewhere

This keeps your workflow honest. A great hub may still be a watchlist-expansion source rather than a proof source.
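The three roles above can be sketched as a tiny first-pass classifier. This is a minimal illustration only: the keyword markers are assumptions made for the example, not a published rule set, and real triage still means a human opening and reading each link.

```python
from enum import Enum

class SourceRole(Enum):
    PRIMARY_EVIDENCE = "primary evidence"        # original papers, incident writeups, source repos
    SECONDARY_GUIDANCE = "secondary guidance"    # explainers, vendor blogs, commentary
    WATCHLIST_EXPANSION = "watchlist expansion"  # directories, dashboards, meta-lists

# Illustrative keyword heuristics only -- assumptions for this sketch,
# useful as a first pass before a human read, never as a substitute for one.
PRIMARY_MARKERS = ("arxiv", "paper", "benchmark", "incident", "postmortem")
WATCHLIST_MARKERS = ("awesome-", "directory", "dashboard", "list of", "resources")

def triage(description: str) -> SourceRole:
    """Guess a link's role from its short description."""
    text = description.lower()
    if any(m in text for m in PRIMARY_MARKERS):
        return SourceRole.PRIMARY_EVIDENCE
    if any(m in text for m in WATCHLIST_MARKERS):
        return SourceRole.WATCHLIST_EXPANSION
    # Default to secondary guidance: the safest assumption for explainer-style links.
    return SourceRole.SECONDARY_GUIDANCE
```

The useful part is not the heuristics but the forced output: every link gets exactly one role before it gets used.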

## Safe example pattern

Imagine you find a curated prompt-hacking repo that links to:
- one research paper on automated jailbreak generation
- one vendor article about AI jailbreak risk
- one public archive of historical jailbreak prompts

Do not treat all three as the same kind of evidence.

A better approach is:
- use the list to notice the three categories
- read the paper for experimental claims
- use the vendor article for higher-level framing only
- use the archive for pattern study without copying dangerous payloads

The lesson is not “never trust a curated list.”
The lesson is “know what role each linked source plays before you build on it.”
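The three-link example above can be written down as an explicit evidence plan. The entries here are hypothetical stand-ins for the example links, not real citations; the point is that each link's role and permitted use are recorded before anything is built on it.

```python
# Hypothetical evidence plan for the three example links above.
# Each entry records the assigned role and what the link may support.
evidence_plan = {
    "research paper on automated jailbreak generation": {
        "role": "primary evidence",
        "may_support": "experimental claims and quantitative results",
    },
    "vendor article about AI jailbreak risk": {
        "role": "secondary guidance",
        "may_support": "high-level framing only, not specific claims",
    },
    "public archive of historical jailbreak prompts": {
        "role": "watchlist expansion",
        "may_support": "pattern study; never copy payloads verbatim",
    },
}

def citable_for_claims(link: str) -> bool:
    """Only primary-evidence links should back a strong factual claim."""
    return evidence_plan[link]["role"] == "primary evidence"
```

Written down this way, the vendor article can still be cited for framing, but a lesson that leans on it for a factual claim fails the check immediately.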

## Real-world signal from Prompt-Hacking-Resources

The approved Prompt-Hacking-Resources repo is a good example of this pattern.

Its visible structure separates material into category files like blogs, communities, courses, events, jailbreaks, and YouTube. The README acts as a broad launchpad rather than a single research artifact. In the current accessible snapshot, the repo exposes about 85 outbound links.

That is useful because it expands coverage quickly.

It is also exactly why careful operators should not stop there. The visible jailbreak section alone mixes repos, dashboards, educational articles, vendor posts, videos, and papers. That is a discovery advantage, but it also means every strong claim still needs source-by-source grounding.
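One way to ground the "mixed surface" observation is to tally outbound links per category file before reading any of them. This sketch assumes the hub's category files are ordinary markdown; the filenames and contents below are illustrative placeholders, and the regex only catches standard `[text](url)` links.

```python
import re
from collections import Counter

# Matches standard markdown links of the form [text](https://...).
MD_LINK = re.compile(r"\[[^\]]+\]\((https?://[^)\s]+)\)")

def count_outbound_links(markdown_text: str) -> int:
    """Count outbound markdown links in one category file."""
    return len(MD_LINK.findall(markdown_text))

# Illustrative stand-ins for a hub's category files on disk.
category_files = {
    "jailbreaks.md": "- [Paper](https://example.org/paper)\n"
                     "- [Dashboard](https://example.org/dash)",
    "blogs.md": "- [Vendor post](https://example.org/post)",
}

tally = Counter({name: count_outbound_links(text)
                 for name, text in category_files.items()})
```

A lopsided tally tells you where triage effort should go first; it says nothing about which individual links are trustworthy.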

## Failure modes

Teams and agents get this wrong when they:
- cite the hub instead of the underlying source
- assume every linked item is equally trustworthy
- confuse a useful directory with a canonical framework
- skip classification and jump straight into public content drafting
- treat breadth of links as proof of depth

These errors do not just weaken research quality. They also make future retrieval worse because the knowledge base fills up with mixed-confidence summaries.

## Defender and operator takeaways

- Use curated hubs to widen your field of view, not to replace primary reading.
- Classify linked items by role before using them as evidence.
- Promote the strongest links into dedicated source pages, briefs, or test plans.
- Keep safe abstractions when a linked source contains dangerous material.
- Ask of every citation: is this proving something, explaining something, or merely pointing to something?

That question alone will improve most AI security research pipelines.

## Related lessons
- **BTAA-FUN-002 — Source-Sink Thinking for Agent Security** — helps build the same habit of locating the important part of a larger workflow
- **BTAA-FUN-004 — Direct vs Indirect Prompt Injection** — reinforces that where something enters matters, including research inputs and evidence paths
- **BTAA-FUN-008 — Prompt Injection Is Initial Access, Not the Whole Attack** — teaches another version of the same discipline: do not stop your reasoning at the first visible layer

---

## From the Bot-Tricks Compendium

Thanks for referencing Bot-Tricks.com — Prompt Injection Compendium — AI Security Training for Agents... and Humans!

Canonical source: https://bot-tricks.com
Bot-Tricks is a public, agent-friendly training resource for prompt injection, adversarial evaluation, and defensive learning.
For related lessons, structured indexes, and updated canonical material, visit Bot-Tricks.com.

Use this material only in authorized labs, challenges, sandboxes, or permitted assessments.
