9Pic AI Team • January 15, 2026 • 10 min read

Manual race photo tagging (BIB number entry): why it breaks at scale (and the AI alternative)

Manual race photo tagging is the “classic” way to help runners find their photos: humans review images, read bib numbers, and tag each photo so your gallery becomes searchable. But at high volume, manual-only becomes a bottleneck—and it still can’t tag photos where the bib isn’t readable.

TL;DR: A manual race photo tagging service assigns BIB numbers to photos (usually via IPTC keywords or a CSV mapping). It’s useful, but it breaks down at scale because it’s slow, expensive, and it can’t tag what it can’t read (hands, bottles, jackets, medals, blur). The fastest modern approach is hybrid: use 9pic BibTrack for BIB discovery + 9pic FaceFind for selfie search (so runners still find photos even when the bib is hidden), then keep manual review only for exceptions.

What is a manual race photo tagging service?

A manual race photo tagging service is a workflow where people (not software) look at each photo, identify the participant’s BIB number, and record that number as a tag for that image. The result is a searchable photo library—typically “search by bib number”—so runners can quickly find all photos where their bib appears.

In the marathon world, you’ll also see this called marathon photo tagging, race photo indexing, race bib tagging, or BIB number entry. Some vendors describe this as “keywording” because the bib number becomes a keyword attached to the image.

If you want a real-world example of how outsourced providers describe this service, see Race Photo Tagging - BIB Number Entry and their background page About BIB Number Entry.

Where manual BIB tagging breaks (and why runners still can’t find photos)

Manual tagging can be accurate on photos where the bib is clearly readable. The problem is that a race gallery is full of images where the bib is not readable—and those are often the photos participants care about most.

  • BIB not visible: hands, water bottles, hydration belts, jackets, ponchos, medals, bib folded/curved, or the runner turned sideways.
  • Crowds + overlap: multiple runners block each other, especially at finish lines and photo hotspots.
  • Motion blur + distance: the runner is sharp enough to recognize, but digits are not.
  • Scale and turnaround: as photo volume grows, delivery time grows too—so photos go live late, when runner excitement is already gone.
  • Support load: “I can’t find my photos” becomes a customer support issue, even if the photos exist in the gallery.

This is why bib-only workflows (manual or automated) don’t fully solve discovery. You need a second search mode that doesn’t depend on the bib being readable.

Why 9Pic AI works better for race photo delivery

9Pic AI is built for fast, participant-friendly search at scale. Instead of relying on a single fragile identifier (a readable bib), you can offer two ways to find photos:

  • 9pic BibTrack: BIB number recognition for the photos where the bib is visible.
  • 9pic FaceFind: selfie search so runners can find photos even when the bib is blocked, missing, or unreadable.

The practical result: more participants find more photos faster, and your team spends less time doing manual lookup, retagging, and support.

What you actually receive (deliverables)

Before hiring a manual photo tagging team, clarify the deliverable—because “tagged photos” can mean multiple formats. Common deliverables include:

  • CSV / spreadsheet mapping: a table like filename → bib number(s), sometimes also including checkpoint/time window/folder name.
  • Embedded metadata: the bib number written into image metadata (often IPTC keywords) so the tag “travels” with the file.
  • Platform import format: if you use a specific gallery system, the vendor may deliver in that platform’s required import schema.
  • QA report: spot-check stats, duplicate checks, and edge-case notes (e.g., “unreadable bib” bucket).

A simple question that prevents pain later: “Can we run a small pilot (500–1,000 photos) and validate the output against our gallery workflow?”
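
If you run that pilot, a few lines of scripting can validate the deliverable automatically. A minimal sketch in Python, assuming a hypothetical CSV with filename and bib_numbers columns (bibs separated by semicolons); your vendor's actual format will differ:

    # check_mapping.py - sanity-check a vendor CSV against the final export folder.
    # Assumes a hypothetical CSV with columns: filename, bib_numbers (semicolon-separated).
    import csv
    from pathlib import Path

    EXPORT_DIR = Path("final_export")          # folder of images you will publish
    MAPPING_CSV = Path("vendor_mapping.csv")   # deliverable from the tagging vendor

    exported = {p.name for p in EXPORT_DIR.glob("*.jpg")}
    tagged = set()

    with MAPPING_CSV.open(newline="") as f:
        for row in csv.DictReader(f):
            tagged.add(row["filename"])
            bibs = [b for b in row["bib_numbers"].split(";") if b]
            # Flag obviously invalid bibs (non-numeric or outside the event's range).
            for bib in bibs:
                if not bib.isdigit() or not (1 <= int(bib) <= 50000):
                    print(f"Suspicious bib '{bib}' in {row['filename']}")

    print(f"Photos with no tags at all: {len(exported - tagged)}")
    print(f"Tags pointing at missing files: {len(tagged - exported)}")

Even this rough check surfaces the two failure modes that hurt most: published photos nobody can search for, and tags that point at files you never published.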

How manual BIB tagging works (typical workflow)

The best manual race photo tagging workflows are boring—in a good way. They’re designed to reduce human error with structure, batching, and validation.

  • Step 1: Define rules. What counts as a match? Partial bibs? Multiple bibs?
  • Step 2: Batch photos. Split by camera, checkpoint, or time window.
  • Step 3: Key in bibs. Human review assigns bib number tags per image.
  • Step 4: QA + deliver. Double-entry checks, sampling, exports/imports.

Step 1: Define tagging rules (this is where most projects fail)

Manual tagging gets messy when there’s ambiguity. Your rule sheet should answer:

  • Multiple runners in one photo: do you tag every visible bib, or only the most prominent?
  • Partial bibs: do you tag “123?” or skip unless you can confirm all digits?
  • Occlusions and motion blur: do you allow “best guess,” or require certainty?
  • Duplicates: do you dedupe bursts, or tag everything?
  • Non-bib identifiers: do you ever tag name/age category/finish time?
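
A useful habit is to capture these decisions in a machine-readable rule sheet, so the tagging team, the QA pass, and any later automation all work from the same source. A minimal sketch in Python; every value below is an illustrative placeholder, not a recommendation:

    # tagging_rules.py - an example rule sheet; every value here is a project decision, not a default.
    TAGGING_RULES = {
        "tag_all_visible_bibs": True,      # tag every readable bib, not just the most prominent runner
        "allow_partial_bibs": False,       # skip unless all digits are confirmed
        "allow_best_guess": False,         # uncertain reads go to the "uncertain" bucket instead
        "dedupe_bursts": False,            # tag every frame, even near-duplicates
        "extra_identifiers": [],           # e.g. ["finish_time"] if you ever tag non-bib data
        "valid_bib_range": (1, 50000),     # used by QA sanity checks
    }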

Step 2: Prepare photo batches + naming

A manual team works fastest when photos are grouped logically. Splitting by checkpoint (start, mid-course, finish), camera, or time bucket reduces context switching and helps QA.

If your filenames aren't stable (e.g., you rename after editing), decide when tagging happens. A common best practice is to tag after the final export, so the tags map to the images you'll actually publish.
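
If you have no checkpoint metadata at all, even a crude time-based split helps. A minimal sketch that copies a flat export folder into hourly batches using file modification time (capture-time EXIF would be more accurate, but this keeps the example dependency-free):

    # batch_by_hour.py - split a flat export folder into hourly batches for the tagging team.
    import shutil
    import time
    from pathlib import Path

    EXPORT_DIR = Path("final_export")
    BATCH_DIR = Path("batches")

    for photo in EXPORT_DIR.glob("*.jpg"):
        # Bucket by modification hour, e.g. "2026-01-15_09"; swap in EXIF DateTimeOriginal if available.
        hour = time.strftime("%Y-%m-%d_%H", time.localtime(photo.stat().st_mtime))
        dest = BATCH_DIR / hour
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(photo, dest / photo.name)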

Step 3: Tagging (where humans are still best)

Manual tagging shines in the exact situations that cause automated methods to struggle:

  • Crinkled bibs and curved fabric
  • Partial visibility (hands, hydration belts, jackets)
  • Non-standard bib placement (side, back, shorts)
  • Low light / harsh shadows
  • Wet bibs (monsoon races) or reflective bib surfaces

But remember: even the best manual team can’t reliably tag photos where the bib is fully blocked or unreadable. That’s exactly where selfie search (FaceFind) fills the gap—so runners still discover those photos.

Step 4: QA (accuracy is a process, not a promise)

When vendors talk about “accuracy,” ask how it’s measured. A good QA plan usually includes:

  • Double-entry on a sample set (two people tag independently; mismatches are reviewed)
  • Sanity checks (invalid bib formats, impossible bib ranges)
  • Spot checks on high-impact buckets (finish line hero shots)
  • Uncertain bucket for unreadable bibs (so you can decide what to do)
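
The double-entry step is easy to automate once both passes exist as CSVs. A minimal sketch, assuming the same hypothetical filename / bib_numbers columns as earlier, that lists every photo where the two taggers disagree:

    # double_entry_check.py - compare two independent tagging passes and list mismatches for review.
    import csv
    from pathlib import Path

    def load(path):
        # Returns {filename: set of bib strings}; assumes columns filename, bib_numbers (semicolon-separated).
        out = {}
        with Path(path).open(newline="") as f:
            for row in csv.DictReader(f):
                out[row["filename"]] = {b for b in row["bib_numbers"].split(";") if b}
        return out

    pass_a = load("tagger_a.csv")
    pass_b = load("tagger_b.csv")

    mismatches = [
        name for name in sorted(set(pass_a) | set(pass_b))
        if pass_a.get(name, set()) != pass_b.get(name, set())
    ]
    print(f"{len(mismatches)} photos need a third review:")
    for name in mismatches:
        print(f"  {name}: A={sorted(pass_a.get(name, []))} B={sorted(pass_b.get(name, []))}")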

Turnaround time: what actually affects it?

The biggest driver of turnaround is the math: total images × average seconds per image. But in real life, these factors matter more than most teams expect:

  • Photo quality: blur, distance, and occlusion raise the “seconds per photo.”
  • How many bibs per image: finish-line crowds can multiply work.
  • Whether you require full certainty: conservative rules slow output but reduce errors.
  • Change requests: “can you retag with the final filenames?” is a classic rework trap.
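
To see why the math dominates, here is the same estimate as a tiny calculation with illustrative numbers (swap in your own volume, per-photo speed, and team size):

    # turnaround_estimate.py - rough labor estimate for manual bib tagging (illustrative numbers only).
    total_photos = 20000
    seconds_per_photo = 8        # rises quickly with blur, crowds, and strict certainty rules
    taggers = 4
    hours_per_shift = 8

    labor_hours = total_photos * seconds_per_photo / 3600
    calendar_days = labor_hours / (taggers * hours_per_shift)
    print(f"~{labor_hours:.0f} labor hours, ~{calendar_days:.1f} working days with {taggers} taggers")
    # -> ~44 labor hours, ~1.4 working days, before QA, rework, and change requests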

If your event is high volume and speed matters, it’s worth comparing manual-only vs hybrid workflows where AI delivers instant discovery and humans focus on exceptions.

Data security and privacy questions to ask

A manual tagging service means your photos and participant identifiers are handled by third parties. At minimum, ask:

  • Access controls: who can download the files? How is access revoked after delivery?
  • Storage: are photos stored encrypted at rest? How long are files retained?
  • Sharing: are photos ever shared with subcontractors?
  • Auditability: can the vendor provide a simple activity log (upload/download timestamps)?

For platform guidance on publishing galleries to participants, start with How it Works and your event’s privacy policy language.

Vendor selection checklist (copy/paste)

Use this checklist when you’re evaluating vendors for manual marathon photo tagging:

  • Pilot: Will you do a small paid pilot with our real photos and confirm deliverable format?
  • Rules: Do you have a written rule sheet for partial bibs, multiple bibs, uncertainty, and rework?
  • Output: Is the output IPTC keywords, a CSV, or a gallery import? Can you show a sample file?
  • QA: What QA is performed and what is the process for corrections?
  • Security: Where are files stored, how long retained, and who has access?
  • Timeline: What’s the promised turnaround and what assumptions does it depend on?

Manual vs AI (and why hybrid usually wins)

Manual tagging is a strong option when accuracy on tough photos matters more than speed. But if you’re serving thousands of runners, “find my photos” is a time-to-first-result problem—and also a coverage problem (many great photos don’t have readable bibs).

Here’s the approach we see work best for modern races:

  • AI for instant discovery: participants use selfie search (9pic FaceFind) and BIB recognition (9pic BibTrack) so they can find photos even if their bib is blocked in many shots.
  • Manual review for edge cases: humans correct rare misreads and handle the truly ambiguous photos—without holding back the entire gallery.

Manual-only (bib tagging service)
  • Works when the bib is clearly readable.
  • Can handle some tricky partial-bib cases.
  • Simple deliverables (CSV / metadata keywords).
  • Limitation: photos with hidden/unreadable bibs are effectively “unfindable” via bib search.

9Pic AI hybrid (recommended)
  • BibTrack finds the bib-visible photos fast.
  • FaceFind finds photos even when bibs are blocked, missing, or unreadable.
  • Better participant experience: fewer “can’t find my photos” complaints.
  • Manual work is focused on exceptions, not every photo.

If you’re planning a high-volume race photo experience, see For Marathons and Pricing to understand options for scale.

FAQ

Is manual race photo tagging the same as “photo keywording”?

Often yes. Many services describe adding the bib number as a keyword (either embedded into metadata or in a spreadsheet that your gallery platform uses as keywords).
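
If the deliverable is embedded metadata, the keyword is typically written into IPTC Keywords with a tool such as ExifTool. A minimal sketch that shells out to ExifTool from Python, assuming ExifTool is installed and on your PATH, with a hypothetical mapping dict standing in for the vendor's CSV:

    # embed_keywords.py - write bib numbers into IPTC Keywords so the tag travels with the file.
    # Assumes ExifTool is installed and on PATH; the mapping below is a hypothetical example.
    import subprocess

    mapping = {"finish_0142.jpg": ["1234", "5678"], "finish_0143.jpg": ["1234"]}

    for filename, bibs in mapping.items():
        args = ["exiftool", "-overwrite_original"]
        args += [f"-IPTC:Keywords+={bib}" for bib in bibs]   # += appends to the keyword list
        args.append(filename)
        subprocess.run(args, check=True)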

Should we tag before or after editing?

Usually after final export (or after your filenames are finalized). If you tag early and later rename or re-export, you can create expensive rework.

What’s the most common reason for wrong tags?

Ambiguous rules around partial bibs and crowded finish-line shots. Clear “uncertain bucket” rules reduce wrong matches.

Can manual tagging replace face recognition?

It can, but it changes the participant experience: runners must know (and type) their bib number. With selfie search, participants can find photos even if they forgot their bib or it’s not visible in some shots.

What if the runner’s bib is blocked in many photos?

That’s common at scale (hands, bottles, jackets, medals, crowds). If your discovery method relies only on a readable bib, those photos become hard to find. Selfie search (9pic FaceFind) helps participants discover photos even when the bib can’t be read, while 9pic BibTrack covers the bib-visible shots.

Next steps

If you’re choosing between manual photo tagging and AI for an upcoming race, start with a small pilot: pick 1,000 mixed photos (finish line + course), test manual tagging output against your gallery, and compare it with a 9Pic AI flow (BibTrack + FaceFind). The fastest way to judge is simple: measure how quickly real participants can find their photos.

Want help designing a hybrid workflow for your marathon? Contact us and share your approximate photo count, number of photographers, and when you need photos live.

Want runners to find photos in seconds (not days)?

Use BibTrack + FaceFind for fast, bib-independent discovery—then keep manual tagging only for edge cases.