© 2026 FishTourney v1.0.0



AI-assisted verification

Claude analyzes species, length, and fish vitality.

AI-assisted verification is FishTourney's highest-leverage feature for tournament day. On a busy Saturday with sixty catches coming in across four divisions, manually examining every photo and cross-referencing a ruler reading against an entered length takes hours. The AI layer handles the majority of that work automatically, surfacing only the catches that genuinely need a second pair of eyes.

What the AI analyzes

When an angler submits a catch, the evidence photo is sent to a vision-capable language model for analysis. The model returns a structured JSON payload covering five areas:

  • Species identification — the model identifies the species in the photo and compares it against the species the angler claimed. It returns a detected species name and a confidence score from 0 to 100%. For commonly confused species pairs (largemouth vs. smallmouth bass, channel vs. blue catfish, red drum vs. black drum, and others), the model is prompted to evaluate multiple independent distinguishing features and explicitly note which features are ambiguous in the specific photo rather than overclaiming certainty.
  • Measuring device detection and classification — the model identifies whether a measuring device is visible and, if so, classifies it as a bump board, ruler, tape measure, digital board, or other graduated device. Bump boards receive additional checks (see below).
  • Length estimation — when a measuring device is present, the model reads the measurement using the nearest visible tick marks to the fish's nose and tail as anchors, then takes a second independent measurement using different reference marks. Both estimates are returned. If they differ by more than half an inch, the discrepancy is flagged. For bump boards specifically, the model also checks whether the fish's nose is pressed against the backstop and whether the fish is lying flush on the board surface.
  • Fish vitality assessment — the model evaluates eye clarity, gill color, body posture, and context cues to determine whether the fish appears alive or dead. This check is the foundation of catch-and-release enforcement: if your tournament requires live release, a dead-fish flag routes the catch to manual review regardless of all other checks passing.
  • Image integrity — the model looks for signs that the photo is not a genuine catch photo: screenshots of a screen, obvious digital manipulation, stock photo watermarks, or image splicing.
[Figure: AI analysis badge showing pass and flag states alongside a catch entry in the admin table]
The AI analysis badge appears on each catch in the admin table. Green means all checks passed; amber means one or more checks were flagged for review.
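The five areas above come back in one structured payload per catch. The exact schema is internal to FishTourney; the sketch below uses hypothetical field names purely to illustrate the shape of the data the checks operate on:

```python
# Hypothetical sketch of the structured analysis payload described above.
# All field names are illustrative, not FishTourney's actual schema.
sample_analysis = {
    "species": {
        "detected": "smallmouth bass",
        "claimed": "largemouth bass",
        "confidence": 84,  # 0-100 scale
        "ambiguous_features": ["jaw extent partially obscured"],
    },
    "measuring_device": {"present": True, "type": "bump_board"},
    "length": {
        "estimate_primary_in": 17.2,
        "estimate_secondary_in": 17.4,  # second, independent reading
        "confidence": 88,
        "nose_against_backstop": True,  # bump-board-only checks
        "flush_on_board": True,
    },
    "vitality": {"appears_alive": True},
    "image_integrity": {"suspected_manipulation": False},
}

# The two independent readings agree within half an inch,
# so no dual-measurement discrepancy would be flagged here:
assert abs(sample_analysis["length"]["estimate_primary_in"]
           - sample_analysis["length"]["estimate_secondary_in"]) <= 0.5
```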

How results surface to admins

Every catch with completed AI analysis displays an AI analysis badge in the admin Catches tab. The badge shows one of three states:

  • Analyzing (spinner) — analysis is in progress. This typically resolves within a few seconds of catch submission.
  • Analyzed (green outline) — all checks passed. The detected species matches the claim, a measuring device was detected with acceptable confidence, the length estimates agree, and the fish appears alive.
  • Flagged (amber outline) — one or more checks did not pass. Clicking the badge opens the full analysis panel.
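The three badge states follow mechanically from two pieces of information: whether analysis has finished, and whether any checks were flagged. A minimal sketch (function name and inputs are illustrative, not FishTourney's API):

```python
def badge_state(analysis_complete: bool, flags: list[str]) -> str:
    """Map an AI analysis result to one of the three badge states."""
    if not analysis_complete:
        return "analyzing"  # spinner; typically resolves within seconds
    # Amber if any check was flagged, green outline otherwise.
    return "flagged" if flags else "analyzed"

assert badge_state(False, []) == "analyzing"
assert badge_state(True, []) == "analyzed"
assert badge_state(True, ["Species mismatch detected"]) == "flagged"
```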

The full analysis panel shows every data point the model returned: detected species, species confidence percentage, measuring device type, both length estimates, length confidence, bump-board-specific checks (fish against backstop, fish flush on board), fish vitality result, and the model's written reasoning for each section. The model that performed the analysis is shown in the corner of the panel so you have a clear record of which version produced the result.

Possible flags and their meanings:

  • Species mismatch detected — the model's identification differs from the angler's claim.
  • Low confidence in species ID — species confidence is below 70%, typically due to photo angle, lighting, or a genuinely difficult species pair.
  • No measuring board detected — no graduated measuring device is visible in the photo.
  • Low confidence in length reading — a device was found but the model could not read it with high confidence (below 70%), often because of parallax, a truncated ruler, or an angled photo.
  • Fish not against bump board backstop — the fish's nose is not pressed to the zero end of the bump board, meaning the measurement is inflated.
  • Fish not lying flat on board — the fish is arched or hanging off the side of the bump board.
  • Dual measurement discrepancy — the two independent length estimates differ by more than half an inch.
  • Fish appears to be dead — the vitality assessment indicates the fish is dead or the model is uncertain.
  • Poor image quality — the photo is too blurry, dark, or obscured for reliable analysis.
  • No fish detected — no fish is visible in the image.
  • Multiple fish in image — more than one fish is present; the model analyzed the most prominent one.
  • Possible image manipulation — the model detected signs that the photo may not be genuine.
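The flags above can be read as threshold checks over the analysis payload: 70% confidence floors for species and length, a half-inch tolerance between the two length readings, and the bump-board positional checks. A sketch of that derivation for a subset of the flags, using the same hypothetical field names as the payload example (none of this is FishTourney's actual code):

```python
def derive_flags(a: dict) -> list[str]:
    """Derive a subset of the review flags from an analysis payload,
    using the thresholds described above (70% confidence floors,
    half-inch dual-measurement tolerance). Keys are hypothetical."""
    flags = []
    sp, dev, ln = a["species"], a["measuring_device"], a["length"]
    if sp["detected"] != sp["claimed"]:
        flags.append("Species mismatch detected")
    if sp["confidence"] < 70:
        flags.append("Low confidence in species ID")
    if not dev["present"]:
        flags.append("No measuring board detected")
    else:
        if ln["confidence"] < 70:
            flags.append("Low confidence in length reading")
        if abs(ln["estimate_primary_in"] - ln["estimate_secondary_in"]) > 0.5:
            flags.append("Dual measurement discrepancy")
        if dev["type"] == "bump_board":
            if not ln["nose_against_backstop"]:
                flags.append("Fish not against bump board backstop")
            if not ln["flush_on_board"]:
                flags.append("Fish not lying flat on board")
    if not a["vitality"]["appears_alive"]:
        flags.append("Fish appears to be dead")
    return flags

# A clean bump-board catch produces no flags (green badge):
clean_catch = {
    "species": {"detected": "largemouth bass",
                "claimed": "largemouth bass", "confidence": 92},
    "measuring_device": {"present": True, "type": "bump_board"},
    "length": {"estimate_primary_in": 17.2, "estimate_secondary_in": 17.4,
               "confidence": 88, "nose_against_backstop": True,
               "flush_on_board": True},
    "vitality": {"appears_alive": True},
}
assert derive_flags(clean_catch) == []
```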

Two operating modes

How AI analysis interacts with your leaderboard depends on a second setting: whether catches are auto-verified.

Auto-verify on (low-touch mode): catches count toward the leaderboard immediately on submission without waiting for organizer review. With AI analysis also enabled, any catch that returns flags is held out of the leaderboard and routed to the manual review queue; catches where all checks pass count immediately. In practice, on events with clear photos and bump boards, this reduces the manual review queue by more than 80% while maintaining the same level of scrutiny on the catches that matter most.

Auto-verify off (high-touch mode): every catch goes through the manual review queue, regardless of AI results. The AI analysis is still displayed alongside each catch and serves as a pre-screening layer — when you open a catch to review it, you already know whether the model flagged it and why. This mode is common for events with prize money above a certain threshold or where organizers want a human sign-off on every submission.

Both settings are configured independently in the Scoring step of the tournament wizard under Verification & photo settings. You can change them at any point before or during the tournament.
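The interaction between the auto-verify setting and the AI flags reduces to a small decision rule. A sketch of that routing, with illustrative names (FishTourney's internals are not public):

```python
def route_catch(auto_verify: bool, flags: list[str]) -> str:
    """Decide where a submitted catch goes under the two operating modes."""
    # High-touch mode: every catch is reviewed by a human;
    # the AI result is shown alongside as a pre-screening aid.
    if not auto_verify:
        return "manual_review"
    # Low-touch mode: clean catches count on the leaderboard immediately;
    # any flagged catch is held out and routed to the review queue.
    return "manual_review" if flags else "leaderboard"

assert route_catch(auto_verify=True, flags=[]) == "leaderboard"
assert route_catch(auto_verify=True,
                   flags=["Fish appears to be dead"]) == "manual_review"
```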

Why this matters for organizers

On a 40-angler event running from 6 AM to 2 PM, you might receive 80 to 120 catches. With auto-verify and AI analysis enabled, roughly 85–90% of those will clear automatically. You review 10 to 20 catches instead of 120, which means you can spend tournament day on the water with your anglers rather than behind a laptop.

When a dispute arises — an angler challenges a competitor's length, or a protest is filed at weigh-in — the AI analysis panel provides a documented audit trail. You can show the angler exactly what the model saw, the confidence scores for each check, and the model's written reasoning. This shifts difficult conversations from subjective (“I think that fish looks short”) to objective (“the model read 17.2 inches with 88% confidence; here is how it made that reading”).

For catch-and-release tournaments, the fish vitality check enables a mode of enforcement that is otherwise impossible without a weigh-in: catches where the fish appears dead are automatically flagged before they ever count on the leaderboard. Anglers know from the moment they submit that a dead fish will not score.

The model

Photo analysis is performed by Claude (Anthropic), a vision-capable large language model. The same model that powers the analysis is also the one that generated the species-specific guidance baked into the system prompt — covering fin structure, jaw geometry, color patterns, common look-alike pairs, and how photo conditions such as lighting, angle, and fish stress affect the apparent appearance of each distinguishing feature.

AI is an aid, not a substitute for judgment

The analysis model is accurate on well-lit photos with a clear measuring board, but it is not infallible. Low-light photos, unusual camera angles, highly stressed fish, or species that look genuinely similar in a given lighting condition can all produce incorrect or low-confidence results. Flags are a prompt for closer human review, not an automatic rejection. Similarly, a passing result does not guarantee the catch is valid — organizers can and should override the AI on any catch where their own judgment differs. The final verification decision always belongs to the admin.

