Game Fish SJDS 2026 ran 48 boats across three concurrent categories — Billfish, Roosterfish, and Funfish — in two languages, in front of an online audience that hit 12,847 spectator views during the live event. Zero disputes survived the audit trail. This is what that actually looked like from the director's chair.
The brief, internally, was simple. We wanted to run an event that felt like the major-league sport fishing tournaments — same level of operational polish, same level of public visibility — without renting somebody else's staff to do it. We wanted bilingual EN/ES throughout, because half our anglers are international and half are Nicaraguan. We wanted a permanent results archive, because the results of last year's event are part of why people sign up for next year's. And we wanted the ability to defend any disputed result with an unambiguous record.
We had previously run on spreadsheets. The chaos of three concurrent categories on three separate spreadsheets is not something I will repeat.
We configured DockScore three weeks before lines-in. The setup conversation took about four hours total, most of it spent deciding on the species multipliers and the tie-breakers for each category. After that, the whole tournament is just a configuration file.
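We haven't published our actual configuration, but the shape of it is roughly this. All species names, point values, and tie-breaker names below are illustrative, not our real settings:

```python
# Illustrative category configuration -- values are made up, not our real settings.
CATEGORIES = {
    "billfish": {
        "species_multipliers": {"blue_marlin": 500, "sailfish": 300},
        # Tie-breakers applied in order when total points are equal.
        "tie_breakers": ["most_releases", "earliest_last_catch"],
    },
    "roosterfish": {
        "species_multipliers": {"roosterfish": 100},
        "tie_breakers": ["largest_single_fish", "earliest_last_catch"],
    },
    "funfish": {
        "species_multipliers": {"dorado": 1, "yellowfin_tuna": 1},  # points per lb
        "tie_breakers": ["heaviest_total", "earliest_last_catch"],
    },
}

def score_catch(category: str, species: str, quantity: int = 1) -> int:
    """Points for a catch under the category's multiplier table."""
    return CATEGORIES[category]["species_multipliers"][species] * quantity
```

The point of the four-hour conversation is that once these tables are agreed on before lines-in, no scoring decision is made on the dock.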
Captains registered themselves through the public registration page. By the time we closed registration seven days before the event, we had 48 confirmed boats with complete crew lists, no missing fields, and no chasing. That alone was worth the price.
On the dock, judges logged catches on their phones. Five taps per catch. The auto-generated radio code (M-0031, RF-007, VS-022) became the primary identifier for radio reporting — judges said the code on the air, the director referenced the code in the jury panel, captains heard their boat's code on the live broadcast. It tightened the whole communication loop.
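The codes themselves are just a species prefix plus a zero-padded sequence number. A minimal sketch of that scheme; the prefix meanings (M for marlin, RF for roosterfish, VS for other species) are my reading of the examples above, not DockScore's documented logic:

```python
# Illustrative radio-code generator, keyed on a per-prefix counter.
# Prefix meanings are assumptions: M = marlin, RF = roosterfish, VS = other.
_counters: dict[str, int] = {}

def next_radio_code(prefix: str, width: int = 3) -> str:
    """Return the next sequential code for a prefix, e.g. RF-007."""
    n = _counters.get(prefix, 0) + 1
    _counters[prefix] = n
    # Pad to at least `width` digits; M-0031 suggests billfish uses four.
    return f"{prefix}-{n:0{width}d}"
```

Short, unambiguous, and pronounceable over a noisy VHF channel, which is what made it work as the primary identifier.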
One catch was challenged during the event. A captain disputed that a marlin had been logged as released within the timing window — the rule was a five-minute submission window from the boatside release.
In the old world, this would have been an argument. We would have looked at the whiteboard, debated whose memory was right, and probably resolved it informally Saturday night around the bar, with one party leaving unhappy.
In the audit log, we had the submission timestamp (15:47:32), the submitting judge (a senior dock captain), the photo the angler had uploaded, the GPS metadata from the judge's phone, and the original catch entry which referenced the boat's radio call to the dock at 15:43:15 — well within the five-minute window. The whole dispute was reviewed and closed in 12 minutes, with both sides agreeing the call was correct.
Twelve minutes. Not the rest of the weekend. Not the next year of the tournament's reputation.
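The timing check itself is trivial once the timestamps exist. A sketch using the two timestamps from the dispute above; the five-minute rule is ours, the function is my illustration, not DockScore's code:

```python
from datetime import datetime, timedelta

SUBMISSION_WINDOW = timedelta(minutes=5)

def within_window(release_call: str, submission: str) -> bool:
    """True if the catch was submitted within the window of the radio call."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(submission, fmt) - datetime.strptime(release_call, fmt)
    return timedelta(0) <= delta <= SUBMISSION_WINDOW

# The disputed marlin: radio call 15:43:15, submission 15:47:32 -> 4m17s elapsed.
print(within_window("15:43:15", "15:47:32"))  # True
```

The hard part was never the arithmetic; it was having both timestamps recorded by a system neither party controls.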
About 40% of our anglers were international, primarily English-speaking. About 60% were local or regional, primarily Spanish-speaking. About half of the dock judges were bilingual. The standard play in tournaments like this is to pick one operating language and force everyone else to adapt. Either choice alienates roughly half your participants.
We ran the tournament in both. Spanish-speaking judges used the Spanish interface. English-speaking anglers followed the English leaderboard. The same audit log served both. The same results page served both. One tap on the language toggle. No second-class experience for anyone.
48 boats registered. 48 boats started.
3 concurrent categories.
2 languages.
12,847 live spectator views during the event.
0 disputes that survived the audit trail.
The platform handles a lot of what used to be the director's manual work. The leaderboard updated within 10 seconds of every accepted catch. Three category leaderboards ran simultaneously without anyone manually maintaining them. The IMPESCA report exported in one click on Monday morning.
What I did during the event was monitor the jury queue, walk the dock, and answer questions from sponsors. I did not maintain a whiteboard. I did not chase down judges to confirm logged catches. I did not retype data from one system to another. The tournament felt smaller to me than it had with 30 boats on a spreadsheet.
What went wrong? Mostly nothing. The system held up under load: peak concurrent dock activity came around 17:00 on the second day, when six boats came in within twenty minutes and the jury queue had eleven catches in review. Nothing slowed down. Nothing dropped.
Next year, we'll probably enable the Auto Story Publisher (Pro) to push leaderboard story images to Instagram and Facebook automatically at scheduled intervals during the event; this year we did the social posts manually. And we'll tighten up sponsor delivery reporting, which we tracked by hand without an automated post-event report to lean on.
Game Fish SJDS is org_id = 1 in the DockScore platform. Every feature in DockScore exists because we needed it on the dock at this event. If your tournament looks anything like this — concurrent categories, bilingual fleet, real audience, real sponsors — DockScore was built for you.
Get DockScore for My Tournament →

Read the structured case study for the numbers in one place. Browse the rest of the director blog for how-to content. Or jump straight to the showcase to see the platform running.