Sheep Dog Sim has six game modes today: Just Play, Solo Classic, Solo Extreme, Solo Insane, Solo Chaos, and Multiplayer (which itself splits into co-op, competitive, and timed). Each mode has slightly different rules — different sheep counts, different scoring, different win conditions, different camera behaviour. For a long time, the code that handled "what should this mode do here?" lived in one large central file, with a lot of small branching scattered through it.
That file had grown to about 1,300 lines, and it was getting hard to add new modes without breaking old ones. So this round of work pulled it apart into six smaller files, each focused on one concern.
One file now owns the mode definitions themselves — a single table that says, for each mode, how many sheep, what counts as a win, whether a timer runs, whether scores get submitted to the leaderboard. Adding a new mode is now a one-row edit to that table instead of half a dozen scattered if-statements.
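As a rough illustration of the idea, a mode table like that can be a single dictionary of frozen records. This is a hypothetical sketch, not the game's actual code: the field names, sheep counts, and mode keys are all placeholders.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModeDef:
    sheep_count: int      # how many sheep spawn at round start
    timed: bool           # whether a round timer runs
    submits_score: bool   # whether results post to the leaderboard
    win: str              # name of the win-condition check to apply

# Placeholder values — the real numbers live in the game's own table.
MODES = {
    "just_play":    ModeDef(sheep_count=10, timed=False, submits_score=False, win="none"),
    "solo_classic": ModeDef(sheep_count=10, timed=True,  submits_score=True,  win="all_penned"),
    "solo_extreme": ModeDef(sheep_count=20, timed=True,  submits_score=True,  win="all_penned"),
    "solo_insane":  ModeDef(sheep_count=40, timed=True,  submits_score=True,  win="all_penned"),
    "solo_chaos":   ModeDef(sheep_count=60, timed=True,  submits_score=True,  win="all_penned"),
}

def mode(name: str) -> ModeDef:
    """Look up a mode's rules in one place instead of branching everywhere."""
    return MODES[name]
```

The point of the shape is that a new mode really is one new row: every consumer asks the table instead of testing the mode name itself.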
One file owns the polygon-spawn calculations (where the sheep get placed at the start of a round, given a fenced shape). One owns the win-condition checks. One owns the round objective state machine — the part that knows about gather zones, drive phases, and corral-vs-portal scoring. One owns the round-completion flow (formatting times, posting to leaderboards, showing the completion screen). One owns the sandbox startup path (the "build your own scene" mode editor).
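The polygon-spawn problem is the classic "place N points inside an arbitrary fenced shape" task. A minimal sketch of one common approach, assuming nothing about the game's real code: a ray-casting point-in-polygon test plus rejection sampling over the bounding box.

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test: count how many polygon edges a rightward
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            t = (y - y1) / (y2 - y1)
            if x < x1 + t * (x2 - x1):  # crossing is to the right
                inside = not inside
    return inside

def spawn_points(poly, count, rng=None):
    """Rejection-sample `count` points uniformly inside the polygon:
    draw from the bounding box, keep only the hits."""
    rng = rng or random.Random(0)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    pts = []
    while len(pts) < count:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            pts.append((x, y))
    return pts
```

Rejection sampling is simple and uniform, at the cost of wasted draws for thin or concave fences; a real implementation might triangulate instead, but the interface — polygon in, spawn positions out — is the part that now lives in its own file.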
So what does any of this change for players? Honestly, in this update, nothing — not directly. The game does the same thing it did before. There's no new mode, no new feature, no rebalance.
What it matters for is the next round of new things. If I want to add a "no-dog" mode, or a "sheep run from you" mode, or a multiplayer-Chaos tier, that's now a small, contained change. The cost of trying out new modes — and shipping the good ones — drops. The cost of leaving the bad ones unshipped also drops. That's the whole goal.
There was also a hidden net win: the new layout exposed a few duplicated chunks of logic that could be deleted, and the code shrank by about 43% even with all the structural overhead.
All of this was done under a new safety net — a set of tests that captured exactly what the original code did for every mode and combination of inputs, and ran on every change. Any time the new code produced a different answer than the old code for the same input, the tests went red and I had to either accept the change deliberately or fix it. Most of the time it was a fix.
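That safety net is a characterization (or "golden master") test: run the legacy and refactored code side by side over a grid of inputs and flag any disagreement. A hedged sketch of the pattern, with toy stand-ins for the real functions — `old_win_check` and `new_win_check` here are hypothetical, not the game's actual names:

```python
import itertools

def old_win_check(mode, penned, total, time_left):
    """Stand-in for the legacy implementation's scattered branches."""
    if mode == "just_play":
        return False
    if time_left <= 0:
        return False
    return penned == total

def new_win_check(mode, penned, total, time_left):
    """Stand-in for the refactored implementation; must agree exactly."""
    untimed = mode == "just_play"
    return (not untimed) and time_left > 0 and penned == total

def find_disagreements():
    """Cross every mode with a range of game states; return any input
    where old and new answers differ. An empty list means safe."""
    cases = itertools.product(
        ["just_play", "solo_classic", "solo_chaos"],  # modes
        range(0, 4),                                   # sheep penned
        [3],                                           # total sheep
        [0.0, 12.5],                                   # seconds left
    )
    return [c for c in cases
            if old_win_check(*c) != new_win_check(*c)]
```

Running `find_disagreements()` after every change is the "tests go red" step: any non-empty result is either a bug to fix or a behaviour change to accept on purpose.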