The Overfitting Curse: From LLMs to Life, and Why I Pitch Cold to Stay Sharp

I was sipping my cup of cacao this morning, skimming through some LLM outputs, when I noticed it again—that creeping sense of overfitting. Spend long enough with these models, and they start turning into a “Yes Man,” mirroring your patterns, stroking your ego with every response. I’ve seen this before in tech, time and time again. Whether it was training regression models at Edmodo to predict what makes a great lesson plan, calculating expected rental yields and sales prices for RentalNerd, or endlessly tweaking Edgar’s AI swing trading algorithm, overfitting is the silent killer. It sneaks in, and before you know it, you’re drowning in false positives and negatives because the model’s too cozy with your biases.

Here’s the parallel that keeps nagging at me: I’ve observed the same pattern in human echo chambers, just under a different name. When I’m in a group where everyone shares my opinions and my worldview—and they’re the only people I talk to—I start losing sight of the bigger reality. It’s a trap, and I’ve fallen into it. That’s why I’ve come to appreciate the value of not falling too in love with any one idea. There’s something powerful in hearing out even the person you find most obnoxious. Their voice, their pushback, becomes part of a calibration process, a way to keep your mental model from overfitting to a narrow slice of the world.

In the Web3 and tech space, it’s way too easy to hang out in tech hubs with the tech bros, buzzing about ideas that all sound the same. It feels good, validating—but it’s a bubble. So, I’ve made it a point to break out of that comfortable ecosystem and find ways to keep myself grounded.

Now, let’s talk about the cost of overfitting—because I’ve paid it. At Edmodo, we’d recommend a lesson plan based on a model that swore the teacher would love it, only to have them hate it or find it irrelevant. Boom, lost trust, lost an eyeball. With RentalNerd, we’d spot a property with a high predicted yield, bid on it, and then the cap rate would tank—deployment screwed. And Edgar? Man, I spent a whole year overfitting that trading model. It looked great against blackout data—stuff it wasn’t trained on—but when I benchmarked it against live market data, I lost cash. During inflation years, when everyone else was getting outsized returns thanks to Federal Reserve rates, my portfolio just went sideways. Painful lesson.
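The failure mode above is easy to demonstrate in miniature. Here's a minimal sketch—toy data, nothing to do with Edgar's actual model—of how an overfit model can ace its training points while falling apart on data it has never seen. The signal, the noise level, and the polynomial degrees are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying signal
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Hold out every other point as data the model never trains on
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial on the training points; return (train MSE, test MSE)."""
    predict = np.poly1d(np.polyfit(x_train, y_train, degree))
    train_err = np.mean((predict(x_train) - y_train) ** 2)
    test_err = np.mean((predict(x_test) - y_test) ** 2)
    return train_err, test_err

# A degree-9 polynomial can nearly interpolate the 10 training points,
# driving training error toward zero while the held-out error stays high.
for degree in (3, 9):
    train_err, test_err = fit_and_score(degree)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

The telltale signature is the gap: training error keeps shrinking as the model gets more flexible, but held-out error does not follow. That gap is the number to watch, whether the model predicts lesson plans, rents, or trades.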

Key observation: Marketing—and really, understanding people—is a never-ending grind because the market always shifts. AI can automate the backend, the plumbing, sure. But deciphering the zeitgeist, feeling the pulse of what’s actually happening? That’s still a human thing. From my parents’ stall to Web3, it’s an end-to-end loop of pitching, listening, failing, and adapting.

Reflections for the day: I’ve learned the hard way that overfitting—whether in tech or life—can cost you big. I’m more committed than ever to stepping out, testing my ideas against raw reality. So, what about you? Have you ever paid the price for overfitting to a model or an echo chamber? How do you keep yourself calibrated to the real world, especially with AI taking over more tasks? Drop a thought—I’m all ears.