The Overfitting Trap: From LLMs to Confirmation Bias, and Why I Seek Raw Feedback

I was sipping my cup of cacao this morning, scrolling through some LLM outputs, when I caught that telltale sign of overfitting yet again. Spend enough time with these models, and they morph into a “Yes Man,” echoing your patterns, fluffing your ego with every word. I’ve seen this over and over in tech—whether I was training regression models at Edmodo to uncover patterns in stellar lesson plans, crunching expected rental yields and sales prices for RentalNerd, or obsessing over Edgar’s AI swing trading algorithm. Overfitting slips in quietly, and soon you’re tangled in false positives and negatives because the model’s too dialed into your own biases.

Here’s where it gets personal: I’ve noticed a similar pattern in human echo chambers, just under a different label. It’s less about poor handling of training data and more about cognitive traps like confirmation bias and self-selection. When everyone in my circle shares my opinions and worldview—and they’re the only people I interact with—I start losing grip on the broader reality. It’s seductive and comfortable, but it’s a blind spot. That’s why I’ve learned not to fall too in love with any single idea. There’s real value in hearing out even the person you find most obnoxious. Their voice, their pushback, becomes a critical part of the calibration process, keeping your mental model from overfitting to a narrow slice of life.

In the Web3 and tech space, it’s all too easy to cozy up in tech hubs with the tech bros, buzzing about ideas that all sound the same. It feels great, but it’s a bubble. So, I’ve made it my mission to break out of that comfy ecosystem, face the real world, and keep myself grounded.

Now, let’s talk about the real cost of overfitting—because I’ve felt the sting. At Edmodo, we’d recommend a lesson plan based on a model that swore the teacher would love it, only to have them hate it or find it irrelevant. Trust lost, eyeballs gone. With RentalNerd, we’d pinpoint a property with a high predicted yield, bid on it, and then the cap rate would tank—deployment botched. And Edgar? I spent a whole year overfitting that trading model. It looked golden against holdout data—stuff it wasn’t trained on—but when I ran it against live market data, I lost cash. During the inflation years, when everyone else was raking in outsized returns on the back of Federal Reserve rate moves, my portfolio just flatlined. Brutal.
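The pattern behind all three stories fits in a few lines. Here’s a toy sketch (synthetic data, nothing to do with the actual Edmodo, RentalNerd, or Edgar models): a high-degree polynomial memorizes a handful of noisy training points almost perfectly, then falls apart on the points it never saw.

```python
# Toy illustration of overfitting on synthetic data.
# A flexible model (degree-9 polynomial) drives training error to
# near zero by fitting the noise, while held-out error blows up.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Alternate points: half for training, half held out.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (3, 9):
    tr, te = fit_and_score(degree)
    print(f"degree {degree}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

With 10 training points, degree 9 has enough coefficients to pass through every one of them, so its training error is essentially zero while its held-out error is far worse. The look-good-on-paper, lose-money-live dynamic is the same gap, just with dollars attached.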

Key observation: Marketing—and honestly, understanding people—is a never-ending grind. The market shifts, tastes evolve, and you’ve got to adapt. AI can handle the automation, the backend plumbing, no doubt. But deciphering the zeitgeist, feeling the pulse of what’s happening? That’s a human thing. From my parents’ stall to Web3 projects, it’s an end-to-end loop of pitching, listening, failing, and recalibrating.

Reflections for the day: I’ve paid the price for overfitting—both in tech and in life—and it’s made me double down on seeking raw, unfiltered feedback. So, I’m curious about your experiences. Have you ever fallen into the trap of confirmation bias or overfitting, whether with a model or in your personal bubble? How do you challenge your own assumptions? And as AI takes over more tasks, how do you stay connected to the human feedback that keeps you sharp? Drop a thought—I’d love to hear how you navigate this.