When Does Helpful Become Too Much? Rethinking Our Relationship with AI

I went to dinner with my good friend John Raymonds and our sons this week. John is a deep thinker and an experienced entrepreneur. Unsurprisingly, the conversation turned to AI. I thought I’d share some of the things we examined and discussed.

John and me at a Texas steakhouse with our sons.

As we all admitted to using AI more often and for more things, what stood out wasn’t our agreement, but rather the tension around our use cases. While we’re all very bullish on AI (excited, even), we kept circling two questions: “How much AI is too much?” and “How important is it to preserve your own voice (and thinking) in the wake of AI content generation?” Neither had clean answers, yet both felt increasingly important and relevant.

How much AI is too much?

When you see AI everywhere all the time, you might have passed the point of diminishing returns (or you might be passionate about the discovery of something as important as the discovery of fire, electricity, or the Internet).

The real issue is that “too much AI” is not about volume of usage but about when AI starts to (a) complicate your process or (b) dilute your voice. Ultimately, you must consciously decide where the line is for you.

Here’s an example we talked about, stemming from Zach and me writing this weekly commentary together. I’ve been experimenting with something I call “content pillars.” That is where I combine various sources (including research, articles, notes, recordings, etc.) on a subject, run them through various AI tools with layered prompts, and distill everything into a multi-faceted outline that provides a more complete, multi-dimensional view of the material and its meaning. The goal is to get a better sense of the big picture (and make it easier to spot patterns, overlaps, contradictions, tensions, and agreements). I include most of the process steps in the content pillar. The result is dense, sometimes overwhelming, but undeniably rich. I used to do a lot of that in my head. This consolidates all of that in one place, making it easy to save for reference or reuse. To me, this was a step forward.
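The process above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the "content pillar" idea, not my actual tooling: combine the sources into one corpus, run it through a sequence of layered prompts, and keep every intermediate step so the pillar records the whole process, not just the result. The `run_model` stub stands in for whatever AI tool does the distillation; all names here are illustrative assumptions.

```python
def run_model(prompt: str, material: str) -> str:
    # Stand-in for a real AI call; here it just tags the material
    # with the prompt so the layering is visible in the output.
    return f"[{prompt}]\n{material}"

def build_pillar(sources: dict[str, str], layered_prompts: list[str]) -> str:
    # Step 1: combine research, articles, notes, etc. into one corpus.
    combined = "\n\n".join(f"## {name}\n{text}" for name, text in sources.items())

    # Step 2: run the corpus through each prompt layer in turn,
    # keeping every intermediate distillation.
    steps = [combined]
    for prompt in layered_prompts:
        steps.append(run_model(prompt, steps[-1]))

    # Step 3: the pillar includes the process steps, not just the end result.
    return "\n\n---\n\n".join(steps)

pillar = build_pillar(
    {"notes": "Dinner discussion on AI use.", "article": "Draft commentary."},
    ["Summarize key themes", "Surface contradictions and tensions"],
)
```

Note how the density my son objected to is built in by design: each layer's output becomes the next layer's input, and nothing is thrown away.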

My son pushed back.

At some point, he argued, the process becomes so complex that it requires its own layer of AI just to consume it. The time investment balloons. What started as a tool to simplify thinking and streamline our process can quietly become something that complicates it.

He’s not wrong.

But I don’t think that invalidates the process … or the end result. Beyond the output, there’s value in building and using these systems (exploring, experimenting, and stretching a different kind of mental muscle). The process itself becomes the product, at least in part.

Still, the question lingers: if the tool designed to accelerate us begins to slow us down, where’s the line?

I won’t really attempt to answer that here. However, I will note that I originally created the process to augment, automate, and extend parts of my work. Over time, I refined the process to the point where I wanted to share it. That’s when I was confronted by a stubborn truth I’ve battled many times: a process designed for you won’t necessarily please others.

What I found is that very few people want information in the quantity, velocity, depth, or breadth that I would choose. In fact, it became clear to me that I wasn’t even the real audience anymore. As I continued to build these content pillars, they expanded as I began to view each pillar as a new, richer data set to feed the machine (rather than producing something I’d want to consume myself or share with others).

But the point remains: if you design something to satisfy a machine, it shouldn’t surprise you when it doesn’t satisfy a human.

How much does voice matter?

The second tension was more subtle and subjective.

Something shifts as AI becomes more embedded in writing, editing, and content generation. It doesn’t take a literary genius to see or feel it.

Sentences smooth out. Paragraphs tighten. Structure improves. But sometimes, the voice flattens. Most of us can tell when something is written by AI, even if we can’t always tell when something is written with the help of AI.

Even tools like Grammarly optimize toward familiarity. They rely on proven patterns, common phrasing, and widely accepted “good writing.” The result is predictably better writing… but also just predictable writing.

Of course, there’s a tradeoff.

AI enables depth. It helps us see angles we might have missed, incorporate ideas we wouldn’t have found, and build more comprehensive pieces. It has also been vital in catching when we’re making assumptions or claims without backing them up. Our writing becomes more informed, more structured, and often more valuable to the reader.

But at what cost?

I see it firsthand. My son spends extra time pulling our writing back toward something that feels like us (restoring tone, rhythm, personality). It’s deliberate work, and it can be frustrating.

Our previous rhythm was relatively painless, but we’d also plateaued in the caliber and tone of our articles.

So the question becomes: Is added value worth a diluted voice? Or is voice itself part of the value we’re trying to create?

Could we spend extra time improving our prompts so that our voice is more carefully curated? If the voice is there but we didn’t write it, is it still our article?

Different generations, different instincts

What became clear over dinner wasn’t just disagreement—it was a difference in posture.

John and I are leaning in hard. There’s a kind of curiosity that borders on recklessness. We’re exploring, testing limits, and integrating AI into everything we can. Not because we have to, but because we want to see what’s possible. John even built a niche AI app recently, just to prove he could.

There’s joy in that.

Our sons, on the other hand, seem to play a different role. Not resistant or disengaged (they both use these tools extensively), but more measured. More aware of the tradeoffs and the nature of their parents. More willing to question whether efficiency is always the goal.

If we are accelerating, they are steering.

Perhaps it’s a result of them being so close to us that they end up playing “defense.” And maybe that balance matters more than either side being “right.”

We hear it all the time: too much of a good thing becomes a bad thing.

But AI complicates that idea. Because it’s not just a tool — it’s a multiplier of output, of speed, of ideas … and of noise.

So how do we know when we’ve crossed the line?

That is a question worth sitting with.

Maybe it’s not about a universal threshold. Maybe it’s more personal, more situational. Maybe the better question isn’t “how much is too much?” but:

  • Is this helping me think more clearly, or just more quickly?
  • Is this enhancing my voice, or replacing it?
  • Am I using the tool, or adapting myself to fit the tool?

There may not be definitive answers yet.

But the act of asking — of pausing long enough to notice how these tools are shaping not just what we produce, but how we think — might be the most important habit we can build right now.
