In an AI-Saturated Market, Original Thought Becomes a Scarce Asset

With the advent of generative AI, I’ve been giving some thought to the concept of original thought.

There is no doubt in today’s market that organizations feel pressure to deploy AI. Executives are demanding it; boards are asking questions about it. In my research at TDWI, I’ve seen that most organizations we survey are doing something with AI. Often, this involves starting with off-the-shelf generative AI tools for marketing and content creation. Many organizations don’t go further than that because they lack the data, skills, governance, and organizational foundations to evolve.

Yes, they may take their call center notes or trouble tickets and build a system that classifies what their customers are saying, and that can be valuable. Yet when I run surveys at TDWI, I see that these same organizations (those using what I refer to as consumerized AI) report value in the form of productivity gains, and that sounds great. But productivity in what? Volume? Speed? Insights? Polish?

Often, this is volume and speed productivity in content generation. However, what many are doing is generating content based on homogenized thinking: what they get from an LLM. I’ve written before about what some researchers call “workslop” or “AI slop.” The term was coined in 2025 by researchers from the Stanford Social Media Lab and BetterUp Labs and published in the Harvard Business Review.[1] It describes low-quality AI-generated output that looks credible but lacks depth, nuance, or even factual reliability. When it is passed downstream, it can degrade productivity or decision quality, even though it appears to save time upfront. Their research showed that over 40% of U.S. desk workers have received workslop, with recipients spending nearly two hours fixing each instance.

What is behind the slop? Homogenized thinking and statistical averaging: what often comes out of a generative AI tool.

In the article, “AI Models Collapse When Trained on Recursively Generated Data,” Shumailov et al. define model collapse as “a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation.” [2] In other words, if models are trained on generated data, effectively training on themselves, their outputs average toward the center of the distribution.

But original thought, those bursts of insight and creativity, rarely occurs at the center. It occurs in the long tail. If the tail shrinks, so does the probability of breakthrough thinking. What remains is a slow drift toward mediocrity. Some have referred to this recursive dynamic as the ouroboros: the serpent eating its own tail.

Another, deeper issue is the loss of engagement with reality.

I think of the kind of research I do. It is based on primary research studies, dialogue with leaders and vendors, reading, and my own critical thinking and experience. When I speak to people at events or on calls, I can encounter perspectives I hadn’t considered. I hear contradictions. I see tension between what vendors say and what customers are actually doing. That friction sparks new ideas.

Yet some people don’t operate that way. They prompt and accept the output of the prompt.

A 2025 study published in Societies found a strong negative correlation between AI usage and critical thinking, describing a pattern of cognitive offloading: delegating tasks to tools rather than thinking independently.[3] Another study by MIT Media Lab researchers divided students into three groups (LLM-only, search-only, and brain-only) and used electroencephalography (EEG) to measure cognitive engagement during essay-writing tasks. They found that, over time, writing with AI assistance reduced overall neural connectivity and shifted the dynamics of information flow.[4]

These findings raise uncomfortable questions.

Are we losing the ability to generate outlier thoughts? Are we building a world where both the tool and the user gravitate towards average?

Generative AI will continue to improve. It will draft faster, summarize better, and automate more tasks. That is not the concern. The concern is that in a market where everyone uses the same tools trained on the same corpus, competitive advantage compresses.

Original thought rarely lives at the center of the distribution. It lives in the outliers. It lives in contradiction, tension, and lived experience. When we delegate too much thinking to systems optimized for probability, we should not be surprised if what we get back is probability.

The organizations that will benefit most from AI will not be those that prompt the most. They will be those that remain deeply engaged with reality and use AI to extend judgment rather than replace it.

In an AI-saturated market, original thought becomes a scarce asset. Scarce assets, historically, are where value accumulates.

What do you think?


[1] Niederhoffer, K., et al. (2025). “AI-Generated Workslop Is Destroying Productivity.” Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

[2] Shumailov, I., et al. (2024). “AI Models Collapse When Trained on Recursively Generated Data.” Nature 631(8022): 755–759. https://www.nature.com/articles/s41586-024-07566-y

[3] Gerlich, M. (2025). “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies 15(1): 6. https://doi.org/10.3390/soc15010006

[4] Kosmyna, N., et al. (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv:2506.08872. https://arxiv.org/abs/2506.08872
