The Fractal Focus Trap
I have always loved research. I'm curious about how systems are built, how processes are managed, and how they are made seamless with minimal friction. AI research tools like "deep thinking"/"Pro"/"and what not" have made it easier for people to research a topic. But they have also made it easier to drown in the overabundance of knowledge that "deep thinking" has to offer us.
Infinite Curiosity
In the pre-LLM era (it's a thing now), if I wanted to branch out into a subtopic I had to manually search, filter, and read bibliographies, and read about the authors who spent time creating those papers. The good old days of research had their own pros and cons, but all of this added a natural filter. Now, with "deep thinking" models, agents, and sub-agents, the cost of expanding your research is near zero.
As a result, those with constant curiosity over-consume. We end up with a never-ending, fractal focus, where every answer contains three more questions, and we follow them, or ask sub-agents to follow them for us, simultaneously.
The Infinite Loop
I recently sat down to build a "dream project" that would have everything, like a Pro++++ version of a product that does it all. It should be easy, right? We just need to "VIBE" with the agents and get it done. WRONG. Before even reaching the vibe part of the project, I got lost.
Here is the Deep Thinker Trap:
- I want to understand Topic A.
- AI, being helpful => Here is A, but see, there are also B, C, and D. These three are tangential, shiny silver things that feel like great additions.
- C looks the shiniest, so I chase it, and then I see more shiny things like C1 and C2.
- I am now three levels deep, and my original purpose is lost because I'm chasing things the AI throws at me rather than what is essential for my project.
Result => We are now constrained by our ability to prune, not by the availability of information.
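The arithmetic behind the trap is worth making explicit. Here is a toy sketch (the branching factor of three is an assumption taken from the example above) showing how fast open research threads pile up when every answer spawns follow-up questions:

```python
# Toy model of the fractal focus trap: if every answer spawns
# `branching` follow-up questions, the open threads accumulated
# after `depth` levels grow exponentially.

def open_questions(branching: int, depth: int) -> int:
    """Total follow-up questions accumulated after `depth` levels."""
    return sum(branching ** d for d in range(1, depth + 1))

if __name__ == "__main__":
    for depth in range(1, 4):
        print(f"{depth} level(s) deep -> {open_questions(3, depth)} open questions")
```

Three levels deep with three tangents per answer already leaves 39 open questions, which is why the original purpose disappears so quickly.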
This 'shiny silver' trap is actually being studied by researchers as 'Epistemic Rabbit Holes', a phenomenon where the near-zero cost of expansion leads to a type of AI chatbot addiction characterized by endless, horizontal wandering (Shen et al., 2026).
Productive Wandering?
When AI performs deep research, it generates what, 10K words of context, or maybe more. It looks productive because text is flowing, but our human return on investment is actually dropping. We are trading depth for never-ending horizontal breadth.
Conclusion
We are in an era where the bottleneck is no longer access to information but our own capacity for rejection. As most of us have experienced, the AI genie is all too happy to grant us infinite wishes, leading us down rabbit holes where our attention begins to drop. At the end of the day, we should remain the architects of our own focus.
And this brings us to adding guardrails to AI research tools:
- Be explicit: ask for a high-level summary of Topic A, and instruct the tool not to explore any tangents unless asked.
- Set a timer for your research brain, say 10 to 15 minutes, and enforce a return to the Main Purpose when it rings.
- Prune your research down to actionable insights for the Main Purpose.
- Even if you want to explore a tangent using sub-agents, make your intent clear: how does it align with your Main Purpose?
By implementing these you move from a state of Wandering to Intentional Research.
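The guardrails above can be sketched in code. This is a minimal, illustrative sketch, not a real tool's API: the template wording, the `ResearchSession` class, and the 15-minute default are all assumptions made for the example.

```python
import time

# Hypothetical guardrail preamble encoding the rules above:
# high-level summary only, no tangents, prune to actionable insights.
GUARDRAIL_TEMPLATE = (
    "Main Purpose: {purpose}\n"
    "Give a high-level summary of: {topic}\n"
    "Do NOT explore tangents unless explicitly asked.\n"
    "End with 3 actionable insights tied to the Main Purpose."
)

class ResearchSession:
    """Wraps every research prompt with guardrails and a time budget."""

    def __init__(self, purpose: str, budget_minutes: float = 15.0):
        self.purpose = purpose
        self.deadline = time.monotonic() + budget_minutes * 60

    def prompt_for(self, topic: str) -> str:
        # Every query carries the Main Purpose, so tangents must justify
        # themselves against it.
        return GUARDRAIL_TEMPLATE.format(purpose=self.purpose, topic=topic)

    def time_is_up(self) -> bool:
        # When True, stop branching and return to the Main Purpose.
        return time.monotonic() >= self.deadline
```

In use, you would check `time_is_up()` before following any C1 or C2, and feed `prompt_for("Topic A")` to whatever research agent you run, keeping the pruning decision in your hands rather than the model's.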