
Attention Was All We Needed

By Jeff Nash · February 2026

We have become geniuses at killing time — and I don't mean the enjoyable, meandering time of a Sunday afternoon. I mean the necessary, digestion-like time that thought requires. Save for the greatest creative, literary, and scientific geniuses, there is an inherent boredom or inertia that precedes deep creation for most of us. And, ironically, these scientific geniuses have been the ones to invent things that make it progressively easier to kill time.


Every so often, a technology comes around that transforms a fundamental aspect of the human experience. Without question, the last invention to do so was the smartphone. The smartphone changed how we interact with one another intentionally, using the device itself, and it also changed the frequency with which we choose not to interact with one another and instead stay glued to our screens. Small talk while waiting in line has turned into scrolling until it's your turn. Why waste time and energy breaking the ice or making conversation when you have an endless, already-melted stream of parasocial entertainment in your pocket?

Those of us lucky enough to have been born in the 90s (which I consider the 'sweet spot' for smartphone adoption) seem to have gotten the best of the pre- and post-smartphone worlds. We were internet-native, so when smartphones came around, we quickly learned how to use them. Many of our parents didn't know how to switch the input on the TV. However, unlike subsequent generations, the internet we grew up with was largely relegated to a desktop computer in the living room or the computer lab. The fact that the internet wasn't glued to our bodies gave us plenty of time to interact face-to-face, unencumbered, in virtually every other scenario. By the time smartphones became ubiquitous in the early 2010s, we had built enough of a foundation that, while many of us certainly became smartphone addicts, we at least had our baseline, pre-smartphone social instincts to fall back on if the occasion required it.

AI is beginning to change what it means to spend time doing our work. Just as social media and the smartphone transformed how we choose to fill our time socially, I believe that AI will usher in a fundamental shift in what it means to 'pay attention' when it comes to our general productivity. Whether this is a net positive remains to be seen, and, though the intro may seem that way, this article is not here to fearmonger or paint AI as the next thing to bring us closer to the destruction of civilization. In fact, I don't even think the smartphone brought us closer to this: many of the frequent criticisms lobbed at smartphones sound suspiciously similar to what was said about newspapers preventing idle conversation on the train.

As someone who is terminally online (though one who tries his best to touch grass), I believe the rapid access to information and instant communication are net positives. More existentially, I believe that their use as the primary means of socialization for nascent generations is just the next step in the evolution of how we communicate. In a similar vein, while I believe there is an immense risk of humans becoming detached from their work just as the smartphone has detached us from the world around us, I also see plenty of upside to productivity and quality of life if we are deliberate in how we use AI.

It all comes down to how we define what it means to pay attention, to learn, and to multitask. This is the story of how AI is starting to redefine these terms. And to understand where we're going, we have to stop misunderstanding how attention actually works. Full disclosure: what follows is strictly anecdotal, based on my own messy experience with focus, ADHD, and too many screens. I'm not a neuroscientist or psychologist; I'm just a guy who works a lot and notices when his brain feels different while doing so.

The Chewing Gum Lie (Multitasking Isn't Multitasking)

"You can walk and chew gum at the same time" is used as a metaphor for being able to handle multiple issues at once because the premise is true: nearly everyone can do both simultaneously. Politicians use it to explain how they'll pander to the needs of multiple demographics. Rude people use it in the negative to insult others' competence. However it's used, those specific actions, ambulation and mastication, are incredibly monotonous, physical, and require little conscious thought.

Psychologists and parents alike have pointed out that what we view as "multitasking" is really "rapid switching". Those who think they can text and drive are really texting then driving, then texting, then driving, and so on, until something very bad happens. When you "multitask" in this way, you are gambling that nothing truly requiring your attention happens when you are at a "then texting" part, as the only thing that would jar you out of this texting is a loud noise or an odd, human-shaped bump in the road.

Lethal examples aside, multitasking is often looked down upon as a shortcut that never pays off, made for those who want to spend less linear time doing their work (as if that's inherently a bad thing). Yet, for some reason we allot people only N hours in the day and expect them to get X, Y and Z done during that time. So for better or worse, if you participate in the American rat race, you either find yourself multitasking at work to get multiple things done at once or multitasking to try to fit a tiny slice of leisure or pleasure into the times you are working.

What people call "multitasking" is usually just task-switching, and the cost depends on how far the context switch travels. Walking while listening to music is nearly free; debugging while Slacking is catastrophic for depth.

Humans and Multitasking

On the one hand, we are clearly designed for parallel processing. Evidence for this:

We possess five senses, not one, and one of these senses directly interacts with our manipulations of the physical world around us (spoiler alert: it's touch). Our brains evolved as ambient surveillance systems, constantly scanning for threats while the hands do busywork.

In many ways, multitasking is a skill that can be learned through repetition and myelination.

Some examples of skillful layering:
🏀 Sports: hands handle the ball, eyes handle the map.

Point guards dribble with one motor loop while visual attention continuously scans defenders and passing lanes.

Cognitive Viscosity

Whether we are truly "multitasking" or "rapid switching", we are, in effect, dividing our attention. We tend to classify different activities by how much "active attention" they require, but there is perhaps another metric that is better suited to classify tasks. It's one thing for a task to require you to maintain constant sensory processing, like looking at the road without taking a break to look at something else, but there is a different measure of cognitive load required by various tasks that I will refer to as "cognitive viscosity". A task can require attention but not necessarily consume much working memory or cognitive load. The converse usually isn't true; there aren't many tasks that require a high amount of cognitive load but not constant attention.

Cognitive viscosity measures how "thick" or resistant the task is to mental flow. If the Y axis is the "level of locked in" we associate with attention, cognitive viscosity is the X axis that represents how much your brain hurts while you're locked in at any given level. Tasks with low cognitive viscosity flow easily like water (e.g. typing handwritten notes, small talk), whereas high viscosity tasks move through your brain like the last bit of ketchup in the bottle at a diner (e.g. debugging, arguing).

[Interactive chart: Attention × Cognitive Viscosity. X axis: cognitive viscosity ("how much your brain hurts"); Y axis: attention (how "locked in" you are). The two generally correlate: low/low activities like walking, eating, listening to music, and showering sit near the flow state, while high/high tasks like debugging, heated debates, tax forms, and long legal docs sit in the torture zone. Walking + eating + music barely registers: the body runs the routine on autopilot while attention stays mostly free.]
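For fun, the two axes can be turned into a toy model. To be clear, this is strictly illustrative: the task list, the 0–10 scores, and the `can_layer` rule below are my own made-up numbers, not anything from the psychology literature.

```python
# Toy model of the essay's two axes: (attention, viscosity) per task,
# both on a 0-10 scale. All values are illustrative guesses.
TASKS = {
    "walking":            (2, 1),
    "listening to music": (2, 1),
    "folding laundry":    (3, 2),
    "small talk":         (4, 2),
    "highway driving":    (5, 3),
    "taking notes":       (5, 4),
    "debugging":          (9, 9),
    "heated debate":      (9, 8),
}

def can_layer(task_a: str, task_b: str, budget: int = 10) -> bool:
    """Two tasks stack only if their combined attention demand fits in
    one head AND neither is a high-viscosity task (which needs flow)."""
    att_a, visc_a = TASKS[task_a]
    att_b, visc_b = TASKS[task_b]
    return att_a + att_b <= budget and max(visc_a, visc_b) < 5

print(can_layer("walking", "listening to music"))  # True: near-free layering
print(can_layer("debugging", "small talk"))        # False: depth is gone
```

The rule encodes the essay's claim: layering is cheap when both tasks are thin, and catastrophic the moment one of them is viscous.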

For centuries, irrespective of the technology and tools that came and went, the fact remained: there were tools that could reduce the amount of physical effort required to produce the same output. Drills, cars, and jackhammers come to mind. And, on the cerebral side, there were tools that made the transfer of brain effort to real-world output more efficient by eliminating intermediate steps like handwriting, re-writing the same thing (copy/paste), typing (dictation), or even calculation of large numbers. There were even drugs that claimed to make your brain smarter, work faster, or help you stay awake and attentive for longer. However, until AI, none existed that actually purported to reduce the amount of pure brain effort required to produce the same output.

While some of the cognitive aids above can lower a task's viscosity, AI seems to be collapsing the spectrum entirely. When a tool like Claude or ChatGPT can generate the debugged code, the consoling text to your partner, or the architectural decision in seconds, the "resistance" that made high-viscosity tasks valuable disappears. This is different from a calculator (which still requires you to understand the math) or an old-school (read: non-self driving) car (which still requires you to navigate). AI offers to remove nearly all the load of the 'cognitive journey' while leaving you with the output.

For those of us who have always struggled to maintain the "locked-in" state that high-viscosity tasks require, this is a welcome, albeit potentially dangerous, relief. We spent years building scaffolding to compensate for wandering attention: Pomodoro, Adderall, and Do Not Disturb mode. Now, AI offers to remove the need for attention altogether. The question is whether we're conserving that attention for what matters, or simply letting it atrophy by squandering it on other non-productive activities.

In many of the above examples, AI can blur the lines of the cognitive viscosity of many tasks. And yes, that literally includes driving (Waymo) and listening to your significant other (ChatGPT: tell me what to say to console my husband after a bad day at work where he "nuked prod" (whatever that means). My wife said she got a haircut and sent me a picture, what the hell changed and what do I compliment?). On the surface, this seems like a good thing. People can now multitask while their car drives them. Polygamists can handle more spouses. But, as we all know, there is no such thing as a free lunch. What do we lose during this process?

I can't and shouldn't answer for everyone, so I'll take a step back and just talk about my experiences.

What people call "multitasking" is usually one of three things: layering tasks that run on non-conflicting channels (walking uses the motor cortex while music uses the auditory cortex, so there is no conflict), batching work that churns away in the background while you do something else, or rapid switching between tasks that each demand full attention.

The last is the one people mean when they derisively imply you are doing multiple things at once, poorly.

Multitasking and Learning

Suffice it to say, I cannot fully express how much more productive I feel AI has made me when performing tasks on the computer. However, as the tools, workflows, and models have become exponentially more powerful over the past 2–3 years of rapid iteration in the space, there are some things that I feel AI will never truly substitute for. To put it simply, AI will never, in my opinion, be a shortcut for "true" learning. That's not to say it can't explain concepts or even create entire lesson plans that make information more accessible and easier to learn, but there is no "learn this faster" agentic workflow that turns one hour's worth of "real" learning into ten minutes the way there is for writing emails, compiling research, or moving data around.

Why is this?

Whether it's learning about a concept by reading or understanding a complicated codebase, my brain needs undivided attention and a "flow state" to truly absorb concepts. When I'm learning, it is as if I am tying an extremely complex knot, consciously aware of where every finger is, ensuring that no finger acting as a stationary fulcrum slips even a millimeter as I move the end of the rope to its intended position at each step. Any distraction, whether it's a text, a thought, or a bodily urge, collapses the mental model I was building before it solidifies, and the knot comes apart before it is tied (and the concept is learned). What's more, any knot I try to resume tying after getting distracted mid-process is tenuous at best. I always end up having to straighten out the rope and start again from the very beginning.

[Diagram: with a single stream (lecture only), concepts encode and bind cleanly; add a second stream (browser) and interference creeps in.]

While learning is one of life's greatest joys, most of us don't actually do it that much in our day to day life, save for those who are privileged or crazy enough to spend their whole life in academia. Learning is, for better or worse, often relegated to a mere hobby as we are forced to spend our intellectual energy tending to our work commitments, which leverage the very brain that we spent so long in school building as a cog in a machine designed to make widgets that disproportionately benefit an elite class at the other end of an ever-expanding wealth gap. The small silver lining in all of this is that we may actually get to leverage the latest and greatest widget, AI, to help these cogs turn faster and with less effort as we grind away.

For decades, the asset-owning class has been able to take much of the credit and reap a disproportionate amount of the rewards for the work of these human cogs, so would it be the end of the world if we could have our own cogs whose work we get to claim as ours? If the wealth gap that has precluded so many from home ownership, forcing us to live as literal tenants in our landlords' homes, has been facilitated by the use of our collective cognitive effort, why can't we get a sweet taste of that experience and become cognitive landlords? The risk, of course, is that landlords who never visit their properties eventually face emergencies they can't handle: pipes burst, foundations crack, and they've forgotten where the shut-off valves are. You can collect the rent, but you can't call a super when your own mind breaks down.

But being a landlord requires tools, and these tools cut both ways.

The Double-Edged Screen

There is a perverse irony here: AI giveth focus, and AI taketh away. The same tool that breaks the paralysis of a blank page or an intimidating task also hands you a wait just long enough to wander off into something worse. The next section walks through both sides of the ledger; the trick is using AI to remove administrative friction without removing the thinking itself.

Do agents and LLMs make ADD worse or help?

AI decouples the act of creation from the process of understanding.


The AI response lands while your attention is elsewhere.

Worse:
Waiting on a response = time to get distracted

The 15–30 seconds it takes for an LLM to generate a complex answer gives you just enough time to get distracted by something else. Fifteen seconds quickly turns into fifteen minutes because you opened Reddit.

AI often goes on a tangent → sends you on a tangent

Assembly line analogy:
Like putting laundry in, setting a 60-minute timer, doing something else, and coming back when it goes off. Free time in the middle, and not multitasking.

Imagine a bunch of AI tasks as items coming down an assembly line. One reaches you (a prompt response is done), you do something to it (e.g. respond, or take some other action with the response), and the next one comes. You are focusing on one thing at a time while things happen in the background.

It depends on the level of immersion / frequency of human intervention required in the AI workflow.

If you're waiting for a deep research request to finish or a large amount of code to be generated, you aren't doing anything — you are free to do something else.

A rapid-fire back and forth conversation with an LLM is a different story — multitasking during that is similar to texting while talking to someone and saying "sorry what?" every time they respond.
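The assembly-line pattern maps neatly onto async code. A minimal sketch, where `fake_llm_call` is a hypothetical stand-in (just a sleep) for a real model call:

```python
import asyncio

async def fake_llm_call(prompt: str, seconds: float) -> str:
    # Stand-in for a real API call: pretend the model is thinking.
    await asyncio.sleep(seconds)
    return f"response to {prompt!r}"

async def assembly_line() -> list[str]:
    # Kick every slow job off at once; none of them block you.
    jobs = [
        asyncio.create_task(fake_llm_call("deep research", 0.3)),
        asyncio.create_task(fake_llm_call("refactor module", 0.1)),
        asyncio.create_task(fake_llm_call("draft email", 0.2)),
    ]
    handled = []
    # Items come off the line one at a time, in the order they finish.
    # You handle exactly one thing at once; the rest run in the background.
    for finished in asyncio.as_completed(jobs):
        handled.append(await finished)
    return handled

results = asyncio.run(assembly_line())
print(results)  # fastest job first: refactor, then email, then research
```

This is batching, not multitasking: during each `await` you are free, and when an item lands your attention goes to it alone.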

How it helps:
Overcome inertia

Getting the ball rolling is often the hardest part of completing any task. Things can seem daunting, especially when you view them as impossible goals rather than step-by-step actions. In many cases, you can simply bitch to the LLM about how much work the thing you're avoiding will be and, 9 times out of 10, it will break it down or provide good starting points without you even asking. A shitty first draft is more motivating than an empty document because at least there is "something" there.

Getting you pissed off

AI is far from perfect, and oftentimes the sheer badness of its attempt at solving a problem is the thing that kicks your ass into gear to improve it, with or without AI.

Remember one-off things AND ACTIVELY REMIND you

Everyone with ADD has tried jotting down everything they think they'll forget into a note as it comes to mind. This 1/ breaks the flow of whatever they were doing, because taking the note is itself a context switch, and 2/ the note often goes unchecked and forgotten. The trick? Just tack your random reminders onto the bottom of whatever prompt you're already sending. You don't even need a separate note: just "oh also remind me to buy milk" at the end. The LLM doesn't care, and now it's in the conversation history. Make it standard practice to ask the LLM whether there's anything you forgot to do, and you'll actually get reminded.

Related: they remember things FOREVER. You can go back to a three-month-old thread and ask it to summarize what you were doing and it's like you never left. Pepperidge Farm remembers, and so does your LLM.
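The reminder trick is really just a few lines of plumbing. In this sketch, `remember` and `build_prompt` are hypothetical helper names of my own, and no real LLM API is involved; the point is only to show where the reminders land.

```python
# Minimal sketch of the "tack reminders onto the prompt" trick.
reminders: list[str] = []

def remember(item: str) -> None:
    """Capture a stray thought without leaving whatever you were doing."""
    reminders.append(item)

def build_prompt(message: str) -> str:
    """Append pending reminders to the prompt you were sending anyway,
    so they ride along in the conversation history for free."""
    if not reminders:
        return message
    tail = "\n\nOh, also remind me later to: " + "; ".join(reminders)
    return message + tail

remember("buy milk")
remember("reply to Dana")
print(build_prompt("Review this function for edge cases."))
```

The cost of capture drops to near zero because the context switch disappears: the reminder piggybacks on a message you were composing regardless.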

Provide certainty (even if it isn't real)

AI can help you avoid decision fatigue, which is often a subconscious way of stalling, by asking it to pick the "best option". This might not even be the best option and may be the equivalent of running an RNG, and yet the false authority with which AI presents its beliefs can be the very thing that makes you commit to a decision that was inconsequential anyway but was holding you up completely.

Anthropomorphism as shame

For some, feeling like you are chatting with "something" expecting a timely response is a better way to stay focused than doing a series of aimless Google searches that you might sigh and abandon in favor of birdwatching or scrolling TikTok. Doubly so when you use a voice agent.

Anthropomorphism as an aid

The "rubber duck" effect with coding can now be applied to many other disciplines, and of course coding itself, when interacting with AI — especially when you were initially unclear. The caveat is that LLMs often "fill in the blanks", since they are a talking rubber duck, so you need to be hypervigilant about where they're doing this (why not ask?) and act accordingly.

Reduce boring overhead

The formatting side of things — having to format text, sort columns, reorder rows in tables — is likely inconsequential to the task at hand, whether it's learning, producing a report, or otherwise. When you offload this "grunt-work" onto AI, it can 1/ keep you focused and prevent you from needing to context switch to these administrative-type tasks and 2/ make the tasks seem more appealing and substantive and less arduous in the first place.

Chunking and "reverse chunking"

"A chunk is a collection of basic units that are strongly associated with one another, and have been grouped together and stored in a person's memory." — Wikipedia
  • Phone number into groups, credit card into groups. What are the 4th and 5th digits of your brother's phone number? What is your brother's phone number? The former is harder to answer, most likely.
  • When you enter it into the phone or recite it chunk by chunk, you, the human, are able to transfer those chunks into expanded thought/information. Even though you've chunked it in your brain for easy memorization, you are able to easily answer the question by virtue of reciting the chunks. And, more relevantly to the discussion to come, you understand that you are splitting a 10 digit phone number into discrete chunks for easy memorization.
How Chunking Works

In isolation, the digits 1 2 3 4 5 6 7 8 9 0 are meaningless. Grouped, they snap into memorable chunks, and you can hear the chunking in how we recite things, e.g. one two three, pause, four five six, pause, seven eight nine ten.
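The phone-number example in code, just to make the grouping concrete (the `chunk` helper is mine, not a standard library function):

```python
def chunk(digits: str, sizes=(3, 3, 4)) -> list[str]:
    """Split a digit string into the familiar phone-number groups."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

number = "8005551234"
print(chunk(number))            # ['800', '555', '1234'] — three easy chunks
print("-".join(chunk(number)))  # 800-555-1234
```

Ten unrelated digits become three units, which is why you can recite the chunks easily but stumble when asked for the 4th and 5th digits in isolation.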

And now, the reverse...
Reverse Chunking: How AI Expands Your Ideas

You hand the AI a few high-level chunks ("meeting summary", "action items", "tone: professional") and it expands them into full output.
  • When you use AI to write a document or an email or expound upon an idea or thought, you are "reverse chunking" your high level ideas into something you, in all likelihood, will never read closely. In this case, however, you are decoupling the act of creation from the process of understanding. At best, you will skim the creation to make sure it didn't completely contradict your point, but you are essentially speed-reading something you yourself are putting your name on.
  • This isn't necessarily a bad thing; it depends on the stakes. You skipped having to put in time and attention to produce an output you may not be as familiar with. That sounds "bad", but it may be a worthwhile tradeoff in many cases. We do this with assistants all the time: lawyers have paralegals, bosses have secretaries. These professionals so strongly believe their time and attention are better spent elsewhere that they pay to outsource tasks requiring someone's attention to someone else. At the expense of true understanding.
  • However, LLMs are increasingly cheap, have much less overhead than hiring a real human, and don't invoke any level of shame in what we are asking them to do. Would you ask a living, breathing human being to organize some of the slop into coherent thought like we ask AI to do? Likely not. This means that we aren't as intentional with the cost of sacrificing our attention. This can lead to doing something that is unquestionably LESS productive during the time we would have been "paying attention". If attention is the scarcest resource, and AI lets us skip the attention-heavy part of many tasks, are we conserving attention for what matters, or just creating attention surpluses we immediately spend on distraction? We are taking the bad part of the attention vs. free time tradeoff (giving up attention) without making valuable use of the free time gained.
  • The tangent tax: AI often goes on a tangent, which sends you on a tangent. You prompt it for a simple email and end up with a 500-word treatise that sounds authoritative, so now you're down a rabbit hole verifying claims you never intended to research. In fact, the original version of my intro paragraph mentioning texting and driving had an aside that it was actually legal to do so in Montana (it is, at least at the state level!). When I asked GPT-5.2 Thinking to check for grammar, it got hung up on this fact, because it couldn't quite believe it either, and spent ~10 minutes doing deep research on random lawyer websites about the law.
  • You are providing full intent, AI is expanding upon it, but you aren't internalizing the expansion at anywhere near the same level as if you had expanded yourself. If the expansion is related to things like formatting tables, adding pleasantries in an email, summarizing a concept with which you are already deeply familiar, fine, no big deal. If you are using AI to expand on something that you would gain understanding had you expanded yourself, maybe not so good.
  • Nothing makes me roll my eyes more than "Claude-bros" claiming they have their AIs running 24/7, doing twenty things that they themselves couldn't do without AI. Good luck when five of those twenty things go off the rails at the same time and you have absolutely no understanding of what they were even doing when they were "working".
  • A telltale symptom: you look at AI-generated code or text and feel a vague nausea or confusion, because your brain recognizes it should understand this (it came from your prompt) but the chunks don't slot into your existing mental models.
The Reward Mechanism

Traditional: you do the work. Real effort in, stable reward out.

The reward mechanism is at stake here: you get to mark tasks as "done" perhaps more quickly than ever. However, not only are you less aware of what you have "done", but the entire reward mechanism of effort spent = perceived reward gained starts to go off the rails. This can be dangerous if you don't remain aware of it.

Traditional — effort builds to completion → proportional dopamine reward → growth.

Why We Need the Loading Screen

We've mistaken the buffering icon for a malfunction. That restless, itchy sensation when you're waiting for the dentist or standing in line—the "boredom" that AI now eliminates—isn't a bug in the system. It's the cognitive equivalent of a computer's idle process, defragmenting the hard drive of your mind.

We've confused eating with digestion. Reading a book, debugging code, or learning a concept is eating; the boredom that follows—the staring out windows, the doodling—is the digestion. AI offers IV nutrition that bypasses the stomach entirely. We're full but starving.

The blog post you're reading was written by a human staring at a blinking cursor, bored and frustrated, wrestling with the viscosity of these ideas. The gaps between the words—the pauses where you, reader, connect these concepts to your own life—are the empty rooms where you actually live.

DISTRACTED: a 15-second wait turns into 15 minutes of drift. ASSISTED: AI handles the overhead while you keep the intent. INTENTIONAL: keep the loading screen, and keep ownership of your learning.

Don't sublet them all.
