Using ChatGPT as an applied cognitive enhancement system.

I’ve been using ChatGPT for a while now, in fairly conventional ways.

  • As a design buddy, to explore ideas and challenge assumptions, Q&A style
  • As a coder, to draft and debug code
  • As a callable API, embedded in scripts that generate drafts, options, or variations (a minimal sketch below)
  • As a prompt generation engine, to formalise reusable patterns
  • As a content generator, for first passes that I then refine heavily
  • As a CV focuser, to map a detailed career history to a specific job description

Very practical work: coding assistance, documentation scaffolding, and design exploration — not just abstract ideation or “testing ideas out”.
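
The "callable API" use is the most mechanical of these, so here is a minimal sketch of what I mean, assuming the OpenAI v1 Python SDK and an OPENAI_API_KEY in the environment. The model name and the draft_variations helper are illustrative, not a recommendation.

    # Sketch: ChatGPT as a callable API that generates variations
    # for a human to refine. Assumes the openai v1 SDK is installed
    # and OPENAI_API_KEY is set; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    def draft_variations(brief: str, n: int = 3) -> list[str]:
        """Return n candidate drafts for a brief; a human refines them."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            n=n,  # ask for n independent completions of the same prompt
            messages=[
                {"role": "system", "content": "Produce one concise draft."},
                {"role": "user", "content": brief},
            ],
        )
        return [choice.message.content for choice in response.choices]

The point is not the code; it is that the output is raw material to be refined, never the finished artefact.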

All useful. All incremental. All fundamentally transactional.

Recently, I had a genuine light-bulb moment: something qualitatively different happened.

Not fast outputs. Not better wording. A change in how the interaction itself functioned.


From producing artefacts to shaping cognition

Most AI interactions are probably transactional:

prompt → response → edit → move on

Even when the output is good, the value is usually ephemeral. Once the task is done, the thinking disappears with it.

In this recent piece of work, I noticed that the value wasn’t in the answers at all (although they were good). It was in the structure emerging around the answers.

Instead of asking:

  • “What should I say?”
  • “Can you generate X?”

I found myself asking, in the context of an applied knowledge workflow:

  • “What structure would make this recallable under pressure?”
  • “How do I reduce cognitive load without losing depth?”
  • “What is the minimal architecture that scales across contexts?”

At that point, the AI stopped feeling like a tool producing artefacts and started participating in the emergence of an applied cognitive system — one designed to support reasoning, recall, and judgment over time.

That was new, at least in my own usage.


An unexpected analogy (and why it mattered)

At one point, the interaction reminded me of an imagined scene from Star Trek.

Spock interrogates the ship’s computer about the likelihood of successful ship-to-ship transport while travelling at warp — a scenario with no direct precedent (possibly dredged up from Into Darkness by my subconscious).

The computer doesn’t decide; Spock is the agent. The computer extrapolates possibilities. Spock applies judgment, context, and restraint.

That distinction felt important.

The value wasn’t the machine declaring an answer. It was the disciplined questioning that allowed a conclusion to emerge, with uncertainty explicitly acknowledged.

That’s the closest analogy I’ve found for what was happening here: not outsourcing thinking, but interrogating a system to make thinking visible enough to shape deliberately.


Back to reality: this only works if you capture it

Here’s the critical part — and where this stops being a nice metaphor.

Without explicit capture, this kind of work evaporates just as quickly as ordinary prompt–response interactions.

Usually I’d be upstream of the thinking. What made the difference in this case was deliberately working mid-stream and downstream:

  • participating as an agent in the applied cognitive system
  • articulating the structure that had emerged
  • writing it up as a formal specification (one possible shape is sketched at the end of this section)
  • defining boundaries, constraints, and non-goals
  • treating it as a system that could be inspected, challenged, and reused

In other words: applying the same discipline you would apply to any serious delivery work.

Without that step, this would just have been another interesting conversation.
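
For what it’s worth, the capture step doesn’t need heavyweight tooling. Here is a minimal sketch of one possible capture record, in Python purely for concreteness; the structure mirrors the list above, and the field names (boundaries, constraints, non_goals) are my own labels, not any standard.

    # Sketch: a minimal capture record for an emergent cognitive system.
    # Field names are my own labels, chosen to mirror the list above.
    from dataclasses import dataclass, field

    @dataclass
    class SystemSpec:
        name: str
        purpose: str                                           # what the system is for
        boundaries: list[str] = field(default_factory=list)    # where it applies
        constraints: list[str] = field(default_factory=list)   # rules it must obey
        non_goals: list[str] = field(default_factory=list)     # what it deliberately won't do
        open_questions: list[str] = field(default_factory=list)

    spec = SystemSpec(
        name="Recall scaffold",
        purpose="Support reasoning and recall under pressure",
        non_goals=["Generating finished prose"],
    )

Anything that can be serialised, inspected, and diffed will do; the point is that the structure survives the conversation.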


Why this isn’t “vibe coding” (or rose-coloured AI optimism)

I’m deliberately sceptical of the current wave of tired metaphors:

  • “AI thinks”
  • “AI reasons”
  • “AI replaces expertise”

None of that is necessary to explain what’s useful here.

What is defensible — and well supported by research — is that externalising cognition can improve reasoning quality, particularly under complexity and time pressure. This has been studied for decades in areas such as:

  • distributed cognition (Hutchins)
  • cognitive load theory (Sweller)
  • metacognition and reflective practice (Schön)

Large language models don’t change those fundamentals. They simply make certain forms of externalisation cheaper and more interactive.

That doesn’t eliminate the need for judgment. It makes the absence of judgment more obvious.


Are people doing this now?

I think some are — quietly.

You can see adjacent practices in:

  • pair programming (human–human)
  • rubber-duck debugging (explaining code line-by-line to an inanimate object)
  • architectural decision records
  • facilitation techniques that externalise reasoning

What still seems relatively rare outside academia is treating AI explicitly as:

a scaffold for designing thinking, rather than a generator of finished outputs

That rarity isn’t a technology problem. It’s a discipline problem.

This approach only works if:

  • you stay in control of direction
  • you challenge assertions
  • you enforce guardrails
  • and you’re willing to slow down at the right moments

Without that, the interaction collapses back into pattern-filling and over-assertive prose — which is easy to spot and rarely useful.


A cautious conclusion

I’m not presenting this as a universal model, or even a finished one.

What I’ve described is an emerging practice that feels structurally different from how I’ve used AI before — and different enough to be worth naming.

Not as hype. Not as replacement. But as a disciplined way of collaborating with a system to design thinking itself.

Whether that proves durable will depend, as always, on whether it survives contact with reality.

That’s the only test that matters. So far, this practice has resulted in the proprietary IMRS. Want to know what an IMRS is? Contact us.

No CTA beyond that; just interested in your thoughts.

Best regards

RichFM

Hashtags: #AppliedAI #SystemsThinking #CognitiveDesign #KnowledgeWork #AIinPractice #CriticalThinking #HumanJudgement #ProfessionalPractice #GhostGen.AI

