Copilots and Chaos - The Employee Experience
Act 1, Chapter 6: When Agentic AI Replaces Perspective Instead of Process
Thank you for coming back after the Backstage tour. Let’s dive back into the act.
Let us talk about the most underrepresented piece in the Digital Transformation Theater - Us. This article is about us... the individuals inside the system. The people trying to get things done while drowning in context switches, Teams pings, and dashboards that still don’t talk to each other.
This is about employee experience.
And I am not talking about the kind that requires HR intervention in the shape of a yoga Friday or a mental health awareness campaign. I am talking about the raw reality of navigating messy tools, broken processes, and the creeping irritation that no amount of transformation, Design-led or otherwise, has been able to fix. We seem to have normalized the dysfunction.
And now, the Copilot has walked into this landscape. The Agentic AI. The Whispering Assistant has replaced the “Hey, it looks like you’re writing a letter” Clippy. The "partner" that promises to help us code, summarize, track, decide, prioritize, and remember.
But here’s the catch:
What we’re witnessing is the quiet shift from using technology to depending on it for judgment, flow, and even thought itself. Let’s dig deeper.
Ways of Working - past and present
Shadow Work 2.0
If I rewind to my early coding days, one memory stands out: how damn hard it all was. My brief stint in software development was defined by recursive errors, server crashes, and long debugging sessions with a friend, poring over lines of code to figure out where it all went wrong.
Today, some of that pain has been abstracted away - through better programming languages and cleaner UI-driven development. But for most knowledge workers, the struggle remains. The process demands a layer of cognitive effort—a mix of trial, intuition, and domain fluency. It still means being an artist to a certain degree. Which is a good thing.
The broken state of the employee experience
For years, and across orgs, I could never understand why the web applications we promised our clients looked nothing like the intranet portals we used ourselves. We pitched swank, but lived in swamp.
The experience would be so jarring, so absurd that I would gag. As a salesperson, I actually had to consider whether I wanted to travel at all for a business meeting if my travel expense claim would mean a guaranteed migraine. And moving to enterprise software (Teams, Slack et al.) didn’t fix it. If anything, it made life more miserable. The systems didn’t talk to each other. Add the complexity of a merger or acquisition, and suddenly you’re filling out five different timesheets across five different platforms. Intranets became graveyards. Communities were ghost towns. Corporate comms felt like marketing copy pasted into Outlook.
People were tired to begin with, but they soon became uncomfortably numb. Teams ran on unspoken “Jugaad” (an Indian art form where broken systems are kept alive through duct tape, divine intervention, and one guy who “knows a guy”) and spoken favors. Real coordination happened in unofficial WhatsApp groups - until, of course, they got banned.
To me, this seemed like the detritus of cultural decay.
This is just to demonstrate that the AI agents now stepping in with promises of clarity aren’t entering a vacuum.
They’re entering a workplace starved for clarity and connection. And because they’re so good at pretending to offer both, we may not notice what we’ve given up until it’s gone.
Are we looking at a new layer of behavioral dependency?
We’ve already seen this with Gen Z designers whose first instinct is to prompt instead of sketch. We’re seeing early signs of knowledge workers whose first response to a query is, "Let me check what Claude says."
I think, just like that, the muscle memory of trying, fumbling, and thinking independently will start to atrophy.
The Invisible Impact on Culture
When internal AI agents become default copilots across teams, a few things shift:
Portals vs People:
The good: Why bother navigating a 5-layer internal HR portal when you can just ask your AI agent to fetch the policy and summarize it? Good, right? Yes. Also efficient. Seamless.
The bad: Imagine the team that built and owns that process. Their role fades and their influence vanishes if they are also dependent on AI to create the policy to begin with.
Labor Shifts:
The good: The “helpful assistant” absorbs coordination work from middle managers and support roles, and the overhead decreases significantly.
The bad: The shift happens quietly, without formal acknowledgment, and the unofficial role of the person who “keeps it all together” vanishes. Genius gets lost.
Reprogramming of Rituals:
The good: Expense reports, timesheets, tax filings etc. become seamless.
The bad: Onboarding, knowledge sharing, retrospectives - all lose warmth and texture when mediated by AI summaries or prompts.
Communities are abandoned:
The good: There isn’t one. Internal communities were not very active in most workplaces to begin with.
The bad: Internal knowledge bases become obsolete. Organic interaction drops. Trust-building moments disappear. Water-coolers dry up. Spontaneous idea-sharing, brainstorming with actual brains, and ‘stumbling into insights’ are replaced by clean, filtered answers. Over time, people trust the AI more than their colleagues. AI becomes a surrogate authority, even in ambiguous situations. Juniors ask AI, not seniors - short-circuiting informal knowledge transmission and professional growth loops.
Trust Gaps:
This one is bad news all over: Over time, just as I can today read a paragraph and immediately know whether it was written by an agent, managers and eventually clients will wise up. They’ll know when a transformative idea is genuine, and when it was generated with a prompt. More than anything, the telltale sign will come when the client asks deeper follow-up questions, which they invariably will. And right there, as one fumbles to put together a coherent statement in contrast to the HBR-level articulation on the slide, trust will be obliterated.
The Human Cost
What about the individual then? What is the psychological, cognitive, and emotional toll on individuals navigating a co-working life with AI?
You’re sitting in a client meeting. The deck is tight. The phrasing is flawless. But when the client asks, “Tell me more about slide 8,” something in you panics. You didn’t write that sentence. You didn’t feel it form in your mind. So now you’re selling a thought that was never yours, however well versed in the product you may be.
Instant Competence Illusion: An AI Agent creates the feeling that you’re smarter for having asked. Whether it’s writing code, navigating policy or designing a pitch deck, it compresses complexity in a way that flatters your curiosity. That feeling - I’m learning fast, I’m in control, I’m doing so well - is addictive. You get better at asking, worse at solving.
Erosion of the identity of self: Repeatedly outsourcing expression (emails, decks, messages, even opinions) to AI creates a mirror-world self: polished, articulate - but disembodied. In meetings, people find themselves parroting what “the tool suggested” without feeling it as theirs. Eventually, it creates imposter syndrome 2.0, because your ability has been disintermediated. You show up with an output you didn’t internalize, owning a product you don’t recognize.
Loneliness scales: You’re “helped,” but never heard. You get answers, not acknowledgment. You function, but don’t feel seen. Agentic AI gives you clarity, not connection. And because it’s so good at what it does, you don’t notice what you’ve lost - until the isolation feels permanent.
Disintegration of inherent, intangible, tacit knowledge: This is the knowledge that you can’t quite explain. But you know, and you know you know. Like how to read a room, how to find a bug by feel, how to pick the exact moment to speak up in a tense meeting. That knowledge only grows through doing; outsource the doing to an agent, and it quietly dissolves.
Worst of all: Always polished without emotion: An Agent will always be articulate, never impatient, never judgmental. That means you can throw raw thoughts, dumb questions, broken grammar at it, and it’ll give you polish in return. That asymmetry creates a sense of false safety. You don’t deal with human emotion - no one to rubbish you, no one to ground you in reality, no empathy for others, no condescension, no anger, no frustration, no ridicule. The human friction that builds perspective, patience, and character? Absent.
What does this mean for Design?
The question has changed. While designers are the biggest proponents of “Where does the user fit into all this?”, the question with Copilots is -
What kind of humans are we nudging people to become?
If you’re a designer working on AI copilots, internal tools, or employee-facing experiences, you are no longer designing just interfaces.
Design for Agency, not Automation
AI tools should remove friction—yes. But be brutally clear on what kind of friction you’re removing. Helping someone skip a bureaucratic portal? Great. Helping them skip human collaboration or reflective thought? Questionable. You have to design tools that assist, not anesthetize.
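As a sketch of what that brutal clarity could look like in practice - the friction taxonomy and mode names below are my own illustration, not an established pattern or API:

```python
from enum import Enum

class Friction(Enum):
    BUREAUCRATIC = "bureaucratic"    # portals, forms, lookups
    COLLABORATIVE = "collaborative"  # conversations with colleagues
    REFLECTIVE = "reflective"        # thinking, drafting, deciding

def assistance_mode(friction: Friction) -> str:
    """Decide how far the copilot should go, based on what it would replace."""
    if friction is Friction.BUREAUCRATIC:
        return "automate"    # do it end-to-end; nobody will miss the portal
    if friction is Friction.COLLABORATIVE:
        return "facilitate"  # route to a person rather than answering on their behalf
    return "scaffold"        # offer structure and questions, not the finished thought
```

The code is trivial; the point is that “what kind of friction is this?” becomes an explicit design decision instead of an accident of the roadmap.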
Create Feedback Loops
A smart assistant shouldn’t just return answers—it should foster learning.
Can your tool nudge someone to check their understanding?
Can it say, “Here’s one way—do you want to explore another?”
You’re not just delivering content. You’re shaping cognition.
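Here is a minimal sketch of that loop, assuming a generic chat flow - `llm_answer` is a stand-in for whatever model call your copilot actually makes, not a real library function:

```python
import random

def llm_answer(question: str) -> str:
    """Stand-in for the real model call."""
    return f"(the model's answer to: {question})"

def answer_with_feedback_loop(question: str) -> str:
    """Return the answer, then hand the thinking back to the user."""
    answer = llm_answer(question)
    nudges = [
        "Here's one way - do you want to explore another?",
        "How would you explain this back to a teammate, in one sentence?",
        "What would you want to verify before trusting this?",
    ]
    # End the exchange with the user thinking, not just consuming.
    return f"{answer}\n\n{random.choice(nudges)}"

print(answer_with_feedback_loop("How do I claim travel expenses?"))
```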
Embed Humanity in the Edges
So much of employee experience lives in the transitions, in the micro-moments:
How the bot replies when someone is overwhelmed.
Whether it knows when to say “I don’t know.”
Whether it nudges a person to connect with a teammate, instead of replacing them.
You will have to figure out a way to design values disguised as UX decisions.
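To make that concrete, here is one hedged sketch of such values expressed as code - the overwhelm signals, confidence threshold, and `expert` lookup are all hypothetical choices, not a prescribed implementation:

```python
from typing import Optional

OVERWHELM_SIGNALS = ("overwhelmed", "drowning", "too much", "can't keep up")

def edge_aware_reply(message: str, confidence: float, expert: Optional[str]) -> str:
    # When someone sounds overwhelmed, slow down instead of piling on output.
    if any(signal in message.lower() for signal in OVERWHELM_SIGNALS):
        return "Let's take this one step at a time. What's the single most urgent piece?"
    # Honesty beats fluency: a copilot that can say "I don't know" earns trust.
    if confidence < 0.5:
        if expert:
            # Nudge the person toward a teammate instead of replacing them.
            return f"I'm not confident here. {expert} knows this area well - worth asking them directly."
        return "I don't know enough to answer this well. Where could we look together?"
    return "(normal answer path)"
```

Every branch in that function is a value judgment wearing a UX costume.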
Design with Organizational Psychology in Mind
You’re probably designing something for -
A company where middle managers might be quietly displaced by agents.
A culture where juniors stop asking seniors and just prompt AI.
A workplace where outputs get slicker, but ownership gets fuzzier.
Design for alignment. Design for trust. Design for shared authorship.
The Bottom Line
AI should reduce drudgery. It should make the system easier to navigate. But it shouldn’t replace the inner compass that makes someone a thoughtful, or in some cases a mission-critical, contributor.
Because if we’re not careful, the same tools meant to sharpen our edge... might just dull it.
And that’s the real chaos.