The Real Objection To AI

Technologies the public knows collectively as ‘AI’ are rapidly being integrated into seemingly every major software tool and rolled out across industries by corporate leaders anxious to ensure their employees do not fall behind the curve of technical advancement. While hurrying to integrate AI into every conceivable workflow, CEOs find themselves stymied by low AI adoption rates. This is not normal resistance to change or a typical tension between the C-suite and production staff; rather, a rift is forming that likely runs through the majority of white-collar workplaces, quietly dividing eager embracers of AI from those who respond with growing alarm and stiffening resistance – whether the pressure to adopt is explicit and top-down, or indirect, arriving through peer adoption and shifting delivery-timeline expectations. Leaders anxious to make sure their workforces embrace AI tools risk exacerbating this resistance by their very enthusiasm, because a pronounced leadership bias toward adoption tends to confirm the skeptics’ anxieties while suppressing open discourse, even unintentionally.

We can see the public’s broad concerns about AI reflected in polling numbers (https://x.com/davidshor/status/2033906961377316890), and we see especially high levels of mistrust when people hear government officials or tech company CEOs deny the threat AI poses to their livelihoods. Beyond that, there is a hard-to-quantify but loud cultural backlash from those who simply do not want to use AI, for a wide variety of reasons (https://www.bbc.com/news/articles/c15q5qzdjqxo). If the general public is swimming in a ferment of fear and mistrust of societal leaders, we should expect the same dynamic to pervade every office to a similar extent.

This naturally prompts leaders to attempt to address their staff’s concerns about AI, but in the absence of honest, equal communication and trust running vertically through an organization, there is a risk that only the most obvious fears of economic displacement are addressed – and the way in which they are addressed can even end up validating other real objections to AI. This is a problem that can sneak up on workplaces: those leading in good faith, who do not share these concerns, may not realize that the trust and open communication their organization once enjoyed has quietly eroded over the last few news cycles.

At this early stage much of the discourse is admittedly “vibes-based” and intangible, which invites everyone to lean on their personal biases and concerns as an interpretive heuristic – mine no less than yours. But there is already a documented gap emerging between senior management and workers: https://www.inc.com/kit-eaton/should-you-fire-employees-who-wont-learn-to-use-ai-tools/91267142. 64% of Americans said they planned to avoid using AI for “as long as possible,” and almost a third of workers are actively undermining their company’s AI initiatives: https://builtin.com/articles/ai-resistance-at-work.

To be sure, much of this may be due to worries about job stability and security, especially in the age of doomscrolling. While I am skeptical that AI adoption will shrink the total number of human jobs or the economic pie as a whole (this is just not how markets process major technological shifts), those who see AI encroaching faster than they can save toward retirement – among those who are managing to save at all – have valid concerns that demand credible answers. AI will likely create new jobs and bring gains in efficiency that will stoke demand for whatever people could not afford before, even things we have not yet thought to want – but the potential disruption could swamp many careers in progress, and while the economy may move on, individuals might not recover. So much now depends on planning ahead and investing in the long-term growth of one’s career that if AI makes credible long-term planning difficult for even a few years, and casts doubt on the belief that tomorrow will be much like today, then at the very least it will harm many people’s mental well-being, and it could easily be enough to derail many careers. Josh Tyrangiel explores this risk in this month’s Atlantic: https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/?gift=jUioLBatr3tIwuTcBrggCdEf3Au2KS8Xa7naR4lbA7w&utm_source=copy-link&utm_medium=social&utm_campaign=share. But the continuance of today’s careers tomorrow is far from the only concern.

In justifying the need to embrace AI, some leaders talk about how “change will never be slower than it is today” – an axiom pronounced like a bracingly future-focused, optimistic business mantra, but one which to many employees sounds more like a threat of singularitarian apocalypse: not Terminators or Cylons, but ever-increasing speed, frenetic urgency, and an asymptotic learning curve at work, each employee chased by the specter of their own obsolescence. If leadership seems unconcerned by this prospect of ever-accelerating change, and has no answer or vision for employees beyond embracing adaptability and curiosity – indispensable traits, but insufficient to believably answer this threat – and otherwise enjoying the ride, many people will lose trust in their leaders’ ability to spot what staff see as an iceberg looming over the starboard bow. It is not enough to promise job security by pointing to the indispensable value of human work, or to highlight how AI will reduce burdensome labor while creating new economic opportunities. All of this is true, but it is unhelpful if the individual can still foresee a major disruption in their career with no means to plan for what does not yet exist beyond it, or a gradual or even exponential worsening of their work situation, or if they anticipate personal costs they would incur by becoming dependent on AI tools.

People are not just concerned with having a job, but with whether their job will get worse. I think specifically of Cory Doctorow’s speech articulating the concept of the “reverse-centaur” (https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington). If all brain-resting drudge work is automated, one will still need to fill the day with productivity – only now the day collapses into the concentrated work that demands human judgment and focused cognition. So far, so good. But humans have never made a general practice of working at full cognitive intensity for the entire day. Some will doubtless thrive – but while a few may leverage AI to execute at a much faster pace, it is easy to foresee expectations for all existing jobs shifting not in a way that empowers most employees with new tools, but in a way that burdens them with demands for skill adoption and increased production that may be attainable for some, but miserable or unreachable for others. Instead of assuring employees that they will continue to be needed, leaders who want to honestly engage employee reticence must offer a credible vision for how productivity gains will not simply intensify each human’s workload, transforming someone who produces deliverables into someone who manages a team of agents producing at a much faster pace. That may sound like a powerful way to improve efficiency, especially if you have thrived as a manager – it resembles, after all, what you do. But I am very skeptical that everyone can simply be scaled up into middle managers of machines without being caught in a spiral of effort just to keep up. And that is just not what most folks signed up for.

Having said all that, the true reason I wrote this – the real, dire objection to AI in the workplace – is not the loss or degradation of one’s job, but the far more essential fear that you, yourself, will be degraded by using AI. This may not be an economic concern, but if you care about understanding AI resistance in the workplace, or about the human life that economic activity supports, it is crucial to understand. Many of us are not worried about losing our jobs to a general employment recession caused by AI, or even about having to learn new technologies and, essentially, new jobs – that all comes with the territory. More frightening by far is the concern that economic pressure, whether top-down or from peer competition, will force those who wish to maintain a middle-class lifestyle to integrate AI tools into things like research, writing, and ultimately, every cognitive aspect of work. This is not groundless paranoia: in a recent report, nearly 60% of corporate executives said they would replace employees who resisted integrating AI tools into their workflows (https://www.globenewswire.com/news-release/2025/11/12/3186407/0/en/Most-Executives-Say-Ignoring-AI-Is-a-Bigger-Threat-to-Your-Career-Than-the-Tech-Itself.html).

This is frightening not because the AI tools might fail to produce quality deliverables, but because substituting AI for human cognition might quickly atrophy one’s ability to think and function and create as an independent person without it. I could link to so many articles and blogs where people explain the credible logic behind this fear – and not just from the humanities-diploma-carrying liberals like me that you’d expect. Here’s conservative standard-bearer National Review (https://www.nationalreview.com/2026/02/ai-and-our-collapsing-creative-horizons/), and here’s the ever-interesting Ross Douthat (https://www.nytimes.com/2026/02/10/opinion/ai-politics-left-progressive.html?unlocked_article_code=1.W1A.JL_d.PtPnqUGU0Wfj&smid=url-share). But I think Sahil Bloom explains it well here: http://sahilbloom.substack.com/p/the-real-ai-risk-nobody-told-you.

I’m no Luddite; I love technological development, and I often point out that absent innovative scientific advances in agriculture, the world could never have supported 8 billion people. I’m not advocating that we avoid using AI where it is genuinely useful – so long as it does not impose a cost on the human person in the process. I think most leaders urging AI adoption in their companies are people of integrity who mean well and care about their employees, and who certainly know a great deal more than I do about management and the economic imperatives of their industries. What worries me is that when I hear many of them talk about AI, the concerns they speak to are reliability, quality, the difficulty of evaluating the appropriateness of any given use case, or job security in general. What I don’t hear executives address is whether any particular use case is good or bad for the human using it – as a mind, as a creator, as a person.

I’m not trying to champion an atavistic return to simpler days, ignorant of technology; I’m advocating for responsibility and discrimination in how we use it, because I continue to value all the things one learns (rightly) to value growing up in a society like ours – individualism, curiosity, creativity, critical thinking, literacy, and hard work. What troubles me so is that we seem to be running toward what might be a cliff, with employees carried along like lemmings by the mass pressure of the market – and what baffles me is that this stampede is led not by impetuous youths but by grey-haired and sober-minded executives. If they do not understand the actual nature of the objection many workers have to AI, leaders will continue to speak about adoption and its risks in a way that ignores the elephant in the room, causing a large segment of their staff to lose trust in them and become more entrenched in indiscriminate resistance to all AI tools – and moreover, to stop communicating those concerns while still acting on them. Leaders risk, in short, totally bifurcating their workforce’s processes and cultures, without even being aware that it is happening.

I want to leave you with a couple of posts that articulate this problem better than I can, even if they focus less on the corporate workplace. Professor Alan Noble questions the whole premise that we should accept AI as inevitable: https://newsletter.oalannoble.com/p/why-should-we-just-accept-ai. And Susannah Black Roberts takes an even more fundamental, if more philosophically contentious, approach, which is well worth reading: https://radiofreethulcandra.substack.com/p/a-first-blast-of-the-trumpet-against?utm_source=profile&utm_medium=reader2.
