Jack Dorsey laid off 4,000 people and told the world most companies would follow within a year. A week later, research shows AI is making surviving workers' jobs harder, not easier. The real lesson for small businesses isn't about headcount — it's about what happens when you fire people before you've built the systems to replace what they actually do.
On February 26, Jack Dorsey announced Block — the parent company of Square, Cash App, and Afterpay — would cut roughly 4,000 employees, reducing its workforce from over 10,000 to under 6,000. Nearly half the company, gone. His justification was direct: AI productivity gains made those roles unnecessary. "A significantly smaller team, using the tools we're building, can do more and do it better," he wrote to shareholders.
Then he went further. In a Wired interview, Dorsey predicted most companies would reach the same conclusion within a year. His goal, he said, was for "the company itself to feel like a mini AGI."
Block's stock jumped on the news. Wall Street loved it. And across the business world, executives started asking a dangerous question: should we do the same thing?
If you run a small business, the answer is no. Not because AI can't transform your operations — it absolutely can. But because Block's playbook is almost certainly going to backfire, and it reveals exactly the wrong lesson to learn from this moment.
The Guardian interviewed seven current and recently laid-off Block employees across engineering, product, and AI-adjacent roles. Their assessment of whether current AI tools can actually replace workers at Block's scale was unanimous: not even close.
"You can't really AI that," said one laid-off product employee. "An employee is more than a series of tasks."
A current Block employee whose role involves helping others use AI tools was blunter: "There's a distinction between what's technically possible and just — pardon my French — whatever CEO bullshit will happen based on their own interpretation of how AI works."
Multiple employees told the Guardian they felt they'd been asked to build and train the very AI tools being used to justify their termination. One described it as "a thinly veiled attempt to get all this input from employees on what tasks to automate. You basically have employees teach you how to automate them out."
That last detail is the one that should concern every business owner reading this. Block didn't eliminate jobs because AI could do them. Block eliminated jobs and then declared AI would fill the gap. Those are fundamentally different things, and confusing them is how companies destroy institutional knowledge they cannot rebuild.
The same week Dorsey declared AI had made half his workforce expendable, new research painted a very different picture of what AI actually does to the people who remain.
A Wall Street Journal analysis of ActivTrak data covering 164,000 workers found that AI is increasing the speed, density, and complexity of work rather than reducing it. Time spent on email, messaging, and chat apps more than doubled after workers adopted AI tools. Time devoted to focused, uninterrupted work — the kind required for solving complex problems — fell 9%.
Read that again. AI tools aren't freeing up workers' time. They're consuming it. The tools generate more output, which creates more things to review, more things to respond to, more things to coordinate around. The cognitive load goes up, not down.
This is exactly what happened at Amazon the same week. Fortune reported that four high-severity incidents hit Amazon's retail website in a single week — including a six-hour meltdown that locked shoppers out of checkout. An internal document initially identified "GenAI-assisted changes" as a factor, before that reference was scrubbed. Amazon's response was to add more human review of AI-assisted changes. The company that has cut over 30,000 workers in the past two years, partly citing AI-driven "efficiency gains," just discovered it needs more humans in the loop, not fewer.
The pattern is consistent. Companies cut people, lean harder on AI tools, discover the tools create new categories of problems that require human judgment to solve, and then scramble to patch the gaps. The cycle repeats.
Anthropic — the company behind Claude, arguably the most capable AI model on the market — published research this month that should be required reading for every executive considering AI-driven layoffs.
Their finding: in computer and math occupations, AI could theoretically handle 94% of tasks. The share of tasks actually being automated today? 33%.
That 61-percentage-point gap between "could theoretically do" and "is actually doing" is the most important number in AI right now. It represents every legal constraint, every integration challenge, every institutional process, every edge case, every judgment call, and every piece of tacit knowledge that prevents theory from becoming production reality.
Office administration had the highest observed automation rate at about 40%, against a theoretical ceiling of 90%. Even in the most automatable categories, we are nowhere near replacing what humans do. Not because the models aren't smart enough — they often are — but because the organizational infrastructure to actually deploy AI at those tasks safely, reliably, and at the quality level customers expect does not exist yet at most companies.
Dorsey's layoff math assumes the 94% number is the one that matters. The Anthropic data says the 33% number is the one that's real.
Let's be precise about what Block's cuts actually look like in practice.
Block didn't deploy autonomous AI agents that handle customer support, process payments, manage compliance, and ship features — and then discover it had too many humans. That would be a legitimate AI transformation story. Difficult and painful, but at least grounded in demonstrated capability.
What Block did was mandate that employees use AI tools more aggressively, observe that some tasks got faster, project that trajectory forward to assume massive headcount reduction was possible, and then execute the reduction before validating the projection.
Goldman Sachs estimated in February that AI had already driven 5,000 to 10,000 monthly net job losses in the US in 2025. The pace is increasing. But the companies driving those losses are not, by and large, the ones deploying AI most effectively. They're the ones most aggressively using AI as a narrative to justify cost cuts to investors.
Block's stock went up after the layoff announcement. That tells you everything about who the layoffs were for. They were for Wall Street, not for the business.
Here is the part of this story that matters if you're running a 15-person company, a 50-person company, or a 200-person company.
Block's mistake — and it is a mistake, one they will spend the next two years paying for — is the kind of mistake only a large company can make. It requires layers of abstraction between the people making the decision and the people doing the work. It requires a CEO who can look at AI productivity metrics on a dashboard without understanding what those metrics miss. It requires a board that rewards narrative over execution.
Small businesses don't have that luxury. And that's the advantage.
When you have 20 employees and you're considering how AI fits into your operations, you know exactly what each person does. You know the judgment calls Sarah makes when a customer escalates. You know the institutional knowledge that keeps your billing process from breaking. You know the difference between the tasks that show up on a job description and the actual work that keeps the business running.
That knowledge — the knowledge of what your people actually do, in all its messy, undocumented, judgment-heavy reality — is exactly what Block's executive team did not have when they decided to cut 4,000 people. And it's exactly what you need to deploy AI effectively.
The companies getting AI deployment right are not starting with headcount reduction. They are starting with understanding. Here is what that looks like in practice.
Map your work before you touch your team. Pick your three most time-consuming workflows. For each one, break it into individual steps. Which steps are pure execution — data entry, formatting, routing, scheduling? Which steps require judgment — evaluating a customer's situation, deciding whether to escalate, interpreting ambiguous information? The execution steps are candidates for AI. The judgment steps are not. Not yet.
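The mapping exercise above can be sketched in a few lines of code. This is a hypothetical illustration, not a prescription: the workflow, step names, and classifications are invented for the example, and your own map will look different.

```python
from dataclasses import dataclass

# Hypothetical sketch of the workflow-mapping exercise: break one
# workflow into steps, tag each as pure execution or human judgment,
# and surface the execution steps as AI candidates.

@dataclass
class Step:
    name: str
    kind: str  # "execution" (mechanical) or "judgment" (human call)

invoice_workflow = [
    Step("extract invoice fields", "execution"),
    Step("match invoice to purchase order", "execution"),
    Step("flag unusual amounts for review", "judgment"),
    Step("decide whether to dispute a charge", "judgment"),
    Step("file approved invoice", "execution"),
]

# Execution steps are candidates for AI; judgment steps stay with humans.
ai_candidates = [s.name for s in invoice_workflow if s.kind == "execution"]
human_steps = [s.name for s in invoice_workflow if s.kind == "judgment"]

print("AI candidates:", ai_candidates)
print("Keep with humans:", human_steps)
```

The value isn't the code itself; it's being forced to name every step and commit to a classification, which is exactly the knowledge Block's executives were missing.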
Build the AI infrastructure before you change the org chart. This means getting your data accessible. It means testing AI tools against real workflows with real data, not demos. It means discovering where the tools break — because they will break — before you've removed the humans who would catch those failures. Amazon learned this the hard way. You don't have to.
Treat AI as amplification, not replacement. The ActivTrak data shows what happens when you bolt AI onto existing workflows: people work harder, not smarter. The fix is not "add AI and remove humans." The fix is redesign the workflow around what AI does well (processing, synthesizing, generating drafts) and what humans do well (judgment, relationship management, exception handling). Then staff the redesigned workflow appropriately.
Measure outcomes, not activity. Block measured AI productivity in terms of tasks completed faster. That metric missed everything that matters: quality of output, customer satisfaction, institutional knowledge preservation, error rates in production. When you're evaluating AI in your business, measure what actually affects your customers and your revenue. Speed of task completion tells you almost nothing about whether AI is making your business better.
Dorsey is not entirely wrong about the direction. AI capability is increasing at a pace that genuinely will change how companies are staffed. The 33% actual automation rate in Anthropic's data is a snapshot, and it will almost certainly climb as models improve, tooling matures, and the infrastructure for deployment catches up.
He is right that companies need smaller, more capable teams that use AI as a force multiplier. He is right that the old model of throwing headcount at problems is becoming economically obsolete. He is right that executives who ignore AI are going to find themselves running companies that can't compete.
Where he is wrong is in the sequencing. You build the capability first. You validate it in production. You document what works and what doesn't. You redesign workflows around demonstrated — not projected — AI performance. And then, gradually, you adjust your team to match the new reality. You do not fire half your company and declare victory.
Dorsey's mistake is a timing mistake. He is executing a 2028 org chart in 2026 and expecting the tools to catch up. Maybe they will. Maybe Block's remaining 6,000 employees will be the most productive workforce in fintech. But the employees who talked to the Guardian are telling you what the interim looks like: chaos, institutional knowledge loss, and AI tools that aren't ready for the weight being placed on them.
We deploy and manage AI agent systems for businesses. We have done this enough times to know what works and what doesn't. Here is the sequence:
Phase 1: Understand what you have. Document your actual workflows. Not the ones in the employee handbook — the real ones. The workarounds, the judgment calls, the institutional knowledge that lives in people's heads. This is the hardest phase and the one every company wants to skip. Don't skip it.
Phase 2: Identify the seams. Every workflow has points where human judgment intersects with mechanical execution. Those seams are where AI creates value. A customer support workflow might have a mechanical step (categorizing the inquiry) followed by a judgment step (deciding whether to offer a refund). AI handles the categorization. The human handles the refund decision. The seam is where you design the handoff.
Phase 3: Deploy with guardrails. Run AI tools alongside humans, not instead of them. Track where the AI output matches human judgment and where it diverges. The divergence points tell you where the AI isn't ready and where your workflow design needs adjustment.
Phase 4: Iterate and expand. As AI tools improve — and they improve fast — some judgment steps will become mechanical steps. What required a human decision six months ago might become automatable. Continuously evaluate and adjust. But only reduce human involvement where you have production evidence that the AI handles it reliably.
Phase 5: Adjust the team. Only after phases 1 through 4 have demonstrated where AI genuinely replaces human work should you consider staffing changes. And even then, the right move is often redeployment — moving people from tasks AI handles to tasks AI can't. The person who used to categorize support tickets might become the person who reviews AI categorization for edge cases and trains the system on new patterns.
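Phase 3 is the one with a concrete measurement loop, and it can be sketched simply. In this hypothetical example, the AI and a human both make the same call on each case, and you track where they diverge; the case data, decision labels, and function are illustrative only.

```python
from collections import Counter

# Hypothetical Phase 3 sketch: run the AI alongside humans and track
# where its output diverges from the human decision.

def divergence_report(records):
    """records: list of (case_id, ai_decision, human_decision) tuples."""
    diverged = [(cid, ai, hu) for cid, ai, hu in records if ai != hu]
    rate = len(diverged) / len(records) if records else 0.0
    # Count which AI decisions most often disagree with humans --
    # those are the spots where the AI isn't ready yet.
    hotspots = Counter(ai for _, ai, _ in diverged)
    return rate, hotspots

records = [
    ("t1", "refund", "refund"),
    ("t2", "escalate", "escalate"),
    ("t3", "refund", "deny"),
    ("t4", "deny", "deny"),
    ("t5", "refund", "escalate"),
]
rate, hotspots = divergence_report(records)
print(f"divergence rate: {rate:.0%}")   # 2 of 5 cases diverged
print("hotspots:", hotspots.most_common())
```

A falling divergence rate over weeks of production data, not a demo, is the evidence that earns a step a move from the judgment column to the execution column in Phase 4.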
This is slower than Block's approach. It is also dramatically more likely to work.
Jack Dorsey fired 4,000 people and told the world AI made them unnecessary. Research from that same week shows AI is making surviving workers' jobs harder. Amazon's AI tools crashed its own website. And Anthropic's data shows the gap between what AI can theoretically do and what it actually does is enormous.
The lesson for small businesses is not "do what Block did, but smaller." The lesson is that the companies treating AI as a justification for headcount reduction are the ones most likely to get burned. The companies treating AI as a tool that requires careful integration, workflow redesign, and ongoing human oversight are the ones building durable competitive advantage.
You don't need to fire half your team. You need to understand what your team actually does, figure out which parts AI can handle reliably today, and build systems that let humans and AI work together effectively. That's less dramatic than Dorsey's approach. It also works.
Should your business follow Block's lead? No. Block's approach — cutting headcount first and hoping AI fills the gap — is a large-company move designed to impress investors, not a proven operational strategy. Small businesses should deploy AI alongside their existing teams, validate it works in production, and only adjust staffing based on demonstrated results. The Anthropic research shows that even in the most automatable fields, actual AI automation rates are roughly a third of what's theoretically possible.
Which tasks can AI handle in your business today? Start by mapping your workflows into execution steps and judgment steps. Execution steps — data entry, scheduling, formatting, categorization — are strong candidates for AI automation today. Judgment steps — customer escalation decisions, strategic choices, relationship management — still require humans. Test AI tools against your real workflows with real data before making any staffing decisions.
Won't AI eventually be able to do all of this? Eventually, more tasks will become automatable. But "eventually" is doing a lot of work in that sentence. Anthropic's own research shows that in computer and math occupations — among the most AI-susceptible fields — only 33% of tasks are actually being automated today, against a theoretical ceiling of 94%. Legal constraints, integration challenges, institutional processes, and quality requirements slow deployment dramatically. Plan for where AI is today and next year, not for a theoretical future.
What happens to the workers who remain after AI-justified layoffs? The ActivTrak study of 164,000 workers provides a clear picture of what typically happens: surviving employees' workloads intensify rather than lighten. Email and messaging time more than doubled. Focused work time dropped 9%. Block employees told the Guardian that AI tools are helpful but nowhere near capable enough to absorb the work of 4,000 departed colleagues. The remaining team is working harder, not smarter.
We deploy AI agent systems that work alongside your existing team. We start by understanding your actual workflows — the real ones, not the org chart version. We identify specific points where AI creates value, deploy with guardrails and human oversight, and continuously measure outcomes against business metrics. We don't tell you to fire half your team and hope for the best. We build systems that make your existing team dramatically more effective, and we manage those systems so you don't need AI expertise in-house.
Written by
Founder, Associates AI
Mike is a self-taught technologist who has spent his career proving that unconventional thinking produces the most powerful solutions. He built Associates AI on the belief that every business — regardless of size — deserves AI that actually works for them: custom-built, fully managed, and getting smarter over time. When he's not building agent systems, he's finding the outside-of-the-box answer to problems that have existed for generations.
Want to go deeper?
Book a free discovery call. We'll show you exactly what an AI agent can handle for your business.
Book a Discovery Call