Wall Street's AI scare trade erased $611 billion from software, insurance, logistics, and real estate stocks in 10 days. The sell-offs weren't driven by AI capability — they were driven by fear. The companies that survive this moment are the ones that invest in real AI capability instead of performative AI announcements.
In February 2026, a company called Algorithm Holdings — formerly the Singing Machine Company, a karaoke product vendor with a $6 million market cap — put out a press release claiming its logistics platform could help customers scale freight volumes by 300 to 400% without adding headcount. Within hours, C.H. Robinson Worldwide, one of the largest freight brokerages on the planet, plunged 24%. The Russell 3000 trucking index had its worst day since Liberation Day. Billions in market cap evaporated from Dallas to Denmark.
That was just one of eight sectors hit in 10 days. The Jefferies Equity Trading Desk called it the "AI scare trade." Software and services stocks lost $611 billion in a single week. Insurance brokers, wealth management firms, commercial real estate, private credit, logistics — each sector cratered after a different AI announcement from a different company, following the exact same pattern: dump first, analyze later.
This matters for every small and mid-size business because the scare trade is not just a stock market phenomenon. It is changing how companies allocate resources, how boards set priorities, and how executives make decisions about AI. And most of those decisions are going to be wrong.
Wall Street has developed an autoimmune disorder. The immune system — risk repricing — is attacking healthy tissue because it can no longer distinguish between real AI disruption and press releases from former karaoke companies.
The mechanism of damage is straightforward. When a company's stock drops 15% on AI fears, the technology has not changed. But the organizational response has. A 24% stock drop at C.H. Robinson means a board meeting next week, a hiring freeze next month, and the Q2 roadmap getting torn apart and rewritten around "AI strategy" — whether or not the company has a coherent one.
Stock price drops do not just reflect reality. They create it.
A company whose stock craters on AI fears starts behaving as if AI is an existential threat, even when the actual technology is years away from threatening its core business. Innovation budgets get redirected from organic growth to performative AI partnerships. Headcount plans get revised downward — not because AI replaced anyone, but because the market priced in the expectation that it would.
Goldman Sachs CEO David Solomon said the sell-off was "too broad." JPMorgan strategists see room for a software rebound, citing "an overly bearish outlook on AI disruption." They are probably right about the stock recovery. But the correction, when it comes, will not undo the organizational decisions made during the panic. The hiring freeze was real. The budget reallocation was real. The strategic damage will take months or years to unwind.
The scare trade is becoming a self-fulfilling prophecy — not because AI is doing the disrupting, but because the market reaction to AI is forcing companies into a defensive crouch that makes them more vulnerable to actual disruption.
Here is the loop. Company stock drops 15%. Board demands an AI strategy. CEO signs a splashy vendor partnership and announces a "transformation initiative." The money comes from somewhere — product engineering, customer success, operations. The people who actually understand the business get cut so the press release can go out. Twelve months later, the company that gutted its domain expertise and signed a performative partnership discovers that it is less prepared for AI disruption than it was before the scare trade started.
Meanwhile, the competitor that used the same panic as cover to invest in genuine AI capability — testing real workflows, building institutional knowledge about where the technology works and where it breaks, developing the operational muscle to deploy agents that actually do something useful — that competitor is compounding advantage every quarter.
The companies that respond to a 15% stock drop by gutting their teams and signing a splashy AI partnership are the ones that will get actually disrupted in three years. Not by a karaoke company. By a competitor that used this moment to build instead of perform.
The scare trade's core error is treating every industry identically. There are at least three distinct categories of AI exposure, and the correct response to each is fundamentally different.
The first category is real, near-term exposure, and software development is the clearest case. Cursor, the AI coding editor, hit $300 million in annualized revenue faster than almost any software product in history. Palantir reported 70% revenue growth and guided to 61% growth for fiscal 2026. StrongDM reportedly spends $1,000 per developer per day on AI tokens while keeping only a minimal engineering staff for code review.
The market is roughly right about this category. SaaS companies whose business models depend on selling seats to humans are repricing for a reason. Per-seat pricing is in trouble. The companies that adapt their models will survive. The ones that do not will get disrupted. The timeline is years, not quarters, but the direction is clear.
The second category is exposure the market has mispriced on timeline, and wealth management is a good example. An AI tool that does tax planning cannot replace a wealth advisor any more than TurboTax replaced accountants. The value in wealth management is the relationship, the trust, and the behavioral coaching that keeps clients from panic-selling during downturns. The irony of wealth management clients panic-selling their wealth management stocks because of AI fears is almost too perfect.
Insurance brokerage is similar. AI rate comparison tools are useful. The actual work of a commercial insurance broker — negotiation, claims management, industry-specific risk assessment — is not automatable in the near term. These sectors will change. They will not change this quarter.
The third category is noise. A former karaoke company's press release about freight optimization does not invalidate C.H. Robinson's relationships with 100,000 shippers and carriers, its proprietary data on freight lanes and pricing, or its ability to handle the physical, regulatory, and contractual complexity of moving goods across borders. CBRE, which manages billions in property transactions, does not get automated because an AI model can draft a lease summary.
The market is not entirely wrong that disruption is coming across all three categories. It is catastrophically wrong about the timeline, and it is choosing to react to companies that have no credible path to delivering the disruption the market is pricing in.
The mispricing is where the real story lives — for businesses deciding how to respond.
If your industry just got hit by the scare trade, or if you are watching it approach, the temptation is to react in one of two ways: panic and buy a vendor solution, or dismiss it and do nothing. Both are wrong.
The difference between a company that weathers AI disruption and one that gets eaten by it is not which vendor partnership they signed. It is whether anyone inside the organization actually understands what AI can do in their specific domain — today, not in theory.
That understanding does not come from a vendor demo. It comes from testing. Take your team's most repetitive workflow — follow-ups, scheduling, data entry, standard customer inquiries — and run it through an AI tool. Not as a demo. As an actual parallel process with real data, tracking where it works and where it breaks.
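As a minimal sketch, that parallel run can be as simple as a loop that feeds real cases to the tool and records pass/fail against the human-produced baseline. The `agent` and `judge` callables here are placeholders for your AI tool and your own pass/fail criteria, not any specific vendor API:

```python
from dataclasses import dataclass, field

@dataclass
class TrialResult:
    case_id: str
    passed: bool
    note: str = ""

@dataclass
class WorkflowReport:
    results: list[TrialResult] = field(default_factory=list)

    def record(self, case_id: str, passed: bool, note: str = "") -> None:
        self.results.append(TrialResult(case_id, passed, note))

    def pass_rate(self) -> float:
        if not self.results:
            return 0.0
        return sum(r.passed for r in self.results) / len(self.results)

    def failures(self) -> list[TrialResult]:
        # The failures are the valuable output: they map where the tool breaks.
        return [r for r in self.results if not r.passed]

def run_parallel_test(cases, agent, judge) -> WorkflowReport:
    """Run each real case through the agent; `judge` compares the agent's
    output to the human-produced baseline and returns (passed, note)."""
    report = WorkflowReport()
    for case in cases:
        output = agent(case["input"])
        passed, note = judge(output, case["human_output"])
        report.record(case["id"], passed, note)
    return report
```

The point is not the code — it is that the failures list, not the demo, is what you bring into the room.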
The person inside your company who can walk into a room of panicking executives and say "I tested this — here is what it actually does with our contract review workflow, here is where it fails, here is the implementation plan, and here is what it costs" is now the most valuable person in your organization. Build that person. Be that person.
Most of the scare trade damage comes from a single failure: inability to distinguish between "AI can do this task" and "AI can do this task well enough to replace the human doing it in our specific business context."
An AI agent can draft a customer service response. Whether that response is good enough to send without human review depends on your product complexity, your customer expectations, your regulatory environment, and a dozen other factors that no press release accounts for. The distance between "an AI can generate text about insurance" and "an AI can replace your insurance broker" is enormous.
The operational skill here is maintaining an accurate, current understanding of where the capability boundary sits for your domain. Not where a vendor says it sits. Not where the stock market says it sits. Where it actually sits, based on testing in your workflows with your data.
This calibration needs updating constantly. What was true about model capability three months ago is not true today. The agents we deploy for clients shift behavior with every model update. Businesses that set their understanding of AI capability once and never update it are either over-trusting or under-using the technology. Both errors are expensive.
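One lightweight way to keep that calibration current is to tag every test result with the model version that produced it, then compare pass rates per workflow after each update. This is a sketch under the assumption that you log results as simple (version, category, passed) records; the function names are illustrative, not any standard tooling:

```python
from collections import defaultdict

def pass_rate_by_version(results):
    """results: iterable of (model_version, task_category, passed) tuples.
    Returns {(version, category): pass_rate} so a regression after a model
    update is visible per workflow, not just in aggregate."""
    totals = defaultdict(lambda: [0, 0])  # (version, category) -> [passed, total]
    for version, category, passed in results:
        bucket = totals[(version, category)]
        bucket[0] += int(passed)
        bucket[1] += 1
    return {key: passed / total for key, (passed, total) in totals.items()}

def regressions(rates, old_version, new_version, threshold=0.05):
    """Categories where the new model version's pass rate dropped by more
    than `threshold` relative to the old version."""
    flagged = []
    for (version, category), rate in rates.items():
        if version != new_version:
            continue
        old_rate = rates.get((old_version, category))
        if old_rate is not None and old_rate - rate > threshold:
            flagged.append(category)
    return flagged
```

A per-category view matters because model updates rarely regress uniformly: a new version can improve drafting while quietly degrading classification.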
The scare trade narrative assumes a binary: either AI replaces humans or it does not. The companies actually succeeding with AI reject that binary entirely.
Production AI deployments are seam design exercises. Which phases of a workflow does the agent handle? Which need human judgment? What artifacts pass between the agent and the human reviewer? What verification happens at each transition point?
Klarna's lesson is instructive. Its AI agent resolved customer service tickets in 2 minutes versus 11 minutes for humans. Technically brilliant. But the rollout displaced 700 human agents who held undocumented institutional knowledge — specifically, the judgment about when to be efficient and when to be generous with a customer. The AI did not have that judgment because nobody encoded it. The seam between "tasks the agent handles" and "tasks requiring human judgment" was not designed. It was bulldozed.
The right approach: define the transition points explicitly. For this type of customer inquiry, the agent handles it autonomously. For that type, it drafts a response and a human reviews it. For ambiguous cases, it escalates with full context. These boundaries are not static — they shift as models improve and as you learn where the agent's failure modes actually sit. But they must exist from day one.
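Those transition points can live in code as explicit, auditable rules rather than in anyone's head. This is a hypothetical sketch: the inquiry types and the confidence threshold are stand-ins you would replace with boundaries derived from your own test results:

```python
from enum import Enum

class Handling(Enum):
    AUTONOMOUS = "agent handles end to end"
    DRAFT_REVIEW = "agent drafts, human reviews before sending"
    ESCALATE = "human handles, agent supplies full context"

# Illustrative routing rules; real boundaries come from your own testing.
AUTONOMOUS_TYPES = {"order_status", "password_reset"}
DRAFT_TYPES = {"billing_question", "shipping_delay"}

def route(inquiry_type: str, confidence: float, min_confidence: float = 0.8) -> Handling:
    """Pick a handling mode from explicit rules.
    Low classifier confidence always escalates, regardless of type."""
    if confidence < min_confidence:
        return Handling.ESCALATE
    if inquiry_type in AUTONOMOUS_TYPES:
        return Handling.AUTONOMOUS
    if inquiry_type in DRAFT_TYPES:
        return Handling.DRAFT_REVIEW
    return Handling.ESCALATE  # unknown types default to human judgment
```

Defaulting unknown cases to escalation is the design choice that matters: the agent's scope expands only when a type is deliberately promoted, never by accident.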
The scare trade is a transfer of organizational capital from companies that treated AI as somebody else's problem to companies that invested in understanding it. The stock drops are the visible part. The org chart reshuffling that follows determines your next five years.
Here is the asymmetry almost nobody inside a panicking company is thinking about: every company panicking about AI is about to spend heavily on AI capabilities. That spending creates roles, budgets, initiatives, and career paths that did not exist months ago. The people who combine domain expertise with genuine AI fluency — not "I asked ChatGPT to write my emails" but "I tested these models against our actual workflows and I know where they work and where they break" — are now the highest-leverage hires in every industry.
For a small business, this means the investment that compounds is not a vendor contract. It is operational knowledge. Testing your workflows against AI tools. Documenting where they work and where they fail. Building escalation protocols. Developing a realistic, evidence-based understanding of what AI agents can handle in your specific business context.
That knowledge compounds every quarter as models improve. The vendor contract just renews.
The scare trade is creating a sharp split between two types of organizations, and the split will be visible within 12 months.
The builders are using this moment to invest in genuine AI capability. They are testing real workflows, not watching demos. They are developing institutional knowledge about what the technology does in their domain. They are hiring or developing people who can bridge the gap between domain expertise and AI fluency. They are designing agent deployments with proper verification architectures, security practices, and escalation protocols. The models keep improving. What they build keeps getting more useful. AI compounds for people who use it with operational discipline.
The buyers are reacting to stock pressure with performative responses. They sign a vendor partnership, issue a press release, cut some headcount to show the board they are taking the transition seriously, and pray the stock recovers. Twelve months from now, they have a logo on a slide deck and no institutional knowledge about how AI actually works in their business. The models improved during that year. The buyers did not.
The difference between these two paths is not budget. A small business testing AI against real workflows on a $50-per-month cloud instance is building. A Fortune 500 company signing a seven-figure vendor contract without testing anything internally is buying. The builder will outperform the buyer every time.
Q: Is the AI scare trade actually predicting real disruption, or is it pure market panic? A: Both, unevenly distributed. The market is correct that AI will restructure how many industries operate. It is catastrophically wrong about the timeline and the companies that will do the disrupting. A former karaoke company is not going to disrupt global logistics. But the reflexive sell-off is forcing real organizational decisions — hiring freezes, budget reallocations, strategic pivots — that will shape these industries for years regardless of whether the initial trigger was rational.
Q: My company's stock dropped on AI fears. What should I push for internally? A: Push for genuine testing, not performative partnerships. The most valuable thing your organization can do right now is have five people spend two weeks testing AI tools against your actual workflows and documenting specific results — what works, what fails, what requires human oversight. That produces an evidence base for strategic decisions. A vendor partnership without that evidence base is a press release, not a strategy.
Q: How should small businesses think about AI investment during the scare trade? A: Invest in operational knowledge, not vendor contracts. The single highest-ROI AI investment for an SMB right now is 20 hours of testing: take your three most repetitive workflows, run them through an AI agent, document where it performs well and where it fails, and design the human-AI handoff points. That gives you a realistic foundation for any future investment — whether that is expanding agent scope, hiring for AI fluency, or engaging a managed service.
Q: Are AI agents actually replacing jobs right now, or is that just market fear? A: AI agents are genuinely displacing some categories of work — particularly pattern-based tasks in software development, customer service triage, and data processing. They are not replacing entire roles in most industries. The nuance the market is missing is that the most successful AI deployments augment human work rather than replacing it. The judgment, relationship management, and domain expertise that make a wealth advisor or insurance broker valuable are not automatable in the near term. The process work those professionals do alongside the high-judgment work is increasingly automatable. The businesses that figure out which is which — and design their operations accordingly — will gain a durable advantage.
Q: What is the biggest mistake a small business can make right now in response to AI pressure? A: Cutting the people who understand your business to fund an AI initiative led by people who do not. Your domain experts — the people who know which customer inquiries require special handling, which regulatory nuances matter, which processes have hidden dependencies — are the foundation for any effective AI deployment. Without their knowledge encoded into agent configurations, escalation protocols, and verification criteria, the AI investment produces generic output that does not account for what makes your business work. The 700 agents Klarna laid off took institutional knowledge that the company is still trying to reconstruct.
Associates AI operates production AI agents for small and mid-size businesses — the operational architecture that turns an AI platform into a business capability. Scope definition, security hardening, verification testing, escalation design, and ongoing calibration as models and your business evolve. If the scare trade convinced your board you need an AI strategy and you want one built on evidence instead of fear, book a call.
Written by Mike, Founder of Associates AI
Mike is a self-taught technologist who has spent his career proving that unconventional thinking produces the most powerful solutions. He built Associates AI on the belief that every business — regardless of size — deserves AI that actually works for them: custom-built, fully managed, and getting smarter over time. When he's not building agent systems, he's finding the outside-of-the-box answer to problems that have existed for generations.