
Two AI Reports Everyone in Tech Should Read in 2026

A practical breakdown of Matt Shumer's and Citrini Research's viral AI reports - what they get right, where they overreach, and what developers should do now.

  • ai
  • future-of-work
  • agentic-coding
  • economic-impact
  • developer-strategy

Two Reports Are Shaking the AI World — And Most People Haven’t Read Them

What “Something Big Is Happening” and “The 2028 Global Intelligence Crisis” actually say, why they matter, and what you should think about them

This article explains and analyzes two viral reports from the AI community (February 2026) that paint a picture of where AI is heading — one from a practitioner’s perspective, the other from a financial analyst’s perspective. I break them down in my own words, highlight the key arguments, and share my personal take on what this means for developers, tech leads, and anyone building products today.

Difficulty: Accessible. No prerequisites, just critical thinking; the handful of code snippets are optional back-of-the-envelope sketches.


Foreword

A few weeks ago, two articles landed almost simultaneously in my feed. Within days, they had accumulated tens of thousands of shares, sparked heated debates on X and Hacker News, and become the kind of pieces people forward to their non-tech friends and family.

The first one, “Something Big Is Happening” by Matt Shumer, is a personal alarm bell from an AI startup founder who says he no longer does the technical work of his own job.

The second one, “The 2028 Global Intelligence Crisis” by Citrini Research (with Alap Shah), is a fictional macro-economic memo written from the perspective of June 2028, describing a world where AI-driven job displacement has triggered a full-blown recession.

Both are long reads. Both are deliberately provocative. And both contain arguments that deserve more than a quick scroll and a retweet.

I’ve read them carefully, taken notes, and I want to walk you through what they actually say, where I think they make strong points, where I think they overreach, and — most importantly — what you should take away from them.

Personal note: I’m not an economist, and I’m not a futurist. I’m a developer who has been building AI-powered products for the past few years. I’ve seen firsthand how fast these tools evolve. That practical experience is the lens through which I’m reading these reports.


Quick Checklist

Before we dive in, here’s what you’ll get from this article:

1. A clear summary of Matt Shumer’s “Something Big Is Happening”
2. A clear summary of Citrini Research’s “The 2028 Global Intelligence Crisis”
3. The key arguments each report makes (and the assumptions behind them)
4. Where the two reports converge, and where they diverge
5. My personal take: what I find convincing, what I find overblown
6. Concrete takeaways for developers, tech leads, and product builders

Report #1 — “Something Big Is Happening” (Matt Shumer)

In short

Matt Shumer is the founder of an AI startup (OthersideAI / HyperWrite). He has spent six years building in the AI space. His article is essentially a letter to his friends, family, and anyone who keeps asking “so what’s the deal with AI?” — except this time, he drops the polite version and says what he actually thinks.

His core message: the gap between what AI can do today and what the general public believes AI can do is now dangerously wide. And this gap is preventing people from preparing for what’s coming.

For more details, refer to the original article at https://shumer.dev/something-big-is-happening.

The key arguments

1. AI already replaced his own technical work.

Shumer describes his typical Monday: he tells the AI what he wants built, walks away for four hours, comes back to find it done. Not a rough draft. The finished thing. The AI opens the app itself, clicks through buttons, tests features, iterates on its own, and only comes back to him when it’s satisfied with the result.

This is not a prediction. It is a description of what was happening at his company the week he wrote the piece.

2. The latest models crossed a qualitative threshold.

He specifically points to GPT-5.3 Codex and Claude Opus 4.6, both released on February 5, 2026. According to him, these models don’t just execute instructions — they exhibit something that feels like judgment and taste. The kind of intuitive decision-making that people always said AI would never have.

3. AI is now helping build itself.

He quotes OpenAI’s own technical documentation for GPT-5.3 Codex, which states that the model was used to debug its own training, manage its own deployment, and diagnose its own evaluation results. Dario Amodei (CEO of Anthropic) has also said that AI is now writing much of the code at his company, and that the feedback loop between current AI and next-generation AI is accelerating.

4. The pace of improvement is exponential, not linear.

He references METR, an organization that measures the length of real-world tasks that AI can complete end-to-end without human help. A year ago, the answer was about 10 minutes. Then an hour. Then several hours. The latest measurement (Claude Opus 4.5, November 2025) showed the AI completing tasks that take a human expert nearly five hours. That number is doubling approximately every seven months — and possibly accelerating to every four months.
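
To make that doubling arithmetic concrete, here is a minimal sketch of what the trend implies if it simply continues. The five-hour baseline and the seven-month doubling period are the figures the article cites; the extrapolation (and the code) is mine, and nothing in it validates the trend itself.

```python
# Back-of-the-envelope extrapolation of the METR trend Shumer cites.
# The baseline and doubling period come from the article; projecting
# them forward is my own illustration, not METR's methodology.

BASELINE_HOURS = 5.0    # task horizon measured for Claude Opus 4.5 (Nov 2025)
DOUBLING_MONTHS = 7.0   # the doubling period cited from METR's trend line

def projected_horizon(months_ahead: float) -> float:
    """Task length (in hours) the trend implies after `months_ahead` months."""
    return BASELINE_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

for months in (0, 7, 14, 21, 28):
    print(f"+{months:2d} months: ~{projected_horizon(months):5.1f} hours")
```

On this curve, a little over two years turns a five-hour task horizon into roughly 80 hours — two full work weeks of expert effort. Whether the curve holds is, of course, the entire debate.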

5. Massive job displacement is coming — fast.

Shumer relays Dario Amodei’s prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think that’s conservative. His own rule of thumb: if a model shows even a hint of a capability today, the next generation will be genuinely good at it.

6. What to do about it.

His advice is practical and urgent: start using AI seriously (paid tier, best model available), push it into your actual work, spend one hour a day experimenting, get your financial house in order, and stop assuming your field is immune.

My take on this report

Shumer writes with conviction and authenticity. He’s clearly not writing this to sell you anything — he’s writing it because he feels a moral obligation to warn the people he cares about.

That said, a few things are worth noting:

What I find convincing: His personal experience is hard to dismiss. When someone who has been building AI products for six years says “this week was different”, that carries weight. The METR data he references is real, measurable, and follows a consistent trend line. And OpenAI’s own documentation about GPT-5.3 Codex contributing to its own development is… well, that’s just a fact you can verify.

Where I’d push back a bit: Shumer’s article is deliberately written to shake people awake. That’s its purpose, and it does that well. His comparison with COVID is interesting — and I think often misunderstood. To me, he’s not saying “AI will spread like a virus.” He’s saying: there will be a before and an after. Just like COVID permanently changed how we work, travel, and interact, AI is about to redraw the map of what “normal” looks like. And just like in early 2020, most people right now are in the phase where they dismiss what’s coming because it hasn’t hit their life yet. That reading, I find hard to argue with.

That said, the phrase “nothing that can be done on a computer is safe in the medium term” is a very strong claim. It might turn out to be right. But there’s a difference between “AI can do this task” and “AI will replace all humans doing this task.” Capability and deployment are not the same thing. Between the two, there’s an entire layer of organizational inertia, regulation, trust-building, and plain human resistance that slows things down — sometimes considerably.


Report #2 — “The 2028 Global Intelligence Crisis” (Citrini Research)

In short

This one is completely different in both format and approach. Citrini Research is a financial analysis publication. Their article is written as a fictional macro-economic memo from June 2028, looking back at how the AI revolution triggered a global economic crisis.

The key framing, and this is important: what if AI bullishness continues to be right… and what if that’s actually bearish?

In other words: what if AI succeeds so spectacularly that it breaks the economic system it’s supposed to improve?

For more details, refer to the original article at https://www.citriniresearch.com/p/2028gic.

The scenario they describe

The fictional memo opens in June 2028. Unemployment is at 10.2%. The S&P 500 is down 38% from its October 2026 highs. The authors reconstruct the sequence of events that led to this point.

Phase 1: The SaaS Disruption (late 2025 – mid 2026)

Agentic coding tools reach a tipping point. A competent developer with Claude Code or Codex can now replicate the core functionality of a mid-market SaaS product in weeks. Enterprise procurement teams start asking: “What if we just built this ourselves?”

The long tail of SaaS gets hammered first. Then even the systems of record start feeling the pressure. In the scenario, ServiceNow’s Q3 2026 report reveals the reflexive mechanism: when their Fortune 500 clients cut 15% of their workforce using AI, they also cancel 15% of their ServiceNow licenses. The very AI-driven efficiency gains that boost their clients’ margins mechanically destroy ServiceNow’s revenue base.

Phase 2: Friction Goes to Zero (early 2027)

AI agents become embedded in everyday consumer behavior. They run in the background, optimizing purchases, canceling unused subscriptions, price-matching across platforms, renegotiating insurance renewals. The entire intermediation layer of the economy — businesses built on human laziness, inertia, and information asymmetry — starts to collapse.

The authors introduce a brilliant concept here: habitual intermediation. Think DoorDash. Its moat was essentially: “you’re hungry, you’re lazy, this is the app on your home screen.” But an AI agent doesn’t have a home screen. It checks every option and picks the cheapest one, every time.

Even the payment rails get disrupted. When agents start transacting with other agents, the 2-3% credit card interchange fee becomes an obvious inefficiency. Agents route around it, settling transactions via stablecoins on Solana or Ethereum L2s.

Phase 3: The Intelligence Displacement Spiral (2027)

This is the core mechanism the report describes, and it’s the most important concept in the piece.

Here’s the loop: AI gets better → companies lay off workers → savings are reinvested in more AI → AI gets better → more layoffs → displaced workers spend less → companies sell less → companies invest more in AI to protect margins → AI gets better.

The authors call it a feedback loop with no natural brake.

And here’s the critical insight: this isn’t traditional CapEx. Companies aren’t building new factories they might stop building when demand falls. This is OpEx substitution — a company that was spending $100M on employees and $5M on AI now spends $70M on employees and $20M on AI. Total spending decreases, but AI spending increases. The disruption engine feeds itself even as the broader economy contracts.
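
Because the mechanism is literally a loop, it helps to run even a toy version of it. The sketch below is my own illustration, not the report’s model: the 15% cut rate, the 50% reinvestment share, and the demand sensitivity are all assumptions I picked just to make the dynamic visible.

```python
# Toy model of the Intelligence Displacement Spiral described above.
# All three parameters are illustrative assumptions (mine, not the
# report's), chosen only to show the shape of the loop.

payroll, ai_spend = 100.0, 5.0   # $M, matching the report's opening example
demand = 100.0                   # index of aggregate consumer demand

for year in range(1, 6):
    cut = 0.15 * payroll              # each round, 15% of payroll is cut (assumed)
    payroll -= cut
    ai_spend += 0.5 * cut             # half the savings buy more AI (assumed)
    demand *= 1 - 0.3 * cut / 100.0   # lost wages depress demand (assumed sensitivity)
    print(f"year {year}: payroll ${payroll:5.1f}M  AI ${ai_spend:5.1f}M  "
          f"total ${payroll + ai_spend:5.1f}M  demand {demand:5.1f}")
```

Every year of output shows the signature the report describes: total spending falls while AI spending rises, so the broader contraction and the disruption engine advance together.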

Phase 4: The concept of “Ghost GDP”

Perhaps the most thought-provoking idea in the entire report. In their scenario, productivity numbers look amazing. GDP keeps growing. But the output is being generated by machines that don’t buy houses, don’t eat at restaurants, don’t pay for childcare. The economic value shows up in the national accounts but never circulates through the consumer economy that makes up 70% of GDP.

The authors put it bluntly: “We probably could have figured this out sooner if we just asked how much money machines spend on discretionary goods. Hint: it’s zero.”
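
You can reduce the idea to a one-line accounting identity. In the toy sketch below (my numbers, not the report’s), machine output is counted in GDP but pays no wages, so measured GDP and wage-funded demand can move in opposite directions:

```python
# "Ghost GDP" as a toy identity. Growth rates are illustrative
# assumptions, not figures from the report.

human_output, machine_output = 70.0, 30.0   # index units; consumer economy ~70% of GDP

for year in range(1, 5):
    machine_output *= 1.20    # machine-generated output grows fast (assumed)
    human_output *= 0.95      # wage-paying output shrinks (assumed)
    gdp = human_output + machine_output
    wage_funded_demand = human_output   # only wages circulate back as consumption
    print(f"year {year}: GDP {gdp:6.1f}  wage-funded demand {wage_funded_demand:5.1f}")
```

The headline number climbs every year while the part of the economy that can actually spend shrinks — exactly the divergence the memo’s fictional economists failed to notice in time.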

My take on this report

What I find compelling: The intellectual framework here is exceptional. The concept of the “Intelligence Displacement Spiral” — a self-reinforcing feedback loop in which each company’s individually rational decision to cut costs with AI collectively destroys demand — is the kind of second-order thinking that’s largely absent from the AI conversation. Most people are still debating “will AI take jobs?” while this report is asking “what happens to the economy when it does?”

The “Ghost GDP” concept is also striking. We already see echoes of it today: productivity metrics can look great even when the actual human economy underneath is struggling.

The discussion of habitual intermediation is particularly sharp. So many businesses are built on friction — on the fact that humans are impatient, forgetful, or just don’t have time to comparison-shop. When AI agents remove that friction, those business models don’t just erode. They evaporate.

Where I’d push back: The report is explicitly framed as a scenario, not a prediction. The authors are clear about this. But the way it’s written — as a retrospective memo with fake Bloomberg headlines and made-up economic data — makes it feel more like a forecast than a thought experiment. That’s by design, and it’s effective, but readers should keep this framing in mind.

The scenario also assumes remarkably smooth and rapid adoption of AI across every sector simultaneously. In practice, regulatory barriers, organizational inertia, integration costs, and plain human stubbornness tend to slow things down considerably. Healthcare won’t adopt AI at the same speed as software startups. Law firms in Brussels won’t move at the same pace as tech companies in San Francisco.

The timeline (everything goes from “fine” to “recession” in roughly two years) is extremely aggressive. History suggests that even genuinely disruptive technologies take longer to permeate the economy than their creators expect.


Where the Two Reports Converge

Despite their very different formats and audiences, these two pieces share a common core thesis:

1. AI capability is improving faster than most people realize. Both reports anchor their arguments on the measurable, observable acceleration in model capability throughout 2025-2026.

2. The gap between what AI can do and what most people think it can do is dangerous. Shumer says this explicitly. The Citrini report implies it through its scenario construction — the crisis happens partly because people don’t see it coming.

3. White-collar knowledge work is the primary target. Not because the AI labs are specifically targeting lawyers and analysts, but because cognitive work is what current AI models are best at automating. Physical work comes later.

4. There is no obvious “safe zone” to retrain into. This is arguably the most unsettling point both reports make. Unlike previous waves of automation, where displaced workers could move to a new sector, AI improves at everything simultaneously. Whatever you retrain for, it’s getting better at that too.

5. The window for preparation is narrow. Both authors believe that people who start engaging now — not in six months, not when it’s in the news — will have a significant advantage over those who wait.


Where They Diverge

Shumer is focused on individuals. His advice is personal: use the tools, experiment daily, build things you care about, rethink your kids’ education. His tone is urgent but ultimately empowering.

Citrini is focused on systems. Their analysis is about macro-economic feedback loops, credit markets, consumer spending patterns, and the structural fragility of an economy built on white-collar income. Their tone is clinical and, frankly, more alarming — because systemic problems don’t have individual solutions.

This difference matters. Shumer’s advice (“spend one hour a day with AI”) is actionable and helpful regardless of whether his timeline is right. Citrini’s scenario, if it plays out even partially, describes problems that no amount of individual upskilling can solve — it would require policy responses at the governmental and institutional level.


My Personal Take

I’ve been building AI-powered products for a while now. I work with these models daily. I’ve seen them go from “interesting but unreliable” to “indispensable” in a timeframe that genuinely surprised me.

So where do I stand?

I think both reports are directionally correct. AI capability is advancing faster than public perception. The latest models are qualitatively different from what existed a year ago. White-collar work is being restructured in real time.

I think the timelines are probably too aggressive. Not because the technology won’t be capable — it probably will — but because adoption is always slower than capability. Organizations move slowly. Regulations move slower. Cultural change moves slowest of all. The Citrini scenario describes a two-year collapse that, in reality, would more likely play out over five to ten years — which is still fast by historical standards, but gives more room for adaptation.

I think the “no safe zone” argument is partially right but overstated. Yes, AI improves at everything. But “improves at” and “fully replaces humans at” are different things. There will be domains where humans remain in the loop longer — not because AI can’t do the work, but because trust, accountability, regulation, and sheer institutional inertia keep them there. Medicine, education, legal accountability, physical craftsmanship — these won’t disappear overnight.

I think the most useful thing you can do right now is exactly what Shumer suggests: use these tools every day, push them into your real work, and develop an intuition for what they can and can’t do. Not because this will make you “safe” from disruption — nothing guarantees that. But because understanding the technology firsthand gives you the ability to adapt as things change. And things will change.

Warning: If you’re a developer or tech lead reading this and you haven’t seriously experimented with agentic coding tools (Claude Code, Cursor, Codex), you’re already behind. I don’t say this to be dramatic. I say it because the delta between “developers who use AI extensively” and “developers who don’t” is growing wider every month. In six months, it may be the difference between being the person who drives your team’s productivity forward and being the person who’s wondering why the team is half the size it used to be.


What I’d Recommend

Here are concrete steps, with my own priorities:

If you’re a developer: Start using AI-assisted coding tools for real projects, not toy examples. Pick one, feed it your actual codebase, give it complex, multi-file tasks, and learn its limits by hitting them. The models today are not the models from six months ago, and the models six months from now will not be the models of today.

If you’re a tech lead or product owner: Read both reports. Share them with your team. Start thinking about what parts of your product or service could be replicated by a competent developer with an AI coding agent in a few weeks. That’s your vulnerability surface. It’s also your opportunity — if you can use these tools to build faster and better, you should.

If you’re building a business (like I am with Flutteris or i-Discover): Ask yourself the “Citrini question”: is any part of your business model built on friction? On the assumption that your users won’t comparison-shop, won’t notice a better alternative, won’t automate away the manual step they currently pay you for? If yes, that’s where you need to innovate now, before an AI agent does it for your users.

If you’re none of the above, but you’re curious: Subscribe to a paid AI tier ($20/month for Claude Pro or ChatGPT Plus). Select the best available model. Don’t use it like Google. Give it a real task: a contract to review, a report to analyze, a business plan to stress-test. See what happens. Then do it again tomorrow. And the day after.


Conclusion

These two reports are not comfortable reads. One is a practitioner screaming “I’ve seen the future and it’s here.” The other is a financial analyst calmly modeling what happens if the practitioner is right.

Neither is a crystal ball. Both contain assumptions that may not hold. The timelines might be wrong by a factor of two or three. The economic knock-on effects might be cushioned by policy responses we can’t yet foresee.

But the direction of travel is not in question. AI models are getting better, faster, and cheaper. The work they can do is expanding. The industries affected are broadening. Whether this takes two years or ten, the transformation is underway.

The best thing you can do — the thing both reports agree on — is to stop waiting and start engaging. Not with fear. With curiosity, discipline, and a clear-eyed understanding of what’s coming.

Read them yourself. Form your own opinion. And then get to work.

Stay tuned for new articles, and happy building.

