As we approach 2026, the technology landscape presents a striking paradox: AI adoption has never been higher, and yet success has never been more elusive.
The numbers tell a different story than the headlines suggest. MIT’s Project NANDA report found that 95% of enterprise AI pilots fail to deliver measurable financial returns — despite $30-40 billion in corporate investment. McKinsey’s “State of AI 2025” found that while 88% of organizations use AI, only 6% qualify as “high performers,” seeing significant financial impact.
What’s going wrong and what should enterprise leaders actually prepare for next year?
TREND 1: Agentic AI’s Coming Reality Check
There is enormous hype around AI, and many organizations don’t know what to do with it in practical terms. Many are incentivizing employees to use AI tools with no strategic guidance, hoping someone will find a valuable use case. That haphazard approach rarely works.
There’s going to be significant frustration with AI adoption in 2026, not because the technology isn’t capable, but because the process of adopting it is flawed.
MIT’s recent research confirms this: the problem isn’t model quality but the “learning gap” — organizations don’t understand how to design workflows that capture AI’s benefits. Generic tools like OpenAI’s ChatGPT work for individuals but stall in enterprise settings because they don’t adapt to company-specific processes and cultures.
The path forward requires targeted applications.
At CESAR Innovation Hub, for example, we wanted stronger ESG positioning in project proposals, but some of our sales team members lacked deep expertise. Rather than hiring specialists to assist them, we trained an LLM to analyze proposals and suggest enhancements. Now, team members without ESG backgrounds produce proposals demonstrating real environmental value.
TREND 2: Cybersecurity’s “Blade Runner Moment”
Remember the movie Blade Runner? The protagonist determines whether someone is human by asking questions and watching for emotional responses — subtle tells that reveal authenticity.
We’re entering that world now. And it’s going to get worse before it gets better.
For example:
- Deepfake-enabled phishing surged more than 1,600% in Q1 2025.
- Nearly 83% of phishing emails now use AI-generated content.
- Voice clones can now be created from just five minutes of audio.
- Brazil is the most targeted country for phishing attacks globally, with criminals — sometimes operating from prison — mimicking family members’ voices to extract money.
In early 2025, fraudsters cloned the voice of Italian Defense Minister Guido Crosetto to call business leaders about a fabricated kidnapping ransom. At least one victim transferred nearly one million euros before authorities intervened.
The criminal playbook is consistent: create urgency to prevent verification. They call, claiming accidents, bleeding children, kidnappings — anything to trigger an emotional response and bypass rational thinking.
The solution mirrors the Blade Runner test: assume voice and video can be faked.
This means callback verification using known numbers, pre-established code words, and ultimately AI systems that verify AI. I’ll admit: our own IT department sends fake phishing emails to train us, and I have failed every test. If those of us in technology struggle, imagine everyone else.
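The callback-and-code-word discipline described above can be sketched as a simple policy check. Everything here — the contact directory, the phone numbers, the code word — is invented for illustration, not an actual verification system.

```python
# Illustrative sketch of a callback-verification policy for urgent voice
# requests. The contact directory and code word below are hypothetical.

KNOWN_CONTACTS = {
    "cfo": {"number": "+1-555-0100", "code_word": "bluebird"},
}

def verify_request(claimed_role: str, callback_number: str, spoken_code: str) -> bool:
    """Approve a request only after calling back on the pre-registered
    number AND hearing the pre-established code word."""
    contact = KNOWN_CONTACTS.get(claimed_role)
    if contact is None:
        return False  # unknown requester: never act on the inbound call alone
    if callback_number != contact["number"]:
        return False  # always dial the number on file, not the one provided
    return spoken_code == contact["code_word"]

# An urgent inbound call with a convincing voice but no verified callback fails:
assert verify_request("cfo", "+1-555-9999", "bluebird") is False
assert verify_request("cfo", "+1-555-0100", "bluebird") is True
```

The point of the design is that the inbound channel is never trusted: verification always runs over a channel the organization controls, which a voice clone cannot intercept.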
TREND 3: Quantum Computing’s Year Is Not Here (Yet)
Here’s a contrarian view: quantum computing’s breakthrough moment isn’t next year.
It’s likely three to five years out, perhaps as many as ten.
Why? Follow the money.
If major technology companies believed practical quantum was imminent, they wouldn’t invest tens of billions in conventional data centers. A single one-gigawatt AI data center costs approximately $50 billion. Google, Microsoft, and others are racing to secure locations with adequate clean energy.
The world’s largest tech juggernauts simply wouldn’t make those massive data center commitments if quantum systems were to render conventional infrastructure obsolete within just a few years. That level of long-term infrastructure planning signals their true expectations about quantum timelines.
Nvidia CEO Jensen Huang and Meta CEO Mark Zuckerberg have both suggested practical quantum remains “a decade plus out.” Current systems require extreme conditions near absolute zero, suffer high error rates, and need millions of physical qubits to create even a handful of reliable ones.
That said, the “harvest now, decrypt later” threat is real today: adversaries are already capturing encrypted data to decrypt once quantum matures. Organizations must begin quantum-readiness preparations now, not because quantum arrives in 2026, but because migration to quantum-resistant cryptography will take years.
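One common quantum-readiness step is hybrid key establishment: derive the session key from both a classical and a post-quantum shared secret, so that breaking either scheme alone is not enough. This is only a sketch of the idea — the random bytes stand in for real ECDH/ML-KEM outputs, and a bare SHA-256 stands in where a production system would use a standardized KDF.

```python
import hashlib
import secrets

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both exchanges: an attacker must break
    BOTH the classical and the post-quantum scheme to recover it.
    (A real deployment would use a standardized KDF such as HKDF here.)"""
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Placeholder secrets stand in for, e.g., an ECDH output and an ML-KEM output.
classical = secrets.token_bytes(32)
post_quantum = secrets.token_bytes(32)
session_key = hybrid_session_key(classical, post_quantum)
assert len(session_key) == 32
```

The migration cost lies less in the math than in inventorying every system that negotiates keys today — which is why starting years before quantum arrives is the prudent move.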
Brazil’s investments — including a proposed decade-long, nearly $1 billion public plan and the country’s first quantum computer arriving in 2026 — aren’t about immediate dominance. They’re about being ready when the technology matures.
TREND 4: Design-driven Culture Shifts Go Full Circle
AI adoption reveals an uncomfortable truth: we’ve built design thinking into project methodologies but not across most organizational DNA.
Software teams already operate adaptively, using agile, design thinking, and lean-startup methods.
But back-office functions — accounting, procurement, HR — operate differently.
For decades, we told those employees: follow the process, be efficient, don’t deviate.
Now we must tell them: you have to experiment.
That’s a fundamental culture shift.
The MIT research found more than half of AI budgets go to sales and marketing, but the biggest ROI comes from back-office automation: eliminating outsourcing, cutting agency costs, streamlining operations. Many organizations chase visible applications while ignoring “boring” processes where AI delivers compounding returns.
Success requires organization-wide design thinking: treating the entire enterprise as a system that experiments continuously. AI tools from specialized vendors succeed about 67% of the time; internal builds succeed only one-third as often. Line managers, not central AI labs, must drive adoption.
TREND 5: A Dangerous Regulatory Divide?
I was once against AI regulation, but I’ve changed my mind.
When you see the damage social media has inflicted on young people — the anxiety, depression, inability to handle minor disputes — you realize unchecked technology causes profound harm.
Designers created systems that keep us scrolling, that make us addicted.
AI’s potential harms could dwarf social media’s.
On December 11, 2025, President Trump signed an executive order blocking U.S. states from regulating AI, creating an “AI Litigation Task Force” to challenge state laws. The administration argues that 50 different regulations would cripple innovation.
But as former Secretary of Defense Chuck Hagel has argued in The Atlantic, the greatest AI vulnerabilities stem not from too much oversight but from its absence. AI systems already underpin essential functions across our economy and national security apparatus: airport routing, energy-grid forecasting, fraud detection, and real-time battlefield data integration.
AI has extraordinary operational advantages, but also concentrated, high-impact failure points.
The Pentagon has repeatedly warned that state-of-the-art models remain vulnerable to data poisoning and adversarial prompting. U.S. states remain the country’s most effective laboratories for developing policy on fast-moving technologies, especially given persistent federal inaction.
Meanwhile, “across the pond,” the EU AI Act continues its phased implementation, with most obligations applicable by August 2026. And Brazil’s AI Bill, which passed the Senate in December 2024, positions the country as a potential leader in Latin American governance.
The historical parallel: early industrial Luddites tried destroying machines threatening their livelihoods. Of course, breaking machines meant more machines tomorrow. But their concern was valid: the pace of displacement matters. Previous shifts occurred slowly enough for society to adapt.
Today’s changes happen so fast that finding alternative occupations becomes extraordinarily difficult.
Regulation isn’t about stopping progress. It’s about ensuring progress doesn’t destroy what it’s meant to improve.
TREND 6: Business Models at a Tipping Point
Perhaps the most consequential trend isn’t technological; it’s economic.
Every leader must ask: How will AI change our business model?
Not through efficiency improvements, but through fundamental transformation.
Consider professional services, built on billable hours:
If AI agents perform tasks once requiring three, four, or ten engineers, who pays for those agents?
How do you price work when hours collapse but the value delivered increases?
We’re running experiments now where this question isn’t theoretical.
For example, how do we charge for four engineers’ worth of hours when one engineer plus an agent can deliver the same outcome in a fraction of the time? The market hasn’t decided yet whether it will accept flat rates tied to outcomes rather than effort.
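The pricing tension can be shown with toy arithmetic. Every figure below is hypothetical — not an actual CESAR rate — but the shape of the problem holds for any hourly model.

```python
# Toy billing arithmetic for the billable-hours problem.
# All figures are hypothetical, not actual rates.

HOURLY_RATE = 100  # USD per engineer-hour (assumed)

# Old delivery model: four engineers, 40 hours each.
revenue_hourly_old = 4 * 40 * HOURLY_RATE   # 16,000

# New delivery model: one engineer plus an agent, 10 hours total.
revenue_hourly_new = 1 * 10 * HOURLY_RATE   # 1,000

# Under pure hourly billing, the same delivered outcome earns far less.
drop = 1 - revenue_hourly_new / revenue_hourly_old
print(f"Revenue drop under hourly billing: {drop:.0%}")  # 94%
```

Outcome-based flat fees sidestep this collapse by pricing the delivered result rather than the hours consumed — but, as noted, the market has not yet decided whether it will accept them.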
At CESAR, AI-augmented teams complete work dramatically faster. The productivity gains are real. But our entire pricing structure assumes a relationship between time and value that AI disrupts.
We’re not alone. Brazil’s services sector represents approximately 57-67% of the country’s GDP — a huge slice of Latin America’s largest economy. To be clear, not all of those services face this reckoning: nursing, building maintenance, hairstyling — work requiring physical human presence — won’t be disrupted by AI anytime soon. But white-collar knowledge work? Legal, consulting, IT, accounting, and public administration — these services face profound transformation.
When you face this level of change, you almost reset the business model from zero — but with 1,400+ employees, not the 10 people of an early-stage startup.
Looking Ahead
2026 will test every organization’s capacity for honest self-assessment.
The hype cycle has peaked, and the implementation reality is setting in.
Winners won’t have the most advanced models or the biggest budgets. They’ll build cultures of experimentation, invest in human-AI collaboration, maintain security vigilance, and honestly confront how AI transforms, not just improves, their fundamental business models.
The technology is ready. The question is whether we are.


