
a16z "Major Vision for 2026: Part One"

Core Viewpoint
Summary: a16z investment team’s predictions for technology trends in 2026
Blockunicorn
2025-12-10 19:14:52
Article Author: a16z New Media
Article Compiler: Block unicorn

As investors, our responsibility is to deeply understand every corner of the technology industry to grasp future development trends. Therefore, every December, we invite our investment teams to share a significant concept they believe technology companies will address in the coming year.

Today, we share insights from the Infrastructure, Growth, Bio + Health, and Speedrun teams. Stay tuned for more from other teams tomorrow.

Infrastructure

Jennifer Li: How Startups Navigate the Chaos of Multimodal Data

Unstructured, multimodal data has long been enterprises' biggest bottleneck and their greatest untapped treasure. Every company is mired in a sea of PDFs, screenshots, videos, logs, emails, and semi-structured data. Models are getting smarter, but their input data keeps getting messier: RAG systems break down, agents fail in subtle and costly ways, and critical workflows still depend heavily on manual quality checks. The limiting factor for AI companies today is data entropy: in the world of unstructured data, freshness, structure, and authenticity degrade continuously, even as 80% of enterprise knowledge now resides in unstructured form.

For this reason, bringing order to unstructured data presents a once-in-a-lifetime opportunity. Enterprises need a continuous way to clean, structure, validate, and manage their multimodal data so that downstream AI workloads can actually function. Use cases are everywhere: contract analysis, onboarding, claims processing, compliance, customer service, procurement, engineering search, sales enablement, analytics pipelines, and every agent workflow that relies on reliable context. Startups that can build platforms to extract structure from documents, images, and videos, resolve conflicts, repair pipelines, and keep data fresh and retrievable hold the keys to the kingdom of enterprise knowledge and processes.
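The core loop described above, messy text in, validated structure out, can be sketched in a few lines. This is a deliberately naive illustration (all names and the regex rules are hypothetical; production systems would use an LLM or layout-aware parser), but the contract is the same:

```python
import re
from dataclasses import dataclass

@dataclass
class ContractRecord:
    party: str
    amount: float
    date: str

def extract(raw: str) -> ContractRecord:
    # Naive pattern-based extraction standing in for a real parsing model.
    party = re.search(r"between ([A-Z][\w ]+?) and", raw).group(1)
    amount = float(
        re.search(r"\$([\d,]+(?:\.\d+)?)", raw).group(1).replace(",", "")
    )
    date = re.search(r"\d{4}-\d{2}-\d{2}", raw).group(0)
    return ContractRecord(party, amount, date)

doc = "Agreement between Acme Corp and Globex, dated 2026-01-15, for $12,500.00."
record = extract(doc)
print(record)
```

The interesting engineering is everything around this function: validating outputs, resolving conflicts between sources, and keeping the extracted records fresh as documents change.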

Joel de la Garza: AI Revitalizes Cybersecurity Recruitment

For much of the past decade, the biggest challenge facing Chief Information Security Officers (CISOs) has been recruitment. From 2013 to 2021, the number of open cybersecurity jobs grew from fewer than 1 million to 3 million, because security teams were hiring large numbers of technically skilled engineers to perform tedious tier-one tasks, such as reviewing logs, work no one wants to do. The root of the problem is that security teams bought products capable of detecting everything, which created this busywork in the first place: their teams then had to review all of it. That, in turn, created an artificial labor shortage. It's a vicious cycle.

By 2026, AI will break this cycle and close the recruitment gap by automating many of security teams' repetitive tasks. Anyone who has worked on a large security team knows that half the work could easily be automated, but when the workload piles up, it's hard to determine which tasks to automate first. AI-native tools that help security teams with this triage will finally free them to do what they actually want to do: hunt down bad actors, build new systems, and fix vulnerabilities.
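The tier-one work in question is mostly classification: matching alerts against known-benign patterns so humans only see the residue worth hunting. A minimal sketch, with hypothetical alert fields and rules, of what that automated triage layer does:

```python
# Known-benign sources that generate alerts no human should have to read.
# In a real system this would be a learned model, not a static set.
KNOWN_BENIGN = {"scheduled-backup", "cert-rotation", "patch-scan"}

def triage(alerts):
    """Split alerts into those needing human attention and those suppressed."""
    escalate, suppress = [], []
    for alert in alerts:
        bucket = suppress if alert["source"] in KNOWN_BENIGN else escalate
        bucket.append(alert)
    return escalate, suppress

alerts = [
    {"id": 1, "source": "scheduled-backup"},
    {"id": 2, "source": "unknown-beacon"},
]
escalate, suppress = triage(alerts)
print([a["id"] for a in escalate])
```

The prediction is that AI makes the `KNOWN_BENIGN` side of this split vastly larger and adaptive, shrinking the human queue to the genuinely novel.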

Malika Aubakirova: Native Agent Infrastructure Will Become Standard

By 2026, the biggest infrastructure shock will not come from external companies but from within enterprises. We are shifting from predictable, low-concurrency "human-speed" traffic to recursive, bursty, and large-scale "agent-speed" workloads.

Today's enterprise backends are designed for a 1:1 ratio of human action to system response. They are not architected for the recursive fan-out of a single agent "goal" triggering 5,000 subtasks, database queries, and internal API calls at millisecond intervals. When agents refactor codebases or comb through security logs, they do not behave like users; to a traditional database or rate limiter, the traffic looks like a DDoS attack.

Building systems for agents in 2026 means redesigning the control plane. We will witness the rise of "agent-native" infrastructure. Next-generation infrastructure must treat the "thundering herd" as its default state: cold-start times must shrink, latency variance must fall dramatically, and concurrency limits must multiply. The real bottleneck is coordination: routing, locking, state management, and policy enforcement under massively parallel execution. Only the platforms that can handle the ensuing flood of tool execution will prevail.
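The coordination problem above can be made concrete with a toy admission-control sketch (names hypothetical): one agent "goal" fans out into thousands of subtasks, and a semaphore caps how many are in flight so downstream systems see a steady stream rather than a thundering herd.

```python
import asyncio

async def subtask(i: int, limiter: asyncio.Semaphore) -> int:
    async with limiter:          # admission control: at most N in flight
        await asyncio.sleep(0)   # stand-in for a DB query or internal API call
        return i * i

async def agent_goal(n_subtasks: int, max_in_flight: int) -> list[int]:
    limiter = asyncio.Semaphore(max_in_flight)
    return await asyncio.gather(
        *(subtask(i, limiter) for i in range(n_subtasks))
    )

# 5,000 subtasks for a single goal, but never more than 64 concurrent calls.
results = asyncio.run(agent_goal(n_subtasks=5000, max_in_flight=64))
print(len(results))
```

A real agent-native control plane layers routing, locking, and policy enforcement on top of this basic throttle, but the shape of the problem, recursive fan-out against finite backends, is the same.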

Justine Moore: Creative Tools Move Towards Multimodal

We now have building blocks for storytelling with AI: generative voice, music, images, and video. But for any content beyond one-off snippets, obtaining the desired output is often time-consuming and frustrating—even impossible—especially when you want to approach a traditional director's level of control.

Why can't we feed a model a 30-second video and have it continue the scene with new characters created from reference images and sounds? Or reshoot a video so we can observe the scene from different angles, or match actions to reference videos?

2026 will be the year AI moves towards multimodal. You can provide the model with any form of reference content and use it to create new content or edit existing scenes. We have already seen some early products, such as Kling O1 and Runway Aleph. But there is still much work to be done—we need innovation at both the model and application layers.

Content creation is one of the most powerful application scenarios for AI, and I expect to see a surge of successful products emerge, covering a wide range of use cases and customer groups, from meme creators to Hollywood directors.

Jason Cui: The Continuous Evolution of AI-Native Data Stacks

Over the past year, we have seen the "modern data stack" consolidate, as data companies shift from specializing in areas like ingestion, transformation, and computation to bundled, unified platforms. Examples include the Fivetran/dbt merger and the continued rise of unified platforms like Databricks.

Although the entire ecosystem has clearly matured, we are still in the early stages of truly AI-native data architecture. We are excited about how AI continues to transform multiple aspects of the data stack and are beginning to realize that data and AI infrastructure are becoming inseparable.

Here are some directions we are optimistic about:

  • How data will flow into high-performance vector databases alongside traditional structured data

  • How AI agents will solve the "context problem": maintaining continuous access to the right business context and semantic layer, so that robust applications (for example, natural-language data interaction) always work with the correct business definitions across multiple systems of record

  • How traditional business intelligence tools and spreadsheets will change as data workflows become more agent-driven and automated
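The first two directions above intersect in one pattern: embeddings stored next to the structured business context they came from, so retrieval returns both data and semantics. A toy sketch (the store contents, vectors, and field names are hypothetical):

```python
import math

# A two-row "vector store" where each embedding carries its semantic-layer
# definition, the business context an agent needs to use the data correctly.
store = [
    {"text": "Q3 revenue by region", "vec": [0.9, 0.1],
     "definition": "revenue = gross bookings - refunds"},
    {"text": "employee onboarding checklist", "vec": [0.1, 0.9],
     "definition": None},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec):
    # Nearest neighbor by cosine similarity; real systems use ANN indexes.
    return max(store, key=lambda row: cosine(query_vec, row["vec"]))

hit = retrieve([0.8, 0.2])
print(hit["text"], "|", hit["definition"])
```

The "context problem" is exactly what the `definition` field gestures at: without the semantic layer riding along, an agent retrieves numbers but not what they mean.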

Yoko Li: A Year We Step Into Video


By 2026, video will no longer be content we passively watch, but rather a space we can truly immerse ourselves in. Video models will ultimately be able to understand time, remember what they have already shown, respond to our actions, and maintain the kind of reliable consistency found in the real world. These systems will no longer just generate a few seconds of disjointed footage but will be able to maintain characters, objects, and physical effects long enough for actions to make sense and show their consequences. This transformation will turn video into a medium that can continuously evolve: a space where robots can practice, games can evolve, designers can prototype, and agents can learn in practice. What is ultimately presented will no longer resemble a video clip but rather a vibrant environment, one that begins to bridge the gap between perception and action. For the first time, we will feel like we can be part of the videos we generate.

Growth

Sarah Wang: The Decline of Systems of Record

By 2026, the truly disruptive change in enterprise software will be that systems of record finally lose their dominance. AI is narrowing the gap between intent and execution: models can now directly read, write, and reason over operational data, transforming IT service management (ITSM) and customer relationship management (CRM) systems from passive databases into autonomous workflow engines. As advances in reasoning models and agent workflows compound, these systems will not only respond but also predict, coordinate, and execute end-to-end processes. The interface will shift to a dynamic agent layer, while the traditional system of record retreats into the background as a generic persistence layer, ceding its strategic advantage to whoever controls the agent execution environment employees use every day.

Alex Immerman: AI in Vertical Industries Evolves from Information Retrieval and Reasoning to Multi-Party Collaboration

AI is driving unprecedented growth in vertical industry software. Healthcare, legal, and real estate companies have reached over $100 million in annual recurring revenue (ARR) in just a few years; the finance and accounting sectors follow closely behind. This evolution began with information retrieval: finding, extracting, and summarizing the right information. 2025 brought reasoning capabilities: Hebbia analyzes financial statements and builds models, Basis reconciles trial balances across different systems, and EliseAI diagnoses maintenance issues and dispatches the right vendors.

2026 will unlock multi-party collaboration modes. Vertical industry software benefits from domain-specific interfaces, data, and integrations. However, the nature of work in vertical industries is inherently multi-party collaboration. If agents are to represent the workforce, they need to collaborate. From buyers and sellers to tenants, consultants, and vendors, each party has different permissions, workflows, and compliance requirements, which only vertical industry software can understand.

Today, each party uses AI independently, so handoffs lose context and authority. An AI analyzing procurement agreements will not talk to the CFO to adjust the model. A maintenance AI will not know what on-site staff have promised tenants. The multi-party transformation lies in coordination across stakeholders: routing tasks to functional experts, maintaining context, and synchronizing changes. Counterparty AIs will negotiate within established parameters and flag asymmetries for human review. Senior partners' markups will train firm-wide systems. Tasks executed by AI will complete at higher success rates.

As the value of multi-party collaboration and multi-agent collaboration increases, the costs of transition will also rise. We will see the network effects that AI applications have long failed to achieve: the collaboration layer will become a moat.

Stephenie Zhang: Designed for Agents, Not for Humans

By 2026, people will begin to interact with the web through agents. What was optimized for human consumption will matter far less when the consumer is an agent.

For years, we have optimized for predictable human behavior: ranking high in Google search results, appearing prominently in Amazon listings, and opening with concise "TL;DR" summaries. In high school, I took a journalism class where the teacher taught us to write news with the "5W1H" and to open feature articles with an engaging lead to hook readers. Human readers might miss the highly valuable, insightful discussion buried on page five; AI will not.

This shift extends to software. Applications have been designed for human eyes and clicks, where optimization meant good user interfaces and intuitive workflows. As AI takes over retrieval and interpretation, visual design will matter less and less for understanding. Engineers will no longer stare at Grafana dashboards; AI site reliability engineers (SREs) can interpret telemetry and post analyses to Slack. Sales teams will no longer sift through customer relationship management (CRM) systems; AI can automatically surface patterns and summaries.

We are no longer designing content for humans but for AI. The new optimization goal is no longer visual hierarchy but machine readability—this will change how we create and the tools we use.

Santiago Rodriguez: The End of "Screen Time" KPIs in AI Applications

For the past 15 years, screen time has been the go-to metric for measuring the value consumer and enterprise applications deliver. We have lived in a paradigm where Netflix streaming hours, mouse clicks in electronic health record UIs (to prove effective usage), and even time spent on ChatGPT have been key performance indicators. As we move toward outcome-based pricing models that align the incentives of providers and users, screen-time reporting will be the first thing we discard.

We have already seen this in practice. When I run DeepResearch queries on ChatGPT, I can derive immense value even with almost zero screen time. When Abridge magically captures doctor-patient conversations and automatically executes follow-ups, doctors hardly need to look at the screen. When Cursor develops complete end-to-end applications, engineers are planning the next feature development cycle. And when Hebbia writes presentations based on hundreds of public documents, investment bankers can finally get a good night's sleep.

This brings a unique challenge: per-user pricing models will demand more sophisticated ways of measuring ROI. The proliferation of AI applications will improve doctor satisfaction, developer efficiency, financial analysts' well-being, and consumer happiness. The companies that can articulate ROI in the simplest terms will continue to outpace their competitors.

Bio + Health

Julie Yoo: Healthy Monthly Active Users (MAU)

By 2026, a new healthcare customer group will take center stage: "healthy monthly active users."

Traditional healthcare systems primarily serve three user groups: (a) "sick monthly active users," a group with highly variable needs and high costs; (b) "sick daily active users," such as patients requiring long-term intensive care; and (c) "healthy yearly active users," relatively healthy individuals who rarely seek medical care. Healthy yearly active users risk becoming sick monthly or daily active users, and preventive care can slow that transition. However, our treatment-focused reimbursement system rewards treatment over prevention, so proactive screening and monitoring services have not been prioritized, and insurance rarely covers them.

Now, the healthy monthly active user group is emerging: they are not sick but wish to regularly monitor and understand their health status—and they may represent the largest segment of the consumer population. We expect a wave of companies—including AI-native startups and upgraded versions of existing enterprises—to begin offering regular services to cater to this user group.

As AI lowers the cost of healthcare services, new preventive-focused health insurance products emerge, and consumers become increasingly willing to pay out-of-pocket for subscription models, "healthy monthly active users" represent the next highly promising customer group in the healthcare technology sector: they are continuously engaged, data-driven, and focused on prevention.

Speedrun (the name of an internal investment team at a16z)

Jon Lai: World Models Shine in the Narrative Domain

By 2026, AI-driven world models will fundamentally change how stories are told, through interactive virtual worlds and digital economies. Technologies like Marble (World Labs) and Genie 3 (DeepMind) can already generate complete 3D environments from text prompts that users explore as if inside a game. As creators adopt these tools, new narrative forms will emerge, potentially evolving into a "generative Minecraft" in which players co-create vast, ever-evolving universes. These worlds can combine game mechanics with natural-language programming; for example, a player can issue a command like "create a brush that turns everything I touch pink."

Such models will blur the lines between players and creators, making users co-creators of a dynamically shared reality. This evolution may give rise to interconnected generative multiverses, allowing different genres like fantasy, horror, and adventure to coexist. In these virtual worlds, the digital economy will thrive, allowing creators to earn income by building assets, guiding newcomers, or developing new interactive tools. Beyond entertainment, these generative worlds will also serve as rich simulation environments for training AI agents, robots, and even artificial general intelligence (AGI). Thus, the rise of world models signifies not only the emergence of a new genre of games but also heralds the arrival of a new creative medium and economic frontier.

Josh Lu: "My Year of Me"

2026 will be the "Year of Me": products will no longer be mass-produced but tailored to you.

We have already seen this trend everywhere.

In education, startups like Alphaschool are building AI tutors that can adapt to each student's learning pace and interests, providing every child with an education that matches their learning rhythm and preferences. Such a level of attention would be impossible without spending tens of thousands of dollars on tutoring for each student.

In health, AI is designing daily supplement combinations, workout plans, and meal plans tailored to your physiological characteristics. No coaches or laboratories needed.

Even in media, AI can allow creators to recombine news, shows, and stories to create a personalized information stream that perfectly aligns with your interests and preferences.

The biggest companies of the last century succeeded because they found the average consumer.

The biggest companies of the next century will win by finding individuals among the average consumers.

By 2026, the world will no longer optimize for everyone but will begin to optimize for you.

Emily Bennett: The First Native AI University

I anticipate that in 2026, we will witness the birth of the first native AI university, an institution built from the ground up around AI systems.

In recent years, universities have been trying to apply AI to grading, tutoring, and course scheduling. But what is emerging now is a deeper level of AI, an adaptive academic system capable of real-time learning and self-optimization.

Imagine an institution where courses, consultations, research collaborations, and even building operations are continuously adjusted based on data feedback loops. Course schedules will self-optimize. Reading lists will update every night and automatically rewrite as new research emerges. Learning paths will adjust in real-time to fit each student's pace and circumstances.

We have already seen some signs. Arizona State University (ASU) has partnered with OpenAI for a university-wide collaboration that has spawned hundreds of AI-driven projects covering teaching and administration. The State University of New York (SUNY) has now incorporated AI literacy into its general education requirements. These are all foundational steps for deeper deployment.

In an AI-native university, professors will become architects of learning, responsible for data management, model tuning, and guiding students on how to question machine reasoning.

Assessment will also change. Detection tools and plagiarism bans will give way to AI-aware assessment: students will be graded not on whether they used AI, but on how they used it. Transparency and strategic use will replace prohibition.

As industries strive to recruit talent capable of designing, managing, and collaborating on AI systems, this new type of university will become a training ground, producing graduates proficient in coordinating AI systems, aiding the rapidly changing labor market.

This AI-native university will become the talent engine of the new economy.

That's all for today; see you in the next part. Stay tuned.

ChainCatcher Building the Web3 world with innovations.