Jensen Huang's latest podcast transcript: The future of NVIDIA, the development of embodied intelligence and agents, the explosion of inference demand, and the public relations crisis of artificial intelligence
Video Title: Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis
Video Author: All-In Podcast
Translation: Peggy, BlockBeats
Editor's Note: As the AI narrative continues to heat up, the focus of market discussions is shifting from "how powerful the models are" to "how the systems are implemented." Over the past two years, the industry has experienced breakthroughs in large model capabilities, a race for training computing power, and the expansion of generative applications. However, as these stages gradually become consensus, new questions arise: when AI is no longer just answering questions but begins to execute tasks, embed into enterprise processes, and enter the physical world, what are the underlying conditions that support its continued advancement?
This article features excerpts from a conversation on the well-known tech podcast All-In Podcast. As one of the most influential investor podcasts in Silicon Valley, the show is co-hosted by four investors who have been active on the front lines for a long time, known for their in-depth discussions on technology, business, and macro trends.
The four hosts of the show are:
·Jason Calacanis, an early internet entrepreneur and angel investor, widely known for investing in companies like Uber and Robinhood;
·Chamath Palihapitiya, founder of Social Capital and former Facebook executive, who has invested in several tech companies including Slack and Box;
·David Sacks, partner at Craft Ventures, a member of the "PayPal Mafia," founder of Yammer, which was sold to Microsoft for about $1.2 billion, and an early investor in Airbnb and Uber;
·David Friedberg, founder of The Production Board, focusing on investments in agriculture, climate, and life sciences, and founder of The Climate Corporation (later acquired by Monsanto).
This episode's guest is Jensen Huang, co-founder and CEO of NVIDIA, regarded as one of the key drivers in the current wave of AI infrastructure.

From left to right: David Friedberg, Chamath Palihapitiya, David Sacks, Jensen Huang, Jason Calacanis
The entire interview can be roughly summarized on three levels.
First, AI infrastructure is changing. In the past, the market's understanding of AI was largely based on stronger GPUs and more data centers. However, Huang wants to emphasize that future competition will no longer be just about individual chips but about entire systems. As inference demand rises, model types increase, and agents begin to handle more complex tasks, AI computing is transitioning from a relatively singular model to more complex and specialized system collaboration. NVIDIA is thus trying to shift its role from a chip company to a builder of "AI factories."
Second, AI is moving from "generating content" to "completing tasks." This is the most critical thread in this interview. ChatGPT has allowed the public to intuitively feel the capabilities of AI for the first time, but in Huang's view, the real change is that AI is starting to enter workflows in the form of agents: it is not just answering questions but can call tools, break down tasks, collaborate, and ultimately get things done. Because of this, users' willingness to pay for AI will shift from "getting an answer" to "getting a result." This implies greater inference demand, higher system complexity, and potentially rewriting the ways software development, organizational management, and knowledge work are conducted.
Finally, AI is extending from the digital world to the real world. In the interview, whether discussing autonomous driving, robotics, healthcare, digital biology, or Huang's mention of Physical AI, they all point to the same trend: the value of AI is not only reflected on screens but will increasingly manifest in factories, hospitals, cars, endpoint devices, and daily life. However, this also means that the challenges AI will face next will not only be technical but also include supply chains, policies, regulations, manufacturing capabilities, and geopolitical complexities. In other words, the next round of AI expansion will be a truly industrialization process.
From this perspective, what is most noteworthy in this conversation is not a specific product or an optimistic number, but a judgment that Huang repeatedly conveys: AI is transitioning from the "model era" to the "system era." Future competition will not just be about whose model is larger or whose computing power is stronger, but about who understands the industry better, who can embed AI deeper into real processes, and who can organize these capabilities into a runnable, scalable system.
This also expands the scope of the discussion beyond NVIDIA itself. The real question it attempts to answer is: as AI gradually becomes infrastructure, how will the next round of industrial restructuring unfold, and where will new value be created?
The following is the original content (reorganized for readability):
TL;DR
·AI infrastructure is transitioning from "single GPU" to a decoupled architecture. Different computing tasks will be collaboratively completed by GPUs, CPUs, network chips, and inference chips like Groq's LPU.
·NVIDIA is transforming from a GPU company to a complete system provider, an "AI factory company." It sells the entire infrastructure rather than a single chip.
·The key to measuring AI costs is not the cost of data centers but the token cost and throughput efficiency. More expensive systems may actually be cheaper.
·AI is moving from generative models to the Agent era. Users are willing to pay for "getting things done" rather than just getting answers.
·Computing demand is exploding. From generation to inference to agents, it may have expanded more than 10,000 times in a short period and is still accelerating.
·Future software development will change. Engineers will no longer just write code but will define problems, design architectures, and collaborate with agents.
·In the long run, the biggest opportunities lie in deep specialization in vertical fields rather than in generic models themselves. Who understands the industry better will have a competitive moat.
Original Interview Content
Jason Calacanis (notable angel investor | All-In Podcast host | early investor in Uber):
This week is a special episode. We let our regular weekly show "make way" for this, and we usually only do this for three types of people: President Trump, Jesus, and Jensen Huang (founder and CEO of NVIDIA). As for how to rank these three, you decide. You've been on a roll lately, and this GTC was very successful.
Jensen Huang (CEO of NVIDIA):
The whole industry is here. Almost all tech companies and AI companies have come.
Jason Calacanis:
It's incredible, truly extraordinary. One of the most significant releases in the past year is Groq. When you acquired Groq, did you realize how much this would make Chamath "insufferable"?
Note: Groq is not Grok. The former is a company that makes AI inference chips and inference clouds, while the latter is a chatbot from xAI. By the end of 2025, Groq reached a non-exclusive inference technology licensing agreement with NVIDIA, with the transaction amount not disclosed; however, there were reports and speculations of around $17 billion to $20 billion. By GTC 2026, Huang further showcased the inference system integrated into the NVIDIA platform based on Groq technology.
The Chamath mentioned here refers to Chamath Palihapitiya (founder of Social Capital | former Facebook executive | All-In host). He is one of the four hosts of All-In and was also an early investor and board member of Groq. Therefore, when the significant deal between NVIDIA and Groq surfaced, it was seen as Chamath hitting another key project.
Jensen Huang:
I had a vague premonition.
Jason Calacanis:
We have to deal with him every week.
Jensen Huang:
I know. You all have to accompany him through a full six-week delivery period.
Jason Calacanis:
That's right.
From GPU Company to "AI Factory" Company
Jensen Huang:
In fact, many of our strategies are publicly discussed years in advance at GTC. Two and a half years ago, I introduced the operating system for the AI factory, called Dynamo.
You know, a dynamo is originally a device, pioneered by Siemens, that converts mechanical energy, for example from water power, into electrical energy, driving the factory systems of the last industrial revolution. So I think this name is very suitable as the name for the "factory operating system" in the next industrial revolution. One of the core technologies in Dynamo is decoupled inference.
Jason Calacanis:
Jensen, I know you understand technology very well. Go ahead, define it. I don't want to steal your thunder.
Jensen Huang:
Thank you. Decoupled inference means that the entire processing pipeline for inference is extremely complex, possibly one of the most complex types of computing problems today.
Its scale is immense, containing a large number of different forms and scales of mathematical calculations. Our idea is to break the entire processing flow apart, allowing one part to run on one type of GPU and another part to run on another type of GPU. Furthermore, this realization led us to understand that perhaps decoupled computing itself is a reasonable direction: we can fully enable different types and natures of computing resources to work together.
The same thinking later guided us to Mellanox. You see today, NVIDIA's computing is already distributed across GPUs, CPUs, switches, scale-up switches, scale-out switches, and network processors. Now, we also want to add Groq.
Our goal is to place the right workloads on the right chips. In other words, we have evolved from a GPU company to an AI factory company.
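Note: The decoupled inference Huang describes splits an inference request into a compute-bound prefill stage (processing the prompt and building the KV cache) and a memory-bandwidth-bound decode stage (generating tokens), so each can run on hardware suited to it. A minimal sketch of the routing idea follows; all names here (pool labels, `route`, job tuples) are hypothetical illustrations, not Dynamo's actual API.

```python
# Illustrative sketch of decoupled (disaggregated) inference:
# prefill and decode stages are routed to separate worker pools.
# Pool names and the routing function are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class WorkerPool:
    name: str                                   # e.g. "prefill-GPUs"
    jobs: list = field(default_factory=list)    # accepted (stage, request) jobs

    def submit(self, job):
        self.jobs.append(job)


def route(request, prefill_pool, decode_pool):
    """Send each stage of one inference request to the pool
    whose hardware matches that stage's compute profile."""
    prefill_pool.submit(("prefill", request))   # compute-bound: build KV cache
    decode_pool.submit(("decode", request))     # bandwidth-bound: emit tokens


prefill = WorkerPool("prefill-GPUs")
decode = WorkerPool("decode-LPUs")
route("prompt-123", prefill, decode)
```

The point of the split is that the two stages stress hardware differently, so a heterogeneous data center can keep both chip types saturated instead of forcing one GPU type to do both jobs.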
David Sacks (partner at Craft Ventures | former PayPal COO | All-In host):
For me, this is probably the most important insight. What you are seeing is a fundamental "decoupling." In the past, there was only one choice: GPUs. Now, more and more different computing forms are starting to emerge, and these choices will coexist in the future.
You mentioned one point on stage that I think everyone doing high-value inference should listen to carefully: you said that about 25% of the space in data centers should be allocated to Groq's LPU.
Note: LPU stands for Language Processing Unit. This is a category of chip proposed by Groq, primarily focused on inference rather than training.
Jensen Huang:
Yes, in data centers, Groq could account for about 25% of the Vera Rubin system.
Note: Vera Rubin is NVIDIA's next-generation AI platform architecture. It is not a single chip but a system-level infrastructure platform aimed at AI factories.
David Sacks:
Can you talk about how the industry views this direction now? Essentially, you are building the next generation of decoupled architecture: prefill/decode separation, with the inference process split across stages. How do you think people will react?
Jensen Huang:
Let's take a step back. The reason we added this capability to the system was that the entire industry had already shifted from large language model processing to Agentic Processing.
When you run an agent, it accesses working memory, long-term memory, and calls tools, which puts a lot of pressure on storage. You will also see agents collaborating with agents. Some agents use very large models, while others use smaller models; some use diffusion models, while others use autoregressive models. In other words, within this data center, there will be a variety of completely different types of models coexisting. We built Vera Rubin to handle this extreme diversity of loads.
So, in the past, we were a company with "one rack," and now we have added four types of racks. In other words, NVIDIA's TAM, or total addressable market, has suddenly expanded by about 33% to 50%.
A significant portion of this new 33% to 50% will be storage processors, namely BlueField; a portion, which I personally hope will be a large part, will be Groq processors; and another portion will be CPUs; of course, there will also be many network processors. All of these combined will ultimately run the "new type of computer" in the AI revolution, which is agents. It is the operating system of modern industry.
Chamath Palihapitiya (founder of Social Capital | former Facebook executive | All-In host):
What about embedded applications? For example, if my daughter's teddy bear wants to talk to her, what would be inside? Is it a custom ASIC? Or will there be a broader TAM in edge and embedded scenarios, with different tools for different scenarios?
Note: ASIC stands for Application-Specific Integrated Circuit, and TAM stands for Total Addressable Market.
Jensen Huang:
We believe there are actually three computers in this question.
The first, at the largest scale, is used to train AI models, develop AI, and create AI.
The second is the computer used to evaluate AI. For example, look around; there are robots and cars everywhere. You must first place them in a virtual environment that can represent the physical world for evaluation. In other words, this software itself must comply with the laws of physics. We call this system Omniverse.
The third is the computer deployed on the edge, which is the robot computer. It could be an autonomous vehicle, a robot, or even a small teddy bear.
For devices like teddy bears, one very important direction we are working on is turning telecom base stations into part of AI infrastructure. This way, the entire $2 trillion telecom industry will gradually become an extension of AI infrastructure in the future. So, radio equipment will become edge devices, factories will become edge devices, and warehouses will too.
In summary, all three types of foundational computers are essential.
David Friedberg (founder of The Production Board | All-In Podcast host):
Jensen, I felt last year that you saw this coming before anyone else. You said the growth in inference demand wouldn't just be 1,000 times.
Jensen Huang:
Did I dig my own grave?
David Friedberg:
But it could grow 1 million times? 1 billion times? Right?
I think many people thought that was too exaggerated at the time because the whole world was still focused on training expansion. But now you see, inference has truly exploded and is starting to become "inference constrained." You have now released an "inference factory" that delivers 10 times the throughput of the previous generation.
But if you look at external discussions, many people will say: your inference factory will cost $40 billion to $50 billion, while those alternatives, such as custom ASICs, AMD, etc., only cost $25 billion to $30 billion, so you will lose market share.
So why don't you just tell us: what exactly do you see? How do you view market share? Is it worth it for these customers to pay nearly double the premium?
Why More Expensive Systems Can Produce Cheaper Tokens
Jensen Huang:
The most important point, the core point is: do not equate the price of the factory with the price of tokens, nor should you equate it with the cost of tokens.
It is very likely, and I can prove, that a $50 billion factory can produce the lowest-cost tokens. The reason is that we generate these tokens with astonishing efficiency, which can be 10 times higher.
You see, the difference between $50 billion and $20 billion is largely just land, electricity, and the factory shell. Beyond that, you still need to buy storage, networking, CPUs, servers, and cooling systems. So, whether the GPU itself is at full price or half price will not drop the total cost from $50 billion to $30 billion. You can pick any number you like; more realistically, it might only drop from $50 billion to $40 billion.
And if a $50 billion data center has 10 times the throughput, then that price difference is actually not significant.
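Note: Huang's argument here is simple division: what matters is capital cost per token produced, not the sticker price of the factory. The back-of-the-envelope arithmetic can be checked directly. The dollar figures echo the conversation; the token counts below are invented purely for illustration.

```python
# Sketch of the cost-per-token argument: a pricier factory with
# much higher throughput can still produce cheaper tokens.
# BASE_TOKENS is a hypothetical lifetime output, chosen arbitrarily.

def cost_per_token(factory_cost_usd, tokens_produced):
    """Capital cost spread over every token the factory produces."""
    return factory_cost_usd / tokens_produced


BASE_TOKENS = 1e15                                  # hypothetical output

cheaper_factory = cost_per_token(30e9, BASE_TOKENS)       # $30B, baseline throughput
premium_factory = cost_per_token(50e9, 10 * BASE_TOKENS)  # $50B, 10x throughput

# $3e-5 per token vs $5e-6 per token: the dearer factory wins per token.
assert premium_factory < cheaper_factory
```

On these (illustrative) numbers, the $50 billion factory's tokens come out six times cheaper, which is the sense in which "more expensive systems may actually be cheaper."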
Jason Calacanis: Got it.
Jensen Huang:
This is also why I have always said: even for many chips, if you cannot keep up with the technological frontier and the speed at which we are advancing, then even if the chips are given away for free, they are still not cheap enough.
David Sacks:
I want to ask a more macro strategic question. You are now operating the most valuable company in the world. Revenue next year may exceed $350 billion, with free cash flow of $200 billion, and it is still compounding at a crazy rate.
How do you make decisions? How do you gather information? Everyone knows about your famous email system, but how do you actually form intuition, shape the market, decide where to double down, where to pull back, and where to enter new fields? How does that information get to you? How do you make the final judgment?
Jensen Huang:
That is the job of a CEO.
David Sacks:
Right.
Jensen Huang:
Our responsibility is to define the vision and define the strategy. Of course, we draw inspiration and information from the outstanding computer scientists, technical experts, and countless excellent employees in the company, but ultimately, shaping the future is our responsibility.
One of the criteria for judgment is: is this thing ridiculously difficult? If it is not difficult enough, we should stay away from it. The reason is simple: if something is easy to do, there will certainly be a lot of competitors.
Is it something that has never been done before and is ridiculously difficult? Does it happen to mobilize our company's unique "superpowers"? So I have to look for that intersection: it must meet these criteria simultaneously.
And you also have to know that doing such things will inevitably come with a lot of pain and torment. No great invention has ever come about because it was too simple and succeeded easily the first time.
If something is super difficult and has never been done before, it basically means you will go through a lot of pain and suffering. So you better enjoy the process.
David Sacks:
Can you highlight three or four more "long tail" businesses? For example, you mentioned data centers in space, ADAS and cars, and the biological direction. Give us a sense of when these curves will start to turn upward? How do you view these long-term businesses?
Note: ADAS stands for Advanced Driver Assistance Systems.
Jensen Huang:
Of course. Physical AI is a huge category. As I mentioned earlier, we have three computing systems and all the software platforms built on them. Physical AI is the first real opportunity for the tech industry to serve a $50 trillion industry that has seen almost no deep technological transformation in the past. To achieve this, we must reinvent all the necessary technologies.
I have always believed this is a 10-year journey. We started this 10 years ago, and now we are finally seeing it begin to turn upward. For us, this has already become a multi-billion dollar business, and the current scale is approaching $10 billion per year. So it is already a significant business and is growing exponentially. That is the first point.
The second direction is that I believe we are very close to the ChatGPT moment in digital biology.
We are gradually learning how to represent and understand genes, proteins, and cells. We already know how to handle chemicals. Therefore, being able to represent and understand the basic components of biology and their dynamic behaviors will likely happen within the next two to five years. Within five years, I am very confident that digital biology will have a huge impact on the entire healthcare industry.
These are all very important directions. Agriculture is also one of them.
Chamath Palihapitiya:
It is already happening.
Jensen Huang:
Without a doubt.
Jason Calacanis:
I want to shift the topic back to the desktop. The company was largely built on enthusiasts, gamers, and graphics card users. Today, you mentioned Claude Code, OpenClaw, and the revolution brought by agents in front of about 10,000 viewers.
Especially among the enthusiast community, we see a lot of energy and innovation actually exploding there, with many breakthroughs happening on the desktop. You also released a desktop device this time; I remember it was the Dell 60800? This is a very powerful workstation that can run local models and has 750GB of memory. Now Mac Studio is sold out everywhere. Our company is now fully transitioning to OpenClaw. Friedberg is using it, Chamath is using it, and everyone is obsessed.
What does this open-source agent movement that started with enthusiasts and the desktop open-source ecosystem mean to you? Where is it headed?
The Age of Agents Has Arrived: Why Will Computing Demand Expand by Another 10,000 Times?
Jensen Huang:
First, let's take a step back. Over the past two years, we have actually seen three turning points.
The first was generative AI. ChatGPT brought AI into the public eye, making everyone aware of its importance. In fact, this technology was already clearly there months before ChatGPT appeared. It was only when ChatGPT provided a user-friendly interface that generative AI truly exploded.
Generative AI, as you know, generates tokens for both internal and external consumption. Internal consumption is essentially "thinking," which further drives the development of inference.
Next, more grounded capabilities based on real information began to emerge, allowing AI to not just answer questions but to provide more reliable and useful answers. You are also starting to see OpenAI's revenue and business model experience a turning point in growth.
Then, the third turning point initially was only visible within the industry, which is Claude Code. This is the first truly useful agentic system, highly revolutionary.
But before Claude Code, this capability was mainly aimed at enterprises, and many outsiders had never seen it. Until OpenClaw brought "what AI agents can really do" into the public eye.
Thus, the cultural significance of OpenClaw lies in the fact that it truly made the public aware of the capabilities of agents for the first time.
The second reason it is important is that OpenClaw is open.
More critically, it constructs a completely new computing model, almost reinventing computing itself. It has a memory system: a scratchpad serves as short-term memory and the file system as long-term memory. It has scheduling capabilities and can run cron jobs. It can spawn new agents, break down tasks, perform causal reasoning, and solve problems. It has an I/O subsystem that can take input, produce output, and connect to services like WhatsApp. And it has a set of APIs, known as skills, that can run different types of applications.
These four elements essentially define a computer. Therefore, we now actually have, for the first time, a personal AI computer.
And it is open-source, truly open-source, and can run almost anywhere. This is the blueprint for modern computing. In a sense, it has already become the operating system of modern computing and will be ubiquitous in the future.
Of course, we also need to help it solve one issue: as long as you have agentic software, it may access sensitive information, execute code, and communicate externally. So we must ensure that everything is governed, sufficiently secure, and subject to policy constraints, allowing these agents to have two of those three capabilities but never all three simultaneously.
In terms of governance, we have also contributed. Peter Steinberger is here today. We have many great engineers working with him to help make this system safer and more robust, ensuring it can protect privacy and security.
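Note: The governance rule Huang describes, that an agent may hold at most two of the three risky capabilities (reading sensitive data, executing code, communicating externally), can be stated as a one-line policy check. The sketch below is illustrative only; the capability names and `is_allowed` function are invented, not an actual OpenClaw or NVIDIA API.

```python
# Illustrative policy check: reject any agent configuration that
# combines all three risky capabilities at once. Capability names
# are hypothetical labels for the three powers named in the text.

RISKY = {"read_sensitive_data", "execute_code", "external_comms"}


def is_allowed(capabilities):
    """An agent may hold at most two of the three risky capabilities."""
    return len(RISKY & set(capabilities)) < 3


# Any two risky powers are permitted...
assert is_allowed({"read_sensitive_data", "execute_code"})
# ...but holding all three together is rejected.
assert not is_allowed({"read_sensitive_data", "execute_code", "external_comms"})
```

The intuition behind the rule: an agent that can read secrets and run code but not talk to the outside world cannot exfiltrate; one that can read secrets and communicate but not execute code cannot be steered into arbitrary actions. Only the full combination is catastrophic.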
Chamath Palihapitiya:
Jensen, has this paradigm shift made many of the AI regulatory bills passed across the U.S. seem outdated?
Many proposals were originally based on old models. Can you talk about how quickly this paradigm shift has rendered a large number of existing regulatory ideas ineffective? AI regulation has now become a very hot topic in U.S. politics.
Jensen Huang:
In this regard, we must always stay ahead of policymakers, and you have done very well in this area. We must proactively approach them and tell them what stage technology has reached, what it is, and what it is not. It is not a living entity, not an alien, and does not have consciousness. It is computer software.
Also, we often hear the statement "we do not fully understand this technology." But that is not true; we actually understand a lot. So first, we must continuously provide policymakers with real information; do not let doomsday theories and extremism shape their understanding of this technology.
At the same time, we must also acknowledge that technology is developing rapidly and not let policy run too far ahead of technology. From a national perspective, my biggest concern is that the greatest national security risk for the U.S. in AI is not AI itself, but that other countries are adopting AI while we, out of anger, fear, or paranoia, are unwilling to let our industries and society embrace AI.
So what I am truly most worried about is that AI is not spreading fast enough in the U.S.
David Sacks:
Let me follow up. If you were sitting in the boardroom of Anthropic, watching their turmoil with the "Department of Defense," what would you think? This actually continues the point you just made: people do not know how to understand AI, leading to another layer of resentment, fear, and distrust. If it were you, what different things would you suggest Dario and his team do to change today's outcomes and public perception?
Jensen Huang:
First, I want to say that Anthropic's technology is remarkable. We ourselves are significant users of Anthropic's technology. I greatly admire their emphasis on safety, their commitment to safety culture, and their technical excellence in advancing this work; it is truly impressive.
Moreover, they want to remind the public of the boundaries of this technology's capabilities, which I think is a good thing. We just have to realize that the world has a spectrum: reminders are good, but scaring people is not.
Jason Calacanis: Right.
Jensen Huang: Because this technology is too important to us. I think predicting the future is certainly possible, but we need to be more cautious and humble. Because in fact, we cannot fully predict the future.
If we throw out some extremely extreme, catastrophic judgments without evidence showing these things will actually happen, the harm it causes may be greater than people imagine.
And now, we are already leaders in the tech industry. In the past, no one listened to us, but now it is different. Technology has deeply embedded itself in the social structure, is an extremely important industry, and is highly related to national security. Every word we say is important.
So I think we must be more cautious, restrained, balanced, and thoughtful.
David Friedberg:
I would nominate you to do this. Public support for AI in the U.S. is only 17%. We have seen what happened in the nuclear energy sector: we basically shut down the entire nuclear industry, and now China is building 100 fission reactors while the U.S. has none. Now we are hearing voices about pauses on data centers and similar issues. So I think we must be more proactive.
But I want to return to what you said about the explosion of agents happening within the company: efficiency improvements, productivity increases. Now many people are debating ROI, right? You and I entered this year with the biggest question: will revenue appear? Will revenue expand like intelligence itself? Then we saw something akin to an "Oppenheimer moment": Anthropic's revenue reached $5 billion to $6 billion in a single month in February.
Note: The "Oppenheimer moment" refers to J. Robert Oppenheimer, the head of the Manhattan Project (the secret research project that developed the atomic bomb during World War II). The first detonation of an atomic bomb in 1945 symbolizes a critical point where technological breakthroughs coexist with risks, and it is now often used to refer to key technological moments with irreversible impacts.
How do you see the trend moving forward? You mentioned today that Blackwell and Vera Rubin have already seen visibility for trillion-dollar demand in the coming years. Coupled with the momentum shown by Anthropic and OpenAI, do you think we have already reached that curve, and we will see revenue accelerate like intelligence?
Jensen Huang:
I will answer from a few angles. Look at the audience here; Anthropic and OpenAI are indeed present. But in reality, 99% of what is here is AI, and it is neither Anthropic nor OpenAI. The reason behind this is that AI itself is extremely diverse.
I would say that, as a category, the second most popular is actually open models. The first is, of course, OpenAI. The second is the open ecosystem, open-weight and open-source models and everything built around them, and there is a significant gap between it and the third, which is Anthropic.
This shows how large the scale of all AI companies combined is, so we must first recognize this.
Returning to computing volume. When we move from generative AI to inference, the required computing volume increases by about 100 times; when we move from inference to agentic, the computing volume may increase another 100 times. In other words, in just two years, computing demand has likely increased by about 10,000 times. Meanwhile, people will pay for information, but what they are truly willing to pay for is work results.
David Friedberg: Right.
Jensen Huang:
Talking to a chatbot and getting an answer is certainly good. Helping me do research is also great. But what truly makes me willing to spend money is when it helps me get things done. And that is precisely where we are now; agentic systems are actually completing work. They are helping our software engineers finish their tasks.
So think about it: on one side, there is 10,000 times more computing, and on the other side, there may already be 100 times more consumption demand. Moreover, we have not even truly begun large-scale expansion. We are absolutely on the path to 1 million times growth.
Jason Calacanis:
I think this leads perfectly to a question: how many employees does your company have?
Jensen Huang:
We have 43,000 employees, about 38,000 of whom are engineers.
Jason Calacanis:
We often discuss a topic on the podcast: wow, the token usage in our company is skyrocketing. Some even ask, "How much token allocation can I get when I join a company?" because they want to become efficient employees. I remember you mentioned in that two-and-a-half-hour keynote, which was really long but great.
Jensen Huang:
Thank you. It could have been shorter.
Jason Calacanis:
You mentioned that the token usage limit for each engineer might reach around $75,000. Does that mean NVIDIA's engineering team spends $1 billion to $2 billion on tokens each year?
Jensen Huang:
That is how we think. Let me give you a thought experiment: suppose you hire a software engineer or AI researcher with an annual salary of $500,000, which is quite common for us.
At the end of the year, I ask him, "How much did you spend on tokens this year?" If he says "5,000 dollars," I would be appalled, honestly. If an engineer with a $500,000 salary consumes tokens worth less than $250,000 in a year, I would be very alarmed. This is essentially no different from a chip designer saying, "I decided to only use paper and pencil; I don't need CAD tools."
Jason Calacanis:
This is indeed a paradigm shift. Your understanding of these top employees almost reminds me of what is taught in MBA classes about LeBron James: he spends $1 million a year maintaining his body, so he can still play at 41. Why shouldn't these top knowledge workers have "superhuman abilities"?
Jensen Huang:
Exactly.
Jason Calacanis:
If we push this trend forward two or three years, what will the efficiency of these top employees at NVIDIA look like? What will they be able to accomplish?
Jensen Huang:
First, the old notion of "this is too difficult" will disappear. The thought of "this will take too long" will also disappear. The idea of "we need many, many people" will vanish.
It's like during the last industrial revolution; no one would say, "This building looks too heavy." Nor would anyone say, "That mountain is too big." All thoughts about "too big, too heavy, too time-consuming" will be dissolved.
David Sacks:
What remains is creativity. What can you come up with?
Jensen Huang:
Absolutely correct. In other words, the future question will become: how do you collaborate with these agents?
Essentially, this is a completely new way of programming. In the past, we wrote code; in the future, we will write ideas, architectures, and specifications; we will organize teams; we will define evaluation criteria, telling the system what is good, what is bad, and what constitutes excellent results; we will iterate and brainstorm with it.
That is what you will really be doing. I believe every engineer will have 100 agents in the future.
Jason Calacanis:
Returning to the PR issue. Entrepreneurs like David Friedberg, using your technology and AI at Ohalo, are really doing very tangible things: increasing food production, improving the supply of high-quality calories. Friedberg, how much do you think this can reduce costs? What impact will this vision have on what you are doing?
David Friedberg:
We just did a zero-shot genome-modeling run, and it worked. At a moment like that, you are truly amazed. And this happened against the backdrop of people replacing their entire enterprise software stacks overnight.
I personally did something: in 90 minutes, I replaced the entire software stack and a bunch of workflows. I started at 10 PM on Sunday and finished running and deploying everything before 11:30 PM.
After I, as CEO, completed it, I also asked all my management team members to do the same exercise over the weekend. By Monday, the result we saw was: it was done.
To be more technical and scientific, we used auto research and a batch of data to accomplish something in 30 minutes. If done through traditional paths, this would have been a PhD-level achievement, potentially taking 7 years, and could have become one of the most respected doctoral works in the field, worthy of publication in Science.
Instead, we just downloaded auto research from GitHub onto a desktop computer, fed in the newly acquired batch of data, and it ran out in 30 minutes. Everyone's expressions changed at that moment. The potential it released is truly unbelievable.
So I believe this acceleration is expanding everyone's possibilities in unprecedented ways.
But back to the point about auto research: what do you think? Achieving such results over a weekend with 600 lines of code, and being able to run locally while processing so many different types of datasets.
Does this not indicate that we are still in an extremely early stage of both algorithm optimization and hardware optimization?
Jensen Huang:
The reason OpenClaw is so amazing is that it perfectly coincides with the breakthrough of large language models; it appeared at just the right time.
To a large extent, if it weren't for Claude, GPT, and ChatGPT reaching today's level, Peter probably wouldn't have created this. Because the models have indeed reached a very high level.
Secondly, it brings new capabilities: allowing these models to call tools we have created over the years. For example, browsers, Excel; in chip design, Synopsys and Cadence; and Omniverse, Blender, Autodesk, etc. And these tools will continue to be used in the future.
Now some people say the enterprise IT software industry will be destroyed. But let me give you another perspective: the scale of the enterprise software industry has always been limited by seat count, by how many people are sitting in how many seats. In the future, it will welcome 100 times more agents. These agents will query SQL, interact with vector databases, and use Blender and Photoshop.
The reason is simple: first, these tools already do very well; second, these tools are essentially "intermediary interfaces" between us and machines. Ultimately, when the work is completed, the results must be presented back to me in a way I can control. And I know how to operate these tools.
So I hope everything can ultimately return to Synopsys, back to Cadence, because that is where I can control and perform "deterministic standard" verification.
Note: Synopsys and Cadence are two important EDA (Electronic Design Automation) software companies that all chip companies (NVIDIA, Apple, AMD) basically rely on.
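The pattern Jensen describes, agents driving existing tools rather than replacing them, can be sketched as a simple dispatch loop. Everything below is a hypothetical placeholder (the tool registry, the stand-in tools, the pre-made "plan"); it is not the API of any real agent framework:

```python
# A minimal sketch of the tool-calling pattern described above: the agent
# does not replace existing tools, it drives them through an intermediary
# interface and hands deterministic results back to the human. The tool
# registry and the fake model "plan" are hypothetical placeholders.

def run_sql(query: str) -> list:
    """Stand-in for a database tool the agent can call."""
    return [("GPU", 1_000_000)] if "orders" in query else []

def render_scene(desc: str) -> str:
    """Stand-in for a content tool like Blender."""
    return f"rendered:{desc}"

TOOLS = {"sql": run_sql, "render": render_scene}

def agent_step(decision: tuple) -> object:
    """Dispatch one model-chosen (tool, argument) pair to the real tool."""
    tool_name, arg = decision
    return TOOLS[tool_name](arg)

# Pretend the model decided these two calls were needed for the task.
plan = [("sql", "SELECT * FROM orders"), ("render", "die shot")]
results = [agent_step(step) for step in plan]
print(results)
```

The key design point matches the transcript: the tools stay deterministic and verifiable, and the model's only job is choosing which tool to invoke and with what argument.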
The Next Battlefield for AI: Open Source, Verticalization, and Global Diffusion
David Sacks:
I want to ask a question about open source. Now we have closed-source models that are excellent; we also have open-weight models, many of which are impressive and very strong.
Two days ago, you might have been busy on stage and missed it, but in a crypto project called BitTensor's Subnet 3, someone completed a training task: they trained a 4 billion parameter Llama model completely in a distributed manner. A group of random people contributed computing power, but they managed to statefully manage the entire training process. I think this is technically very crazy because the participants were completely randomly dispersed.
Jensen Huang:
This is like Folding@home of our era.
Note: Folding@home is a distributed computing project that allows global volunteers to contribute computing power for protein simulations and medical research.
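The distributed run Sacks describes rests on data-parallel gradient averaging: each contributor computes gradients on its own data shard, and the shards are averaged (an "all-reduce") before every weight update. The toy below shows only that averaging step, on a one-parameter linear model; real systems like the Bittensor subnet add incentives, fault tolerance, and stateful checkpointing on top:

```python
# A minimal sketch of data-parallel training: independent contributors
# compute gradients on their own shards, and the gradients are averaged
# before every update. This toy fits y = w*x with mean-squared error.

def local_gradient(weights, shard):
    """MSE gradient for y = w*x on one contributor's shard."""
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def all_reduce_mean(gradients):
    """Average per-coordinate gradients across all contributors."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

# Three "random people" each hold a shard of data generated by y = 3x.
shards = [[(x, 3 * x) for x in range(i, i + 4)] for i in (1, 5, 9)]
weights = [0.0]

for step in range(200):
    grads = [local_gradient(weights, s) for s in shards]
    avg = all_reduce_mean(grads)
    weights = [w - 0.01 * g for w, g in zip(weights, avg)]

print(round(weights[0], 2))  # converges toward 3.0
```

The hard part in a truly open network is not this arithmetic but trusting and sequencing contributions from strangers, which is exactly the "stateful management" achievement highlighted above.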
David Sacks:
Exactly. So how do you see the endgame of open source? Do you see architectures also decentralizing, computing power also decentralizing, thus supporting open weights and fully open-source paths, making AI truly widely accessible?
Jensen Huang:
I believe we fundamentally need both: first, models as first-class commercial products, proprietary products; second, models existing in open-source form.
This is not an A or B relationship, but both A and B must exist. There is no doubt about that. The reason is that models are primarily a technology, not an end product. Models are a technology, not a service.
For the vast majority of users, on that horizontal level, at the level of general intelligence, I actually do not want to fine-tune a model myself. I would rather continue using ChatGPT, Claude, Gemini, X. They each have their personalities, depending on my mood and the problems I want to solve. So this part of the industry will develop very well; it will be very prosperous.
However, all the domain knowledge and expertise in all these industries must be solidified in a way that they can control, and that can only come from open models. The open model industry is already very close to the forefront. We are also investing heavily.
To be honest, even if open models really catch up to the forefront, I still believe that models as a service, world-class commercial product models, will continue to thrive and develop.
Jason Calacanis:
Every startup we invest in is almost first open-source, then moves to proprietary models.
Jensen Huang:
Right. And the beauty of it is: as long as you have an excellent router, on day one, every day, you can access the best models in the world. At the same time, this gives you time to reduce costs, fine-tune, and specialize. So you start with world-class capabilities and then gradually build your own moat.
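The routing strategy Jensen outlines can be sketched concretely: send everything to a frontier model on day one, then divert categories of traffic to your cheaper specialized model once it has been validated on them. The model names, costs, and the keyword classifier below are all hypothetical placeholders, not real APIs:

```python
# A minimal sketch of the model-router pattern described above. Start by
# routing every request to a frontier model; as a fine-tuned specialist
# proves itself on a task category, divert that traffic to it.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_call: float

FRONTIER = Model("frontier-general", cost_per_call=0.30)
SPECIALIST = Model("our-finetuned-vertical", cost_per_call=0.03)

# The set of categories the specialist has been validated on grows over time.
specialist_ready = {"invoice_parsing"}

def classify(request: str) -> str:
    """Toy task classifier; a real router would use a learned model."""
    return "invoice_parsing" if "invoice" in request.lower() else "general"

def route(request: str) -> Model:
    """Send validated categories to the cheap specialist, rest to frontier."""
    return SPECIALIST if classify(request) in specialist_ready else FRONTIER

print(route("Parse this invoice PDF").name)   # our-finetuned-vertical
print(route("Draft a strategy memo").name)    # frontier-general
```

This is the moat-building sequence he describes: world-class capability from day one via the frontier model, with cost reduction and specialization layered in gradually.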
David Friedberg:
Jensen, I want to ask a geopolitical question. Of course, no one wants the U.S. to win the global AI race more than you. But a year ago, during the Biden administration, the diffusion rule was effectively preventing U.S. AI technology from spreading globally.
Now the new government has been in power for a year. How would you rate it? In terms of AI global diffusion, are we at an A, B, or C? What is being done well, and what is not?
Jensen Huang:
First of all, President Trump wanted American industries to lead, wanted the U.S. tech industry to lead, wanted the U.S. tech industry to win, and wanted American technology to spread globally, making the U.S. the richest country in the world. He wanted to achieve all of this.
But at this moment, NVIDIA has already lost its original 95% market share in the world's second-largest market, and now it is at 0%. President Trump wants us to regain this portion.
The first step is to obtain licenses for those companies we can sell to. Many companies have already submitted applications, and we have also applied for licenses for them, and Secretary of Commerce Lutnick has already approved some. Next, we have notified Chinese companies, many of which have already placed purchase orders with us. So we are now restarting the supply chain and sending goods out.
From a higher level, I think we should acknowledge one thing: when we cannot obtain micro motors, rare earth minerals, our national security is weakened; when we cannot control our communication networks, national security is weakened; when we cannot provide sustainable energy for the country, national security is also weakened. Each of these industries is a story I do not want the AI industry to repeat.
As we look to the future and ask, "What does it look like for the U.S. tech industry and U.S. AI industry to truly lead globally," we must honestly say: AI models cannot be monopolized by one American company; that kind of outcome is meaningless.
But we can completely envision: the American tech stack, from chips to computing systems to platforms, being widely adopted globally. People around the world can build their own AI, public AI, private AI on this American tech stack, and then serve their societies. I hope the American tech stack can cover 90% of the world. I truly hope so.
Otherwise, if the final situation becomes like solar energy, rare earths, magnets, motors, and communication devices, I would consider that a very bad outcome for U.S. national security.
Chamath Palihapitiya:
How closely are you monitoring global conflict situations now? How concerned are you? For example, the Middle East might affect helium supply, which poses a potential supply chain risk for semiconductor manufacturing. How worried are you about these issues? How much energy are you investing in this?
Note: Helium is crucial for semiconductor manufacturing; it is irreplaceable in key processes such as lithography and inspection, and as a non-renewable resource, its supply is highly concentrated, mainly relying on a few sources in the U.S., Qatar (Middle East), and Algeria (North Africa). Once these upstream supplies are disrupted, it could directly affect the stable operation of chip production lines.
Jensen Huang:
First of all, regarding the Middle East, we have 6,000 families there. Many employees in the company are Iranian, and their families are still in Iran. So we have many families there.
The first thing is: they are very anxious, very worried, and very scared right now. We are always thinking about them and monitoring the situation. They will receive our full support. Some have asked me whether, given the current situation in the Middle East, we will continue to stay in Israel. My answer is: we will 100% stay in Israel. We will 100% support the families there. We will 100% continue to be in the Middle East.
Some have also asked, given the situation in the Middle East, do we still think it is worth expanding AI there? My view is: the reason for war is that everyone wants a more stable outcome. And I believe that after the war, the Middle East will be more stable than before. So if we were willing to consider it before the war, we should be even more serious about it after the war. So I am also 100% invested in this issue.
We have three things we must do. First, we must quickly re-industrialize America, whether it is chip manufacturing plants, computer manufacturing plants, or AI factories.
Jason Calacanis:
How is the progress on this front?
Jensen Huang:
The progress is very good. The reason we can advance at an astonishing speed in Arizona, Texas, and California is that we have received strategic support, friendship, and assistance from the Taiwanese supply chain. They are truly our strategic partners. They deserve our support, friendship, and generosity. They are also doing everything they can to help us accelerate the manufacturing process.
Second, we must diversify the manufacturing supply chain. Whether it is Korea, Japan, or Europe, we need to spread the supply chain to make it more resilient. Third, while we enhance diversification and resilience, we must also maintain restraint and not apply unnecessary pressure.
Jason Calacanis:
You mean to be patient.
Chamath Palihapitiya:
What about helium? Many reports have mentioned this issue.
Jensen Huang:
I think helium could be a problem. But on the other hand, there are usually quite a few buffer stocks in the supply chain, and such systems generally leave some margin.
Jason Calacanis:
You have made significant progress in autonomous driving and released major news. You have added many partners, including Uber. Recently, I saw you in a video driving a Mercedes autonomously. You and Uber also announced that you will deploy more cars on the road with many automakers.
I understand your bet is that in the future, there will be an open platform similar to Android, and you will play a key role in serving dozens of automakers; on the other hand, there may be a closed system like iOS, such as Tesla or Waymo.
What is your strategy? How will this chess game unfold? Because it feels like you are collaborating in some places while competing in others, and your stack is very deep.
Jensen Huang:
First, we believe that everything that will move in the future will eventually achieve full or partial autonomy. Second, we do not want to build autonomous vehicles ourselves, but we want to empower every car company in the world to build autonomous vehicles.
So we have built three computers: the training computer, the simulation and evaluation computer, and the vehicle-side computer. We have also developed the safest driving operating system in the world.
At the same time, we have created the world's first autonomous driving system with reasoning capabilities. It can break down complex scenarios into simpler ones and navigate through them one by one, just like a reasoning model. This reasoning system is called Alpamayo, and it has achieved very impressive results.
We will do vertical optimization and horizontal innovation; then let each manufacturer decide for themselves. Do you just want to buy one of our computers? Like Elon and Tesla, they would buy our training system; or do you want to buy both the training system and the simulation system? Or do you want to work with us to integrate all three systems, even putting the vehicle-side computer into your car?
Our attitude has always been that we want to solve problems but do not insist that we provide the only answer. No matter how you choose to collaborate with us, we are happy.
David Sacks:
Following up on this question, I find it particularly interesting. You are essentially building a platform that allows a thousand flowers to bloom. But indeed, some flowers now want to go down, go to the bottom of the stack, and try to compete with you. Google has TPU, Amazon has Inferentia and Trainium, and almost everyone is working on their own "I can surpass NVIDIA" version, even though they are also your major customers.
How do you handle this relationship? What do you think will happen in the long run? What role will these products ultimately play in the entire ecosystem?
Jensen Huang:
This is a very good question.
First, we are the only true AI company. We create foundational models ourselves and are at the forefront in many areas. We build every layer of the stack from top to bottom. We are also the only AI company in the world that collaborates with all AI companies.
They never show me what they are doing, but I always clearly tell them what I am doing. So our confidence comes from one point: we are very willing to compete on "whose technology is the best." As long as we can continue to run at high speed, I believe that continuing to procure from NVIDIA will still be one of their most economical choices. I am very confident about this.
Second, we are the only architecture that can be deployed on every cloud platform. That brings fundamental advantages. We are also the only architecture that can be taken from the cloud and placed in on-premises data centers, in cars, at the edge, and even in space.
So, there is actually a large portion of our market, about 40% of the business. If you do not have the CUDA stack and the ability to provide a complete AI factory, customers simply do not know how to collaborate with you. They do not want to buy chips; they are building AI infrastructure. So what they need is: you come in with a complete stack, and we just happen to have a complete stack.
So, surprisingly, if you look at it now, NVIDIA's market share is actually still increasing.
David Sacks:
What you mean is that these companies tried a round and ultimately found "wow, this is too complicated," and then came back? So your share continues to grow?
Jensen Huang:
There are several reasons for share growth.
First, our pace of advancement is too fast. Second, we make everyone realize: the problem is not making chips but making systems, and this system is extremely difficult to create. So their cooperation with us is still increasing.
Take AWS as an example; I remember they just announced yesterday that they plan to buy 1 million chips in the coming years. This is a very large procurement volume, and this does not even count the large number they have already purchased. We are certainly very happy about that.
Additionally, our share growth over the past few years has also been due to Anthropic coming in, Meta also coming in, and the growth of open models being astonishing, all of which are happening on NVIDIA.
So our share is increasing; on one hand, the number of models is increasing; on the other hand, these companies are increasingly moving out of the cloud and growing in regional deployments, enterprise scenarios, and industry edge scenarios.
And that entire market is very difficult to penetrate if you are just making an ASIC.
David Friedberg:
Relatedly, I want to ask a question without delving into numerical details, but analysts seem not to believe you.
You say computing power could grow a million times, but the market consensus expects you to grow 30% next year, 20% the year after, and by 2029, which should be a year of explosive growth, only 7%. If you apply those growth numbers to your TAM, the implication is that your share will decline significantly.
So from what you see in the future order book, are there any signs that support this judgment?
Jensen Huang:
First of all, they do not understand the scale and breadth of AI at all.
David Sacks:
Right, I feel that way too.
Jensen Huang:
Most people think AI is just a matter for those five super-large cloud vendors.
Jason Calacanis:
Right.
David Sacks:
There is also a kind of investment orthodoxy logic that "the larger the scale, the harder it is to sustain growth." They have to go back and explain the model to the risk control committee of the investment bank; they cannot easily believe "five trillion can grow to fifteen trillion." At most, they are willing to give it seven trillion; anything more than that is unacceptable.
Jason Calacanis:
They cannot imagine a company with a $10 trillion market cap.
David Sacks:
It is essentially a self-preserving model; they do not dare to write in things that have never happened in history.
Jensen Huang:
Moreover, you must redefine what you are doing.
Recently, someone observed: Jensen, how could NVIDIA possibly exceed Intel in the server market scale? The reason is simple: the entire data center CPU market is about $25 billion a year. And we, as you know, can achieve $25 billion in revenue in roughly the time we are sitting here chatting.
Jason Calacanis:
Nice.
Jensen Huang:
Of course, that is a joke.
Chamath Palihapitiya:
What is said on the podcast does not count as formal earnings guidance.
Jensen Huang:
That's right; it does not count as earnings guidance. But the key point is: how big can you grow depends on what you are actually building.
NVIDIA is not building chips; that is the first point. Second, just making chips is no longer sufficient to solve the AI infrastructure problem; it is too complex. Third, most people's understanding of AI is too narrow, limited to what they see, hear, and discuss.
OpenAI is very powerful; it will be very large; Anthropic is also very powerful; it will also be very large. But AI itself will be larger than both of them combined. And what we serve is that entire larger portion.
David Sacks:
Can you explain to the average person what the "space data center" business is? How should it be understood compared to those large data centers on the ground?
Jensen Huang:
We are already in space.
David Sacks:
How should the average person understand this business?
Jensen Huang:
First, we should certainly do well with things on the ground, after all, we are currently on the ground. Second, we should also prepare for entering space. There is certainly a lot of energy in space. The problem lies in heat dissipation. You cannot rely on conduction and convection like on the ground; you can only rely on radiation for heat dissipation, which requires a very large surface area. This is not an unsolvable problem; after all, there is plenty of space in space, but the costs are still high. However, we will explore.
Moreover, we are already there. Our hardware has been radiation-hardened, and many satellites around the world are already running CUDA. They are doing imaging, image processing, and AI image analysis. Such tasks should indeed be completed in space rather than sending all data back to the ground for image analysis. So, there is indeed a lot of work that should be done in space.
At the same time, we will continue to research what data centers in space should look like. This will take many years. That's okay; I have plenty of time.
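The heat-rejection constraint Jensen mentions can be quantified with the Stefan-Boltzmann law: with no conduction or convection, a radiator sheds power P = εσAT⁴, so the required area is A = P / (εσT⁴). The numbers below (1 MW of waste heat, a 300 K radiator, emissivity 0.9) are illustrative assumptions, not figures from the conversation:

```python
# A rough sketch of the radiative-cooling constraint for a space data
# center. In vacuum, waste heat leaves only by radiation, governed by
# the Stefan-Boltzmann law: P = eps * sigma * A * T^4.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject `power_w` watts at `temp_k` kelvin."""
    return power_w / (emissivity * SIGMA * temp_k**4)

area = radiator_area(1_000_000, 300.0)
print(f"{area:,.0f} m^2 to reject 1 MW at 300 K")
```

Under these assumptions a single megawatt already needs roughly a few thousand square meters of radiator, which is why Jensen calls surface area, not energy, the binding problem. (This sketch also ignores absorbed sunlight, which makes the real requirement larger.)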
The Future of Robotics, Healthcare, and Work: How Will AI Ultimately Enter the Real World?
Jason Calacanis:
I want to follow up on healthcare.
We all reach a certain age and start thinking about lifespan and healthy lifespan. We all look pretty good; some may look even better. Jensen, I really don't know what your secret is. Is it anti-aging? What should we not eat? You have to tell me privately.
From the perspective of building a healthcare system, where will this direction go? What progress have we made?
I was just using Claude to analyze what these medical billing codes in the U.S. are about. The U.S. spends twice as much as others, yet health outcomes seem to be only half.
From what I can see, about 15% to 25% of the money is actually spent on the first consultation with a general practitioner. To be honest, we all know that today, a large language model can already perform better and more consistently in the first consultation.
So what is still lacking to break through regulations and allow AI to truly have a substantive impact on the entire healthcare system?
Jensen Huang:
We are mainly involved in several directions in healthcare.
The first is AI physics, which serves AI biology, using AI to understand and represent biology and its behaviors. This is very important in drug discovery.
The second is AI agents, used in scenarios like assisting diagnosis. OpenEvidence is a great example, and Hippocratic is also a great example. I really enjoy collaborating with these companies. I genuinely believe that agentic technology will fundamentally change the way we interact with doctors and the healthcare system.
The third part is physical AI.
The first part, AI physics, uses AI to predict physics; this third part, physical AI, is about machines understanding physical laws, which can be applied in robotic surgery. This area is already very active. In the future, every instrument you encounter in a hospital, whether it is ultrasound, CT, or any other device, will become agentic.
You can think of it as a secure version of OpenClaw embedded in every instrument. So in many ways, these devices will directly interact with patients, nurses, and doctors in the future.
Jason Calacanis:
With so much investment in AI weapons, I really hope we invest a bit more in AI paramedics and AI EMTs to save lives, rather than just to kill.
This also leads us to the topic of robotics. You now have dozens of partners. The robotics field has gone through a strange period over the past ten or even twenty years—Boston Dynamics, Google acquiring a bunch of companies, and then selling or dismantling them. Everyone once thought that robotics was far from being truly usable.
But now, you and top entrepreneurs like Elon Musk are betting on it. Optimus looks incredible, and many companies in China are making rapid progress. How far are we from truly bringing robots into everyday life, such as robot chefs, robot nurses, robot caregivers, and humanoid robots that can work in the real world?
Especially in China, they seem to be doing just as well as the U.S., if not faster. Based on the progress of your partners and the maturity of the technology, how much longer do you think it will take?
Jensen Huang:
To a large extent, the robotics industry was originally invented by us, or you could say it was invented in the U.S. You could also say we entered the market too early. We were about five years ahead of the truly critical "brain" enabling technology, so we got tired and lost patience first.
But now, it has truly arrived. The next question is just: how long will it take to go from "high-function proof of concept" to "acceptable commercial product"?
Technology never exceeds two to three cycles. Two to three cycles is about three to five years. That's it. In three to five years, there will be robots everywhere.
I think China is very strong, and it is a strength that cannot be underestimated. The reason is that their microelectronics, motors, rare earths, and magnets are all top-notch in the world, which are precisely the foundations of the robotics industry. Therefore, in many aspects, our robotics industry will deeply rely on their ecosystem and supply chain. The global robotics industry will be deeply dependent on it.
Thus, I believe you will see some very rapid changes.
Jason Calacanis:
Will it ultimately be one-to-one? Elon seems to think that in the future, there will be one person paired with one robot—7 billion people with 7 billion robots, 8 billion people with 8 billion robots.
Jensen Huang:
I hope for even more than that. First, there will be a large number of robots working 24/7 in factories; there will also be many factory robots that are not very mobile, or only slightly mobile. Almost everything will ultimately become robotic.
Chamath Palihapitiya:
For me, the most important point about robots is that they will unlock economic mobility for everyone.
In the past, when everyone had a car, they could do many different jobs; in the future, when everyone has a robot, their robot can do many jobs for them. They can open an Etsy store, a Shopify store, and create anything they want with the help of robots, doing many things they could not do alone. I believe robots will ultimately become the technology that brings prosperity to more people on Earth than we have ever seen.
Jensen Huang:
Without a doubt. The simplest reality now is that we are already short millions of workers. So we are actually in urgent need of robots. If there were more labor, all these companies could grow even faster.
Moreover, some of the things you mentioned are really interesting. With robots, we will have "virtual presence." For example, when I am on a business trip, I can enter the body of a robot at home, remotely control it, walk around the house, walk the dog, and see how things are going.
Jason Calacanis:
We have to let the venue staff kick people out soon.
Jensen Huang:
That's right. But think about it; it can really walk around the house, see what is happening, talk to the dog, and chat with the kids.
David Friedberg:
This is somewhat like time travel.
Jensen Huang:
At the same time, we will travel at the speed of light. Obviously, we will send the robot first. I certainly won't send myself first; I will send a robot first to see the situation. Then I will upload my AI.
Chamath Palihapitiya:
This is almost inevitable. It will unlock the Moon and Mars, making them colonizable targets. And this means almost unlimited resources. Bringing materials back from the Moon to Earth can be done with nearly zero energy consumption because you can use solar energy to accelerate. So in the future, you can completely build factories on the Moon to produce everything needed for Earth, and robots are the key to making all this possible.
Jensen Huang:
In that era, distance will no longer be an issue.
David Friedberg:
Moreover, the more income models and agents earn, the more we can invest in infrastructure; the more complete the infrastructure, the more it will unlock stronger models and agents.
Dario recently said on Dwarkesh's podcast that by 2027 or 2028, model companies and agent companies will earn hundreds of billions of dollars; by 2030, he expects it to reach $1 trillion. Note that this does not even include AI revenue at the infrastructure level.
Jensen Huang:
I think he is being very conservative. I believe Dario and Anthropic's performance will far exceed that number, far exceed it.
Jason Calacanis:
So from $30 billion to $1 trillion?
Jensen Huang:
Yes. And the reason is that he has not considered one part: I believe every enterprise software company will eventually become a value-added reseller of Anthropic code, Anthropic tokens, and OpenAI tokens. This part will significantly expand their GTM scale.
David Sacks:
So in such a world, what is the real remaining "moat"?
Some moats will become almost insurmountable, to be honest. For example, the moat that no one discusses much but is probably the strongest is CUDA; it is an amazing strategic advantage.
But in the future, if models themselves can create great things, the next generation of models may also disrupt it. In your view, what is the most important differentiation for companies building application layers?
Jensen Huang:
Deep specialization.
I believe that in the future, there will be general models integrated into software companies' agent systems. Many of these models will be commercial models like Claude, proprietary models; but many will also be specialized sub-agents trained by these companies for specific sub-tasks.
David Sacks:
So your call to entrepreneurs is: truly understand your vertical field.
Jensen Huang:
Exactly.
David Sacks:
Understand it deeper and better than anyone else. Then wait for these tools to catch up to you; once the tools catch up, you can inject your knowledge into them.
Jensen Huang:
Right. You have your knowledge, and you can connect customers to your agents. The sooner you let agents truly connect with customers, the sooner this flywheel will start turning, and it will turn very quickly.
David Sacks:
This is almost the complete opposite of today's software logic. Today, we first create a piece of software, then think about "what can be generalized," and then sell it to as many people as possible, and finally sell customization as an add-on service.
David Friedberg:
And then lock in the customers.
Jensen Huang:
In reality, as you said, we first create a horizontal platform. But you see, all those global systems integrators (GSI) and consulting firms are essentially experts who then customize your horizontal platform into a vertical solution.
Jason Calacanis:
Exactly. And to some extent, the scale of the customization market may be five to six times larger than the platform itself.
Jensen Huang:
Absolutely correct. So I believe that these platform companies themselves have the opportunity to become that expert, to become that player in the vertical field, to become the true master of a specific domain.
Jason Calacanis:
I want to give you the praise you deserve.
I remember you said something three years ago: "The ones who will take your job away will not be AI, but those who use AI." Looking back now, our entire discussion has almost revolved around this point: agents are turning humans into "superhumans," expanding business opportunities and entrepreneurial opportunities. You actually saw this very clearly early on.
Jensen Huang:
You are too kind.
Jason Calacanis:
Of course, we also have to accommodate two ideas at the same time: first, there will indeed be good developments; second, there will indeed be jobs replaced. Then the question becomes: do those people have enough resilience and determination to embrace these new technologies?
For example, if 100% of driving jobs are automated in the future, that will certainly save many lives, which is a good thing; but we must also acknowledge that there are 10 to 15 million people in the U.S. who rely on this for their livelihoods. This change will definitely happen.
Jensen Huang:
I believe jobs will change. For example, today there are many drivers. I believe that in the future, many drivers will still be in the car, but they will no longer be responsible for driving; instead, they will sit in the back or up front beside you, becoming a kind of "mobility assistant."
Because don't forget, what drivers ultimately do is not just drive. They help you with luggage, handle many things, essentially playing an assistant role.
So I would not be surprised if future drivers become your mobility assistants, helping you handle many other things while the car drives itself.
Jason Calacanis:
Just like in a hotel.
Jensen Huang:
Yes. The car is driving itself, but they are still helping you coordinate various things.
David Friedberg:
Autopilot has actually brought more pilots into aviation rather than pushing them out of the cockpit, even though automation already handles 90% of the work.
Chamath Palihapitiya:
And to be honest, when the car is driving itself, that driver can still do a bunch of other work on their phone, arranging various things for you.
Jensen Huang:
For example, coordinating, communicating, booking, handling a bunch of tasks.
Chamath Palihapitiya:
The whole pie is getting bigger.
Jensen Huang:
Yes. So one thing is clear: every job will be changed; some jobs will disappear; but at the same time, many new jobs will be created. And I want to say to those young people who have just graduated and feel anxious about AI: go become the person who knows how to use AI best.
Today, we all hope that employees can become truly proficient in AI, and this is certainly not an easy task. You need to know how to make requests, but you cannot make the instructions too rigid; you need to leave enough space for AI to innovate and create under our guidance; and you need to lead it to the results we truly want. All of this requires a kind of "art."
David Sacks:
When you were at Stanford, your famous advice to young people was: "I wish you pain and suffering." Do you remember?
Jason Calacanis:
That is classic.
David Sacks:
What about today? If a person is about to graduate high school, standing at a crossroads in life, whether to go to college, what major to study, or even whether to go to college at all, what would you advise them?
Jensen Huang:
I still believe that deep science, deep mathematics, and language skills are very important. And as you know, language itself is actually the programming language of AI, the ultimate programming language. So perhaps people majoring in English will be the most successful in the future.
In summary, my advice is: no matter what kind of education you receive, make sure you are sufficiently professional in using AI.
Speaking of work, I want to add one more thing that I hope everyone hears. In the early days of the deep learning revolution, one of the world's top computer scientists, whom I greatly respect, once firmly predicted that computer vision would completely eliminate radiologists. He even suggested that everyone should not enter the field of radiology.
Ten years later, that prediction is 100% correct on one level: computer vision has indeed been integrated into radiology devices and platforms all over the world. But the surprising result is that the number of radiologists has not decreased; it has actually increased, and demand is soaring. The reason is that every job contains two levels: tasks and purposes.
The task of a radiologist is to look at images, but their true purpose is to help doctors treat patients and diagnose diseases. And because imaging examinations can now be done faster, hospitals can perform more scans, which enhances medical efficiency and allows patients to enter the diagnosis and treatment process faster and receive treatment sooner. The result is that hospitals have increased revenue because they have performed more scans and served more patients.
Jason Calacanis:
Exactly.
Jensen Huang:
So the result is actually positive.
David Friedberg:
And a faster-growing, more productive, and wealthier country can well afford to put more teachers in classrooms, not fewer.
But you will give each teacher the ability to tailor courses to every student in the classroom. That makes them stronger, like "bionic people," and the results will be better.
Jensen Huang:
Every student will have AI assistance, but every student still needs excellent teachers.
Jason Calacanis:
This has been fantastic. Jensen, congratulations on your success. This has truly been a particularly positive and uplifting discussion. Thank you very much for taking the time to participate.
David Sacks:
You are the captain this industry needs.
Jason Calacanis:
Indeed. I think you should express the positive side of AI more loudly. There is too much doomsday talk out there.
David Sacks:
And I also think that being able to maintain this humility after achieving such great success and telling everyone, "Folks, what we are doing is essentially still software," is really healthy. People need to hear this. We have invented new categories and new industries before. We do not need to slide into that kind of panic; that is not helpful.
Jason Calacanis:
And we can choose for ourselves, right? We have autonomy and the ability to act. We can choose how to use it. Well, everyone, see you next time. Thank you for watching this episode of All-In.
Jensen Huang:
Thank you.