Full Transcript of Nvidia Analyst Call | What Trump Card Did Jensen Huang Present to Wall Street at the GTC Conference?

By 404k · 2026/03/17 19:19

404K is a curated knowledge community focused on global technology, AI, semiconductors, cloud computing, and capital markets. Here, investment bank reports, homepage summaries, closed-door meeting highlights, macro charts, in-depth subscription translations, video transcripts, and audio-to-text content are updated daily, covering US stocks and frontline technology trends overseas.


The complete original text has been published on Knowledge Planet; readers are welcome to join.


During NVIDIA's GTC Financial Analyst Q&A, CEO Jensen Huang shared profound insights into AI computing, the transformation of industry business models, and NVIDIA's strategic moat. The core points of the session can be summarized in the following six key aspects:

1. Targeting Multi-Trillion-Dollar Market Demand NVIDIA is extremely confident that explosive market growth will continue. Huang made clear that the Blackwell and Rubin architectures alone give NVIDIA strong visibility into, and confidence in, market demand exceeding $1 trillion, and that this number continues to grow as new clients, markets, and geographies develop.

2. The Traditional IT Industry Will Be Reshaped, and the Token Economy Becomes the Core Future Business Model Huang advanced a disruptive industry forecast: the existing $2 trillion IT software licensing industry will not be destroyed, but reshaped and dramatically expanded (potentially to $8 trillion). In the future, IT companies will no longer sell software licenses, but instead rent out and generate Tokens. 100% of IT companies worldwide will integrate OpenAI, Anthropic, or open-source models, becoming Token resellers, signifying a fundamental transformation of the underlying business model.

3. AI Agents Will Enable a New Norm of 24/7 Compute Consumption In the past, computers spent much of their time idle, but in the future, computers will run 24/7 continuously, because AI agents will automatically perform tasks in the background and constantly generate Tokens. Agent systems are extremely complex; they need to learn autonomously, call tools (such as web browsers), and generate sub-agents, all of which put extremely high demands on structured and unstructured data processing, driving demand for comprehensive accelerated computing (including CPUs and various types of GPUs) to new heights.

4. Physical AI Will Trigger a Real-Economy Revolution Far Beyond the Digital World Currently, AI is concentrated in the cloud and the digital domain, but Huang pointed out that when physical AI reaches an inflection point, AI will enter factories, the edge, and specific physical locations. Because the size of the real-world, atom-related physical industry (about $70 trillion) far exceeds that of the pure digital sector, physical AI will drive rapid growth in business on the non-hyperscale side (i.e., enterprise on-prem and industrial deployment), potentially becoming the dominant portion in the future.

5. NVIDIA's Ultimate Moat: Selling the Entire "Full-Stack AI Factory," Not Just Chips Huang emphasized that attempting to compete with NVIDIA using a single low-cost chip is futile because **“customers don't buy chips; they buy platforms.”** NVIDIA is currently the only company in the world capable of optimizing a unified architecture across multiple types of memory (HBM, LPDDR5, SRAM). By fully controlling chips, networking, storage, and the entire software stack, NVIDIA can build a harmonious and unified "AI factory" and maintain an extremely rigorous pace of system architecture upgrades, “once a year”—unattainable by rivals who merely piece together technologies.

6. Inference Is the Endgame for AI Monetization, but Post-Training Compute Will Grow Exponentially Regarding the relationship between training and inference, Huang made an important clarification:

  • The Value and Difficulty of Inference:
     There was a misconception that inference is easy, but the reality is inference is “super hard” and growing harder, as it's when AI truly thinks and works. Huang expects that in the future, 99% of compute power will be used for inference, as this is where Tokens are truly converted into economic output and real value in healthcare, manufacturing, finance, and other industries.
  • The Evolution of Training:
     AI model training has not ceased; it's progressed from "pre-training (rote memorization)" to "post-training (skill learning, tool usage, reinforcement learning, etc.)." With the addition of multimodal and physical-world interaction data, future post-training compute requirements may be millions to billions of times greater than pre-training.
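The agentic pattern described in point 3 (continuous background work, tool calls, sub-agents, nonstop Token generation) can be sketched as a toy loop. This is purely illustrative: the `Agent` class, tool names, and per-call token costs below are hypothetical inventions for this sketch, not any NVIDIA or model-provider API.

```python
# Toy sketch of the agentic pattern from point 3: an agent that runs in
# the background, calls tools, spawns sub-agents, and consumes compute
# (counted here as generated "tokens"). All names and costs are made up.

class Agent:
    def __init__(self, name, depth=0):
        self.name = name
        self.depth = depth
        self.tokens_generated = 0

    def call_tool(self, tool):
        # Each tool call (e.g. driving a web browser) costs model tokens.
        cost = {"web_browser": 500, "code_runner": 1200}[tool]
        self.tokens_generated += cost
        return cost

    def spawn_subagent(self, task):
        # Sub-agents do their own work; their token usage rolls up
        # to the parent, so the whole tree is billed as one workload.
        sub = Agent(f"{self.name}/{task}", self.depth + 1)
        sub.call_tool("web_browser")
        self.tokens_generated += sub.tokens_generated
        return sub

    def step(self):
        # One background "tick": use a tool, maybe delegate a subtask.
        self.call_tool("code_runner")
        if self.depth == 0:
            self.spawn_subagent("research")

agent = Agent("assistant")
for _ in range(3):  # in Huang's picture this loop never stops (24/7)
    agent.step()
print(agent.tokens_generated)  # total tokens across agent and sub-agents
```

The point of the sketch is the accounting: because sub-agents report usage upward, a single always-on agent turns into a tree of compute consumers, which is why idle time disappears in this model.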

Jensen Huang
: So we have very high visibility, foresight, and strong confidence in markets above $1 trillion. You know, this isn’t a floating-point number, folks, okay. It’s also not a number accurate to the 94th decimal place. All right, we’re not going to be precise down to pennies. You can keep your change. Nevertheless, we have very high visibility into a market above $1 trillion for Blackwell plus Rubin. As for why it's only Blackwell plus Rubin, and not everything else we sell: the reference point is last year, when I was only talking about Blackwell and Rubin. Does that make sense? Last year we didn’t have grok (note: transcription error, possibly referring to the Groq architecture or specific hardware). Last year we didn’t sell standalone CPUs. Last year we didn’t have or sell a lot of things we have now. So it doesn’t make sense to include those today, and it doesn’t make sense to include things we didn’t have yesterday either. (Someone nodded, so I can continue. Okay.) So, therefore: only Blackwell and Rubin. Not Feynman. Not Rubin Plus. You know, not Rubin Ultra. Not any of those. Not Vera standalone. Not grok. Just Blackwell plus Rubin. We have high confidence, strong visibility, demand forecasts, and purchase orders in the millions. We regularly complete the deals we ship. Regularly. We expect to book and ship more business between now and the end of 2027. The reason is, we expect to continue production until the end of 2027. Now, unlike some businesses, because we build and finish these high-quality systems, we are able to book and ship new business in the same quarter. Of course, if you have to build an ASIC, you can’t do that; and obviously, if you don’t see demand now, you can’t possibly ship by the end of 2027. But not for us. We build inventory. We have supply chain pipelines, and we have to utilize them.
We have to take care of customers who suddenly appear, because they are desperate for more compute. Does that make sense? So, when they are desperate for more compute, and suddenly on the last day say, “Wow, can I get more,” I want to say—and we’re always in a position to say—“We’re glad to help you.” We’re also developing new customers and new markets. And new regions that we haven’t added here. Because we still have… about 21 months left. Okay.

So I hope you understand what that $1 trillion is. By definition, it will continue to grow. By definition, compared to the benchmark I’m referencing, it will keep growing and get bigger than that number. I also want to say a few things. Again, last year was a very good year, because fiscal year 2025 was our year of inference. I think we helped everyone understand the relationship between the price of computers and the cost of a Token: the price of a computer and the cost of a Token are related only at the margin. Remember, people buy these computers to produce Tokens. The efficiency in producing those Tokens is crucial. They're not reselling the computer. If you buy a computer that’s expensive and you just flip it, then yeah, it’s expensive. But if you buy an expensive computer, it’s because the technology is incredible and it produces Tokens at such an amazing rate. So while you’re buying the most expensive computer, you’re producing the lowest-cost Tokens. Does that make sense? This is what we do every day. That’s our job. That’s how we deliver the value we deliver. That value difference, the gap between the two numbers I just described, is exactly how we ensure gross margins. We have to deliver outcomes. And we relentlessly, consistently deliver so much value, measured by the number of Tokens generated per second—that’s the foundation of our delivered value. Each generation, we deliver so much more value that customers are willing to pay more for the next generation rather than less for the current one. They’re willing to upgrade immediately. When Vera Rubin comes, it makes more sense to install Vera Rubin than to keep buying Grace Blackwell. Are you following? Someone nods—okay, because the value is better. Even if the price is higher. So I compare those two systems because they are the de facto two standard systems in the world. Before you can beat those two systems, it makes no sense to buy anything else.
And those two systems are incredibly hard to beat.

Because Moore’s Law can’t give you a 35x improvement. So Moore’s Law alone is not enough. Making a faster chip is not enough. You’d have to make a whole lot of faster chips. So, last year was our 2025—our inference year. I think we showcased our leadership in inference and training. Now inference. And the other great things we did last year: we increased our coverage. We increased the number of AIs now supported on our platform. Last year, that is, 2025, we added Anthropic, which is new. We added Meta SSL (note: possibly a mis-transcription, likely referring to a Llama-related model), which is new. We are continuing to work with Meta on everything else. Meta SSL is brand new, and they have brand-new compute needs. We can all acknowledge that last year open-source software, open-source models, really took off, to the point that among API inference service providers today, open-source models probably represent the second most popular class of AI models. Number one, of course, is OpenAI. In terms of total Tokens generated, open-source models are number two. As you know, Nvidia is the world’s best platform for open-source models. We are the standard for open-source models, everywhere. So number one is OpenAI. Number two is all open-source models. Number three is Anthropic. Number four is xAI. Go down your lists from there. I think Nvidia’s model coverage significantly increased last year, which explains our accelerating growth on a huge base. As you know, we’re already a very large company, but we’re now accelerating. Our growth is actually accelerating. So anyway, that’s my thought. Oh, lastly. We love our hyperscaler partners, and we work very, very closely together, but it’s important to understand that our relationship with hyperscalers is not just selling to them. We bring customers to them.
Having CUDA in their cloud brings all the CUDA developers, all the AI-native enterprises, all the big companies working with us—whenever we help those big or small companies accelerate, we bring them in, run their workflows there, and get them hosted at CSPs worldwide. We are one of the world’s best CSP sales teams. That’s why, if you go to the show floor, they have the biggest booths. AWS has the biggest booth here. Google Cloud has the biggest booth here. Azure has the biggest booth. Oracle has a huge booth. CoreWeave has a huge booth, too. Does that make sense? Because we bring them customers. Why are they here? To sell to my developers. All our developers only know how to program one thing: CUDA, and they only use CUDA libraries. When we win customers, when we help these developers adopt Nvidia, they end up landing at one of our CSP partners. We are one of the best sales forces for CSPs, period. However, we also see tremendous diversity of customers beyond the CSPs: regional clouds, industrial on-prem. Dell, Lenovo, and HP are growing so quickly, and all the ODMs are growing so fast. Much of that business goes to the right side of that chart, the 40%. Most people see us through the left 60% of the business. Without Nvidia’s full-stack technology, without us being able to build you an entire AI factory, without every open platform in the world running on Nvidia, you have no shot at that 40% of the market. So the core of that chart is this: most of the left 60% is Nvidia developers landing in the cloud, and 100% of the right 40% is impossible unless you have end-to-end, full-stack technology. Did I make that clear? It’s important to understand our business. We aggregate all of this and call it accelerated computing. Maybe that’s not helping you understand, so in the future we’ll break it down differently. It might look like that chart—you’ll see—something like hyperscalers making up 60%.
Even if you see that, remember that many of those customers are ones we bring to the cloud. Then on the right, the 40% is completely impossible if you’re just making a chip. Because they don’t buy chips—they buy platforms. All three are on one slide. Maybe that blows your minds. That’s why I repeated it. Did that help? You know what I should do? I should do three panels or three slides. That would be a seven-hour keynote. But it would be worth it. Alright. That’s all. Thank you. Now we’ll open it up for questions. Hi.
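Huang's claim that the most expensive computer can produce the lowest-cost Tokens is, at bottom, a ratio argument: what matters is total cost of ownership divided by lifetime Token output. A back-of-the-envelope sketch with entirely hypothetical numbers (none of these figures come from the call) shows the shape of it:

```python
# Back-of-the-envelope sketch with invented numbers. The point is the
# ratio, not the absolute machine price:
#   cost per token = total cost of ownership / lifetime token output.

def cost_per_million_tokens(system_price, yearly_opex, years,
                            tokens_per_second):
    # Assumes round-the-clock utilization, per Huang's 24/7 picture.
    lifetime_tokens = tokens_per_second * 3600 * 24 * 365 * years
    total_cost = system_price + yearly_opex * years
    return total_cost / lifetime_tokens * 1_000_000

# System A: cheaper machine, modest throughput (all numbers hypothetical).
a = cost_per_million_tokens(2_000_000, 500_000, 4, tokens_per_second=40_000)
# System B: twice the price, but roughly 5x the throughput.
b = cost_per_million_tokens(4_000_000, 600_000, 4, tokens_per_second=200_000)

print(f"A: ${a:.2f} per million tokens")
print(f"B: ${b:.2f} per million tokens")
assert b < a  # the pricier system still wins on cost per token
```

With these made-up inputs, the system that costs twice as much per box produces Tokens at roughly a third of the cost, which is the shape of the comparison Huang draws between successive generations like Grace Blackwell and Vera Rubin.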

Analyst Ben Reitzes
: Hi, this is Ben Reitzes from Melius Research (note: the original transcript mis-renders the name and firm as "Ben Wright" and "Amelia's Research"). Thank you for having us at this event. The access you and your team have provided is amazing. Congratulations to you and your team. It’s awesome. By the way, Jensen, we got a photo together last night, and people can still go and like it. I need to break last year’s record.

Jensen Huang
: What photo?

Analyst Ben Reitzes
: We snapped a quick photo and I posted it, trying to beat last year’s number of likes, so…

Jensen Huang
: Okay, okay. So am I in some vulnerable pose or something? Let’s just say, the camera adds 10 pounds to me, but not to you. I don’t know how that happens. You look great.

Analyst Ben Reitzes
: So I promise I’ll ask an inference question. It's related. This is great, and I think a lot of people here get it. I think the main pushback we get is: is the effort worth it? And do the hyperscalers have enough upside in their API and cloud business revenue to justify all this spending? What does Jensen see? Because I’ve made estimates for hyperscalers and said they have revenue upside, but right now their CapEx is 20% higher than cloud API revenue. I want to know what you see. You’ve previously said there is big upside in cash flows from your customers, especially the hyperscalers and those supporting Anthropic and OpenAI. So when do we revise those numbers up? I know it’s hard for you to answer, since you’d be giving guidance for other companies, but if we see this potential upside, I think your stock will perform much better, because we’ll realize this buildout can continue. When does that inflection come? I mean, we’re seeing an inflection, but when does it actually happen? How much upside is there in their revenue? How can we feel better about it?

Jensen Huang
: Yeah. So, I really wish those companies were public, because then you’d see what I see. In history, no company—not a startup, a non-profit, or a private company—has been able to grow like this. Revenue is growing by $1–2 billion per week. That’s what they’re experiencing right now. Remember, I just said per week. The whole IT software industry is about $2 trillion. I don’t think that $2 trillion industry will be disrupted; I think it will be transformed. I believe every company in that $2 trillion IT industry will integrate a combination of OpenAI, Anthropic, and open-source models, connected by a piece of open-source software called Open Claw (note: possibly refers to an open cloud project), of which we've made an enterprise version called Nemo Claw. And, in an instant, you have an agent. 1.5 million people have downloaded Open Claw and built an agent for themselves. Just one line of code, and you tell the agent to build itself. “Oh, you don’t know something? Go learn.” Then it learns. Get it? So, in the future, these agents will be integrated into IT. Today’s IT industry is a $2 trillion software-license market. That might turn into… I’ll just say a number, $8 trillion. That will also mean an enormous resale of Tokens. 100% of the world’s IT industry will be OpenAI and Anthropic resellers. Are you following me? Now, raise your estimates for OpenAI and Anthropic. I truly believe it. Anthropic and OpenAI, of course; and all IT companies will use open-source models to modify and customize their own software and models. That is what Nemo is for. That’s the use of Nemo. We’ve built all these tools, which is why we collaborate with all of them. They’re all going to create agents integrating these three components, and I believe they'll achieve incredible growth. The time is coming. It’s coming soon. The reason is, you can see this in Anthropic’s data. You also see it in OpenAI’s data.
They're not just growing—they’re reaching the size of an entire IT company in a month. And then there's the revenue of these AI companies. AI will be used by enterprises directly, but also resold and integrated into IT companies. Does that make sense? AI is just software. Their software gets provided directly to enterprises, but also integrated into vertical applications: made professional, governed, secured, easy to deploy, connected to their systems of record, and so on. It’ll be a complete ecosystem, with the agent system rented to customers, who still have to consume Tokens through the factory. If it's through OpenAI, wonderful. If it’s through Anthropic, great. If it’s through open source, great. But they all have to generate Tokens. So ultimately, whereas legacy IT companies licensed software, future IT companies will lease out and generate Tokens. Are you following me? Their business models will evolve. These companies will get bigger. Their gross margins will change. The composition of those margins will change, since Token cost now becomes part of their COGS. But they’ll provide much, much more value. That’s very exciting for them. Super exciting. Okay, fantastic. Pass this $8 trillion microphone along. Thank you.

Analyst CJ
: Good morning, this is CJ from Cantor Fitzgerald. Thanks for hosting this event, really appreciate it. Following up on Ben’s question, and thinking about that 60-40 chart evolution—you mentioned Nemo, and yesterday you announced the Vera Rubin DS AI Factory reference design, basically giving non-hyperscalers a blueprint to compete with hyperscale. So I’m curious—when you put all this together, you’re seeing a tremendous surge in Token generation volume. How do you expect that chart to evolve over time, and how should we look at the different participants inside it, and their relative growth drivers?




