
NVIDIA 2023 Q2 Earnings Call

Published: 2023-08-24 22:20:42

Transcript

Good afternoon. My name is David and I'll be your conference operator today. At this time I'd like to welcome everyone to NVIDIA's second quarter earnings call. Today's conference is being recorded. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session.

If you'd like to ask a question during this time, simply press the star key followed by the number one on your telephone keypad. If you'd like to withdraw your question, press star one once again. Thank you. Simona Jankowski, you may begin your conference.

Thank you. Good afternoon everyone and welcome to NVIDIA's conference call for the second quarter of fiscal 2024. With me today from NVIDIA are Jensen Huang, president and chief executive officer, and Colette Kress, executive vice president and chief financial officer.

I'd like to remind you that our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2024. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 23, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. And with that, let me turn the call over to Colette.

Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year on year, and above our outlook of $11 billion. Let me first start with data center.

Record revenue of $10.32 billion was up 141% sequentially and up 171% year on year. Data center compute revenue nearly tripled year on year, driven primarily by accelerating demand from cloud service providers and large consumer internet companies for our HGX platform, the engine of generative AI and large language models.

Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year on year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI.

There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking, has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process, such as CoWoS packaging. We expect supply to increase each quarter through next year.

By geography, data center growth was strongest in the US as customers direct their capital investments to AI and accelerated computing. China demand was within the historical range of 20 to 25 percent of our data center revenue, including compute and networking solutions. At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China.

We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our data center GPUs, if adopted, would have an immediate material impact on our financial results. However, over the long term, restrictions prohibiting the sale of our data center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the US industry to compete and lead in one of the world's largest markets.

Cloud service providers drove exceptionally strong demand for HGX systems in the quarter as they undertake a generational transition to upgrade their data center infrastructure for the new era of accelerated computing and AI.

The NVIDIA HGX platform is the culmination of nearly two decades of full-stack innovation across silicon, systems, interconnects, networking, software, and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure, and several GPU cloud providers, with others on the way shortly.

Consumer internet companies also drove very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram.

Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem, through partners or direct, their applications can run seamlessly on NVIDIA AI Enterprise software with access to our acceleration libraries, pretrained models, and APIs.

We announced a partnership with Snowflake to provide enterprises with accelerated paths to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search, and summarization, right from the Snowflake Data Cloud.

Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over a billion office workers and tens of millions of software engineers. Millions of professionals in legal services, sales, customer support, and education will be able to leverage AI systems trained in their fields. AI copilots and assistants are set to create new multi-hundred-billion-dollar market opportunities for our customers.

We are seeing some of the earliest applications of generative AI in marketing, media, and entertainment. WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts while leveraging responsibly trained generative AI tools and content from NVIDIA partners, such as Adobe and Getty Images, using NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.

We partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast-tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and Accenture consulting and deployment services. We are also collaborating with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models powered by NVIDIA DGX Cloud.

And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware. VMware's hundreds of thousands of enterprise customers will have access to the infrastructure, AI, and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search, and summarization.

We also announced new NVIDIA AI Enterprise-ready servers featuring the new NVIDIA L40S GPU, built for the industry-standard data center server ecosystem, and the BlueField-3 DPU data center infrastructure processor. L40S is not limited by CoWoS supply and is shipping to the world's leading server system makers. L40S is a universal data center processor designed for high-volume data centers scaling out to accelerate the most compute-intensive applications, including AI training and inference, 3D design and visualization, video processing, and NVIDIA Omniverse industrial digitalization.

NVIDIA AI Enterprise-ready servers are fully optimized for VMware Cloud Foundation and Private AI Foundation. Nearly 100 configurations of NVIDIA AI Enterprise-ready servers will soon be available from the world's leading enterprise IT computing companies, including Dell, HPE, and Lenovo.

The GH200 Grace Hopper Superchip, which combines our Arm-based Grace CPU with the Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Los Alamos National Lab and the Swiss National Supercomputing Centre. NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications.

The second-generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024. We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems, and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just eight GPUs over NVLink.

The DGX GH200 systems are expected to be available at the end of the year. Google Cloud, Meta, and Microsoft are among the first to gain access. Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems, thanks to its end-to-end optimization and in-network computing capabilities. InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of millions and pays for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners.

For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum-4 Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer internet companies.

Now moving to gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year on year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktops. End customer demand was solid and consistent with seasonality. We believe global end demand has returned to growth after last year's slowdown. We have a large upgrade opportunity ahead of us. Just 47% of our installed base have upgraded to RTX and about 20% have a GPU with an RTX 3060 or higher performance.

Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA's GPU-powered laptops have gained in popularity and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the seasonality of our overall gaming revenue, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops. In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace architecture down to price points as low as $299. The ecosystem of RTX and DLSS games continues to expand, with 35 new games adding DLSS support, including blockbusters such as Diablo IV and Baldur's Gate 3. There are now over 330 RTX-accelerated games and apps. We are bringing generative AI to games. At Computex, we announced NVIDIA Avatar Cloud Engine, or ACE, for games, a custom AI model foundry service. Developers can use this to bring intelligence to non-player characters. It harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva, and Audio2Face.

Now moving to professional visualization. Revenue of $375 million was up 28% sequentially and down 24% year on year. The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations, with a refresh of desktop workstations coming in Q3. These will include powerful new RTX systems with up to four NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. They can be configured with NVIDIA AI Enterprise or NVIDIA Omniverse Enterprise. We also announced three new desktop workstation GPUs based on the Ada generation: the NVIDIA RTX 5000, 4500, and 4000, offering up to 2x the RT core throughput and up to 2x faster AI training performance compared to the previous generation.

In addition to the traditional workloads such as 3D Design and Content Creation, new workloads in generative AI, large language model development, and data science are expanding the opportunity in pro visualization for our RTX technology.

One of the key themes in Jensen's keynote at SIGGRAPH earlier this month was the convergence of graphics and AI. This is where NVIDIA Omniverse is positioned. Omniverse is OpenUSD's native platform. OpenUSD is a universal interchange that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D web. Together, Adobe, Apple, Autodesk, Pixar, and NVIDIA formed the Alliance for OpenUSD. Our mission is to accelerate OpenUSD's development and adoption.

We announced new and upcoming Omniverse Cloud APIs, including RunUSD and ChatUSD, to bring generative AI to OpenUSD workloads.

Moving to automotive. Revenue was $253 million, down 15% sequentially and up 15% year on year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on the NVIDIA DRIVE Orin SoC with a number of new energy vehicle makers. The sequential decline reflects lower overall automotive demand, particularly in China.

We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA GPU chiplets. The partnership covers a wide range of vehicle segments, from luxury to entry level.

Moving to the rest of the P&L, GAAP gross margin expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher data center sales. Our data center products include a significant amount of software and complexity, which is also helping drive our gross margins. Sequentially, GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits.

We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our board of directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2.

Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our data center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity. Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise.

For Q3, total revenue is expected to be $16 billion, plus or minus 2%. We expect sequential growth to be driven largely by data center, with gaming and ProViz also contributing. GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items.
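For reference, the arithmetic behind these guidance ranges is straightforward; a quick sketch using only the figures stated above (amounts in billions of US dollars):

```python
# Implied Q3 FY2024 guidance ranges from the figures above.
revenue_mid = 16.0                      # billions USD, plus or minus 2%
rev_low, rev_high = revenue_mid * 0.98, revenue_mid * 1.02
print(f"Revenue: ${rev_low:.2f}B - ${rev_high:.2f}B")       # $15.68B - $16.32B

gm_mid, gm_tol = 72.5, 0.50             # non-GAAP gross margin, +/- 50 bps
print(f"Non-GAAP gross margin: {gm_mid - gm_tol:.1f}% - {gm_mid + gm_tol:.1f}%")
```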

Further financial details are included in the CFO commentary and other information available on our IR website.

In closing, let me highlight some upcoming events for the financial community. We will attend the Jefferies Tech Summit on August 30th in Chicago, the Goldman Sachs Tech Conference on September 5th in San Francisco, the Evercore Semiconductor Conference on September 6th as well as the Citi Tech Conference on September 7th, both in New York, and the BofA Virtual AI Conference on September 7th.

Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21st.

Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.

At this time, I would like to remind everyone, in order to ask a question, press star, then the number one on your telephone keypad. We ask that you please limit yourself to one question.

We will pause for just a moment to compile the Q&A roster. We will take our first question from Matt Ramsay with TD Cowen. Your line is now open.

Yes, thank you very much. Good afternoon. Obviously, remarkable results. Jensen, I wanted to ask a question of you regarding the really quickly emerging application of large model inference. I think it is pretty well understood by the majority of investors that you guys have very much a lockdown share of the training market. A lot of the smaller model inference workloads have been done on ASICs or CPUs in the past. With many of these GPT and other really large models, there is this new workload that is accelerating super duper quickly on large model inference. I think your Grace Hopper Superchip products and others are pretty well aligned for that. Could you maybe talk to us about how you are seeing the inference market segment between small model inference and large model inference and how your product portfolio is positioned for that? Thanks.

Yeah, thanks a lot. Let's take a quick step back. These large language models are pretty phenomenal. They do several things, of course. They have the ability to understand unstructured language. At their core, what they have learned is the structure of human language, and they have encoded within them, compressed within them, a large amount of human knowledge learned from the corpus they have studied. What happens is you create these large language models, you create them as large as you can, and then you derive from them smaller versions of the model. Essentially, teacher-student models. It is a process called distillation.

When you see these smaller models, it is very likely the case that they were derived from, or distilled from, or learned from larger models, just as you have professors and teachers and students and so on and so forth. You are going to see this going forward. You start from a very large model, and it has a large amount of generality and generalization and what is called zero-shot capability. For a lot of applications and questions or skills that you haven't trained it specifically on, these large language models miraculously have the capability to perform them. That is what makes them so magical. On the other hand, you would like to have these capabilities in all kinds of computing devices. So what you do is you distill them down. These smaller models might have excellent capabilities on a particular skill, but they don't generalize as well. They don't have what are called as good zero-shot capabilities. They all have their own unique capabilities, but you start from very large models.
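As an illustration of the teacher-student distillation Jensen describes, here is a minimal sketch in Python. The softened-logit KL-divergence loss is the standard distillation recipe, not NVIDIA-specific code, and the temperature choice is a hypothetical example:

```python
# Minimal sketch of teacher-student distillation: a small "student" model
# is trained to match the softened output distribution of a large
# "teacher" model. Standard recipe; the models themselves are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # A higher temperature softens both distributions, exposing the
    # teacher's relative preferences across all tokens/classes.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2
```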

Next we will go to Vivek Arya with BofA Securities. Your line is now open. Thank you. Just had a quick clarification and a question. Colette, if you could please clarify how much incremental supply do you expect to come online in the next year? Do you think it is up 20, 30, 40, 50 percent? So just any sense of how much supply, because you said it is growing every quarter. Then, Jensen, the question for you is, when we look at the overall hyperscaler spending, that pie is not really growing that much. What is giving you the confidence that they can continue to carve out more of that pie for generative AI? Give us your sense of how sustainable this demand is as we look over the next one to two years. If I take your implied Q3 outlook of data center, $12 billion to $13 billion, what does that say about how many servers are already AI-accelerated? Where is that going? Give us some confidence that the growth that you are seeing is sustainable into the next one to two years.

Thanks for that question regarding our supply. Yes, we do expect to continue increasing, ramping our supply over the next quarters as well as into next fiscal year. In terms of percent, that is not something that we have here. It is work across so many different suppliers, so many different parts of building an HGX and many of our other new products that are coming to market. But we are very pleased with both the support that we have with our suppliers and the long time that we have spent with them improving their supply.

The world has something along the lines of about a trillion dollars worth of data centers installed in the cloud and enterprise and otherwise. That trillion dollars of data centers is in the process of transitioning into accelerated computing and generative AI.

We are seeing two simultaneous platform shifts at the same time. One is accelerated computing and the reason for that is because it is the most cost effective, most energy effective and the most performant way of doing computing now.

What you are seeing is that, all of a sudden, enabled by accelerated computing, generative AI came along. This incredible application now gives everyone two reasons to transition, to do a platform shift from general-purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing.

It is about a trillion dollars' worth of data centers, call it a quarter of a trillion dollars of capital spend each year. You are seeing that data centers around the world are taking that capital spend and focusing it on the two most important trends of computing today, accelerated computing and generative AI. I think this is not a near-term thing. This is a long-term industry transition and we are seeing these two platform shifts happening at the same time.

Next we go to Stacy Rasgon with Bernstein Research. Hi guys. Thanks for taking my question. I was wondering, Colette, if you could tell me how much of data center in the quarter, maybe even the guidance, is systems versus GPU, like DGX versus just the H100. What I am really trying to get at is how much is pricing or content, or however you want to define that, versus units driving the growth going forward. Can you give us any color around that?

Sure, Stacy. Let me help. Within the quarter, our HGX systems were a very significant part of our data center as well as the data center growth that we had seen. Those systems include our HGX with our Hopper architecture, but also our Ampere architecture. Yes, we are still selling both of these architectures in the market.

When you think about that, what does that mean? The systems as a unit, of course, are growing quite substantially, and that is driving the revenue increases. All of these things are the drivers of the revenue inside data center. Our DGXs are always a portion of additional systems that we will sell. Those are great opportunities for enterprise customers and many other different types of customers that we are seeing, even in our consumer internet companies.

Also important is the software that we sell together with our DGXs, but that is a portion of our sales that we are doing. The rest of the GPUs, we have new GPUs coming to market that we talked about, the L40S, and they will add continued growth going forward. But again, the largest driver of our revenue within this last quarter was definitely the HGX systems.

And Stacy, if I could just add something. You say it is H100, and I know what your mental image in your mind is, but the H100 is 35,000 parts, 70 pounds, nearly a trillion transistors in combination. It takes a robot to build, well, many robots to build, because it is 70 pounds to lift. And it takes a supercomputer to test a supercomputer. So these things are technology marvels, and the manufacturing of them is really intensive. So I think we call it H100 as if it is a chip that comes off of a fab, but H100s really go out as HGX to the world's hyperscalers, and they are really quite large system components, if you will.

Next we go to Mark Lipacis with Jefferies. Your line is now open. Hi, thanks for taking my question and congrats on the success. Jensen, it seems like a key part of the success, your success in the market, is delivering the software ecosystem along with the chip and the hardware platform, and I had a two-part question on this. I was wondering if you could just help us understand the evolution of your software ecosystem, the critical elements, and is there a way to quantify your lead on this dimension, like how many person-years you have invested in building it. And then part two, I was wondering if you would care to share with us your view on what percentage of the value of the NVIDIA platform is hardware differentiation versus software differentiation. Thank you.

Mark, I really appreciate the question. Let me see if I could use some metrics. So we have a runtime called NVIDIA AI Enterprise. This is one part of our software stack, and this is, if you will, the runtime that just about every company uses for the end-to-end of machine learning: from data processing, the training of any model that you like to do on any framework you like to do, the inference and the deployment, to scaling it out into a data center. It could be a scale-out for a hyperscale data center; it could be a scale-out for an enterprise data center, for example, on VMware. You can do this on any of our GPUs. We have hundreds of millions of GPUs in the field and millions of GPUs in the cloud, in just about every single cloud. It runs in a single-GPU configuration as well as multi-GPU per compute, or multi-node. It also has multiple sessions, or multiple computing instances, per GPU. So from multiple instances per GPU to multiple GPUs, multiple nodes, to entire data center scale.
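As a rough illustration of that single-GPU-to-multi-node spectrum, here is a minimal PyTorch sketch. This is generic DistributedDataParallel usage shown only to make the scaling idea concrete, not the NVIDIA AI Enterprise API itself:

```python
# Sketch: the same model code scales from one GPU to multi-GPU/multi-node
# when launched under a distributed runtime (e.g. torchrun). Generic
# PyTorch DDP, used here to illustrate the scaling spectrum described.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_for_scale(model: torch.nn.Module) -> torch.nn.Module:
    if dist.is_available() and dist.is_initialized():
        # Multi-GPU / multi-node: pin each process to one local GPU.
        local_rank = dist.get_rank() % torch.cuda.device_count()
        torch.cuda.set_device(local_rank)
        return DDP(model.cuda(local_rank), device_ids=[local_rank])
    return model.cuda()  # single-GPU fallback
```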

So this runtime called NVIDIA AI Enterprise has something like 4,500 software packages and software libraries, with something like 10,000 dependencies among each other. That runtime is, as I mentioned, continuously updated and optimized for our installed base, for our stack. That's just one example of what it would take to get accelerated computing to work. The number of code combinations and type of application combinations is really quite insane. That's taken us two decades to get here.

But what I would characterize as probably the elements of our company, if you will, are several. I would say number one is architecture. The flexibility, the versatility, and the performance of our architecture make it possible for us to do all the things that I just said, from data processing to training to inference, pre-processing of the data before you do the inference, to the post-processing of the data, tokenizing of languages so that you could then train with it. The workflow is much more intense than just training or inference. But anyways, that's where we focus.

But when people actually use these computing systems, it requires a lot of applications. So the combination of our architecture makes it possible for us to deliver the lowest cost of ownership. The reason for that is because we accelerate so many different things.

The second characteristic of our company is the install base. You have to ask yourself, why is it that all the software developers come to our platform? The reason for that is because software developers seek a large install base so that they can reach the largest number of end users so that they could build the business or get a return on the investments that they make.

And then the third characteristic is reach. We're in the cloud today, both public cloud and public-facing cloud, because we have so many customers that use it, so many developers and customers that use our platform. CSPs are delighted to put it up in the cloud. They use it for internal consumption, to develop and train and to operate recommender systems or search or data processing engines and whatnot, all the way to training and inference. So we're in the cloud, we're in enterprise. Yesterday we had a very big announcement. It's really worthwhile to take a look at that. VMware is the operating system of the world's enterprise. We've been working together for several years now, and together we're going to bring generative AI to the world's enterprises, all the way out to the edge. And so reach is another reason. And because of reach, all of the world's system makers are anxious to put NVIDIA's platform in their systems. And so we have a very broad distribution from all of the world's OEMs and ODMs and so on and so forth, because of our reach.

And then lastly, because of our scale and velocity, we were able to sustain this really complex stack of software and hardware, networking and compute, across all of these different usage models and different computing environments. And we're able to do all this while accelerating the velocity of our engineering. It used to seem like we were introducing a new architecture every two years. Now we're introducing a new architecture, a new product, just about every six months. And so these properties make it possible for the ecosystem to build their company and their business on top of us. And so those in combination make it special.

Next we'll go to Atif Malik with Citi. Your line's open. Hi, thank you for taking my question and great job on results and outlook. I have a question on the CoWoS-less L40S that you guys talked about. Any idea how much of the supply tightness the L40S can help with? And if you can talk about the incremental profitability or gross margin contribution from this product. Thank you.

Yeah, Atif, let me take that for you. The L40S is really designed for a different type of application. H100 is designed for large-scale language models and processing just very large models and a great deal of data. And so that's not L40S's focus. L40S's focus is to be able to fine-tune models, fine-tune pretrained models. And it will do that incredibly well. It has a Transformer Engine, it's got a lot of performance, and you can get multiple GPUs in a server. It's designed for hyperscale scale-out, meaning it's easy to install L40S servers into the world's hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard, so it's easy to install. L40S also comes with the software stack around it, along with BlueField-3. And with the work that we did there, and the work that we did with Snowflake and ServiceNow and so many other enterprise partners, L40S is designed for the world's enterprise IT systems. And that's the reason why HPE, Dell, and Lenovo and some 20 other system makers, building about 100 different configurations of enterprise servers, are going to work with us to take generative AI to the world's enterprise. And so L40S is really designed for a different type of scale-out, if you will. It's, of course, large language models. It's, of course, generative AI. But it's a different use case. And so the L40S is off to a great start, and the world's enterprises and hyperscalers are really clamoring to get L40S deployed.

Okay, next we'll go to Joe Moore with Morgan Stanley. The line is now open. Great. Thank you. I guess the thing about these numbers that's so remarkable to me is the amount of demand that remains unfulfilled, talking to some of your customers. As good as these numbers are, you sort of more than tripled your revenue in a couple of quarters. There's a demand in some cases for multiples of what people are getting. So can you talk about that, how much unfulfilled demand do you think there is? And you talked about visibility extending into next year, you know, if you have line of sight into when you'll get to supply demand equilibrium here.

Yeah, we have excellent visibility through the year and into next year. And we're already planning the next-generation infrastructure with the leading CSPs and data center builders. The demand, the easiest way to think about the demand is the world is transitioning from general-purpose computing to accelerated computing. That's the easiest way to think about the demand. The best way for companies to increase their throughput, improve their energy efficiency, improve their cost efficiency, is to divert their capital budget to accelerated computing and generative AI. Because by doing that, you're going to offload so much workload off of the CPUs that the available CPUs in your data center will get boosted. And so what you're seeing companies do now is recognizing this tipping point here, recognizing the beginning of this transition, and diverting their capital investment to accelerated computing and generative AI. And so that's probably the easiest way to think about the opportunity ahead of us. This isn't a singular application that is driving the demand, but this is a new computing platform, if you will, a new computing transition that's happening. And data centers all over the world are responding to this and shifting in a broad-based way.

Next we go to Toshiya Hari with Goldman Sachs. Your line is now open. Hi, thank you for taking the question. I had one quick clarification question for Colette and then another one for Jensen. Colette, I think last quarter you had said CSPs were about 40% of your data center revenue, consumer internet 30%, enterprise 30%. Based on your remarks, it sounded like CSPs and consumer internet may have been a larger percentage of your business. If you could kind of clarify that or confirm that, that would be super helpful. And then Jensen, a question for you. Given your position as the key enabler of AI, the breadth of engagements, and the visibility you have into customer projects, I'm curious how confident you are that there will be enough applications or use cases for your customers to generate a reasonable return on their investments. I guess I ask the question because there is a concern out there that there could be a bit of a pause in your demand profile in the out years. Curious if there is enough breadth and depth there to support a sustained increase in your data center business going forward. Thank you.

Okay, so thanks, Toshiya, on the question regarding our types of customers that we have in our data center business. We look at it in terms of combining our compute as well as our networking together. Our CSPs, our large CSPs, contributed a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer internet companies. And then the last piece of it will be our enterprise and high performance computing.

Toshiya, I'm reluctant to guess about the future, and so I'll answer the question from the first-principles perspective of computer science. It has been recognized for some time now that brute-forcing general-purpose computing, using general-purpose computing at scale, is no longer the best way to go forward. It's too energy costly, it's too expensive, and the performance of the applications is too slow. And finally, the world has a new way of doing it. It's called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money, an order of magnitude in cost and an order of magnitude in energy, and the throughput is higher. And that's what the industry is really responding to. Going forward, the best way to invest in the data center is to divert the capital investment from general-purpose computing and focus it on generative AI and accelerated computing.

Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is tons. Lots of developers, lots of applications, lots of libraries. It's ready to be deployed. And so I think the data centers around the world recognize this. This is the best way to deploy resources and deploy capital going forward for data centers. This is true for the world's clouds, and you're seeing a whole crop of new GPU-specialized cloud service providers. One of the famous ones is CoreWeave, and they're doing incredibly well. But you're seeing regional GPU specialist service providers all over the world now. And it's because they all recognize the same thing: that the best way to invest their capital going forward is to put it into accelerated computing and generative AI.

We're also seeing that enterprises want to do that. But in order for enterprises to do it, you have to support the management system, the operating system, the security, and the software-defined data center approach of enterprises, and that's called VMware. And we've been working for several years with VMware to make it possible for VMware to support not just the virtualization of CPUs, but the virtualization of GPUs, as well as the distributed computing capabilities of GPUs, supporting NVIDIA's BlueField for high-performance networking. And all of the generative AI libraries that we've been working on are now going to be offered as a special SKU by VMware's sales force, which is, as we all know, quite large, because they reach several hundred thousand VMware customers around the world. And this new SKU is going to be called VMware Private AI Foundation. And this will be a new SKU that makes it possible for enterprises, and in combination with HPE, Dell, and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage in generative AI. And so I think the answer to that question is hard to predict exactly quarter to quarter, but I think the trend is very, very clear now that we're seeing a platform shift.

Next we'll go to Timothy Arcuri with UBS. Your line is now open. Thanks a lot. Can you talk about the attach rate of your networking solutions to the compute that you're shipping? In other words, is like half of your compute shipping with your networking solutions, you know, more than half, less than half? And is this something that maybe you can use to prioritize allocation of the GPUs? Thank you.

Well, working backwards, we don't use that to prioritize the allocation of our GPUs. We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, you know, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant. You know, some 10, 15, 20 percent higher throughput for a billion-dollar infrastructure translates to enormous savings. Basically, the networking is free. And so if you have a single application, if you will, infrastructure, or it's largely dedicated to large language models or large AI systems, InfiniBand is really a terrific choice.
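A back-of-envelope version of that argument, using the figures from the call plus an assumed networking cost share (the 10-percent share is illustrative, not a figure from the call):

```python
# Illustrative arithmetic behind "the networking is free": the extra
# effective throughput on a large cluster can be worth more than the
# fabric costs. The cost-share assumption is hypothetical.
infrastructure_cost = 1_000_000_000   # $1B AI infrastructure, per the call
throughput_gain = 0.15                # mid-point of the 10-20% range cited
network_cost_share = 0.10             # assumed fraction spent on networking

value_of_gain = infrastructure_cost * throughput_gain    # ~$150M
network_cost = infrastructure_cost * network_cost_share  # ~$100M
print(f"Value of extra throughput: ${value_of_gain/1e6:.0f}M "
      f"vs. network cost: ${network_cost/1e6:.0f}M")
```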

However, if you're hosting for a lot of different users, and Ethernet is really core to the way you manage your data center, we have an excellent solution there that we just recently announced, and it's called Spectrum-X. We're going to bring the capabilities, if you will, not all of it, but some of the capabilities of InfiniBand to Ethernet, so that we can also, within the environment of Ethernet, enable you to get excellent generative AI capabilities. Spectrum-X is just ramping now. It requires BlueField-3, and it supports both our Spectrum-2 and Spectrum-3 Ethernet switches, and the additional performance is really spectacular. BlueField-3 makes it possible, along with a whole bunch of software that goes with it. BlueField, as all of you know, is a project really dear to my heart, and it's off to just a tremendous start. I think it's a home run. This is the concept of in-network computing, and putting a lot of software in the computing fabric is being realized with BlueField-3, and it is going to be a home run.

Our final question comes from the line of Ben Reitzes with Melius. Your line is now open. Hi, good afternoon, good evening. Thank you for the question and for putting me in here. My question is with regard to DGX Cloud. Can you talk about the reception that you're seeing and how the momentum is going? And then, Colette, can you also talk about your software business? What is the run rate right now and the materiality of that business? It does seem like it's already helping margins a bit. Thank you very much.

DGX Cloud's strategy. Let me start there. DGX Cloud's strategy is to achieve several things. Number one, to enable a really close partnership between us and the world CSPs. We recognize that many of our, well, we work with some 30,000 companies around the world. 15,000 of them are startups. Thousands of them are generative AI companies. The fastest growing segment, of course, is generative AI. We're working with all of the world's AI startups. And ultimately, they would like to be able to land in one of the world's leading clouds. And so we built DGX Cloud as a footprint inside the world's leading clouds so that we could simultaneously work with all of our AI partners and help land them in easily in one of our cloud partners.

The second benefit is that it allows our CSPs and ourselves to work really closely together to improve the performance of hyperscale clouds, which historically were designed for multi-tenancy, not for high-performance distributed computing like generative AI.

And so being able to work closely architecturally, to have our engineers work hand-in-hand to improve the networking performance and the computing performance, has been really powerful, really terrific.

And then thirdly, of course, NVIDIA uses very large infrastructures ourselves. Between our self-driving car team, our NVIDIA research team, our generative AI team, and our language model team, the amount of infrastructure that we need is quite significant.

None of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop.

It's been well publicized that our engineering teams use AI to design our chips. And so our own internal consumption of AI, by our robotics team, our Omniverse team, and so on, is quite large as well, and we land that in DGX Cloud.

And so DGX Cloud has multiple use cases and multiple drivers, and it's been just an enormous success. Our CSPs love it, the developers love it, and our own internal engineers are clamoring to have more of it. It's a great way for us to engage and work closely with all of the AI ecosystem around the world.

And let's see if I can answer your question regarding our software revenue. As we noted in our opening remarks, software is a part of almost all of our products, whether they are data center products, GPU systems, or any of our products within gaming and our future automotive products.

You're correct, we're also selling it as a standalone business. And that standalone software continues to grow, where we are providing both software services and upgrades there as well.

At this point we're seeing probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI Enterprise to be included with many of the products that we're selling, such as our DGX and the PCIe versions of our H100.

And I think we're going to see more availability with our CSP marketplaces. So we're off to a great start, and I do believe we'll see this continue to grow going forward.

And that does conclude today's question and answer session. I'll turn the call back over to Jensen Huang for any additional or closing remarks.

A new computing era has begun. The industry is simultaneously going through two platform transitions, accelerated computing and generative AI.

Data centers are making a platform shift from general-purpose to accelerated computing. The trillion dollars' worth of global data centers will transition to accelerated computing to achieve an order of magnitude better performance, energy efficiency, and cost.
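
To see how an order-of-magnitude claim like this can hold even when each accelerated node costs and draws more than a general-purpose server, consider the toy fleet-sizing model below; every figure in it is an illustrative assumption, not a measured or disclosed number.

```python
# Toy fleet-sizing model of the accelerated-computing transition.
# All figures are illustrative assumptions, not measured or disclosed data.

WORKLOAD = 1_000.0  # jobs per day the data center must serve

# Hypothetical node profiles: throughput, capex, and power per node.
cpu_node = {"jobs_per_day": 1.0,  "capex": 10_000, "watts": 500}
gpu_node = {"jobs_per_day": 20.0, "capex": 60_000, "watts": 2_000}  # assumed 20x speedup

for name, node in (("general-purpose", cpu_node), ("accelerated", gpu_node)):
    nodes = WORKLOAD / node["jobs_per_day"]
    print(f"{name:>15}: {nodes:6,.0f} nodes, "
          f"${nodes * node['capex']:12,.0f} capex, "
          f"{nodes * node['watts']:9,.0f} W")
# The accelerated fleet is 20x smaller, so total capex and power fall by
# roughly 3x and 5x here even though each node is pricier and hotter.
```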

Accelerated computing enabled generative AI, which is now driving a platform shift in software and enabling new, never before possible applications. Together, accelerated computing and generative AI are driving a broad-based computer industry platform shift.

Our demand is tremendous. We are significantly expanding our production capacity. Supply will substantially increase for the rest of this year and next year. NVIDIA has been preparing for this for over two decades and has created a new computing platform that the world's industries can build upon.

What makes NVIDIA special is, one, architecture. NVIDIA accelerates everything from data processing, training, and inference, across every AI model, from real-time speech to computer vision, and from giant recommenders to vector databases. The performance and versatility of our architecture translate to the lowest data center TCO and the best energy efficiency.

Two, install base. NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large install base to reach end users and grow their business. NVIDIA is the developer's preferred platform. More developers create more applications that make NVIDIA more valuable for customers.

Three, reach. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments, and robotics. Each has fundamentally unique computing models and ecosystems. System suppliers like computer OEMs can confidently invest in NVIDIA because we offer significant market demand and reach.

Four, scale and velocity. NVIDIA has achieved significant scale and is 100% invested in accelerated computing and generative AI. Our ecosystem partners can trust that we have the expertise, focus, and scale to deliver a strong roadmap and the reach to help them grow.

We are accelerating because of the additive results of these capabilities. We are upgrading and adding new products about every six months, versus every two years, to address the expanding universe of generative AI. While we increase the output of H100 for training and inference of large language models, we are ramping up our new L40S universal GPU for cloud scale-out and enterprise servers.

Spectrum-X, which consists of our Ethernet switch, BlueField-3 SuperNIC, and software, helps customers who want the best possible AI performance on Ethernet infrastructures. Customers are already working on next-generation accelerated computing and generative AI with our Grace Hopper.

We are extending NVIDIA AI to the world's enterprises that demand generative AI, but with model privacy, security, and sovereignty. Together with the world's leading enterprise IT companies, Accenture, Adobe, Getty, Hugging Face, Snowflake, ServiceNow, VMware, and WPP, and our enterprise system partners Dell, HPE, and Lenovo, we are bringing generative AI to the world's enterprises.

We're building NVIDIA Omniverse to digitalize and enable the world's multi-trillion dollar heavy industries to use generative AI to automate how they build and operate physical assets and achieve greater productivity. Generative AI starts in the cloud, but the most significant opportunities are in the world's largest industries, where companies can realize trillions of dollars of productivity gains.

It is an exciting time for NVIDIA, our customers, partners, and the entire ecosystem to drive this generational shift in computing. We look forward to updating you on our progress next quarter. This concludes today's conference call. You may now disconnect.