
Broadcom (AVGO) Q1 Fiscal Year 2024 Earnings Call


Transcript

Hello, and welcome to Broadcom Inc.'s First Quarter Fiscal Year 2024 Financial Results Conference Call. At this time, for opening remarks and introductions, I will turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc. You may begin.

Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the first quarter of fiscal year 2024. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2024 results, guidance for our fiscal year 2024, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call.

In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.

Thank you, Ji. And thank you, everyone, for joining us today. In our fiscal Q1 2024, consolidated net revenue was $12 billion, up 34 percent year on year, as revenue included 10 and a half weeks of contribution from VMware. Excluding VMware, consolidated revenue was up 11 percent year on year. Semiconductor solutions revenue increased 4 percent year on year to $7.4 billion, and infrastructure software revenue grew 153 percent year on year to $4.6 billion. With respect to infrastructure software, revenue contribution from consolidating VMware drove a sequential jump in revenue of 132 percent. We expect continued strong bookings at VMware will accelerate revenue growth through the rest of fiscal 2024. In semiconductors, AI revenue quadrupled year on year to $2.3 billion during the quarter, more than offsetting the current cyclical slowdown in enterprise and telcos.

Now let me give you more color on our two reporting segments, starting with software. Q1 software segment revenue of $4.6 billion was up 156 percent year on year and included $2.1 billion in revenue contribution from VMware. Consolidated bookings in software grew sequentially from less than $600 million to $1.8 billion in Q1 and are expected to grow to over $3 billion in Q2. Revenue from VMware will grow double-digit percentages sequentially, quarter over quarter, through the rest of the fiscal year. This is simply a result of our strategy with VMware. We are focused on upselling customers, particularly those who are already running their compute workloads with vSphere virtualization tools, to upgrade to VMware Cloud Foundation, otherwise branded as VCF. VCF is the complete software stack, integrating compute, storage and networking, that virtualizes and modernizes our customers' data centers. This on-prem self-service cloud platform provides our customers a complement and an alternative to public cloud. And in fact, at VMware Explore last August, VMware and NVIDIA entered into a partnership called VMware Private AI Foundation, which enables VCF to run GPUs.

This allows customers to deploy their AI models on-prem, wherever they do business, without having to compromise on privacy and control of their data. And we are seeing this capability drive strong demand for VCF from enterprises seeking to run their growing AI workloads on-prem. Reflecting all these factors for the full year, we reiterate our fiscal 2024 guidance for software revenue of $20 billion.

Turning to semiconductors. Before I give you an overall assessment of this segment, let me provide more color by end markets. Q1 networking revenue of $3.3 billion grew 46% year-on-year, representing 45% of our semiconductor revenue. This was largely driven by strong demand for our custom AI accelerators at our two hyperscale customers. This strength extends beyond AI accelerators. Our latest-generation Tomahawk 5 800G switches, Ethernet NICs, retimers, DSPs, and optical components are experiencing strong demand at hyperscale customers, as well as at large-scale enterprises deploying AI data centers.

For fiscal 2024, given the continued strength of AI networking demand, we now expect networking revenue to grow over 35% year-on-year, compared to our prior guidance for 30% annual growth.

Moving on to wireless. Q1 wireless revenue of $2 billion decreased 1% sequentially and declined 4% year-on-year, representing 27% of semiconductor revenue. As you all may know, the engagement with our North American customer continues to be very deep, strategic and, of course, multi-year. And in fiscal 2024, helped by content increases, we reiterate our previous guidance for wireless revenue to be flat year-on-year.

Next, our Q1 server storage connectivity revenue was $887 million, or 12% of semiconductor revenue, down 29% year-on-year. We are seeing weak demand in the first half, but expect a recovery in the second half. Accordingly, we are revising our outlook for fiscal 2024 server storage revenue to decline in the mid-20% range year-on-year, compared to prior guidance for a high-teens percentage decline year-on-year. On broadband, Q1 revenue declined 23% year-on-year to $940 million and represented 13% of semiconductor revenue. We are seeing a cyclical trough this year in broadband, as telco spending continues to weaken, and do not expect improvement until late in the year.

And accordingly, we are revising our outlook for fiscal 2024 broadband revenue to be down 30% year-on-year, from our prior guidance of down mid-teens year-on-year. And finally, Q1 industrial resales of $215 million declined 6% year-on-year. For fiscal 2024, we continue to expect industrial resales to be down high single digits year-on-year. In summary, we are seeing stronger-than-expected growth from AI, more than offsetting the cyclical weakness in broadband and server storage. Q1 semiconductor revenue grew 4% year-over-year to $7.4 billion. Turning to fiscal 2024, we reiterate our guidance for semiconductor solutions revenue to be up a mid-to-high single-digit percentage year-on-year.

I know we told you in December that our revenue from AI would be 25% of our full-year semiconductor revenue. We now expect revenue from AI to be much stronger, representing some 35% of semiconductor revenue at over $10 billion. This more than offsets weaker-than-expected demand in broadband and server storage. So for fiscal 2024, in summary, we reiterate our guidance for consolidated revenue to be $50 billion, which represents 40% year-on-year growth, and we reiterate our full-year adjusted EBITDA guidance of 60% of revenue.
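As a rough cross-check of the guidance arithmetic above, the sketch below backs into the implied AI figure from the consolidated and software guidance. The segment split is inferred for illustration only and is not a number quoted on the call.

```python
# Rough sanity check of the fiscal 2024 guidance arithmetic quoted above.
# The exact software/semiconductor split is an assumption for illustration only.
consolidated_guidance = 50e9          # ~$50B consolidated revenue guidance
software_guidance = 20e9              # ~$20B infrastructure software guidance
implied_semiconductor = consolidated_guidance - software_guidance  # ~$30B

ai_share_of_semis = 0.35              # "some 35% of semiconductor revenue"
implied_ai_revenue = ai_share_of_semis * implied_semiconductor

print(f"Implied semiconductor revenue: ${implied_semiconductor/1e9:.0f}B")
print(f"Implied AI revenue: ${implied_ai_revenue/1e9:.1f}B (consistent with 'over $10 billion')")
```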

Before I turn this call over to Kirsten, who will provide more details of our financial performance this quarter, let me just highlight that Broadcom recently published its fourth annual ESG report, available on our corporate citizenship site, which discusses the company's sustainability initiatives. As a global technology leader, we recognize Broadcom's responsibility to connect our customers, employees, and communities. Through our product and technology innovation and operational excellence, we remain committed to this mission. Kirsten?

Thank you, Hock. Let me now provide additional detail on our Q1 financial performance, which was a 14-week quarter and included 10 and a half weeks of contribution from VMware. Consolidated revenue was $12 billion for the quarter, up 34% from a year ago. Excluding the contribution from VMware, Q1 revenue increased 11% year-on-year. Gross margins were 75.4% of revenue in the quarter. Operating expenses were $2.2 billion, with R&D of $1.4 billion, both up year-on-year, primarily due to the contribution from VMware. Q1 operating income, including VMware, was $6.8 billion and was up 26% from a year ago, with operating margin at 57% of revenue. Excluding transition costs of $226 million in Q1, operating profit of $7.1 billion was up 30% from a year ago, with operating margin at 59% of revenue. Adjusted EBITDA was $7.2 billion, or 60% of revenue. This figure excludes $139 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductors. Revenue for the semiconductor solutions segment was $7.4 billion and represented 62% of total revenue in the quarter.

This was up 4% year-on-year. Gross margins for the semiconductor solutions segment were approximately 67%, down 190 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses increased 8% year-on-year to $865 million, reflecting the 14-week quarter, resulting in semiconductor operating margins of 56%.

Now moving on to our infrastructure software segment. Revenue for infrastructure software was $4.6 billion, up 153% year-on-year, primarily due to the contribution of VMware and represented 38% of revenue. Gross margins for infrastructure software were 88% in the quarter, and operating expenses were $1.3 billion in the quarter, resulting in infrastructure software operating margin of 59%. Excluding transition costs, operating margin was 64%.

Moving on to cash flow. Free cash flow in the quarter was $4.7 billion and represented 39% of revenue, off a higher revenue base. Excluding restructuring and integration spend of $658 million, free cash flow was 45% of revenue. We spent $122 million on capital expenditures. Days sales outstanding were 41 days in the first quarter, compared to 31 days in the fourth quarter, on higher accounts receivable due to the VMware acquisition.
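For readers reconciling the two free-cash-flow percentages above, a minimal back-of-the-envelope check, using only the figures quoted in this passage, is sketched below.

```python
# Back-of-the-envelope check of the free cash flow percentages quoted above.
revenue = 12.0e9                      # Q1 consolidated revenue
fcf = 4.7e9                           # reported free cash flow
restructuring_integration = 0.658e9   # restructuring and integration spend

fcf_pct = fcf / revenue
adj_fcf_pct = (fcf + restructuring_integration) / revenue

print(f"FCF as % of revenue: {fcf_pct:.0%}")        # ~39%
print(f"FCF ex-restructuring and integration: {adj_fcf_pct:.0%}")  # ~45%
```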

The accounts receivable we brought on from VMware has payment terms of 60 days, unlike Broadcom's standard 30 days. We ended the first quarter with inventory of $1.9 billion, up 1% sequentially. We continued to remain disciplined on how we manage inventory across the ecosystem. We ended the first quarter with $11.9 billion of cash and $75.9 billion of gross debt.

The weighted average coupon rate and years to maturity of our $48 billion in fixed-rate debt are 3.5% and 8.4 years, respectively. The weighted average coupon rate and years to maturity of our $30 billion in floating-rate debt are 6.6% and 3 years, respectively. During the quarter, we repaid $934 million of fixed-rate debt that came due.
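As an illustration of how a blended coupon across the two tranches above works out, here is a small sketch. The blended figure is derived here for illustration and is not a number quoted on the call.

```python
# Illustration of weighted-average coupon math across the two debt tranches above.
fixed_principal, fixed_coupon = 48e9, 0.035        # $48B fixed-rate debt at 3.5%
floating_principal, floating_coupon = 30e9, 0.066  # $30B floating-rate debt at 6.6%

total_tranches = fixed_principal + floating_principal
blended_coupon = (fixed_principal * fixed_coupon
                  + floating_principal * floating_coupon) / total_tranches

print(f"Tranches total: ${total_tranches/1e9:.0f}B, blended coupon: {blended_coupon:.2%}")
```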

This week, we repaid $2 billion of our floating-rate debt, and we intend to maintain this quarterly repayment of debt throughout fiscal 2024. Turning to capital allocation. In the quarter, we paid stockholders $2.4 billion of cash dividends, based on a quarterly common stock cash dividend of $5.25 per share. We executed on our plan to complete our remaining share buyback authorization. We repurchased $7.2 billion of our common stock and eliminated $1.1 billion of common stock for taxes due on vesting of employee equity, resulting in the repurchase and elimination of approximately 7.7 million AVGO shares.

To help you with modeling share count, the weighted effect of the 54 million shares issued for the VMware acquisition resulted in a sequential increase in the Q1 non-GAAP diluted share count to 478 million, with the Q2 non-GAAP diluted share count expected to increase to approximately 492 million as the shares issued are fully weighted in the second quarter. Now on to guidance. Notwithstanding the updated dynamics of our semiconductor and software segments that Hock discussed, we choose to reiterate our guidance for fiscal year 2024: consolidated revenue of $50 billion and adjusted EBITDA of 60% of revenue.
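To make the share-count weighting mechanics explicit, a minimal sketch is below. The roughly 438 million base share count is back-solved for illustration only and ignores buybacks and other changes in dilution.

```python
# A minimal sketch of the share-count weighting described above: shares issued for
# the acquisition only count for the fraction of the quarter they were outstanding.
def weighted_share_count(base_m, issued_m, weeks_outstanding, weeks_in_quarter):
    """Weighted-average share count for a quarter, in millions of shares."""
    return base_m + issued_m * (weeks_outstanding / weeks_in_quarter)

q1 = weighted_share_count(438, 54, weeks_outstanding=10.5, weeks_in_quarter=14.0)
q2 = weighted_share_count(438, 54, weeks_outstanding=13.0, weeks_in_quarter=13.0)
print(f"Q1 ~{q1:.0f}M shares, Q2 ~{q2:.0f}M shares")   # roughly 478M and 492M
```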

With regard to VMware, in February we signed a definitive agreement to divest the End-User Computing division, with the transaction expected to close in 2024, subject to customary closing conditions, including regulatory approvals. The EUC division has been classified as discontinued operations in our Q1 financials. We have decided to retain the Carbon Black business and merge Carbon Black with Symantec to form the Enterprise Security Group. The impact on revenue and profitability is not significant. That concludes my prepared remarks. Operator, please open up the call for questions.

Thank you. Ladies and gentlemen, to ask a question, please press star one one on your telephone and then wait to hear your name announced. To withdraw your question, please press star one one again. We ask that you limit yourself to one question only. Please stand by while we compile the Q&A roster.

Our first question comes from the line of Harsh Kumar with Piper Sandler. Your line is open.

Yeah, thank you. Hock, once again, tremendous results, and tremendous activity that you guys are benefiting from in AI. But my question was on software. I think, if I heard you correctly, Hock, you mentioned that your software bookings will rise quite dramatically to $3 billion in Q2. I was hoping that you could explain to us why it would rise almost 100% in Q2 over Q1, if my math is correct. Is it something simple, or is it something that you guys are doing from a strategy angle that's making this happen?

As I indicated, with the acquisition of VMware, we're very focused on upselling and helping customers not just buy, but deploy, this private cloud, what we call a virtual private cloud solution or platform in on-prem data centers. It has been very successful so far. And I agree, it is still early innings at this point. We just closed the deal in late November, so we have only had the benefit of about three months. But we were very prepared to launch and focus on this push initiative on private cloud, VCF. And the results have been very much what we expected them to be, which is very, very successful. Thank you, Harsh.

Thank you. Please stand by for our next question.

Our next question comes from the line of Harlan Sur with JP Morgan. Your line is open.

Yeah, good afternoon. Thanks for taking my question. Hock, on the AI outlook being revised from greater than $7.5 billion, I think, last quarter to $10 billion plus this quarter: as you mentioned, AI compute pulls your ASICs, but it also pulls your networking, optical, and PCIe connectivity solutions as well. So can you just help us understand, of that $2.5 billion increase in outlook, is it stronger AI ASIC demand, stronger networking, stronger optical, et cetera? But more importantly, are you also seeing a similar acceleration in your forward ASIC design win pipeline as well?

There are a lot of questions and a lot of information you want me to discuss. Let's take them one at a time, shall we?

Yeah, on the increase. As we have said before, and as I have shown before, it is roughly two-thirds, one-third, or 70/30: the 70 percent is AI accelerators, which are custom ASIC AI accelerators with a couple of hyperscalers, compared to the other components, which I collectively consider as networking components. It's about a 70/30 percent mix, and the increase of almost $3 billion that you mentioned is a similar combination.

And then are you seeing a similar acceleration on the forward design win pipeline and customer engagements?

I have indicated I only have two. Really, only two. Seriously, I don't count anybody who does not go into production as a real customer at this point. Okay, thanks.

Thanks. Please stand by for our next question.

Our next question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Thank you for taking my question. Hock, again, on the over $10 billion for AI, is this still a supply-constrained number, or do you think that this is a very project-driven number, and it's not really supply that gates it? So if you were to get, let's say, increased supply, could there be upside? And then part B of that is on the switching side: have you already started to see benefits from the 51-terabit-per-second switches, or is that something that comes along later? What is the contribution of 51T to the switching upside that you mentioned for this year?

Yeah, no, I would say Tomahawk 5 is going great guns. Now, it's not driven, unlike Tomahawk 3 and Tomahawk 4 in the past, by traditional scale-out at hyperscalers in the cloud environment. This is largely coming from the scaling out of AI data centers: the building of larger and larger clusters to enable generative AI computing functionality, and you're going for bigger and bigger pipes. And Tomahawk 5, at 51 terabits, is a perfect solution, and we're seeing a lot of demand. And in many cases, they are basically surpassing the rate of adoption that we had previously thought. So it is a very good solution for connecting GPUs. And with respect to AI accelerators, where I think you are asking about a constraint in the supply chain, we do get enough lead time from our hyperscale customers that we do not have a supply chain constraint. Thank you.

Thank you. Please stand by for our next question.

Our next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.

Hi, guys. Thanks for taking my question. I had a question on the core software business. So you said VMware, for the two months or so that it was in there, was $2.1 billion. That would put the rest of the software, CA, Symantec and Brocade, at almost two and a half billion, up 25% sequentially and almost 40% year over year. I guess the question is, do I have my math right? And if so, how can that be? What's going on in the core business? And how should we be thinking about the growth of the core business and VMware as we go through the year? Is VMware still at 12 billion?

Yeah. Don't get too excited about that. Don't get too excited about that. I think there were certain product contracts we obtained, but it's very strong contract renewals of the old Broadcom contracts. Mainframes, especially, were very strong, as were some of the other distributed software parts of our platform. So that has also accelerated. But that's not the story of the show, Stacy. What the show is about is the accelerating bookings and backlog we are accumulating on VMware.

Okay. So VMware is still running at like an $11 billion or $12 billion run rate, but it sounds like that should accelerate, so the overall for VMware should be more than the $12 billion that you talked about. And the core business, the strength of the core, was kind of a one-time thing, which we should model as falling off, because you still have the overall software at $20 billion. Correct. Got it. Okay. Thanks.

Please stand by for our next question.

Our next question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Yeah, thanks for taking the question. I wanted to continue on the VMware discussion a little bit. You know, Hock, now that you've had the asset for a little while, I'm curious how the go-to-market strategy looks with VMware relative to the prior software acquisitions that you've done. What I'm really getting at is, how have you thought about the segmentation of the customer base of VMware? I know there's been some discussion around your channel engagements, you know, the legacy VMware channel, in the past. So I'm just curious how you've been managing that go-to-market.

Oh, we haven't had it for that long, to be honest. It's about three months. There are kinks to be worked out, but things seem to be progressing very well, as much as we had hoped they would, because of where we are focusing our go-to-market, and more than go-to-market, where we are focusing our resources: not just on go-to-market, but on engineering, engineering a very improved VCF stack, which we have; selling it out there; being able to then support it; and even, in the process, helping customers deploy it and start to really make it stand up in their data centers. All that focus is on the largest, I would say, 2,000 strategic customers. These are guys who still want to have a significant distributed data center on-prem. Many of our customers are looking at a hybrid situation, not to use that word too loosely. Basically, a lot of these customers have some very legacy but critical mainframes. That's an old platform, not growing, except it's still vital. Then, for the workloads they have to modernize, today and in the future, they really have a choice, and they are taking both angles: running a lot of applications in distributed data centers on-prem, which can handle these modernized workloads, while at the same time, because of elastic demand, being able to also put some of these applications into the public cloud. In today's environment, most of these customers do not have an on-prem data center that resembles what's in the cloud, which is very high availability, very low latency, highly resilient, and that is what we are offering with VMware Cloud Foundation. It exactly replicates what they get in a public cloud. And they love it. Now, it's been three months, and we are seeing it in the level of bookings we have generated over the last three months. Thank you.

Thank you. Please stand by for our next question.

Our next question comes from the line of Chris Danely with Citi. Your line is open.

Hey, thanks, gang. Let me ask a question. Hey, Hock, just a question on the AI upside from a customer perspective. How much of the upside is coming from new versus existing customers? And then how do you see the customer base going forward? I think it's going to broaden. We know how you like to price. So if you do get a bunch of new customers for these products, could there be some better pricing and better margins as well? Hopefully they're not listening to the call.

Chris, thanks for this question. Love it. Perhaps let me try to give you a sense of how we think of the AI market, the new generative AI market, so to speak, using the term very loosely and generically as well. We see it as two end markets, two broad segments. One segment is hyperscalers, especially the very large hyperscalers with a huge, huge consumer subscriber base. You probably can guess who these few players are. Very large subscriber bases and an almost infinite amount of data. And their model is getting subscribers to keep using the platform they have, and, through that, being able to generate a better experience not only for the subscribers, but a better advertising opportunity for their advertising clients. It's a great ROI, as we are seeing; it's an ROI that comes very quickly. And the investment continues vigorously in that segment, which comprises very few players, but players with huge subscriber bases and the scale to invest a lot. And here, ASICs, custom silicon, custom AI accelerators, make plenty of sense, and that's where we focus our attention. They also scale out those AI accelerators through clusters, increasingly large clusters, because of the way the foundation models run and the parameters the large language models need to generate. So they buy a lot of networking together with the accelerators. In comparison, obviously, to the value of the AI accelerators we sell, the networking side, while growing, is a small percentage of the value of their accelerators. That's one big segment we have.

The other segment we have, which is smaller, is the enterprise, what I broadly call the enterprise segment in AI. Here you're talking about companies, large and not so large, who have AI initiatives going on. You know, all this big news and hype about AI being the savior of productivity and all that gets these companies moving on their own initiatives. And here, short of going to the public cloud, they try to run it on-prem. If they run it on-prem, they take merchant silicon for AI accelerators as much as possible. And here, in terms of the AI accelerator, we don't have a market; that's a merchant silicon market. But on the networking side, as they build out their data centers, they do buy all those networking components, beginning with switches and routers, even through people like Arista with the 7800, but switches for sure, and the various other components I mentioned. And that's a different segment of the market for us. So it's an interesting mix, and we see both. Thanks a lot, Hock.

Thank you. Please stand by for our next question.

Our next question comes from the line of Karl Ackerman with BNP Paribas. Your line is open.

Yes, thank you. Hock, weakness in broadband, server and storage customers is understandable, given what your peers have said this earnings season. But perhaps you could speak to the backlog visibility you have with your customers in those markets that would indicate those markets could begin to order again and see sequential growth in the second half of your calendar year. Thank you.

You're correct. As I say, we are almost near the trough. This year, '24, the first half for sure will be the trough. The second half of '24, we don't know yet. But I'll tell you what: we have 52-week lead times, as you know, and we are very disciplined about sticking to them. And based on that, we are seeing bookings lately that are significantly up from bookings a year ago.

Thank you. Please stand by for our next question.

Our next question comes from the line of Christopher Rolland with Susquehanna. Your line is open.

Thank you for the question. So, Hock, this one's for you, on optical. Our checks suggest that you're vertically integrating there. You're now putting in your own drivers and TIAs, you're starting to get traction in PAM4 DSPs, and I think you had an early lead in 100-gig data center lasers as well. A lot of this should be on the back of AI networking, which appears to be exploding here. So I was wondering if you could help us size that market and also talk about how fast this is growing for you. I think there may have been some clues in that one-third number for AI that you gave us, but perhaps if you could double-click or square that for us, it would be great. Thanks.

Okay.

Before you get carried away, please: the other categories outside AI accelerators, all those things like PAM4 DSPs, optical components and retimers, are small compared to the Tomahawk switches and Jericho routers used in AI networks. And also, we're in an environment where, as you all know, traditional enterprise networking is also in a bit of a slowdown. So, obviously, demand is driven very much by AI, and that tends to push us into a line of thinking that could be very biased, because what it is showing is that the mix of networking content relative to compute is very skewed, very different in an AI data center compared to a traditional CPU-based data center. So I don't want to lead you guys the wrong way. But you're right: in an AI data center, there's quite a bit of content in DSPs, PAM4s, optical components, retimers and PCI Express switches. But it's still not that big in the overall scheme of things compared to what we sell in switches and routers. And compared to AI accelerators, they are even smaller. Think in that ratio. As I said, of the AI revenue of $10 billion plus this year, 70% will be AI accelerators, 30%...

...everything else. And within everything else, 30% or so, I would say more than half of that 30%, more like 20%, are...

...the switches and routers. And the rest are the various retimers and DSP components. Because, unlike what you say, we're not vertically integrated in the sense that we do not do the entire transceiver, the optical...

...transceiver. We don't do that. Those are manufactured typically by OEMs, contract manufacturers like InnoLight and Eoptolink, guys in China. Those guys are much more competitive. But we provide those key components we talked about. So when you look at it that way, you can understand the weighting of the various values. Super helpful. Thank you, Hock.

Thank you. Please stand by for our next question.
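Pulling together the ratios Hock walks through above, a rough, illustrative breakdown of the $10 billion-plus AI figure is sketched below. The dollar amounts are derived from those percentages and are not quoted directly on the call.

```python
# Rough breakdown of the $10B-plus fiscal 2024 AI revenue using the ratios above:
# ~70% accelerators, ~20 points switches/routers, remainder DSPs, retimers,
# optical components and PCIe switches. Figures are illustrative only.
ai_revenue = 10e9
accelerators = 0.70 * ai_revenue          # custom AI accelerators
switches_routers = 0.20 * ai_revenue      # Tomahawk switches, Jericho routers
other_networking = ai_revenue - accelerators - switches_routers

for name, value in [("AI accelerators", accelerators),
                    ("Switches and routers", switches_routers),
                    ("Other networking components", other_networking)]:
    print(f"{name}: ~${value/1e9:.0f}B")
```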

The next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Hi, thank you for taking the question. Hock, I think we all appreciate the capabilities you have in terms of custom compute. I asked this question last quarter on the call as well. But there is one competitor, based in Asia, who continues to be pretty vocal and adamant that, on one of the future designs at your largest customer, they may have some share. We're picking up conflicting evidence and we're getting a bunch of investor questions. I was hoping you could address that, and your confidence level in maintaining, if not extending, your position there. Thank you.

You know, I can't stop somebody from trash-talking. Okay, that's a bad way to describe it. Let the numbers speak for themselves, please. Leave it that way. And I'll add to it: like most things we do in terms of large, critical technology products, we tend to always have, as we do here, a very deep, strategic and multi-year relationship with our customer.

Thank you. Thank you. Please stand by for our next question.

Our next question comes from the line of Vijay Rakesh with Mizuho. Your line is open.

Hi, Hock. Just on the custom silicon side: obviously, you guys dominate that space, but you mentioned two customers, only two major customers. Just wondering what's really holding back other hyperscalers from ramping up their own custom silicon. And on the flip side, you're hearing some peers talk about custom silicon roadmaps as well. If you could hit both, thanks.

Well, number one, we don't dominate this market. We only have two; I can't be dominating it with two. Number two, the second point is, it takes years. It takes a lot of heavy lifting to create that custom silicon, because you need to do more than just hardware or silicon to really have a solution for generative AI, or even AI, in trying to create those AI capabilities in your data centers. It's more than just silicon. You invest a lot in creating software models that work on your custom silicon, and it has to match; you've got to match your business model in the first place, which leads to creating foundation models, which then need to work and be optimized on the custom silicon you're developing. So it's an iterative process, and it's a constantly evolving process, even for the same customer we deal with. I mentioned that on the last call. So it takes years to really reach a point where you can say, hey, I'm finally delivering something production-worthy. And if it's not, it's not because the silicon is bad; it's that it doesn't work well with the foundation models the customer has put in place and the software layer that works with it, the firmware, the software layer that translates into it. All of that has to work. You're almost creating an entire ecosystem on a limited basis, which we all recognize is well established in x86 CPUs, but in GPUs and those kinds of AI accelerators it is still at a very early stage. So it takes years. And with our two customers, we have been engaged for years. With one of them, we have been engaged for eight years to get to this point. So it's something where you have to be very patient and persevere, and hope that everything lines up, because ultimate success, if you're just the silicon developer, is not just dependent on you, but depends as much, or even more, on your partner or customer. So you've just got to be patient, guys. We've only gotten to two so far.

And on the peers getting into that market?

Who is getting into the market? Please repeat.

You talked, talk about some of your peers. Like, I think NVIDIA has been talking about entering the custom silicon market.

Oh, the custom silicon market. I won't comment on that. All I will say is that I have no interest in going into a market outside of the philosophy we have in running our business at Broadcom, and maybe other people have a different philosophy. Let me tell you my simple philosophy, which I've articulated over time every now and then, and which is very clear to my management team and to the whole of Broadcom. You do what you're good at, and you keep doubling down on the things you know you are better at than anybody else. You just keep doubling down, because nobody else will catch up to you if you keep running ahead of the pack. But do not do something that you think you can do but somebody else is doing a much better job at than you are. That's my philosophy. Thanks, Hock.

Great. Thank you. Please stand by for our next question.

Our next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.

Thank you very much for squeezing me in, guys. It's kind of a two-part thing on the custom silicon stuff. I guess, if some of the merchant leaders in AI were interested in some custom networking from you, either in switching or routing, would you consider it? And the second question is for Kirsten. The business model around custom silicon for most folks is taking NRE payments up front and selling the end product at a lower gross margin but a higher operating margin. And you guys have ramped this massive custom business with no real impact to gross margins. So maybe you could just unpack the philosophy and the accounting around the way that you guys approach the custom silicon opportunities, just from a margin perspective. Thanks, guys.

I'll take that, because you're asking about business models; you're not really asking for number-crunching. So let me try to answer it this way. There's no particular mystery, short of what constitutes an AI accelerator. An AI accelerator, the way it's configured now, whether it's merchant or custom, has a lot in it. First, an AI accelerator that runs foundation models very well needs not just a whole bunch of floating-point multipliers to do matrix multiplication and matrix analysis for regression; that's the logic part, the compute part. It has to come with access to a lot of memory, literally almost cache memory tied to it. The chip is not just a simple multiplier; it comes with memory attached to it. It's almost a layered, three-dimensional chip, which it is. Memory is not something any of us in AI accelerators are super good at designing or building, so we buy the memory, very specialized high-bandwidth memory, you all know about it, from key memory suppliers; every one of us does that. And combining the two together, that's what an AI accelerator is. So even if I get a very good, normal corporate silicon gross margin on my compute logic chip, on the multipliers, there's no way I can apply that kind of add-on margin to the high-bandwidth memory, which is a big part of the cost of the total chip. And so naturally, by simple math, the entire consolidated AI accelerator carries a gross margin below that of a traditional silicon product we have out there. There is no going away from that, because you are adding on memory; even though we have to create the access, the I/Os that attach it, we do not, and could not, justify adding that kind of margin to the memory. Nobody could. So it brings a natural, normal margin for that product. That's really the simple basis of it. But on the logic part of it, sure, with the kind of content, with the kind of cutting-edge IP that we develop to make those high-density floating-point multipliers on 800 square millimeters of advanced silicon, we can command a margin similar to our corporate gross margin. Okay. Thank you.

Thank you. Please stand by for our next question.
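To make the blended-margin arithmetic in that answer concrete, here is a purely hypothetical sketch: if the logic die carries a corporate-like gross margin while the attached high-bandwidth memory is passed through at little markup, the consolidated accelerator margin lands below the logic-only figure. None of these numbers are Broadcom figures or quoted on the call.

```python
# Hypothetical illustration of the blended accelerator gross margin described above.
# All revenue splits and margins below are invented for illustration only.
logic_revenue, logic_margin = 60.0, 0.70     # hypothetical logic share and margin
memory_revenue, memory_margin = 40.0, 0.10   # hypothetical HBM pass-through margin

total_revenue = logic_revenue + memory_revenue
blended_margin = (logic_revenue * logic_margin
                  + memory_revenue * memory_margin) / total_revenue
print(f"Blended gross margin: {blended_margin:.0%}")  # ~46%, below the logic-only 70%
```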

Our next question comes from the line of Edward Snyder with Charter Equity Research. Your line is open.

Thanks a lot. First of all, I'll just sneak one in if I could, Hock. You mentioned the second customer, but you also mentioned that it takes years of iterative work. I mean, anybody who has looked at the TPU history, I guess, understands that. And you said before that it takes time to ramp it up. But maybe you could give us a little bit of color: of the phenomenal growth in your custom silicon products, is a much more material part of that coming from your second customer, and, taking into account the smaller revenue number, is the growth rate, generally speaking, fairly comparable? And then I had a question about VMware.

You'd better go on to your VMware question, because on the first one, I don't talk about my customers individually.

Sorry. Okay. Well, okay, never mind; that's a waste of time. So, closing VMware marked kind of a significant shift in your software strategy, from focusing on the largest 1,000 or so customers to, well, hundreds of thousands now. Should we expect, once you get through, I don't want to say the low-hanging fruit, of selling the VCF product into, as you mentioned, the first 1,000 customers, that your opex as a share of sales, especially in sales and marketing, would start to increase?

Because that's the big leverage Broadcom has had in almost all your acquisitions in software, and that seems to be changing now.

Ah, we have a shift here, and it's interesting. You're right in that regard. We are spending more on go-to-market and support, because we have a lot of customers. We have 300,000, but we stratify. So we have the strategic guys, where we sell and upsell VCF, the private cloud. Very good. But for the long tail of what we call smaller commercial customers, we continue to support and sell improved versions of just vSphere, compute virtualization, to improve productivity on their servers.

We don't attempt to say, go build out your whole VCF. They don't have the skills to do it. But the short answer is, you're right: my spend, my opex spend, be it support, services or go-to-market, will increase. But the difference between this and, say, CA, an earlier acquisition we did, is that we're growing this business very fast, and you don't have to increase your spend at the rate you grow this business. So we have operating leverage through revenue growth over the next three years.

Great. If I could squeeze one more in.

You mentioned several times last quarter, actually, that there were two divisions you were going to divest, including Carbon Black, and that's changed. What has changed? Has the market outlook softened and this is wait-and-see, or have you changed your strategy on how you're going to integrate it? I'm just curious why, last quarter, you said you'd probably divest it, and now you're keeping it.

Well, we find now that we could generate more value for you, the shareholders. Soon you will be one, I'm just kidding. But we would generate more value for our shareholders by taking Carbon Black, which is not that big, and integrating it into Symantec. It's true: by doing that, we would generate much better value for our shareholders than taking a one-shot divestiture of this asset, which is not particularly large to begin with. Great.

Thank you. Thank you. Ladies and gentlemen, in the interest of time, I would now like to turn the call back over to Ji Yoo for closing remarks.

Thank you, Operator. In closing, we would like to highlight our Broadcom Enabling AI Infrastructure investor meeting on Wednesday, March 20, 2024, at 9 a.m. Pacific, 12 p.m. Eastern time. Charlie Kawwas, President of Broadcom's Semiconductor Solutions Group, and several general managers will present on Broadcom's merchant silicon portfolio. The live webcast and replay of the investor meeting will be available at investors.broadcom.com.

Broadcom currently plans to report its earnings for the second quarter of fiscal 24 after close of market on Wednesday, June 12, 2024. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific time. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call. Thank you. Ladies and gentlemen, this concludes today's conference call. Thank you for your participation. You may now disconnect. Thank you.