Nvidia Doubles Down On AI And Taiwan At Computex 2024

At Computex 2024 last month, Nvidia made a big splash with the first keynote of the show, held offsite at the athletics stadium of National Taiwan University. Nvidia, like many other vendors at the show, doubled down on its position in AI, both in the datacenter and inside AI PCs. While much of what CEO Jensen Huang talked about during the opening keynote was focused on datacenter and cloud AI, there were still a plethora of other announcements from the company focused on its PC gaming business and on injecting more AI into gaming. These new AI PC use cases also seemed to focus heavily on utilizing a hybrid AI approach leveraging both cloud and local AI compute to deliver a more advanced gaming experience.

Nvidia’s New Cadence For Blackwell Ultra, Vera, Rubin and Rubin Ultra

During the keynote, Nvidia didn’t introduce many new concepts that it hadn’t already covered during its presentations at CES or GTC earlier this year. It did update some of its power targets for GB200 NVL72 racks, now claiming a more efficient 100 kilowatts per rack instead of the previously quoted 120 kilowatts. Huang also showed what an NVLink spine looks like, visibly struggling to carry it on stage to demonstrate the size and weight of the interconnect.

Nvidia did give more visibility into the company’s future roadmap, all the way out to 2027. The company has updated its cadence of GPU launches to an annual schedule, an acceleration from its old 18- to 24-month cadence. This means that we should get Blackwell Ultra in 2025, the Rubin GPU and Vera CPU in 2026, and Rubin Ultra in 2027. Do keep in mind that Vera will likely be paired with Rubin, much like Grace is paired with Blackwell and Hopper. (Fun fact: just as the Grace and Hopper components were named for the computer scientist Admiral Grace Hopper, the Vera and Rubin components are named for the astronomer Dr. Vera Rubin, who did pioneering studies of galactic rotation.) So, we can expect VR to be the future nomenclature for those platforms, likely in VR100 and VR200 configurations.

Nvidia Showcases RTX AI PCs And Other AI Offerings

While Nvidia has been dominant in both enterprise and cloud AI environments, it has struggled to communicate its capabilities on the PC. This is especially true with the advent of the AI PC. Ironically, Nvidia was one of the first companies to bring AI capabilities to the PC via its GPUs with technologies such as DLSS, which has been pivotal in enabling real-time ray tracing. That aside, Nvidia has been making lots of product announcements this year to beef up its AI PC story, including with the launch of ChatRTX.

To further improve its on-device capabilities, Nvidia took what was once an April Fools’ joke and turned it into a real beta with Project G-Assist, a GeForce AI assistant. I got to experience this firsthand in the Nvidia suites at Computex and was really impressed with what it could do and how much it enhanced the gaming experience. Capabilities ranged from helping adjust graphics settings to walking the user through certain game intricacies and answering questions about objects in the in-game line of sight.

Nvidia also demonstrated the next generation of its ACE digital human platform, which is powered for on-device use by Nvidia NIMs that add more intelligence and interactivity to NPCs in open-world and RPG titles. The ACE demos were a great way for the company to show off its hybrid AI capabilities for improving performance and latency.

At Computex, the company also announced that it is working with Microsoft to deliver Copilot+ PC specs in a new category of laptops using RTX 4070 GPUs paired with other vendors’ SoCs. My understanding is that most of those will be SoCs that have dedicated NPUs, such as the AMD Ryzen AI 300 series or perhaps Intel’s Lunar Lake, but it remains unclear when these configurations will debut.

To further back up Nvidia’s AI PC story, the company also noted that Windows Copilot Runtime would be adding GPU acceleration for local PC SLMs. This, paired with Nvidia’s RTX AI Toolkit, should make access to Nvidia’s GPUs much easier for third-party developers building Windows AI applications, further strengthening the company’s AI PC story. AI PC darlings Adobe, Blackmagic Design and Topaz are already onboard to take advantage of the RTX AI Toolkit for their apps, and I’m excited to see how all these apps will leverage both GPU and NPU optimizations to maximize performance.

The Continued Battle For The AI PC

Nvidia has been working hard this year to strengthen its position on the client side of the AI equation, especially on the AI PC. Meanwhile, Nvidia’s position in AI for the cloud and the datacenter remains dominant, and with an accelerated annual cadence it’s quite clear that the company will be difficult to catch.

What I want to see from Nvidia in the future is a better end-to-end AI story that showcases its strengths as hybrid AI continues to become a more prevalent model for AI consumption. Nvidia already released its RTX 40 Super family of GPUs earlier this year, but even now there are rumors about the next-generation 5000 series, which will likely lean even further into AI capabilities such as frame generation and other rendering techniques. As touched on above, Nvidia’s role in Copilot+ PCs could also evolve with time if current rumors are any indication; I’ll be looking especially closely for products in that vein early next year.
