NVIDIA Research to present simulation, generative AI advances at SIGGRAPH – The Robot Report


NVIDIA Research today said it is bringing an array of advancements in rendering, simulation, and generative AI to SIGGRAPH 2024. The computer graphics conference runs from July 28 to Aug. 1 in Denver.

At SIGGRAPH, NVIDIA Corp. plans to present more than 20 papers introducing innovations that advance synthetic data generators and inverse rendering tools for training next-generation models. The company said its AI research improves simulation by boosting image quality and unlocking new ways to create 3D representations of real or imagined worlds.

The papers focus on diffusion models for visual generative AI, physics-based simulation, and increasingly realistic AI-powered rendering. They include two technical Best Paper Award winners and collaborations with universities across the U.S., Canada, China, Israel, and Japan, as well as researchers at companies including Adobe and Roblox.

These initiatives will help create tools that developers and businesses can use to generate complex virtual objects, characters, and environments, said the company. Synthetic data generation can then be harnessed to tell powerful visual stories, aid scientists’ understanding of natural phenomena or assist in simulation-based training of robots and autonomous vehicles.




Diffusion models improve texture painting, text-to-image generation

Diffusion models, a popular tool for transforming text prompts into images, can help artists, designers and other creators rapidly generate visuals for storyboards or production, reducing the time it takes to bring ideas to life.
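The core idea behind these models can be illustrated without any of the SIGGRAPH work: a diffusion model learns to reverse a gradual noising process, turning pure noise back into an image step by step. The toy sketch below runs a DDPM-style reverse loop on a 1-D signal, with a hand-crafted noise predictor standing in for the trained network so it is runnable as-is (this is a generic illustration, not NVIDIA's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D signal standing in for pixel data.
x0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))

# Linear noise schedule, DDPM-style.
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# A trained network would predict the noise in x_t; here we cheat and
# derive it from the known clean signal so the loop runs without training.
def predict_noise(xt, t):
    return (xt - np.sqrt(alpha_bars[t]) * x0) / np.sqrt(1.0 - alpha_bars[t])

# Reverse process: start from pure noise, denoise one step at a time.
xt = rng.standard_normal(x0.shape)
for t in reversed(range(T)):
    eps = predict_noise(xt, t)
    xt = (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # re-inject noise on every step except the last
        xt += np.sqrt(betas[t]) * rng.standard_normal(x0.shape)

print(f"max reconstruction error: {np.abs(xt - x0).max():.2e}")
```

In a real text-to-image model the noise predictor is a large network conditioned on the text prompt; everything else about the sampling loop is essentially the same.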

Two NVIDIA-authored papers are advancing the capabilities of these generative AI models.

ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images with a consistent main character. The company said it is an essential capability for storytelling use cases such as illustrating a comic strip or developing a storyboard. The researchers’ approach introduces a technique called subject-driven shared attention, which reduces the time it takes to generate consistent imagery from 13 minutes to around 30 seconds.

NVIDIA researchers last year won the Best in Show award at SIGGRAPH’s Real-Time Live event for AI models that turn text or image prompts into custom textured materials. This year, they are presenting a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, enabling artists to paint in real time with complex textures based on any reference image.


ConsiStory makes it easier to generate multiple images with the same character. Source: NVIDIA Research

NVIDIA Research kick-starts developments in physics-based simulation

Graphics researchers are narrowing the gap between physical objects and their virtual representations with physics-based simulation — a range of techniques to make digital objects and characters move the same way they would in the real world. Several NVIDIA Research papers feature breakthroughs in the field, including SuperPADL, a project that tackles the challenge of simulating complex human motions based on text prompts.

Using a combination of reinforcement learning and supervised learning, the researchers demonstrated how the SuperPADL framework can be trained to reproduce the motion of more than 5,000 skills — and can run in real time on a consumer-grade NVIDIA GPU.

Another NVIDIA paper features a neural physics method that applies AI to learn how objects — whether represented as a 3D mesh, a NeRF or a solid object generated by a text-to-3D model — would behave as they are moved in an environment. A NeRF, or neural radiance field, is an AI model that takes 2D images representing a scene as input and interpolates between them to render a complete 3D scene.
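The rendering step at the heart of a NeRF can be sketched independently of any trained model: given densities and colors sampled along a camera ray (the quantities the network would predict), volume rendering composites them into a single pixel color. A minimal numpy version with made-up inputs:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one camera ray.

    sigmas: per-sample volume density (what the trained MLP would output)
    colors: per-sample RGB values, shape (N, 3)
    deltas: distances between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # light surviving to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # expected color

# Made-up inputs: empty space for 40 samples, then an opaque red region.
sigmas = np.zeros(64)
sigmas[40:] = 50.0
colors = np.tile([1.0, 0.0, 0.0], (64, 1))
rgb = render_ray(sigmas, colors, np.full(64, 0.1))
print(rgb)  # close to pure red
```

Training a NeRF amounts to adjusting the network that produces `sigmas` and `colors` until rays rendered this way match the input photographs.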

A paper written in collaboration with Carnegie Mellon University develops a new kind of renderer. Instead of modeling physical light, the renderer can perform thermal analysis, electrostatics, and fluid mechanics. Named one of five best papers at SIGGRAPH, the method is easy to parallelize and doesn’t require cumbersome model cleanup, offering new opportunities for speeding up engineering design cycles.

Additional simulation papers introduce a more efficient technique for modeling hair strands and a pipeline that accelerates fluid simulation by 10x.

Papers raise the bar for realistic rendering, diffraction simulation

Another set of NVIDIA-authored papers will present new techniques to model visible light up to 25x faster and simulate diffraction effects — such as those used in radar simulation for training self-driving cars — up to 1,000x faster.

A paper by NVIDIA and University of Waterloo researchers tackles free-space diffraction, an optical phenomenon where light spreads out or bends around the edges of objects. The team’s method can integrate with path-tracing workflows to increase the efficiency of simulating diffraction in complex scenes, offering up to 1,000x acceleration. Beyond rendering visible light, the model could also be used to simulate the longer wavelengths of radar, sound or radio waves.
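For intuition about the effect being simulated (this is textbook wave optics, not the paper's algorithm), the classic Fraunhofer single-slit pattern shows how light bending around an edge produces a central peak flanked by dark fringes:

```python
import numpy as np

# Fraunhofer single-slit diffraction: intensity ~ sinc^2(a*sin(theta)/lambda).
wavelength = 500e-9   # green light, metres
slit_width = 5e-6     # metres

theta = np.linspace(-0.3, 0.3, 2001)          # viewing angle, radians
u = slit_width * np.sin(theta) / wavelength   # dimensionless argument
intensity = np.sinc(u) ** 2                   # np.sinc(x) = sin(pi*x)/(pi*x)

# First dark fringe falls where sin(theta) = wavelength / slit_width.
first_zero = np.arcsin(wavelength / slit_width)
print(f"central peak: {intensity[1000]:.3f}, first dark fringe at {first_zero:.4f} rad")
```

Evaluating patterns like this for every edge and every path in a scene is what makes diffraction expensive in rendering, which is where the reported 1,000x acceleration matters.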

Path tracing samples numerous paths — multi-bounce light rays traveling through a scene — to create a photorealistic picture. Two SIGGRAPH papers improve sampling quality for ReSTIR, a path-tracing algorithm first introduced by NVIDIA and Dartmouth College researchers at SIGGRAPH 2020 that has been key to bringing path tracing to games and other real-time rendering products.

One of these papers, a collaboration with the University of Utah, shares a new way to reuse calculated paths that increases effective sample count by up to 25x, significantly boosting image quality. The other improves sample quality by randomly mutating a subset of the light’s path. This helps denoising algorithms perform better, producing fewer visual artifacts in the final render.
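The link between sample count and image noise can be seen with a toy Monte Carlo estimator (a generic illustration, not the ReSTIR algorithm): error falls as 1/sqrt(N), so a 25x effective sample count cuts noise roughly 5x.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the radiance carried by one random light path; the true
# pixel value is the mean of this distribution (here: 0.5).
def pixel_estimate(n_paths):
    return rng.uniform(0.0, 1.0, n_paths).mean()

# Estimate the pixel many times and measure the spread (= render noise).
few  = np.std([pixel_estimate(16) for _ in range(2000)])
many = np.std([pixel_estimate(16 * 25) for _ in range(2000)])
print(f"noise with 16 paths: {few:.4f}, with 400 paths: {many:.4f}")
```

Techniques like path reuse get this variance reduction without actually tracing 25x more rays, which is why they matter for real-time rendering.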


NVIDIA and University of Waterloo researchers have developed techniques to efficiently simulate free-space diffraction in complex scenes. Source: NVIDIA Research

Teaching AI to think in 3D

NVIDIA researchers are also showcasing multipurpose AI tools for 3D representations and design at SIGGRAPH.

One paper introduces fVDB, a GPU-optimized framework for 3D deep learning that matches the scale of the real world. The fVDB framework provides AI infrastructure for the large spatial scale and high resolution of city-scale 3D models and NeRFs, and segmentation and reconstruction of large-scale point clouds.

A Best Technical Paper award winner written in collaboration with Dartmouth College researchers introduces a theory for representing how 3D objects interact with light. The theory unifies a diverse spectrum of appearances into a single model.

In addition, an NVIDIA Research collaboration with the University of Tokyo, the University of Toronto, and Adobe Research introduces an algorithm that generates smooth, space-filling curves on 3D meshes in real time. While previous methods took hours, this framework runs in seconds and offers users a high degree of control over the output to enable interactive design.

See NVIDIA Research at SIGGRAPH

NVIDIA events at SIGGRAPH will include a fireside chat between NVIDIA founder and CEO Jensen Huang and Lauren Goode, senior writer at Wired, on the impact of robotics and AI in industrial digitalization.

NVIDIA researchers will also present OpenUSD Day by NVIDIA, a full-day event showcasing how developers and industry leaders are adopting and evolving OpenUSD to build AI-enabled 3D pipelines.

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics.

About the author

Aaron Lefohn leads the Real-Time Rendering Research team at NVIDIA. He has led real-time rendering and graphics programming model research teams for over a decade and has productized many research ideas into games, film rendering, GPU hardware, and GPU APIs.

Lefohn’s teams’ inventions have played key roles in bringing ray tracing to real-time graphics, combining AI and computer graphics, and pioneering real-time AI computer graphics. Some of the NVIDIA products derived from the teams’ inventions include DLSS, RTX Direct Illumination (RTXDI), NVIDIA’s Real-Time Denoisers (NRD), the OptiX Deep Learning Denoiser, and more.

The teams’ current focus areas include real-time physically-based light transport, AI computer graphics, image metrics, and graphics systems.

Lefohn previously worked in rendering R&D at Pixar Animation Studios, creating interactive rendering tools for film artists. He was also part of a graphics startup called Neoptica creating rendering software and programming models for Sony PlayStation 3. In addition, Lefohn led real-time rendering research at Intel. He received his Ph.D. in computer science from UC Davis, his M.S. in computer science from the University of Utah, and an M.S. in theoretical chemistry.
