Silicon Valley ACM SIGGRAPH
AI for Learning Photorealistic 3D Digital Humans from In-the-Wild Data
Matthew Chan, NVIDIA Research
ABSTRACT: Traditionally, creating 3D digital humans requires lengthy effort by digital artists and often costly 3D scanning with specialized multi-view scanners. Learn how recent generative AI technologies allow photorealistic 3D representations to be learned from collections of in-the-wild 2D images, such as internet photos. We'll dive deep into our recent work, “EG3D” and “WYSIWYG”, which can synthesize a wide variety of photorealistic 3D humans in real time. We'll also show how 3D synthetic data from a pre-trained 3D generative model can be used to train another AI model for challenging image synthesis tasks. To this end, we present our recent work, “LP3D,” which can synthesize photorealistic neural radiance field (NeRF) models from a single RGB image in real time. We'll demonstrate how these AI-driven human synthesis methods can make advanced capabilities, such as 3D video conferencing, accessible to anyone and enable new applications in the future.
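As background for the NeRF models mentioned in the abstract, below is a minimal, illustrative volume-rendering step. It is a sketch in PyTorch (framework chosen here for illustration), not the speaker's EG3D/LP3D code: it only composites per-sample densities and colors along camera rays, which is the core operation any NeRF-style renderer performs.

```python
import torch

def composite_ray(densities, colors, deltas):
    """Standard NeRF-style volume rendering along a batch of rays.

    densities: (num_rays, num_samples)      non-negative volume density per sample
    colors:    (num_rays, num_samples, 3)   RGB per sample
    deltas:    (num_rays, num_samples)      distance between consecutive samples
    returns:   (num_rays, 3)                rendered pixel colors
    """
    alphas = 1.0 - torch.exp(-densities * deltas)          # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                               # compositing weights
    return (weights.unsqueeze(-1) * colors).sum(dim=-2)

# Toy usage: 4 rays with 64 samples each.
rgb = composite_ray(torch.rand(4, 64), torch.rand(4, 64, 3), torch.full((4, 64), 0.02))
```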
BIO: Matthew Chan joined NVIDIA as a research engineer in 2022. They work primarily at the intersection of graphics and generative models, specifically how they relate to 3D scene synthesis, reconstruction, and understanding. They graduated from the University of Maryland, College Park in 2021 with a bachelor's degree in mathematics and computer science.
research.nvidia.com/labs/amri/author/matthew-chan/
research.nvidia.com/labs/nxp/wysiwyg/
A joint event by Silicon Valley ACM SIGGRAPH (SVSIGGRAPH), San Francisco Bay Area ACM (SFBayACM), and Los Angeles ACM SIGGRAPH (LASIGGRAPH).
www.sfbayacm.org/
lasiggraph.org/
www.meetup.com/sv-siggraph/events/299800871/
0:00 ACM Chapters Intros
5:42 Speaker Intro
6:36 Presentation
7:27 Long History of Telepresence Efforts...
10:31 Generative Models
13:37 3D-aware GAN: Unsupervised Learning Of Photorealistic 3D Faces
34:03 Conditional 3D Face Synthesis
46:03 AI-Mediated 3D Telepresence
51:11 Questions and Answers (Q&A)
Views: 416

Videos

Learning-Based Expression and Head Pose Transfer from Images and Videos
Views: 210 • 2 years ago
Face and head movements are a powerful mode of nonverbal communication among humans. Endowing anthropomorphic characters with the ability to produce such actions autonomously is crucial for creating digital twins of ourselves. Xiao Zeng of the UCLA Computer Graphics & Vision Lab and Surya Dwarakanath of Cruise Automation present a learning-based approach to transfer facial expressions and h...
Versatile Fluid Simulation: The Interplay Between Adaptivity, Efficiency and Constitutive Modeling
Views: 197 • 2 years ago
Dr. Mridul Aanjaneya, Rutgers University, discusses the interplay between adaptivity, efficiency, and constitutive modeling for versatile fluid simulation in computer graphics. An Assistant Professor in the Department of Computer Science at Rutgers University, Dr. Mridul Aanjaneya obtained his PhD in Computer Science from Stanford University under the supervision of Ronald Fedkiw while working ...
Workflow of a Concept Artist: Lim Chuan Shin
Views: 112 • 2 years ago
Malaysia-based freelance concept artist Lim Chuan Shin shares his processes, techniques, and tools for creating modern, edgy animation characters and environments. With 15 years of experience in game development, Shin has worked on AAA games such as Street Fighter V, Agents of Mayhem, and most recently, Splinterlands. Recipient of the Best Concept Artist Award in the Mobile category by Concept Art Asso...
Learning to Predict High Frequency Signals via Low Frequency Embeddings
Views: 287 • 2 years ago
Jane Wu, Ph.D. student at Stanford University. Existing machine learning models still struggle to predict high-frequency details present in data due to regularization, a technique necessary to avoid overfitting. Hence, research is being conducted whereby high-frequency information is procedurally embedded into low-frequency data so that, when the latter is smoothed by the network, the former still retains its ...
Implicit Neural Representation Networks for Fitting Signals, Derivatives, and Integrals
Views: 1.6K • 2 years ago
Julien Martel, Postdoctoral Research Fellow in the Computational Imaging Lab at Stanford University
David B. Lindell, Postdoctoral Scholar at Stanford University and incoming Assistant Professor in the Dept. of Computer Science at the University of Toronto
For more details: www.meetup.com/SV-SIGGRAPH/events/282027117/
Abstract: Implicitly defined, continuous, differentiable signal representations parameterized by...
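To give a concrete flavor of the implicit neural representations this talk covers, here is a minimal sine-activated MLP in PyTorch in the spirit of the speakers' SIREN work. The layer sizes, the omega_0 value, and the omission of SIREN's special weight initialization are simplifications for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One layer of a sinusoidal network: y = sin(omega_0 * (W x + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Implicit representation of a 2D signal: map (x, y) coordinates to RGB values.
model = nn.Sequential(SineLayer(2, 128), SineLayer(128, 128), nn.Linear(128, 3))

coords = torch.rand(4096, 2) * 2.0 - 1.0   # sample coordinates in [-1, 1]^2
coords.requires_grad_(True)                # the representation is differentiable in its input
rgb = model(coords)                        # predicted colors at those coordinates
```

Because the network is smooth and differentiable everywhere, derivatives of the signal with respect to the input coordinates can be obtained directly with autograd, which is what makes fitting derivatives and integrals of signals possible.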
The Art of Fantasy Illustration
Views: 310 • 2 years ago
Irina French, Digital Artist and Illustrator. An illustrator with more than 10 years of professional experience, Irina has created a varied range of art, including book covers for the upcoming urban fantasy novels of Abigail Smith's Ace Mystic, Tom Meros' Little Ginny Polkadot this August, and the Mageland fantasy books. Irina's artwork has also graced the covers of D.M. Cain's Chronicle of Chaos, D...
Global Position Prediction for Interactive Motion Capture
Views: 243 • 3 years ago
Paul Schreiner, Researcher, Rokoko and University of Copenhagen. He will demonstrate global position estimation from local pose information, including:
- A method for reconstructing the global position in motion capture using neural networks where position sensing is poor or unavailable, such as in IMU-based motion capture, and
- The performance of the proposed method and its benefits over using heur...
Computational Imaging with Single-Photon Detectors, David Lindell
Views: 742 • 3 years ago
ThinVR: A VR display approach providing wide FOV in a compact form factor, Intel Labs
Views: 596 • 3 years ago
AI-Driven Photorealistic Human Digitization, Koki Nagano
Views: 378 • 3 years ago

COMMENTS

  • @sanjayshuklz
@sanjayshuklz 9 months ago

    Love the continuous improvements. Some of these are pushing this app to another level where no other app can match. I am not aware of any such other app. Simply love this app.

  • @atharvahude
@atharvahude 1 year ago

GitHub link?

    • @SVSIGGRAPH
@SVSIGGRAPH 1 year ago

      The description has his website ye-yuan.com, which includes his github github.com/Khrylx, and his thesis arxiv.org/abs/2204.13678, which also includes a couple of githubs. Suggest also checking out github.com/NVlabs

  • @marianbuckridge
@marianbuckridge 1 year ago

    Promo-SM 😔

  • @soonbinlee4049
@soonbinlee4049 2 years ago

What a beautiful presentation.

  • @SVSIGGRAPH
@SVSIGGRAPH 2 years ago

    GTC 2022 has a session on: Fourier Neural Operators and Transformers for Extreme Weather and Climate Prediction [S41936]

  • @SVSIGGRAPH
@SVSIGGRAPH 2 years ago

    For papers, videos, code… www.computationalimaging.org/publications/

  • @Sara-gm6on
@Sara-gm6on 3 years ago

    👀 P r o m o S M!!!

  • @WhiteDragon103
@WhiteDragon103 3 years ago

When using that mocap suit, does the suit sense what direction gravity is pulling relative to the "root"? If so, is it capable of reliably determining the pitch and roll of the body relative to the ground plane? If so, you may be discarding too much information in the training data.

    • @SVSIGGRAPH
@SVSIGGRAPH 3 years ago

A link to the paper is in the description. It says, "Capture systems, such as IMU suits, can provide excellent pose and orientation data of a capture subject, but otherwise need post processing to estimate global position." … "IMUs typically include accelerometers, magnetometers, and gyroscopes, which allow for an excellent measurement of rotation that can be used to reconstruct the pose of limbs as well as the orientation of the capture subject." … "We align the vertical axis of our reference frame to match the global vertical, opposite the direction of gravity." Does that answer your question?

  • @superguy7044
@superguy7044 3 years ago

    Is the team working on this open to discussion??

    • @SVSIGGRAPH
@SVSIGGRAPH 3 years ago

      I don't think they monitor this video, but you might try contacting them directly.

  • @olsonspeed
@olsonspeed 3 years ago

Good to see that Loren is still experimenting and innovating; a great man whose success is well deserved. "Long may you fly!"

  • @SVSIGGRAPH
@SVSIGGRAPH 3 years ago

Announcements and introductions precede the presentation, which starts at 3:28.