Surprise Video With Our New Paper On Material Editing! 🔮

  • Published Jun 16, 2020
  • 📝 Our "Photorealistic Material Editing Through Direct Image Manipulation" paper and its source code are available here:
    users.cg.tuwien.ac.at/zsolnai...
    The previous paper with the microplanet scene is available here:
    users.cg.tuwien.ac.at/zsolnai...
    ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
    - / twominutepapers
    - / @twominutepapers
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh
    More info if you would like to appear here: / twominutepapers
    Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
    Károly Zsolnai-Fehér's links:
    Instagram: / twominutepapers
    Twitter: / twominutepapers
    Web: cg.tuwien.ac.at/~zsolnai/
    #NeuralRendering
  • Science & Technology

COMMENTS • 198

  • @MichaOrynicz
    @MichaOrynicz 4 years ago +586

    IMHO, a more in-depth video like this from time to time would be a nice addition to the channel.

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +97

      Noted, thank you very much for the feedback! :)

    • @wstoettinger
      @wstoettinger 4 years ago +35

      I completely agree! Great work!

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +48

      @@wstoettinger You are very kind, Wolfy, thank you!

    • @Olisha.S
      @Olisha.S 4 years ago +6

      Congratulations, and great paper!

    • @DThorn619
      @DThorn619 4 years ago +17

      I agree! Seeing a more in-depth view of some papers really would make for a nice addition to the channel. I realize that one reason this paper could be talked about at such length was because it was yours, but what if every 2-4 weeks or so you had a short poll that listed the papers you've recently covered and asked the viewers which one we would like a "longer format" version of?

  • @Djorgal
    @Djorgal 4 years ago +265

    The interesting thing I noticed is that your usual over-enthusiasm is much more contained when talking about your own work. No mention that it's brilliant, fantastic, or beautiful (which it is!)

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +117

      You are indeed right. This talk will also be sent to a conference and the tone has to be a little different.

    • @TalkinKush
      @TalkinKush 4 years ago +15

      The work speaks for itself, his enthusiasm is definitely tempered. Lol

    • @Ariel16283
      @Ariel16283 3 years ago +3

      Yeah, and what a time to be alive!

    • @doubleclick4132
      @doubleclick4132 3 years ago +3

      humility is a virtue

    • @smorty3573
      @smorty3573 3 years ago +1

      What a time to be alive!
      And just after... months we have made some fantastic optimisations!

  • @trabaregocer
    @trabaregocer 4 years ago +184

    Almost 9 minute papers is kind of nice though.

  • @TurkishLoserInc
    @TurkishLoserInc 4 years ago +107

    I'm fairly certain most of your subscribers are here for the quality content regardless of the length. Excellent work as usual!

  • @scienceofart9121
    @scienceofart9121 4 years ago +57

    This channel is a gem and a life-changer. Congrats on your PhD, but please don't leave us; continue to nourish our brains!

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +28

      Thank you, and worry not, the series is here to stay! In fact, I am spending more time than ever making the series.

  • @chuckmcwippy6850
    @chuckmcwippy6850 4 years ago +120

    This was so crazy I almost forgot to hold onto my papers

  • @duncanw9901
    @duncanw9901 4 years ago +98

    Made it on the cover of the journal--quite a thesis indeed.

  • @smorty3573
    @smorty3573 3 years ago

    I really like the 9-minute papers. You explain things really well, so it doesn't get boring.

  • @Kram1032
    @Kram1032 4 years ago +17

    I really like the extra depth, honestly.
    Great work!

  • @photelegy
    @photelegy 4 years ago +49

    2:45
    Ahhhh, of course. (Don't understand a thing, but find it highly interesting)

    • @DgutLaud
      @DgutLaud 4 years ago +1

      Russian detected

    • @photelegy
      @photelegy 4 years ago +1

      @@DgutLaud Why? 😂

    • @HoloDaWisewolf
      @HoloDaWisewolf 4 years ago +24

      The first line basically says "my problem is to find the shader that produces an image as close as possible to my target image t̃". Well, actually it's the renderer that produces the image, and it does so by using a shader.
      Somewhat more precisely, you can think of || something || as the magnitude or length of that something. Assume that this "something" is the difference between 3 and 4, then you get || 3 - 4 || = || -1 || = 1. Note that || 4 - 3 || = || 1 || = 1. You can now say that the distance between 3 and 4 (or 4 and 3) is 1. By distance I mean the length of what we get when we subtract these two numbers. It is called a norm, and in this case it simply acts as taking the absolute value.
      There exist infinitely many different norms, or ways to measure distances if you will. Without going too far into the details, the index 2 simply specifies that Dr. Károly Zsolnai-Fehér and his collaborators use the well-known Euclidean norm, the one that says that the distance between the 2D points (x1, y1) and (x2, y2) is sqrt((x2-x1)^2 + (y2-y1)^2).
      In the video, x is a shader (or rather the parameters describing it), and phi(x) is the image that the renderer outputs using x. A 100x100px image is nothing but a list of 10000 numbers (between 0 and 255, or sometimes more conveniently between 0 and 1), in much the same manner as a 2D point is a list of two numbers. We prefer the term "N-dimensional vector", where N can be 1, 2, 10000, 1920*1080=2073600, etc.
      Our norm is defined for such vectors, i.e. you can compute the distance between them, and thus between two images. Throwback to my first paragraph: argmin_x || phi(x) - t̃ || is simply a formal and unambiguous way to say "I want to find a shader x that produces an image phi(x) whose distance to the target image t̃ is as small as possible". If we find a shader that produces exactly the target image, then the norm will be 0 (you cannot find a better shader).
      Since there exist tooons of different shaders (described by tooons of parameters, and guess what, x is also a vector), this is not a trivial problem. It may not be possible to find a perfect shader, but hopefully we can find one that produces good enough results.
      Now, the second line merely specifies some constraints on these parameters, like the index of refraction must be between 0 and 4, or the translucency between 0 and 1. No need to search for parameters beyond some limits, because they don't make sense. Note that sometimes we apply constraints in order to reduce the search space and hopefully the computational time needed to find a good set of parameters. And sometimes it's simply more convenient (and equivalent) to work with numbers within a certain range. I could go on.
      Hope it helps!
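      To make the argmin concrete, here is a self-contained Python toy (my own sketch, not the paper's code): phi below is a made-up two-parameter "renderer", and we minimize the Euclidean distance to a target image exactly as described above.

      import numpy as np
      from scipy.optimize import minimize

      # Toy version of argmin_x || phi(x) - t ||_2. The real phi is a full
      # photorealistic renderer; this stand-in just shades a gradient.
      H, W = 32, 32
      u = np.tile(np.linspace(0.0, 1.0, W), (H, 1))

      def phi(x):                        # "render" an image from 2 parameters
          brightness, gloss = x
          return brightness * u ** max(gloss, 1e-6)

      target = phi([0.8, 3.0])           # pretend this is the target image t

      def loss(x):                       # L2 distance between the two images
          return np.linalg.norm(phi(x) - target)

      res = minimize(loss, x0=[0.5, 1.0], method='Nelder-Mead')
      print(res.x)                       # recovers approximately [0.8, 3.0]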

    • @vladchira521
      @vladchira521 4 years ago +3

      @@HoloDaWisewolf So from what I understand, it calculates the distances between the corresponding colors of the pixels of the image? In that case they must use 3D distance, since a color is represented by 3 parameters (RGB). Am I wrong?

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +7

      @@HoloDaWisewolf 👌

  • @Denjo92
    @Denjo92 4 years ago +31

    I have finished the video.

  • @AHotLlama
    @AHotLlama 4 years ago +10

    This video was really good. Most of your videos make you sound very well-versed in the subject matter, but this is a whole other level. Such an interesting paper too; I'm excited to hear more about your future research, whatever that might be.

  • @elammertsma
    @elammertsma 3 years ago

    Such great work, Károly! It's wonderful to see some of your own work after getting to hear your enthusiasm about others through hundreds of videos. Love the depth and explanation, too! I agree with the other comments about in-depth videos like this being a welcome addition and a great way to see under the hood of how some of these things work. Thank you!

  • @mohannd1234
    @mohannd1234 4 years ago

    The only channel whose videos I don't understand but still get interested in.

  • @IOStudios
    @IOStudios 4 years ago +5

    I really like these longer videos when you go into more detail! Thank you so much for sharing! :)

  • @liggerstuxin1
    @liggerstuxin1 4 years ago +38

    Easily the best AI channel to which I am subscribed.

    • @atrox7685
      @atrox7685 4 years ago +1

      Check out Lex Fridman if you don't already know him :D Best AI podcast.

    • @liggerstuxin1
      @liggerstuxin1 3 years ago

      Nick M already subbed, thank you

  • @ianrajkumar
    @ianrajkumar 3 years ago

    I for one would love to see more of your own personal work and experiments

  • @OrkGold1
    @OrkGold1 4 years ago +5

    I'd love to see longer videos. The more the merrier, good stuff.

  • @StuartDesign
    @StuartDesign 4 years ago +84

    Could you please maybe slow down before you make half of the CG industry redundant in less than 5 years? Thanks.

    • @qx-jd9mh
      @qx-jd9mh 4 years ago +6

      Many of those jobs are tedious and mundane.

    • @StuartDesign
      @StuartDesign 4 years ago +1

      @@qx-jd9mh But what's waiting for them is not necessarily less tedious - and perhaps no job at all.

    • @13lacle
      @13lacle 4 years ago +1

      While this is an interesting first step toward automating shader creation, it doesn't really help the CG industry yet, let alone make it redundant. It is useful for novices, but for anything above that it is a novelty for now.
      As far as I can tell it sets: the base color (albedo), the direct reflection (specular) level/color, the reflection blurriness (roughness), how see-through it is (transmission level), and maybe some transmission roughness.
      These cover the basics, but there are a dozen or so other situational properties, and most importantly there is no variation within each property. The specular coloring is also weird for a photorealistic shader creator, as that doesn't exist for most non-metal materials, which is how these materials are behaving.
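      To picture the scale of that parameter set, here is a (purely hypothetical, my own naming) sketch of the properties listed above as Python code:

      from dataclasses import dataclass

      @dataclass
      class ShaderParams:                 # illustrative names only
          albedo: tuple                   # base color (r, g, b)
          specular: float                 # direct reflection level
          specular_tint: tuple            # direct reflection color
          roughness: float                # reflection blurriness
          transmission: float             # how see-through the material is
          transmission_roughness: float   # blurriness of what shows through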

    • @qx-jd9mh
      @qx-jd9mh 4 years ago +1

      @@StuartDesign Perhaps don't invest in skill sets that will be automated. Augmenting CG with programming will let you go a lot further than doing it all by hand.

    • @terner1234
      @terner1234 4 years ago +1

      humans need not apply

  • @VaradMahashabde
    @VaradMahashabde 4 years ago +1

    Thanks for the video! I can truly say that this is the first time I have understood what is going on behind the scenes. Really wouldn't mind (rather, I'd appreciate) another video like this once in a while!

  • @Dysiode
    @Dysiode 4 years ago

    Magic. And, as we always say, two more papers down the road and this will be even better! What a time to be alive!

  • @SteelDrake2
    @SteelDrake2 4 years ago

    This was a neat change of pace. Occasionally I wish you'd elaborate more on something, but I understand your wish to keep the videos brief so that you don't drown people in unimportant details. Nice work.

  • @javisan7925
    @javisan7925 4 years ago +4

    Really great in-depth video, would be great to see more videos like this one every now and then.

  • @14zrobot
    @14zrobot 4 years ago +6

    I enjoy this more detailed style. Would be cool to see more

  • @chubbymoth5810
    @chubbymoth5810 3 years ago

    Well worth the extra time. This method may be able to greatly increase the amount of detailed content in games as well, I'd think. Very cool stuff.

  • @rbaude27
    @rbaude27 4 years ago +16

    This is such a cool format on a really interesting paper! Well done, hopefully we get longer videos more often.

  • @Golinth
    @Golinth 4 years ago +1

    I really enjoyed this style of video, and hope that more are published every now and then.

  • @russelldicken9930
    @russelldicken9930 4 years ago +1

    Fascinating. I look forward to going through the code!

  • @loscienzo
    @loscienzo 4 years ago +1

    I've finished the video, but I'm watching it again to better understand it. It's great 🥦

  • @sebbecht
    @sebbecht 4 years ago +2

    Very interesting! Well done, and congrats on the thesis defense and publications.

  • @imjody
    @imjody 4 years ago +23

    1:01 - You know you've been seeing too much COVID-19 stuff when you immediately think of that when you see "Conv1D". 😅

  • @fatribz
    @fatribz 4 years ago

    What a time to be alive!

  • @yappygm7433
    @yappygm7433 4 years ago +1

    While the details are well above my head, I am excited for the technology. Great video!

  • @kevorka3281
    @kevorka3281 4 years ago +1

    Great job, buddy!

  • @ABDLLHSDDQI
    @ABDLLHSDDQI 4 years ago

    What a time to be alive!
    These longer, more detailed videos are fascinating for people who are also engaged/interested in advanced research and methods and want to learn how the problem was approached and worked on. I can understand if you hesitate to mix this with your main channel, but I would recommend finding some way to keep making these, maybe uploaded on a second "More Than 2 Minute Papers" channel?

  • @minecrafthERCULES
    @minecrafthERCULES 4 years ago

    I am just stunned! This seems like such a time-saver for the rendering process. I would love to try this out sometime in the future.

  • @alexred7515
    @alexred7515 4 years ago

    Wow! That's wery PAPER! I'm PAPER! Who wants paper? PAPER 4 EVERYONE! God bless paper!

  • @abowden556
    @abowden556 4 years ago

    This is incredible work, an amazing extension of the previous paper. I can't wait to see how this technique gets even more versatile and useful in the future, you absolute bloody madlad.

  • @sorry987654321
    @sorry987654321 4 years ago

    Wow! This is absolutely mind-blowing.

  • @BobbyRobby1000
    @BobbyRobby1000 4 years ago +1

    I really like this deeper dive into AI graphics. I'm sure if you upload more stuff like this, people will be interested in the concepts being discussed, and then you could break down those concepts too. You alone could be responsible for hundreds of new people entering AI research.

  • @davidm.johnston8994
    @davidm.johnston8994 3 years ago

    Very interesting, Doctor Zsolnai.

  • @phxf
    @phxf 4 years ago

    That's really neat! I could imagine this being useful as a Blender feature: being able to click the material preview, edit it using the inbuilt 2D texture painting tool, and, as you edit, have a material preview in the background trying to follow the edits, continually optimising towards them like a preview renderer. The encoder network could also just be nice as a way to more rapidly generate high-quality material-preview user interface elements. If it can be extended to handle arbitrary inputs, it would of course be incredible to be able to take a photo of real materials and have the app make a reasonable attempt at generating a BSDF equivalent to the surface. It may be useful to have a neural material segmentation network to make a workflow like that simple: take a photo, open it, click on a material, and use the segmentation map to mask off everything else in the photo that the computer perceives as using the same material, then pass that through as input to this process.
    I wonder if a decentralised collaborative approach could be useful for training. If it were a Blender plugin, perhaps it could have optional analytics: any time someone edits a material in a way that the system doesn't get very close to replicating, it might ask the user for permission to upload their material edit to expand the training dataset.

    • @phxf
      @phxf 4 years ago

      I also wonder how far this could be taken. For instance, could the optimiser handle generating procedural textures? It would be amazing, for example, to be able to mask off an area of the material sample image and just paste on some bricks or pebbles, and have the optimiser come up with a formula to build those shapes. Though for that kind of design it might be better to have a system which offers different material preview styles, like a simple flat plane facing the camera.

    • @phxf
      @phxf 4 years ago

      For real-world texture imports it could also be interesting to harvest photos from the web that are tagged #texture or #floor or similar, and whose JPEG EXIF data indicates the flash fired and records the phone model, and build a system that understands things like where the camera flash is located relative to the camera on various phone models and DSLRs, to be able to accurately interpret things like specular details given the known relative position of that point light source.

  • @gs007abc
    @gs007abc 4 years ago +3

    Very interesting, I always missed some depth. Well done! 🤙

  • @grisus7254
    @grisus7254 4 years ago

    I’ve now finished watching the video, it was great

  • @pigflyer5514
    @pigflyer5514 4 years ago

    I love this surprise

  • @pikachufan25
    @pikachufan25 3 years ago

    I finished watching the video,
    and I have to say: what a time to be alive!

  • @Morimea
    @Morimea 3 years ago

    Thanks, very useful info in the video and the project.

  • @anadodik
    @anadodik 4 years ago

    Great work!

  • @Banana_Chris
    @Banana_Chris 4 years ago +1

    I like your videos very much. You often show how the programs run, how a model is trained, etc.
    What I'd be interested in is whether you could ever make a video about how you download, customize, execute, and use code from GitHub.
    Unfortunately I don't have much programming experience, but it would be very interesting to see how the program works.
    I noticed this back then with differential equations:
    from the mathematical formulas alone I did not understand them, but when I saw program code where the equations were solved numerically and iteratively, I understood for the first time how these equations work and how to produce a useful result from them.
    That's why I think it's important to present a practical example using the program code, because this makes it much easier to understand how it works.
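    For example, a numerical solver can be this small (a minimal forward-Euler sketch; the equation dy/dt = -y is chosen purely for illustration):

    import math

    # Forward Euler for dy/dt = -y with y(0) = 1; the exact solution is e^(-t).
    # Each step nudges y along the slope that the equation prescribes.
    y, t, dt = 1.0, 0.0, 0.01
    while t < 1.0:
        y += dt * (-y)             # the right-hand side of the equation
        t += dt
    print(y, math.exp(-1.0))       # numeric ~0.366 vs. exact ~0.368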

  • @subashchandra9557
    @subashchandra9557 4 years ago

    Are we getting two more videos covering the other two papers of your PhD? Because mentioning that there are 3 and only showing us one is such a tease.

  • @raspberrypi4970
    @raspberrypi4970 4 years ago +1

    That's King Kai's planet, where Goku trained to fight Vegeta and Nappa. 👍

  • @notapplicable7292
    @notapplicable7292 4 years ago

    A few two-minute papers are nice, but frankly I appreciate the longer-form stuff more.

  • @ValentineC137
    @ValentineC137 4 years ago +1

    almost 9 minute papers with Dr. Károly Zsolnai-Fehér! :o

  • @micahk5557
    @micahk5557 4 years ago

    I felt a little gross when I saw the word convolution, such a difficult topic in mathematics (at least for me during my class in college). Really cool though, I'm definitely going to try to read through your papers! Love that you supply this stuff (and the code!) for free!

  • @IsaiahSugar
    @IsaiahSugar 4 years ago

    duude!
    congrats!
    this looks very cool.

  • @addmoreice
    @addmoreice 4 years ago

    You can think of the two networks as a compressor and a decompressor specialized for a single image, with the rendering settings acting as the compressed file.
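    A runnable toy of that analogy in Python (everything here is invented for illustration; the real "decompressor" is a renderer, not a multiplication):

    import numpy as np

    # The "compressed file" is a single number: how bright the material is.
    rng = np.random.default_rng(0)
    base_pattern = rng.random((8, 8))     # stand-in for the test scene
    target = 0.7 * base_pattern           # the image we want to "compress"

    def compress(image):                  # stand-in for the inversion network
        return np.sum(image * base_pattern) / np.sum(base_pattern ** 2)

    def decompress(param):                # stand-in for the predictor/renderer
        return param * base_pattern

    p = compress(target)                  # recovers 0.7
    print(p, np.linalg.norm(decompress(p) - target))   # reconstruction error ~0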

  • @antonyalen2745
    @antonyalen2745 4 years ago

    Even though I don't understand 55 percent of what you are talking about, I still enjoy your content ;)

  • @vladchira521
    @vladchira521 4 years ago

    Still in high school right now, so this math is way over my head. I can only imagine the complexity involved. Personally, I was working on a custom ray tracer, but things got complicated really fast, so I abandoned it. This is fascinating nonetheless. Would love to see more.

  • @digitalcasanova
    @digitalcasanova 4 years ago

    The source code link is missing from the description, so here it is for those who couldn't find it: users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/

  • @jimmwagner
    @jimmwagner 4 years ago

    Awesome!

  • @dkkoala1
    @dkkoala1 4 years ago

    Great work!
    Did you add any diffuse training examples, or were all of them specular? Because it seems that the specular highlight not going away is caused by a bit of overfitting in that particular area.
    You also say that you used several different network architectures for the shader predictor; did you consider using the same architecture but training each instance on a different subset of your data? I.e., one would be trained on diffuse materials only and therefore be great at predicting the properties of those materials, while others would be trained on specular and super-specular materials and predict their properties better.

  • @saintmayhem7093
    @saintmayhem7093 4 years ago

    I wish there were other channels like this for other fields of research (medicine, biology, etc).

  • @randomgooy7456
    @randomgooy7456 4 years ago

    Hope one day people will be able to use this on their own computers.

  • @benwant8544
    @benwant8544 4 years ago +1

    @Two Minute Papers I know this is relatively unrelated to this video, but would it be possible to produce a renderer that, instead of simulating photons as particles, uses Maxwell's equations and simulates light as waves? This would allow for interesting phenomena such as thin-film interference. I am not a computer scientist, so I don't know how difficult that would be to design or run, but it seems like an interesting idea!

    • @zacozacoify
      @zacozacoify 4 years ago

      Cool idea! This might be possible with ray-based rendering if you keep track of the phase of the light (since phenomena like thin-film interference can be modelled with rays).
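      A rough sketch of what "keeping track of the phase" buys you (my toy code, not from the video: two-beam approximation at normal incidence, internal multiple bounces ignored):

      import numpy as np

      # Thin-film interference: the ray reflected off the top of the film and
      # the ray reflected off the bottom differ by a round-trip phase, so they
      # reinforce or cancel depending on the wavelength.
      def thin_film_reflectance(wavelength_nm, n1=1.0, n2=1.45, n3=1.0, d_nm=300.0):
          r12 = (n1 - n2) / (n1 + n2)     # Fresnel amplitude coefficients
          r23 = (n2 - n3) / (n2 + n3)
          delta = 4 * np.pi * n2 * d_nm / wavelength_nm   # round-trip phase
          r = r12 + (1 - r12 ** 2) * r23 * np.exp(1j * delta)
          return abs(r) ** 2

      for wl in (450, 550, 650):          # blue, green, red
          print(wl, thin_film_reflectance(wl))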

    • @Royvan7
      @Royvan7 4 years ago

      I've had the same thought myself. I think it should work, but I haven't quite worked out how one would extract a view; I think you'd effectively have to simulate the inside of the "camera" as well. I don't typically do light or wave simulations, so I'm not too experienced with this, but I'm fairly certain this method would be more resource-intensive: you'd have to simulate the wave field in the entire 3D region. So, instead of calculating your view, you'd be simulating physics across the entirety of the simulated world, even the parts you won't be able to see.
      Assuming the "camera" is not a physical object in the simulated world, I think you'd effectively have to calculate every possible view in the scene at once, and you would be one "camera" simulation away from a view anywhere in the scene.

  • @decodedbunny101
    @decodedbunny101 4 years ago

    Looks realistic

  • @z-beeblebrox
    @z-beeblebrox 4 years ago +3

    So you're saying that I - a person who is terrible at creating materials in 3D programs - could just paint a target image to look how I want, and this method will produce the material for me?

    • @Royvan7
      @Royvan7 4 years ago +1

      Sounds like it. It seems like the material you get out has to be physically possible, though. Like with the example where the specular highlight was removed entirely: the algorithm added it back in as diffusely as possible. It seems like, for that scene, not having a specular highlight there while still having a glossy material might just not be possible, or at least the network can't think of a way to do it.

    • @z-beeblebrox
      @z-beeblebrox 4 years ago

      @@Royvan7 True, although given that this is meant for photorealistic materials, I would expect a certain amount of limitation where my targets fall short of realism. I'd rather the result miss the target and be more accurate than hit an imperfect target.

    • @Royvan7
      @Royvan7 4 years ago

      @@z-beeblebrox Fair point.
      Though it's possible that the carved sphere is biasing it: the network may just never have seen a sphere that didn't have the highlight, and thinks there has to be one there. My point is that there is a chance that, since it learned on the sphere, there might be some realistic materials it thinks are unrealistic just because they look weird on the carved sphere.
      Either way this seems like an amazing tool, especially if you can get it to be user-friendly and play nice with other software.

  • @NotASpyReally
    @NotASpyReally 4 years ago

    I have no idea what you say in any of your videos, but I still watch them because they're pretty :)

  • @DeepGamingAI
    @DeepGamingAI 4 years ago +1

    You're like that packet of noodles that promises 2 minutes but actually takes like 5 or 6 😂
    Kidding, love your videos 👍

  • @jeromyperez5532
    @jeromyperez5532 4 years ago

    I'm predicting that Substance R&D is going to be calling you guys any minute now haha.

  • @0dWHOHWb0
    @0dWHOHWb0 4 years ago

    Is there already a Blender add-on or something like that?

  • @sergeantPepper_
    @sergeantPepper_ 4 years ago

    How is your architecture different from an autoencoder with an additional loss on top of the L2 loss?
    (Isn't the inversion network functioning the same as an encoder?)

  • @Maxjoker98
    @Maxjoker98 4 years ago

    Am I correct that the last part you mentioned, having a more dynamic scene than the material test scene, could mean that soon we could have a material picker for pictures? That would be awesome!

  • @DerSolinski
    @DerSolinski 4 years ago

    I wonder if you could use this to classify physical material properties like strength, hardness, etc.
    This could be interesting for games where players have the ability to freely create new materials.

  • @amiri7392
    @amiri7392 4 years ago

    Is there a channel like this for real-life science? Don't get me wrong, I really like software, AI, and rendering, but it would be cool to see 2-minute summaries of papers from other fields too.

  • @fairypie1385
    @fairypie1385 4 years ago

    great vid!

  • @wizardmongol4868
    @wizardmongol4868 4 years ago

    amazing

  • @Meatloaf_TV
    @Meatloaf_TV 4 years ago

    Awesome video, congrats on finishing your PhD papers!

  • @thomzz3449
    @thomzz3449 4 years ago

    Looks very nice!
    I was wondering what is happening in the scene with the planet at 0:37.
    There seems to be a kind of faint grid that is shrinking (best visible on the white parts, like the tree).
    Is YouTube compression doing this? If so, this is the first time I've seen it.

    • @TwoMinutePapers
      @TwoMinutePapers  4 years ago +1

      Very observant. :) I am slowly zooming into the image, and aliasing artifacts appear. They are not part of the image.

  • @RKroese
    @RKroese 4 years ago

    Inverse network!!!! :0 Do you all realize what this means???? I want thisss!!

  • @skilbhumen2875
    @skilbhumen2875 4 years ago

    Is there any way to use this in Blender?

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago +1

    I don't really understand what this algorithm accomplishes. Does it create textures from an image?

  • @clray123
    @clray123 4 years ago

    I just watch for pretty pictures and animations.

  • @Khazam1992
    @Khazam1992 3 years ago

    Good Luck

  • @SanneBerkhuizen
    @SanneBerkhuizen 4 years ago

    I'm still very curious about the volcano in the bottle. Is that just a still image? Or is it part of an animation?

  • @juliocamacho8354
    @juliocamacho8354 4 years ago

    I wish this was a 30 min video

  • @danielbates7077
    @danielbates7077 4 years ago

    Great content!

  • @woodbyte
    @woodbyte 4 years ago

    If generalized to more diverse scenes, could it ever possibly extract materials from real photographs via the inversion network?

    • @Royvan7
      @Royvan7 4 years ago

      Or you could manufacture and photograph a carved sphere :p

  • @medhavimonish41
    @medhavimonish41 4 years ago

    ❤️

  • @sathwikmatcha5511
    @sathwikmatcha5511 4 years ago +1

    What is Nelder-Mead?
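    (Context from outside the video, in case it helps:) Nelder-Mead is a classic derivative-free optimizer: it walks a simplex of candidate points downhill using only function values, which is handy when the objective ("render an image, then compare it to the target") has no convenient gradient. With SciPy it looks like this:

    from scipy.optimize import minimize

    # Minimize the Rosenbrock function without using any gradients.
    rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    res = minimize(rosenbrock, x0=[-1.0, 1.0], method='Nelder-Mead')
    print(res.x)                   # close to the true minimum at [1, 1]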

  • @tranceemerson8325
    @tranceemerson8325 4 years ago

    What if you could take a planet photo as the input image and have the AI make a shader that simulates how Rayleigh scattering would look on it?

  • @aleckelsey2663
    @aleckelsey2663 4 years ago

    I like the more informative process walkthrough in the expanded format. But as I usually don't have a lot of time, your normal format is more readily watchable.

  • @Kilgorio
    @Kilgorio 4 years ago

    wow

  • @gigigigiotto1673
    @gigigigiotto1673 4 years ago

    Someone should make a plug-in of this for Blender.

  • @astrofpv3631
    @astrofpv3631 4 years ago

    Heard GPT-3 came out from OpenAI.

  • @stonefreak5763
    @stonefreak5763 4 years ago

    Are all the things in your videos from TU Wien?

  • @richardbelton9476
    @richardbelton9476 4 years ago

    nice vid

  • @32Rats
    @32Rats 4 years ago +1

    1:09 Conv1D is just off-brand COVID

    • @jenss4083
      @jenss4083 4 years ago

      oh yes, we're triggered

  • @James-md8ph
    @James-md8ph 4 years ago +1

    Creating novel materials by neural network derivation

  • @ThankYouESM
    @ThankYouESM 3 years ago

    I would love to see the entire source code written in plain Python since I'm not a mathematician.

  • @Wulfcry
    @Wulfcry 4 years ago

    At first I thought the 20 seconds the process takes was an eternity and wondered why it was so long. Then I understood: with the source, it can be roughly optimized for one's own needs.