COMMENTS

  • @Zach010ROBLOX 7 months ago +4

    It's interesting how the type B noise preserved overall shape and prominent horizontal and vertical edges better and had flatter colors, whereas the control had more variance in pixel color but preserved more minute details.

  • @Asterism_Desmos 8 months ago +7

    Something I've noticed is that a lot of recent comments are being made; I think your channel may be getting noticed!

    • @8AAFFF 8 months ago +1

      yes the video with the website scanning thing blew up for some reason :D

  • @meuhfoot 1 year ago +3

    Very nice! What I think happens: the optimal autoencoder without activations performs a downsample and then an upsample (see the sketch after this thread). On completely random images (type A), where pixels are independent, minimizing the loss is impossible, so the network diverges and produces noise. On type B images, which have patches of uniform color, the downsample-upsample process is nearly lossless, so the network converges to a uniform downsample-upsample and, as a result, the blur is uniform. Finally, on natural images the downsample-upsample depends on the texture, which allows the network to blur high-frequency areas (a bit less). Great job and insights!

    • @8AAFFF 1 year ago +1

      thanks, this also might be related to why the real data autoencoder has circular type imperfections on the output, and the type B one has lines
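
To make the discussion in this thread concrete, here is a minimal sketch of a fully linear autoencoder of the kind described above (PyTorch; the image and latent sizes are made-up assumptions, and this is not the video's actual code). Without activations, the encoder-decoder pair composes into a single affine map, so it can at best learn a low-rank downsample-then-upsample reconstruction; inserting a nonlinearity such as nn.ReLU between the layers is what would lift that restriction.

```python
import torch
import torch.nn as nn

class LinearAutoencoder(nn.Module):
    """Autoencoder with no activation functions: encoder and decoder
    collapse into one affine map, i.e. a rank-limited projection."""

    def __init__(self, img_dim=32 * 32 * 3, latent_dim=64):  # sizes are assumptions
        super().__init__()
        self.encoder = nn.Linear(img_dim, latent_dim)  # "downsample"
        self.decoder = nn.Linear(latent_dim, img_dim)  # "upsample"

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = LinearAutoencoder()
x = torch.rand(8, 32 * 32 * 3)               # a batch of flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss
loss.backward()                              # an optimizer step would follow
```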

  • @lionlight9514 8 months ago

    These AI videos are very entertaining; I'd love to see more! I in particular have tried using an autoencoder in the past, with very surprising results, and seeing someone else try it with different approaches is very entertaining!

  • @nielskersic328 8 months ago

    This channel is seriously underrated

  • @luciengrondin5802 8 months ago +2

    Without activation functions, a NN is a multivariate polynomial (here of degree 1, i.e. a single affine map).

    • @8AAFFF 8 months ago

      yes pretty much, just with vectors
      thanks for pointing that out :)

    • @luciengrondin5802 8 months ago

      @8AAFFF Above all this means you can probably optimize the training process or something. There is a paper somewhere about that, I think: arXiv:1806.06850.
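
As a quick illustration of the point made in this thread, the sketch below (NumPy; the layer sizes are arbitrary) verifies that stacking linear layers without an activation in between collapses into a single affine map, which is why such a network is only a degree-1 polynomial in its input.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)                      # an input vector

# two stacked "layers" with no activation in between
W1, b1 = rng.standard_normal((5, 10)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((3, 5)), rng.standard_normal(3)
two_layers = W2 @ (W1 @ x + b1) + b2

# the same map collapsed into a single affine layer
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

assert np.allclose(two_layers, one_layer)        # identical up to float error
```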

  • @musaplusplus 1 year ago

    Very impressive, I never thought of this. I'm going to use more abstract shapes to see if I can get a better result.

  • @feathersm7966 8 months ago

    Incredible video, friend

  • @andueskitzoidneversolo2823 8 months ago

    There is an old art project called the Library of Babel. It is a theoretical library that contains every possible combination of every possible word, forming every possible sentence, so that it eventually contains every possible writable thing.
    There is also an old project called the Canvas of Babel. It is a canvas that contains every possible image.
    You can search both of these websites and try to learn anything you ever wanted to know, but no matter how long you search, you will never find anything but noise. Maybe. There is a very, very small chance you'll learn everything you wanted, but it's highly unlikely.
    Maybe you could use a library like that for training.
    It would be cool if AI or something did prove that the library does in fact contain all knowledge.

    • @8AAFFF 8 months ago

      yes i know about both of them :) fascinating thought experiment
      btw there is also that picbreeder project with the same concept, just a neural network generating all possible images with its weights randomized

  • @theLollox1000 8 months ago

    It would be interesting to compare this with basic singular value decomposition (SVD) compression, with the number of singular values kept matching the latent dimension, since a NN without activation functions is just a linear transformation with some bias added (see the sketch below).
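
A rough sketch of the comparison this comment suggests, assuming grayscale images and a latent dimension of 64 (both numbers are made up here): keep the top-k singular values and measure the reconstruction error.

```python
import numpy as np

img = np.random.rand(128, 128)     # stand-in for a grayscale image
k = 64                             # assumed latent dimension of the autoencoder

# rank-k truncated SVD reconstruction
U, s, Vt = np.linalg.svd(img, full_matrices=False)
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative reconstruction error at rank {k}: {rel_err:.4f}")
```

Note that this applies the SVD per image, which is the optimal rank-k approximation for that image, whereas the linear autoencoder learns one shared low-rank map for the whole dataset; so the per-image SVD error is roughly a lower bound on what the autoencoder could achieve at the same rank.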

  • @AllExistence 1 month ago

    Since you used axis-aligned boxes, it never learned curved surfaces or even diagonals.

  • @TiagoTiagoT 8 months ago

    How about training a network that takes an index, and also X and Y coordinates, and outputs RGB values, scored on how well the images generated with it work for training an autoencoder?
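
The idea in this comment resembles a CPPN-style implicit image network. Below is a minimal sketch of just the generator half (PyTorch; the layer sizes, Tanh/Sigmoid choices, and the render helper are all illustrative assumptions, and the meta-training signal of scoring by downstream autoencoder performance is omitted).

```python
import torch
import torch.nn as nn

generator = nn.Sequential(           # tiny coordinate network: (index, x, y) -> RGB
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
)

def render(index, size=32):
    """Evaluate the network at every pixel coordinate of one 'training image'."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, size), torch.linspace(-1, 1, size),
        indexing="ij")
    idx = torch.full_like(xs, float(index))
    coords = torch.stack([idx, xs, ys], dim=-1).reshape(-1, 3)
    return generator(coords).reshape(size, size, 3)

img = render(0)   # a 32x32 RGB image for "training image 0"
```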

  • @vsevolod_pl 4 months ago

    Those are very interesting results, but
    my brother in Christ, you created THE LINEAR MODEL...
    Basically that's 2 matrix multiplications... Pls use activation functions...

  • @sysfab 8 months ago

    Wow! Cool stuff, I just found you on the YT main page! (Can you share your Discord? I wanna ask some questions.)

    • @8AAFFF 8 months ago

      oh i dont have a discord yet :) might create one in the future