Weird POV Effect - trying out OKLab, hidden line removal, blurring and IRL

  • Published 24 Jan 2025

COMMENTS • 55

  • @ChrisBLong  10 months ago +10

    Thanks to @floppy3962 for suggesting the blur effect :-)

    • @not_pyxlwhip  10 months ago +2

      You rock! continue being awesome!

    • @nathanhelmburger  10 months ago

      I like the blur effect more than I expected I would!

  • @tannerbobanner9453  10 months ago +63

    I think it could be useful to have a button that just draws a single frame of randomized noise (with the current color settings) to the screen. This way you wouldn't need to wait for the object being rendered to make a number of rotations before the effect starts to become visible.

    • @CarelessMiss  10 months ago +1

      Great idea

    • @giladperiglass5734  10 months ago +2

      No but it looks so cool that way I love it

    • @tannerbobanner9453  10 months ago +2

      @@giladperiglass5734 true, although just having the option would be nice, especially if he ever needs to view changes rapidly for something - then he won't have to wait each and every time

    • @giladperiglass5734  10 months ago

      @@tannerbobanner9453 sure, but I don't see a reason except saving maybe five seconds at most, probably less

  • @FerrisStopmotion  10 months ago +13

    this is my favorite series on youtube

  • @ludvigjansson5790  3 months ago +2

    I love how I got more of a sense of how the cube was rotating while it was green at the same OKLab luminance, very cool, and it kinda feels like one of the best ways to showcase how we're more sensitive to the color green :)

  • @JAzzWoods-ik4vv  10 months ago +9

    I think it might be actually easier to see with blur because the colour change is distributed over a larger area. I'd love to see the other wire frames with the blur filter

  • @RoseArcade  10 months ago +6

    Thank you for testing OKLab, it's neat to see that the hypothesis that the intensity was to do with perceived lightness seems to be correct! The backface culling of your wireframes, at least for me, made it a lot easier to track the Suzanne as well (though I am familiar with the model, so that likely helps). Adding the blur effect is also very interesting as it's just a little easier to track imo, even if less "detailed". Thank you for this whole series, it's been lovely to watch! 👏

  • @jacobhuckins494  10 months ago +6

    The more complicated geometry might show up better if you have a higher density of pixels. I think the lines are getting washed out when too many are too close together.

  • @SidShakal  10 months ago +6

    9:50 ish, seems like the next thing to do here would be reimplementing the drawing operator in the different color models, since the bitwise XOR operation works on the RGB representation of the color? 🤔
    like, even though the color selection sliders can operate in HSL, OKLab, etc., if the operation used to draw to the screen is still just a bitwise XOR, weird stuff will happen, because bitwise XOR still operates in the RGB model.
    so like, if the way it currently works is
    r_pixel = r_pixel XOR r_pen
    g_pixel = g_pixel XOR g_pen
    b_pixel = b_pixel XOR b_pen
    then it could be rewritten as
    r_pixel = r_pen - r_pixel
    g_pixel = g_pen - g_pixel
    b_pixel = b_pen - b_pixel
    (TODO: i think i'm missing some unsigned integer overflow and underflow considerations here.)
    then for an HSL XOR, it'd be something like
    h_pixel = h_pen - h_pixel
    s_pixel = s_pen - s_pixel
    l_pixel = l_pen - l_pixel
    (TODO: add overflow and underflow consideration here. HSL typically doesn't use the 0 to 255 range RGB does, so the handling here will have different bounds than the handling for RGB.)
    and for OKLab XOR, something like
    l_pixel = l_pen - l_pixel
    a_pixel = a_pen - a_pixel
    b_pixel = b_pen - b_pixel
    (TODO: overflow and underflow handling. see above.)
    i feel like this idea probably opens a whole can of worms and probably isn't worth the effort, but it's a thought 🤷
    cool stuff! 😀
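
    A runnable Python sketch of the reversible-difference idea above (the modular wraparound addresses the overflow/underflow TODOs; the channel ranges here are assumptions):

      import numpy as np

      def xor_like(pixel, pen, modulus):
          # Reversible per-channel op: applying it twice with the same
          # pen restores the original value, just as XOR does.
          return (pen - pixel) % modulus

      # RGB: every channel wraps at 256.
      rgb_pixel = np.array([200, 50, 10])
      rgb_pen = np.array([100, 100, 100])
      once = xor_like(rgb_pixel, rgb_pen, 256)
      assert np.array_equal(xor_like(once, rgb_pen, 256), rgb_pixel)

      # HSL: hue wraps at 360; saturation and lightness at 101 (0-100).
      hsl_mod = np.array([360, 101, 101])
      once = xor_like(np.array([350, 80, 40]), np.array([30, 50, 50]), hsl_mod)
      assert np.array_equal(xor_like(once, np.array([30, 50, 50]), hsl_mod),
                            np.array([350, 80, 40]))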

    • @lagomoof  10 months ago +2

      XOR can still be used. Any system can be remapped to 0-255 (where it isn't already), XOR performed, and the result mapped back out to the correct system. Indeed, it doesn't even need to be 0-255; any power-of-two upper bound lends itself well to the XOR operation.
      The downside to doing this is that "transform, XOR, untransform" is going to be slower by a considerable margin. Speed is one of the reasons XOR was used for this back in the day, and this would lose that.
      The old-school solution would be a pre-calculated lookup table. That would solve most of the speed issue, but an old 8-bit computer would cry at a whole 64 KB being used for a 256x256 lookup. But then, old 8-bit tricks of taking advantage of redundancy in the lookup table might get that down by half or better. This might be viable after all 🤔
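
      A sketch of that lookup-table route in Python, with a hypothetical identity tone curve standing in for the "transform" step (in practice it would map into whatever perceptual ramp you want):

        import numpy as np

        # Hypothetical transform into XOR-friendly 0-255 space (identity here).
        to_byte = np.arange(256, dtype=np.uint8)
        from_byte = np.arange(256, dtype=np.uint8)

        # Precompute "transform, XOR, untransform" for every (pixel, pen)
        # pair: 256 x 256 entries = 64 KB, as noted above. The table is
        # symmetric (lut[a, b] == lut[b, a]), which is the redundancy an
        # 8-bit machine could exploit to halve the storage.
        px, pen = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
        lut = from_byte[to_byte[px] ^ to_byte[pen]]

        # Drawing one pixel is then a single lookup.
        old_value, pen_value = 200, 100
        new_value = lut[old_value, pen_value]  # 200 ^ 100 with identity curves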

  • @iamasink  10 months ago +11

    The blur effect is very interesting! It seems a bit clearer to me.

  • @jhonbus  10 months ago +1

    Fascinating stuff, I'm enjoying this series and the collaborative process you're fostering with it!
    I'd be interested in seeing how this looks combined with another well-known visual processing trick: if certain areas of the image (perhaps in a checkerboard or similar pattern) acted as "permanent obfuscation", i.e. areas where the XOR function is disabled (before or after the background noise pattern is established), so that the movement of the wireframe is partly occluded by an apparent foreground.
    Given the astonishing ability of the human visual system to "fill in" the apparent shape of a moving object behind partial cover (think tiger moving behind bushes) it might be an interesting combo.
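
    A minimal numpy sketch of that occlusion mask; the frame layout, names, and tile size are all assumptions:

      import numpy as np

      H, W, TILE = 256, 256, 16
      ys, xs = np.mgrid[0:H, 0:W]
      occluded = ((ys // TILE + xs // TILE) % 2).astype(bool)  # checkerboard

      def draw_xor_masked(frame, wire_mask, pen=0xFF):
          # XOR the pen into the frame where the wireframe lands,
          # except inside the occluding checkerboard squares.
          frame[wire_mask & ~occluded] ^= pen
          return frame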

  • @edenasharpie  10 months ago +1

    nice progression of more and more accurate insights!

  • @GenericInternetter  10 months ago +3

    try textured greyscale surfaces without the wireframe, where at each pixel the brightness of the greyscale render determines the change applied to the underlying pixel.
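
    One way to read that suggestion, sketched in Python under the assumption that "change" means the probability of flipping the underlying noise pixel:

      import numpy as np

      rng = np.random.default_rng()

      def textured_change(frame, shaded, pen=0xFF):
          # shaded: greyscale render scaled to [0, 1]; brighter areas
          # flip the underlying noise pixels more often.
          frame[rng.random(frame.shape) < shaded] ^= pen
          return frame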

  • @PunishedIvan  10 months ago

    We are witnessing a man push the boundaries of perhaps the most obscure science. Fascinating!

  • @aprcktiplaal9293  10 months ago +2

    i think the best way to make more complex shapes more visible is, instead of rendering the wireframe, to render hard edges.
    those would include the outline/silhouette of the mesh as well as lines separating any areas where there is a change of depth.
    sorry, I'm not good at explaining, but basically imagine looking at a donut from the side, not quite at 90° but slightly from the top, so you can see part of the hole.
    in that example you would, at least in real life, make out the dip because there is a visible edge around the bottom part of the hole. this effect can be achieved by just drawing lines where there is a great change in distance.
    it should be fairly easy to implement by just storing each face's distance from the camera and drawing edges where a certain threshold of change occurs.
    also, another thing, about the OKLab implementation and the irregular color intersection chart:
    the reason some colors just suddenly get bright is that the XOR operation is still happening in RGB.
    it would be cool to see if there was a way to swap the XOR operation to use the OKLab L, a, b values instead
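
    A hedged sketch of the depth-edge idea from this comment, assuming the renderer can hand over a per-pixel depth buffer; the threshold is a tunable guess:

      import numpy as np

      def depth_edges(depth, threshold=0.05):
          # Mark pixels where depth jumps sharply relative to a neighbour:
          # silhouettes and creases like the inner rim of the donut example.
          dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
          dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
          return (dx > threshold) | (dy > threshold)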

  • @rogerbrown4409  10 months ago

    Love this series! There's a popular horror game series called Faith that has a pixel art style, and it has some Very Scary cutscenes with a rotoscoped pixel art style. I wonder how some of those scenes might look with this effect.

  • @hauslerful  10 months ago

    A thing I've noticed: When moving my mouse cursor around the video in a smooth circular motion, my eyes can track the cursor steadily. With increasing speed of the motion, the cube becomes less and less visible. This might be related to the flip-book motion you've mentioned.
    Maybe one could either put a white dot on the screen that moves smoothly at variable speed, or the image could move by itself, with a fixed crosshair in the middle of the screen used to anchor the view.

  • @Manigo1743  9 months ago +1

    I was wondering if the effect is still there if the object isn't moving. That is, if you keep the cube stationary but still update the pixels along the lines at the same rate. This should of course be done on a background that is already noise, or the cube will just flash on and off.

    • @ChrisBLong  9 months ago

      Yes, that would work like in the Bad Apple videos I made with this effect - look at my other videos to find them. A 'static' object appears as a shimmering outline against the background of noise, but still disappears when you pause the video.

  • @helleye311  10 months ago +3

    I quite like the blurred effect. It's not less visible, but the hard pixels without it really make it look artificial. Blurred, it looks a lot more like standard noise.

  • @ladyamphibulon7784  10 months ago +2

    Really curious to know how this would look paired with motion extraction: ua-cam.com/video/NSS6yAMZF78/v-deo.htmlsi=Hv2kC5-q54HkH_GC

    • @ChrisBLong  10 months ago +1

      Someone processed one of my earlier videos using that 'Posy' technique and posted a comment about it... I'll try to find it again...

  • @reddcube  10 months ago

    I love how the blurred frame just looks like noise when you pause it.
    I'm wondering what it would look like if the background moved instead of the object. You create a random pattern at the left side of the frame and continually shift the columns to the right. Or, instead of a 'random' pattern, you wrap the far-right column around to the left side.
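
    A small numpy sketch of that scrolling background; np.roll already implements the wrap-around variant in one call:

      import numpy as np

      rng = np.random.default_rng()

      def scroll_background(frame, wrap=True):
          # Shift every column one pixel to the right; the far-right
          # column wraps around to the left side.
          frame = np.roll(frame, 1, axis=1)
          if not wrap:
              # Feed fresh random noise in at the left edge instead.
              frame[:, 0] = rng.integers(0, 2, size=frame.shape[0]) * 0xFF
          return frame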

  • @gaborfekete3777  10 months ago

    You could try rendering only the outlines of meshes, something like what Blender does with its Freestyle drawing for edge outlines

  • @jotch_7627  10 months ago

    to be a bit more precise, the "lightness" metric in HSL isn't just less good, it is fundamentally the same as RGB. it treats each color axis identically, so you were still testing #FF0000, #00FF00, and #0000FF (or shades thereof)

  • @dr.keinmann2996  10 months ago

    I've been here since the start, and I've been wondering how long it'll take for people to mention that this has generally been seen before.
    the human eye is INCREDIBLY good at detecting changes in colors, even more so over time. at least that's roughly what I remember... I just remember seeing stuff like this before, it's not too uncommon. I've been really enjoying the experiments though, tons of fun to watch.

  • @-ion  10 months ago

    To easily compare how strong the effect is with different colors, it might be good to update the image monochromatically and color each frame with the selected color as it's displayed.
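
    A sketch of that separation, assuming a 1-bit state buffer that only gets tinted at display time, so the colour can change every frame without disturbing the accumulated pattern:

      import numpy as np

      state = np.zeros((256, 256), dtype=bool)  # accumulated XOR pattern

      def display(state, color=(0, 255, 0)):
          # Tint the monochrome pattern with this frame's colour.
          rgb = np.zeros(state.shape + (3,), dtype=np.uint8)
          rgb[state] = color
          return rgb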

  • @robinsparrow1618  10 months ago

    if you graph the equation X xor Y = Z, where X, Y, and Z are binary numbers, the result is a Sierpinski tetrahedron. i think you can kind of see that at 8:50
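
    One quick way to see the connection: x ^ y equals x + y exactly when adding x and y produces no carries (x & y == 0), which is the Pascal's-triangle-mod-2 condition that draws the Sierpinski triangle; the surface z = x ^ y stacks those slices into the tetrahedron. A tiny Python demo:

      # Print a 32 x 32 Sierpinski pattern from the XOR relation.
      for y in range(32):
          print("".join("#" if (x ^ y) == (x + y) else " " for x in range(32)))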

  • @-ion  10 months ago

    Is there any chance you missed a conversion from a linear color space to sRGB which causes the dark gray and the green to look darker than the blue and the red at the same L value? Btw, OKLCH is to OKLab what HSL is to RGB.
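
    For reference, the standard sRGB transfer functions such a conversion needs; skipping the encode step makes mid-tones look darker than intended:

      def srgb_to_linear(c):
          # Decode an sRGB-encoded channel value in [0, 1] to linear light.
          return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

      def linear_to_srgb(c):
          # Encode a linear-light channel value in [0, 1] as sRGB.
          return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055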

  • @jakerdy5369  10 months ago

    Are you sure the implementation of the OKLab-to-RGB conversion in your demo is correct? Clipping might occur at extreme a/b values: if you ask for colors that can't be represented in RGB, the nearest representable color will be used, and the actual luminance of the pixels will be different. It would be great to have some check that the Lab->RGB conversion produced valid sRGB results; otherwise it doesn't actually show that blues are less noticeable than reds, because the colors got clipped.
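
    A sketch of that check in Python, using the coefficients from Björn Ottosson's published OKLab reference implementation; any channel outside [0, 1] means the requested colour is outside the sRGB gamut and would be clipped:

      def oklab_to_linear_srgb(L, a, b):
          l_ = L + 0.3963377774 * a + 0.2158037573 * b
          m_ = L - 0.1055613458 * a - 0.0638541728 * b
          s_ = L - 0.0894841775 * a - 1.2914855480 * b
          l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
          return (+4.0767416621 * l - 3.3077115913 * m + 0.2307590544 * s,
                  -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s,
                  -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s)

      def in_srgb_gamut(L, a, b, eps=1e-6):
          # True if the OKLab colour maps to a representable sRGB colour.
          return all(-eps <= c <= 1 + eps for c in oklab_to_linear_srgb(L, a, b))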

  • @somnvm37  10 months ago

    what if you make the resolution a lot higher?

  • @MrGommpa  10 months ago

    What about blurring the wireframe?

  • @omayoperations8423  10 months ago +2

    I feel like an icosahedron could make a clearer-looking sphere.

  • @percent3767  10 months ago

    I wonder what would happen if you faded the whole image every frame? I’m curious to see how the ghosted trails would interact with the XORing!
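
    A minimal sketch of the fade, assuming a float working buffer so the trails decay smoothly; on floats, 255 - v plays the role of XOR with 0xFF:

      import numpy as np

      def step(frame, wire_mask, decay=0.9):
          # Fade the whole image toward black, then flip this
          # frame's wireframe pixels.
          frame *= decay
          frame[wire_mask] = 255.0 - frame[wire_mask]
          return frame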

  • @pastebee2812  10 months ago +1

    The OKLab luminance doesn't seem to quite correspond to perceived brightness for me - with constant "luminance", blue > red > green > grey. The theory of "brighter perceived colour = easier to see" does still seem to hold, though. And subjectively it makes sense: trying to make it out in the grey feels like the same kind of difficult as seeing in the dark.

    • @ChrisBLong  9 months ago

      Yes, it's the same for me - I'm prepared to accept that that's because I've messed up the conversions, but reading around this subject also makes clear that perceived brightness is strongly affected by other factors, like the background colour, other surrounding colours, the brightness of the room you're in, and more. It seems very hard, perhaps impossible, to generate perfectly equal perceived brightness via any kind of simple mathematical transformation.

  • @chri-k  10 months ago

    1. The blurred version is actually clearer.
    2. What happens if you do the operation on the pixels which are empty instead?

  • @Ani_150  10 months ago

    I think an issue with the more complex shapes is that IRL they don't have any "lines". (As in, actual objects aren't made of polygons.)
    Think of an actual sphere: by definition it doesn't have any edges, but this effect relies on them, so you end up with a weird many-faced die (which in this case is easy to see as an imperfect sphere, but with other objects it's not so simple, like with Suzanne). You could approximate the shape more closely to show the round edges, but it'd eventually be too jumbled to tell the individual lines apart.
    All this to say: I think that's a limitation of the effect - objects have to have enough hard edges to make sense of.

  • @Spikehead777  10 months ago

    I think it would aid visualization if only "important" lines were drawn. That is, for standard polyhedra like cubes, tetrahedra, icosahedra, etc., all of their edges are drawn. For Suzanne, only the outlines of its head, mouth, and eyes are drawn. For the teapot, only the outlines of its body, spout, and handle are drawn. Front-facing vs. back-facing lines may be handled separately.
    I can already kind of picture the algorithm in my mind:
    1. Render all faces in white into an off-screen buffer filled with black
    2. Run edge detection (edges in white, all other pixels black)
    3. XOR the off-screen buffer with the normal frame
    It may be possible to render the face normals and run edge detection on them to generate additional pixels to draw.
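
    A compact numpy sketch of steps 1-3 above, assuming the renderer can supply the filled silhouette as a boolean buffer; the edge detector is a simple neighbour comparison:

      import numpy as np

      def outline_xor(frame, silhouette, pen=0xFF):
          # Step 2: keep only pixels where a neighbour crosses the
          # silhouette boundary (step 1 filled it white off-screen).
          s = silhouette
          edge = (s ^ np.roll(s, 1, axis=0)) | (s ^ np.roll(s, 1, axis=1))
          # Step 3: XOR the outline into the visible frame.
          frame[edge] ^= pen
          return frame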

  • @FelixNielsen  10 months ago

    The hidden lines and blurring are both rather brilliant suggestions/observations, I think, as was the IRL experiment. Still, in order to fully appreciate the effect, I believe it is essential to fully explore the effect of contrast, not luminance. It goes without saying that when things become darker they become harder to see, just like in the real world. It is also important to realize that the XOR operation is not inherently logical in nature when it comes to colors. You might try greyscale and CMYK to see the difference. Ideally, whatever operation you employ should have the same effect regardless of color space.
    Lastly, but perhaps most important, is resolution: not only in terms of screen resolution, but also in terms of line or wire thickness in relation to the average distance between lines. In other words, more complex shapes and models require, or so I believe, not just higher resolution but higher "resolution", i.e. thinner lines/wires, which then require higher contrast, as the perceived luminance will be reduced, and so on.
    Oh, and one more thing: speed, this purely from my own experience. The more complex shapes, with more lines/wires, do to my eyes require a lower speed of rotation to compare properly with the simpler shapes.
    A fitting analogy, I think, is a little text in a large font versus lots of text in a small font. I think it makes sense in itself, but also if you consider the resolution. It would surprise no one to learn that a tiny font at a very low resolution is quite unreadable, and I don't think there's much difference to the subject you're exploring, except that the complexity of a letter, and what is required in terms of resolution to make it distinguishable from other letters and symbols, makes more intuitive sense.
    A seven-segment display has a very low resolution, so to speak, seven pixels in fact, but it is okay, because there are usually only 10 symbols to distinguish between, and therefore it is easily readable.
    A fourteen-segment display has double the resolution, but is intended to display the entire alphabet: 10 digits, 26 lower- and uppercase letters, not to mention a number of other symbols. That is, at the very least, 6 times the number of symbols at only twice the resolution. And not only that: even if a seven-segment display is hard to read, having only 10 symbols makes it reasonably easy to arrive at a good guess by process of elimination, and if we imagine there is context too, which I'd say is the case here, your error rate would be very low. Even guessing randomly, you have a 10% chance, which is actually rather a lot in comparison. One has to remember that it is not only the one guess but a multitude, and just like compound interest, even a small change in success rate can make a significant difference.
    I for one actually do think that fourteen-segment displays are much harder to read, and it is little wonder. Assuming some sort of linear correlation, were the seven-segment display to implement a similar "DPI", it would have not 7 but 2 segments, and I think we can all agree that this is quite impossible, as there are only 4 possible configurations. Well, this just goes to show that my earlier assumption is wrong, so let's square the number, as the two dimensions are often what people forget about in relation to resolution. That gives us 4 segments and 16 possible configurations, and indeed, that is more than enough. However, we do not want a binary clock, but 10 distinguishable symbols, or rather the digits 0 through 9. I think perhaps I once saw an example of someone making a decent attempt at it, but I'm not sure, and at any rate you would most certainly need to compromise in ways that would make them harder to both recognise and distinguish.
    Well, this became rather long all of a sudden, but I do think I bring some important observations to the table, so there you have it.
    Best regards.
    Epilogue
    The hidden lines/wires and the blurring were both brilliant because they hide unnecessary information. The first for obvious reasons, the second perhaps not so obviously, but in short: without the blur there are distinguishable pixels, and this is information we do not need, as our goal is not to distinguish a bunch of tiny squares; in fact they both confuse and distract. Were the resolution high enough that we could not distinguish the pixels, the blur would make little difference.
    This further reminds me of an interesting and relevant lesson from biology, relating to evolution.
    There is some sort of jellyfish which, if memory serves, has no fewer than seven eyes, at least one of these being on par with the human eye. However, having no brain as such, and only a very rudimentary central nervous system (we are probably talking about fewer than a thousand neurons), that eye, basically comparable to a high-megapixel camera, is no use at all, as there is no way to process all this information.
    What did evolution do to fix this? It made the image unsharp by moving the retina. Having no access to advanced optics, there is no sharp image as such to process, but there is still useful information, like light and dark, big and small, round and square, etc.
    Just an interesting little side venture to explain the most essential point of all: these things are indeed complex, but not necessarily complicated. Usually it is just a matter of perspective. I for one have no expertise to speak of, relevant or otherwise, and yet occasionally I make useful contributions anyway. I hope that is the case here too, as I do find the subject intriguing and hope to see more from you on it.

    • @henrysaid9470  10 months ago +1

      I was gonna read this but then I saw how long it is

    • @FelixNielsen  10 months ago

      @@henrysaid9470 You know, sometimes long does not mean long-winded. In fact there are things, rather a lot of them, that cannot be said in short. Regardless, for your own benefit, you might want to start experimenting with actually reading before concluding something is not worth reading because it is too long, though of course it was still worth replying to, even though the reply couldn't possibly be relevant to the content, as you have no idea what it is. That time might have been better spent reading a line or two; certainly it would've saved me the trouble of writing this long-winded reply, serving little purpose but to state that which should be obvious to anyone, but sadly isn't, not even close.
      Just saying..

  • @OfficialNerokin  9 months ago +1

    Since you are looking for a name for this effect, this might help you find something: I described this effect to my friends as 'motion without body'

  • @katiebarber407  10 months ago

    this would make a sweet xscreensaver screensaver

  • @motownmoneygang  10 months ago

    what would happen if the noise pixels are wayyyy smaller

  • @syriuszb8611  10 months ago +1

    I think the flip book can't work because the image shakes overall. Nothing against your book flipping, it just requires a much more stable image. Each pixel needs to stay in the same place between pages. Even watching computer animations, they get blurry if you shake your head fast enough. I think we can compensate a little for shaking. But if you want to give the flip book a second try, maybe add a solid frame around the picture, so we have reference points and can compensate better? I think in fullscreen, without a reference, I lose the ability to see it more easily, even with smaller, slower head shakes.

  • @wilhellmllw3608  10 months ago +1

    The flipbook was a great idea! Too bad it didn't work :(

  • @katiebarber407  10 months ago

    Suzanne

  • @Maric18  10 months ago

    so many mouth noises ._.