Dithering with Floyd-Steinberg in C++

  • Published 28 Jun 2024
  • In this video, I demonstrate the Floyd-Steinberg dithering algorithm, its simplicity, and its power to solve a problem to the best of its ability.
    Demo: community.onelonecoder.com/me...
    Source: github.com/OneLoneCoder/Javid...
    Patreon: / javidx9
    UA-cam: / javidx9
    / javidx9extra
    Discord: / discord
    Twitter: / javidx9
    Twitch: / javidx9
    GitHub: www.github.com/onelonecoder
    Homepage: www.onelonecoder.com
  • Science & Technology
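
    For reference, a minimal sketch of the error-diffusion loop the video demonstrates, reduced to a single 8-bit grayscale channel (an illustrative reconstruction, not the olcPixelGameEngine source linked above):

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      // Quantise an 8-bit grayscale image to 1 bit per pixel, pushing the
      // quantisation error onto unprocessed neighbours with the classic
      // Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16.
      void FloydSteinberg1Bit(std::vector<uint8_t>& img, int w, int h)
      {
          auto spread = [&](int x, int y, int error, int weight)
          {
              if (x < 0 || x >= w || y < 0 || y >= h) return;   // ignore off-image neighbours
              int p = img[y * w + x] + (error * weight) / 16;
              img[y * w + x] = (uint8_t)std::clamp(p, 0, 255);  // clamp rather than wrap
          };

          for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
              {
                  int oldPixel = img[y * w + x];
                  int newPixel = oldPixel < 128 ? 0 : 255;      // nearest of the two output levels
                  img[y * w + x] = (uint8_t)newPixel;
                  int error = oldPixel - newPixel;

                  spread(x + 1, y,     error, 7);
                  spread(x - 1, y + 1, error, 3);
                  spread(x,     y + 1, error, 5);
                  spread(x + 1, y + 1, error, 1);
              }
      }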

COMMENTS • 208

  • @The3oLand
    @The3oLand 2 years ago +70

    Dithering is still used in printing. In fact modern day printers use a clever combination of computed dithering like FS and seamless tiling like Penrose tiles.

    • @SerBallister
      @SerBallister 2 years ago +1

      Cheap LCD panels too

    • @Smittel
      @Smittel 2 years ago +1

      ^ As SerBallister says, some cheap panels use 6-bit color, exploiting the fact that we're not as receptive to high-frequency spatial color changes.
      Also, CG graphics sometimes use dithering to reduce banding, which even with 8 bits can be noticeable in dark gradients; it looks a bit like the quantized image, just not as exaggerated. Dithering is basically just free color precision, which I believe is similar to why printers do it as well, but I'm not too knowledgeable about them.

    • @landmanland
      @landmanland 2 years ago

      @@Smittel printers have the problem of having only a very limited set of colors, usually 6 in your typical inkjet printer. Fortunately printers today have an extremely high density of "pixels", so dithering doesn't affect the end result. You only see it under a microscope.
      Straight-up dithering is actually not used, as the end result can make the image muddy because of how ink droplets flow and mix with each other. I use it as a first-stage filter, but only for photos, since it's relatively CPU expensive.

  • @guitart
    @guitart 2 years ago +13

    Welcome back, Maestro!

    • @javidx9
      @javidx9 2 years ago +1

      lol Thanks Perini!

  • @renhoeknl
    @renhoeknl 2 years ago +12

    What I love about this channel is that you can just watch the video and learn something without actually having to code along. You certainly can if you want to, but just watching and learning some new algorithms is really nice too.

  • @Maxjoker98
    @Maxjoker98 2 years ago +28

    Dithering still has uses in "modern" applications. You can still get some more dynamic range for a specific display using it. Floyd-Steinberg is rarely used for this nowadays, but the principle remains. Think of things like displaying higher-bitdepth images or videos on "normal" 24bpp monitors, or display-stream compression, etc.
    EDIT: Also dithering is not a scanline-algorithm. Floyd-Steinberg is, but not all dithering algorithms are. But most of them are simple matrix operations!

    • @wes8190
      @wes8190 2 years ago

      Agreed; I used dithering just a few years ago on a graphics project to get impossibly smooth gradients with no banding. It was like magic.

    • @infinitesimotel
      @infinitesimotel 2 years ago

      If you want to see some impressive dithering, have you seen the presentation by the LucasArts guy who only used 16 colours but could get crazy colour ranges and even cycle the palette to make them seem animated?

    • @SquallSf
      @SquallSf 1 year ago

      @@infinitesimotel The name of the guy is Mark Ferrari, and he is not at LucasArts anymore; he left long, long ago.

  • @Pariatech
    @Pariatech 2 years ago +43

    As always a great tutorial. I like that you start with the demo, that's a nice hook to keep watching. I'm curious if I could use this algorithm to emulate old 16-bit art using high-res pictures. I'll have to try it. One more project on my bucket list hahaha

    • @javidx9
      @javidx9 2 years ago +9

      lol thanks! Yeah it's a great way to make pictures look retro. Most art software will have an equivalent "filter". In fact I tested a version of my implementation against Affinity Photo, and got exactly the same results, so we know how they're doing it :D

    • @SergiuszRoszczyk
      @SergiuszRoszczyk 2 years ago

      My thought on that would be to take VGA output, dither it and connect to EGA 64-color monitor. That could be interesting. Something that back in the days video cards weren't capable of (at least not 60 times a second).

  • @tjw_
    @tjw_ 2 years ago +3

    new javidx9 video?! christmas came slightly late this year I see!

    • @kiefac
      @kiefac 2 years ago +2

      Or extremely early, if you don't handle the overflow correctly

  • @brentgreeff1115
    @brentgreeff1115 2 years ago +5

    I love this channel - this is the year I take a few months off to actually try to implement all this code.

  • @suzuran451
    @suzuran451 2 years ago +11

    Very nice! I've wanted to learn about color quantization and dithering for a long time and this video explained them in a very understandable way! Thank you!

  • @thomas3754
    @thomas3754 2 years ago +1

    A new video. You call this 'Pog' these days I think. Very high quality as always, excited for the next one already.

  • @crazykidsstuff
    @crazykidsstuff 2 years ago

    Best part about this weekend?
    Working through this video!
    Very entertaining and very informative. Thanks so much!

  • @setharnold9764
    @setharnold9764 2 years ago +1

    The look of my childhood! I'm always amazed at how your framework makes the topic of the video flow so smoothly. Nice stuff.

  • @aropis
    @aropis 2 years ago

    So great to have you back! Really awesome for people new to image processing. If you had linked dithering to printing you would have completed the circle. I can imagine their AHA moment, especially if you mentioned the CMYK color space. Awesome stuff! Keep it up! Really, this video opens up many interesting topics regarding signal processing. Reducing a dithered image shows the limits of nearest neighbor/bilinear filtering. This could be the starting point of an image sampling video. All the very best for 2022!

  • @secondengineer9814
    @secondengineer9814 2 years ago

    Really cool video! Always fun to see a simple algorithm that does so much!

  • @Komplexitet
    @Komplexitet 2 years ago +3

    Yay new video!

  • @pskry
    @pskry 2 years ago

    So good to see you're back! Hope you all are well!

  • @geehaf
    @geehaf 2 years ago

    You're back!! Great explanation and demonstration - as ever. :)

  • @wesleythomas6858
    @wesleythomas6858 2 years ago +2

    Glad to see you back!!!

    • @javidx9
      @javidx9 2 years ago

      lol cheers Wesley, not as frequent this year, but I'm hoping for once a month.

  • @teucay7374
    @teucay7374 2 years ago

    The best video I've seen since the year started. I am working on a program to produce pixel art from high def images, and this is super useful for that! Thank you javid!

  • @dimarichmain
    @dimarichmain 2 years ago

    So good to finally see you back!

  • @ric8248
    @ric8248 2 years ago +3

    It's fascinating that you're doing some DSP! I hope you enter the world of audio effects one day.

  • @PumpiPie
    @PumpiPie 26 days ago +1

    Very good video, good explanation ;D Keep up the good work :D

  • @StarLink149
    @StarLink149 2 years ago

    I love your videos. :)
    I always learn something interesting and can't wait for you to release more.
    On another note, I've always found old pixel art using Bayer dithering to look very nice.

  • @SergiuszRoszczyk
    @SergiuszRoszczyk 2 years ago +4

    I used this technique to display pictures on a white/black/yellowish-brown E-ink display. I was limiting the palette to RGB values mimicking the three colors of the display and then dithering the picture. Works great for photos.
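
    A rough sketch of the palette-limiting step described above; the palette entries are made-up stand-ins for a black/white/yellowish-brown display, and the matching is a plain nearest-colour search:

      #include <array>
      #include <climits>

      struct RGB { int r, g, b; };

      // Hypothetical three-colour e-ink palette: black, white, yellowish-brown.
      const std::array<RGB, 3> palette = { RGB{0, 0, 0}, RGB{255, 255, 255}, RGB{180, 140, 60} };

      // Return the palette entry closest to c. Squared Euclidean distance is
      // enough for comparison, so no sqrt is needed. The usual error diffusion
      // then spreads (c - Nearest(c)) per channel to the neighbours.
      RGB Nearest(const RGB& c)
      {
          RGB best = palette[0];
          int bestD = INT_MAX;
          for (const RGB& p : palette)
          {
              int dr = c.r - p.r, dg = c.g - p.g, db = c.b - p.b;
              int d = dr * dr + dg * dg + db * db;
              if (d < bestD) { bestD = d; best = p; }
          }
          return best;
      }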

  • @anonanon3066
    @anonanon3066 2 years ago

    Wow. This is amazing.
    How did I not know about this?
    Never ever would I have imagined it to be such a simple algorithm.

  • @WillBourne999
    @WillBourne999 2 years ago

    Fantastic video thanks javid.

  • @ianmoore322
    @ianmoore322 2 years ago

    I've always wondered how to implement this algorithm. Thank you OLC. You always have the answers I've always needed. Console game engines and pixel game engines for example

  • @radojedom8300
    @radojedom8300 2 years ago

    Excellent. Interesting and educative.

  • @_tzman
    @_tzman 2 years ago

    Thank you so much for introducing this brilliant algorithm to us. My mind is blown

  • @diskoBonez
    @diskoBonez 2 years ago

    really fascinating video!

  • @Cypekeh
    @Cypekeh 2 years ago +1

    love this dithering

  • @davidwilliss5555
    @davidwilliss5555 2 years ago +1

    Years ago I developed dithering algorithms for printing, and Floyd-Steinberg is one of the algorithms we used. There was a similar algorithm called Stucki which worked the same way but distributed the error to more pixels using different weights and produced a more pleasing image.
    There's another problem that arises in printing in that often your pixels are not square and a printed pixel will overlap the neighboring white pixels so you have to weight them differently. We had one printer where this was so bad that if you printed a 50% gray by painting pixels like a checker board, the black pixels completely overlapped the white pixels and you got black. For that we ended up using a completely different algorithm.
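
    For reference, the textbook Stucki kernel spreads the error over twelve neighbours with weights summing to 42 (whether the commenter's printer code used exactly these weights is not stated):

      // Stucki error-diffusion weights; each neighbour receives weight/42 of the error.
      // '*' marks the pixel currently being quantised (first row).
      //          *   8   4
      //  2   4   8   4   2
      //  1   2   4   2   1
      const int stucki[3][5] = {
          { 0, 0, 0, 8, 4 },
          { 2, 4, 8, 4, 2 },
          { 1, 2, 4, 2, 1 }
      };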

  • @arrangemonk
    @arrangemonk 2 years ago +2

    Dithering is still everywhere, in every conversion for audio/image resampling (rgb32 float -> rgb8). I also used Floyd-Steinberg for service fees applied to a whole document and distributed across its positions.

  • @carlphilip4393
    @carlphilip4393 2 years ago +1

    Hey javid, you're a great guy! I'm currently at university and I look up to you! It's amazing that you share all your knowledge with all of us for free, and you teach like an excellent teacher!

    • @javidx9
      @javidx9 2 years ago

      Hey that's very kind of you Carl, good luck with your studies, and you can aim much much higher than me!

  • @therealchonk
    @therealchonk 2 years ago +1

    Great Video. I'll try it out myself.

  • @treyquattro
    @treyquattro 2 years ago

    this was another superb tutorial. Old (Robert) Floyd was certainly one of the giants of 20th century computer science (e.g. Floyd's algorithm for finding cycles in lists, Floyd-Warshall shortest path algorithm, correctness, work with Knuth, etc.).
    BTW, with modern C++ and class template argument deduction, if you're creating a std::array you can leave out the item count - and even the type, if the elements are all of the same type - when you're initializing from an initializer list.
    e.g. std::array a{1, 2, 3, 4, 5}; // creates a 5-element array of type int
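
    (A quick compile-check of that, under C++17 or later - note the deduction is all-or-nothing, and mixing element types makes it fail:)

      #include <array>

      std::array<int, 5> a{1, 2, 3, 4, 5};  // explicit element type and count
      std::array b{1, 2, 3, 4, 5};          // deduced as std::array<int, 5>
      // std::array c{1, 2.0};              // error: elements must deduce to a single type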

  • @TheButcher58
    @TheButcher58 2 years ago

    Very interesting video. I once wrote an algorithm to apply a weight to a pixel (related to A*) where it would look at its neighbours. One of the problems I got was the fact that when you iterate from top left to bottom right, it affects the results and I needed to do it again in the reverse way, but this was quite inefficient. This algorithm made me think of that.

  • @brainxyz
    @brainxyz 2 years ago

    Very Nice! Thanks

  • @will1am
    @will1am 2 years ago

    The return of the King! :)

  • @Unit_00
    @Unit_00 2 years ago +1

    Interesting topic as always

  • @ianbarton1990
    @ianbarton1990 2 years ago

    Another really good video about a subject that I've always found really intriguing, I remember the first time I came across dithering playing around with The GIMP to convert full colour to true black and white images and I thought it was some magic voodoo algorithm that must be beyond mere mortal levels of comprehension. That's why it's so satisfying to find out that the algorithm is very accessible and intuitive to understand but there is still a solid level of mathematical thinking and nuance behind it.
    I think there's probably a natural follow-on video surrounding the generation of 'optimised' palettes (where the computer decides what colours will best approximate the source image) too, if you're so inclined to do so. :)

  • @mehulajax21
    @mehulajax21 2 years ago

    David, your content is awesome... The information that you present is pure gold... keep up the good stuff.
    I have a similar background to you (minus the game development and 10 years of automotive development).
    However, I find a lot of the content carrying over to auto dev for experimentation... I would like to know if you have some book recommendations.

  • @kweenahlem6161
    @kweenahlem6161 2 years ago +2

    best teacher ever

  • @user-we8nn3vp5d
    @user-we8nn3vp5d 2 years ago

    Thank you. ☺

  • @ElGnomoCuliao
    @ElGnomoCuliao 2 years ago +3

    Finally!

  • @anoomage
    @anoomage 2 years ago

    I just did my own Floyd-Steinberg dithering for displaying photographs on an ePaper screen :D (a connected photograph frame, where you can choose an image on your smartphone to be displayed on the ePaper, sent to the Arduino over Bluetooth).
    Can't wait to see how you did it!

  • @apenaswellignton
    @apenaswellignton 2 years ago

    welcome back! :D

  • @BudgiePanic
    @BudgiePanic 2 years ago

    Another cool video 👍

  • @Rouverius
    @Rouverius 2 years ago

    25:30: CMYK! And sure enough it looks like a photo from a color newspaper!
    What's amazing to think about is that back in the 1930's, the first fax machines did a similar operation with vacuum tubes and used capacitors to hold the error values.

  • @Jade-Cat
    @Jade-Cat 2 years ago +6

    A big factor in the brightening of shadows might not be the dithering algorithm itself, but using it on sRGB (I assume) data with a linear distance function. Two pixels, one set to 0 and the other to 64, will emit more light than two pixels both set to 32.

    • @Tordek
      @Tordek 2 years ago +2

      Indeed! A gamma adjustment is necessary to linearize the image in between processing steps.

    • @bubuche1987
      @bubuche1987 2 years ago

      Exactly. And to test that you can take pictures of your screen at some distance, while your screen is either displaying a 127,127,127 color or a pattern of alternating black and white pixels.

  • @catalyst5434
    @catalyst5434 2 years ago

    Amazing video, I really like your explanation; it is so clear and very easy to understand! Thanks for the nice content. I was looking for cache optimization videos but couldn't find a good one; maybe you can make a video about it, that would be awesome!

  • @Ethanthegrand
    @Ethanthegrand 2 years ago +1

    I love your videos man. Even though your channel is based around C, and I know nothing about it, I in fact watch a lot of these tutorials and program in Lua with my own pixel engine. That's the great thing about your videos: you visualise everything. Keep up the great work!

  • @vytah
    @vytah 2 years ago +1

    The company I work in uses Floyd-Steinberg dithering to allow our users to print arbitrary images on B&W thermal printers. It works reasonably well.

  • @hermannpaschulke1583
    @hermannpaschulke1583 2 years ago +2

    I'd say dithering still has uses today. Even with 24bpp you can still see banding in darker areas.

  • @super_jo_nathan
    @super_jo_nathan 2 years ago +23

    Am I correct in thinking that when clamping to 0 and 255 you will lose some of the error propagation? Of course it's better than wrapping around, but wouldn't storing the altered value and only clamping when actually assessing the pixel result in better dithering?

    • @javidx9
      @javidx9 2 years ago +22

      You are! In fact I was intrigued by this too, and created a version where this doesn't happen. What I observed was the error propagation goes out of control and quickly saturates, so the bottom right of the image is garbled. I thought about including it in the video, but then I'd have to explain the new custom pixel type required and it didn't really fit. I would guess that the clamping is required to keep things under control - this could probably be achieved by other means however, if you're prepared to go beyond just the basic Floyd-Steinberg algorithm.

    • @DFPercush
      @DFPercush 2 years ago +1

      Seems like you could have a moving window of floating point values, like maybe 2 or 3 horizontal lines at a time.

    • @super_jo_nathan
      @super_jo_nathan 2 years ago +3

      @@javidx9 thank you for the detailed response and the informative video! Hope to see more videos like this from you in the future!

    • @nobody8717
      @nobody8717 2 years ago +2

      @@javidx9 We'd probably have to hold at the place where the clamping would potentially kick in, to investigate what is happening as to information overloading or miscalculating or translating unexpectedly. Partial dividends accumulating a discrepancy from a rounding or something like that. Debug when "clamp" is used and peek the memory values of the vars.

    • @eformance
      @eformance 2 years ago +1

      @@javidx9 That makes sense, since errors would propagate and propagate diagonally, and since the algorithm's bias is towards "brightness" it would get out of control. It seems that the clamping was a fortuitous side effect that the algorithm needs. Did you try altering the bias constants too, to see if you could produce something more interesting?
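
    A sketch of the variant discussed in this thread, keeping the diffused error in a separate float buffer so nothing is clamped mid-propagation and only the output is quantised (illustrative only; as javidx9 notes above, without clamping the accumulated error can run away on some images):

      #include <cstdint>
      #include <vector>

      // 1-bit dither with an unclamped floating-point working buffer.
      std::vector<uint8_t> DitherUnclamped(const std::vector<uint8_t>& src, int w, int h)
      {
          std::vector<float> work(src.begin(), src.end());
          std::vector<uint8_t> out(src.size());

          auto spread = [&](int x, int y, float e)
          {
              if (x >= 0 && x < w && y >= 0 && y < h) work[y * w + x] += e;
          };

          for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
              {
                  float oldP = work[y * w + x];
                  float newP = oldP < 128.0f ? 0.0f : 255.0f;
                  out[y * w + x] = (uint8_t)newP;
                  float e = oldP - newP;                 // may grow beyond [-255, 255] over time
                  spread(x + 1, y,     e * 7.0f / 16.0f);
                  spread(x - 1, y + 1, e * 3.0f / 16.0f);
                  spread(x,     y + 1, e * 5.0f / 16.0f);
                  spread(x + 1, y + 1, e * 1.0f / 16.0f);
              }
          return out;
      }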

  • @s4degh
    @s4degh 2 years ago

    I was fascinated by the last dithering showcase with only 5 colors.

  • @RealNekoGamer
    @RealNekoGamer 2 years ago

    Been using SDL2 with my follow-alongs, as it's a tried and true frontend to the standard graphical APIs, with a few additional goodies such as a render scaling function

  • @orbik_fin
    @orbik_fin 2 years ago +1

    The brightening effect is caused by doing arithmetic with gamma-compressed values instead of linear ones. E.g. middle gray (128) actually encodes a brightness of 24%, not 50%. See sRGB on Wikipedia.
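
    A rough numeric check of that, using the standard sRGB decoding function (the constants are from the sRGB definition; whether this explains all of the brightening seen in the video is the commenter's reading):

      #include <cmath>
      #include <cstdio>

      // sRGB decode: 8-bit code value -> linear light in [0, 1].
      double SrgbToLinear(int v8)
      {
          double c = v8 / 255.0;
          return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
      }

      int main()
      {
          std::printf("code 128      -> %.3f linear\n", SrgbToLinear(128));        // ~0.22: much darker than half
          std::printf("0/255 dither  -> %.3f linear\n", 0.5 * SrgbToLinear(255));  // 0.500: the 50/50 mix is half
      }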

  • @TomCarbon
    @TomCarbon 2 years ago +1

    Another great advantage of Floyd-Steinberg is that the ratios were chosen to sum to 16, so the division can be done with a 4-bit shift, which is very efficient!!
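
    Roughly what that looks like in integer code (a hedged sketch, not the video's implementation; note that >> on a negative error rounds toward minus infinity, so some implementations bias the error before shifting):

      // Distribute a Floyd-Steinberg error with shifts instead of divides:
      // all four weights are sixteenths, so /16 becomes >> 4.
      void Distribute(int err, int& right, int& belowLeft, int& below, int& belowRight)
      {
          right      += (err * 7) >> 4;
          belowLeft  += (err * 3) >> 4;
          below      += (err * 5) >> 4;
          belowRight += (err * 1) >> 4;
      }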

  • @SoederHouse
    @SoederHouse 2 years ago

    Thanks for bringing back the youtube::olc::candy

  • @GNARGNARHEAD
    @GNARGNARHEAD 2 years ago

    nice one; I've been meaning to go back and have a look at the optical flow video.. try and figure something out for horizon tracking on the ESP32 Cam, a nice refresher 😁

  • @alrutto
    @alrutto 2 years ago +1

    Loved the video, I'm glad you're back. In terms of file size, how small would it be after filtering?

    • @javidx9
      @javidx9 2 years ago

      Thanks, it entirely depends on how many bits per pixel you filter to; for an uncompressed memory surface of 100x100x3x8 you can get it down to 100x100x3xN, where N is the number of bits.

    • @SreenikethanI
      @SreenikethanI 9 months ago

      @@javidx9 Or we can also use an Indexed format if applicable, saving even more space
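
    Worked out for those figures: 100 x 100 x 3 x 8 = 240,000 bits (about 29 KB) uncompressed, versus 100 x 100 x 3 x 1 = 30,000 bits (under 4 KB) once each channel is dithered down to a single bit, before any indexing or further compression.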

  • @clamato2010
    @clamato2010 2 years ago +2

    Greetings from Mexico teacher, I am a fan of your channel and I have learned a lot with your videos.

    • @javidx9
      @javidx9 2 years ago

      Hey thanks Sam! Greetings from the UK!

  • @philtoa334
    @philtoa334 2 years ago

    Nice.

  • @Moonz97
    @Moonz97 2 years ago

    Great insightful video! I wonder, how do you handle pixels that are out of bounds at 18:42?

  • @OrangeDied
    @OrangeDied 2 years ago

    I know nothing and have no interest in image processing, but I will watch this whole thing because yeah

  • @sunnymon1436
    @sunnymon1436 2 years ago

    MYST had a lot of this in it, as I recall.

  • @FaridAnsari1
    @FaridAnsari1 2 years ago

    I got my first IBM-compatible PC in the early 90's with my monitor being able to only display 256 colors in Windows 3.1. I remember that when I wanted to save an image or videos, I would play around with quantization and dithering options in whatever graphics program to make it look right on my display. After watching this video, it really makes me appreciate what dithering does (approximate with far less information and still get the idea of the image across!). I think it would make for a cool post-processing effect for pixel game engine based games, but I'm not sure if it is speedy enough for the FPS?

    • @SianaGearz
      @SianaGearz 2 years ago

      If you have something CPU-rendered, then you can make Floyd Steinberg work, it's fine, but it also looks terrifyingly bad in motion, as when you have moving and non-moving parts of the image, every little movement causes a ripple of value changes to the right and bottom of it (assuming you process from top right), while everything to top and left stays static, it distracts you from actually moving parts of the image and pulls your attention towards noise at the bottom left. You can use blue noise instead to achieve a similar looking dither effect. With precomputed blue noise, diffusion style dither is insanely fast on the GPU (or CPU), trivially parallelisable, and you can control the behaviour, you can make it stable frame-to-frame or vary it uniformly between the frames, there's even 3D or spatiotemporal blue noises specifically for the purpose. Computing optimised noise is extremely slow, but it can be precomputed such that it wraps around seamlessly and just shipped as a texture or array.

  • @barmetler
    @barmetler 2 years ago +2

    I want to point something out about pointers.
    In C++, the star is part of the declarator.
    int *i, j;
    will create one int pointer and one int.
    This is why we put the star on the right. It is not a style choice, since the star is not part of the type specification, but the declarator. The same goes for references.
    This is in contrast to unsafe C# code, where the above snippet would create two pointers.
    Hope this helps!

  • @Roxor128
    @Roxor128 8 months ago

    The serial nature of Floyd-Steinberg dithering isn't the only problem with it. It's also not a good fit for animations. The way FS dithering propagates the error through the image means that if you change a single pixel, everywhere after it will change as well, resulting in shimmering noise in an animation, which looks pretty bad. An animation-safe form of dithering needs to be localised and keep its pattern still relative to the screen. A Bayer-matrix ordered dither works quite nicely. Well-enough that the software renderer for the original Unreal from 1998 uses a 2*2 version of it on environmental textures to fake bilinear filtering. Interestingly, it's not dithering between colour values, but texture coordinates. Which makes sense as a way to save on performance. Much easier to add offsets to the coordinates of the texel to look up than to do bilinear filtering. Note that it only applies to the environment. Objects such as enemies and your weapon models are unaffected. Those just use nearest-neighbour texture filtering.
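
    For contrast with the error-diffusion approach in the video, a minimal sketch of a Bayer-style ordered dither as described here (a generic 4x4 threshold-matrix version, not Unreal's texture-coordinate trick):

      #include <cstdint>
      #include <vector>

      // Standard 4x4 Bayer threshold matrix, values 0..15.
      const int bayer4[4][4] = {
          {  0,  8,  2, 10 },
          { 12,  4, 14,  6 },
          {  3, 11,  1,  9 },
          { 15,  7, 13,  5 }
      };

      // 1-bit ordered dither: each pixel depends only on its own value and its
      // position, so it is trivially parallel and stable frame-to-frame.
      void OrderedDither1Bit(std::vector<uint8_t>& img, int w, int h)
      {
          for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
              {
                  int threshold = bayer4[y & 3][x & 3] * 16 + 8;   // spread 0..15 over 8..248
                  img[y * w + x] = img[y * w + x] < threshold ? 0 : 255;
              }
      }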

  • @watercat1248
    @watercat1248 2 years ago

    This dithering method will be amazing for hardware or software that has limited color support, for example NES, GB, GBC, etc. For people that create games or other software for those systems, I believe this information is very useful.
    Personally I'm not that good with code and algorithms, but I appreciate the video; the only way I'm able to create video games is because game engines exist.

  • @trim7911
    @trim7911 2 years ago +1

    RGB error dithering ... but after you said there's no cross-dithering between colours, all I can think is: what happens if you dither Hue Saturation Brightness (HSB or HSL or HSI or HSV if you prefer)? Wonder what sort of funky things would happen ... In theory it should still work, but with the possibility of rotating all the way to a complementary colour.
    But then converting to HSB and back after dithering might just be too much of a pain.
    Still you'd get some funky results ... Very much something that would have been tried on TV signals maybe games consoles like the Sega Genesis, Super Nintendo or Amiga (assuming you're using a composite out).
    Edit: wait, no, that's Y'CbCr .... So much technology that's mostly gone and I've forgotten about.

  • @yonis9120
    @yonis9120 2 years ago +3

    [In the voice of Cornelius Fudge in Harry Potter 5:] He's back!

  • @GregoryTheGr8ster
    @GregoryTheGr8ster 2 years ago +1

    Nice, but what should the algorithm do when the current pixel is the last on a scanline?

  • @RockTo11
    @RockTo11 2 years ago

    I wish dithering was used these days, even with 24bit palettes. For example, the splash screen on the Hulu app (on Samsung TVs) uses a teal gradient, but has a lot of posterization banding. Dithering would eliminate that.

    • @bubuche1987
      @bubuche1987 2 years ago

      In general, I think it would be easy to have shaders ( I am talking about GLSL here, and if you don't know what it is this comment is going to make little to no sense ) outputting colors in a much broader range. Everything is calculated not with integers between 0 and 255, but with "reals" between 0 and 1. The precision of those "reals" is invisible to the programmer, so it could be very high. Then, in the last step, when the time comes to display it on the screen with only 24 bits per pixel, the GPU could dither the whole result ( it would have the real result of what the color should be in those "reals", and the transformation to 24 bits would be the sampling ).
      Invisible to the programmer ( maybe a boolean to set to true ), retro-compatible with a lot of games and improving the result a lot.

  • @evennot
    @evennot 2 years ago +1

    The author of "Return of the Obra Dinn" has in-depth research on dithering on his blog, if anyone wants even more appreciation of the topic.

  • @janPolijan
    @janPolijan 2 years ago

    Hello, there. I'm using plain C at the moment for basic graphics programming. Thus all the C++ lambda-goodies feel like some sort of black magic, ha ha ha! But still from your good explanations, I understood most of your video and the Floyd-Steinberg dithering and it's very interesting.
    While watching the B&W dithering at 18m15s when you add the clamping, I started to wonder... I understand pixel values must not wrap around when diffusing the error. But isn't the clamping a little problematic, potentially for two reasons?:
    #1) slight decrease in dithering quality when we delete part of the error to be diffused in next steps.
    #2) a significant amount of branching is now being added to perform clamping of all four adjacent pixels for every pixel we scan.
    I thought perhaps it could be avoided? By simply computing Floyd-Steinberg in a signed buffer? Or even for in-place dithering, maybe one could add just a simple preprocessing step to halve the intensity of the input buffer and then cast that pixel array to a signed type during processing. I dunno? Maybe it sounds too "hacky"? But it's an idea I'd like to explore.

  • @heylittleguy26
    @heylittleguy26 2 years ago +2

    Do you think this could be used for lighting in 2D games? I think that would be a worthwhile experiment!

    • @ottergauze
      @ottergauze 2 years ago +2

      A lot of games (typically retro looking ones) do use dithering a lot for their lighting, except here it's more for an aesthetic than as a result of a limitation.

  • @eddiebreeg3885
    @eddiebreeg3885 2 years ago

    Looking at the very distinct artifacts it does look like the dithering algorithm used in the GIF format, which also uses indexed colors to reduce file size. I had no idea this algorithm could be that efficient!
    By the way I noticed the use of sqrt and pow to find the closest match by minimizing the Euclidean distance... it would be far more efficient to completely ditch the square root (useless if you just want to find the shortest distance, minimizing the squared distance is enough) and replace the pow by a good old multiplication :D
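
    (Concretely, the suggested change is just to compare squared distances, e.g.:)

      // Same nearest-colour ordering without sqrt or pow: sqrt is monotonic,
      // so comparing squared distances picks the same winner.
      int SquaredDistance(int dr, int dg, int db)
      {
          return dr * dr + dg * dg + db * db;   // instead of sqrt(pow(dr,2) + pow(dg,2) + pow(db,2))
      }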

    • @Roxor128
      @Roxor128 9 months ago

      Dithering isn't part of the GIF format. If you've got a GIF file with dithering, that was done by whatever program saved it. Also, it really should be avoided as it doesn't play well with the format's compression.
      Data compression is all about finding patterns that can be represented more simply. Noise doesn't contain any patterns, so you can't compress it. That's why it ruins compression for images in PNG and GIF format.
      Dithering makes a trade-off between spatial resolution and colour resolution. It fakes there being more colour by using patterns or noise that average out when viewed from a distance. And there's why it doesn't play well with compressed image formats: many dithering approaches rely on noise. One form of dithering that isn't terrible for use in compressed images is ordered dither, which relies on a regular pattern. Of course, if anything ends up resizing the image, that'll introduce a whole new set of problems to ruin things.

    • @eddiebreeg3885
      @eddiebreeg3885 9 months ago

      @@Roxor128 Although dithering isn't part of the GIF format, color indexing is. If you take an image originally encoded using some 24 bit format, and convert it down to 8 bits, you'd probably want some form of dithering if you want it to look anything like the original. As for compression, I would argue that if you're using dithering, your goal is to reduce the size of the total pixel space you're using, so you're ALREADY kind of compressing the image in some way. As to whether it's the best way to compress, that's besides the point

    • @Roxor128
      @Roxor128 9 months ago

      @@eddiebreeg3885 Indexed colour was designed for saving memory by separating the colour representation from the pixels while still allowing good colour accuracy. Yes, it does have the downside of limiting the number of colours you can have, but you can usually pick _which_ colours you'll have in your palette, and those will be the full accuracy of the display device (18-bit for VGA). When you only have 256KB of memory for the framebuffer, you're not going to waste it directly specifying the RGB values for every pixel. For some numbers, 640*480 with 16 colours takes about 150KB of memory (and was the highest resolution supported by plain old VGA). If you wanted to directly encode the 18-bit RGB values VGA uses for it, you'd need nearly 700KB (and it would have been a pain to program because 18-bit values do not fit neatly into byte-addressed memory).
      Compressed file formats are for saving disk space given a certain kind of image data. GIF was designed in the late 1980s when everyone was using indexed colour for everything. If you had an image saved as a GIF back then, it would almost certainly have been created from scratch with an artist-chosen palette, not converting it from an RGB form. That's what GIF's compression was designed to work with. Dithering undermines that.
      Just tried an experiment. Starting with a photograph resized to 640*480, I reduced the colour depth to 256 colours, generating the palette by the same method, but mapping the colours differently. One image used nearest neighbour, the other used error diffusion. The uncompressed image was 301KB. The nearest neighbour match was 157KB. The error-diffused dither was 192KB.
      Okay, that's not really fair, given GIF wasn't designed with photographic content in mind. So I tried another experiment with an image downloaded from FurAffinity that'd be a closer fit, even though it needed conversion. Same process but left at original size. Uncompressed 256-colour version was 1.1MB. Nearest neighbour was 339KB, error-diffused was 519KB. Also tried 16-colour versions. Uncompressed was 605KB, nearest neighbour 80KB, and error-diffused was 199KB. As for how they look, while the banding is a lot worse in the 16-colour version than the 256-colour one for nearest-neighbour, it really doesn't look too bad. I could buy an artist producing something similar from scratch (though obviously they'd do a better job).
      In all three cases, error diffusion makes the compression significantly worse. 22%, 53% and 148% larger than nearest neighbour, respectively for each of the test cases. You really have to ask yourself "Is it really worth potentially more than doubling the file size for the sake of some nice dithering?"
      EDIT: Realised that Paint Shop Pro can do ordered dither if you limit your results to a standard web-safe palette.
      Results:
      Photograph:
      Uncompressed: 301KB, Nearest: 46KB, Ordered: 70KB, Diffused: 107KB
      Drawn image:
      Uncompressed: 1.1MB, Nearest: 97KB, Ordered: 209KB, Diffused: 492KB.
      While ordered dither isn't as compression-friendly as nearest-neighbour, it looks a hell of a lot better, and compresses significantly better than error-diffusion, with error-diffusion coming out 52% bigger for the photograph and 135% bigger for the drawing.

    • @eddiebreeg3885
      @eddiebreeg3885 9 months ago

      @@Roxor128 This is the part where I have to concede I am no expert on dithering; I haven't looked at all possible algorithms, although I know there are a few. The only thing I wanted to point out in my original comment was that the look of that specific algorithm strongly resembles what you would see on *modern* GIFs that have been converted from RGB formats, even though it wasn't designed for that originally. Thank you for taking the time with the experiments, I did learn from them :)

  • @bogdanstrohonov8310
    @bogdanstrohonov8310 2 years ago

    Good evening Mr. Barr, how about a video on localization in games? Greetings, B S

  • @dennisrkb
    @dennisrkb 1 year ago +1

    You should perform the dithering in a linear color space.

    • @thorham1346
      @thorham1346 10 months ago +1

      No, you need gamma correction, and the more bits per channel you have, the less gamma correction you need. sRGB to linear color space is already too much for even one bit per channel.

  • @mworld
    @mworld 3 months ago

    CGA is back hehe.

  • @yutdevmahmoud5271
    @yutdevmahmoud5271 2 years ago

    Can you create a video on how to set up Visual Studio, import your engine, and work with it?

    • @javidx9
      @javidx9 2 years ago

      Yes! ua-cam.com/video/eTGSTTxR-Ss/v-deo.html

  • @hackerman8364
    @hackerman8364 2 years ago

    hey can you make a video about headers?

  • @nanoic2964
    @nanoic2964 2 years ago

    I've noticed that the quality on a 9th gen 2021 standard edition front facing ipad camera is quite poor, this video has shown me that it is because it dithers quite a lot.

  • @normwaz2813
    @normwaz2813 1 year ago

    Hi, a little off topic, but I wonder if you could explain an anti-aliasing algorithm?

  • @samuelecanale5463
    @samuelecanale5463 2 years ago

    Hello, I'm trying to make my own pixel game engine but I encountered a compiler error C2089: "Class too large" for the big PixelGameEngine class. Did you encounter the same error? If so, how did you solve it? Hope you will find the time to answer. Great video btw, I'm learning so much from you!

    • @javidx9
      @javidx9 2 years ago

      Thanks Samuele, sounds like you are allocating too much memory on the stack. Big areas of memory need to be allocated on heap and accessed via pointers or other appropriate interfaces.

    • @samuelecanale5463
      @samuelecanale5463 2 years ago

      @@javidx9 thank you very much. I'll try to fix it like this

  • @frankgrimes9299
    @frankgrimes9299 2 years ago

    We could fine-tune the lambda at 9:35: it should get better code optimization if we remove the branch and instead mask out the MSB and shift it down.
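
    (A hedged sketch of what that masking trick might look like for the 1-bit case - my reading of the suggestion, not code from the video:)

      #include <cstdint>

      // Branchless 1-bit quantise: the MSB of an 8-bit value already says
      // whether it is >= 128, so extract it and stretch it to 0 or 255.
      uint8_t Quantise1Bit(uint8_t p)
      {
          return (uint8_t)(((p & 0x80) >> 7) * 255);   // 0..127 -> 0, 128..255 -> 255
      }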

  • @giorgioguglielmone6528
    @giorgioguglielmone6528 2 years ago

    Sorry if I write to you here. Could you do a tutorial on how to write a program in Visual Studio C++ 2022 to connect to a Firebird 4.0.1 database (maybe using Boost.Asio or another library like SOCI or IBPP)?

  • @smartito_97
    @smartito_97 2 years ago

    Is this the algorithm that printers use?

  • @Lattamonsteri
    @Lattamonsteri 2 years ago

    I remember hearing an interview where a LucasGames employee told how, when he went to work there, dithering wasn't used in the games because it didn't compress well. But after he drew a dithered image that looked so much better than the standard (EGA?) image, the coders were forced to implement dithering.
    Now, I wonder... is this Floyd-Steinberg dithering easy to compress? Could we use the standard posterized image as the compressed image, and then just store another array where the error amount is stored for each pixel? Then at runtime, the algorithm would go through the image and create the dappled effect? Or is there a better way?

    • @SianaGearz
      @SianaGearz 2 years ago +1

      I don't see much use; you might as well compress the original high-colour-depth image instead. For, say, the 8 bits of your original colour image, if you're converting it to a 2-bit dithered representation, all you've done is store for each pixel a separate plane with 2 bits corresponding to the thresholded image, and another plane with 6 bits corresponding to the remaining error. You have also not decorrelated the data at all but duplicated it: say you store differences between neighbouring pixels and give them a variable storage size depending on the stored value; both planes then encode the same general trend, and you'd be better off doing this on the whole pixel, since when there are substantial magnitude changes you store them once rather than twice.
      At low bit depths, the dithered image itself is probably pretty much incompressible, while at higher ones, nothing speaks against compressing dithered image directly with local difference.
      But i think someone else can come up with a better approach in terms of compression, but i wager a guess, it wouldn't be simple at all.
      On the other hand, if you knew you'd be diffusion dithering the image for display, you can make use of lossy compression algorithm on the high bit depth image, the artefacts of which would be particularly well hidden by the dither, as they're similar in appearance. If you know ADPCM algorithm for compressing audio, there are 2D generalisations of it. It would even spit out data in the same order as consumed by the dithering algorithm, so you can have a pretty optimised implementation that simultaneously decodes and dithers. But i really don't know whether it would beat storing dithered image uncompressed at lower bit depths and trivial compression of dithered image at higher. Sounds like a subject for a scientific paper or something, but maybe someone has done that before.

    • @Lattamonsteri
      @Lattamonsteri 2 years ago

      @@SianaGearz thanks for a thorough answer :D I'm not familiar with audio compression or anything related to compression in general but I think I understood most of what you said! x)
      As for my original idea, I forgot how many bits are needed for the error values xD I guess I thought they could also be rounded to 8 values or something, but that would probably cause very weird rounding error artefacts!

  • @FrostGamingHype
    @FrostGamingHype 2 years ago

    I'm getting closer to building something like the console game engine you made in VS Code; I'm designing one in Code::Blocks.

  • @trentlangford9050
    @trentlangford9050 2 years ago

    This is oddly interesting

  • @JoshRosario310
    @JoshRosario310 2 years ago

    Audio Dithering next?

  • @oschonrock
    @oschonrock 2 years ago

    Is there a "bounds check" bug here: ua-cam.com/video/lseR6ZguBNY/v-deo.html on line 78, i.e. do the coordinates of vPixel+vOffset fall off the end / beginning of the row/column? i.e. do they use image information from a part of the image which is VERY far away, or even worse "outside the image"... (I haven't checked the implementation of GetPixel and SetPixel to see whether, and if so how, they are bounds checked). -- Update: I just downloaded, compiled and checked. SetPixel does the bounds checking and just ignores "out of bounds" pixel coordinates... so this is "OK..."

  • @Ochenter
    @Ochenter 2 years ago +3

    Hello David, Daddy.
    Long time no see; I miss your lessons.
    Stay safe, Mister.

    • @javidx9
      @javidx9 2 years ago +1

      Hi Daniel, thanks as always, and yes stay safe indeed!

  • @bubuche1987
    @bubuche1987 2 years ago

    Some thoughts after seeing this video:
    I am curious about how we should handle borders of the image. I mean you assume it's always possible to delegate the error to the 4 pixels you mention, and it's not the case.
    I also strongly disagree with your sentence saying that the number of colors is greater than what we can see. I think it's in general false ( can you provide a pair of colors which, put side by side over even an immense area, would be indistinguishable? ), and I know that it is false for special yet frequently encountered cases: dark shades. There are only 256 variations of hueless colors, and that's quite a small number. Create a picture made of bands of those shades, put it on any screen, and you'll see bands.
    There is a reason why so many games have banding issues ( at least old games ).
    I also do not understand what you said about storing negative values. Yes, if you try to do IN PLACE dithering, with a fixed amount of memory, you may face this problem. But in your case you are already creating a whole new array anyway. Nothing stops you from having an array of signed shorts that will contain the errors.
    Finally, I am not an expert in dithering at all, but I am quite sure that there must be alternatives that would allow for parallel dithering ( with a random function based on a _blue_ noise to avoid artifacts ? ).
    And dithering is still useful today ( for printing for example, and as I mentioned to deal with dark areas, in movies or games ).
    Last but not least, your approach seems to assume linearity of the brightness. But if you have a field made of gray pixels of, let's say, (127, 127, 127), it's incorrect to approximate it with a field of alternating black and white pixels. Doing so will result in a much brighter image, and it has nothing to do with negative values or anything like that.
    ( And to check that and avoid sampling issues ( that would merge the black and white pixels ) you can display on full screen your gray image, stand away from your screen and take a picture of it. Then do the same with the black and white pattern ).
    I banged my head on this problem for quite a long time.

    • @bubuche1987
      @bubuche1987 2 years ago

      Forgot to mention: is it possible to create an image that would look like a grid of black and white (like a chess board) but with slightly modified pixels so WHEN you dither with the right colors you actually obtain a totally different result?
      In general: is it possible to hide an image into another one so you can only see the hidden picture with the right dithering algorithm ?

  • @SianaGearz
    @SianaGearz 2 years ago +1

    I'm looking at my Odroid Go Advance, and it could actually benefit from dithering, the colour resolution is very low. Maybe temporal dithering though, aka FRC. Today, you feed 8-bit image to your PC monitor, but the pixel resolution is only 6-bit and with a nonlinear transfer curve, so temporal dithering is how they restore some of the requisite quality; or alternatively displays with 10-bit input and 8-bit panel. I feel the modern rendering also has something similar, where you add a noise function to your sample vector together with multiple samples or filtering to try to fight aliasing in the shaders or raytracing, so you dither/vary the sampling point instead of the colour value.
    A very unpleasant trait of Floyd Steinberg is that it looks good in a static picture, but absolutely horrendous in motion, as with minimal movement, parts of the image near the top left are stable, but near the bottom right are increasingly flickery, especially disturbing in parts of the image that haven't really changed. Blue noise was often the accepted substitute here, just like in shaders, until low-discrepancy noise functions were invented recently.
    As to its tendency to lighten up dark areas, wouldn't it make sense to convert the intermediate values to linear space from gamma space and then back? You also don't need to clamp the intermediate values then, just leave them as floats. Then the small-value diffusion that occurs would be more careful not to visibly brighten the image, i would think.

  • @markblacket8900
    @markblacket8900 2 years ago

    Floyd Steinberg sounds like a name from Cyborg

  • @Miki19910723
    @Miki19910723 2 years ago

    Dithering is commonly used in games combined with temporal antialiasing. Without error accumulation, I guess.

  • @blinded6502
    @blinded6502 2 years ago

    Imagine LCD screens actually use this algorithm