CMPRSN (Compression Overview) - Computerphile

  • Published 24 Aug 2024
  • Outlining the basics of compression methods, including some of the pitfalls! Dr Steve Bagley demonstrates.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottsco...
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

COMMENTS • 154

  • @thepurplelemons 11 months ago +182

    "and I'm just gonna write 'compress' because I've drawn the box too small and 'compressor' wouldn't fit in" is unintentional comedy gold

    • @the_real_ch3 11 months ago +26

      Doubly so after the intentional joke of “a short video on a long topic”

  • @NuclearCraftMod 11 months ago +132

    "Sean's actually on a tripod, but it's all the walls and background that are waving around in this place." - it all makes sense now, the gravitational waves are getting out of hand.

  • @GuentherJongeling-hu6oe 11 months ago +41

    CMPRSN can also be read as Comparison, talk about lossy compression right there

  • @zzzaphod8507 11 months ago +17

    I imagine that the initial version of the video was over 30 minutes, but some type of algorithm was run on it, resulting in the video being less than 16 minutes.

  • @paultapping9510 11 months ago +75

    casually just drops "actually you can define randomness as an uncompressible string", and moves on!

    • @cygil1 11 months ago +7

      Uncompressible, not uncompressed.

    • @paultapping9510 11 months ago +2

      @@cygil1 that's the one!

    • @isaactfa 11 months ago +4

      @@paultapping9510 Well, to channel my own inner Professor Brailsford, the one should have been 'incompressible'.

    • @moralboundaries1 11 months ago +1

      🤯

    • @mevideym 5 months ago

      You can look up entropy in information theory, it should make more sense then

  • @elraviv 11 months ago +56

2:55 Another way of looking at it, without going into the math: if we had an algorithm that could compress every file into a smaller file, we could feed it a file it had already compressed to get an even smaller one, and keep feeding the output back in again and again until we were left with 0 bits....

    • @Winium 11 months ago

      The issue with this handwavey explanation is that there (may be/)are "optimal" files, i.e. files with size equal to their information content.

    • @elraviv 11 months ago

      @@Winium the point still stands,
      all I needed to show was that the hypothetical algorithm does not reduce the size of an input file.
      you are correct that it will happen long before it will reach 0.

    • @HoSza1 11 months ago +3

@@Winium Information content is irrelevant to that proof. It's not even used in it.

    • @rogo7330 11 months ago

@@Winium The "information content" of a file depends on the compression algorithm itself. If my algorithm produces the text of some book when given an empty file, that does not mean the empty file has the information content of that book.

    • @mccm2402 11 months ago +2

      perpetuum compressible

  • @lanatrzczka 11 months ago +15

    Just for fun I played around with assigning shorter binary values to the more commonly used characters found in a text file and working up from there. This resulted in the lesser used characters having much larger representations, and the sequence wasn't ultimately much shorter than what I started with. I learned that ASCII and UTF-8 were already optimized beyond what I might come up with.
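[Editor's note] The experiment described above is essentially Huffman coding: build the code bottom-up by repeatedly merging the two least frequent symbols. A minimal sketch (the sample text is made up):

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Build a prefix code: frequent characters get shorter bit strings."""
    freq = Counter(text)
    # Heap entries: (combined frequency, tie-breaker, {char: code so far}).
    heap = [(n, i, {c: ""}) for i, (c, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, lo = heapq.heappop(heap)  # two least frequent subtrees
        n2, _, hi = heapq.heappop(heap)
        merged = {c: "0" + b for c, b in lo.items()}
        merged.update({c: "1" + b for c, b in hi.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

text = "this is some data!!!"
code = huffman_code(text)
bits = sum(len(code[c]) for c in text)
print(bits, "bits instead of", 8 * len(text))
```

This is provably optimal among per-character prefix codes, which is why hand-tuned assignments rarely beat it by much.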

  • @fly1ngsh33p7 11 months ago +24

There is also the Burrows-Wheeler Transform, which can cluster similar characters together. It was a very interesting lecture.
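[Editor's note] The forward transform fits in a few lines. A sketch using a NUL sentinel (real implementations use suffix arrays rather than sorting whole rotations):

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler transform: sort all rotations, keep the last column."""
    s += "\0"  # sentinel so the transform is invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

# Similar characters cluster together, which helps later compression stages:
print(bwt("banana").replace("\0", "$"))  # prints annb$aa
```

The output is just a permutation of the input, so nothing is lost; the clustering is what makes a following run-length or move-to-front stage effective (this is the idea behind bzip2).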

  • @brandonthesteele 9 months ago +1

    The idea of a random string being uncompressible was a new one to me and greatly added to my comprehension all on its own.

    • @angeldude101 7 months ago

      "Entropy" in information theory is basically the inverse of compressibility, and you can't get much higher entropy than true randomness.
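[Editor's note] Shannon entropy can be computed directly from symbol frequencies. A small sketch (per-character entropy only, ignoring correlations between characters):

```python
import math
from collections import Counter

def entropy_bits_per_char(s: str) -> float:
    """Shannon entropy: the average number of bits per symbol that an
    ideal lossless code needs for this symbol distribution."""
    n = len(s)
    return sum((k / n) * math.log2(n / k) for k in Counter(s).values())

print(entropy_bits_per_char("aaaaaaaa"))  # prints 0.0 (totally predictable)
print(entropy_bits_per_char("abcdefgh"))  # prints 3.0 (8 equally likely symbols)
```

Uniformly random bytes sit at the 8-bits-per-byte ceiling, which is exactly why they cannot be compressed.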

  • @bluekeybo 11 months ago +1

    Am I the only one who always feels like I get nothing out of Dr. Bagley's videos and explanations? Where is Dr. Pound when you need him.

  • @mijmijrm 11 months ago +2

    CMPRSN .. ? .. ah! .. a video on comparison. About time too.

  • @andytroo 11 months ago +3

    the 0 bit message is still useful - i walk into a coffee shop and get handed my usual coffee, without me requesting one :)

  • @pavlinggeorgiev 11 months ago +16

    I am a simple man. I see new Computerphile vid, I click.

  • @ZomB1986 11 months ago +7

    0:44 The bits in the "data" square are not random. It's ASCII for "this is some data!!!". The compressed data in magenta spells "fewer bits"

    • @GearsDatapacks 11 months ago

      That's a cool little secret

    • @Robstafarian 11 months ago

      I was just about to parse it, thanks.

  • @nHans 11 months ago +3

    ASCII is 7 bits. That's fewer than 8. In fact, exactly 1 less than 8.

  • @TheNotoriousCommenter 11 months ago +16

    ALWAYS a GREAT day when Computerphile uploads❤

  • @KilgoreTroutAsf 11 months ago +3

    I'm going to be doubly pedantic and confirm that you can indeed say "less bits" instead of "fewer bits" when you are talking about information entropy (and not just computer memory), given that entropy is a continuous quantity that can take any value in the positive reals.
    That is, 1.5 bits is less bits than 2.76 bits.

  • @MedEighty 11 months ago

    1:55 Thank you! I was screaming "FEWER", at the screen.

  • @shahinza 11 months ago +3

    I like how you compressed the video's title!

  • @ChrisSeltzer 11 months ago +4

    "You can define randomness as an uncompressable string" a video about this would be appreciated.

    • @bity-bite 11 months ago

      I don't understand what he meant by that

    • @Quantemic 11 months ago +2

      @@bity-bite As far as I understand, compression requires patterns and repetition in the thing to be compressed. A truly random set of data doesn't have either of those, so it wouldn't compress. So I guess he meant that compressibility is a measure of randomness; lots of randomness -> less compression.

  • @gwaptiva 11 months ago +2

    "All mushrooms are edible, some of them even more than once." The same is kinda true for Compression: I can compress any dataset down to 1 bit. Decompression is a completely different kettle of fish, though

  • @ccthomas 11 months ago +9

    The question they really need to answer is, what are they going to do once they run through all their tractor-feed paper?

    • @olik136 11 months ago +1

      you can still buy it new

    • @devnol 11 months ago +1

      You underestimate how much tractor-feed paper is left in the world

  • @KillaBitz 11 months ago +3

    Need a video on "Middle Out" compression

  • @bmitch3020 11 months ago +10

    I haven't seen 7Up in so many years. Is that a product placement? 😄

    • @SteveLEKORodrigue 11 months ago

      I was about to say the same.

    • @-dash 11 months ago

      7Up more like 7Zip

  • @TiagoTiagoT 11 months ago +4

    Have you guys done a video on zip bombs yet?

  • @blumoogle2901 11 months ago

An interesting but very complicated system for compressing text, theorised in a sci-fi book I've read, was the development of a conlang specifically to serve as a compressible intermediary language.
Now, translating between natural languages is naturally lossy, but it's possible to develop a whole language which is inherently lossless in translation with another specific language while being more regular and information-dense. You can then use this denser language as a first compression stage before applying regular text compression as a second stage. If you limit the domain and format of the original messages, it's possible, at a penalty in compression and decompression time, to achieve very high levels of lossless compression.
It's a useful idea in the rare case where extremely low bandwidth combined with long transmission times makes very intense compression algorithms and huge identical pseudo-one-time-pads at both ends worthwhile.

  • @dembro27 11 months ago +1

    "You can define randomness as an incompressible string" - that's interesting, as true randomness tends to have patterns and streaks that, if I understand correctly, a compression algorithm could look for.

    • @mgord9518 11 months ago +5

      Finding a substantial amount of entropy is incredibly rare.
      Yes, it's possible to find "AAAAAA" in a truly random string, but no compression algorithm would be capable of compressing the string as a whole due to stuff like "ApqTrL" being much more common. You'd grow the data by having to escape your control characters that constantly appear in the random data
      I suppose if your random data were a specific set of bytes (like alphabetical characters), you could use unprintable characters as instructions so you'd never have to escape the random data, but even if that were the case you'd get an insanely bad compression ratio.

  • @elementalnova7418 11 months ago +1

    Been hoping for a general overview forever

  • @ZohoExpert 11 months ago +1

    Thank you! It is “fewer bits” and “less bits” was bothering me so much 😂

  • @Krazy0 11 months ago +1

You can always use an otherwise-unused byte as a separator between packs, and after "that byte" include a count (some base-256 number) telling the decompressor how many times to repeat the pack. That makes decompression much faster, though compression will be hard, and that unused byte won't be usable in the data unless it's escaped by doubling "that byte", just like a doubled backslash in a string produces a single one. Good compression, to be honest.
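[Editor's note] A sketch of the escape-byte scheme described above; the choice of 0xFF as the escape and 4 as the minimum run worth encoding are arbitrary assumptions:

```python
ESC = 0xFF  # assumed to be rare in the input

def rle_encode(data: bytes) -> bytes:
    """Run-length encode: long runs become (ESC, count, byte) triples."""
    out, i = bytearray(), 0
    while i < len(data):
        b, run = data[i], 1
        while i + run < len(data) and data[i + run] == b and run < 255:
            run += 1
        if run >= 4 or b == ESC:        # escape long runs and the ESC byte itself
            out += bytes([ESC, run, b])
        else:                           # short runs stay as plain literals
            out += bytes([b]) * run
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESC:
            out += bytes([data[i + 2]]) * data[i + 1]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

msg = b"AAAAAAAABCDEFFFF\xff"
packed = rle_encode(msg)
assert rle_decode(packed) == msg  # lossless round trip
print(len(msg), "->", len(packed))  # prints 17 -> 13
```

Note the cost the commenter mentions: a literal 0xFF in the input now takes three bytes instead of one, which is exactly the escaping overhead discussed in the video.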

  • @andy.robinson 11 months ago +1

    Would love to hear some ideas on Optimal Tip-to-Tip Efficiency. No need for demonstrations though.

  • @dipi71 11 months ago +1

0:11 JPEG is not a compression algorithm, it's about data _reduction._
JPEG discards details deemed unimportant to our visual system. It doesn't really compress; it irrevocably reduces the image data.

    • @toby9999 11 months ago

      Semantics.

    • @dipi71 11 months ago

      @@toby9999 Semantics are an essential part of language. It's what makes the meaning of »compress« different to the meaning of »reduce«.

    • @angeldude101 7 months ago +1

      JPEG _is_ not compression. JPEG _uses_ compression. Yes, it reduces the data, but it does so specifically to get data that's easier to losslessly compress, specifically with a run-length encoding specifically for 0s (which the lossy encoding generates a lot of), followed by Huffman encoding.

  • @chickenman297 11 months ago +2

    Lossy compression can easily be seen in compressed movies with crowds in the background. Generally, the crowd will look blurry and have a much lower frame rate than normal.

    • @d5uncr 11 months ago +1

      No, you can't have the background running at a different frame rate than the foreground.

    • @chickenman297 11 months ago

      @@d5uncr You misunderstand. The movie is running at a standard rate. The background displays an image and skips a few frames before changing it while the foreground changes every frame.

    • @Robstafarian 11 months ago

      I used to see compression in tape-delayed Formula One broadcasts as sparkles in the grass; that was one of the reasons why I dropped that cable provider.

  • @AgentM124 11 months ago

The best way to compress (pseudo)random data is to know the PRNG algorithm, the seed, and optionally the length or index into the stream it generates.
But for truly random data, you'd have to find a function that fits the data and then communicate that function, so you hit the same problem: it costs more data to represent.
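[Editor's note] The pseudorandom case really is that cheap: the "compressed form" is just the seed and length. A sketch (seed 42 is arbitrary, and this only works if both ends agree on the exact PRNG):

```python
import random

def expand(seed: int, n: int) -> bytes:
    """Regenerate n pseudorandom bytes from a seed: the whole 'file'
    is represented by just the pair (seed, n)."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# A megabyte of pseudorandom-looking data, "stored" as two small integers:
blob = expand(42, 1_000_000)
assert expand(42, 1_000_000) == blob  # perfectly reproducible
```

This is a compression ratio no general-purpose compressor can touch, precisely because the data only looks random; its true information content is a few bytes.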

  • @blumoogle2901 11 months ago

    The system with the highest factor of compression I can think of would be to have the first few bytes - or even potentially gigabytes - to be dedicated to specifying exactly which very custom algorithm is used to compress the message which follows. Your decoder and encoder would be huge, but you could analyse the package very intensely to choose the optimum choice for the specific niche message you are sending, so you never deal with the end result being larger than the original message. It would simply require a lot of pre and post processing but taking an hour to compress and decompress a message is worth it if you save a day by sending a smaller message through a low bandwidth tunnel.

  • @kingsindian1066 11 months ago

    How about a video covering wave function collapse procedural generation algorithm.....

  • @TheGreatAtario 11 months ago

I'm glad to see that "fewer" correction. But I'm sad to see it didn't stick.

  • @VectorNodes 11 months ago

    See Steve Bagley -> immediately click

  • @bluekeybo 11 months ago +1

    Now, if my compressor can be allowed to be infinite, I'll just make an infinite lookup table

  • @L0op 10 months ago

    less bits feels better than fewer bits to me, maybe because we rarely talk about single bits, but rather see them as a measure of storage in daily life? dunno

  • @themadone3071 11 months ago +1

    Could you do a video about MSS and MTU 😊

  • @Ben_EhHeyeh 2 months ago

    What is the limit of compressing?
    4x, 5x, 6x, 7x?
    Is there a mathematical reason for the limit?
Also, I can't find your videos through the YouTube app; I have to search in the browser and then open them in the app.
In the YouTube app it says, "No Downloads Found." when I search for your channels.
This issue may be malware on my phone and not a YouTube system channel issue.
There is apparently an attack which uses cell phone numbers to propagate infections, like SimJacking (3G) but newer, since 3G is a deprecated attack vector. Maybe there is a newer SimJacking for 4G and 5G?

  • @hamc9477 11 months ago

    I'm always looking at Steve in these videos. The background isn't teaching computer science nearly so well!

  • @lancemarchetti8673 11 months ago

I noticed my comment was deleted, possibly due to an included link to my demo file. How do I send an email to you guys?
    Thanks. Lance Marchetti, South Africa.

  • @-dash 11 months ago +3

    The other night I dreamt I had gotten myself stuck in an LZMA2 archive

    • @un2mensch 11 months ago +4

      That's a lot to unpaq

    • @-dash 11 months ago +1

      @@un2mensch Yeah it is considering my massive dictionary size

    • @mgord9518 11 months ago

Did dream-you just inherently know it was LZMA2, or did you see the header or file extension?

  • @mrudo8663 11 months ago

Can we use pi as a form of compression?

  • @cyberturd-rz3fm 11 months ago +1

    what about the middle out algorithm?

    • @KillaBitz 11 months ago +1

      I just made the same comment.
      Men of culture, UNITE!!

  • @YuriBez2023 11 months ago

    0:58 - Ironic. You should do something on capacity planning next :D

  • @rdubb77 11 months ago +1

    A trill is well known to classical musicians.

  • @StubbieCA 11 months ago

The most vital parts of the video, the scenes at the desk with the writing, are out of focus.

  • @TAHeap 11 months ago

    @ 8:08 "Go for the name of name of a computer" .... _Arf!_
    Now, personally I probably have one or two specimens sitting in a long-unvisited lock-up storage unit somewhere, but you might find it useful to consider the age range of your target demographic ... 👵👴

  • @Moley1Moleo 9 months ago

    That 7-up product placement.

  • @threeholepunchmike3549 11 months ago

    Commenting before I watch it all. If I don't hear middle out, gonna be upset

  • @TheTruthAboutLemmings 11 months ago +1

    No mention of smaz? No mention of using a dictionary of words?.. Bit of a pointless video really

  • @Unisaur64 11 months ago

    A video on compression described as a long video on a short topic 🤣

  • @telperion3 11 months ago

    7zup?

  • @Strawberry_Htet 11 months ago

    I've just watched the silicon valley :3 this week.

    • @b1tbanger 11 months ago +2

      Middle out.

  • @grotmx 11 months ago +1

    Thank you for correcting the less/fewer thing. I find it very distracting when those get mixed up, and I find myself no longer concentrating. My stupid brain.

    • @murphy54000 11 months ago +1

      less/fewer has no mirror with more. It was the personal preference of an old grammarian that was eventually accepted as fact despite it almost never being used that way historically.

    • @misterkite 11 months ago +2

      You should convince your brain that the fewer/less thing is *arbitrary*. This distinction was first expressed by grammarian Robert Baker in 1770. He just made up the rule out of nowhere saying it "felt more eloquent".

  • @Yupppi 11 months ago

    The title of the video this time: comparison. Weird that he talks about compression so much.

  • @ThomasGiles 11 months ago +3

Come on now, you can say “less” to mean “a lower number of.” Not all dictionaries are prescriptivist. And this is a YouTube video about compression, so who cares?
Also, “less” has less letters and syllables than “fewer,” fitting the theme perfectly. This man should be applauded for saying “less” and staying on-message!

    • @pierreabbat6157 11 months ago

      If we say "fewer", we should also say "manier". "More" is the comparative of "much". (much:more:most is cognate with μεγας:μειζων:μεγιστος.)

    • @BobbyHill26 11 months ago

      If Merriam-Webster is to be trusted, less has been used this way in writing for over 1000 years and it wasn’t until the 18th century when the writer Robert Baker wrote a prescriptivist “usage book” and for this specific case, even admitted that less vs fewer was just his own personal preference

  • @AlphaFoxDelta 11 months ago

    Epic

  • @refactorear 11 months ago

    3:34 🤯

  • @mattj65816 11 months ago +2

    "Is it 'fewer' bits." Thank you.

  • @sam-sn5pu 11 months ago +1

    please buy a tripod!!!!!!!!!!!

  • @rachel_rexxx 11 months ago +2

    Ok cool, go deeper please?

  • @minxythemerciless 11 months ago

    I'm pretty sure that there will soon be a way to compress text using large language models. It won't necessarily be super fast, but probably way better at compression.

  • @gryzman 11 months ago

    FeWeR!

  • @skyscraperfan 11 months ago +2

If a character is represented by only three bits instead of eight, doesn't that mean we need additional bits to tell us that the character has ended and the next bit belongs to the next character?
I also don't understand why A has to be represented by "1A". If the counting digits are not characters that are allowed in the text, you could just delete all the 1s, so "AAAABCCC" would convert into "4AB3C". That of course falls apart if numbers can also appear in the text, so you need different characters for counting.

    • @3k2p6 11 months ago +1

      With that encoding when I see "4AB3C" I understand ABABABABCCC. The method CAN'T have multiple interpretations, it has to be clear.

    • @zxuiji 11 months ago +3

The example he gave was inefficient; if you look at Huffman compression you'll see a much more effective scheme. Take that AAAEEEAAAEEE string: with Huffman coding it boils down to 1010010100, which is 10 bits, much shorter than the 96 bits of the original (although in this case it isn't actually worth compressing, because of the extra data needed to describe what that string of bits means). Expand that concept to something much bigger, the kilobytes and megabytes that text files (Word documents and so on) run to, and you can see how it would be a major saving for a server doing hundreds, maybe thousands, of such downloads/uploads a second. That is why compression is needed, and it is even now a hotbed of developer interest.

    • @skyscraperfan 11 months ago

      @@3k2p6 Of course the decoder always knows how to interpret the code. It does not have to apply its best guess. For ABABABAB you could define brackets like 4{AB}.

    • @murphy54000 11 months ago +3

      The way nearly every compression algorithm works is by developing a legend or key based on strings or other contiguous data types (including binary representations) and assigning them a substituted value which, ideally, takes up less space than the original string (or other data). If a substring appears only once in a text, that part can't be compressed any further, and in fact *that specific substring* will take more room to store in the compressed state than it requires in plain text. This is because to decompress the substring, you need to store the original content of the substring in the legend, plus the replaced string in the compressed version. Even if your algorithm can flag something as "this is not compressed; treat this as plaintext", *that's additional data that must be stored*.
      A has to be represented by "1A" because the 1 serves two purposes: it explicitly ends any substring to be multiplied during decompression, and it flags the next substring as uncompressed. The only place this could be omitted would be as the very first character, but that's not only bad practice, but also error prone, and ultimately *isn't worth the bother to define in the algorithm in the first place*

  • @stevojohn 11 months ago

    Finding Tom Scott a bit insufferable these days, but his video on Huffman Encoding was way better than this. Huffman has been proven to be way better than RLE.

    • @angeldude101 7 months ago

      They solve completely different problems and look for completely different types of patterns. In fact, it's very possible to use both in sequence, applying RLE, and then huffman coding the result of that. This is exactly what JPEG does.

  • @paullamar4111 11 months ago

    Am I the only person who thinks "Kolmogorov Complexity" every time the subject of compression is discussed?
    I am, aren't I?

    • @TAHeap 11 months ago

      No ... but as any absolute measure of Kolmogorov Complexity would be uncomputable ...

  • @marcombo01 11 months ago +1

    Pole position

  • @fabianmerki4222 11 months ago

    just index all large files with a uuid, ignore all conflicts and save a lot of space. .. cloud compression 😂

  • @diskgrinder 6 months ago

    Too much too soon. Just show how it works on a sentence

  • @markjfannon 11 months ago

    7up? What's happened to the usual diet coke :0

  • @lucidmoses 11 months ago

You should be able to compress random data if you have:
more than about 1k of it,
and HUGE processing ability, say a GPU with 256 cores and enough memory to hold the file.
Think of run-length encoding, but varying two things: the stride between bytes and the offset of the byte values. Standard run-length encoding would be (1,0). Then test every second byte, (2,0); every third byte, (3,0); and so on up to, say, 255. Then start on the offsets: (1,1), where the next byte is considered a match if its value is one higher, then (1,2), (1,3), (1,4) up to (1,255). Then start again at an offset of 2: (2,1), (2,2), (2,3), etc. It would be pretty hard for actual random data to avoid all combinations like that.
But like I said, it's going to take HUGE processing time, and if the data is truly random I'm not sure how much you would compress, but it should be more than zero.

    • @jeremydavis3631 11 months ago

      You're probably right. He did say an incompressible _stream_, though. In theory, a stream of random data is infinitely long. And, unless I'm mistaken, the best possible compression ratio for random data, while always nonzero, will tend toward zero as the stream gets longer. That's because, while you can find patterns in any given chunk of random data, they won't continue to hold true in the rest of the infinite stream. In practice, though, compressing individual chunks might be good enough.

    • @lapatatadelplato6520 11 months ago +3

      There are no patterns in random data. Therefore there is nothing you can take advantage of to compress it.

    • @lucidmoses 11 months ago +1

      @@lapatatadelplato6520 Apparently you didn't read what I wrote.

    • @lapatatadelplato6520 11 months ago +5

      @@lucidmoses i did, but the longer the string of random data, the less likely a global pattern will emerge. Look up shannon entropy. It should dictate the mathematical nature of compression. Its mathematically impossible. Like energy is conserved in the real world, entropy is conserved in data. No matter how well you subsample the data, if the string is random, you cannot compress it.

    • @TeslaPixel 11 months ago +1

      "It would be pretty hard [for] actual random data to avoid all combinations like that." But not impossible, so a random sample can indeed break your strategy. There is simply no strategy to always compress random data, and this is easily proven as in the video. If you presented a formal strategy, a string breaking the strategy could be found given time.

  • @garretmh 11 months ago

    Clickbait, the video wasn’t about “comparison” 🤪

  • @TerminalWorld 11 months ago +2

    There is no such thing as 'lossy compression' - compression is always lossless.
    What you are thinking of is conversion.

  • @MainDoodler 11 months ago +1

    Best 7up ad