EDIT (March 2023): The ending of this video includes some predictions on the future of VTubing, including the possibility and implications of a truly "AI" VTuber. As of December 2022, this prediction came true with the debut of Neuro-sama just one month after the release of ChatGPT, which led to mass debate about AI and the possibility of AGI, a heavily philosophical question. Given such hindsight, this video does not provide nearly enough nuance on this now-widespread and controversial topic, so please forgive the oversimplification.
"Unlike in conventional programming where the programmer knows what they're doing..." smh Junferno never expected you to spread misinformation like this
Imo, I think it’s because his channel has an almost old UA-cam kind of nostalgia with it. It’s refreshing when a lot of video essays like this just feel the same because of graphics and editing they use and stuff. I feel like Junferno’s videos are more down to earth
I’ve been recommended VTuber channels for like 2 years now and as soon as I thought UA-cam would stop I had to click on this and now the cycle reset itself I can’t win
As an ML expert, I actually think your explanations were the perfect level of detail; the only stuff you really skipped over was the optimization details.

Unsure if you have a background in ML, it sort of sounds like you do or at least have done some courses, but if you haven't, really impressed with your ability to digest and explain this stuff. This is probably the single best 15-minute ML overview explanation I've ever seen, and I've spent like 5 years pitching VCs and whatnot on various ML technologies. So if you're saying "I skipped over a lot of detail" due to any kind of like, impostor syndrome when it comes to the subject matter or whatever, I'd say don't sweat it, I thought this was really high quality.

Only note would be the very very end, and I wouldn't expect anyone to really know this as it's a bit niche within Transformer research, but the whole point about "these are all separate models" is very rapidly changing because of progress with large pretrained Transformer models. In one of Deepmind's more recent papers, they shared a model called "Gato" which is a highly multi-modal Transformer model that performs reasonably well on over 600 tasks. It isn't SoTA in any particular domain, but it's also a wildly small model (only ~1.5B parameters or something like that, GPT-3 is 175B for reference), and very importantly, it appears to follow the same scaling laws as Transformer models in other domains. All of that meaning, very soon, these different specialist models will be more frequently combined under one big model that generalizes across domains. Now while it's highly unlikely that this will make anything sentient despite what the AGI-doomsayers may think, it will likely make that whole "Very real feeling Vtuber" possible like seriously within the next 2-5 years.

But yeah man, awesome video, def just earned a sub from me, looking forward to whatever else you put out.
@@Cyberlong I don't know what that is lol. I googled it; it looks like another kind of HAL 9000 or Skynet-like AGI kind of thing. Nothing like that is possible with today's tech.

There are a few legitimate experts who believe we could achieve AGI w/Transformer models alone; I (and most reputable experts) don't think that's possible. But to be fair to their point of view I'll describe it as I understand it: Basically as we scale these Transformer models, there appear to be these patterns where they sort of are suddenly able to do tasks that they were really bad at before. So like, as you make the jump from 1B parameters to 1 trillion, the idea is that there are new abilities that suddenly "pop up" that didn't even look within reach before. For example, suddenly being able to do word-based math problems & showing the work. So their idea is that it will "suddenly" develop sentience or whatever. Typically with ML, you see a more linear ascent in performance, but the "suddenness" here is what gets these people's attention, because if it's "all of a sudden," couldn't it all be within reach? I think the idea plays on human biases & aspirations really well, not all that different from the "I'm due" mentality in gambling.
The model already knew how to do the thing by that time, and there likely was a linear steady ascent in the knowledge to do such a task, but it was masked by the fact that we were feeding the task in very poorly; when it can make sense of our bad description, it does the job well because it has the information already.

But knowing how to do various complicated tasks, and sentience, are very, very, very different things lol. If this sort of thing interests you, every AI scientist's favorite neuroscience book is "A Thousand Brains" by Jeff Hawkins. Describes a pretty convincing theory of consciousness and how higher level intelligence works and came to be w/the human brain. One really important characteristic is existing in an extraordinarily information-rich analog 3D world, something that today's hardware doesn't really allow robots to do in the way humans do.

There is no theory of the brain that even remotely resembles what we are training on computer hardware today. What we are training today is really cool, and really powerful, and will automate a lot of work. It will do great things, and awful things, but it is extraordinarily unlikely that it will resemble anything like HAL 9000 lol, a technology like that is a long ways away.
@@tweak3871 Well, MAGI is a supercomputer that _technically_ has sentience, but at least on the show it isn't really shown, so I'll omit that as I don't believe in sudden sentience. MAGI as a supercomputer, or three to be exact, is basically a group of fully dedicated hardware for three different AI. Ignoring the lore and how there were literal brains inside that thing, is it possible for humanity in some years to create a model so good that it is able to manage the decisions of a city all on its own? That's what MAGI does in the anime: three ultra-intelligent AI that decide the future of the city and the best course of action.
@@Cyberlong In short no, but I think the true answer is more interesting.

So first off, I just want to dispel the sort of myth that is this idea of some grand intelligence that is an expert in everything and can perform everything, whether it be human or artificial.

Ask yourself this question: do you want a generalist driving your bus, or a specialist? Would you want someone who has spent 30 years driving all kinds of automobiles, or 30 years driving the exact bus you're riding in, to drive the bus?

The answer undoubtedly is that you would prefer the specialist bus driver, because there will be less error.

No matter how you cut it, there will be a limited space allotted in a brain like a human's, or in an artificial one, as unlimited capacity is impossible.

So no matter how large you make this machine, there will be a limit. The next question to ask would be: okay, if we have this limited compute capacity, what's better? One big generalist, or many specialists plus a system of communication?

The answer is admittedly somewhat context dependent, but for a huge majority of situations it's a group of specialists. There are many, many, many reasons for this that are rather technical in nature, but most boil down to some version of output vs. energy cost.

So given that many specialists is most often the superior strategy, while the future won't be all AI driven, it will include quite a bit of AI in the mix; in fact it kinda already does. Right now the issue is communication of information between humans & AI/ML systems. A group of specialists are required to feed information in so that information may flow back in the form of graphs and such, which can be digested by people who aren't AI/ML specialists. But this is changing very rapidly, as now we have more natural ways of communicating information to large models. This trend is going to continue, and you will begin to see more "human"-looking AI participating in the decision-making processes at the highest levels of power.

So, the question: will it be possible to build a big model capable of making good decisions at scale? Probably (eventually, not likely soon), but it will most certainly be outperformed by a mix of AI/humans specialized in diverse fields communicating efficiently. So you're never going to see some super AI governing our world or whatever, even if it does technologically become possible.
@@tweak3871 Oh wow, and how feasible is it for the AIs to communicate with each other? For example, say we had an AI that makes decisions (it doesn't matter what kind for this example) and it decided it needed to execute an action just before the value of a certain currency passes a threshold. Observing the currency and predicting it would be the job of another AI. Then, when the time is right, the currency observer passes the information to the decision maker to execute whatever it was supposed to do. That was an example of simple communication, but if there were many specialists passing information to each other at high intensity, how efficient would it be? Would it be slow? Would there need to be a buffer system? Is a translator necessary between models? The thing is, the idea of an AI overlord is weird to me just from the number of parameters it would need to have. But maybe a composite system for helping and managing certain systems in companies may be feasible. I don't really know.
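That observer-to-decision-maker hand-off can be sketched in a few lines of Python. This is only a toy illustration of the "buffer system" idea from the comment above; there are no real models here, and all the names (`watcher`, `decider`, the price numbers) are made up:

```python
# Two "specialists" communicating through a simple message buffer:
# a watcher flags when a price crosses a threshold, and a decider
# acts only when it receives that message.
from collections import deque

def watcher(prices, threshold, buffer):
    # Scan the price stream; on the first crossing, post a message.
    for t, price in enumerate(prices):
        if price >= threshold:
            buffer.append(("crossed", t, price))
            return

def decider(buffer):
    # Consume one message if available; otherwise keep waiting.
    if buffer:
        event, t, price = buffer.popleft()
        return f"execute trade at step {t} (price {price})"
    return "wait"

buffer = deque()  # the shared channel between the two specialists
watcher([98, 99, 101, 105], threshold=100, buffer=buffer)
print(decider(buffer))  # execute trade at step 2 (price 101)
```

In a real system the buffer would be a message queue and the "translator" question becomes a serialization format both sides agree on, but the shape of the hand-off is the same.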
ive been trying to get into machine learning these past few months and this video perfectly summarized a bunch of the basic things which I have learned about so far
I'm happy that my google sheet could help you, even if it was the smallest bit of help. I freaking love your vids! The amount of info, the pacing, the jokes, they're all so well-balanced. Hope the algorithm picks you up one day, you deserve more subs!
literally was just thinking about how junferno should release a video titled "The "VTuber" and Why Artificial Intelligence has Limits," crazy what a coincidence!
literally was just thinking about how reeto should write a comment that said "literally was just thinking about how junferno should release a video titled "The "VTuber" and Why Artificial Intelligence has Limits," crazy what a coincidence!," crazy what a coincidence!
He didn't release a video with 2 titles, that's impossible. How would he make a video with "The " and " and Why Artificial Intelligence has Limits" as titles?
"Both of them (the brain and computers) are things with parts in them that do things." Definitely using this next time my teacher asks me a brain-related question.
A.I. speedruns are a thing already. Now if we just add a personality to the speedrunner, then we will eventually have something to rival SethBling or SimpleFlips.
@@SlyHikari03 Finally caught a SAImpleflips stream, he failed to jump over the first goomba in 1-1 and then proceeded to crash live on stream, great entertainer he is.
@@SlyHikari03 I'd guess that it'd be possible to start on something like that already. If I were doing it, I'd probably go with a model that is non-humanoid, going for more of a cute robot/animal mascot look. Make the model a non-anthropomorphic dog with a selection of expressions and barks/howls, and people wouldn't care much about it not being able to talk and would be more accepting of inappropriate responses and mistakes that a human wouldn't make. Tie chat (especially donations) into its reward system and give it some extra outputs of expression changes and visual/audio reactions, and you could probably train it into something entertaining in a pet sort of way.
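"Tie chat (especially donations) into its reward system" is essentially a bandit/reinforcement-learning setup. A toy epsilon-greedy sketch of the idea, where the `chat_reward` function is a made-up stand-in for real donations/chat feedback and the action names are invented:

```python
# Toy epsilon-greedy bandit: the mascot picks a reaction, "chat"
# hands back a reward, and the mascot learns which reaction pays.
import random

ACTIONS = ["bark", "howl", "head_tilt", "zoomies"]

def chat_reward(action):
    # Stand-in for real chat/donation feedback; "zoomies" is secretly best.
    return {"bark": 1.0, "howl": 0.5, "head_tilt": 1.5, "zoomies": 3.0}[action]

random.seed(0)
counts = {a: 0 for a in ACTIONS}
values = {a: 0.0 for a in ACTIONS}   # running average reward per action
for _ in range(2000):
    if random.random() < 0.1:        # explore 10% of the time
        a = random.choice(ACTIONS)
    else:                            # otherwise exploit the best so far
        a = max(values, key=values.get)
    r = chat_reward(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental average

print(max(values, key=values.get))   # the crowd favorite ("zoomies" here)
```

A real version would replace `chat_reward` with parsed chat messages and donation amounts, and the "actions" with the model's expression/audio outputs, but the learning loop is the same shape.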
I don't know why but i just love your sense of humor. Especially that passive aggressive mocking of people who turn away from any difficult sounding word that literally just means what it means.
One of the rare times when I'm pulled _out_ of the experience by the editing choices. I was still on the concept of Elders React recognizing emotion in the Lain gif when I got smacked in the face by "Douse Shinundakura" at 3:22. All good stuff, but... _...damn._
when we gained the capability of editing out, say, certain buildings from a photo of the Hellsalem's Lot skyline, we also gained the capability to detect cases of bad photoshopping, as well as being just better at cross-referencing images with other pieces of information. i don't really see why this won't happen as well with modified videos, or even artificial intelligence that passes the Turing Test.
@@Fabelaz hence our current problems with bias-catering media. but it doesn't mean that those strategies are ineffective, or that they cannot be taught in the usual media literacy course. seatbelts are not automatically ineffective just because people refused to use them.
I remember when I was learning to use Adobe Suite 5, that I noticed the artifacts and distortions in online content more often... Then when I learned more about how photography and cinematography worked, I could better spot media that relied on those tricks to influence how it's perceived... And as I learned more about language and psychology, I was better able to notice when the statements and actions a person makes did not match up. Essentially, each piece of knowledge made it harder for me to remain ignorant. A lot of things are a result of applying various tools to the idea at hand, to obtain specific resulting responses. I personally think Photography, Cinematography, Music, and so on, are important things to learn about, since they all feature interdisciplinary skills. In other words, they help you recognize how mathematics and other STEM topics are applied to reality. An example is History... We all probably remember just how boring and irrelevant it was to learn about the history of the world, at some point... However, once you recognize how it informs you of things you're interested in, it's suddenly not boring. Essentially, a good teacher can help you make the connection between one topic and another, and how to apply preexisting knowledge to those other topics. Why I went off on this tangent is unknown technically... But they are connected, that is something I have noticed.
@@Fabelaz this is similar to how most people don't understand law or contracts and thus have to depend on them being honest by default. The security measure against fraud then is the fact that an expert CAN be found that can detect the fraud (a lawyer) and that once that happens other aspects kick in to spread the information to others (media, news, mail delivered by courts and lawyers wishing to evoke class actions). Eventually you get general trends that can be simplified to help the general population detect, not exactly the fraud, but red flags that let them know to either find an expert or to gtfo of the situation. Not perfect, of course, but it works. Thus so long as Deep Fakes are countered by methods to detect the fakes, social constructs like the above to protect the population from them can form over time. Again, far from ideal, but it's like coding. You don't code with the assumption of no bugs. You code with the plan of it working despite (or at times assisted by) the bugs.
it's nice that he takes the time to explain everything, even if some stuff sounds like it doesn't really have a meaningful connection. you don't see videos with this much effort often anymore.
This man just summed up 3 classes spanning over a year of my life, in 28 minutes. Incredible. And the best part is, he did it better! (and made me laugh; see, A.I. can be fun and interesting. I'm looking at you, 90-year-old man who 'taught' me A.I.)
i can't say that i understood everything perfectly in this video (even if it was an oversimplification), but it was still wildly interesting and super funny :D i love your style and humour, you pace the jokes perfectly in between the actually educational parts ~ keep up the good work ~ ^^
Love this video, and would like to tangentially mention FUNKe might accidentally have been the first VTuber? He did review/thought pieces with SFM animation, and experimented with a face rig. This might predate the conventional anime VTuber.
That’s definitely a likely candidate for first VTuber at least in the sense that we understand it today. If you go any further back you sort of have to stretch the definition a little. Since the 90s there have been shows that use full body rigs/face cams, but they were never “streamed” or directly talked to an audience, and obviously were never on a video/social media site like twitch or youtube. Some even say (as a joke but it still stands as an argument) that Annoying Orange is a vtuber which fits the social media aspect but it doesn’t use any face rigging to map onto a model. However, the way that FUNKe used his face rig fits the bill in every way. Live streamed, directed to an audience on social media, using an anime-inspired character, using facial mapping on a character, etc. I think the one thing holding him back tho is most VTubers have an established context/story behind them, you watch them with the illusion that they’re “real” characters and that’s like the whole “virtual” aspect. Whereas for FUNKe it was purely cosmetic, we all knew who he was outside of the rig and there was no story behind the character
I can feel you put so much effort into creating this video, I'm amazed! Great job!!! Also, as a person who studied machine learning a little bit, I can admit that playing "BREADY STEADY GO" while explaining it was the *best* choice.
once again, you delivered an absolutely amazing video! i love your great technology-overviews together with the presentation and jokes :) (and srsly, how do you fit so much information of complex topics explained soo well in a single video?! just spectacular.)
5:50 this callback actually took me way off guard and totally got me, what a brilliant set up and execution of a point to help the viewer grasp the concept - good teaching strategy you big nerd
LMAO the amount of jokes in your content! Keep up the good work! I keep rewinding bits and notice small bamboozlements that I just barely caught. Your work is such a gem!
Amazing video, explanation, editing - ooooh boy, it’s been ages since my smooth brain got stimulation like this and had fun. Well done, keep up the good work.
great video, it had me hooked from beginning to end. i've been subscribed for a little while and i think this might be your most interesting and captivating video yet, you've really outdone yourself
This was a quite interesting video, I even joked to myself I could make a vtuber in R (especially funny since I flunked in that class). I like vtubing so it's a plus to know how it's made. Also, on the lore and why creators choose to include it, I always understood that as "I bought the entire anime girl avatar, I'm going to use the entire anime girl avatar!"
Your ability to make me temporarily forget things while you're explaining them is extraordinary. I knew what you were talking about and still had no idea.
This is the one time where if I had procrastinated instead of doing my uni work (building a neural network from scratch), I would have had a better idea of what I was actually doing. This shit is actually explained quite well
it’s so funny because I COME FOR UR ANIME JOKES AND REFERENCES AND JUST IGNORE THE MATH PART LOOOL BUT OMG UR VIDEOS ARE SO WELL EDITED YOU’VE BEEN GETTING SOO GOOD IM SO PROUD!
Watched a few videos before, love the monotone delivery of these interesting topics, it somehow just makes whatever you talk about more fascinating, educational and hilarious. Also, YOUR TASTE IN MUSIC IS AMAZING (3:22 Douse Shinundakara was the tipping point for my sub, good taste)
The mad lad is back! I'm actually a neuroscience major and took some machine learning courses in the past. Those courses were hell. Props to anyone continuing on the computational side of neuroscience. As for 17:00, I got no clue. No one does, probably. Most of neuroscience right now is barebones.
To be frank, "Machine Learning" or "Artificial Intelligence" and whatnot are more like selling terms. Might as well call it the "find-lowest-point-of-multiple-variables-function-then-rip-all-parameters-out-and-replace-them-with-something-similar-and-hope-it-works" algorithm. Not that a single word of that appears in any neuroscience report out there in the world, I think.
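That "find the lowest point of a multiple-variables function" description is, roughly, gradient descent. A toy sketch in Python; the loss function here is made up purely for illustration:

```python
# Plain gradient descent on f(w) = (w0 - 3)^2 + (w1 + 1)^2,
# whose true minimum sits at (3, -1).

def grad(w):
    # Analytic gradient of f at w.
    return [2 * (w[0] - 3), 2 * (w[1] + 1)]

w = [0.0, 0.0]        # initial parameters ("something similar and hope it works")
lr = 0.1              # learning rate / step size
for _ in range(200):  # repeatedly step downhill along the negative gradient
    g = grad(w)
    w = [w[0] - lr * g[0], w[1] - lr * g[1]]

print(w)  # very close to [3.0, -1.0]
```

Real training does exactly this, just with millions of parameters and a loss measured on data instead of a hand-written formula.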
@@hongkyang7107 Yes, a lot of terms are like this. Neuroscience itself is also a selling term. You also mentioned that neuroscience articles don't overlap with machine learning. That's not the case. Many papers in neuroscience ARE about machine learning or programming. Both fields are relatively new and they overlap.
This is the most confusing Junferno video yet but it's still hella entertaining. You've got one hell of a formula that no one has ever really done or replicated yet, so keep up the good work!
"But making predictions off something as advanced as an image of a face may not be as simple as just drawing a multivariate nth-degree polynomial." If only, if only it was...
I shit you not, I am a statistician by profession, and from now on when folks ask me what a polynomial regression is and why I can't just "run the numbers" I'm going to send them your explanation. It's genuinely very good at showing how complex regressions can get.
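For a feel of why you can't just "run the numbers," here's a minimal NumPy sketch on synthetic data (the noisy sine and the degrees chosen are purely illustrative):

```python
import numpy as np

# Synthetic data: a noisy sine wave sampled at 20 points.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Fit polynomials of increasing degree to the same points.
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)   # least-squares polynomial fit
    mse = float(np.mean((y - np.polyval(coeffs, x)) ** 2))
    print(f"degree {degree}: training MSE {mse:.4f}")
# Training error only ever goes down as the degree grows -- the
# high-degree fit starts chasing the noise, which is exactly the catch
# with blindly cranking up model complexity.
```

Picking the degree (and noticing when a lower training error means a worse model) is the part that needs a statistician.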
omg literally every single one of your videos is an instant banger like literally i'm laughing out loud at 12:19AM and it's only 10 minutes into the video your adequately detailed but understandable explanations of complex topics, combined with your deadpan delivery of lines like "a report on a project for recognizing faces called the Facial Recognition Project Report," combine to make some very high quality, entertaining content. you clearly put in a ton of effort, time, and research into your videos, and (as a viewer who didn't have to deal with any of that) i can confidently say it's worth it!
This is the cyberpunk future that is here now. I've gotten myself knee-deep in AI everyday, due to being one of the first 15K people to beta test Stable Diffusion. It's getting huge.
good prediction 👍
good prediction 👍
prediction good 👍
good preditction 👍👍
diction(pre) = true;
VTuber Junferno expresses far too much happiness.
VTuber Junferno looks like yanderedev......
@@greatwave2480 been waiting for someone to mention this
@@greatwave2480 I was so confused I'm like, YanDev? What?
what do you mean he has a shinji pfp how on earth can he be happy 💀
2200th like.
Lmaooo true
even pros at FAANG?
@@electronresonator8882 you think there's any pros at FAANG? if there were, they'd have taken over by now >;P
As a programmer with 8 years of experience who knows several programming languages, I can, in fact, tell you that I have no idea what I'm doing.
can confirm, I can't read assembly
He turned himself into a vtuber
Funniest shit I've ever seen
agreed
time to learn in a very *unique* way
I’ll cheers to that my dude
Good pfp
+2
*wacky and uncharacteristic
And rewatch it bc I forgot most of what he said
Your channel’s growing so fast, truly a great demonstration of what happens when you make great content
I can't tell if each comment is sarcasm or not after binging Junferno videos, truly a mindfuck
@@me-fp3cg I think he's being sincere, I agree with him after all.
so true
@@thetheodorusrex9428 you are correct good sir
"How and what exactly is the 'machine learning?'"
Junferno is too good for this vectorized tensor field of reality.
Yea it's actually insane how good they are at clinging onto your recommended. No one is safe from falling into the rabbit hole at least once lol
mark all of them as "not interested", eventually it should take a hint
@@Shajirr_ "eventually"
Time to learn about something interesting by learning about at least 10 vaguely connected other things
Ayooo, you are still with bruhify pfp :O
He's just applying Dijkstra's algorithm to learning as a whole
Literally this channel
So you're telling me I need a PHD in mathematics to be a VTuber?
no just copypaste github code from people smarter than you'll ever be
No, because someone else with a PHD in mathematics did all the math already.
Just copy the phd's homework
@@GerardMenvussa * takes a gun from behind you * , it always was
Gawr Gura certainly didn't.
That jumpscare at the end came so close to causing me a cerebral aneurysm. Loved that so much.
So MAGI from evangelion is a near possibility?
@@Cyberlong In short no, but I think the true answer is more interesting.
So first off, I just want to dispel the sort of myth that is this idea of some grand intelligence that is like an expert in everything and can perform everything whether it be human or artificial.
Like ask yourself this question: do you want a generalist driving your bus, or a specialist? Would you rather have someone who has spent 30 years driving all kinds of automobiles, or someone who has spent 30 years driving the exact bus you're riding in?
The answer, undoubtedly, is that you would prefer the specialist, because there will be less error.
No matter how you cut it, there will be limited space allotted in a brain like a human's, or in an artificial one, as unlimited capacity is impossible.
So no matter how large you make this machine, there will be a limit. So the next question to ask would be, okay, if we have this limited compute capacity what's better? One big generalist, or many specialists + a system of communication?
The answer is admittedly somewhat context dependent, but for a huge majority of situations it's a group of specialists. There are many, many, many reasons for this that are rather technical in nature, but most boil down to some version of output vs. energy cost.
So given that many specialists is most often the superior strategy, while the future won't be all AI driven, it will include quite a bit of AI in the mix; in fact it kinda already does. Right now the issue is communication of information between humans and AI/ML systems. A group of specialists is required to feed information in, so that information may flow back in the form of graphs and such which can be digested by people who aren't AI/ML specialists. But this is changing very rapidly, as we now have more natural ways of communicating information to large models. This trend is going to continue, and you will begin to see more "human" looking AI participating in decision-making processes at the highest levels of power.
So, to the question: will it be possible to build a big model capable of making good decisions at scale? Probably (eventually, not likely soon), but it will most certainly be outperformed by a mix of AIs and humans, specialized in diverse fields and communicating efficiently. So you're never going to see some super AI governing our world or whatever, even if it does become technologically possible.
@@tweak3871 Oh wow, and how feasible is it for the AIs to communicate with each other? For example, say we had an AI that makes decisions (it doesn't matter what kind for this example) and it decided it needed to execute an action just before the value of a certain currency passes a threshold. Observing the currency and predicting its value would be the job of another AI. Then, when the time is right, the currency observer passes the information to the decision maker to execute whatever it was supposed to do.
That was an example of simple communication, but what if there were many specialists passing information to each other at high intensity? How efficient would it be? Would it be slow? Would there need to be a buffer system? Is it necessary to have a translator between models?
The thing is, the idea of an AI overlord is weird to me just because of the amount of parameters it would need to have. But maybe a composite system for helping and managing certain systems in companies may be feasible. But I don't really know.
Jun as a vtuber is literally just a 3D model of Shinji... I'm somehow not surprised.
Seems pretty flat to me
This was a wonderful video, and I learned a lot from it when I wasn’t focused on playing Pokemon Picross on my Nintendo 2DS instead.
ive been trying to get into machine learning these past few months and this video perfectly summarized a bunch of the basic things which I have learned about so far
I'm happy that my google sheet could help you, even if it was the smallest bit of help. I freaking love your vids! The amount of info, the pacing, the jokes, they're all so well-balanced. Hope the algorithm picks you up one day, you deserve more subs!
literally was just thinking about how junferno should release a video titled "The "VTuber" and Why Artificial Intelligence has Limits," crazy what a coincidence!
literally was just thinking about how reeto should write a comment that said "literally was just thinking about how junferno should release a video titled "The "VTuber" and Why Artificial Intelligence has Limits," crazy what a coincidence!," crazy what a coincidence!
He didn't release a video with 2 titles, that's impossible. How would he make a video with "The " and " and Why Artificial Intelligence has Limits" as titles?
"Both of them (the brain and computers) are things with parts in them that do things."
Definitely using this next time my teacher asks me a brain-related question.
Even though your explanations of machine learning were quite oversimplified, I think this is quite a well put together video.
still went entirely over my head, but nonetheless yeah i agree.
@@tecc9999 Agreed, in fact I'm gonna go get a degree just to understand this one video
It is actually not; it's literally all stuff you can learn in undergrad courses (they suck, of course)
My brother in christ it's a 30 minute youtube video of course it's gonna be oversimplified
@@quazar-omega same, I think i'll just take that module
I can see myself, age 70. My grandkids are showing me a YouTube video made entirely by AI playing Super Mario. It's gonna happen
that already happened some years ago xD
A.I speedruns are a thing already.
Now if we just add a personality to the speedrunner then we will eventually have something to rival sethbling or Simpleflips.
@@SlyHikari03 Finally caught a SAImpleflips stream, he failed to jump over the first goomba in 1-1 and then proceeded to crash live on stream, great entertainer he is.
@@festivite 1 2 OAITMEAL
@@SlyHikari03 I'd guess that it'd be possible to start on something like that already. If I were doing it, I'd probably go with a model that is non-humanoid, going for more of a cute robot/animal mascot look. Make the model a non-anthropomorphic dog with a selection of expressions and barks/howls, and people wouldn't care much about it not being able to talk and would be more accepting of inappropriate responses and mistakes that a human wouldn't make. Tie chat (especially donations) into its reward system and give it some extra outputs of expression changes and visual/audio reactions, and you could probably train it into something entertaining in a pet sort of way.
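One hedged way to picture the "tie chat (especially donations) into its reward system" idea from the comment above: a function that collapses chat events into a scalar reward for training. Every event name and weight below is invented purely for illustration; this is not any real streaming API.

```python
# Hypothetical sketch: turn a batch of chat events into a scalar reward
# that could feed a reinforcement-learning loop for the mascot model.

def chat_reward(events):
    """Sum per-event rewards; donations scale with their amount."""
    weights = {"message": 0.1, "emote": 0.2, "donation": 1.0}
    reward = 0.0
    for event in events:
        base = weights.get(event["type"], 0.0)
        if event["type"] == "donation":
            # A bigger donation is a proportionally bigger reward signal.
            base *= event.get("amount", 1.0)
        reward += base
    return reward

# A quiet minute of chat scores low; a donation spikes the reward.
quiet = [{"type": "message"}, {"type": "emote"}]
hype = quiet + [{"type": "donation", "amount": 5.0}]
```

In practice the hard part wouldn't be the reward bookkeeping but credit assignment: deciding which of the model's barks, howls, and expression changes actually caused chat to react.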
I love this video with a passion because of how well you summed up AI and machine learning in a short and entertaining way.
You know it's gonna be a good one when it opens with a CIA document from the 60s.
21:53 additional physics
I don't know why but i just love your sense of humor. Especially that passive aggressive mocking of people who turn away from any difficult sounding word that literally just means what it means.
One of the rare times when I'm pulled _out_ of the experience by the editing choices. I was still on the concept of Elders React recognizing emotion in the Lain gif when I got smacked in the face by "Douse Shinundakura" at 3:22. All good stuff, but... _...damn._
when we gained the capability of editing out, say, certain buildings from a photo of the Hellsalem's Lot skyline, we also gained the capability to detect cases of bad photoshopping, as well as being just better at cross-referencing images with other pieces of information.
i don't really see why this won't happen as well with modified videos, or even artificial intelligence that passes the Turing Test.
except people don't usually cross reference. And also tend to trust whatever they already believe.
@@Fabelaz hence our current problems with bias-catering media. but it doesn't mean that those strategies are ineffective, or that they cannot be taught in the usual media literacy course.
seatbelts are not automatically ineffective just because people refused to use them.
I remember when I was learning to use Adobe Suite 5, I noticed the artifacts and distortions in online content more often... Then when I learned more about how photography and cinematography worked, I could better spot media that relied on those tricks to influence how it's perceived... And as I learned more about language and psychology, I was better able to notice when the statements and actions a person makes did not match up.
Essentially. Each piece of knowledge made it harder for me to remain ignorant.
A lot of things are a result of applying various tools to the idea in hand, to obtain specific resulting responses.
I personally think Photography, Cinematography, Music, and so on, are important things to learn about, since they all feature interdisciplinary skills.
In other words, they help you recognize how mathematics and other STEM topics are applied to reality.
An example is History... We all probably remember just how boring and irrelevant it was to learn about the history of the world, at some point... However, once you recognize how it informs you of things you're interested in, it's suddenly not boring.
Essentially, a good teacher can help you make the connection between one topic and another, and how to apply preexisting knowledge to those other topics.
Why I went off on this tangent is unknown technically... But they are connected, that is something I have noticed.
@@Fabelaz this is similar to how most people don't understand law or contracts and thus have to depend on the other party being honest by default. The security measure against fraud, then, is the fact that an expert CAN be found who can detect the fraud (a lawyer), and that once that happens other aspects kick in to spread the information to others (media, news, mail delivered by courts and lawyers wishing to invoke class actions). Eventually you get general trends that can be simplified to help the general population detect, not exactly the fraud, but red flags that let them know to either find an expert or to gtfo of the situation.
Not perfect, of course but it works.
Thus so long as Deep Fakes are countered by methods to detect the fakes, social constructs like the above to protect the population from them can form over time.
Again, far from ideal but it's like coding. You don't code with the assumption of no bugs. You code with the plan of it working despite (or at times assisted by) the bugs.
Apparently a lot of AIs have already passed the Turing Test.
2:25 "some of them are facing the consequences of their creation" added to that drawing had me dead hahaha. perfect!
“Being YouTubers, they are most commonly found on Twitch…”
I’m dying over here, thank you
its nice that he takes the time to explain everything, even if some stuff doesn't really sound like it has a meaningful connection. you don't see videos with this much effort often anymore.
This man just summed up 3 classes spanning over a year of my life, in 28 minutes. Incredible. And the best part is, he did it better! (and made me laugh, see, A.I. can be fun and interesting. I'm looking at you, 90-year-old man who 'taught' me A.I.)
As someone who is finishing Data Analysis in a week, same here dude lol
your deadpanning is so good, not to mention the content presented. may the algorithm smile upon ye!!
_"He published a report on a project for recognizing faces called the facial recognition project report."_
Comedy.
Gold.
We did it boys we finally made junferno a vtuber
i can't say that i understood everything perfectly in this video (even if it was an oversimplification), but it was still wildly interesting and super funny :D i love your style and humour, you pace the jokes perfectly in between the actually educational parts ~ keep up the good work ~ ^^
The end part of this aged well considering the debut of Neuro Sama lol
Love this video, and would like to tangentially mention FUNKe might accidentally have been the first VTuber? He did review/thought pieces with SFM animation, and experimented with a face rig. This might predate the conventional anime VTuber.
That’s definitely a likely candidate for first VTuber at least in the sense that we understand it today. If you go any further back you sort of have to stretch the definition a little. Since the 90s there have been shows that use full body rigs/face cams, but they were never “streamed” or directly talked to an audience, and obviously were never on a video/social media site like twitch or youtube. Some even say (as a joke but it still stands as an argument) that Annoying Orange is a vtuber which fits the social media aspect but it doesn’t use any face rigging to map onto a model.
However, the way that FUNKe used his face rig fits the bill in every way. Live streamed, directed to an audience on social media, using an anime-inspired character, using facial mapping on a character, etc.
I think the one thing holding him back tho is most VTubers have an established context/story behind them, you watch them with the illusion that they’re “real” characters and that’s like the whole “virtual” aspect. Whereas for FUNKe it was purely cosmetic, we all knew who he was outside of the rig and there was no story behind the character
3:24 A Virtual UA-camr
5:23 Online Animated Character
8:30 Excel Linear Regression
9:27 Multivariable Analysis and Predictions
10:33 Modifying Features
11:28 Neural Networks
Conventional Programming
Machine Learning
12:10 Input Layer, Processing Layer, Activation Function
13:34 Parameters
14:07 Backpropagation
14:43 Architecture
15:33 Fully Connected Layers
16:39 Facial Landmark Detection
17:12 1 Star Spawns More Stars
Sub to Dub
18:16 A Short Rapid Rise from 2016 to today
19:24
19:47 Lower cost V-Tube Tech
Mouth Tracking
Hand Tracking
21:06 Motion Capture
22:18 AlterEcho
Vroid Studio - Customization suite
24:24 V-Tuber Lore
26:04
26:46 27:14
I love the deadpan delivery of your humor. the video is also quite interesting. thanks!
just here to say i really liked all the visual flair you've added to your editing style
I can feel you put so much effort into creating this video, I'm amazed! Great job!!!
Also, as a person who studied machine learning a little bit, I can admit that playing "BREADY STEADY GO" while explaining it was the *best* choice.
Ok but that ad transition was smooooooth
Pretty funny to see this video just after having discovered the existence of Neuro-sama.
Only seven minutes into the video, but it's already used Love Colored Master Spark and Pandora's Palace as BGM, so thumbs up.
once again, you delivered an absolutely amazing video! i love your great technology-overviews together with the presentation and jokes :)
(and srsly, how do you fit so much information of complex topics explained soo well in a single video?! just spectacular.)
Nice man, good stuff and funny at times :) This must have been a ton of work to create. Great job.
Man, I love your deadpan humour! That, combined with how genuinely informative this video is, makes it amazing.
I teared up a bit when I noticed the Mimiga Village music in the background. I really love your videos.
Your work is amazing. Thank you for putting in the time and effort to make this video.
5:50 this callback actually took me way off guard and totally got me, what a brilliant setup and execution of a point to help the viewer grasp the concept - good teaching strategy you big nerd
you really chose the most deranged pics and clips of jerma. doing the gods work
LMAO the amount of jokes in your content!
Keep up the good work! I keep rewinding bits and notice small bamboozlements that I just barely caught.
Your work is such a gem!
top tier vid, you covered multiple industries in such a condensed manner.
your editing is so good!
I love your sense of humor, combined with the really interesting topics makes a *really* entertaining video! Keep it up Jun! :)))
the small subtle inserts are killing me. What a video man!
this is my favorite junferno type channel video yet
Another amazing video. Thank you!
dude that neural network explanation is out of this world!!!! this guy is a literal genius
Amazing video, explanation, editing - ooooh boy, it’s been ages since my smooth brain got stimulation like this and had fun. Well done, keep up the good work.
24:20 Imagine if he knew about Neuro-sama by the time of this video
great video, it had me hooked from beginning to end. i've been subscribed for a little while and i think this might be your most interesting and captivating video yet. you've really outdone yourself
Can he just mark this video as an entry to the "summer of math exposition" and win it?
0:06 BRO that corpse party music gave me whiplash 💀💀
This is brilliant in its entirety
Oh wow i got that recommended randomly by youtube and i wasn't expecting ray of hope to ever be used by anyone in any context, thanks a lot for that
"'facing' the consequences of their creation." Epic.
best ad placement-never thought a commercial rolling would be more epic.
I thought if Junferno had a model it had to be Shinji, but he surprised me, well done. Also good video.
This is the second video I've watched and I am still amazed how so much research can be linked together in such an entertaining way. Kudos
This was a quite interesting video; I even joked to myself that I could make a vtuber in R (especially funny since I flunked that class). I like vtubing, so it's a plus to know how it's made. Also, on the lore and why creators choose to include it, I always understood that as "I bought the entire anime girl avatar, I'm going to use the entire anime girl avatar!"
Your ability to make me temporarily forget things while you're explaining them is extraordinary. I knew what you were talking about and still had no idea.
This man always finds a way to loop back to math or miku
This is the one time where if I had procrastinated instead of doing my uni work (building a neural network from scratch), I would have had a better idea of what I was actually doing. This shit is actually explained quite well
This was really good, thanks for presenting all the research in a fun way
it’s so funny because I COME FOR UR ANIME JOKES AND REFERENCES AND JUST IGNORE THE MATH PART LOOOL BUT OMG UR VIDEOS ARE SO WELL EDITED YOU’VE BEEN GETTING SOO GOOD IM SO PROUD!
*neuro sama joins the chat*
I'm impressed that the video gets more and more sarcastic as it goes, but without becoming any less informative.
I will definitely be back for more.
Watched a few videos before, love the monotone delivery of these interesting topics, it somehow just makes whatever you talk about more fascinating, educational and hilarious
Also, YOUR TASTE IN MUSIC IS AMAZING (3:22 Douse Shinundakara was the tipping point for my sub, good tastes)
Nice vid, surprisingly in depth. The title led me the wrong way :p
What a convoluted way to explain Vtubing. I like it.
Finally someone who is a touhou fan gets views and popularity. I hope in the future that you point out about touhou and give it more recognition.
The mad lad is back!
I'm actually a neuroscience major and took some machine learning courses in the past.
Those courses were hell. Props to anyone continuing in the computational side of neuroscience.
As for 17:00, I got no clue. No one does, probably.
Most of neuroscience right now is barebones.
To be frank, "Machine Learning" or "Artificial Intelligence" and whatnot are more like marketing terms. Might as well call it the "find-the-lowest-point-of-a-multi-variable-function-then-rip-all-the-parameters-out-and-replace-them-with-something-similar-and-hope-it-works" algorithm. Not a single word of which, I think, appears in any neuroscience report out there in the world
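For what it's worth, the jokey name in the comment above is a fair plain-language description of gradient descent. Here is a minimal toy sketch; the quadratic "bowl" loss, starting point, and learning rate are arbitrary illustrative choices, not anything from the video.

```python
# Minimal gradient descent: find the lowest point of a multi-variable
# function by repeatedly stepping opposite the gradient.

def loss(w):
    # Toy "bowl" function with its minimum at w = (3, -2).
    return (w[0] - 3) ** 2 + (w[1] + 2) ** 2

def grad(w):
    # Analytic gradient of the loss above.
    return [2 * (w[0] - 3), 2 * (w[1] + 2)]

def descend(w, lr=0.1, steps=200):
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w = descend([0.0, 0.0])
# w converges to very near the minimum at (3, -2)
```

Training a neural network is this same loop, just with millions of parameters and a loss measured against training data instead of a hand-written bowl.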
@@hongkyang7107
Yes, a lot of terms are like this. Neuroscience itself is also a selling term.
You also mentioned that neuroscience articles don't overlap with machine learning. That's not the case.
Many papers in neuroscience ARE about machine learning or programming.
Both fields are relatively new and they overlap.
This is the most confusing Junferno video yet but it's still hella entertaining. You've got one hell of a formula that no one has ever really done or replicated yet, so keep up the good work!
artificial intelligence is when i pretend to know what im talking about until the person im talking to mercifully moves on without calling me out
Your humour is the funniest I've ever seen. I wish you made videos more frequently
"But making predictions off something as advanced as an image of a face may not be as simple as just drawing a multivariate nth-degree polynomial."
If only, if only it was...
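For anyone curious what the "simple" case the quote refers to actually looks like, here is a toy single-variable polynomial regression in pure Python via the normal equations. The degree and test points are illustrative; the comment's point stands, since a face image is vastly higher-dimensional than one x.

```python
# Least-squares fit of a degree-d polynomial to (x, y) points, by solving
# the normal equations (A^T A) c = A^T y with Gaussian elimination.

def polyfit(xs, ys, d):
    n = d + 1
    # Build A^T A and A^T y directly from the Vandermonde structure.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (aty[i] - s) / ata[i][i]
    return coeffs  # coeffs[i] multiplies x**i

# Points lying exactly on y = 1 + 2x + 3x^2 recover those coefficients.
pts = [(x, 1 + 2 * x + 3 * x * x) for x in range(-3, 4)]
c = polyfit([p[0] for p in pts], [p[1] for p in pts], 2)
```

Even this tidy version hints at the scaling problem: the number of terms in a multivariate nth-degree polynomial explodes combinatorially, which is part of why faces call for neural networks instead.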
I studied ML as my major, you probably made the best explanation of what a NN is i've ever seen.
Once again Junferno tricks us into learning math and computer science using anime
I shit you not, I am a statistician by profession, and from now on when folks ask me what a polynomial regression is and why I can't just "run the numbers" I'm going to send them your explanation. It's genuinely very good at showing how complex regressions can get.
"And if you don't know calculus, well it's the same thing but you wouldn't understand."
Your videos are great lmao
I’m glad that you shined some light on FUNKe, the real first vtuber
Never stop with your humor, please
“Additional physics” 21:52
Just noticed the civ 6? music under the voiceover
I only just opened the video, and this is so unbelievably cursed. I'm looking forward to it
Wait just a second. Are you trying to tell me that vtubers are not sentient AI who live in a parallel virtual world?
10:53 and with that, you have acknowledged Persona 2's existence far more than Atlus ever has. well done!
(oh and uh good video i guess /lh)
omg literally every single one of your videos is an instant banger like literally i'm laughing out loud at 12:19AM and it's only 10 minutes into the video
your adequately detailed but understandable explanations of complex topics, combined with your deadpan delivery of lines like "a report on a project for recognizing faces called the Facial Recognition Project Report" combines to make some very high quality, entertaining content.
you clearly put in a ton of effort, time, and research into your videos, and (as a viewer who didn't have to deal with any of that) i can confidently say it's worth it!
This is the cyberpunk future that is here now. I've gotten myself knee-deep in AI everyday, due to being one of the first 15K people to beta test Stable Diffusion. It's getting huge.
This has to be one of Junfero's most cursed videos
"Her involvement in commercials"
Puts ad in video
Stares at screen smugly for a second after ad ends
You heard of neuro sama?