Recommendation algorithms don't have enough actually useful data. What do I mean by that? First, they recommend things based on what you have watched, assuming you are interested in that topic. For example, I may click on a random video about a bass player or bass guitar style... then my feed is full of bass guitar channels. NO, I was curious about the video, but I don't want bass channels. Also, they don't take a good survey of the person's tastes.
For example, Netflix could have a customer take a "what do you like" survey: 4 or 5 pages of 20-30 various movies, shows, etc., and have the customer pick 8-10 on each page. Essentially, sprinkle in enough variety on each page to get a more accurate read on their tastes.
Netflix suggestions are usually 50% wrong for me. I would love it if they allowed a "don't recommend" option along with like, don't like, etc., so a title never shows up again in the normal lists. Grapes of Wrath and Annie Hall are NOT on any list I would ever create, ahhahaha. It would also be great if we could exclude specific actors, directors, etc. to keep them off the list as well, considering I can't stand Will Ferrell.
And then there is Amazon asking me to buy a second washing machine.
Some products really need a "people usually buy only one at a time" tag. Refrigerators, cars, houses...
Wait a minute... hey, that's basically Amazon telling you, "Hey, your washing machine is about to go out of service, wanna buy a new one just in case?"
Amazon uses "customers that bought this also like" and similar simple algorithms. Nothing complicated or sophisticated. They work just fine.
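For the curious, the "customers that bought this also like" idea described above can be sketched as plain co-occurrence counting. The purchase baskets below are made up for illustration and are not Amazon's actual data or code:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories (one set of items per customer) -- made-up data.
baskets = [
    {"washing machine", "detergent", "dryer"},
    {"washing machine", "detergent"},
    {"detergent", "fabric softener"},
]

# Count how often each pair of items shows up in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, top_n=3):
    """Items most often bought together with `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:top_n]]

print(also_bought("washing machine"))  # ['detergent', 'dryer']
```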
People are terrified at the thought of machines taking over, but actually the algorithms used in AI and recommendation systems are just as inaccurate as a friend's recommendation.
🤣🤣🤣🤣
I think it should be noted that for the cold start problem, you'd want to use content filtering to define which users to show those new items to - hence, a combination of content and collaborative filtering is the best approach.
a hybrid approach
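As a rough illustration of that hybrid idea (a sketch, not the video's method): blend a collaborative score with a content-based score, and lean entirely on the content side for cold-start items that nobody has rated yet. The feature vectors, scores, and the weight alpha below are all made up:

```python
import numpy as np

# Hypothetical item feature vectors (genres, keywords, ...) used for content filtering.
item_features = {
    "new_item": np.array([1.0, 0.0, 1.0]),
    "old_item": np.array([0.9, 0.1, 0.8]),
}
# Hypothetical collaborative scores learned from other users' ratings;
# the brand-new item has none yet (the cold-start case).
collab_score = {"old_item": 0.7}

def content_score(user_profile, item):
    """Cosine similarity between the user's feature profile and the item's features."""
    f = item_features[item]
    return float(user_profile @ f / (np.linalg.norm(user_profile) * np.linalg.norm(f)))

def hybrid_score(user_profile, item, alpha=0.5):
    """Blend the two signals; fall back to content filtering when no collaborative signal exists."""
    c = content_score(user_profile, item)
    if item not in collab_score:          # cold start: nobody has rated this item yet
        return c
    return alpha * collab_score[item] + (1 - alpha) * c

user_profile = np.array([1.0, 0.2, 0.9])  # built from items this user liked before
for item in item_features:
    print(item, round(hybrid_score(user_profile, item), 3))
```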
Dude, I've literally watched a zillion videos on YT, and nothing comes close to this video. The SVD simplification is on another level!
woo! glad you found it
🤣🤣🤣
I don't understand why this channel isn't more popular. From the beginning it's been great.
thanks for sticking around, have you checked out the new series?
These are some solid-gold videos you are putting up on your channel for free! Your incredible knowledge, such hard work, and the will to put such amazing educational concepts before the audience really create these masterpieces! Absolutely love it! 💗
appreciate this feedback thank you
Thanks a lot. It is so simple that I could understand it immediately.
glad it helped
I love all the artistic choices you guys make when putting these videos together, they have a spacious mood to them. It’s a little sad to read other viewers don’t like the music choice as much, each to their own I guess.
I get that a lot, it's nice to hear from both sides that the mood 'works'
@@ArtOfTheProblem I found it distracting - I think it is simply too high to be "background music". I paused the video several times because I was doing something else in the meantime and I thought another video had started or something.
The name I learnt this under at uni was Singular Value Decomposition. Same thing, different names. Great video as usual!
My goodness this is such a great video. Just now diving into your channel and loving what you're publishing. Thank you! Just subscribed.
@@patricksweet4104 thanks! Stay tuned
FYI, consider supporting future content via www.patreon.com/artoftheproblem - thanks again
Interesting video. I downloaded my Netflix data once. It is amazing how much data they actually collect. One of the bits they collect is how long you watch each video (whether the actual movie or the preview clip on the movie selection screen), i.e. if you watch the whole thing, you are somewhat interested in it and "that type of movie".
It also logs what suggestions it gave to you and why that suggestion was given (due to another video).
It also collects search terms (full/partial) and what results were given to you, i.e. you type "term" and up comes "Terminator 1, 2, 3" and "The Terminal" (a totally different type of movie).
how did you download your netflix data?
Art of the Problem is one of the better things on the internet.
I am a mathematics PhD student doing my thesis on low-rank matrix completion, and it was great seeing this video show up in my feed! One of my biggest concerns was why we can assume that real-life data is part of a low-rank matrix. Even though data being non-random and part of a low-dimensional space is a very reasonable assumption, the issue is that the space of low-rank matrices is a very specific low-dimensional space, so why should we assume that our data lies on this specific low-dimensional space? The features argument seems fair to me as a reason why it may be reasonable to assume that our data is low-rank.
It's a great question. I'm currently working on a video on the manifold hypothesis that gets at this question a little deeper. Would love to hear others' thoughts.
That's very interesting, I like the field of prediction/compression/NMF a lot, in fact. Do you have some references or papers on the subject you mentioned? How do you define real-life data?
@@lucacaccistani9636 Here is a paper on matrix completion that describes the alternating projection method, and some theoretical results using algebraic geometry: arxiv.org/abs/1711.02151
By real-life data I mean data that comes from real life, such as an image or the incomplete user ratings in the Netflix problem. Given unknown positions of a matrix, it's easy to find a partially complete matrix which can be completed to a rank-r matrix: just generate a rank-r matrix and delete entries in the unknown indices; then we know the resulting incomplete matrix has a rank-r completion. If we choose the known entries of a partially complete matrix randomly from a continuous distribution, then often a rank-r completion will exist with probability 0, or there will be infinitely many rank-r completions. However, it is assumed that our data lies on some low-dimensional space, so choosing random known entries may not be a good model for real data.
In reverse, doesn't the utility of the approximation (people do seem to like the recommendations) provide some clue that there is a lower-dimensional manifold useful for the purpose of estimating *specifically* the preferences of people regarding movies? Also, if true randomness provides maximum information, and for the most part people's movie preferences, and movies themselves, are far from random, doesn't that also imply that there will be a useful, lower-dimensional manifold? All while keeping in mind that the movies people make and the movies that people watch are reflections of each other: people make movies that other people want to watch, and people only watch what movies people make.
this is what I assume (low-dimensional manifold)@@dmc-au
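For readers following this thread, the problem being discussed is usually written down as rank minimization over the observed entries (with M the partially observed matrix and Omega the set of known positions), together with its nuclear-norm relaxation. This is the standard textbook formulation, not anything specific to the video:

```latex
% Matrix completion as rank minimization over the observed entries:
\min_{X}\ \operatorname{rank}(X)
\quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega
% Convex relaxation: replace rank with the nuclear norm (sum of singular values):
\min_{X}\ \|X\|_{*}
\quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega
```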
How can I like the video a million times... now I can gladly go back to those papers with recondite information.
:))
One of the coolest algorithms, and it's taking 1/3 of my day in scroll time.
just posted new video on RL ua-cam.com/video/Dov68JsIC4g/v-deo.html
I AM SO HAPPY i discovered this channel!!!
welcome!
I’m attempting to make a video game recommendation system from a Steam games dataset and your video was super helpful to me!
cool please keep me posted
Excellent presentation and visualisation. I recommend this video for Google best award.
I find it funny you used The Matrix as the main movie while also explaining matrices.
Easily and concisely explained. Appreciated
Really like how the explanation is concise and clear.
The real problem here is that traditional recommendation algorithms recommend things you already have. We need a new algorithm which can analyze historical data and tell you what you may need in the future.
Damn. This is so concise and perfect.
Love the video, thank you, great explanation. I wonder if I’m the only one who finds the music a bit...creepy or disturbing....or, maybe that’s intended.
Rewatching, I see that may be my fault for watching at 2x speed.
I feel the same, it's rather distracting
it's rly disturbing at any speed... couldn't keep watching it, so I was looking for this comment :/
My god, the music gave me freaking trauma. The explanation was great, but I had to turn off the audio. What was the creator thinking? Since when is horror music a good idea for background music?
awful background music
Thanks for this video, I have to build a recommender system for college and this was a really good, concise description of how the thing works!
sweet glad this helped you
Hi, how did it go? I'm also on a journey to build one.
Very interesting!! I would love to see more videos on this topic. I would guess that the number of features can be increased in order to get a more accurate result, at the expense of greater computing power and storage requirements.
yes, exactly (same as making a neural network wider)
Not necessarily more accurate though, due to a phenomenon called overfitting: en.wikipedia.org/wiki/Overfitting?wprov=sfla1
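A standard way to keep extra latent features from overfitting (common practice in matrix-factorization recommenders, not necessarily covered in the video) is to add an L2 penalty on the factor vectors, minimizing over user factors p_u and item factors q_i using only the set K of known ratings r_ui:

```latex
\min_{P,\,Q}\ \sum_{(u,i) \in K} \left( r_{ui} - p_u^{\top} q_i \right)^2
  \;+\; \lambda \left( \sum_{u} \lVert p_u \rVert^2 + \sum_{i} \lVert q_i \rVert^2 \right)
```

A larger lambda trades training accuracy for generalization, which is exactly the overfitting trade-off the linked article describes.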
Recommendation engines, a hot CS topic, are desired by business folks for personalization and user engagement in marketing, media, and e-commerce.
Great explanation!
Thanks! stay tuned for more
This is gold. Thank you so much for making this.
Brit, you posted a video but I didn't see a Patreon billing. Please take my money! You deserve it!
thank you for your ongoing support!
the background music is weird
+++++++
+++
omg BGM is really annoying, felt like it is subconsciously programming me!
+++++++++
Nice video, but background music is a disaster
Great video, love the simple explanations and easy to follow visuals. But why do I feel like I'm about to get jumpscared at any point
@@collinshen3808 horror fan :)
Very nice video! I've been searching for a while for a correct explanation of these algorithms. Finally I've found it!
excellent welcome to the club!
"The things that are recommended to you are based on patterns the machine has observed in other people that are similar to yourself"
It would be interesting to take this to the next step of analysis: what happens when the recommendations the machine gives start to have an actual, tangible effect on the people being given the recommendations?
I would say this is certainly the case
some legend made this video!
glad this helped you
holy cow this is a good video
glad this helped
Very enjoyable and clear explanation! Great video
It's worth noting that the patterns in data don't always mirror reality. People with asthma and COPD see a doctor earlier when they have trouble breathing, so an ANN would predict that people with asthma are at lower risk when they catch pneumonia and schedule them accordingly.
These problems are frustratingly hard to find. Most approaches to making answers interpretable seem to be about learning a linear local approximation of the machine model, which works reasonably well on convolutional networks.
Great story 👍🏻
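The "linear local approximation" mentioned a couple of comments up (the idea behind tools like LIME) can be sketched roughly as: perturb one input, query the black-box model, and fit a proximity-weighted linear model around that point. The black_box function, the point x0, and all the constants below are made-up stand-ins, not any real recommender or medical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque model whose prediction we want to explain locally."""
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

x0 = np.array([1.0, 2.0])                       # the point we want an explanation for
X = x0 + rng.normal(scale=0.3, size=(500, 2))   # perturbations around x0
y = black_box(X)

# Weight samples by proximity to x0 (closer samples matter more).
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.1)

# Weighted least-squares fit of a linear surrogate y ~ a + b . x
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * y, rcond=None)
print("local intercept and slopes:", coef)      # slopes act as local feature importances
```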
How long would a good filtering system take to build? Let's say 10,000 people based on 100 data points.
Thank you for this video! Explained a very complex concept for me in a very understandable way.
appreciate the feedback
Nice!
would love if you could help share my newest video: ua-cam.com/video/5EcQ1IcEMFQ/v-deo.html
Really nice and insightful video.
Great work 👏👏
Great work. Very precise and comprehensive. Thank you.
Please don't stop making videos
Very intuitive approach, thanks a lot !!!
I have a doubt: are these videos outdated? Do the same ideas still work?
Yes, this is an old idea, but the same idea remains; they are just doing it at a larger scale... think of the matrix factorization as one layer in a neural network... now they use many.
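To make that reply concrete: plain matrix factorization behaves like a network with one embedding lookup per user and per item plus a dot product, while deeper recommenders keep the embeddings and stack more layers on top. A minimal sketch with made-up sizes and random weights, not the architecture of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8

# "Embedding layers": one row of latent factors per user and per item.
U = rng.normal(size=(n_users, k))   # user factors
V = rng.normal(size=(n_items, k))   # item factors

def mf_predict(u, i):
    """Classic matrix factorization = embedding lookup + dot product (one 'layer')."""
    return float(U[u] @ V[i])

def deep_predict(u, i, W1, W2):
    """Deeper variant: feed the concatenated embeddings through extra layers."""
    h = np.concatenate([U[u], V[i]])
    h = np.maximum(0.0, W1 @ h)     # hidden layer with ReLU
    return float(W2 @ h)

W1 = 0.1 * rng.normal(size=(16, 2 * k))   # made-up hidden-layer weights
W2 = 0.1 * rng.normal(size=(16,))
print(mf_predict(3, 7), deep_predict(3, 7, W1, W2))
```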
Very interesting and clear explanation
awesome video
Very good and funny videos bring a great sense of entertainment!
3:47 - Sir, can I ask where the "dividing all the values by 8" comes from? Where did you get the 8? Thank you, sir.
I was asking myself the same question, but I think it's something like:
you take the highest value (in this case 28), and you know that you need a value that is less than or equal to 4, so you solve the inequality 28/x ≤ 4.
that's just to normalize the data, so you take the largest
@@ArtOfTheProblem Couldn't fully understand - the largest what??
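A quick worked example of the scaling this thread is discussing, using only the numbers mentioned in the comments (28 as the largest value, 4 as the target top of the scale) plus some made-up smaller values; it has not been checked against the exact figures in the video:

```python
raw = [28, 14, 8, 4]        # made-up raw scores; 28 is the largest value mentioned in the thread
target_max = 4              # desired top of the rating scale

scale = max(raw) / target_max               # smallest divisor keeping everything <= 4  -> 7.0
print([round(x / scale, 2) for x in raw])   # [4.0, 2.0, 1.14, 0.57]
print([round(x / 8, 2) for x in raw])       # dividing by 8 also works: [3.5, 1.75, 1.0, 0.5]
```

So dividing by anything at least 7 keeps every value at or below 4; 8 is just a convenient choice that lands the maximum at 3.5.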
Great explanation!
Thank you
Great video!
glad you found this helpful
good job, amazing video
Notes for my future revision.
*CONTENT FILTERING*
Based on what someone likes, work out what else they might like.
*COLLABORATIVE FILTERING*
A user likes things that other users with similar habits also like.
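A minimal sketch of the collaborative idea in these notes, with a made-up ratings matrix (0 standing in for "unrated", which is a simplification):

```python
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = movies, 0 = unrated.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_for(user, R):
    """Recommend the unrated item that the most similar user rated highest."""
    sims = [cosine(R[user], R[other]) if other != user else -1.0
            for other in range(len(R))]
    nearest = int(np.argmax(sims))               # the user with the most similar habits
    unrated = np.where(R[user] == 0)[0]
    return int(unrated[np.argmax(R[nearest, unrated])])

print(recommend_for(0, R))   # user 0 gets the item their nearest neighbour liked most
```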
I'm making a movie recommendation system for my final-year project. Any idea how I can get started?
Fantastic job again! :)
nice video
youtube knew I was gonna like this video u say?
This was really cool
The background music is not working for me.
Answer me: which is true, is Netflix using deep learning or machine learning for its algorithm?
The bg music feels like being in a horror movie lol
But the video is great
it was just awesome
glad you enjoyed it, sub for more
Can someone help me: how are we normalising the data at 3:48?
the background music is quite annoying
Actually it's pretty cool, my thesis is in that area :)
Which ML algo is he talking about from 5:10 to 5:48?
Lemme know if you find any more channels like this. These days prediction has made YouTube channel subscriptions less important; however, I use them just as an A-list. Btw, I subscribed.
Very clear video!
I'd love to meet the people with the most similar movie taste to me.
And you found - NONE!
Probably ppl you're already friends with
But how do you know how many latent features to use? There must be a better way than trial and error.
Very nice video, but the background music disturbs the original content; sorry, it's a bit annoying. Thank you for the video.
Is there any product we can use to filter out the background music? It doesn't really fit the topic and is really distracting.
How is the preference data matrix factorized?
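One common answer to the question above is stochastic gradient descent on only the known entries, in the style popularized during the Netflix Prize; this is a generic sketch with toy data, not necessarily the exact method the video describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix; np.nan marks the unknown preferences we want to predict.
R = np.array([
    [5.0, 4.0, np.nan, 1.0],
    [4.0, np.nan, 1.0, 1.0],
    [1.0, 1.0, np.nan, 5.0],
])
n_users, n_items = R.shape
k, lr, reg, epochs = 2, 0.02, 0.05, 500   # latent features, learning rate, L2 penalty, passes

P = 0.1 * rng.normal(size=(n_users, k))   # user factor vectors
Q = 0.1 * rng.normal(size=(n_items, k))   # item factor vectors

observed = [(u, i) for u in range(n_users) for i in range(n_items) if not np.isnan(R[u, i])]
for _ in range(epochs):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]             # error on a known rating
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step on user factors
        Q[i] += lr * (err * pu - reg * Q[i])    # gradient step on item factors

print(np.round(P @ Q.T, 2))   # completed matrix: the nan positions are now predictions
```

In practice, the number of latent features k (and the penalty) is usually picked by holding out some known ratings and checking prediction error on them, which also speaks to the "how many latent features" question asked a few comments up.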
Hi, where did you get the images of Netflix about content filtering at 4:16 in the video? I need them for my dissertation as a talking point that Netflix was a content-filtered recommender at one point, thanks!
this was pretty good
Can you explain to me: what if there are many, many ways to generate the current data? Does that mean we will have multiple reference tables? How do we fix this problem?
what a nice video! sooo useful :)
That was really helpful thanks
3:40 why do you divide by 8 specifically?
could have used 7, I guess
Which ML algo is used from 5:10 to 5:48? Can you please name it? It would be very helpful.
Thanks for sharing this amazing work.
Interestingly enough, you can do the simplest thing here, which is to repeatedly guess and keep what works.
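In the spirit of that comment, here is a deliberately naive sketch: start from random factor matrices and keep only the random perturbations that reduce the error on the known entries. It does work on toy data, though it is far slower than gradient-based methods, and every number below is made up:

```python
import numpy as np

rng = np.random.default_rng(1)

R = np.array([[5.0, 4.0, 1.0], [4.0, 5.0, 1.0], [1.0, 1.0, 5.0]])   # made-up ratings
mask = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=bool)       # entries we pretend to know
k = 2

def error(P, Q):
    return float(np.sum(((P @ Q.T - R) ** 2)[mask]))

P, Q = rng.normal(size=(3, k)), rng.normal(size=(3, k))
best = error(P, Q)
for _ in range(20000):
    dP, dQ = 0.05 * rng.normal(size=P.shape), 0.05 * rng.normal(size=Q.shape)
    cand = error(P + dP, Q + dQ)
    if cand < best:                  # "guess and keep what works"
        P, Q, best = P + dP, Q + dQ, cand

print(round(best, 4))
print(np.round(P @ Q.T, 1))          # known entries are matched; the rest are guesses
```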
thank you so much
I don't see any use of recommendation systems besides online movies and online products. Can anyone give me some other examples?
What's happening to YouTube? Why do my videos keep stopping suddenly and then starting back? But when I sign into another account using a VPN that doesn't happen, and I'm watching the same video.
what is the background music for?
Content Filtering still is required for Collaborative Filtering to work.
great vid!!! thank you
What is the name of the background song?
Do we use classifiers in collaborative filtering?
Not really, no. There is always some sort of classifying being done, but in this case not in the way you mean it, I think. In this approach we decide (by hand or algorithmically, but always beforehand) that we are going to reduce the data space to a smaller space of dimension k. Choosing k is often difficult. Then the main algorithm converges to the optimal representation, that is, the k-dimensional space that best represents the data space. You can look up NMF factorization, k-means clustering or even PCA (the last one doesn't have a k and tends to over-fit; in the end you have the same problem of choosing when to stop, hence choosing k).
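For anyone who wants to try the NMF route mentioned in that reply, scikit-learn has an off-the-shelf implementation. The ratings matrix and the choice k = 2 below are made up, and treating 0 as an observed rating (rather than "missing") is a simplification:

```python
import numpy as np
from sklearn.decomposition import NMF

# Made-up non-negative ratings matrix (users x movies).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

k = 2                                # chosen beforehand, as the reply above notes
model = NMF(n_components=k, init="random", random_state=0, max_iter=500)
W = model.fit_transform(R)           # users x k  ("how much of each taste a user has")
H = model.components_                # k x movies ("how much of each taste a movie has")

print(np.round(W @ H, 1))            # low-rank reconstruction of the ratings
```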
This reminds me of factor analysis...
Hi, for your explanation of collaborative filtering, are you explaining from the model-based approach? I'm a little bit confused between the memory-based and model-based approaches for CBF.
Is this in any way related to SVD? Nice video!
thank you
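To the SVD question above: yes, closely related. For a fully known matrix, keeping only the top k singular values gives the best rank-k approximation (Eckart-Young); with missing entries you need the factorization/completion tricks discussed elsewhere in the comments. A quick numpy check on a toy matrix (the values and k = 2 are made up):

```python
import numpy as np

A = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0],
              [1.0, 0.0, 4.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # best rank-2 approximation of A

print(np.round(A_k, 2))
print("approximation error:", round(float(np.linalg.norm(A - A_k)), 3))
```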
So I guess all of you are similar to myself because here we are.
Loved the explanation, but the song selection is really weird.
nice vid i love it
Love the vid but that ambient noise is mildly annoying ngl
Bro I'm home alone in the middle of the night but why did the music scare me so much
Liked and subbed
would love if you could help share my newest video: ua-cam.com/video/5EcQ1IcEMFQ/v-deo.html