I will have an exam in 30 minutes. I didn't understand anything about this topic. Then I watched this and, holy hell, I went from not understanding it at all to understanding it almost fully (the formulas aren't there yet, though). Thank you!
Thanks for your feedback. How was the exam? :)
Please subscribe to my channel
Regards
Me too bro LOL
me too lol
@@tkorting I just saw your comment again after a long time.
I think it went well. Already graduated. Working as a software engineer/data scientist in one of the top paid companies in my country. So all is well I guess haha.
Thank you for this!
Wow, first time I'm seeing someone explain this without using a single formula, just common sense. GREAT JOB
Thanks for the feedback, please subscribe to my channel
Thank you so much. After wasting an hour on another channel (didn't understand it), I found yours, watched it, and understood SOM. Thank you so much.
Thanks for this feedback
Please like and share the video, and subscribe to my channel
Regards
Thanks, it is the clearest and simplest interpretation of SOM I have ever found
+Mengyang Chen thanks for your feedback.
Please subscribe to my channel and share this video with your peers.
Regards
Lol this is very informative, ty. I read tons of articles and didn't understand it very well, but now I feel much better
Very good visual representation of how SOMs work, thank you sir
Thanks for your comment,
please like/share/subscribe.
Regards
If only I had seen this video months ago
I would have saved myself the pain
of going over every online publication on SOM
For any newbie in SOM..
This is what you've been looking for.
Love from Afrika
And Oh, thank you Thales Sehn Korting
+Mbambokaz'balwa greetings from Brazil
thanks for your feedback
please subscribe to my channel
regards
Is there a way I can inbox you ASAP? I'm working on optimising competitive learning in SOM.. Just wanna share what I have and hear what you think..
Thank you!! In less than 5 minutes, you described what my professor wasn't able to do in 2 weeks.
thanks Jack for your valuable comments
please subscribe to my channel and share this video with your peers
regards
Thank you SO MUCH for this. I didn't get it at all, and this is such a simple and effective explanation.
Thanks for this comment
Please subscribe to my channel
Regards
I like the way you simplify topics with simple examples :).
+Ibraheem Al-Dhamari thanks for your valuable comments.
Please subscribe to my channel and share this video with your peers.
Regards
+Thales Sehn Körting You are welcome. Sure, already did. Would be nice if you added more videos about some important topics, e.g. B-spline interpolation, Active Shape Models and Deep Learning.
I know this is a simple explanation - I'm a visual learner - I think in pictures and images - this is super helpful :) Thanks!
I agree with you. Thanks for this feedback.
Please subscribe to my channel.
Regards
you simplified SOM in a very clear way, thanks
thanks for your feedback
Please subscribe to my channel and share this video with your peers.
Regards
Simple explanation...looks very similar to a particle swarm!
Great explanation. This makes so much more sense in 2 dimensions. Thanks!
Thanks for your feedback
Please like and share the video and subscribe to my channel.
Regards
Really helpful for understanding the fundamental concept. Thanks.
Thanks for your comment.
Please like/share/subscribe.
Regards
Thanks! Your explanation is clear and simple!
Thanks for the feedback, please subscribe to my channel
Goodness, you made it sound like a very simple idea....
A good introduction, thank you.
thanks a lot for this positive feedback
please like/share the video and subscribe
regards
This video explains the concept so well.. Great job
+Piyush Chitkara thanks a lot for your positive feedback. Please like and share the video and subscribe to my channel. Regards
Thank you, this visualization made some parts more clearer which other sources just didn't make clear
Thank you very much! It couldn't be more clear than this!
thanks a lot for the feedback,
please subscribe to my channel and like the video.
Regards
Thank you for the simple and clear explanation.
Thanks for your feedback
Please like/share/subscribe.
Regards
Thank you very much! This video was very helpful for me. The graph and animations make it easy to understand too!
A simple explanation of a complicated task. Great
Very helpful algorithm. Thank you for sharing this with us!
Valerii Potokov Hi!
thanks for your feedback,
please subscribe to my channel.
Best regards
Thanks for your explanation, it really helps!
Thanks for the feedback, please subscribe to my channel
Your voice sounds a bit nervous but this really helped me understand the matter, thank you!
Thanks for the feedback. For more nervous videos please subscribe to my channel ;)
I finally get it, thank you!
Thanks for your feedback
Please subscribe to my channel and share this video with your peers.
Regards
This is so nice & crisp.....thanks
Hi, thanks for the feedback
please like/share and subscribe
Regards
very simple and effective explanation. Thank u a lot for the work :D
Thanks for your comment,
please like/share/subscribe.
Regards
Really clearly explained, G thanks!
Dear Saana, thanks for your valuable comments.
Please subscribe to my channel and share this video with your peers.
Regards
In order to classify discrete groups correctly you need to break the neighbourhood link; otherwise they are all going to end up in the same cluster, since the neurons are connected. Search for 'A novel self-organizing map (SOM)'
The learning rate and the neighbourhood radius are modified as the algorithm continues, therefore with each iteration the effect on the other neurons becomes smaller.
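To illustrate the decay described above, here is a minimal Python sketch. The starting values, iteration count, and the exponential schedule are assumptions chosen only for illustration, not taken from the video.

```python
import math

# Hypothetical starting values and iteration count, chosen only for illustration
initial_learning_rate = 0.5
initial_radius = 3.0
total_iterations = 1000

def decayed_parameters(iteration):
    """Exponentially shrink the learning rate and the neighbourhood radius."""
    time_constant = total_iterations / math.log(initial_radius)
    learning_rate = initial_learning_rate * math.exp(-iteration / total_iterations)
    radius = initial_radius * math.exp(-iteration / time_constant)
    return learning_rate, radius

for it in (0, 250, 500, 999):
    lr, r = decayed_parameters(it)
    print(f"iteration {it}: learning rate {lr:.3f}, radius {r:.3f}")
```

As the radius shrinks, fewer and fewer neighbouring neurons are pulled along with the winner, which is why the map stabilises in the later iterations.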
Thank you! Very helpful and simple to understand!
+Henrique Luis Schmidt thanks for your feedback.
Please subscribe to my channel and share this video with your peers.
Regards
Nice video! Kudos from a fellow joseense
very clear.. Thanks a lot for your efforts
+Jing Lu thanks for your feedback.
Please subscribe to my channel and share this video with your peers.
Regards
Thank you! Helpful!
Thanks! Very nice and clear!
+bobanas many thanks for your positive feedback. Please like and share the video. Regards
Thank you for this wonderful explanation!!!
thanks for your feedback,
please like/share the video and subscribe
Regards
Very nice explanation! Can you include how the neurons are selected/calculated?
thanks for your feedback
please like/share the video and Subscribe!
Neurons are selected by the competitive method (2:23) and then they are updated to become more similar to the patterns.
In www.researchgate.net/publication/220785373_A_Geographical_Approach_to_Self-Organizing_Maps_Algorithm_Applied_to_Image_Segmentation you can find how the neurons are updated.
Regards
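For readers wondering what the competitive selection mentioned above looks like in practice, here is a minimal sketch. It assumes Euclidean distance (the usual choice), and the toy neurons are invented for illustration.

```python
import numpy as np

def best_matching_neuron(weights, x):
    """Return the index of the neuron whose weight vector is closest to input x.

    weights: array of shape (n_neurons, n_features)
    x:       input vector of shape (n_features,)
    """
    distances = np.linalg.norm(weights - x, axis=1)  # Euclidean distance to every neuron
    return int(np.argmin(distances))                 # the "winner" of the competition

# Toy example: three neurons in a 2-D feature space
neurons = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(best_matching_neuron(neurons, np.array([0.9, 1.2])))  # prints 1
```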
Why do we actually call the cluster centers NEURONS? Does it mean that a neuron is always from the same vector space as the input data?
Thank you for this wonderful explanation!!! This is my diploma topic now, btw :D
Thanks for your feedback.
Please like/share/subscribe and write a good diploma document.
Regards
Thanks! That explains a lot for me! Thanks again!
Please subscribe to my channel!
ua-cam.com/users/tkorting
Regards
thank you for the informative video! does the order in which you select the inputs matter? I can imagine that the distance between the connected neurons would look different depending on the displacement of the input.
Good job
Useful! Thanks a lot.
Thanks for the feedback, please subscribe to my channel
So simple and so helpful, many thanks
Dear Kawazaki,
thanks for your valuable comments.
Please subscribe to my channel and share this video with your peers.
Regards
Thank you so much for your explanation
Thanks for your feedback
Please subscribe to my channel
Regards
Loved the video
What is the difference between K-means clustering and SOM? Is it just the way the centroids are updated?
Very helpful!
Thanks for your feedback
Please subscribe to my channel
Regards
Thanks for the video. One question: my understanding was that SOM is an unsupervised learning algorithm. Could you tell me why there are two "true neurons" with labels?
thank you thank you thank you a million times! I got it now!
very well-explained and great illustration. It would be even better if you could explain Geo-SOM as well. :-)
thanks for your feedback, please subscribe to my channel and share this video with your peers.
For the GeoSOM approach, I could recommend a paper in which I briefly explain the method and point to the main reference.
www.researchgate.net/publication/220785373_A_Geographical_Approach_to_Self-Organizing_Maps_Algorithm_Applied_to_Image_Segmentation
Regards
Hi, thank you very much for the explanation, it really helped. I'm just wondering about the example data set you presented at the beginning of the video. I see you have 20 unique IDs, each with 6 attributes. How is this plotted as the input distribution? What values would you use as coordinates? Thank you for your time.
Dear ramadenlama.
Thanks for your comments. Please share this video with your peers.
The case with 6 attributes is different from the example, because it is simpler to show a 2-dimensional case on the screen.
For more than 2 attributes, the distance measures are the same, but plotting is not possible. The SOM can still be a 2D matrix, so it can be used as a dimension-reduction scheme.
Regards
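To make the more-than-2-attributes case concrete, here is a rough sketch of that dimension-reduction idea. The 6-attribute records, the 4x4 grid size, and the random weights are all invented for illustration; only the mapping step reflects the reply above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: 20 records with 6 attributes each, like the table in the video
data = rng.random((20, 6))

# A 4x4 grid of neurons; each neuron lives in the same 6-D feature space as the records
grid_rows, grid_cols = 4, 4
neurons = rng.random((grid_rows, grid_cols, 6))

def grid_position(record):
    """Map a 6-D record to the 2-D grid coordinates of its closest neuron."""
    distances = np.linalg.norm(neurons - record, axis=2)
    return np.unravel_index(np.argmin(distances), distances.shape)

# Each record is summarised by a 2-D cell index: that is the dimension reduction
for i, record in enumerate(data[:5]):
    print(f"record {i} -> neuron at grid cell {grid_position(record)}")
```

The records themselves are never plotted in 6-D; only their positions on the 2-D grid of neurons are visualised.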
Thales Sehn Körting Hi Thales,
Thank you very much for your immediate reply to my question, it is greatly appreciated. I see, I guess my assumption was correct in that with 3 or more attributes multidimensional scaling must be applied. I find it strange that the literature available regarding SOMs does not directly address the nature of the input data used which is why I asked the question above.
Again, thanks for your response.
Thank You so much
What a great explanation pace. 👍 Subbed!
Thanks for the feedback!
Dear Sir,
Can you please elaborate on the importance of the neighbourhood function used in SOM?
Dear Hira Imran,
thanks for your feedback. Please subscribe to my channel and share this video with your peers.
I will recommend the following article, so that you can go deep into neighborhood functions:
www.researchgate.net/publication/220785373_A_Geographical_Approach_to_Self-Organizing_Maps_Algorithm_Applied_to_Image_Segmentation
Best regards
Thales Sehn Körting Thanks a lot, Sir. Also, I want to ask about the basic difference between LVQ1 and SOM. It would be quite helpful if you could help, as I'm preparing for my exams.
How are the *colors* of the data points initially computed? Is each attribute (table column) assigned a color, and is one data point a mixture of those colors? I've seen SOM presentations where, at the beginning of the algorithm, the data points are colored differently. Is this the reason why?
what's the difference between SOM and K-means
I have a question. During the update phase, shouldn't you update the other neuron AWAY? Because it is a competitive network?
Nice explanation. I have to do a small project on SOM for my college assignment. Can you suggest some project ideas that I could do?
hello, what is the difference between k-means clustering and self-organizing maps?
Thanks for the question, please subscribe to my channel
They are different because k-means does not have connections between the clusters, and SOM does. You define the degree of connection between the clusters, or the neurons.
Regards
So what is the difference between using SOM and K-means? It seems that both techniques are some sort of unsupervised learning
thanks for your question. Please subscribe to my channel and share this video with your peers.
You are right to think that both algorithms are very similar. In fact, if you don't use the connections between the neurons, SOM will be equal to k-means. The main difference, then, is that (see the example at 3:21) when you update one neuron, the other neuron is also updated, with a smaller shift, but it is updated.
Regards
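As a minimal sketch of that shared update, assuming a Gaussian neighbourhood and made-up constants (not necessarily the exact formulation used in the video), the shift of the winner and its neighbours could look like this:

```python
import numpy as np

def update_neurons(neurons, grid_coords, x, learning_rate=0.3, radius=1.5):
    """Move the winning neuron towards input x; its neighbours move too, by a smaller amount.

    neurons:     (n_neurons, n_features) weight vectors
    grid_coords: (n_neurons, 2) fixed positions of the neurons on the 2-D grid
    """
    winner = np.argmin(np.linalg.norm(neurons - x, axis=1))
    # Gaussian neighbourhood: 1.0 at the winner, smaller for neurons farther away on the grid
    grid_dist = np.linalg.norm(grid_coords - grid_coords[winner], axis=1)
    influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    # Every neuron shifts towards x, scaled by its influence; with zero influence on the
    # non-winners this would reduce to a k-means-style update of the winner alone
    return neurons + learning_rate * influence[:, None] * (x - neurons)

# Two connected neurons on a tiny 1x2 grid: both move, but the winner moves more
neurons = np.array([[0.0, 0.0], [4.0, 4.0]])
coords = np.array([[0.0, 0.0], [1.0, 0.0]])
print(update_neurons(neurons, coords, np.array([1.0, 1.0])))
```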
Thanks so much for the video and your explanation. Could you be so kind and tell the difference between using SOM as compared to k-means? In which cases one is more advantageous than the other?
thanks for your question, please subscribe to my channel.
SOM provides more tools to understand the data: when you project your N-dimensional feature space to 2-D, you can use the U-matrix to better understand the relations between clusters, and so on. You can also start with more neurons and later decide to reduce their number, whereas in k-means you have to provide this number up front, before classifying. Although SOM is fast, k-means is faster.
With some of these arguments, and depending on your dataset, you can choose the best one.
Regards
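As a rough illustration of the U-matrix mentioned above, here is my own minimal version, assuming a rectangular grid and a 4-neighbourhood; real implementations differ in the details.

```python
import numpy as np

def u_matrix(neurons):
    """Average distance from each neuron's weight vector to its grid neighbours.

    neurons: (rows, cols, n_features) weight vectors on a rectangular grid.
    High values mark boundaries between clusters, low values mark homogeneous regions.
    """
    rows, cols, _ = neurons.shape
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbourhood
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(neurons[r, c] - neurons[rr, cc]))
            u[r, c] = np.mean(dists)
    return u

# Toy 3x3 grid of 2-D weight vectors, just to show the shape of the output
rng = np.random.default_rng(1)
print(u_matrix(rng.random((3, 3, 2))))
```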
I don't understand what determines how far they move in a given direction.
thank you for the very informative video...
Thanks for your feedback.
Please share this video with your peers.
Regards
very nice !
Thanks for the feedback, please subscribe to my channel
How does the random selection of the input work?
I understand that you randomly select 1 input out of the set of all inputs. However, after selecting, let's say input i, do you then remove this input from the set of possible inputs that can be selected in the next epoch?
Or is it still possible to randomly select input i again in the next iteration?
I love you. Thanks!
I don't believe :)
Subscribe to my channel
Simple and easy
Thank you for this video
Thanks for your comment. Please like and share the video and subscribe to my channel.
Regards
I found this video helpful. I want to use SOM clustering for grouping students based on their learning styles. Can I get more videos to help me?
+Musa Bara thanks for your feedback, please subscribe to my channel and watch more videos about pattern recognition.
SOM is good for such clustering cases, and for analyzing several features projected into the 2D space of neurons.
Regards
unintentional asmr is putting me to sleep. watch this if u have watched too much asmr and everything else doesnt work.
Subscribe to my channel and good night
@@tkorting can't believe I missed hitting that subscribe button. Thanks for the video!
you missed it because you fell asleep :D
Haha, it cost me some points at uni. Altogether it is a good explanation, but the grid, which is very important, should be shown early on, at least for my profs.
thank you so much.
Thanks for sharing my video, please subscribe to my channel
ua-cam.com/users/tkorting
Regards
Good video. Good man
It helped a lot, thank you!
+Julia Litvinoff Justus thank you very much for your comment
Don't forget to like and share the video
subscribe to my channel
best regards
Are you from Brazil?
Nice accent.
If yes: Muito obrigado pela explicação!! (Thank you very much for the explanation!!)
If no: Many thanks for the explanation.
+Nathan Bortman The pleasure is mine ;)
Don't forget to subscribe to the channel and like the video.
Best regards
Uh, so is SOM similar to clustering?
thanks for the video :)
Thanks for your feedback
Please subscribe to my channel
Regards
Great! Thank you so much :)
+Alice Fantazzini thanks for your feedback
Please subscribe to my channel and share this video with your peers.
Regards
thank you! very good video!
Dear gregerz,
Thanks for your valuable feedback.
Please subscribe to my channel and share this video with your peers.
Regards
Thank you.
+Hend Selmy thanks for your feedback
Please subscribe to my channel and share this video with your peers.
Regards
I don't understand:
this algorithm looks identical to k-means clustering; if so, then why is it called SOM?
+aditya malte thanks for all your feedback. In fact this algorithm is similar to k-means; however, there is an important difference between the two, because SOM connects all the clusters, and when one cluster is converging to its class it influences the neighboring clusters, which does not happen in the k-means algorithm.
Please like and share the video and subscribe to my channel.
Regards
In this case, this is very similar to K means. However, because the neurons are topologically connected, it will create some interesting patterns, especially when K is large.
For example, suppose we have 3 neurons, A, B and C, and neuron B is between A and C topologically. At the very end, it will also lie between A and C.
@@arielgenesis In the k-means clustering algorithm you have to specify the number of clusters you want, whereas in SOM the neural network detects the number itself. Also, SOMs allow topology preservation (for example, representing a 3D input space in a 2D space).
thank you!
+Camilla Bianco many thanks for your feedback
please like and share the video and subscribe to my channel
best regards
thanks a lot, greetings from University of Hamburg.
thanks for your feedback
please subscribe and like the video
regards
another thanks from university of hamburg =)
thanks!
hi from Brazil
Another greeting from University of Hamburg. :)
and another greeting from University of Hamburg :P
Thank you!
Valerii Potokov Thanks for your feedback
please subscribe to my channel and share this video with your peers.
Best regards
was helpful. Thanks a lot.
Sir... I need MATLAB source code for an intrusion detection system, please. I am researching IMPLEMENTATION OF AN INTRUSION DETECTION SYSTEM BASED ON SELF ORGANIZING MAP, but I can't find MATLAB code. Please help me, sir.
Thanks for the feedback, please like and share the video and subscribe to my channel.
I am not an expert on intrusion detection, but maybe someone that sees your comment here can help you.
Regards
What you want to do is look for outliers in your data. Do test cases and optimize against false positives.
That's the best I can give you without knowing more about your setup.
Thanks man
thanks for the feedback
please like/share the video and subscribe!
regards
thanks for sharing
Thanks for the feedback, please subscribe to my channel
How is it different from K-Means ?
thanks for your question.
please like/share and subscribe to my channel
the main difference from k-means is that in k-means each cluster converges "alone" to one of the found centers, whereas in SOM all the clusters are connected, generally in a 2D matrix, and when one of them converges to a place it also influences all the neighboring clusters, allowing a smoother convergence to the clusters.
regards
thanks! from TU Munich
thanks for your feedback
please like/share the video and subscribe
regards
Thank you very much for this video, it helped a lot!
+Laura Vieira thank you very much for the comment
don't forget to like the video and subscribe to the channel. Best regards
How is this different from the k-means algorithm?
Hi, the main difference is that the centroids here are connected to each other. So when one cluster is updated, the neighbors are updated too. Check the previous discussions on this video about the same topic.
Please subscribe to my channel.
Regards
tysm😢
What is a neuron? Is it just a data point?
Thanks for your feedback, please like and share the video and subscribe to my channel.
In this case the neuron has the same feature space as the input data, so it can be considered as a data point. The convergence of the neurons is given by competition between all neurons.
Regards
As far as I have understood SOMs and neural networks, the neurons are not data points. Neurons are a mathematical function that consists of an input function, an activation function, and an output function. While it might be a good representation to show the neurons in the feature space of the input data, it is not actually the neuron that changes its position; rather, the winning neuron's weight vector (a vector containing all weights between the input layer and the neuron) is shifted in the direction of the input vector.
Is this SOM or the k-nearest neighbours algorithm?
Thanks for your comments. Indeed both algorithms seem similar from this video.
The only difference is the connection between the clusters, or in this case neurons, allowing a smooth convergence.
Please subscribe to my channel
Regards
Can you share some simple C code for this?
Dear daV iD,
thanks for your comment. Please subscribe to my channel and share this video with your peers.
To find the C++ code, follow these links:
svn.dpi.inpe.br/terralib/trunk/terraViewPlugins/geodma/include/som_classifier.h
svn.dpi.inpe.br/terralib/trunk/terraViewPlugins/geodma/src/som_classifier.cpp
Regards
thanks broo :))
thank you bhai
So similar to k-means, at least visually.
Thanks for your comments.
Please like and share the video and subscribe to my channel.
You are right to perceive that both algorithms are similar. The most important difference is that the clusters are connected, and the user determines the structure of the network, therefore the convergence of SOM is smoother.
Regards
that's because SOMs are a generalization of PCA, which is itself a generalization of k-means.
Thanks!
bravo
Thanks!
Please like/share/subscribe.
Regards