This is by far the best explanation of the MapReduce technique that I have come across. I especially like how the technique was explained with the least amount of technical jargon. This is truly an ELI5 definition for MapReduce. Good work!
+Subramanian Iyer Thanks!
An innovative idea to use a pack of cards to explain the concept. Getting fundamentals right with an example is great ! Thank you
Great explanation !! You Mapped the Complexity and Reduced it to Simplicity = MapReduce :)
The only one I watched which can clearly introduce mapreduce to newbie
To wrap this up:
Map = Split data
Reduce = Perform calculations on small chunks of data in parallel
Then combine the sub-results from each reduced chunk.
Is that correct?
+mmuuuuhh Somewhat correct. I'd suggest buying the screencast to learn more about the code and how it works.
+mmuuuuhh merge-sort maybe?
divide and conquer
Map transforms data too
No no... Map = Reduce the Data, Reduce = Map the Data . .... ....
Really good illustration... really easy to understand for people like me who are not computer experts. Thanks!
Jesse may you get all SUCCESS and BLESSINGS
Very well done - not too slow, yet very clear and well structured.
If I understand correctly, the mapper divvies up the data among nodes of the cluster and subsequently organizes the data on each node into key-value pairs, and the reducer collates the key-value pairs and distributes the pairs among the nodes.
Almost. Hadoop divvies up the data, the mapper creates key-value pairs, and the reducer processes the collated pairs.
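For anyone who wants to see that flow written out, here is a minimal single-machine sketch in Python (the card values are made up, and real Hadoop runs the mappers and reducers on separate nodes rather than in one process like this):

```python
from collections import defaultdict

# Toy input: each "record" is a card as (suit, value). The values are made up.
deck = [("hearts", 7), ("spades", 3), ("hearts", 10), ("clubs", 3), ("spades", 9)]

def mapper(card):
    suit, value = card
    # The mapper only looks at one record and emits a key-value pair for it.
    yield (suit, value)

def reducer(suit, values):
    # The reducer gets every value for one key and collapses them, here by summing.
    return (suit, sum(values))

# "Shuffle and sort": group all mapper output by key, which the framework does for you.
grouped = defaultdict(list)
for card in deck:
    for key, value in mapper(card):
        grouped[key].append(value)

results = [reducer(suit, values) for suit, values in grouped.items()]
print(results)  # [('hearts', 17), ('spades', 12), ('clubs', 3)]
```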
Great presentation. The visualization makes it so much easier to understand.
loved the idea. Now I understood how map reduce works. Thank you.
What a great effort; I am astonished by your teaching skills. We need teachers like you. Thanks for your excellent explanation.
an ounce of example is better than a ton of precept! --Thanks, this was great!
and that's how you explain any technical concept. simple is beautiful!
It was very nice, but I could not find the video where you showed the shuffling "magic part".
Great explanation! This is how a tutor should simplify the understanding! Thanks
Really liked your way of presentation....."Simple" and "Informative". Thanks for sharing!!
Nice video explaining the Map Reduce Practically.
6:16 — got a question!
Would you please elaborate more on that data movement? Since there are two separate reduce tasks on those two nodes, how do the two different reduce tasks combine their results? How do we choose which cards move to which node?
That is called the shuffle sort. See more about that here www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-6/shuffle-and-sort.
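On "how do we choose which cards move to which node": Hadoop's default is hash partitioning, where each key is hashed and assigned to one reducer, so every card of the same suit lands on the same reducer no matter which mapper emitted it. A rough Python sketch of that arithmetic (Hadoop itself uses the key's stable hashCode in Java; Python's hash() is only stable within one process):

```python
def partition(key, num_reducers):
    # Every mapper applies the same deterministic function, so the same key
    # always goes to the same reducer, no matter which node produced it.
    return hash(key) % num_reducers

num_reducers = 2
for suit in ["spades", "hearts", "diamonds", "clubs"]:
    print(suit, "-> reducer", partition(suit, num_reducers))
```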
Does the actual data on the node move, or are copies of the data moved?
Really cool one. It is always nice to come back to the basics. Thanks for that one!
4:51 — I'm kind of lost. You used two sheets of paper as two nodes.
The left is node 1 and the right is node 2.
Then you said, "I have two nodes, where each node has 4 stacks of cards."
I also understood that you are merging two suits of cards on node 1 and another two suits on node 2.
"A cluster is made of tens, hundreds or even thousands of nodes all connected by a network."
So in this example, let's say the two sheets (nodes) make up one cluster.
The part where I get confused is when you say, "The mapper on a node operates on that smaller part. The magic takes the mapper data from every node and brings it together on nodes all around the cluster. The reducer runs on a node and knows it has access to everything with the same key."
So if there are two nodes A and B that hold the mapper data, will the reducer part happen on two other nodes C and D? I'm confused by what you mean by "on nodes all around the cluster".
Wonderful, you have used the right tool (cards) and made it simpler. Thank you.
Am I correct in saying that in this manual shuffle and sort the block size is 52 cards, whereas on a real node it would be 128 MB?
The explanation is wonderful.. You made me understand things easily.
Wow.. You have made this look so simple and easy... Thanks a ton !!!
Just wow...very nicely explained
It might be a bit clearer to show the advantage of this if, instead of having the same person run the cards on each node sequentially, you had two people do it at the same time. Or go further and have four people show it. Then each person can grab all the cards of one suit from each node and sum their values, again at the same time. Show a timer comparing how long it took for one person to do everything on one node versus all four running at the same time.
Well, that explains the interview question: How would you sort a ridiculously large amount of data?
Good illustration using a practical example...
Your explanation is magic! Well done.
Never trust a man whose deck of playing cards has two 7s of Diamonds.
Wonderful explanation ! Made it very simple to understand! Thanks a ton!
Huge 1 TB file...
anyone watching this in 2065?
February 2019 (Go RAMS)
@@NuEnque July 21 2019
feb 2020
more like 2025
August 11, 2020!!
That was very helpful Jesse. Thank you for sharing this!!
Brilliant approach to teach the concept
amazing explanation! I love it. Huge Thanks!
Good to understand for a layman! So it's quite crucial to identify the basis of the grouping, i.e. the parameters based on which the data should be stored on each node.
Is it possible to revisit that at a later stage?
So it follows mainly the principle of divide and conquer?
Following that analogy, it would be divide, reassemble, and conquer.
When you say nodes and clusters, does a 1 TB input file definitely have to be run on more than one computer, or can we install Hadoop on a single laptop and virtually create nodes and clusters?
best explanation of mapReduce. Thanks!
Great video with good explanation technique.
Great explanation!! worth a bookmark. Thank you sir!
Really nice video; it explains the terms in a simple way...
What if the node with clubs and hearts breaks down during the reduce operation? Will data be lost? Or will the complete Map Reduce job be repeated using the replicated data?
The data is replicated and the reduce would be re-run on a different node.
I actually did this with cards.Thanks
Good Explanation with simple example
Thanks Jesse! This is a wonderful video! I have 2 doubts.
1. Instead of a sum, if it is a sort function, how will splitting it across nodes work? Because then every data point should be treated in one go.
2. On the last part about scaling, how will different nodes working on a file and then combining by key be more efficient than one node working on one file?
I am new to this and would appreciate some guidance on this.
1. This example goes more into sorting: github.com/eljefe6a/CardSecondarySort 2. It isn't more efficient, but it is more scalable.
@@jessetanderson Thank you!
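On doubt 1, sorting really does split across nodes: the framework already sorts the keys it hands to each reducer, and if you range-partition the keys so reducer 0 gets the lowest range, reducer 1 the next range, and so on, concatenating the reducer outputs gives a globally sorted result. A small sketch of that idea in Python (the split points and data are made up; the CardSecondarySort repo linked above shows the real Hadoop version):

```python
# Made-up split points: keys below 25 go to reducer 0, 25-49 to reducer 1, and so on.
boundaries = [25, 50, 75]

def range_partition(key):
    for reducer, bound in enumerate(boundaries):
        if key < bound:
            return reducer
    return len(boundaries)

data = [42, 7, 93, 61, 18, 77, 3, 55]
buckets = {i: [] for i in range(len(boundaries) + 1)}
for key in data:
    buckets[range_partition(key)].append(key)

# Each "reducer" sorts only its own bucket; gluing the buckets together in
# reducer order yields a globally sorted list without one node seeing everything.
globally_sorted = [key for i in sorted(buckets) for key in sorted(buckets[i])]
print(globally_sorted)  # [3, 7, 18, 42, 55, 61, 77, 93]
```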
Thanks this really helped me for my exam !!
That's wonderful... you are a great teacher.
Great video. Why are there performance issues with Hadoop, though?
I'm not sure what you mean by performance issues.
Nice tutorial! Easy to understand
Superb. Thank you Jesse Anderson
Very useful explanation.
Thank you very much for the explanation.
Good illustration. 😃
IMO the key takeaway from the video is that MR only works when:
a. There is one really large data set (e.g. a giant stack of playing cards)
b. Each row in the data set can be processed independently. (e.g. sorting or counting playing cards does not require knowing the sequence of cards in the deck - each card is processed based on the information on the face of the card)
To process real-world problems using MR, the data sets will need to be massaged and joined to satisfy the criteria listed above. This is where all the challenges lie. MR itself is the easy part.
+Subramanian Iyer Agreed. MR is difficult, but understanding how to use and manipulate the data is far more complex. This is why I think data engineering should be a specific discipline and job title. www.jesse-anderson.com/big-data-engineering/
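To make the "massage and join" point concrete, the usual trick is a reduce-side join: each mapper tags its records with which data set they came from, the shuffle brings everything with the same join key to one reducer, and the reducer pairs them up. A minimal Python sketch with invented user/order data:

```python
from collections import defaultdict

# Two invented data sets that share a join key (user_id).
users = [(1, "alice"), (2, "bob")]
orders = [(1, "book"), (1, "pen"), (2, "lamp")]

def map_users(record):
    user_id, name = record
    yield (user_id, ("user", name))    # tag the value with its source data set

def map_orders(record):
    user_id, item = record
    yield (user_id, ("order", item))

# Shuffle: group every tagged value by the join key.
grouped = defaultdict(list)
for record in users:
    for key, value in map_users(record):
        grouped[key].append(value)
for record in orders:
    for key, value in map_orders(record):
        grouped[key].append(value)

# Reducer: for one key it sees records from both sides and can join them.
for user_id, tagged in grouped.items():
    names = [v for tag, v in tagged if tag == "user"]
    items = [v for tag, v in tagged if tag == "order"]
    for name in names:
        for item in items:
            print(user_id, name, item)  # 1 alice book / 1 alice pen / 2 bob lamp
```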
Thank You sir for such a wonderful explanation. :-)
Dude, what's the name of that magic??
A little bit long as explanations go; it could have been done faster (e.g. the card sorting). But after watching, you know what's happening. So, all thumbs up!
Spades, clubs... I think you used the wrong suit names for them :)
The 'scalability' of Hadoop has to do with the fact that the data being processed CAN be broken up and processed in parallel in chunks, and then the results can be tallied by key. It's not an inherent ability of the tech other than HDFS itself.
Like most technology, or jobs for that matter, the actual 'process' is simple; it's wading through the industry-specific terminology that makes it unnecessarily complicated. Hell, you can make boiling an egg or making toast complicated too if that's your intent.
Sorry, you misunderstood.
@@jessetanderson I didn't misunderstand you. Your explanation was great.
Hi Jesse, can I use map reduce only on document-oriented DBs, or also e.g. on Graph databases?
Hessebub You can use it for both, but the processing algorithms are very different between them.
Alright, thanks very much for answering & doing the video in the first place!
Excellent video explanation
Great explanation!
Superb video....thanks a lot sir
Great video, thanks for sharing !
Thanks for the great video!
Which music is this at the start of the video?
I'm not sure where they got it from.
Excellent explanation!
Great summary - thanks!
Best explanation. Thanks a lot
great video by the way!!
just great explanation !
Interesting. Now I want to request that a bunny come out of a hat.
My friend: "I wish I had your calm. We have an exam tomorrow and you're watching playing cards...."
Hats off, man. Very well understood.
awesome explanation super
Yes! Simply explained.
thanks! that is an easy explanation!
Now I get it, thanks!
Great lesson. Thanks..
Very nice, thanks a lot.
i like this technique nice keep it up
Good explanation
Great video
Brilliant - thanks!
Why did they come up with such a terribly unintuitive name as "MapReduce" ??? It's basically just "bin by attribute, then process each bin in parallel". BinProcess.
It's a well-known functional programming paradigm.
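The name really does come from functional programming, where map applies a function to every element of a collection and reduce folds the collection down to a single value. In Python, for example:

```python
from functools import reduce

values = [7, 3, 10, 3, 9]

doubled = list(map(lambda v: v * 2, values))       # map: transform each element independently
total = reduce(lambda acc, v: acc + v, values, 0)  # reduce: fold everything into one result

print(doubled)  # [14, 6, 20, 6, 18]
print(total)    # 32
```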
wow this was great
Easiest explanation.
awesome
Brilliant!
This is just a sales pitch
I think the description is pretty clear that it's an extended preview of the screencast.
This is a great example video without the accent to deal with.
thanks
excellent!!!
so nice
awesome
Cool
Like if you came here from riwb
Great
keep kinging