For online classes, visit our website:
technologicalgeeks.com/
Course Details : ua-cam.com/video/KBK85ETH5nI/v-deo.html
Hats off to the explanation; amazed by it. ❤️
Best... even some paid courses can't match this level of explanation! Need much more from you. Thanks, buddy.
Fantastic, brother. Since this morning I was struggling to understand HiveQL, and your tutorial broke the ice.
Major appreciation from a finance grad struggling as a data analyst. 👍
Brother... first-rate! I wish you had been my mentor; what I couldn't grasp in 3 months is now clear. ✌️
Your video solved many of my doubts. Keep up the good work
Awesome!!!! You explained the concept in a wonderful way! Thank u!
What an explanation and practical session. You're on fire, brother!
Thanks brother, first-rate video. Watched it a day before the exam and understood everything.
Awesome... an IT person needs such explanations... Seriously, please upload the complete Spark videos; it will be a great help.
Brother! Superb... I became your fan... the way you speak, like in the movies... :)
Best explanation
Awesome. There are still 4 people who think this video wasn't useful. They are either jealous of this guy being awesome or have no idea of big data.
Like the way you teach. Thank you so much for the valuable tutorials.
Very good explanation, helped me to understand the difference!
Great, brother... keep sharing.
Like the way you explain the topics. Looking forward to HBase and Kafka videos; it will be a great help.
Hi Sandip, your videos are amazing and easy to understand. I pray you will achieve what you have set out to do. I am new in this field.
Awesome explanation. Thank you. :)
Outstanding explanation
I wish i had a teacher like him in all my academic life
Absolutely brilliant...
+Ayush Gupta Thank you, brother 😊
Very Nice Video Bro... Helped me a Lot
best on UA-cam
extraordinary !!! waiting for streaming & Kafka videos...
You rock, brother. Please come up with more tutorials on Hive and Spark.
Awesome explanation, Thanks a lot!
Brother, please make a complete video on a live use case: live streaming with Kafka, processing the same in Spark, queries in Hive, and so on. Build a complete pipeline, i.e., one use case from a live project.
Waiting for your response, bro!!!
Sure brother, I'll upload it soon.
Brother, haven't you been uploading videos in the meantime? Please upload an end-to-end live project video as mentioned in the comment above.
Very nice sir
Sir, you explain so wonderfully! Please create some more content for big data.
Superb, sir.
Amazing, buddy. Please upload more videos on Hadoop projects and ecosystem components in detail 👍
Brother, aren't you uploading new videos anymore?
You've made really good videos...
You've nailed it...
Brother, I've become your fan.
Awesome video!!!
Very nice explanation, brother.
Hi brother...you are doing a great job...please create Apache spark videos for beginners...
Nice videos Sir...
Whatever else, seeing "Bhai's" photo on the wallpaper was great fun... :D
Thanks boss
big fan of yours
Thanks for uploading this 👏👏
good job sir
Very nice, dear friend... please upload more practical videos regarding Hadoop.
+Soumya Star Thank you 😊
Will be uploading new videos soon 😊
Explanation on another level... lol
Sir, if we want to see an external table on localhost the way we see an internal table, then what do we have to do?
Very nice explanation... keep posting videos like this. Thanks a lot. I had a question: what happens if we don't specify any location while creating an external table? Please clarify this doubt of mine.
+Amit Sen We have to specify the location while creating an external table so that the framework knows which files are to be read into the table.
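To make the reply concrete, here is a hedged HiveQL sketch of an external table; the table name, columns, and HDFS path are illustrative, not from the video:

```sql
-- External table: Hive only records metadata; the files already
-- sitting under LOCATION become the table's data.
CREATE EXTERNAL TABLE employee_ext (
  id   INT,
  name STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION '/user/demo/employee_data';

-- Dropping it removes only the metadata; the files under
-- /user/demo/employee_data stay in HDFS.
DROP TABLE employee_ext;
```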
I liked your sessions on Hive.
Can you upload videos about Flume practicals?
Sir, I am from Haryana and have been following you for a long time. Please make a video on how to install Hadoop and how to run all these commands on our own laptop, i.e., what to install and where to get it from. Please give all the info.
Considering we can create multiple tables pointing to the same location can we create both internal and external tables that point to the same location? Thank you!
Hi
Sneha ji
Where do you work?
Your videos are a good resource for completing my project work. As a non-Hindi speaker (I mentioned this under another video), I could follow a bit of what you were saying since you used English words. But I couldn't get what you meant by the commands 'use default;' and 'use demo;'. Can I ask what those were? And could you also explain the concept of internal and external tables again, please?
'use demo;' tells Hive to use the demo database; otherwise queries run against the default database.
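For reference, a minimal HiveQL sketch of what those two statements do (demo is the database name used in the video):

```sql
-- Switch the session to the demo database; unqualified table
-- names now resolve inside demo.
USE demo;
SHOW TABLES;

-- Switch back to Hive's built-in default database.
USE default;
```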
Brother please add more videos :-D waiting :D
+ahmad ali Sure, brother 😊 I am a bit busy these days, but I will try to upload soon 😊
Thanks ....
You have not shown where the external table gets stored; all the data is under HiveData only.
As per my understanding, with an external table we are exposing the data of external files through the table. That's why even after dropping the table, only the metadata gets deleted.
In an external table we are just mentioning the location where the data is stored, so the external table is just pointing at the data, and this data is stored in HDFS.
Hello Sandeep, I have a question. Big data consists of structured, unstructured, and semi-structured data. As you taught, for structured data we can use Hive, and for unstructured data we need to write MapReduce code. But how will semi-structured data be processed, since it mainly consists of Excel data? Curious to know, as you already mentioned that if datatype discrepancies arise the query will fetch NULL, and Excel data has no datatypes.
+Rohit Pandey Semi-structured data is data for which we don't have a schema, but we do know its structure. As shown in the example, CSV is semi-structured; while loading it into Hive we specify a schema, and the semi-structured data is converted to structured data. We can export CSV or TSV from Excel and load it into Hive by specifying datatypes for the fields.
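A hedged sketch of that workflow, assuming a CSV exported from Excel with three columns (the table name, columns, and path are illustrative):

```sql
-- Declare a schema for the semi-structured CSV: Hive applies
-- these datatypes when reading the file (schema-on-read).
CREATE TABLE student (
  roll_no INT,
  name    STRING,
  marks   FLOAT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';

-- Load the exported CSV; fields that don't parse as the declared
-- datatype come back as NULL when queried.
LOAD DATA LOCAL INPATH '/home/cloudera/student.csv'
INTO TABLE student;
```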
This error comes up when I create a table; I am using Cloudera Manager.
NoViableAltException(26@[1750:103: ( tableRowFormatMapKeysIdentifier )?])
What is the difference between LOAD DATA LOCAL INPATH and the hdfs dfs -put command?
Where is the practical video on Sqoop? Please share the link in the comment section.
The external table is created, but it is not showing in HDFS. Please help me.
Hi Sir, Please make a video for flume too.
Sir, while creating an internal table, if we also give a LOCATION, where will my data get stored? Will it be in the /user/hive/warehouse directory, or will it point to the given location? I have some doubts about this; please clarify.
The LOCATION clause overrides the default location (/user/hive/warehouse). So when we specify LOCATION '/hdfs_directory' in our CREATE TABLE command, all the files we import into that table will be copied into the hdfs_directory we specified.
There is still a difference between external and internal tables: when we drop an internal table, the data (data + metadata) gets deleted no matter where it is stored. That is not the case with an external table; only the metadata is deleted when we drop it.
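A hedged HiveQL sketch of the drop behaviour described above (the table name and columns are illustrative; the path follows the '/hdfs_directory' example from the reply):

```sql
-- Internal (managed) table with a custom LOCATION: imported files
-- are copied under /hdfs_directory instead of /user/hive/warehouse.
CREATE TABLE employee_managed (
  id   INT,
  name STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION '/hdfs_directory/employee_managed';

-- Because the table is managed, DROP deletes both the metadata
-- and the files under the custom LOCATION.
DROP TABLE employee_managed;
```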
i want to work on big data and hadoop, could u suggest to me ?
I created the database and used it too. My table got created, and I ran this command: LOAD DATA LOCAL INPATH '/home/cloudera/employee_data.csv' INTO TABLE employee_data; After it ran, when I ran the command to show the table, the details showed NULL NULL.
What do I need to do next?
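One common cause of those NULLs is creating the table without declaring the CSV delimiter, so Hive cannot split the rows into columns. A hedged sketch of a matching table definition (the column names and types are illustrative; only the path and table name come from the comment):

```sql
-- Declare the comma delimiter so each CSV field maps to a column;
-- without FIELDS TERMINATED BY ',' Hive uses its default \001
-- delimiter, the line is not split, and columns typically show
-- up as NULL.
CREATE TABLE employee_data (
  id     INT,
  name   STRING,
  salary FLOAT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';

LOAD DATA LOCAL INPATH '/home/cloudera/employee_data.csv'
INTO TABLE employee_data;
```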
Hbase and SQOOP ki video bhi upload kr do bhai g
Sir, these files student and student1 are only a few bytes in size, but the blocks being created are in MB, so isn't memory being wasted? And will the free space in those blocks ever be freed?
And does this also mean that even if a file is smaller than the block size, a block of the full block size is created?
File blocks are created only as large as the file itself. You might see 128 MB blocks in the browser window; for a detailed description of a particular file and its blocks, execute the following command:
$hadoop fsck /Demo/student -files -blocks
After this command runs, the average block size is shown next to the total block size, and you won't see 128 MB there.
Technological Geeks Hindi OK, I will catch up... and thanks.
Sir, can you also provide some videos for Spark, or a link...
If you have one, please share.
+Isha Saini I will be uploading a video series on Spark after completing Hadoop 😊
I suggest you have a look at the official Spark documentation:
spark.apache.org/documentation.html
Sir, you used start-dfs.sh and start-yarn.sh, but if we use start-all.sh instead, we get the warning 'This script is deprecated. Instead use start-dfs.sh and start-yarn.sh'. Why does this happen?
+Piyush Ghildiyal Because that command has become outdated and may not be present in the next version, so users should use the updated commands instead 😊
In Hadoop 1.x the command start-all.sh was used. In the upgraded Hadoop 2.x, which includes YARN, it was replaced by start-dfs.sh and start-yarn.sh, so that the HDFS daemons and the YARN daemons can be run separately and efficiently.
Now, how do I stop Hive?
exit;
@sandeep What are those "~" files for?
+swapnil pal Those are just temporary files.
Hi, friend,
please send me links to some videos for system administrator or Linux administrator.
+K.G Choudhry Sure, brother 😊 Most of Edureka's videos are available on UA-cam.
Still, if you want a certification, I would suggest joining an institute. And share your email ID; if I find the practical videos, I will share them 😊
kailashchoudhry23@gmail.com
Ultimate explanation