Data Engineer Mock Interview | SQL | PySpark | Project & Scenario based Interview Questions

  • Published 16 Sep 2024
  • To enhance your career as a Cloud Data Engineer, check trendytech.in/... for curated courses developed by me.
    I have trained over 20,000 professionals in the field of Data Engineering in the last 5 years.
    Want to Master SQL? Learn SQL the right way through the most sought-after course - the SQL Champions Program!
    "An 8-week program designed to help you crack the interviews of top product-based companies by developing a thought process and an approach to solving an unseen problem."
    Here is how you can register for the Program -
    Registration Link (Course Access from India): rzp.io/l/SQLINR
    Registration Link (Course Access from outside India): rzp.io/l/SQLUSD
    30 INTERVIEWS IN 30 DAYS - BIG DATA INTERVIEW SERIES
    This mock interview series is launched as a community initiative under the Data Engineers Club, aimed at aiding the community's growth and development.
    Our highly experienced guest interviewer, Ankur Bhattacharya, / ankur-bhattacharya-100... shares invaluable insights and practical advice drawn from his extensive experience, catering to aspiring data engineers and seasoned professionals alike.
    Our talented guest interviewee, Praroop Sacheti, / praroopsacheti has a remarkable approach to answering the interview questions in a very well-articulated manner.
    Links to the free SQL & Python series developed by me are given below -
    SQL Playlist - • SQL tutorial for every...
    Python Playlist - • Complete Python By Sum...
    Don't miss out - Subscribe to the channel for more such informative interviews and unlock the secrets to success in this thriving field!
    Social Media Links :
    LinkedIn - / bigdatabysumit
    Twitter - / bigdatasumit
    Instagram - / bigdatabysumit
    Student Testimonials - trendytech.in/...
    Discussed Questions: Timestamps
    1:30 Introduction
    3:29 When you are processing data with a Databricks PySpark job, what is the sink for your pipeline?
    4:58 Are you incorporating fact and dimension tables, or any schema, in your project's database design?
    5:50 What amount of data are you dealing with in your day-to-day pipeline?
    6:33 What are the different types of triggers in ADF?
    7:45 What is incremental load? How can you implement it through ADF?
    10:03 Difference between a Data Lake and a Data Warehouse?
    11:41 What is columnar storage in a data warehouse?
    13:38 What were some challenges encountered during your project, and how were they resolved? Describe the strategies implemented to optimize your pipeline.
    16:18 Optimizations related to Databricks or PySpark?
    20:41 What is a broadcast join? What exactly happens when we broadcast a table?
    23:01 SQL coding question
    35:46 PySpark coding question
    Tags
    #mockinterview #bigdata #career #dataengineering #data #datascience #dataanalysis #productbasedcompanies #interviewquestions #apachespark #google #interview #faang #companies #amazon #walmart #flipkart #microsoft #azure #databricks #jobs

COMMENTS • 44

  • @rajnarayanshriwas4653 5 months ago +3

    For incremental load, why go with MERGE or UPSERT? MERGE or UPSERT is used to implement SCD types. For incremental load, what we want is to copy newly arrived data into ADLS, for which we keep track of some reference key through which we can recognize the new data. For example, in an Order fact table, let's say it is Order_ID, which keeps increasing whenever we get a new order.
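
    A minimal PySpark sketch of that watermark idea (assuming an insert-only source table orders with an increasing order_id; the paths, JDBC options, and names here are illustrative, not from the video):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("incremental-load").getOrCreate()

    # Where previously loaded data lands in ADLS (illustrative path).
    target_path = "abfss://lake@account.dfs.core.windows.net/raw/orders"

    # 1. Find the highest order_id already landed (the watermark).
    try:
        last_id = spark.read.parquet(target_path).agg(F.max("order_id")).first()[0] or 0
    except Exception:
        last_id = 0  # first run: nothing landed yet

    # 2. Pull only rows newer than the watermark from the source.
    new_rows = (spark.read.format("jdbc")
                .option("url", "jdbc:sqlserver://<server>;databaseName=<db>")  # placeholder
                .option("query", f"SELECT * FROM orders WHERE order_id > {last_id}")
                .load())

    # 3. Append the delta; no MERGE/UPSERT needed for insert-only facts.
    new_rows.write.mode("append").parquet(target_path)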

  • @harshitgoel2985 6 months ago +7

    Please attach a link to the list of questions asked in the mock interview (in view mode) in the description.

  • @ShubhamYadav-gq6fe 6 months ago +6

    Please provide a few minutes of interview feedback at the end to help more with this.

  • @PradyutJoshi 6 months ago +5

    Good initiative. This is quite helpful on how to answer the scenario-based questions, with an example. Thank you sir, Ankur and Praroop! 🙌

  • @harish7548 1 day ago

    We need to control the flow with a cfg file for incremental data load, not MERGE or UPSERT.
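
    A small sketch of that config-driven control (the control file path and keys are illustrative; assumes an existing spark session, as on Databricks):

    import json

    CFG = "/dbfs/config/orders_load.json"  # illustrative control file

    # Read the last loaded key recorded by the previous run.
    with open(CFG) as f:
        cfg = json.load(f)  # e.g. {"last_order_id": 1042}

    # Load only the rows that arrived after that key.
    new_rows = spark.sql(
        f"SELECT * FROM src.orders WHERE order_id > {cfg['last_order_id']}")
    new_rows.write.mode("append").parquet("/mnt/raw/orders")

    # Advance the key so the next run starts from here.
    max_id = new_rows.agg({"order_id": "max"}).first()[0]
    if max_id is not None:
        cfg["last_order_id"] = max_id
        with open(CFG, "w") as f:
            json.dump(cfg, f)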

  • @Vlogs..573 6 months ago +4

    Great initiative, Sumit Sir!

    • @sumitmittal07 6 months ago +1

      Thank you. A big thanks to the people who are participating in this.

  • @3A3A11 6 months ago +6

    Sir, please make videos on topics like "someone working in Tech Support for the past 5 years and now moving to Data Engineer": what should they write in their resume, e.g. in the experience section... whether they should try as a fresher, and so on.

    • @ravichakraborty3878 6 months ago +1

      Sir, I also have the same question.

    • @Shivamyogi10 6 months ago +1

      Yes, that is very valuable. Most people are working in different roles, but those of us in support roles in the data field are interested in switching into data engineering.

    • @sumitmittal07 6 months ago +2

      Surely will release a video on this soon.

  • @yifeichen5198 6 months ago +1

    Great content! Very insightful questions and answers!

  • @WadieGamer 6 months ago +1

    Great video for new data engineers like me.

  • @BooksWala 6 months ago +1

    Please also make some videos about the kinds of problems data engineers face in their day-to-day work.

    • @sumitmittal07 6 months ago

      Noted, will bring a video on this soon.

  • @gopalgaihre9710 6 months ago +1

    Please make videos for freshers as well, because these days no one is looking for freshers for data engineering roles...

  • @NabaKrPaul-ik2oy 5 months ago

    Hi Sir, thanks for this series, very insightful. Just a query: do the majority of interviews go all the way to the coding part, or is it mostly theory only? Or is it a mix of both?

  • @RahulSaini-ng6po 6 months ago +3

    Hi folks, below is the solution to the PySpark problem written in Scala

  • @user-jg2tn1wb3d 5 months ago

    Thank you so much, Sumit sir, it's really helpful.

    • @sumitmittal07 5 months ago

      Happy to share more such informative videos for the community!

  • @salonisacheti7350 6 months ago

    Good Work Praroop ❤

  • @swapnildande4706 6 months ago

    Hi Sir, requesting you to please upload more videos on data engineer mock interviews.

    • @sumitmittal07 6 months ago

      one video daily for the next 30 days

  • @Raghavendraginka 5 months ago

    Sir, please make a complete video on SQL and mock interviews too.

    • @sumitmittal07 5 months ago

      Definitely, will be covered in the upcoming videos

  • @ashwinigadekar2956 6 months ago

    Please make an interview session for freshers.

  • @saurabhgavande6728 6 months ago

    Can you make a video for AWS cloud, like the Azure one?

  • @shivanisaini2076 5 months ago

    I want to give a mock interview.

  • @digantapurkait6231 6 months ago

    wahh

  • @karthikeyanr1171 6 months ago +2

    Solution for the PySpark problem:

    from pyspark.sql import functions as F
    from pyspark.sql.types import StringType

    # Map the location code embedded in ref_id to the full location name.
    def location_f(loc):
        if loc == 'CHN':
            return 'CHENNAI'
        elif loc == 'AP':
            return 'ANDHRA PRADESH'
        elif loc == 'HYD':
            return 'HYDERABAD'
        else:
            return loc

    re_location = F.udf(location_f, StringType())

    # ref_id looks like 'DIV-CHN_123': splitting on 'DIV-' or '_' gives
    # ['', 'CHN', '123'], so [1] is the location code and [2] is the id.
    df1 = df.withColumn('ref_id1', F.split('ref_id', 'DIV-|_')).drop('ref_id')
    df2 = (df1.withColumn('ref_id', F.col('ref_id1')[2])
              .withColumn('location', re_location(F.col('ref_id1')[1])))
    df3 = df2.select('name', 'ref_id', 'salary', 'location')
    df3.show()

    • @ArunKumar-mr7pc 3 months ago

      from pyspark.sql.functions import col, when

      # Derive LOCATION by pattern-matching the DIV- prefix of REF-ID.
      df_employee.withColumn("LOCATION",
          when(col("REF-ID").like("DIV-CHN%"), "CHN-CHENNAI")
          .when(col("REF-ID").like("DIV-HYD%"), "HYD-HYDERABAD")
          .when(col("REF-ID").like("DIV-AP%"), "AP-ANDHRA PRADESH")
          .when(col("REF-ID").like("DIV-PUNE%"), "PUNE-PUNE")).show()

  • @vishaldeshatwad8690 6 months ago

    from pyspark.sql.functions import col, split, when
    # Nest the splits: an alias defined in a select cannot be referenced
    # inside that same select, so derive 'loc' in a single expression.
    df_new = df.select(col("name"), col("refid"), col("salary"),
                       split(split("refid", "-")[1], "_")[0].alias("loc"))
    final_result_df = df_new.withColumn("location",
        when(col("loc") == "CHN", "CHENNAI")
        .when(col("loc") == "HYD", "HYDERABAD")
        .when(col("loc") == "AP", "ANDHRA PRADESH")
        .when(col("loc") == "PUN", "PUNE")).drop("loc")