DEwithDhairy
day 11 : american express scenario based interview questions and answers in pyspark
# Create DataFrame Code
friends_data = [(1, 2),
                (1, 3),
                (1, 4),
                (2, 1),
                (3, 1),
                (3, 4),
                (4, 1),
                (4, 3)]
friend_schema = "user_id int, friend_id int"
friends_df = spark.createDataFrame(data=friends_data, schema=friend_schema)

likes_data = [
    (1, 'A'),
    (1, 'B'),
    (1, 'C'),
    (2, 'A'),
    (3, 'B'),
    (3, 'C'),
    (4, 'B')
]
like_schema = "user_id int, page_id string"
likes_df = spark.createDataFrame(data=likes_data, schema=like_schema)

display(friends_df)
display(likes_df)
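A possible approach (a hedged sketch; the expected output is not spelled out in this listing, but the data suggests recommending pages that a user's friends like and the user has not liked yet): join friends_df to likes_df on the friend's user_id to collect the pages each user's friends like, then use a left anti join against likes_df to drop the pages the user already likes.

from pyspark.sql.functions import col

# Pages liked by each user's friends (join on the friend's user_id).
friend_likes_df = (
    friends_df.alias("f")
    .join(likes_df.alias("l"), col("f.friend_id") == col("l.user_id"))
    .select(col("f.user_id"), col("l.page_id"))
)

# Drop pages the user already likes, then de-duplicate.
recommendations_df = (
    friend_likes_df
    .join(likes_df, on=["user_id", "page_id"], how="left_anti")
    .distinct()
)

recommendations_df.show()

For the sample data above this would yield (2, B), (2, C), (3, A), (4, A) and (4, C); user 1 already likes every page their friends like.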
Need Help ? Connect With me 1:1 - topmate.io/dewithdhairy
Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/
pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0.html
top interview question and answer in pyspark :
ua-cam.com/play/PLqGLh1jt697yISVOi54qPLRKX1ExluhY2.html&si=BK_7MkOr0SnS_p6s
PySpark Installation and Setup : ua-cam.com/video/jO9wZGEsPRo/v-deo.htmlsi=WVktJl4eh0mN3P9X
DSA In Python Interview Series : ua-cam.com/play/PLqGLh1jt697wQTamFvXx_Odlm-Wg3zbxq.html&si=CAiVdcY4A7CEOKlO
PySpark Interview Series : ua-cam.com/play/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0.html&si=-JG6S1LyZzjDZyPB
Pandas Interview Series : ua-cam.com/play/PLqGLh1jt697yabH8-hRdV8Y5nzIEHDo29.html&si=1bwfHNeKLvcUFFXX
SQL Interview Series : ua-cam.com/play/PLqGLh1jt697xtgiGwGUTFpOctT82ANdJZ.html&si=fsF6PkJiStf9_Dh-
Your Queries :
===========
american express scenario based interview questions and answers in pyspark
american express online assessment
american express interview questions and answers
american express interview questions and answers for data engineer
#pyspark #americanexpress #dataanalytics #dataengineers #youtube #coding #interview #faang
Views: 405

Videos

google sql scenario based interview questions and answers | sql interview questions and answers
Views: 840 · a month ago
to_tsvector function in postgresql ts_stat function in postgresql Create table Statement : create table google_files( file_name varchar, content varchar ); Insert Statement : insert into google_files(file_name , content) values('file1.txt', 'Google Uses SQL.') , ('file2.txt','Google Uses SQL and PySpark to fetch the Data.'), ('file3.txt','Google Uses NoSQL DataBase and PySpark for processing of...
tiger analytics python interview questions and answers | dsa for data engineer |dsa for data science
Views: 752 · a month ago
tiger analytics python interview questions and answers tiger analytics dsa interview questions and answers python interview questions and answers dsa for data engineer dsa for data science dsa for data analyst Need help ? Connect with me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_...
leetcode 75 | faang interview questions and answers | dsa for data engineer | dsa for data science
Views: 383 · a month ago
python interview questions and answers dsa for data engineer dsa for data science dsa for data analyst Need help ? Connect with me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0.html DSA In Python Interview Series : ua-cam.com/play/PLqGLh1jt697wQTamFvXx_Odlm-Wg3zbxq.html...
day 10 | fractal scenario based interview questions and answers in pyspark | online assessment
Views: 921 · a month ago
fractal scenario based interview questions and answers in pyspark fractal online assessment fractal analytics online assessment fractal interview questions and answers # Create DataFrame Code king_data = [ (1, 'Robb Stark', 'House Stark'), (2, 'Joffrey Baratheon', 'House Lannister'), (3, 'Stannis Baratheon', 'House Baratheon'), (4, 'Balon Greyjoy', 'House Greyjoy'), (5, 'Mace Tyrell', 'House Ty...
day 9 | meta scenario based interview questions and answers in pyspark | popularity percentage
Views: 516 · a month ago
Create Statement : # creating the dataframe data = [ (1,5), (1,3), (1,6), (2,1), (2,6), (3,9), (4,1), (7,2), (8,3) ] schema ="user1 int, user2 int" df = spark.createDataFrame(data = data , schema = schema) df.show() Need Help ? Connect With me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk...
meta scenario based interview questions and answers in sql | popularity percentage | #facebook #sql
Views: 647 · a month ago
Problem Statement : platform.stratascratch.com/coding/10284-popularity-percentage?code_type=1 Need Help ? Connect With me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0.html top interview question and answer in pyspark : ua-cam.com/play/PLqGLh1jt697yISVOi54qPLRKX1ExluhY2...
apple | python interview questions and answers | dsa for data engineer | dsa for data science
Views: 476 · a month ago
apple python interview questions and answers dsa for data engineer dsa for data science dsa for data analyst Need help ? Connect with me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0.html DSA In Python Interview Series : ua-cam.com/play/PLqGLh1jt697wQTamFvXx_Odlm-Wg3zbx...
apple | python interview questions and answers | dsa for data engineer | dsa for data science
Views: 421 · a month ago
apple python interview questions and answers dsa for data engineer dsa for data science dsa for data analyst Need help ? Connect with me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0.html DSA In Python Interview Series : ua-cam.com/play/PLqGLh1jt697wQTamFvXx_Odlm-Wg3zbx...
sql | apple interview question and answer | sql scenario based interview questions and answers #sql
Views: 445 · a month ago
Problem Statement : Create table code : create table flight ( source varchar(10), destination varchar(10) ); insert into flight(source, destination) values('A','B'),('C','D'),('B','A'),('D','C'); Need help ? Connect with me 1:1 - topmate.io/dewithdhairy Let's connect on LinkedIn : www.linkedin.com/in/dhirajgupta141/ pyspark 30 days challenge : ua-cam.com/play/PLqGLh1jt697xzk9LCLL_wFPDZi_xa0xR0....
lecture-1 | oops in python | classes , objects , methods and pillars of oops | #python
Views: 275 · a month ago
Welcome to our comprehensive introduction to Object-Oriented Programming (OOP) in Python! In this video, we will delve into the fundamental concepts of OOP and how to implement them in Python. Whether you're a beginner or looking to refresh your knowledge, this tutorial is designed to give you a solid foundation. In this video, you will learn: What is OOP?: Understand the definition and importa...
day 8 | capgemini interview question | pyspark scenario based interview questions and answers
Views: 1.3K · 2 months ago
pyspark scenario based interview questions and answers capgemini interview question and answers Create DataFrame : lift_data = [ (1,300), (2,350) ] lift_schema = "id int , capacity_kg int" lift_df = spark.createDataFrame(data = lift_data , schema = lift_schema) lift_passengers_data = [ ('Rahul',85,1), ('Adarsh',73,1), ('Riti',95,1), ('Viraj',80,1), ('Vimal',83,2), ('Neha',77,2), ('Priti',73,2),...
lecture 3 : mastering exception handling in python | creating custom exception classes in python
Views: 104 · 2 months ago
Welcome to my detailed tutorial on creating custom exception classes in Python! In this video, we'll dive into the following key topics: 1. Creating Custom Exception Classes: Learn step-by-step how to define your own exception classes in Python. 2. Modifying the Constructor: Understand how to customize the constructor of the base Exception class to suit your needs. 3. Practical Examples: Follow...
lecture 2 : mastering exception handling in python | raise keyword in python explained #python
Views: 91 · 2 months ago
Welcome to our comprehensive tutorial on the raise keyword in Python! Whether you're a beginner or looking to deepen your understanding of Python exceptions, this video is for you. In this tutorial, we will cover: 1. What the raise keyword is and why it's used. 2. How to raise standard exceptions in Python. 3. Customizing exceptions with custom error messages. 4. Creating and raising custom exc...
lecture 1 : mastering exception handling in python | try, except, else, finally explained
Views: 220 · 2 months ago
Welcome to our Python tutorial on Exception Handling! In this video, we'll dive deep into the essentials of managing errors and exceptions in your Python programs. Whether you're a beginner or looking to refine your skills, this tutorial will provide a comprehensive understanding of how to handle exceptions effectively. What you'll learn: 1. Introduction to Exception Handling: Understand what e...
lecture 2 | project 1 | pandas dataframe to excel report using xlsx writer | #xlsxwriter | #pandas
Views: 127 · 2 months ago
lecture 2 | project 1 | pandas dataframe to excel report using xlsx writer | #xlsxwriter | #pandas
lecture-1 | pandas dataframe to excel report using xlsxwriter | xlsxwriter | formatting | #excel
Views: 106 · 2 months ago
lecture-1 | pandas dataframe to excel report using xlsxwriter | xlsxwriter | formatting | #excel
sql scenario based interview questions and answers | sql | interview
Views: 604 · 2 months ago
sql scenario based interview questions and answers | sql | interview
day 7 | pyspark scenario based interview questions and answers
Views: 555 · 2 months ago
day 7 | pyspark scenario based interview questions and answers
day 6 | fill null values | pyspark scenario based interview questions and answers
Views: 1.1K · 3 months ago
day 6 | fill null values | pyspark scenario based interview questions and answers
day 5 | salary report | pyspark scenario based interview questions and answers
Views: 603 · 3 months ago
day 5 | salary report | pyspark scenario based interview questions and answers
sql scenario based interview questions and answers | sql | interview
Views: 819 · 3 months ago
sql scenario based interview questions and answers | sql | interview
day 4 | ipl winning streak| pyspark scenario based interview questions and answers
Views: 741 · 4 months ago
day 4 | ipl winning streak| pyspark scenario based interview questions and answers
day 3 | consecutive days | pyspark scenario based interview questions and answers
Views: 1.4K · 4 months ago
day 3 | consecutive days | pyspark scenario based interview questions and answers
day 2 | calculate percentage increase | pyspark scenario based interview questions and answers
Views: 1.1K · 4 months ago
day 2 | calculate percentage increase | pyspark scenario based interview questions and answers
day1 | remove redundant pairs | pyspark scenario based interview questions and answers | #pyspark
Views: 1.9K · 4 months ago
day1 | remove redundant pairs | pyspark scenario based interview questions and answers | #pyspark
remove duplicates from sorted array 2 | leetcode 80 | dsa for data engineer | dsa for data analyst
Views: 345 · 4 months ago
remove duplicates from sorted array 2 | leetcode 80 | dsa for data engineer | dsa for data analyst
bank account summary | faang interview questions and answers | #facebook #amazon #netflix #google
Views: 1.2K · 4 months ago
bank account summary | faang interview questions and answers | #facebook #amazon #netflix #google
premier league stats | google | ZS interview questions and answer sql #sql #interview
Views: 574 · 4 months ago
premier league stats | google | ZS interview questions and answer sql #sql #interview
highest grade for each student | microsoft interview questions and answer sql | #interview
Views: 699 · 4 months ago
highest grade for each student | microsoft interview questions and answer sql | #interview

COMMENTS

  • @kalaivanik8872
    @kalaivanik8872 5 hours ago

    MySQL:
    with per_month as (
        select year(created_at) as yr,
               month(created_at) as mon,
               sum(value) as total_per_mon
        from amazon_monthly_rev
        group by year(created_at), month(created_at)
    ),
    prev_revenue as (
        select *, lag(total_per_mon, 1, 0) over () as pre
        from per_month
    )
    select concat(yr, '-', mon) as yearMonth,
           round((total_per_mon - pre) / pre * 100, 2) as rev_diff
    from prev_revenue;

  • @kalaivanik8872
    @kalaivanik8872 6 hours ago

    with percentage as (
        select student_id, round(sum(marks) / count(1), 0) as percentage
        from marks
        group by student_id
    )
    select s.student_id, s.name, p.percentage,
           case when p.percentage >= 70 then 'Distinction'
                when p.percentage between 60 and 69 then 'First Class'
                when p.percentage between 50 and 59 then 'Second Class'
                when p.percentage between 40 and 49 then 'Third Class'
                when p.percentage <= 39 then 'Fail'
           end as Result
    from student s
    join percentage p on s.student_id = p.student_id;

  • @kalaivanik8872
    @kalaivanik8872 7 hours ago

    with first_joindate as (
        select *, min(join_date) over (partition by user_id) as first
        from user_data
    ),
    new_user_count as (
        select join_date,
               sum(case when join_date = first then 1 else 0 end) as new_user,
               count(1) as total_user
        from first_joindate
        group by join_date
    )
    select join_date, new_user,
           case when new_user > 0 then round((new_user / total_user) * 100, 0) else 0 end as percentage_new_user
    from new_user_count;

  • @kalaivanik8872
    @kalaivanik8872 a day ago

    with cte as (
        select *,
               (total_sales_revenue - lag(total_sales_revenue, 1, 0) over (partition by product_id)) as diff
        from salesq
    )
    select * from products
    where product_id = (select product_id from cte group by product_id having min(diff) > 0)

  • @kalaivanik8872
    @kalaivanik8872 a day ago

    create table pop(user1 int, user2 int);
    insert into pop values (1,5), (1,3), (1,6), (2,1), (2,6), (3,9), (4,1), (7,2), (8,3);

    with all_pairs as (
        select user1, user2 from pop
        union
        select user2, user1 from pop
    )
    select user1 as user,
           round(count(user1) / (select count(distinct user1) from all_pairs) * 100, 2) as per
    from all_pairs
    group by user1;

  • @kalaivanik8872
    @kalaivanik8872 a day ago

    select a.source,a.destination from flight a join flight b on a.source < b.source and a.destination = b.source;

  • @kalaivanik8872
    @kalaivanik8872 2 days ago

    with report as (
        select success_date as date, 'succeeded' as status from succeeded
        union
        select fail_date as date, 'fail' as status from failed
    ),
    cte2 as (
        select *,
               row_number() over (partition by status order by date) as rn,
               (day(date) - row_number() over (partition by status order by date)) as diff
        from report
        where date >= '2019-01-01'
    )
    select case when status = 'succeeded' then 'succeeded' else 'failed' end as period_state,
           min(date) as start_date,
           max(date) as end_date
    from cte2
    group by diff, status

    • @DEwithDhairy
      @DEwithDhairy a day ago

      Great approach thanks for sharing

  • @koushiksinha3007
    @koushiksinha3007 2 days ago

    Hey man, thank you for your videos; I have been following you for a long time. I have a question: in coding interviews for Data Engineers, do they provide LeetCode-style predefined code, or do we have to write the entire code including the input handling?

    • @DEwithDhairy
      @DEwithDhairy 2 days ago

      Interview questions are not in LeetCode format. The interviewer just gives you the input and output; you need to write the core logic and then dry-run it on the given input to check that you get the desired output.

  • @sureshrecinp
    @sureshrecinp 2 days ago

    Thank you for sharing very useful info.

  • @BiswajitSibun-n4b
    @BiswajitSibun-n4b 3 days ago

    The Best Video about this topic I found on YT

  • @vivekdutta7131
    @vivekdutta7131 3 days ago

    a = [1, 2, 0, 4, -1, 5, 6, 0, 0, 7, 0]
    b = []
    c = []
    for i in a:
        if i != 0:
            b.append(i)
        else:
            c.append(i)
    b.extend(c)
    print(b)

  • @vivekdutta7131
    @vivekdutta7131 3 days ago

    def operation(st):
        st_new = ""
        cnt = 0
        dr = st[0]
        print(dr)
        for i in st:
            if i == dr:
                cnt += 1
            else:
                st_new = st_new + dr + str(cnt)
                cnt = 1
                dr = i
        st_new = st_new + dr + str(cnt)
        print(st_new)

    if __name__ == "__main__":
        st = "abcabbbccaabd"
        print(st)
        operation(st)

  • @vivekdutta7131
    @vivekdutta7131 3 days ago

    a = [2, 0, 2, 1, 1, 0]
    for i in range(len(a)):
        for j in range(0, len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    print(a)

  • @vivekdutta7131
    @vivekdutta7131 3 days ago

    a = [2, 3, [10, 20, [100, 200], [2, 5]], 50]
    b = []

    def rec(n):
        for i in n:
            if isinstance(i, list):
                rec(i)
            else:
                b.append(i)
        return b

    print(rec(a))

  • @sushanthsai2078
    @sushanthsai2078 4 days ago

    Tried with a different approach:

    from pyspark.sql.functions import *

    frnds_like_df = (
        friends_df.alias("fdf")
        .join(likes_df.alias("ldf"), friends_df['friend_id'] == likes_df['user_id'], 'left')
        .select('fdf.user_id', 'ldf.page_id')
        .groupBy("user_id")
        .agg(collect_set(col('page_id')).alias('liked_array'))
    )

    user_like_df = (
        friends_df.alias("fdf")
        .join(likes_df.alias("ldf"), friends_df['user_id'] == likes_df['user_id'], 'left')
        .select('fdf.user_id', 'ldf.page_id')
        .groupBy("user_id")
        .agg(collect_set(col('page_id')).alias('user_likes_array'))
    )

    final_df = (
        frnds_like_df.join(user_like_df, ['user_id'])
        .withColumn("uncommon", array_except(col('liked_array').cast('array<string>'),
                                             col('user_likes_array').cast('array<string>')))
        .filter(size(col("uncommon")) > 0)
        .withColumn("values", explode(col("uncommon")))
        .drop(*['liked_array', 'user_likes_array', 'uncommon'])
    )
    final_df.show()

  • @ithisrinu9593
    @ithisrinu9593 4 days ago

    I really appreciate you, brother. I was encountering many issues that I could not figure out, but this video resolved all the errors. Thank you.

  • @SureshK-gr4vc
    @SureshK-gr4vc 4 days ago

    pls provide code repo for this

    • @DEwithDhairy
      @DEwithDhairy 4 days ago

      Haven't created any repo. You may write the code while going through the video.

  • @DEwithDhairy
    @DEwithDhairy 5 days ago

    We also need this piece after the filter condition:

    # Finding the unique records.
    answer_df = friend_page_concat_df.select(col("friend_id").alias("user_id"), col("page_id")).distinct()
    answer_df.show()

  • @srinivasn2646
    @srinivasn2646 7 days ago

    Thanks Man

  • @neelbanerjee7875
    @neelbanerjee7875 8 days ago

    I have a simpler solution with the first_value and last_value window functions, as below (written in Spark SQL; it can be adjusted into a PySpark script accordingly):

    %sql
    with cte as (
        select
            cust_id,
            first_value(origin) over (
                partition by cust_id order by flight_id
                range between unbounded preceding and unbounded following
            ) as origin,
            last_value(destination) over (
                partition by cust_id order by flight_id
                range between unbounded preceding and unbounded following
            ) as destination
        from flight
    )
    select distinct * from cte

  • @Sudeep-ow4pe
    @Sudeep-ow4pe 8 days ago

    Thank you so much for making these different types of pyspark questions, This is really helpful.

  • @DataEngineerPratik
    @DataEngineerPratik 9 days ago

    could you please make alternate solution in mysql ?

  • @Arnob_111
    @Arnob_111 12 days ago

    This solution only works for a specific month. This will fail if the data is scaled over several months.

    • @DEwithDhairy
      @DEwithDhairy 12 days ago

      Yes, correct. To cover that scenario, take the difference between the date and the row number to build the group key; that covers all the cases. I have covered this approach in my videos.
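      A minimal PySpark sketch of that date-minus-row_number idea, with illustrative names (activity_df, user_id and activity_date are assumptions, not the video's actual columns); every run of consecutive dates gets the same group key, even across month boundaries:

      from pyspark.sql import Window
      from pyspark.sql import functions as F

      w = Window.partitionBy("user_id").orderBy("activity_date")

      islands_df = (
          activity_df  # assumed: one row per user per active date, activity_date is a date column
          .withColumn("rn", F.row_number().over(w))
          # date minus row number stays constant within a run of consecutive dates
          .withColumn("grp", F.expr("date_sub(activity_date, rn)"))
      )

      streaks_df = (
          islands_df.groupBy("user_id", "grp")
          .agg(F.min("activity_date").alias("start_date"),
               F.max("activity_date").alias("end_date"),
               F.count("*").alias("streak_length"))
      )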

  • @kirankumarkathe7318
    @kirankumarkathe7318 13 days ago

    This worked for me :)

    import os, sys
    os.environ['PYSPARK_PYTHON'] = sys.executable
    os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable

    Thanks a lot!!

  • @shivaprasad-kn3kw
    @shivaprasad-kn3kw 14 days ago

    Solution in SQL Server:

    with CTE2 as (
        select user1 as allusers from popularity
        union all
        select user2 from popularity
    )
    select distinct allusers,
           count(allusers) / count(distinct allusers) * 100 as alluserscnt
    from CTE2
    group by allusers

  • @sankaranarayanan2319
    @sankaranarayanan2319 16 days ago

    I have an interview with Freshworks for an SDET engineer position. For the first round they told me it will be a HackerRank exam. Can you please tell me what type of questions we can expect?

    • @DEwithDhairy
      @DEwithDhairy 13 days ago

      You may expect scenario based questions like this.. All the best

  • @dibyaranjanbasuri7331
    @dibyaranjanbasuri7331 17 days ago

    I used the below approach before looking at your solution:

    with cte as (
        select unnest(string_to_array(content, ' ')) as word
        from google_files
    )
    select word, count(word) as word_count
    from cte
    where word in ('SQL', 'PySpark')
    group by word

    Thanks for the solution, I learned 2 new functions.

  • @kritika3143
    @kritika3143 18 days ago

    import pyspark
    from pyspark.sql.window import Window
    from pyspark.sql.functions import col, count, round, sum, desc, rank

    df = voting_results.groupBy(col("voter")).agg(count(col("candidate")).alias("count_total"))
    df = df.withColumn("vote_value", round(1 * 1.0 / col("count_total"), 3))
    df2 = (voting_results.join(df, on=(voting_results.voter == df.voter), how="inner")
           .select(col("candidate"), col("vote_value")))
    df2 = df2.filter(col("candidate").isNotNull())
    df2 = df2.groupby(col("candidate")).agg(round(sum(col("vote_value")), 3).alias("total_count"))
    # df2.show()
    df2 = df2.withColumn("rnk", rank().over(Window.orderBy(col("total_count").desc())))
    df2 = df2.filter(col("rnk") == 1).select("candidate")
    df2.show()

  • @amiyaroy6789
    @amiyaroy6789 22 days ago

    Instead of zip, can’t we use extend or + for the 2nd problem?

  • @saktibiswal6445
    @saktibiswal6445 26 days ago

    Thanks. I will try to get this done in MS SQL Server.

  • @ayanghosh7692
    @ayanghosh7692 27 days ago

    please share the method using self join. also the union. humble request.

  • @TrishlaSaxena-y1g
    @TrishlaSaxena-y1g a month ago

    thanks for coming up with python questions for de role

  • @anupamsharma4263
    @anupamsharma4263 a month ago

    from pyspark.sql.functions import *
    from pyspark.sql.window import Window

    schema = ['firstName', 'lastName', 'videoId', 'flagId']
    data = [('Anupam','Sharma','V1','F1'), ('Anupam','Sharma','V1','F11'), ('Shashank','Saxena','V2','F2'),
            ('Karan','Rawat','V3','F3'), ('Anupam','Sharma','V4','F4'), ('Anupam','Sharma','V5','F5'),
            ('Anupam','Sharma','V6','F6'), ('Shashank','Saxena','V7','F7'), ('Shashank','Saxena','V8','F8'),
            ('Shashank','Saxena','V9','F9'), ('Karan','Rawat','V10','F10')]
    df1 = spark.createDataFrame(data, schema)
    df1.display()

    schema2 = ['flagId', 'reviewByYT', 'reviewOutcome']
    data2 = [('F1','TRUE','Approved'), ('F2','TRUE','Approved'), ('F3','TRUE','Approved'),
             ('F4','TRUE','Removed'), ('F5','TRUE','Approved'), ('F6','TRUE','Approved'),
             ('F7','TRUE','Approved'), ('F8','FALSE',None), ('F9','TRUE','Approved'),
             ('F10','TRUE','Removed'), ('F11','TRUE','Approved')]
    df2 = spark.createDataFrame(data2, schema2)
    df2.display()

    df = (df1.join(df2, df1.flagId == df2.flagId, 'inner')
          .filter(df2.reviewOutcome == 'Approved')
          .select(concat(df1.firstName, lit(' '), df1.lastName).alias('fullName'), df1.videoId, df2.reviewOutcome)
          .distinct())
    df.display()

    dfOut = (df.groupBy(df.fullName)
             .agg(count(df.videoId).alias('approvedCount'))
             .withColumn('highestRank', dense_rank().over(Window.orderBy(col('approvedCount').desc())))
             .filter(col('highestRank') == 1)
             .select(df.fullName))
    dfOut.display()

  • @srikanthb8101
    @srikanthb8101 a month ago

    @DEwithDhairy At first, when I looked at the two DFs, I didn't understand how to define the relationship, but after careful observation of columns like attacker and defender I got it. Keep going 👍👍👍👍

  • @nikunjmistry373
    @nikunjmistry373 a month ago

    In pg u can use unnest function along with json with a delimiter of a space to get the word together

  • @vineetjain7518
    @vineetjain7518 a month ago

    understood thanks for concept

  • @sravankumar1767
    @sravankumar1767 a month ago

    Superb explanation 👌 👏 👍

  • @siddharthchoudhary103
    @siddharthchoudhary103 a month ago

    One doubt: when I do partitionBy on house and region instead of groupBy, I get 2 duplicate records. Any idea why?

    • @DEwithDhairy
      @DEwithDhairy a month ago

      Partition does not reduce the number of rows.
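      To illustrate that point with generic names (house, region and value here are illustrative, not the video's exact DataFrame): a window aggregate keeps every input row, while groupBy collapses each group to one row, so the window version needs a distinct() to match.

      from pyspark.sql import Window
      from pyspark.sql import functions as F

      df = spark.createDataFrame(
          [("Stark", "North", 10), ("Stark", "North", 20), ("Tyrell", "Reach", 5)],
          "house string, region string, value int",
      )

      # groupBy: one row per (house, region)
      df.groupBy("house", "region").agg(F.sum("value").alias("total")).show()

      # Window partitionBy: the aggregate is attached to every original row,
      # so (Stark, North) appears twice unless a .distinct() is added
      w = Window.partitionBy("house", "region")
      df.withColumn("total", F.sum("value").over(w)).select("house", "region", "total").distinct().show()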

  • @rajeshk7908
    @rajeshk7908 a month ago

    The voice is too low, other than that all good.

  • @sravankumar1767
    @sravankumar1767 a month ago

    Superb explanation 👌 👏 👍

  • @lakshayagarwal4953
    @lakshayagarwal4953 a month ago

    Is this question for freshers or experienced candidates?

    • @DEwithDhairy
      @DEwithDhairy a month ago

      This one was asked in the online assessment. You can expect this for senior-level roles.

  • @nupoornawathey100
    @nupoornawathey100 a month ago

    One piece of feedback: the volume is too low in almost all videos. The explanation pace and approach are fine.

    • @DEwithDhairy
      @DEwithDhairy a month ago

      Thanks for the feedback. Will rectify from the next video onwards. Thanks

  • @hariprasad3820
    @hariprasad3820 a month ago

    I'm a little new to these types of questions. I used another method to solve the same; can you tell me why this approach is not suited?

    select a.cust_id, a.origin, b.destination
    from (select o.cust_id, o.origin
          from travelling_details o
          where origin not in (select t.destination from travelling_details t where t.cust_id = o.cust_id)) a
    join (select o.cust_id, o.destination
          from travelling_details o
          where o.destination not in (select t.origin from travelling_details t where t.cust_id = o.cust_id)) b
      on a.cust_id = b.cust_id;

  • @IOSALive
    @IOSALive a month ago

    DEwithDhairy, nice content keep up the great content

  • @Tech.S7
    @Tech.S7 a month ago

    Thanks for the informative stuff. Instead of specifying all the conditions in the join, we can specify only one condition (I mean the AND/OR conditions are not required). It works and fetches the expected output. Cheers!!

  • @shryk0s963
    @shryk0s963 a month ago

    Check my solution using a dictionary:

    r = "aabcabbccd"
    s = r + "1"  # added 1 as a dummy character to help in the loop
    dic = {}
    dic[s[0]] = 1
    for i in range(1, len(s)):
        if s[i] in dic.keys() and s[i - 1] == s[i]:
            dic[s[i]] = dic[s[i]] + 1
        else:
            print(f"{s[i-1]}", end="")
            print(dic[s[i-1]], end="")
            del dic[s[i-1]]
            dic[s[i]] = 1

  • @prabhatgupta6415
    @prabhatgupta6415 a month ago

    Please speak a bit louder, or else use a good speaker/mic.

    • @DEwithDhairy
      @DEwithDhairy a month ago

      Noted, bro. I know the voice is a little on the low side.

  • @sravankumar1767
    @sravankumar1767 a month ago

    Nice explanation 👌 👍 👏

  • @Vishal_9120
    @Vishal_9120 a month ago

    Sir, in Case 1, why are rank and dense_rank taking reference from the salary column and not the department column?

  • @nupoornawathey100
    @nupoornawathey100 a month ago

    Using the two-pointer approach:

    lst = [1, 0, 1, 2, 5]
    m = 0
    n = len(lst) - 1
    while m <= n:
        if lst[m] == 0:
            lst[m], lst[n] = lst[n], lst[m]
        m += 1
    print(lst)