Rahul Upadhya
Spark Scala Tutorial: Data Transformation with Split and concat_ws Functions | Full Name Composition
Welcome to our Spark Scala tutorial series! 🚀 In this video, we'll explore advanced data transformation using Spark's split and concat_ws functions. Our focus is on the "name_det.json" dataset, where we aim to enrich the data by adding the middle name between the first and last names. Join us as we demonstrate step-by-step how to achieve this using Spark Scala.
🎓 Learn and code along with us! Don't forget to like, share, and subscribe for more Spark Scala tutorials and practical examples.
👉 Stay tuned for upcoming videos where we'll explore more advanced data manipulation techniques and Spark DataFrame functionalities!
name_det.json:
{"name":"Rahul Dravid","mname":"Sharad"},
{"name":"Sachin Tendulkar","mname":"Ramesh"},
{"name":"Ricky Ponting","mname":"Thomas"},
{"name":"Sourav Ganguly","mname":"Chandidas"},
{"name":"Mahendra Singh","mname":"Dhoni"}
81 views

Videos

Spark Scala Tutorial: Quick DataFrame and RDD Creation | Sample Records Testing
44 views · 7 months ago
Welcome to another Spark Scala tutorial! 🚀 In this video, we'll explore the efficient use of createDataFrame and parallelize methods in Spark to quickly create DataFrames and RDDs from sample records. This is incredibly useful when testing your logic with a small dataset. Join us as we demonstrate step-by-step how to create a DataFrame and RDD from the provided sample records in #spark. 🎓 Learn...
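A minimal sketch of the two approaches, assuming a SparkSession named spark; the sample records here are placeholders, not the ones from the video:
import spark.implicits._

// A couple of throwaway records for quickly testing logic
val records = Seq((1, "Rahul"), (2, "Sachin"))

// DataFrame straight from the local collection
val testDF = records.toDF("id", "name")
testDF.show()

// createDataFrame works on the same Seq as well
val testDF2 = spark.createDataFrame(records).toDF("id", "name")

// RDD from the same records via parallelize
val testRDD = spark.sparkContext.parallelize(records)
testRDD.collect().foreach(println)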
Spark Scala Tutorial: Handling Multi-Line Data with multiLine Option | Accurate DataFrame Creation
69 views · 8 months ago
Welcome to our Spark Scala tutorial series! 🚀 In this video, we'll address a unique data handling challenge using the multiLine option in Spark's read API. Our focus is on the "details.csv" dataset, where the 'address' field is spread across multiple lines. Join us as we demonstrate how to accurately read and capture this multi-line data into a DataFrame. 🎓 Learn and code along with us! Don't f...
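A minimal sketch, assuming a header row and quoted multi-line address values; the path is illustrative:
// Without multiLine, Spark treats every physical line as a record;
// with it, quoted newlines inside "address" stay part of the value
val detailsDF = spark.read
  .option("header", "true")
  .option("multiLine", "true")
  .csv("/data/details.csv")

detailsDF.show(false)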
Spark Scala Tutorial: Exploding and Splitting Data for Multi-Value Columns
50 views · 8 months ago
Welcome to another Spark Scala tutorial! 🚀 In this video, we'll tackle a common data transformation challenge using Spark SQL functions explode and split. Our focus is on the "choice.csv" dataset, where user preferences for sweets are stored as comma-separated values. Join us as we demonstrate how to use explode and split to unravel these preferences and create a more structured representation....
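A minimal sketch, assuming the comma-separated preferences live in a column named sweets (the real column name may differ); the path is illustrative:
import org.apache.spark.sql.functions._

val choiceDF = spark.read.option("header", "true").csv("/data/choice.csv")

// split turns "jalebi,laddu" into an array; explode emits one row per element
val explodedDF = choiceDF.withColumn("sweet", explode(split(col("sweets"), ",")))
explodedDF.show(false)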
Spark Scala Tutorial: Text Transformation with Translate Function | Replace Characters
57 views · 8 months ago
Welcome to our Spark Scala tutorial series! 🚀 In this video, we'll tackle a unique text transformation challenge using the powerful translate function in Spark SQL. Our focus is on the "jalebi.txt" dataset, where characters Ȑ and ş are represented differently. Join us as we demonstrate how to leverage the translate function to replace these special characters and enhance readability. 🎓 Learn an...
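A minimal sketch, assuming the goal is mapping Ȑ to R and ş to s (the exact replacements are an assumption); the path is illustrative:
import org.apache.spark.sql.functions._

val jalebiDF = spark.read.text("/data/jalebi.txt")

// translate replaces each character of the 2nd argument with the character
// at the same position in the 3rd, so Ȑ -> R and ş -> s here (assumed targets)
val cleanedDF = jalebiDF.withColumn("value", translate(col("value"), "Ȑş", "Rs"))
cleanedDF.show(false)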
Spark Scala Tutorial: Extracting Substring with substring_index Function | Text Processing Example
70 views · 8 months ago
Welcome to another Spark Scala tutorial! 🚀 In this video, we'll tackle a common text processing challenge using the powerful substring_index function. Our goal is to extract valuable information from the "companies.txt" file, specifically the details preceding the keyword "Description." Join us as we demonstrate step-by-step how to achieve this using Spark Scala. 🎓 Learn and code along with us!...
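A minimal sketch, assuming each line holds the details followed by the keyword "Description"; the path is illustrative:
import org.apache.spark.sql.functions._

val companiesDF = spark.read.text("/data/companies.txt")

// substring_index keeps everything before the 1st occurrence of the delimiter
val detailsDF = companiesDF
  .withColumn("details", substring_index(col("value"), "Description", 1))
detailsDF.show(false)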
Spark Scala Tutorial: Random Sampling in DataFrames | Extracting 30% Records
99 views · 8 months ago
Welcome to our Spark Scala tutorial series! 🚀 In this video, we'll tackle a common data processing task: random sampling. Join us as we use Spark Scala to fetch a subset of records from the "fruits_india.csv" dataset. We'll then demonstrate how to efficiently write this sampled data to a table named "fruits_sample" in ORC format. 🎓 Learn and code along with us! Don't forget to like, share, and ...
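A minimal sketch of the sampling-and-write flow; the path is illustrative, and note that sample's fraction is approximate rather than an exact 30% row count:
val fruitsDF = spark.read.option("header", "true").csv("/data/fruits_india.csv")

// Random ~30% sample without replacement
val sampledDF = fruitsDF.sample(withReplacement = false, fraction = 0.3)

// Persist the sample as an ORC-backed table, per the description above
sampledDF.write.format("orc").saveAsTable("fruits_sample")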
Spark Scala Tutorial: Solving Bus Journey Data Problem | Unique Location Combinations
111 views · 9 months ago
We have a dataset, "busjourney.csv," containing columns such as source, destination, bus fare, and bus type. The task at hand is to find unique location combinations where the buses are running. Join us as we explore Spark Scala to solve this challenge efficiently. 🎓 Learn as you code and follow along! Don't forget to like, share, and subscribe for more Spark Scala tutorials and problem-solving...
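A minimal sketch, under the assumption that "unique combinations" means treating source→destination and destination→source as the same pair; the path is illustrative:
import org.apache.spark.sql.functions._

val busDF = spark.read.option("header", "true").csv("/data/busjourney.csv")

// Order each (source, destination) pair alphabetically so that A->B and
// B->A collapse to the same combination, then deduplicate
val combosDF = busDF
  .select(
    least(col("source"), col("destination")).as("loc1"),
    greatest(col("source"), col("destination")).as("loc2"))
  .distinct()
combosDF.show(false)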
Spark SQL Mastery: How to Format Amounts with 'format_number'
148 views · 9 months ago
Welcome to another Spark SQL tutorial! In this video, we tackle the challenge of formatting the 'total_profit' column in a dataset ('profits.csv') using the powerful 'format_number' function in Spark. 📋 Problem Statement: We explore an input dataset with columns 'year,' 'month,' and 'total_profit.' Our mission is to format the 'total_profit' column by adding commas and prefixing it with the Rup...
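A minimal sketch, assuming the truncated "Rup..." above refers to the Rupee symbol and that two decimal places are wanted; the path is illustrative:
import org.apache.spark.sql.functions._

val profitsDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/profits.csv")

// format_number inserts thousands separators; concat prefixes the symbol
val formattedDF = profitsDF.withColumn("total_profit",
  concat(lit("₹"), format_number(col("total_profit"), 2)))
formattedDF.show(false)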
Mastering Spark SQL Functions: Filtering and Row_Number | Spark Quiz Data Analysis
137 views · 9 months ago
In this Spark tutorial, dive into the world of Spark SQL functions as we explore practical applications using real-world quiz data. 🚀 Learn how to use Spark functions like 'filter' and 'row_number' to extract meaningful insights from your dataset.
Input Dataset (quiz.csv) content:
quizDate, empId, empName, result, score
2022-04-01, 45, Ram Singh, Fail, 20
2022-04-02, 49, Kiran Deol, Pass, 87
20...
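A minimal sketch of combining filter with row_number; the specific insight chosen here (each employee's best passing score) is an illustrative assumption, not necessarily the one from the video, and clean header names are assumed:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val quizDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/quiz.csv")

// Rank each employee's passing attempts by score, then keep the best one
val w = Window.partitionBy("empId").orderBy(col("score").desc)
val bestPassDF = quizDF
  .filter(col("result") === "Pass")
  .withColumn("rn", row_number().over(w))
  .filter(col("rn") === 1)
  .drop("rn")
bestPassDF.show(false)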
Spark SQL Translate: Generating Unique UserIDs with Vowels Removed | Spark Problem-Solving
74 views · 9 months ago
Welcome to another episode of our Spark Problem-Solving series! 🚀 In this video, we tackle the challenge of generating unique UserIDs with a twist - by removing vowels from the 'name' column and appending a unique 5-digit number. This Spark SQL tutorial showcases the powerful translate function for efficient string manipulation. PFB the content of "user_det.csv":
name,place,age
Kishan,Jaipur,54...
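A minimal sketch; the random 5-digit suffix is one way to read "unique" here, and rand() does not strictly guarantee uniqueness, so treat this as an assumption:
import org.apache.spark.sql.functions._

val userDF = spark.read.option("header", "true").csv("/data/user_det.csv")

// translate with an empty replacement string simply deletes the vowels;
// floor(rand()*90000)+10000 yields a number in the 10000-99999 range
val withIdsDF = userDF.withColumn("userId",
  concat(
    translate(col("name"), "aeiouAEIOU", ""),
    (floor(rand() * 90000) + 10000).cast("string")))
withIdsDF.show(false)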
Efficient Null Replacement in Apache Spark using na.fill | Spark Tutorial
137 views · 9 months ago
Welcome to my Spark tutorial series! 🚀 In this video, we'll dive into a common data cleaning scenario: replacing null values using Apache Spark's powerful na.fill method. 📋 Problem Description: We'll work with the "visit_details.csv" dataset, tackling null values in columns such as name, age, place, expenditure, and fabColor. Follow along as we replace nulls smartly: "name" nulls with "Unknown"...
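A minimal sketch of na.fill with per-column defaults; only "Unknown" for name is stated above, so the rest of the map is assumed for illustration:
val visitsDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/visit_details.csv")

// na.fill with a Map applies a different default per column
val cleanedDF = visitsDF.na.fill(Map(
  "name"        -> "Unknown",   // stated in the description
  "place"       -> "Unknown",   // assumed defaults for the remaining columns
  "expenditure" -> 0
))
cleanedDF.show(false)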
Apache Spark Databricks Setup Tutorial | Step-by-Step Guide for Beginners
143 views · 9 months ago
Welcome to the first video in our Apache Spark Problem Solving series! In this tutorial, we'll walk you through the essential steps to set up Apache Spark on Databricks. Whether you're a beginner or looking to refresh your skills, this comprehensive guide will help you get started with the powerful capabilities of Spark.
SQL Multi Join Interesting Problem | Finding the employees moved to next level
173 views · a year ago
In this SQL problem-solving session, we work on identifying which employees have moved to the next level while switching organizations, using multiple joins. Table creation & insertion script:
CREATE TABLE switch_info(
    emp_id INT,
    emp_name VARCHAR(100),
    emp_current_org VARCHAR(100),
    emp_current_role VARCHAR(100),
    emp_current_release_date DATE,
    emp_next_org VARCHAR(100),
    emp_next_role VARCHAR(100)
);...
SQL Decoding Problem | Decoding the message problem
355 views · a year ago
In this SQL problem-solving session, we work on decoding and identifying the message sent by the sender using the decoding mapping. Table creation & insertion script:
CREATE TABLE decode_details(
    input_code CHAR(1),
    output_code CHAR(1)
);
INSERT INTO decode_details VALUES ('6','f'),('0','g'),('1',' '),('2','c'),('3','!'),('4','s'),('5','y'),('8','a'),('A','k'),('B','w'),('C','j'),('D','n'),('E','l...
SQL Third highest score with Twist | SQL Problem Solving
209 views · a year ago
SQL Customer Frequent Interview Question | 3 Approaches | Customers who have not placed any order
271 views · a year ago
SQL Interview Salary Problem | Employees with salary greater than department average salary
286 views · a year ago
SQL Self Join | Finding unique matches between opponents
231 views · a year ago
SQL Pattern Generation | Generating the Right half Pyramid pattern | Recursive CTE
223 views · a year ago
SQL INNER or LEFT Join? | Finding list of reportees to manager
186 views · a year ago
Customer Orders SQL Problem | Find 3 consecutive orders within 7 days
917 views · a year ago
SQL CASE Usage in Different way | Finding number of employees hired in a quarter
193 views · a year ago
SQL Self Join | Frequent Interview Question | Employees with salary greater than their manager
259 views · a year ago
Find the total call duration | SQL Problem Solving
235 views · a year ago
SQL Duplicate Delete Interview Question | Delete duplicates with increasing id
130 views · a year ago
Find the expenditure amount in the group | SQL Problem Solving
321 views · a year ago
Multi Join SQL Problem | Employee & Manager with same number of reportee | SQL Problem Solving
149 views · a year ago
Best SQL Self Join Problem | Finding the best buy and sell date of stocks to get maximum profit
149 views · a year ago
Getting the file path, name and extension | LEFT | RIGHT | SQL Problem Solving
116 views · a year ago

COMMENTS

  • @rajinipraba.k2125 · 2 months ago

    Hi Rahul, excellent video. I have one doubt: how can we compare whether a string value is smaller or bigger? Can you explain?

  • @ComedyXRoad · 3 months ago

    funny images😄

  • @ComedyXRoad · 3 months ago

    thank you

  • @melissaleigh3013 · 4 months ago

    Perfect explanation thank you 🙏🏻

  • @aradhyaambole92 · 4 months ago

    nice table

  • @akshaygangwani9875 · 4 months ago

    how to save union data into a new table?

  • @shreyasdevadas7596 · 5 months ago

    I like the way you have provided the table creation and insertion scripts. Very thoughtful, sir.

  • @NGKannur · 5 months ago

    Good one and helpful 👍🏻👍🏻 I don't know why there are so few viewers here 😔

  • @aniketjain3875 · 6 months ago

    Question - why do we pass false, as in InputDF.show(false)? I'm waiting for your response. Thanks 🙏

  • @aniketjain3875 · 6 months ago

    Thanks for this video 🎉 Question - how can we save a notebook for future use, since the cluster gets disconnected after being idle for 1-2 hours? Thanks

  • @aniketjain3875 · 6 months ago

    Thanks for the video, sir

  • @saivaibhav3331 · 8 months ago

    with current_role as (
        select a.emp_id, b.org_role_level as old_level
        from switch_info as a
        join role_info as b
          on a.emp_current_org = b.org_name
         and a.emp_current_role = b.org_role_name),
    new_role as (
        select a.emp_id, b.org_role_level as new_level
        from switch_info as a
        join role_info as b
          on a.emp_next_org = b.org_name
         and a.emp_next_role = b.org_role_name)
    select e.*, c.old_level, n.new_level
    from switch_info as e
    join current_role as c on e.emp_id = c.emp_id
    join new_role as n on e.emp_id = n.emp_id
    where c.old_level < n.new_level

  • @saivaibhav3331 · 8 months ago

    with cte1 as (
        select emp_dept_id, avg(emp_salary) as avg_salry
        from emp_info_avg
        group by emp_dept_id),
    cte2 as (
        select * from emp_info_avg)
    select a.*
    from cte2 as a
    join cte1 as b
      on a.emp_dept_id = b.emp_dept_id
     and a.emp_salary > b.avg_salry

  • @saivaibhav3331 · 8 months ago

    select a.team_name as team_1, b.team_name as team_2
    from team_info as b, team_info as a
    where a.team_name <> b.team_name;

  • @saivaibhav3331 · 8 months ago

    with cte as (
        select p.person_id, p.person_name, p.person_group_id,
               e.exp_amount as total_expenditure,
               sum(e.exp_amount) over(partition by p.person_group_id) as total_group_amount,
               (sum(e.exp_amount) over(partition by p.person_group_id)
                 / count(*) over(partition by p.person_group_id)) as equal_amt_share
        from person_info as p
        left join expenditure as e on p.person_id = e.exp_person_id)
    select *,
           (equal_amt_share - isnull(total_expenditure, 0)) as pending_share
    from cte
    order by person_group_id, pending_share

  • @saivaibhav3331 · 9 months ago

    select *,
           left(file_location, len(file_location) - CHARINDEX('\', reverse(file_location))) as [file_Path],
           right(file_location, CHARINDEX('\', reverse(file_location)) - 1) as [file_name],
           RIGHT(file_location, CHARINDEX('.', reverse(file_location))) as file_extenstion
    from file_details;

  • @logansimpson5635 · 9 months ago

    'Promosm' 💪

  • @yogeshkadam8483 · 9 months ago

    Nice one

  • @yogeshkadam8483 · 9 months ago

    Thanks for the valuable session

  • @abhishekkatroliya5846 · 9 months ago

    Hi Rahul, I just started learning data engineering stuff; your content is looking good. Thank you, looking forward to learning from you.

    • @rahul_upadhya · 9 months ago

      Hi Abhishek, thank you for your kind words 😊. Great to know you are pursuing the DE path, and wishing you the very best in your Data Engineering journey.

  • @prabhatgupta6415 · 9 months ago

    Too good, sir, bring more.

    • @rahul_upadhya · 9 months ago

      Thank you Prabhat, many more to come 😊

  • @onybus · 10 months ago

    SELECT type, sum(coalesce(time_res, 0)) as time_duration
    FROM electric_items
    GROUP by id, type;

    • @rahul_upadhya · 9 months ago

      Good try, but this will just sum the entire duration and would not consider the on-off scenario, which is required.

  • @onybus · 10 months ago

    I don’t understand English 😢

  • @soumyaranjanrout2843 · 10 months ago

    Hello sir, I got the logic behind grouping the consecutive days, but I am trying to do it with another approach, and after so many tries I wasn't able to find any such logic apart from yours with which we can group the consecutive days. If possible, could you please tell me another approach apart from the one which you discussed in the video?

  • @ishwarkokkili7646 · 11 months ago

    This also works:
    SELECT match_no,
           COALESCE(mom_player,
                    (SELECT top 1 mom_player
                     FROM mom_2004 AS prev
                     WHERE prev.match_no < m.match_no
                       AND mom_player IS NOT NULL
                     ORDER BY prev.match_no DESC)) AS mom_player,
           match_opponent
    FROM mom_2004 AS m
    ORDER BY match_no;

    • @rahul_upadhya · 9 months ago

      Awesome, great use of correlated query and coalesce 👌

  • @vijaypalmanit · 11 months ago

    Super, I saw many videos but yours is amazing

    • @rahul_upadhya · 10 months ago

      Thank you Vijay for the kind words 😊

  • @ishwarkokkili7646 · 11 months ago

    SELECT type,
           sum(case when status = 'on' then -time_res else time_res end) as time_duration
    FROM electric_items
    GROUP by id, type;

    • @onybus · 10 months ago

      Yes, the provided code is correct, and it effectively calculates the total time duration for each type of electric item in the electric_items table. It groups the data by id and type and calculates the sum of time_res for each group, using the case when statement to differentiate between the on and off states of each item. Here's a breakdown of the code:
      - The SELECT clause specifies the columns to be retrieved from the electric_items table; in this case, type and time_duration.
      - The case when expression inside the sum() aggregation function handles the on and off states of the status column. For on items it subtracts the time_res value to account for the negative duration, while for off items it adds the time_res value to reflect the positive duration.
      - The GROUP by clause groups the data by id and type to ensure that the sum() operation is performed separately for each group of items.
      Overall, the provided code is concise and efficient in calculating the total time duration for each type of electric item. It avoids unnecessary complexity and clearly showcases the intended operation.

    • @rahul_upadhya · 9 months ago

      @@onybus Thanks for your detailed breakdown! 😊

  • @dasoumya · a year ago

    First time seeing this type of case statement in SQL Server... learnt something new today... thank you so much, Rahul, for this wonderful question.

    • @rahul_upadhya · a year ago

      Thank you Soumya, happy to know that 😊

  • @dasoumya · a year ago

    Hi Rahul! This is my approach:
    with exp as (
        select exp_person_id, sum(exp_amount) as exp_amount
        from expenditure
        group by exp_person_id)
    select p.person_id, p.person_name, p.person_group_id,
           e.exp_amount as total_expenditure,
           sum(e.exp_amount) over(partition by person_group_id) as total_group_amount,
           avg(coalesce(e.exp_amount, 0)) over(partition by person_group_id) as equal_amount_share,
           avg(coalesce(e.exp_amount, 0)) over(partition by person_group_id)
             - coalesce(e.exp_amount, 0) as pending_share
    from person_info p
    left join exp e on p.person_id = e.exp_person_id

  • @higiniofuentes2551 · a year ago

    I like the way you use the CTE structure!

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

  • @higiniofuentes2551 · a year ago

    Do these functions exist in other databases?

    • @rahul_upadhya · a year ago

      In other databases the function names might differ, but equivalents providing similar functionality are available.

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

    • @rahul_upadhya · a year ago

      Thank you, happy to know that you found it useful 😊.

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

    • @rahul_upadhya · a year ago

      Thank you, happy to know that you found it useful 😊.

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

  • @higiniofuentes2551 · a year ago

    And what if the first name is sometimes composite, like Johann Sebastian, or the last name is composite too, like Perez Cruz? Thank you!

    • @rahul_upadhya · a year ago

      The problem solved here is for non-composite names. For composite name scenarios, we can utilize the string functions to place the mname either just after the first word or just before the last word of the name, or follow business logic based on known composite last names to place the mname just before those.

  • @higiniofuentes2551 · a year ago

    All the functions presented here are for MS SQL; where can we find them for other databases like DB2, SQLite etc.? Thank you!

    • @rahul_upadhya · a year ago

      Good question. For other databases, we can go through the documentation to find the required functions.
      Link for DB2 functions: www.ibm.com/docs/en/db2-for-zos/11?topic=functions-array-agg
      Link for SQLite functions: www.sqlite.org/lang_corefunc.html

  • @higiniofuentes2551 · a year ago

    Thank you for this very useful video!

  • @meettoraju · a year ago

    Nice sir

  • @hiralalpatra500 · a year ago

    select itemid, itemname,
           sum(case when purchase_month = 'january' then itemquantity end) as january,
           sum(case when purchase_month = 'febuary' then itemquantity end) as febuary,
           sum(case when purchase_month = 'march' then itemquantity end) as march,
           sum(case when purchase_month = 'april' then itemquantity end) as april,
           sum(case when purchase_month = 'may' then itemquantity end) as may,
           sum(case when purchase_month = 'june' then itemquantity end) as june
    from purchase_2019
    group by itemid, itemname

  • @raghur2074 · a year ago

    select *
    from (
        SELECT PLAYER_ID, PLAYER_NAME, PLAYER_SCORE, MATCH_DATE,
               ROW_NUMBER() OVER(PARTITION BY PLAYER_ID ORDER BY PLAYER_SCORE DESC) AS RN,
               ROW_NUMBER() OVER(PARTITION BY PLAYER_ID ORDER BY PLAYER_SCORE ASC) AS RNK
        FROM MATCH_SCORE) x
    where rn = 3 or rnk = 1

  • @yashmishra4069 · a year ago

    Here is my solution -
    with cte as (
        select *,
               LAG(result) over (partition by empid order by quizdate) as previous,
               LEAD(result, 1, result) over (partition by empid order by quizdate) as next_result
        from quiz)
    select *,
           (case when previous IS NULL and next_result = 'Fail' then 'Valid'
                 when previous = 'Fail' and next_result = 'Fail' then 'Valid'
                 when previous = 'Pass' and next_result = 'Fail' then 'Invalid'
                 when previous = 'Fail' and next_result = 'Pass' then 'Valid'
                 when previous IS NULL and next_result = 'Pass' then 'Valid'
                 else 'Invalid' end) as result
    from cte
    order by empid desc

  • @yashmishra4069 · a year ago

    Hello sir, I have used the LAG function to solve this problem, but I feel my solution is hard-coded. I tried to use a recursive CTE by giving an IS NULL condition and populating until it's not NULL. Can you please solve it using LAG and a recursive CTE if possible?
    with cte as (
        select *, LAG(mom_player) over (order by match_no) as new_player
        from mom_2004),
    cte2 as (
        select match_no,
               (case when mom_player IS NULL then new_player else mom_player end) as mom_player,
               match_opponent
        from cte),
    cte3 as (
        select *, LAG(mom_player, 1, mom_player) over (order by match_no) as new_player
        from cte2)
    select match_no,
           (case when mom_player IS NULL then new_player else mom_player end) as mom_player,
           match_opponent
    from cte3

    • @rahul_upadhya · 9 months ago

      Great approach! 😊 Using both the LAG function and a recursive CTE demonstrates your versatility in solving problems. Keep up the good work! 👍

  • @yashmishra4069 · a year ago

    Here is my solution; I tried to solve the two cases separately and then used union all to combine them:
    with cte as (
        select *,
               row_number() over (partition by player_name order by player_score desc) as rw,
               COUNT(1) over (partition by player_name) as total_innings
        from match_score),
    cte_1 as (select * from cte where rw = 3 and total_innings >= 3),
    cte_2 as (select * from cte where rw = total_innings and total_innings <= 2),
    cte_3 as (select * from cte_1 union all select * from cte_2)
    select * from cte_3 order by player_id

    • @rahul_upadhya · 9 months ago

      Yash - Your engagement and contribution are much appreciated. 👏

  • @hiralalpatra500 · a year ago

    with cte as (
        select *,
               dense_rank() over(partition by player_name order by player_score) as rank_
        from match_score),
    cte2 as (
        select player_name,
               max(case when rank_ = 3 then player_score
                        when rank_ = 1 then player_score end) as player_score
        from cte
        group by player_name)
    select m.player_id, m.player_name, m.player_score, max(m.match_date) as date_
    from match_score as m
    right join cte2 as c on m.player_score = c.player_score
    group by m.player_id, m.player_name, m.player_score

    • @rahul_upadhya · 9 months ago

      Keep up the great work! 😊👍

  • @yashmishra4069 · a year ago

    Thank you sir, for such scenarios. Here is my solution using a join and a CTE:
    with cte as (
        select e.*, b.emp_salary as manager_salary
        from emp_details as e
        inner join emp_details as b on e.emp_mgr_id = b.emp_id)
    select emp_id, emp_name, emp_location, emp_salary, emp_mgr_id
    from cte
    where emp_salary > manager_salary

  • @aryaparashar3305 · a year ago

    Best Playlist I have ever encountered👏

  • @yashmishra4069 · a year ago

    Sir, thank you for creating such scenarios, please keep them coming. Meanwhile, here is my solution:
    with cte as (
        select *, SUM(weight) over (order by queue_position) as cumulative
        from user_queue)
    select top 1 *
    from cte
    where cumulative < 400
    order by cumulative desc