Data Semantics SQL Interview Question - REPLACE and LEN Functions in SQL
- Published 15 Sep 2024
- One of the SQL questions recently asked in a Data Semantics interview.
Given an input table, we need to find the number of comma-separated values in each record.
To solve this question, we use the REPLACE() and LEN() functions. You will understand how these functions work in this video.
Let us first create input table
create table tbl_cnt (col1 int, col2 varchar(50))
Insert the records
insert into tbl_cnt values (1, 'a,b,c'),(2, 'a,b')
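If you don't have a SQL Server instance handy, the same setup and the REPLACE/LEN trick from the video can be sketched with Python's built-in sqlite3 module (SQLite uses LENGTH instead of LEN; table and column names follow the video):

```python
import sqlite3

# In-memory database standing in for the SQL Server table from the video.
con = sqlite3.connect(":memory:")
con.execute("create table tbl_cnt (col1 int, col2 varchar(50))")
con.executemany("insert into tbl_cnt values (?, ?)", [(1, "a,b,c"), (2, "a,b")])

# Removing the commas shrinks the string by one character per comma, so the
# length difference is the comma count, and value count = commas + 1.
rows = con.execute(
    "select col1, length(col2) - length(replace(col2, ',', '')) + 1 as cnt "
    "from tbl_cnt order by col1"
).fetchall()
print(rows)  # [(1, 3), (2, 2)]
```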
For more SQL interview questions, check out our playlist.
• SQL Interview Questions
Contact us:
info@cloudchallengers.com
Follow us on
Instagram : cloudchallengers
Facebook : cloudchallengers
LinkedIn : linkedin.com/company/cloudchallengers
select case when len(col2) - len(replace(col2, ',', '')) = 2 then 3 when len(col2) - len(replace(col2, ',', '')) = 1 then 2 else 1 end as cnt from tbl_cnt;
This method will work even if the values are more than one character long.
select col1, len(col2) - len(replace(col2, ',', '')) + 1 as no_of_words from tbl_cnt
select col1,len(col3) as count from
(
select col1,col2,replace(col2,',','') as col3 from tbl_cnt
) as a
select length(col2) - length(replace(col2, ',', '')) + 1 as cnt from input;
Nice explanation 👌 👍 👏
select id,
count(value)
from hobby
cross apply string_split(hobby_list,',')
group by id
what if there are 2 letters separated by comma?
for example,
insert into tbl_cnt values (1, 'ab,b,cd'),(2, 'ac,b')
That's a good question, Shashank. In that scenario we can use the STRING_SPLIT function to split the values.
select col1, count(value) as cnt
from tbl_cnt
CROSS APPLY STRING_SPLIT(col2, ',')
group by col1
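A quick way to convince yourself that the comma-counting trick is unaffected by multi-character values like Shashank's example. SQLite has no STRING_SPLIT, so Python's str.split stands in for the CROSS APPLY approach in this sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tbl_cnt (col1 int, col2 varchar(50))")
# Shashank's example: values longer than one character.
con.executemany("insert into tbl_cnt values (?, ?)", [(1, "ab,b,cd"), (2, "ac,b")])

# Comma counting only looks at delimiters, so value length does not matter.
rows = con.execute(
    "select col1, length(col2) - length(replace(col2, ',', '')) + 1 as cnt "
    "from tbl_cnt order by col1"
).fetchall()
print(rows)  # [(1, 3), (2, 2)]

# STRING_SPLIT equivalent: split each string and count the pieces.
split_counts = [
    (col1, len(col2.split(",")))
    for col1, col2 in con.execute("select col1, col2 from tbl_cnt order by col1")
]
print(split_counts)  # [(1, 3), (2, 2)]
```

Both approaches agree; the delimiter-counting version just avoids expanding the rows.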
@CloudChallengers What if my col2 has NULL data... will that query pass in this scenario?
We can also use string_split.
Yes Siddharth. We can use it
Thanks for the explanation.
Can you please explain below problem which i am struggling to solve
Activity table:
+------------+------------+---------------+-----------+
| machine_id | process_id | activity_type | timestamp |
+------------+------------+---------------+-----------+
| 0 | 0 | start | 0.712 |
| 0 | 0 | end | 1.520 |
| 0 | 1 | start | 3.140 |
| 0 | 1 | end | 4.120 |
| 1 | 0 | start | 0.550 |
| 1 | 0 | end | 1.550 |
| 1 | 1 | start | 0.430 |
| 1 | 1 | end | 1.420 |
| 2 | 0 | start | 4.100 |
| 2 | 0 | end | 4.512 |
| 2 | 1 | start | 2.500 |
| 2 | 1 | end | 5.000 |
+------------+------------+---------------+-----------+
Output:
+------------+-----------------+
| machine_id | processing_time |
+------------+-----------------+
| 0 | 0.894 |
| 1 | 0.995 |
| 2 | 1.456 |
+------------+-----------------+
The resulting table should have the machine_id along with the average time as processing_time, which should be rounded to 3 decimal places.
Here you need to use the LAG window function and then take the time difference as the processing time.
with cte as (select *, lag(timestamp, 1, timestamp) over (partition by machine_id, process_id order by timestamp) as differ from machine_activity),
cte2 as (select *, (timestamp - differ) as processing_time from cte)
select machine_id, round(avg(processing_time), 3) as processing_time from cte2 where processing_time > 0 group by machine_id
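The LAG approach can be checked end to end with SQLite, which supports window functions since 3.25 (the table name machine_activity follows Ram's query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "create table machine_activity "
    "(machine_id int, process_id int, activity_type text, timestamp real)"
)
con.executemany(
    "insert into machine_activity values (?, ?, ?, ?)",
    [
        (0, 0, "start", 0.712), (0, 0, "end", 1.520),
        (0, 1, "start", 3.140), (0, 1, "end", 4.120),
        (1, 0, "start", 0.550), (1, 0, "end", 1.550),
        (1, 1, "start", 0.430), (1, 1, "end", 1.420),
        (2, 0, "start", 4.100), (2, 0, "end", 4.512),
        (2, 1, "start", 2.500), (2, 1, "end", 5.000),
    ],
)

# LAG gives each row the previous timestamp within its (machine, process)
# pair; for the "end" row that difference is the process duration, and for
# the "start" row it is 0 (LAG defaults to the row's own timestamp), which
# the processing_time > 0 filter drops before averaging.
rows = con.execute(
    """
    with cte as (
        select *, lag(timestamp, 1, timestamp) over
            (partition by machine_id, process_id order by timestamp) as differ
        from machine_activity
    ),
    cte2 as (select *, timestamp - differ as processing_time from cte)
    select machine_id, round(avg(processing_time), 3) as processing_time
    from cte2 where processing_time > 0
    group by machine_id order by machine_id
    """
).fetchall()
print(rows)  # [(0, 0.894), (1, 0.995), (2, 1.456)]
```

An equivalent alternative is a self-join of each "start" row to its "end" row on machine_id and process_id; the LAG version just avoids naming the join keys twice.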
Thank You Ram for the Solution
@velagambetipoojitha Thanks for posting the question, I hope it is cleared. Thanks @iamram436 for the solution.