Really helpful video. Thanks so much!
I am glad the video was helpful
thanksss!!! what a perfect video
Thank you. I am glad that you found it helpful.
Just what I needed to know. Thanks
Glad I could help.
Thank you it was very clear
Hello, I hope you'll notice my comment. A professor at my college asked us how many trials we should conduct for our study. I was confused and started researching, but I couldn't find an answer. How do you determine the number of trials/observations needed when conducting a study? Do cycle time, the product, and the workers you are observing matter?
You need to take a look at a different video I created regarding sample size: ua-cam.com/video/Q50rpMAUWS4/v-deo.html
This particular example uses very small sample sizes to help students understand how to calculate time studies. In reality, the sample sizes are much larger.
As the other video on calculating sample sizes shows, you start with some basic measurements to calculate the average (x-bar) and the standard deviation. You could do that with the limited sample sizes in this video. You then need to determine the confidence level and its associated z-score. You also need to know the acceptable error, either the relative error rate (h) or the absolute error. The other video (ua-cam.com/video/Q50rpMAUWS4/v-deo.html) shows the formula for calculating what the sample size should be and how many more measurements you would need to achieve the desired confidence level and acceptable error rate.
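The calculation described above can be sketched in a few lines of Python. This uses the common time-study sample-size formula n = (z·s / (h·x̄))²; the function name and the pilot measurements are illustrative, not taken from the video.

```python
import math
import statistics

def required_sample_size(times, confidence_z, h_relative):
    """Estimate how many observations a time study needs.

    times        -- pilot cycle-time measurements (minutes)
    confidence_z -- z-score for the desired confidence level
                    (e.g. 1.96 for 95% confidence)
    h_relative   -- acceptable relative error, e.g. 0.05 to be
                    within 5% of the true mean
    """
    x_bar = statistics.mean(times)
    s = statistics.stdev(times)        # sample standard deviation
    n = (confidence_z * s / (h_relative * x_bar)) ** 2
    return math.ceil(n)                # round up to whole observations

# Illustrative pilot measurements (minutes)
pilot = [1.6, 1.8, 1.7, 1.9, 1.5]
n_needed = required_sample_size(pilot, confidence_z=1.96, h_relative=0.05)
extra = max(0, n_needed - len(pilot))
print(n_needed, extra)   # total observations needed, and how many more to take
```

The result minus the number of pilot measurements tells you how many additional observations you would still need to take, which is exactly the "how many more measurements" step mentioned above.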
I'm just confused about the 3rd element: the observed average is 1.7 minutes to do the work and the normal time is 2.04 minutes, while the performance rating is above average. Shouldn't the normal time be fewer minutes, since he is faster than the average person at doing the work?
Sorry it has taken me so long to respond. You are correct. This individual is able to complete the work faster than we would expect the average or normal individual to complete it. We would not want to use his or her time as the standard because his or her performance is above average. Conversely, you would not want to use someone's time as the standard if their performance were below average (taking longer than expected). This technique lets you observe your workers (recording time and rating performance) to establish a standard or expected time in which the average or normal employee should be able to complete the task.
Question: for element 2, wouldn't the normal time be lower than the recorded times? If the average person can do it in 2.07 minutes, shouldn't the times listed be higher than that, since the participants were slower than average? For example, in element 3 the participants were working at 120%, so faster than average; the average person does it in 2.04 minutes, hence the recorded time values are quicker. Hope that makes sense.
The time measurements were for an individual who was being observed. They took five measurements and averaged them. They also evaluated the skill level of this person, shown as a rating. If the rating is 100%, this person is an average person, and therefore we can apply this time standard (normal time) to everyone else (the time would be the same). If this person was rated below 100%, they were slower than average, and therefore an average individual should be able to do it faster. Hence the normal time for Element 2 (90% rating for the individual observed) becomes a faster time for the average individual. For Element 3, the person being observed is faster than an average person (120% rating), therefore the normal time (for the average individual) is slower, and that is why it is higher (more time) for the average individual. We wouldn't want to use this individual's average time because they are better, and therefore faster, than the average individual.
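The adjustment described here is simply the observed average time multiplied by the performance rating. A minimal sketch using the figures from this thread (the helper function name is mine):

```python
def normal_time(observed_avg, rating):
    """Normal time = observed average time * performance rating."""
    return observed_avg * rating

# Figures taken from the thread above (minutes):
# Element 2: observed worker rated 90% (slower than average)
# Element 3: observed worker rated 120% (faster than average)
element_2 = normal_time(2.3, 0.90)
element_3 = normal_time(1.7, 1.20)
print(round(element_2, 2), round(element_3, 2))  # 2.07 2.04
```

A rating below 100% pulls the normal time below the observed time, and a rating above 100% pushes it above, which is exactly the pattern discussed for Elements 2 and 3.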
@@carycountryman Thanks for taking the time to respond. I understand the percentages; elements 1, 3, and 4 all make sense. If the observed person is working at 100% effort, then an average worker would complete the task in the normal time. If observed workers are at 120%, they are working faster than the average worker, so their times would be faster: the average worker needs 2.04 to complete the task, while the above-average workers completed it quicker (with an average time of 1.7). What I am missing is this: if workers are at 90%, slower than average, shouldn't the normal time be less than the observed times of the below-average workers? I.e., the normal time is calculated at 2.07, but the below-average workers' completion times are way BELOW the normal time, at an average of 2.3. I don't have a background in this; I'm working on a project that involves teaching time standards and just wanted to get a better understanding. Thanks again.
@@noxskuses The observation is usually made using one worker and then rating that worker on how well she or he completed the task. If this observed worker is not as good as an average person (the observed worker is at 90% for Element 2), then an average worker should be able to do it faster. This is why the normal time for Element 2 is 2.07, which is faster than the observed worker's 2.3.
@@carycountryman Thank you, sir. I totally missed the zero in the tenths column, reading 2.7 instead of 2.07. Thanks for clarifying and taking the time to respond. Greatly appreciated.
Helpful video, thanks!
How were the performance ratings determined?
+Shantanu Tandon Performance ratings are usually done by an expert in that particular field. Someone who is average in their performance of the job would get a rating of 100%, while those not quite at the average level would get less than 100%; how much less depends on how poorly they perform compared to others. Those who perform better than most people in the job would get more than 100%. The expert determines how much better the employee performed those particular tasks or that job compared to the average employee.
So is this performance rating supposed to be done by industrial engineers? I am an industrial engineer myself, but I have never done this performance rating!
Kundan Kumar Industrial engineers could do this, but often the best raters are those who have worked in that position or role for quite some time; they know what an average worker looks like. One could also use several raters to make sure there is some consistency (inter-rater reliability). Remember that performance ratings reflect how well someone performs a job (average = 100%, above average > 100%, below average < 100%).
Thanks Cary
How do you do the cumulative time study method?
I am not sure what you mean by a cumulative time study, but I suppose this could be called one. Instead of measuring the time it took to do the entire task and making one overall performance rating, the overall task was broken down into 4 elements, and you can see that for some elements the worker is below average or above average. This makes the time study more accurate. After making the adjustments to each element of the overall task, you add the normal times together (cumulatively) to determine the normal time for the whole task. You then calculate the standard time by dividing the cumulative normal time by (1 - allowance factor), which gives you the standard time for the overall task.
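As a quick sketch of that last step: only the 2.07 and 2.04 element times come from this thread; the other two element times and the 10% allowance factor are made up for illustration.

```python
def standard_time(normal_times, allowance_factor):
    """Standard time = cumulative normal time / (1 - allowance factor)."""
    return sum(normal_times) / (1 - allowance_factor)

# 2.07 and 2.04 come from the thread; the other element times
# and the 10% allowance factor are hypothetical.
elements = [1.10, 2.07, 2.04, 0.95]             # normal time per element (minutes)
print(round(standard_time(elements, 0.10), 2))  # 6.84
```

The allowance factor inflates the cumulative normal time to account for personal time, fatigue, and unavoidable delays, so the standard time is always somewhat longer than the sum of the normal times.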