Christian Hollmann
Jupyter Notebook in MINT
In this video, we look at some quick tips for data analysts using the Jupyter Notebook in MINT WebAssistant. For more information on using the Jupyter Notebook for evaluating your data in MINT, please check out the 'Data Evaluation in MINT' (4-part) video playlist - ua-cam.com/video/a9AYj0OTXZs/v-deo.html.
*Please note that there is a duplicate Record Item (1.1.1 Perform Flight Planning) shown during the video.
369 views

Videos

MINT User Conference 2019: Special Preview
285 views · 5 years ago
A short preview of what you can expect to see at the Jupyter Reports training session during the MINT User Conference in September.
What is MINT TMS?
1.6K views · 5 years ago
Airlines and training centers from all around the globe use MINT’s Training Management System (TMS) to optimize their crew, maintenance and service staff training and qualification management. Its modular design perfectly supports the business process and offers tools and techniques that are essential for achieving regulatory compliance. MINT TMS is trusted by more than 60 customers all over th...
AQP Curriculum Design in MINT
399 views · 5 years ago
This tutorial gives an overview of the AQP Curriculum design in the MINT system.
Data Evaluation in MINT - Part 4
196 views · 5 years ago
Understanding how to pivot the data.
Data Evaluation in MINT - Part 3
185 views · 5 years ago
Who is the Santa Claus Instructor?
Data Evaluation in MINT - Part 2
224 views · 6 years ago
This is part 2 of a new series to demonstrate the features of the MINT system to evaluate data. Here we show the comparison to the respective SQL statements.
Data Evaluation in MINT - Part 1
385 views · 6 years ago
This is the first part of a new series to demonstrate the features of the MINT system to evaluate data.
Scheduling in MINT - Part 3 (Continuity)
242 views · 6 years ago
This is part 3 of a series of videos that show scheduling with the most recent version (v.12.2) of the WebAssistant. In this part, we are talking about Continuity.
Scheduling in MINT - Part 2 (Eligibility)
287 views · 6 years ago
This is part 2 of a series of videos that show scheduling with the most recent version (v.12.2) of the WebAssistant. In this part, we are talking about Eligibility.
Scheduling in MINT - Part 1
402 views · 6 years ago
This is part 1 of a series of videos that show scheduling with the most recent version (v.12.2) of the WebAssistant.
Forms in MINT - Part 8 (Examination)
219 views · 6 years ago
In this tutorial you will learn how to create examinations with the FormBuilder.
Forms in MINT - Part 7 (Feedback Forms)
191 views · 6 years ago
In this tutorial you will learn how to set up Feedback Forms in MINT.
Forms in MINT - Part 6 (Behaviors)
180 views · 6 years ago
This tutorial explains the usage of Form behaviors in MINT.
Forms in MINT - Part 5 (Ad-Hoc Grading)
193 views · 6 years ago
In this tutorial you will learn how to use the ad-hoc grading functionality. Ad-hoc grading is independent of the scheduling part.
Forms in MINT - Part 4
204 views · 6 years ago
Forms in MINT - Part 3
220 views · 6 years ago
Forms in MINT - Part 2
289 views · 6 years ago
Record Items Overview
399 views · 6 years ago
Interrater Reliability with MINT
463 views · 6 years ago
Reports in MINT - Part 6 Grouping
247 views · 6 years ago
Forms in MINT - Part 1
564 views · 6 years ago
Reports in MINT - Part 5 (Crosstab)
584 views · 6 years ago
Reports in MINT - Part 4 (Sub-reports)
354 views · 6 years ago
Reports in MINT - Part 3
285 views · 6 years ago
Reports in MINT - Part 1
2.8K views · 6 years ago
Reports in MINT - Part 2
383 views · 6 years ago
Feedback Forms in MINT
249 views · 7 years ago
Simulator Scheduling in MINT (en Español)
92 views · 7 years ago
Simulator Scheduling in MINT
320 views · 7 years ago

COMMENTS

  • @danam7172 · 20 days ago

    love u

  • @genwei007 · 1 year ago

    Still not clear how you arrive at the final Kappa equation. Why (OA - AC)? Why divide by (1 - AC)? The rationale is obscure to me.

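The rationale behind the formula: OA - AC is the agreement observed beyond what chance alone would produce, and 1 - AC is the most agreement beyond chance that could possibly be observed, so kappa is the fraction of the achievable above-chance agreement that the raters actually achieved (0 = no better than chance, 1 = perfect). A minimal Python sketch of the calculation, using made-up counts rather than the data from the video:

```python
# Cohen's kappa for a 2x2 yes/no table: table[i][j] counts the cases
# where rater 1 chose category i and rater 2 chose category j.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement (OA): share of cases on the diagonal.
    oa = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement (AC): for each category, the product of the two
    # raters' marginal probabilities, summed over the categories.
    ac = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
             for i in range(len(table)))
    return (oa - ac) / (1 - ac)

# Hypothetical counts: rows = rater 1 (yes/no), columns = rater 2 (yes/no).
print(cohens_kappa([[20, 5],
                    [10, 15]]))  # OA = 0.70, AC = 0.50, kappa = 0.40
```
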
  • @michaellika6567 · 1 year ago

    THANK U!!!

  • @heikochujikyo · 1 year ago

    This is pretty quick and effective, it seems. Understanding the formula and how it works in depth surely takes more than 5 minutes, but it sure saves some work lmao. Thank you for this.

  • @jaminv4907 · 1 year ago

    great concise explanation thank you. I will be passing this on

  • @isa..333 · 1 year ago

    this video is so good

  • @vikeshnallamilli · 2 years ago

    Thank you for this video!

  • @66ehssan · 2 years ago

    What I thought was impossible to understand needed only a great 4-minute video. Thanks a lot!

  • @LastsailorEgy · 2 years ago

    very good simple clear video

  • @rekr6381 · 2 years ago

    Thank you!

  • @llxua7487 · 2 years ago

    thank you for your video

  • @ProfGarcia · 2 years ago

    I have a very strange Kappa result: I checked for a certain behavior in footage of animals, which I assessed twice. For 28 animals, the two assessments agreed 27 times that the behavior was present and disagreed only once (the behavior was present in the first assessment, but not in the second). My data is organized as the following 2x2 matrix:
     0  1
     0 27
    That gives me a Kappa value of zero, which I find very strange, because I disagreed in only 1 of 28 assessments. How come these results are considered pure chance?

    • @krautbonbon · 1 year ago

      I am wondering the same thing

    • @krautbonbon · 1 year ago

      I think that's the answer: pubmed.ncbi.nlm.nih.gov/2348207/

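Working through the numbers in the thread above shows where the zero comes from: the first assessment marked the behavior as present in all 28 cases, so the agreement expected by chance alone is already 27/28, which is exactly the observed agreement, and kappa collapses to 0. This is the well-known "high agreement but low kappa" effect that strongly skewed marginals produce. A quick check, assuming the counts are read as a standard 2x2 agreement table:

```python
# The 28 animals from the comment above: the first assessment found the
# behavior present every time; the second found it present 27 times.
n = 28
oa = 27 / n                                   # observed agreement
p1_present, p2_present = 28 / n, 27 / n       # each assessment's "present" rate
# Chance agreement: both say "present" by luck, or both say "absent" by luck.
ac = p1_present * p2_present + (1 - p1_present) * (1 - p2_present)
print(oa, ac)                                 # both equal 27/28 = 0.9642857...
print((oa - ac) / (1 - ac))                   # 0.0, no agreement beyond chance
```
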
  • @louiskapp · 2 years ago

    This is phenomenal

  • @anukulburanapratheprat7483 · 2 years ago

    Thank you

  • @mayralizcano8892 · 2 years ago

    thank you, you helped me so much

  • @Fanboy-chum-chum · 3 years ago

    Thank you 🥲❤

  • @riridefrog · 3 years ago

    Thanks so much, VERY helpful and simplified the concept

  • @nhaoyjj · 3 years ago

    I like this video so much, you explained it very clearly. Thank you

  • @mahnazmehrabi7626 · 3 years ago

    Many thanks. Subscribe to me, please.

  • @Lector_1979 · 3 years ago

    Great explanation. Thanks a lot.

  • @daliael-rouby2411 · 3 years ago

    Thank you. If I have data with high agreement between both observers, should I choose the results of one of the raters, or should I use the mean of both raters' ratings?

  • @KnightMD · 3 years ago

    Thank you so much! Problem is, I don't have a "YES" or "NO" answer from each rater. I have a grade of 1-5 given by each rater. Can I still calculate Kappa?

  • @lakesidemission7172 · 3 years ago

    👍♥️♥️🐾

  • @rafa_leo_siempre · 3 years ago

    Great explanation (with nice sketches as a bonus)- thank you!

  • @nomantech8813 · 3 years ago

    Well explained. Thank you sir

  • @anasanchez2935 · 3 years ago

    Thank you teacher, I understood it :)

  • @chenshilongsun1581 · 3 years ago

    So helpful, watching a 4.5 min video sure beats a 50 minute lecture

  • @atefehzeinoddini9925 · 3 years ago

    great..thank you

  • @gokhancam1754 · 4 years ago

    Accurate, sharp and to the point. Thank you, sir! :)

  • @drsantoo · 4 years ago

    Superb explanation. Thanks sir.

  • @MinhNguyen-kv8el · 4 years ago

    thank you for your clear explanation.

  • @handle0617 · 4 years ago

    A very well explained topic

  • @arnoudvanrooij · 4 years ago

    The explanation is quite clear; the numbers could be given a bit more precisely. Agreement: 63.1578947368421%, Cohen’s k: 0.10738255033557026. Thanks for the video!

  • @diverse4985 · 5 years ago

    When I apply another Kappa formula (from www.harrisgeospatial.com/docs/ENVIConfusionMatrix__KappaCoefficient.html) to the same data as in the video, why do I get a different result?

  • @o1971 · 5 years ago

    Great video. Could you also explain if 0.12 is significant or not?

    • @robinredhu1995 · 4 years ago

      Cohen suggested the Kappa result be interpreted as follows: values ≤ 0 as indicating no agreement and 0.01-0.20 as none to slight, 0.21-0.40 as fair, 0.41- 0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1.00 as almost perfect agreement.

  • @lakshmikrishakanumuru9043 · 5 years ago

    This was made so clear thank you!

  • @galk32 · 5 years ago

    great explanation

  • @MrThesyeight · 6 years ago

    How do you calculate the agreement for "strongly disagree, disagree, agree, strongly agree"? And what is the formula for calculating only the 'observed agreement'?

  • @alejandroarvizu3099 · 6 years ago

    It's a value clock-agreement chart.

  • @EvaSlash · 7 years ago

    The only thing I do not understand is the "Chance Agreements", the AC calculation of .58. I understand where the numbers come from, but I do not understand the theory behind why the arithmetic works to give us this concept of "chance" agreement. All of the numbers in the table are what was observed to have happened...how can we just take some of the values in the table and call it "chance" agreement? Where is the actual proof they agreed by chance in .58 of the cases?

    • @farihinufiya · 6 years ago

      For the "chance" of agreement, we are essentially multiplying the probability of rater 1 saying yes by the probability of rater 2 saying yes, and doing the same for the no(s). It is the same way you would calculate the "chance" of getting heads on both of two coins: multiply the probability of heads on coin 1 (0.5) by the probability of heads on coin 2 (0.5). The chance of obtaining heads on both by mere luck is hence 0.25, and in the same way the chance of the two raters agreeing by chance is 0.58.

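The coin analogy in the reply above maps directly onto the calculation: chance agreement is the probability that two independent raters land on the same answer purely by luck, built from each rater's own yes/no rates. A small sketch with illustrative rates (the 0.58 from the video comes from the same product-and-sum applied to the raters' actual rates):

```python
# Two fair coins: probability both come up heads, and probability they match.
p1_heads, p2_heads = 0.5, 0.5
both_heads = p1_heads * p2_heads              # 0.25
both_tails = (1 - p1_heads) * (1 - p2_heads)  # 0.25
print(both_heads, both_heads + both_tails)    # 0.25, 0.5

# Chance agreement for two raters works the same way, using each rater's
# observed "yes" rate instead of 0.5 (the rates below are made up).
p1_yes, p2_yes = 0.7, 0.8
ac = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)  # 0.56 + 0.06 = 0.62
print(ac)
```
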
  • @zicodgra2684 · 7 years ago

    what is the range of kappa values that indicate good agreement and low agreement?

    • @zicodgra2684 · 7 years ago

      I did my own research and figured I'd post it here in case anyone ever has the same question. Taken from the article "Interrater reliability: the kappa statistic": Similar to correlation coefficients, it can range from −1 to +1, where 0 represents the amount of agreement that can be expected from random chance, and 1 represents perfect agreement between the raters. While kappa values below 0 are possible, Cohen notes they are unlikely in practice (8). As with all correlation statistics, the kappa is a standardized value and thus is interpreted the same across multiple studies. Cohen suggested the Kappa result be interpreted as follows: values ≤ 0 as indicating no agreement, 0.01-0.20 as none to slight, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1.00 as almost perfect agreement.

  • @Adrimartja · 7 years ago

    thank you, this is really helping.

  • @simonchan2394 · 7 years ago

    Can you please elaborate on the meaning of a high or low Kappa value? I can now calculate kappa, but what does it mean?

    • @jordi2808 · 3 years ago

      A bit late, but in school we learned the following: <40 is not trustworthy, <60 is neutral, <80 is decent, and 80-100 is trustworthy. In this case the doctors only had 2 options to respond with, but the higher the number of options, the slimmer the chance of agreement by chance.

  • @zubirbinshazli9441 · 7 years ago

    how about weighted kappa?

  • @samisami25 · 8 years ago

    Thank you. More videos please :-)

  • @ezzrabella6624 · 8 years ago

    this was VERY helpful and simplified the concept.. thank you. please do more videos !

  • @Teksuyi · 8 years ago

    I didn't understand a damn thing you said (I don't understand English), but these 4 minutes were better than an hour with my professor. Thank you very much.

  • @danjosh20 · 8 years ago

    Question please: we are supposed to do kappa scoring for dentistry, but we have 5 graders. How do we do such a thing?