SmartPLS 4: Validating a (reflective) measurement model

  • Published May 11, 2022
  • In this video I show how to validate a reflective measurement model, including tests for convergent and discriminant validity and reliability.

COMMENTS • 90

  • @hamidnaidjat336 2 years ago

    Thank you Mr Gaskin for all the clarifications, always the best

  • @anna73469 1 year ago

    Thank you for the video. I have a question regarding the convergent validity test. In my analysis, for three items in a construct with loadings 0.872, 0.917, and 0.381, the Cronbach's alpha is 0.595 and AVE is 0.582, while CR (rho_a) and CR (rho_c) values are above 0.7.
    When I removed the item with the loading of 0.381, the results were all over 0.8, with rho_c at 0.916. Should I remove this third item? And how do I report this?

    • @Gaskination 1 year ago

      It seems like the third item does not equally contribute to the measurement of the construct. In this particular case, I would probably recommend removing the poorly performing item, even though that leaves you with only two indicators.
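As a back-of-envelope sketch of how these two statistics react to dropping the weak item (using the loadings from the question; SmartPLS re-estimates loadings after deletion, so its reported rho_c of 0.916 differs slightly from this hand calculation):

```python
# AVE and composite reliability (rho_c) from standardized outer loadings.
# Loadings are the ones reported in the question above.

def ave(loadings):
    # Average Variance Extracted: mean of the squared loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

def rho_c(loadings):
    # Composite reliability: (sum of loadings)^2 over itself plus error variances.
    s = sum(loadings) ** 2
    return s / (s + sum(1 - l ** 2 for l in loadings))

three_items = [0.872, 0.917, 0.381]
two_items = [0.872, 0.917]

print(round(ave(three_items), 3))  # 0.582 -- matches the reported AVE
print(round(ave(two_items), 3))    # 0.801 -- comfortably above the 0.5 threshold
print(round(rho_c(two_items), 3))  # 0.889 -- before SmartPLS re-estimates loadings
```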

  • @Gug_family 1 year ago

    Thanks for the video. Looking at discriminant validity, HTMT and cross-loadings are good. Still, one of the values in the Fornell-Larcker criterion is slightly higher than one construct's square root of AVE. So I looked up the outer model correlation residuals and found one value is about 0.3. If I remove one of the problem indicators, everything becomes good. However, I would like to know if this is how I should proceed. If so, how do I report this? If this is not an adequate procedure, what else can I do? I really appreciate any help you can provide.

    • @Gaskination 1 year ago

      This approach is fine as long as that latent factor had enough items that losing one did not bring it below three. You can simply report that the Fornell-Larcker test indicated discriminant validity would be achieved only if this item was omitted, and that omitting it was permitted because it was part of a reflective factor, for which all indicators are interchangeable. Thus omitting one of them does not change the trait being measured. You can cite Hair et al. 2010, or Lowry and Gaskin 2014, or Jarvis et al. (about misspecification).
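As a sketch of the Fornell-Larcker check itself: the square root of each construct's AVE must exceed its highest correlation with any other construct. The numbers below are hypothetical, chosen so that one comparison narrowly fails, as in the question:

```python
import numpy as np

# Hypothetical construct correlations and AVEs (not from the video).
corr = np.array([[1.00, 0.62, 0.48],
                 [0.62, 1.00, 0.81],
                 [0.48, 0.81, 1.00]])
ave = np.array([0.61, 0.58, 0.66])

sqrt_ave = np.sqrt(ave)
for i in range(len(ave)):
    highest_r = np.delete(corr[i], i).max()   # largest correlation with another construct
    print(f"Construct {i + 1}: sqrt(AVE)={sqrt_ave[i]:.3f}, "
          f"highest r={highest_r:.2f}, pass={sqrt_ave[i] > highest_r}")
```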

    • @Gug_family 1 year ago

      @@Gaskination Yes, it is one of the six indicators. I really do appreciate all the provided information. Truly helpful!

  • @user-xt4ot4lh1t 1 year ago

    Thank you so much Mr Gaskin for this video. I have one question regarding outer loadings. One of my constructs has 4 items, with outer loadings of 0.879, 0.878, 0.859, and 0.414. The AVE is 0.613, Cronbach's alpha is 0.767, and composite reliability is 0.840. Should I retain the item with the outer loading of 0.414, or delete it?

    • @user-xt4ot4lh1t 1 year ago

      The other constructs in the model have AVE values above 0.5 and Cronbach's alpha and composite reliability values above 0.7, and the outer loading of 0.414 is the lowest one in the model.

    • @Gaskination 1 year ago

      @@user-xt4ot4lh1t Yes, it is fine to delete the low loading if it is a reflective factor that still has three items remaining.

  • @lina.r11 1 year ago

    Hello Mr Gaskin, what happens if one of the exogenous variables has two items, one with an outer loading greater than 1 and the other with an outer loading of 0.430? The other validity values are 0.63 for AVE, 0.619 for Cronbach's alpha, and 0.750 for composite reliability (rho_c). What approach should I use for these two items? Do I still use them, or delete one of them? Thank you for sharing your insight!

    • @Gaskination 1 year ago

      Sounds like these two items are not truly reflective. If there were more items that were already trimmed, please bring them back and model them formatively. If it is just these two, then consider whether one of the items is a better measure of the construct. Then just use that one item (and then there will be no CR, AVE etc.).

    • @lina.r11 1 year ago

      Dear Mr. James Gaskin. Thank you for the clarification. It was PLSc that generated the loading values above 1 and below 0.4. When I used the regular PLS algorithm, the loadings were around 0.9 and 0.6.

  • @user-wi7mt9gy6n 1 year ago

    Thanks for the great video
    I have one question
    I'm doing a study on how different predictors affect intention to use through perceived value, and the effect size of perceived value is 1.066, while the effect size of other factors in the model is very low. I know that researchers usually expect the effect size to be in the range of 0-1. Can you provide an explanation for this?
    If this is normal, I would appreciate any references to support this.
    I really appreciate the opportunity to ask questions.
    I look forward to your response

    • @Gaskination 1 year ago

      Effect size can be more than 1, though it is uncommon. Here is a discussion about it: forum.smartpls.com/viewtopic.php?t=1902#:~:text=A%20higher%20effect%20size%20than,effect%20on%20your%20endogenous%20variable.
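For reference, Cohen's f² (the effect size SmartPLS reports here) is a ratio of explained to unexplained variance, so it is not bounded by 1. A minimal sketch with hypothetical R² values:

```python
# f2 = (R2_included - R2_excluded) / (1 - R2_included)
# The R2 values below are hypothetical, for illustration only.

def f_squared(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1 - r2_included)

# If dropping the predictor removes more variance than remains unexplained,
# f2 legitimately exceeds 1:
print(round(f_squared(0.70, 0.35), 3))  # 1.167
```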

  • @BiErLiN99 1 year ago

    What if deleting an item increases the AVE above the threshold but simultaneously drops Cronbach's alpha below the threshold? In my case the AVE is 0.490 before and 0.510 after deleting an item with a loading of 0.613 (not terribly low), but Cronbach's alpha decreases from 0.784 to 0.656. Is there a "right" approach to this? I hope someone can help me, as I am working with PLS-SEM and SmartPLS for the first time.

    • @Gaskination 1 year ago

      I would always lean towards keeping items. So, in this case, I would argue that the AVE is close enough. There is precedence for this with composite reliability (similar to Cronbach's Alpha). AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

  • @noonatatao687 2 years ago

    Thank you very very much

  • @user-vm4jd7py5q 1 year ago

    Thank you for teaching SmartPLS 4.0. I have a question regarding the convergent validity test. In my analysis, the Cronbach's alpha and CR (rho_a) values are below 0.7, while CR (rho_c) is above 0.7. Can I conclude that the construct reliability is established because the rho_c value is above 0.7, despite the lower values for Cronbach's alpha and rho_a?

    • @Gaskination 1 year ago

      It is common for rho_c to be the highest. It is probably sufficient, though it would be good to have multiple points of evidence, such as a strong AVE.
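One way to see why rho_c typically comes out highest: with standardized indicators under a one-factor model, Cronbach's alpha treats all items as equally reliable, while rho_c weights items by their loadings. A sketch with hypothetical, unequal loadings:

```python
from itertools import combinations

def alpha_std(loadings):
    # Standardized Cronbach's alpha from model-implied inter-item correlations
    # (r_ij = loading_i * loading_j under a one-factor model).
    k = len(loadings)
    pairs = [a * b for a, b in combinations(loadings, 2)]
    r_bar = sum(pairs) / len(pairs)
    return k * r_bar / (1 + (k - 1) * r_bar)

def rho_c(loadings):
    # Composite reliability: (sum of loadings)^2 over itself plus error variances.
    s = sum(loadings) ** 2
    return s / (s + sum(1 - l ** 2 for l in loadings))

lam = [0.85, 0.75, 0.55]          # hypothetical unequal loadings
print(round(alpha_std(lam), 3))   # 0.754
print(round(rho_c(lam), 3))       # 0.766 -- rho_c comes out higher
```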

    • @haiyenle1511 9 months ago

      @@Gaskination Can I ask what the difference is between rho_a and rho_c?

  • @mohamedzaki5795 2 years ago

    Thank you, Mr. Gaskin. Can I use a dependent variable that is a one-item construct in PLS?

    • @Gaskination 2 years ago

      Yes. That is fine.

    • @mohamedzaki5795 2 years ago

      @@Gaskination thank you very much for your kind help

    • @soehartosoeharto8471 2 years ago

      @@Gaskination Are there any citing claims for a one-item construct in PLS?

    • @Gaskination 2 years ago

      @@soehartosoeharto8471 None needed. It is not uncommon practice.

    • @soehartosoeharto8471 2 years ago

      @@Gaskination OK, thank you. I just remember that last time on StatWiki there was a citing claim for a minimum of 3 items per construct in CB-SEM.

  • @abdullahalmahroqi8166 1 year ago

    Thank you professor.
    I have a question.
    When I run a standardized PLS algorithm, I get acceptable results for discriminant validity (HTMT), but that is not the case when I run an unstandardized PLS algorithm.
    Can I proceed with the standardized analysis? (My study is on factors influencing loyalty.)

  • @luapnus 1 month ago

    Hi Prof Gaskin, it looks to me like the model is a structural model. So in PLS-SEM, the structural model and the measurement model can be the same?

    • @Gaskination 1 month ago

      Correct, unless conducting a factor analysis in CBSEM, the measurement and structural models can be specified the same way at the same time in SEM.

  • @user-it2zj4rk4h 4 months ago

    Thanks so much for sharing the knowledge.
    Analysis of my empirical model in SmartPLS 4 shows that rho_a and rho_c (composite reliability) values are above 0.95 (between 0.95 and 0.965) for two of my exogenous constructs and one endogenous construct. All other parameters are within range for both the measurement and structural models (including inner-model VIF), and CMB is not present. Please advise whether this is a matter of concern and how it can be addressed. Is there any resource/video or paper that can help me with the process if correction is needed? Thanks so much.

    • @Gaskination 4 months ago

      I don't think I understand the problem. VIF should be low and CMB is not there. So what is the problem?

    • @user-it2zj4rk4h 4 months ago

      @@Gaskination Some researchers advocate that composite reliability and rho_a values above 0.95 are not desirable in the measurement model, and that they indicate multicollinearity and need to be corrected. Please advise your opinion on the same. As highlighted above, all other parameters (no CMB, low inner-model VIF, etc.) are within the recommended ranges in my measurement and structural models (except CR and rho_a being above 0.95).
      Quoting a few resources:
      "values of 0.95 and higher are problematic, since they indicate that the items are redundant, thereby reducing construct validity (Diamantopoulos et al., 2012; Drolet and Morrison, 2001)."
      "Values above 0.90 (and definitely above 0.95) are not desirable because they indicate that all the indicator variables are measuring the same phenomenon and are therefore not likely to be a valid measure of the construct. Specifically, such composite reliability values occur if one uses semantically redundant items by slightly rephrasing the very same question. As the use of redundant items has adverse consequences for the measures' content validity (e.g., Rossiter, 2002) and may boost error term correlations (Drolet & Morrison, 2001; Hayduk & Littvay, 2012), researchers are advised to minimize the number of redundant indicators." - Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A Primer on Partial Least Squares Structural Equation Modeling, p. 112.

    • @Gaskination 4 months ago

      @@user-it2zj4rk4h You can either justify the high CR by showing VIF is sufficiently low, or you can drop redundant items to achieve the "ideal" number of items stated by Hair et al. for a reflective latent factor: four.

    • @user-it2zj4rk4h 4 months ago

      @@Gaskination Thank you for the suggestions. Just to clarify:
      1. I can justify high CR by showing VIF of inner model being sufficiently low, correct?
      2. Can you please share the reference by Hair et al where they mention the 'ideal' number of items for a reflective construct?
      3. For your second suggestion, does it imply I would need to drop items with the highest outer/factor loadings? Also, I believe the paper would then need to report only the remaining items, and the structural model runs on the reduced model - correct? Any illustrative paper you may recommend?

    • @Gaskination 4 months ago

      @@user-it2zj4rk4h 1. Yes, if the concern of high CR is multicollinearity, then a low VIF should alleviate that concern. 2. statwiki.gaskination.com/index.php?title=Citing_Claims#Four_Indicators_Per_Factor 3. If you were to drop items, you would want to look at the wording of the items to see which ones are truly redundant.
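On point 1, VIF for an indicator is just 1/(1 - R²), where R² comes from regressing that indicator on the others. A sketch with simulated data (variable names are illustrative) showing how a redundant item inflates it:

```python
import numpy as np

def vif(X):
    # VIF per column: regress column j on the remaining columns plus an intercept.
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
independent = rng.normal(size=(500, 3))         # unrelated items: VIFs near 1
redundant = independent.copy()
redundant[:, 2] = redundant[:, 0] + 0.1 * rng.normal(size=500)  # near-duplicate item

print([round(v, 2) for v in vif(independent)])
print([round(v, 2) for v in vif(redundant)])    # the duplicated pair blows up
```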

  • @padmavathychandrasekaran8958

    Professor, I have a few doubts. 1. When my goal is just to test model validity (not theory testing), can I use SmartPLS over CB-SEM? 2. When I use SmartPLS for measurement model validity, should I state the results of R2 and RMR, which are always poorer than in CB-SEM? Please advise.

    • @Gaskination 1 year ago

      1. If all factors are reflective, CB-SEM is a better choice because it allows for model fit tests. However, SmartPLS can do most validity tests for reflective models as well (just not model fit).
      2. You can ignore those tests in SmartPLS for validating your model. Instead focus on convergent and discriminant validity and reliability.

  • @a.rizalkhabibi9416 1 year ago

    One of the variables in my model has a loading below 0.2, which is very low compared to the other items, which average above 0.7. What does it mean? Thank you

    • @Gaskination 1 year ago

      If it is a loading for an indicator on a factor, then it implies that this indicator does not strongly correlate with the other indicators on that factor. Check whether it was reverse-coded or worded in a very different way.
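A quick way to spot the reverse-coding symptom: a reverse-worded item correlates negatively with its siblings, and recoding it (scale_min + scale_max - x on a Likert scale) restores the sign. Simulated illustration (not the asker's data):

```python
import numpy as np

# Simulated trait and two items; one is reverse-worded.
rng = np.random.default_rng(42)
trait = rng.normal(size=300)
item_a = trait + 0.4 * rng.normal(size=300)
item_rev = -trait + 0.4 * rng.normal(size=300)   # reverse-worded item

r_raw = np.corrcoef(item_a, item_rev)[0, 1]
r_recoded = np.corrcoef(item_a, -item_rev)[0, 1]  # e.g. 8 - x on a 1-7 Likert scale
print(round(r_raw, 2))      # strongly negative
print(round(r_recoded, 2))  # strongly positive after recoding
```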

  • @padmavathychandrasekaran8958

    Dear Professor, if I have a continuous moderator in the model, should I include it in validating the measurement model? Or can I bring it in when doing the moderation analysis?

    • @Gaskination 1 year ago

      Only latent factors should be part of the measurement model validation. You can bring single measures into the model after validating the factors.

    • @padmavathychandrasekaran8958 1 year ago

      @@Gaskination Thank you for your clarification Prof..

  • @Moe4572 1 year ago

    Dear Mr. Gaskin,
    I am currently in a serious struggle with my master's thesis. If you read this comment, could you help me assess the severity of the issues in my thesis model? In particular: how important are SRMR and NFI? Can I really ignore the fit measures? No matter what I do, the NFI is always problematic.
    I am trying to examine a possible relationship between a construct and another group of constructs that have not been connected before in international marketing.
    I chose PLS-SEM because it is more robust to non-normal data: I use Likert scales and additionally expected quite some skewness and heterogeneity in the data because of differences in brand loyalty (BL) among the respondents. If BL is omitted, my SRMR is 0.075 (PLSc; 0.076 for PLS), but NFI is only 0.830 (PLS). When I include BL as a binary control variable in the model, the SRMR rises to 0.190 and the NFI takes the strange value of 1.050 (PLS).
    I am reflectively measuring the relationship of 4 first-order constructs using the PLSc algorithm. (However, NFI only shows when I use the PLS algorithm instead of PLSc, which at the same time greatly improves all the other validity measures.)
    I am absolutely puzzled and don't know what to do.
    You would be my absolute hero if you could help me out!😁
    Greets Moe
    Further info (if necessary or interesting, I am happy to share):
    Sample size: 191 (quite a bit of missing data, but no big changes across missing-data treatment methods)
    Loadings: Some loadings are rather low (PLSc; only one item below 0.708 with regular PLS), but deleting items does not improve construct reliability. I am also already struggling with content validity, because an adapted measurement scale (construct 1) turned out to be quite unreliable. I had to drop 2 of 3 expected dimensions and turn it into a first-order construct, but I think I can theoretically justify continuing with only one dimension.
    Construct 1: 0.662 - 0.570 - 0.815 - 0.685; Construct 2: 0.615 - 0.771 - 0.797 - 0.882
    Convergent validity: Measures are all fine except the AVE of construct 1 (0.471 with PLSc; about 0.6 with regular PLS)
    Discriminant validity: Everything is fine (HTMT & Fornell-Larcker).
    Multicollinearity: There are some issues with one construct, but to my understanding (Hair et al.) this should only be an issue with formative measures. Construct 3 VIFs: 8.084 - 8.181 - 3.334 - 3.931 - 6.329 - 2.513
    All expected path relationships are significant (worst p-value is 0.001)

    • @Gaskination 1 year ago

      Some thoughts and responses:
      1. VIF is only relevant for formative factors or for prediction of an outcome via multiple IVs.
      2. Model fit is not relevant to PLS, but if you want it, you can run the new CB-SEM model. Here is a quick video about it: ua-cam.com/video/FS1D4KmmABU/v-deo.html
      3. Not all fit measures need to be met. If enough are adequate, then it is probably sufficient. I prefer SRMR, CFI, and RMSEA.
      4. AVE is a strict measure of convergent validity. CR is probably sufficient evidence for convergence.
      5. Dropping factors due to failed loadings might imply that the factor would better be specified as formative.
      Hope this helps.

    • @Moe4572 1 year ago

      @@Gaskination Dear Mr. Gaskin, I really can't thank you enough! Especially considering the speed of the reply! This really helps me right now! :)

  • @padmavathychandrasekaran8958

    Dear Professor, suppose I have perceived severity (PS) with four items as a moderator; I should NOT include PS in the measurement model validity, isn't that right, Professor? Thank you in advance

    • @Gaskination 1 year ago

      If it is a latent factor, then I would include it in the measurement model.

  • @nurfadillah1738 1 year ago

    Thank you so much for your great explanation, but I still wonder:
    what is the meaning of a negative sign, and how do I interpret it? I mean, aren't these values the result of squaring? How can they be negative? Thanks

    • @Gaskination 1 year ago

      A negative in the loadings of the pattern matrix: gaskination.com/forum/discussion/144/negative-loadings-in-pattern-matrix#latest
      A negative path coefficient: gaskination.com/forum/discussion/131/what-if-my-positive-hypothesis-results-in-a-negative-relationship#latest
      Or this one: gaskination.com/forum/discussion/120/why-is-my-hypothesis-test-result-significant-and-negative-when-i-expected-it-to-be-positive#latest

  • @asmaryadia4846 1 year ago

    Thanks so much Mr. Gaskin, your videos are so helpful.
    I have questions about CFA for my higher-order model. I have a variable with three sub-variables, and each sub-variable has indicators. I assume the model is reflective. The questions are:
    1. I am confused about which approach I should use, repeated indicator or two-step. Which approach would you recommend, and what is the reference?
    2. In my case, which weighting scheme should I use?

    • @Gaskination 1 year ago

      1. Yes, repeated indicator approach with two step to validate the higher order factor.
      2. factor weighting scheme for factor validation, path weighting scheme for path testing.

    • @asmaryadia4846 1 year ago

      @@Gaskination Should I use path testing too, Sir?

    • @Gaskination 1 year ago

      @@asmaryadia4846 path testing is for testing structural hypotheses, or hypotheses between constructs.

    • @asmaryadia4846 1 year ago

      @@Gaskination thanks so much Sir

  • @farishakim6759 3 months ago

    I have a question about the AVE for a construct. Since it does not pass the threshold of 0.5, we need to look at the loadings of the associated items to see which ones do not pass 0.7, and delete those that fail. So, how can we justify in our thesis why we delete an item if, conceptually speaking, the items are mentioned throughout the literature?

    • @Gaskination 3 months ago

      Before deleting an item, check CR. AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

    • @JanisFranxis 1 month ago

      I'd be interested in this as well, but somehow Mr. Gaskin's answer to this question doesn't show up for me :(

    • @Gaskination 1 month ago

      @@JanisFranxis Here is the reply from above: Before deleting an item, check CR. AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

  • @ConnetieAyesigaNinaz 1 year ago

    Does anyone know why I keep getting the singular matrix error? Kindly help, as I cannot get past this error to assess the measurement model.

    • @Gaskination 1 year ago

      gaskination.com/forum/discussion/169/what-might-cause-a-singularity-matrix

  • @syedsana123 1 year ago

    Hello James, can you please state how to address collinearity issues, and what should be done if the NFI value comes in below 0.90?

    • @Gaskination 1 year ago

      NFI is not really relevant for smartpls. As for collinearity, you can try to better distinguish between the factors by checking the loadings matrix for high cross-loadings between factors. If there is a manifest variable that loads highly on two factors, it might cause collinearity issues.

    • @syedsana123 1 year ago

      @@Gaskination Thank you very much for your instant response. Then what measures can we use to check model fit in SmartPLS 4?

    • @Gaskination 1 year ago

      @@syedsana123 Model fit and PLS are not very compatible. Model fit is based on the covariance matrix, but PLS is not a covariance-based SEM method. So, even the creators of SmartPLS recommend against trying to assess model fit in PLS.

  • @g0916086082 2 years ago

    When will SmartPLS 4 be launched? It's so much easier to use.

    • @Gaskination 2 years ago

      Not available yet. I think they plan to release it in early June 2022.

  • @dubai815 2 years ago

    How can I find SmartPLS 4, as it's not available on its official website? Please let me know where to download it.

    • @Gaskination 2 years ago

      Not available yet. I think they plan to release it in early June 2022.

    • @dubai815 2 years ago

      @@Gaskination Okay noted with Thanks

  • @forever763 1 year ago

    May I know what exactly HTMT inference is? And how can we measure HTMT inference through bootstrapping?

    • @Gaskination 1 year ago

      The HTMT ratio is calculated by comparing the average correlation between indicators (observed variables) of different constructs (heterotrait) to the average correlation between indicators of the same construct (monotrait).
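That ratio can be sketched directly from raw indicator scores (data simulated here; SmartPLS computes HTMT from your actual indicators, and the bootstrap confidence interval used for "HTMT inference" comes from resampling this statistic):

```python
import numpy as np

def htmt(X, Y):
    # Mean heterotrait correlation over the geometric mean of the
    # two average monotrait (within-construct) correlations.
    kx, ky = X.shape[1], Y.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    hetero = R[:kx, kx:].mean()
    mono_x = R[:kx, :kx][np.triu_indices(kx, 1)].mean()
    mono_y = R[kx:, kx:][np.triu_indices(ky, 1)].mean()
    return hetero / np.sqrt(mono_x * mono_y)

rng = np.random.default_rng(7)
f = rng.normal(size=(400, 1))                  # shared trait
g = rng.normal(size=(400, 1))                  # distinct trait
X = f + 0.4 * rng.normal(size=(400, 3))        # indicators of construct 1
Y_same = f + 0.4 * rng.normal(size=(400, 3))   # indicators of the same trait
Y_diff = g + 0.4 * rng.normal(size=(400, 3))   # indicators of a different trait

print(round(htmt(X, Y_same), 2))  # near 1: discriminant validity fails
print(round(htmt(X, Y_diff), 2))  # far below 0.85: constructs are distinct
```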

    • @forever763 1 year ago

      @@Gaskination Are HTMT.85 and HTMT.90 both considered one criterion test, and HTMT inference another criterion test? Can I just use HTMT.85 and HTMT.90, without using HTMT inference?

    • @Gaskination 1 year ago

      @@forever763 I don’t think I know what you mean by HTMT inference. If the HTMT values are less than .85, then there is no discriminant validity issue.

  • @fatimafifi2398 1 month ago

    Sir, what is the solution when CR and AVE are more than 0.95 and the factor loadings are more than 0.92?

    • @Gaskination 1 month ago

      Wow, that's really high. In general, if the factor is reflective, then this is not a problem because it just means the items are all very consistently measuring the same dimension (which is the purpose of reflective measurement).

  • @ghadaeltazy735 1 year ago

    Hey again 😀
    I noticed that you used the consistent PLS-SEM algorithm (PLSc), not the first choice, the regular PLS-SEM algorithm! My question: is there a real difference, or can both be used?😁
    Thanks in advance

    • @Gaskination 1 year ago

      For models that have all reflective factors, PLSc should be used. For models that include any non-reflective constructs, the regular PLS algorithm should be used.

    • @ghadaeltazy735 1 year ago

      @@Gaskination Thank you a dozen 😀

  • @talhamansoor7108 2 years ago

    How do I download SmartPLS 4?

    • @Gaskination 2 years ago

      Not available yet. I think they plan to release it in early June 2022.

  • @wibowo_ha 13 days ago

    What happens when a latent variable has good reliability and validity, but many hypotheses are not supported? Help me please

    • @Gaskination 12 days ago

      It just means that those variables are not related to each other. This can happen even with good data and valid factors.

    • @wibowo_ha 11 days ago

      @@Gaskination Thank you very much

  • @Hashimhamza007 2 years ago

    It looks like this analysis is very similar to CFA. You look for convergent validity and discriminant validity.
    However, the model you drew in this video is not similar to CFA models you created in AMOS videos.
    In your AMOS videos, you put all the latent variables vertically and connect each of them to all the others with double-headed arrows for CFA.
    But in this video, the model is not like the CFA model in AMOS. I wonder why they are different.

    • @Gaskination 2 years ago

      Correct. AMOS is a covariance-based SEM software that allows for explicit control over correlations. However, PLS does not include the covariance matrix in its default algorithm. It can still produce the correlation matrix and we can use it for factor validities.