Discussion:
Social Desirability, Crowne & Marlowe (3)
Leite,Walter
2006-01-09 17:40:43 UTC
Permalink
Mike,

You should check my recent article about the Marlowe-Crowne Social
Desirability Scale (MCSDS):

Leite, W. L. & Beretvas, S. N. (2005). Validation of scores on the
Marlowe-Crowne Social Desirability Scale and the Balanced Inventory of
Desirable Responding. Educational and Psychological Measurement, 65(1),
140-154.

It is a dimensionality study of the MCSDS and its several short forms
using Confirmatory Factor Analysis with an appropriate estimator for
dichotomous items (WLSMV, available in MPLUS).

Enjoy,


Walter L. Leite, Ph.D.
Assistant Professor
Research and Evaluation Methodology
Department of Educational Psychology
UNIVERSITY OF FLORIDA
________________________________________
1403 Norman Hall
PO Box 117047
Gainesville, FL 32611
Phone: (352) 392-0723 EXT.240
Fax: (352) 392-5929
Website: http://plaza.ufl.edu/leitewl/



Date: Thu, 5 Jan 2006 15:59:09 +0100
From: Wolfgang Rauch <***@PSYCH.UNI-FRANKFURT.DE>
Subject: Social Desirability, Crowne & Marlowe

Hi,

just today I came across this citation:

@article{Beretvas2002,
author = {Beretvas, S. Natasha and Meyers, Jason L. and Leite, Walter
L.},
title = {{A Reliability Generalization Study of the Marlowe-Crowne
Social Desirability Scale}},
journal = {Educational and Psychological Measurement},
volume = {62},
number = {4},
pages = {570-589},
doi = {10.1177/0013164402062004003},
year = {2002},
abstract = {A reliability generalization (RG) study was conducted for
the Marlowe-Crowne Social Desirability Scale (MCSDS). The MCSDS is the
most commonly used tool designed to assess social desirability bias
(SDB). Several short forms, consisting of items from the original
33-item version, are in use by researchers investigating the potential
for SDB in responses to other scales. These forms have been used to
measure a wide array of populations. Using a mixed-effects model
analysis, the predicted score reliability for male adolescents was .53
and the reliability for men's responses was lower than that for
women's. Suggestions are made concerning the necessity for further
psychometric evaluations of the MCSDS.},
URL = {http://epm.sagepub.com/cgi/content/abstract/62/4/570},
eprint = {http://epm.sagepub.com/cgi/reprint/62/4/570.pdf}
}

But it is not a SEM study, and I did not see them mentioning the
dichotomous response format. If nothing else, at least you can learn more
about the spelling ;-)

Wolfgang
Hey Everybody,
I'll put my 2 cents (CDN) in on this topic, but in the meantime I
have a question on something a bit different. I need some references
on recent (or any) psychometric work on the Crowne-Marlow social
desirability scale (including the spelling of the authors' names).
This topic seems perfect for SEM since the presumed latent (social
desirability) should affect the measures of other interesting
constructs but may or may not be related to the constructs
themselves. (I.e., social desirability, if it exists, is one form of
"crud" referred to in previous messages.) Since the items are
dichotomous (binary), I would expect recent work on the scale to
take this into account in the analysis (i.e., by using either
polychoric correlations or a related strategy).
Mike Gillespie
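The polychoric strategy Mike mentions reduces, for two dichotomous items, to the tetrachoric correlation, which can be roughly approximated from their 2x2 cross-table. Below is a minimal sketch using the classical cosine-pi approximation (the function name and the counts are hypothetical, not from this thread):

```python
import math

def tetrachoric_approx(a, b, c, d):
    # Cosine-pi approximation to the tetrachoric correlation
    # for a 2x2 table [[a, b], [c, d]] of two dichotomous items.
    if b == 0 or c == 0:
        return 1.0  # no discordant pairs: perfect association
    ratio = math.sqrt((a * d) / (b * c))
    return math.cos(math.pi / (1 + ratio))

# Independent items (ad == bc) give a correlation of about zero:
print(round(tetrachoric_approx(25, 25, 25, 25), 2))  # 0.0
```

Full CFA estimators such as WLSMV work from tetrachoric/polychoric correlations estimated by maximum likelihood rather than from this rough approximation.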
--------------------------------------------------------------
To unsubscribe from SEMNET, send email to ***@bama.ua.edu
with the body of the message as: SIGNOFF SEMNET
Search the archives at http://bama.ua.edu/archives/semnet.html
Michael Gillespie
2006-01-10 04:19:35 UTC
Permalink
Walter,

This sounds like exactly what I'm looking for. I hope I "like" what you found. Thanks, Mike

Chien-Hsin LIN
2006-01-10 05:13:19 UTC
Permalink
Dear all,

We have a SEM model results as below,

GFI=0.84
CFI=0.92
TLI=0.90
RMSEA=0.11

Chi-square(df. 129)=726.36; p<0.01

N =406

We literally reported the results in a paper as,
"The fit statistics indicated that the model was adequate,......"

However, the reviewer commented that "I am reluctant to agree with the authors when they argue that the
model adequately fits the data." He or she further suggested that an RMSEA of 0.08 or less is acceptable,
and that the acceptable ratio of chi-square to d.f. is 2 or less.

My question is: should I admit that our model is "inadequate,"
or are there other adjectives to describe our results?
Given that the above results are the best fit we can achieve in terms of "meaningful interpretation,"
what strategy can you suggest for elaborating the results in context?

Thank you for your help.

Best,

Chien-Hsin
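For what it is worth, the reported RMSEA can be reproduced from the chi-square, df, and N alone, which also makes the chi-square/df ratio easy to check (a minimal Python sketch of the standard Steiger-Lind RMSEA formula; the function name is mine):

```python
import math

def rmsea(chi2, df, n):
    # Steiger-Lind RMSEA: sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 726.36, 129, 406
print(round(rmsea(chi2, df, n), 2))  # 0.11, matching the reported value
print(round(chi2 / df, 2))           # 5.63, far above the suggested cutoff of 2
```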




Jason Cole
2006-01-10 05:23:34 UTC
Permalink
Chien-Hsin,

You are likely to get a panoply of responses to this e-mail, some of which will be propaganda and tautology, some of which will be helpful. I'll let you decide where my e-mail falls.

From a simple glance at the model-fit values, the values do appear to be quite poor. You have a model with many paths, so Marsh et al. (2004) may be a good paper to consult regarding the problems associated with certain goodness-of-fit measures with high df. Nevertheless, rather than trying to qualify the response as something good, I think you have two options. First, consider a very deconstructive approach examining all of the measurement models, partial structural models, and greater structural models, refining as you go (using theory and data - see Schumacker and Lomax, 2004, for a good process to preserve your generalizability). Second, if this process has already been undertaken, consider the model to just be inadequately fit to the data. I have examined a lot of measures using SEM where no previous work had been done with this tool. Sometimes our conception of the fit between the measure at hand and the theoretical construct is just wrong (of course, why it's wrong is a great story unto itself often times).

Good luck in your efforts,

Jason

Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11, 320-341.

Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.


____________________________________

Jason C. Cole, PhD
Senior Research Scientist & President
Consulting Measurement Group, Inc.
Tel:   866 STATS 99 (ex. 5)
Fax: 818 905 7768
7071 Warner Ave. #F-400
Huntington Beach, CA 92647
E-mail: ***@webcmg.com
web: http://www.webcmg.com
The Measurement of Success
____________________________________

Cheung, Gordon W. MGT
2006-01-10 07:46:54 UTC
Permalink
Dear Chien-Hsin,

Yes, the fit is inadequate. It is not even close to marginal fit. You are
probably using LISREL 8.5 or a higher version, where the incremental fit
indices are inflated.

What can you do? Work on your model again. One possibility: with a sample
of 406, could you split your sample into two meaningful groups? Each group
might then fit the model better.

Good luck,

Gordon Cheung

Professor, Department of Management
The Chinese University of Hong Kong
Shatin, Hong Kong



Les Hayduk
2006-01-10 18:03:15 UTC
Permalink
Hi Jason, Chien-Hsin, et al.

(Chien-Hsin said)
Post by Chien-Hsin LIN
Dear all,
We have a SEM model results as below,
GFI=0.84
CFI=0.92
TLI=0.90
RMSEA=0.11
Chi-square(df. 129)=726.36; p<0.01
N =406
We literally reported the results in a paper as,
"The fit statistics indicated that the model was adequate,......"
No, this is NOT adequate fit. This is highly significant ill fit. Yes, p<0.01,
but it is also p < 0.000001. If you think it is adequate, please tell us
what it is adequate FOR doing or for claiming. Chien-Hsin, you are one of
the people who should have been listening to, or who should start listening
to, the exact-fit test discussion here on SEMNET.
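Les's p < 0.000001 claim can be checked without SEM software: the exact-fit test is the upper-tail probability of the chi-square distribution at the reported value. A stdlib-only sketch using the Wilson-Hilferty normal approximation (the function name is mine):

```python
import math

def chi2_pvalue(x, df):
    # Wilson-Hilferty approximation: the cube root of chi2/df
    # is approximately normally distributed.
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability

p = chi2_pvalue(726.36, 129)
print(p < 1e-6)  # True: vastly smaller than the reported p < .01
```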
Post by Chien-Hsin LIN
However, the reviewer commented that "I am reluctant to agree with the
authors when they argue that the model adequately fits the data."
This reviewer is being mild in their response. I will be more
direct. Your model is not fitting adequately. It is displaying highly
significant ill fit you should be investigating seriously.
Post by Chien-Hsin LIN
He or she further suggested RMSEA 0.08 or less is acceptable,
the acceptable ratio of chi-square and d.f. is 2 or less.
Here I think your reviewer is out of date, and problematically
lax. There was a reference that suggested RMSEA was less than terrible with
that kind of value, but this reference itself has come under considerable
criticism in the last couple of years.
The idea of using a ratio of chi-square to d.f. as a criterion is
statistically problematic. Do not use this until you find someone who
justifies this on the basis of saying something other than "it seems OK for
me". I do not think you will find anyone who says anything else that holds
statistical-water.
Post by Chien-Hsin LIN
My question is should I admit that our model is "inadequate."
Admit that your model is highly statistically inconsistent with
the data, and then set about trying to figure out and correct what was/is
problematic. If you can not find out what is problematic, write this up and
submit it as a model that is inconsistent with the data for reasons you
can't identify, and list/state the kinds of things that remain likely
reasons of ill fit (that you did not or could not check).
Post by Chien-Hsin LIN
Or are there any other adjectives to describe our results?
Forget adjectives. Respect the data like a scientist would. Do as
thorough an investigation of the potential reasons for ill fit as you can,
and then report honestly on your success/failure at figuring this out. An
adjective does not do justice to the deep and substantive issues
significant ill fit points to.
Post by Chien-Hsin LIN
Given that above results is the best we can fit in terms of "meaningful
interpretation," what strategy can you suggest when we elaborate the
results in the contexts?
Thank you for your help. Best,
Chien-Hsin
I hear your wording as being typical of those using factor
analysis. It seems obvious that if there is no factor model that has a
"meaningful interpretation", you had better seriously consider some
non-factor model for these data. You should also consider all the other
kinds of things that can be serious problems: lack of causal homogeneity,
poor items/wordings, etc.

(Jason replied to Chien-Hsin, Les interrupting)
Post by Jason Cole
Chien-Hsin,
You are likely to get a panoply of responses to this e-mail, some of which
will be propaganda and tautology, some of which will be helpful.
Chien-Hsin, I will rate Jason's response as being propaganda and
as counseling inadequate SEM methodology, and hence as being
unhelpful/deficient.
Post by Jason Cole
I'll let you decide where my e-mail falls.
Yes Chien-Hsin, you will have to decide how you will respond to
highly significant ill fit, and who it is that recommends doing things that
are problematic/deficient. I will also decide -- and others are also free
to decide.
Welcome to being confronted with either standing up and trying
(but likely failing) to defend not-good-enough fit testing, or being
described as providing methodologically deficient advice, Jason.
Post by Jason Cole
From a simple glance of the model-fit values, the values do appear to be
quite poor.
You describe the "model-fit" as being quite poor. Now try to
describe the model-causal-specification, Jason. Chien-Hsin should be
attempting to test their model's causal specification, and to understand
the model's causal specification. What do you think Chien-Hsin should say
about this Jason? Chien-Hsin should be more interested in the
adequacy/appropriateness of the model's specification, than in "degree" of fit.
Post by Jason Cole
You have a model with many paths, so Marsh et al. (2004) may be a good
paper to consult regarding the problems associated with certain goodness
of fit measures with high df.
Yes Chien-Hsin, consider carefully the idea that there are
PROBLEMS associated with certain goodness of fit measures. One of the
biggest PROBLEMS with fit measures is that people attempt to use them as a
way to displace model TESTING. That attempt is methodologically deficient
Chien-Hsin, so beware anyone that tries to distract you from model testing
by pointing to measures of degree of fit.
Post by Jason Cole
Nevertheless, rather than trying to qualify the response as something good, I
think you have two options. First, consider a very deconstructive
approach examining all of the measurement models, partial structural
models, and greater structural models, refining as you go (using theory
and data - see Schumacker and Lomax, 2004, for a good process to preserve
your generalizability).
Chien-Hsin, beware that the more "exploration" you do, the more
biased (toward reporting fit) will be your final model fit test outcome.
Post by Jason Cole
Second, if this process has already been undertaken, consider the model
to just be inadequately fit to the data.
No, the model is not "inadequately fit to the data". The model is
inconsistent with the data. And what should Chien-Hsin do then, Jason?
Post by Jason Cole
I have examined a lot of measures using SEM where no previous work had
been done with this tool.
I think you are meaning "measures" in the sense of scales created
from items, rather than fit indices as measures. Here are two questions
Jason. Please consider all the scales that you looked at that had been
based on (or that had a history of) factor analysis as their foundation.
What percent (perhaps range of percents) of these did you TEST by using the
full set of scale items in your model ALONG WITH some latent variable(s) in
addition to the scale-latent variable? If you used the scale-score, rather
than the full set of items along with its underlying latent, that would
not have provided a sufficiently cogent testing of your "measure".
And my second question (in the same context) is: what percent of
the models tested with the full set of items passed or failed according to
chi-square?
I do not want a real count, Jason; just sit back in your chair,
consider my wordings and the above context, and spend 30 seconds describing
your experience in these regards. Chien-Hsin, I am asking these questions
so that you can get a sense of the adequacy or inadequacy in Jason's
attention to model testing. If Jason was accustomed to doing weak testing,
would you, Chien-Hsin, want to follow him into doing weak SEM model testing?
Stronger testing is possible, and Jason may someday convert to stronger
testing, but at the moment I am hearing Jason as whispering inadequate
testing. So I am implicitly asking him to speak up loudly enough for all to
hear what he is really trying to say.
Post by Jason Cole
Sometimes our conception of the fit between the measure at hand and the
theoretical construct is just wrong (of course, why it's wrong is a
great story unto itself often times).
Good luck in your efforts,
Jason
Yes Chien-Hsin, consider carefully that your items might not be
caused by the theoretical construct(s) that your model/theory
requires/contains.
Les

Les Hayduk; Department of Sociology; University of Alberta;
Edmonton, Alberta; Canada T6G 2H4
email: ***@ualberta.ca  fax: 780-492-7196  phone: 780-492-2730


Ed Rigdon
2006-01-10 21:55:51 UTC
Permalink
Chien-Hsin Lin--

I think your best approach would be to become a critic of
your own results. If the fit of your model seems poor (as these
results indicate), *why* did you obtain these fit results? First
steps are to look at your modification indices and standardized
residuals, to find the omitted paths which, if included in your
model, would dramatically improve fit. You may also need to look
at alternatives outside the model structure, and that includes
heterogeneity in your data. This can come from combining
different populations or from careless respondents (who may
choose a given category on a response scale regardless of the
question). If you *don't* find missing paths and you *don't* find
problems in the data, well then, perhaps the conclusion that fit is
poor needs to be reconsidered.
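Ed's residual-inspection advice can be illustrated with a toy example: subtract the model-implied matrix from the sample matrix and look for large entries (the 3x3 correlation matrices below are made up purely for illustration):

```python
# Hypothetical sample vs. model-implied correlation matrices;
# a large residual flags a relationship the model fails to reproduce.
sample = [[1.00, 0.62, 0.55],
          [0.62, 1.00, 0.48],
          [0.55, 0.48, 1.00]]
implied = [[1.00, 0.60, 0.30],
           [0.60, 1.00, 0.45],
           [0.30, 0.45, 1.00]]
residuals = [[round(s - m, 2) for s, m in zip(row_s, row_m)]
             for row_s, row_m in zip(sample, implied)]
print(residuals[0][2])  # 0.25: variables 1 and 3 suggest an omitted path
```

Real SEM software additionally standardizes these residuals by their standard errors before flagging them.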

--Ed Rigdon

Edward E. Rigdon, Professor and Chair,
Department of Marketing
Georgia State University
P.O. Box 3991
Atlanta, GA 30302-3991
(express: 35 Broad St., Suite 1300, zip 30303)
phone (404) 651-4180 fax (404) 651-4198
Chien-Hsin LIN
2006-01-13 00:29:20 UTC
Permalink
Hi Ed, Les, Gordon and Jason,

Thank you for your helpful suggestions.
I will try to rework my model.

Chien-Hsin



