Discussion:
Single-indicator latent variable
Daniel Yang
2005-07-15 05:43:07 UTC
Permalink
Dear SEMNET,

I understand that the major difference between SEM and path analysis lies in the fact that SEM deals with relations among true scores. My idea is that a typical path analysis can be converted into an SEM by specifying single-indicator latent variables whose factor loadings are fixed at 1.00 and whose error variances are fixed at (1 - alpha) times the variances of the observed variables. I wonder whether any logical mistake is hidden in such a conversion. If not, this idea raises the possibility that all path analyses, when appropriate, should be set up this way in order to deal with relations among true scores.
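A minimal numeric sketch of the conversion being proposed (the variances, correlation, and alpha values below are invented for illustration; for a pair of variables, fixing the loading at 1.00 and the error variance at (1 - alpha) * Var(x) amounts to the classical correction for attenuation):

```python
# Toy illustration of the single-indicator setup described above.
# Loadings are fixed at 1.00; error variances at (1 - alpha) * Var(observed).
import math

var_x, var_y = 4.0, 9.0          # observed variances (hypothetical)
r_xy = 0.40                      # observed correlation (hypothetical)
alpha_x, alpha_y = 0.80, 0.70    # Cronbach's alphas for the two scales (hypothetical)

theta_x = (1 - alpha_x) * var_x  # fixed error variance for x
theta_y = (1 - alpha_y) * var_y  # fixed error variance for y

var_ksi = var_x - theta_x        # implied latent (true-score) variance behind x
var_eta = var_y - theta_y        # implied latent (true-score) variance behind y

# Latent-level correlation implied by the fixed parameters
r_true = r_xy * math.sqrt(var_x * var_y) / math.sqrt(var_ksi * var_eta)
print(f"fixed error variances: {theta_x:.2f}, {theta_y:.2f}")
print(f"disattenuated correlation: {r_true:.3f}")   # equals r_xy / sqrt(alpha_x * alpha_y)
```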

Daniel Yang
Dale Glaser
2005-07-15 06:20:48 UTC
Permalink
I was consulting with a researcher tonight, exploring what she refers to as the construct of "intuition"; she hypothesizes that though there are 6 constructs (e.g., energy, cues, spiritual connection, etc.), there is an overarching second-order factor (i.e., intuition). Thus, in testing the second-order confirmatory factor analysis the following fit is obtained: chi-square (269) = 1211.94, RMSEA = .0546, NNFI = .929, CFI = .937, AIC = 717.758, SRMR = .073. When testing the first-order factor model only, the fit is somewhat (though negligibly) better: chi-square (284) = 1219.72, RMSEA = .0527, NNFI = .931, CFI = .939, AIC = 748.76, SRMR = .0647. Even though an examination of the information-theoretic indices (e.g., AIC) may direct one's interest toward the second-order model, the chi-square difference test is not significant: chi-square diff (15) = 7.051, p > .05, indicating that in the name of parsimony the first-order model deserves serious consideration. However, as we were discussing the theoretical model, it became clear that the researcher developed the model with the overarching objective of tapping the "general" construct of intuition consisting of its constituent elements (spiritual connection, etc.), and she posed this question to me: if the second-order CFA fails to exhibit substantively better fit than the first-order model, can she still claim the model is testing the construct of intuition? In the spirit of Occam's razor it may be easier to explain the first-order model, but she has a difficult time extricating herself from the superordinate construct of "intuition", at least as the foundational/causal framework for the six subordinate constructs.

Which got me thinking: let's say a cadre of esteemed SEMNET'ers is asked to consult on a study circa 1939 (taking some liberties with dates!). A colleague named Charles Spearman, though acknowledging that intelligence includes specific factors (i.e., special abilities), has built his model around the underlying thread of a general intelligence, a 'g' factor. Another project team member (L. L. Thurstone) acknowledges this second-order factor, but in his mind's eye he places more import on 7 'primary mental abilities' such as verbal comprehension, word fluency, etc. Indeed, in your testing of the second-order and first-order models, a better fit is found with Thurstone's model. Spearman does not disagree that there are subfactors; in fact they are closely aligned with his notion of 'special abilities'. But he has a difficult time explaining these abilities without referring to the umbrella construct of 'g'. Despite his protestations, you indicate the statistical evidence points to the more parsimonious model. To some extent, I felt the same level of dissonance tonight when working with my colleague. She developed the items with "intuition" in mind, pilot tested the items with "intuition" in mind, and postulated her model with "intuition" in mind. Though the evidence indicates a slightly better fit for the first-order model, she still defaults to depicting her model as a test of "intuition" despite the findings. I wonder if Spearman and Thurstone had similar thoughts?!
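For readers who want to check the mechanics of the nested-model comparison described above, here is a small sketch (the fit values are those quoted in the post; scipy is assumed to be available):

```python
# Chi-square difference test for the first- vs. second-order CFA described above.
from scipy.stats import chi2

chisq_second, df_second = 1211.94, 269   # second-order model
chisq_first, df_first = 1219.72, 284     # first-order model (more constrained here)

diff = chisq_first - chisq_second        # = 7.78 (the post reports 7.051)
df_diff = df_first - df_second           # = 15

p_value = chi2.sf(diff, df_diff)         # upper-tail probability of the difference
print(f"chi-square diff({df_diff}) = {diff:.2f}, p = {p_value:.3f}")
# p is far above .05, so the additional constraints are not rejected and the
# more parsimonious model is retained on purely statistical grounds.
```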



Dale Glaser, Ph.D.
Principal--Glaser Consulting
Lecturer--SDSU/USD/AIU
4003 Goldfinch St, Suite G
San Diego, CA 92103
phone: 619-220-0602
fax: 619-220-0412
email: ***@sbcglobal.net
website: www.glaserconsult.com
Scott Orr
2005-07-15 08:25:06 UTC
Permalink
-----Original Message-----
From: Daniel Yang <***@GATE.SINICA.EDU.TW>
Sent: Jul 15, 2005 1:42 AM
To: ***@BAMA.UA.EDU
Subject: Single-indicator latent variable

Dear SEMNET,

I understand that the major difference between SEM and path analysis lies in the fact that SEM deals with relations among true scores. My idea is that a typical path analysis can be converted into an SEM by specifying single-indicator latent variables whose factor loadings are fixed at 1.00 and whose error variances are fixed at (1 - alpha) times the variances of the observed variables. I wonder whether any logical mistake is hidden in such a conversion. If not, this idea raises the possibility that all path analyses, when appropriate, should be set up this way in order to deal with relations among true scores.

-----------------------------------------------------------------------------------------------------------------

That won't quite work unless the latent variable is the only non-random constituent of the indicator. That is, if the indicator is essentially the same thing as the latent, plus some measurement error, the procedure above works fine. If the indicator is the latent plus measurement error plus something else (say, it's actually an indicator of two different latents), then your error variance in the SEM is too small.

Scott Orr
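In equation form (generic LISREL-style symbols, not parameters from any model discussed here), Scott's scenario of a single indicator with two latent sources is:

```latex
x = \lambda_1 \xi_1 + \lambda_2 \xi_2 + \delta ,
\qquad
\operatorname{Var}(x) = \lambda_1^{2}\phi_{11} + \lambda_2^{2}\phi_{22}
  + 2\lambda_1\lambda_2\phi_{12} + \theta_{\delta} .
```

If the model includes only the first latent, everything other than the lambda_1-squared term has to be absorbed somewhere, and whether a fixed (1 - alpha) * Var(x) over- or under-states theta depends on how the omitted portion is treated, which is what the rest of the thread argues about.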
e***@ACSU.BUFFALO.EDU
2005-07-15 13:46:34 UTC
Permalink
All,

I'm working on a CFA involving both ordinal and continuous variables, and I want to either correct or improve my understanding of the computation of polychoric and polyserial correlations. So, take two variables, both 5-category Likert-type items. My questions: must every cell of the underlying 5-by-5 crosstab have nonzero entries? If the answer is no, then is there a point at which the computation begins to break down because of the number of zero entries? What determines that point? If the answer to my lead question is yes, then what is the generally accepted wisdom about the minimum number of cases per cell, or about the fraction of cells allowed to fall below some minimum number? And that minimum would be? Almost finally, although I think I have the conceptual idea of how polychorics are computed, I don't have the computational ideas. So, more generally, can the computation of polychorics and polyserials break down, and how will that be known when using a program such as Mplus?
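Since part of the question is about the computational ideas, here is a rough two-step polychoric sketch (the function and variable names are mine, and this is only a conceptual illustration of the estimator, not a description of what Mplus actually does): thresholds come from the marginal proportions, and rho is then chosen to maximize the likelihood of the observed table. On this account, zero cells mostly just contribute nothing to the likelihood; trouble tends to come from empty rows or columns (which collapse thresholds) or from tables whose pattern pushes rho toward the +/-1 boundary.

```python
# Conceptual two-step polychoric correlation for a table of counts
# (illustrative only; production software uses more robust routines).
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def polychoric(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()

    # Step 1: thresholds from cumulative marginal proportions,
    # bounded by -inf and +inf (the final cumulative value of 1.0 is dropped).
    a = np.concatenate(([-np.inf], norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n), [np.inf]))
    b = np.concatenate(([-np.inf], norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n), [np.inf]))

    def neg_loglik(rho):
        mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

        def F(x, y):  # bivariate normal CDF with crude handling of infinite limits
            if x == -np.inf or y == -np.inf:
                return 0.0
            return mvn.cdf([min(x, 8.0), min(y, 8.0)])

        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                p = (F(a[i + 1], b[j + 1]) - F(a[i], b[j + 1])
                     - F(a[i + 1], b[j]) + F(a[i], b[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))  # zero counts add nothing
        return -ll

    # Step 2: choose rho to maximize the likelihood of the observed table.
    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
```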

Thank you,
Gene Maguin
Daniel Yang
2005-07-15 10:13:05 UTC
Permalink
Hi Scott,

Daniel Yang wrote,
Post by Daniel Yang
I understand that the major difference between SEM and path analysis lies in the fact that SEM deals with relations among true scores. My idea is that a typical path analysis can be converted into an SEM by specifying single-indicator latent variables whose factor loadings are fixed at 1.00 and whose error variances are fixed at (1 - alpha) times the variances of the observed variables. I wonder whether any logical mistake is hidden in such a conversion. If not, this idea raises the possibility that all path analyses, when appropriate, should be set up this way in order to deal with relations among true scores.
Scott Orr replied,
That won't quite work unless the latent variable is the only non-random constituent of the indicator. That is, if the indicator is essentially the same thing as the latent, plus some measurement error, the procedure above works fine. If the indicator is the latent plus measurement error plus something else (say, it's actually an indicator of two different latents), then your error variance in the SEM is too small.
Indeed, the situation you described (an indicator reflecting more than one latent variable) is one of the situations where it would be inappropriate to apply this idea. However, I wonder: (a) is any solution available in this case? (b) Are you sure the error variance is too small? I think the opposite, since the error variance might contain true-score variance of the 2nd latent variable. (c) Given that (b) is right, (1 - Cronbach's alpha) remains a reasonable correction for error variance because at worst it over-estimates the error variance. But I am glad to have your agreement that it is indeed superior (so that error variance is statistically removed) to set things up this way when any given indicator reflects just one LV.

Daniel Yang
Les Hayduk
2005-07-15 19:28:02 UTC
Permalink
Hi Daniel (Y.), Scott, et al.
Post by Daniel Yang
Hi Scott,
Daniel Yang wrote,
Post by Scott Orr
Post by Daniel Yang
I understand that the major difference between SEM and path analysis lies in the fact that SEM deals with relations among true scores. My idea is that a typical path analysis can be converted into an SEM by specifying single-indicator latent variables whose factor loadings are fixed at 1.00 and whose error variances are fixed at (1 - alpha) times the variances of the observed variables. I wonder whether any logical mistake is hidden in such a conversion. If not, this idea raises the possibility that all path analyses, when appropriate, should be set up this way in order to deal with relations among true scores.
Yes, I think all, or almost all, path analyses could and should be
set up as SEmodels. This adjusts for measurement error and provides a model
test that is not available in path analysis.
Post by Daniel Yang
Scott Orr replied,
Post by Scott Orr
That won't quite work unless the latent variable is the only non-random
constituent of the indicator.
I think it is false to claim this in general. I think this is too situation/model/context dependent to make a general assertion about.
Post by Daniel Yang
That is, if the indicator is essentially the same thing as the latent,
plus some measurement error, the procedure above works fine.
OK, so Scott says single-indicators work fine in a bunch of
contexts. Glad we at least agree on something.
Post by Daniel Yang
If the indicator is the latent plus measurement error plus something
else (say, it's actually an indicator of two different latents), then
your error variance in the SEM is too small.
Not if you include both the latents in the model! You may or may not be able to include both latents.

(Daniel replied)
Post by Daniel Yang
Indeed, the situation you described (an indicator reflecting more than one
latent variable) is one of the inappropriate situations to apply this idea.
I think this may or may not be possible, depending on many
circumstances.
Post by Daniel Yang
However, I wonder: (a) is any solution available in this case?
-- we can't know this in general.
Post by Daniel Yang
(b) are you sure the error variance is too small? I think the opposite
since the error variance might contain true score of the 2nd latent variable
Here is a good issue to think/talk about. If we know/model/theorize two latents as the sources of a single indicator (in a model that remains identified due to its other features), is the variance contributed to the indicator by the two latents (and indeed by the covariance between the two latents) ALL counted as "true" variance, or is part of it "error"? I am on the side of saying it is best to regard all the variance contributed by all the latents (and the coordinations between the latents) as true variance. This is inconsistent with much standard jargon, and with some assumed/presumed/traditional ways of thinking, so this is a good place for digging into what I think will be the next big/prolonged SEMNET discussion (namely the use of single indicators in SEM).
Post by Daniel Yang
(c) Given that (b) is right, (1 - Cronbach's alpha) remains a reasonable correction for error variance because at worst it over-estimates the error variance.
You seek validity (for your model's specification) and alpha speaks to reliability. Alpha may be close to, or even occasionally
correspond to, the appropriate value, but I recommend you consider what it
takes to potentially attain validity.
Post by Daniel Yang
But I am glad to have your agreement that it is indeed superior (so that error variance is statistically removed) to set things up this way when any given indicator reflects just one LV.
Daniel Yang
And it may be possible (though it is not guaranteed) to set this up if a single indicator has two latent sources.
Les

===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730
Daniel Yang
2005-07-16 10:07:05 UTC
Permalink
Hi Les,

Daniel Yang wrote,
(1 - Cronbach's alpha) remains a reasonable correction for error variance.
Les replied,
You seek validity (for your model's specification) and alpha-speaks to
reliability. Alpha may be close to, or even occasionally correspond to, the
appropriate value, but I recommend you consider what it takes to potentially
attain validity.

No, I don't agree that I use alpha for validity purposes. I use alpha only for reliability purposes: making measurement more accurate by adjusting for error variance. I don't quite understand why the issue of validity is involved in this case.

Daniel Yang
Les Hayduk
2005-07-19 22:12:19 UTC
Permalink
Hi Daniel (Y.) et al.
Post by Daniel Yang
Daniel Yang wrote,
(1 - Cronbach's alpha) remains a reasonable correction for error variance.
Les replied,
You seek validity (for your model's specification) and alpha-speaks to
reliability. Alpha may be close to, or even occasionally correspond to, the
appropriate value, but I recommend you consider what it takes to potentially
attain validity.
(Daniel replied)
Post by Daniel Yang
No, I don't agree that I use alpha for validity purposes.
It is not alpha that requires/forces/demands validity. It is the
SEmodel that requires/forces-attention-to/demands validity. You use
SEmodels to seek valid representations of the world, and this is the
foundation of our interest in validity.
Yes, alpha seeks/is/attempts-to-provide reliability. Our MODELS
require/seek/attempt to incorporate validity.
Post by Daniel Yang
making measurement more accurate by adjusting for error variance.
Yes, adjusting for reliability is better than doing nothing at
all, but adjusting for reliability is not as good as adjusting for validity.
Post by Daniel Yang
I don't quite understand why the issue of validity is involved in this case.
Daniel Yang
Validity is involved because we want a validly specified model.
If there is any difference between validity and reliability, and the model
requires/seeks/wants an error-variance-specification that renders the model
VALID, and what you enter is based on reliability, your use of reliability
has prevented your model from being validly specified.
Les


===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730
Scott Orr
2005-07-16 09:38:27 UTC
Permalink
-----Original Message-----
From: Daniel Yang
Sent: Jul 15, 2005 6:10 AM
To: ***@ix.netcom.com
Subject: Re: Single-indicator latent variable


Indeed, the situation you described (an indicator reflecting more than one latent variable) is one situation where it would be inappropriate to directly apply my idea. I wonder: (a) is any solution available in this case? (b) Are you sure the error variance is too small? I think the opposite, since the error variance might contain true-score variance of the 2nd latent variable. (c) Therefore (1 - Cronbach's alpha) remains a reasonable correction for error variance, because at worst it over-estimates the error variance. But I am glad to have your agreement that it is indeed superior (so that error variance is statistically removed) to set things up this way when any given indicator reflects just one LV.

----------------------------------------------------------------------------------------------------------------

The solution may be to make an estimate based on known values for similar models -- one approach I've used is to try a range of values to see how sensitive the model is to differences in that one parameter.

I don't see how it's going to _over_ estimate the error variance. Indeed, it's fairly likely to underestimate it, since it's only a reliability coefficient--it's likely to represent just one type of error. If, for example, it represents test-retest reliability, it's missing the error that both tests share in measuring the latent (but that's really just another way of saying that the indicator may represent more than just that one latent plus measurement error).
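Scott's "try a range of values" suggestion amounts to a sensitivity analysis over the fixed error variance. A toy sketch (numbers invented; the disattenuated slope below simply stands in for whatever structural estimate the full model would produce):

```python
# Sensitivity sweep over the assumed reliability of a single indicator.
var_x = 4.0      # observed variance of x (hypothetical)
cov_xy = 1.8     # observed covariance of x with an outcome y (hypothetical)

for rel in (0.60, 0.70, 0.80, 0.90, 1.00):
    theta = (1 - rel) * var_x          # fixed error variance implied by this reliability
    slope = cov_xy / (var_x - theta)   # structural slope after removing the error variance
    print(f"reliability={rel:.2f}  fixed error variance={theta:.2f}  slope={slope:.3f}")
# If the slope barely moves across plausible reliabilities, the substantive conclusion
# is robust to the exact value being fixed; if it swings widely, the choice matters.
```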

Scott Orr
Daniel Yang
2005-07-16 10:35:58 UTC
Permalink
Hi Scott Orr,
Post by Scott Orr
I don't see how it's going to _over_ estimate the error variance. Indeed,
it's fairly likely to underestimate it, since it's only a reliability
coefficient--it's likely to represent just one type of error. If, for
example, it represents test-retest reliability, it's missing the error that
both tests share in measuring the latent (but that's really just another way
of saying that the indicator may represent more than just that one latent
plus measurement error).

Different methods for estimating reliability have their own sources of error variance. For example, test-retest has a changes-over-time kind of error source; alpha has a content-sampling kind of error source. But I think it doesn't make much sense to state either over- or under-estimation if different sources of error variance are being compared. It is for sure that a typical interval-scale indicator reflects more than one latent variable plus random error, because it can theoretically be so decomposed. For example, many social attitudes consist of both a superficial part (externally forced) and a deep part (commitment). Now, the tricky part of the issue lies in the fact that the meaning of error is somewhat context-specific. To the extent that the meaning of error is constant across the two LVs, a single alpha works well. To the extent that the meaning of error is different across the two LVs, the error is supposed to be the residual of that single indicator regressed on the two LVs, an idea which seems infeasible.

Daniel Yang
Les Hayduk
2005-09-05 23:33:32 UTC
Permalink
Hi Daniel, Scott, et al.
Post by Daniel Yang
Hi Scott Orr,
(Scott had said, Les interrupting)
Post by Daniel Yang
Post by Scott Orr
I don't see how it's going to _over_ estimate the error variance.
I will try to help you "see", Scott. The topic is single indicators of latent variables. Let us make our "reliability" estimate by looking at TWO indicators of the latent. One indicator is perfect (no error); the other contains much error. The correlation between the indicators will be less than perfect, and the reliability estimate resulting from this less-than-perfect correlation will be less than perfect. We now use the perfect indicator as our "single indicator". The reliability (which, being less than perfect, results in a measurement error variance that is fixed at some non-zero value) will overestimate the true (zero) error variance for this indicator.
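Les's thought experiment in numbers (a made-up two-indicator example): the reliability implied by the correlation between a perfect and a noisy indicator is well below 1, so a (1 - reliability) rule assigns the perfect indicator an error variance it does not actually have.

```python
# x1 = T (perfect indicator); x2 = T + e2 (noisy indicator).
import math

var_T = 1.0    # latent (true-score) variance, hypothetical
var_e2 = 1.0   # error variance of the noisy indicator, hypothetical

var_x1 = var_T            # perfect indicator
var_x2 = var_T + var_e2   # noisy indicator
cov_12 = var_T            # errors assumed independent of T

r12 = cov_12 / math.sqrt(var_x1 * var_x2)   # correlation between the two indicators
rel_estimate = r12                          # treated as a two-indicator reliability estimate

fixed_theta_x1 = (1 - rel_estimate) * var_x1
print(f"correlation between indicators: {r12:.3f}")                         # about 0.707
print(f"error variance assigned to the perfect indicator: {fixed_theta_x1:.3f}")
print("its actual error variance is 0.0, so the fixed value overestimates it")
```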
Post by Daniel Yang
Indeed, it's fairly likely to underestimate it, since it's only a
reliability
coefficient--it's likely to represent just one type of error.
I do not understand this claim in the context of single-indicators
Scott. Please explain. Estimates of reliability/validity ordinarily come
from multiple (two or more) indicators -- but some of the "ordinary ways of
thinking" need to be reconsidered in the context of single indicators.
Post by Daniel Yang
If, for example, it represents test-retest reliability, it's missing the
error that
both tests share in measuring the latent (but that's really just another way
of saying that the indicator may represent more than just that one latent
plus measurement error).
Now suppose the test-retest indicators are NOT equally good
indicators. (I think it will not always be the first, and will not always
be the second, that will be better, and sometimes the two might even be
equally good). Does the validity of the single-better indicator have to be
worse than the reliability from the test-retest that also uses the
not-so-good indicator?

(Daniel replied, to Scott, Les interrupting)
Post by Daniel Yang
Different methods for estimating reliability have their own sources of error
variance. For example, test-retest has changes-over-time kind of error
source;
-- so even two perfect measurements would provide a correlation of
less than 1.0 between the indicators --
Post by Daniel Yang
alpha has content-sampling kind of error source.
Daniel, do SEM indicators require, need, or even "permit"
content-sampling IF we have a single-indicator? This could lead to a fun
discussion of whether content-sampling is required/needed/permitted if the
world works causally-differentially (or the same) in various segments of
the potentially sampled-content area.
Post by Daniel Yang
But I think it doesn't make much sense to state either over- or under-estimation if different sources of error variance are compared.
Yes, each way of obtaining measurement-quality estimates will be
subject to a variety of potential reasons for mis-estimation of quality.
One usual assumption is that "the indicators are equally good", and
rethinking the consequences of the failure of this assumption is part of
what is involved in assessing the utility of single-indicators. I think it
is important to try to keep track of over-under, but I agree that we can
not routinely simply assert (as Scott did) that we will routinely encounter
over, or routinely encounter under, estimates of error variance of
single-indicators if we look at reliability estimates.
Post by Daniel Yang
It is for sure that a typical interval-scale indicator reflects more than one latent variable plus random error, because it can theoretically be so decomposed.
I do not understand this statement. Yes one can ("for sure") write
an equation that decomposes the indicator into latent (true) and error causal sources, but I do not think of this as being "for sure" because of
the appeal to "theory" that is implicit in making this specification for
the indicator part of a specific model. To connect the indicator to a model
properly (causally properly) adds constraints to both "what might be
thought of as the latent" and "what might be thought of as error".
Post by Daniel Yang
For example, many social attitudes consist of both a superficial part (externally forced) and a deep part (commitment). Now, the tricky part of the issue lies in the fact that the meaning of error is somewhat context-specific.
I see the model as being "important specific context". That is,
the theory in the model is in control of the latent that this indicator is
supposed to indicate. If the theory provides a context that changes the
latent, then the error specification will also change.
Post by Daniel Yang
To the extent that the meaning of error is constant across the two LVs,
Is it the meaning of the error that is at issue, or is it the
substance/meaning/nature of the latent -- the latent required/desired in
the full model theory -- that is of fundamental importance?
Post by Daniel Yang
a single alpha works well. To the extent that the meaning of error is different across the two LVs, the error is supposed to be the residual of that single indicator regressed on the two LVs, an idea which seems infeasible.
Daniel Yang
I do not see what you think is "infeasible". I think it is
possible, and may even be required, to properly specify a model having two
latents influencing a single indicator. This may make it more difficult to
estimate the model, but I would favor/pursue the proper specification even
if the estimation became more of a challenge.
Les


===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730


Daniel Yang
2005-07-20 16:10:05 UTC
Permalink
Hi Les,

You mentioned,
Post by Les Hayduk
Validity is involved because we want a validly specified model.
If there is any difference between validity and reliability, and the model
requires/seeks/wants an error-variance-specification that renders the model
VALID, and what you enter is based on reliability, your use of reliability
has prevented your model from being validly specified. Les

Here is a point of possible confusion. According to Gregory (2000), validity is defined as a property of a psychological test. There are three different ways of accumulating validity evidence, namely content validity, criterion-related validity, and construct validity. Now, you seem to be talking about another thing: model-specification validity. These are different things, I believe, or at least different levels. I believe we must be careful to use and clearly define the term 'validity' in our discussion if we want the inferences based on the discussion to be valid. As far as I know, (1-alpha) is a valid model specification -- the model can be estimated. So if I use well-validated questionnaires to collect the data for the indicators, I get valid data too. I then specify the model in the (1-alpha) style, and I don't see why the inferences made from this model would not be appropriate, meaningful, and useful.

Daniel Yang
Wolfgang Rauch
2005-07-20 17:29:39 UTC
Permalink
Daniel,

I think Les really means you should specify as error variance something
like (1-validity), since that would give you a better estimate of "true"
construct variance, meaning that true score variance is not the same as
reliable variance. But that is an issue that has probably been discussed
in the literature and on SEMNET. Still I will give it a try and hope
that someone more experienced might correct me if necessary:

When equating reliable variance with true score variance you would
really assume that your measure is a perfectly valid indicator for the
construct. Remember the factor-analytic model: Indicators share true
score variance (=variance that is explained by the factor); each
indicator also has unique variance, which is a mix of random error
variance and reliable variance due to the special aspects that this one
measure captures. That means, not all reliable variance of a measure is
due to the construct. Therefore, reliability is arguably not the best
(at least not the only) estimate of true score variance (=explained by a
latent construct).
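Wolfgang's decomposition, written out in generic factor-model notation (the symbols are illustrative, not tied to any particular scale):

```latex
\operatorname{Var}(x) =
  \underbrace{\lambda^{2}\operatorname{Var}(\xi)}_{\text{construct-related}}
  + \underbrace{s^{2}}_{\text{reliable but specific}}
  + \underbrace{\operatorname{Var}(e)}_{\text{random error}},
\qquad
\text{reliability} = \frac{\lambda^{2}\operatorname{Var}(\xi) + s^{2}}{\operatorname{Var}(x)}
\;\ge\;
\frac{\lambda^{2}\operatorname{Var}(\xi)}{\operatorname{Var}(x)} .
```

So whenever the specific component s^2 is nonzero, (1 - reliability) * Var(x) understates the part of Var(x) that is not due to the construct, which is the caution being raised here.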

One possible decomposition of variances appears in latent-state-trait
theory (LST): reliable variance is decomposed into state-, trait-, and
method variance (e.g. Schermelleh-Engel et al., 2004; Steyer et al.,
1999). Depending on your goal, you might for example consider only trait
variance as true score variance.

Beyond this, even if you maintain the position that reliable variance is
a reasonable estimate of true score variance, I would also agree with
Scott that Alpha may not be the best estimate of reliability, depending
on what kind of measure you use, so I do not think many researchers will
be comfortable with an absolute statement like "all path analyses should
be set up with error variances of single indicator variables set to
(1-Alpha)".

Finally, I want to note that in your first posting in this thread you
wrote: "I understand the major difference between SEM and path
analysis..."
Many researchers would say that this phrase is logically wrong.
Structural equation modeling is a general approach that encompasses path
analysis as a special case.

Regards, Wolfgang



Schermelleh-Engel, K., Keith, N., Moosbrugger, H. & Hodapp, V. (2004).
Decomposing person and occasion-specific effects: An extension of latent
state-trait theory to hierarchical LST models. Psychological Methods, 9,
198-219.

Steyer, R., Schmitt, M., & Eid, M. (1999). Latent state-trait theory and
research in personality and individual differences. European Journal of
Personality, 13, 389–408.
Post by Daniel Yang
Hi Les,
You mentioned,
Post by Les Hayduk
Validity is involved because we want a validly specified model.
If there is any difference between validity and reliability, and the model
requires/seeks/wants an error-variance-specification that renders the model
VALID, and what you enter is based on reliability, your use of reliability
has prevented your model from being validly specified. Les
Here is a point of possible confusion. According to Gregory (2000), validity is defined as a property of a psychological test. There are three different ways of accumulating validity evidence, namely content validity, criterion-related validity, and construct validity. Now, you seem to be talking about another thing: model-specification validity. These are different things, I believe, or at least different levels. I believe we must be careful to use and clearly define the term 'validity' in our discussion if we want the inferences based on the discussion to be valid. As far as I know, (1-alpha) is a valid model specification -- the model can be estimated. So if I use well-validated questionnaires to collect the data for the indicators, I get valid data too. I then specify the model in the (1-alpha) style, and I don't see why the inferences made from this model would not be appropriate, meaningful, and useful.
Daniel Yang
--
Dipl.-Psych. Wolfgang Rauch
Department of Psychology
Johann Wolfgang Goethe-Universität Frankfurt am Main
Psychological Research Methods and Evaluation
Tel. +49 (0) 69-798-22081
Fax +49 (0) 69-798-23847
***@psych.uni-frankfurt.de
http://www.uni-frankfurt.de/~rauchw
Les Hayduk
2005-07-20 18:28:21 UTC
Permalink
Hi Daniel, et al.
Post by Daniel Yang
Hi Les,
You mentioned,
Post by Les Hayduk
Validity is involved because we want a validly specified model.
If there is any difference between validity and reliability, and the model
requires/seeks/wants an error-variance-specification that renders the model
VALID, and what you enter is based on reliability, your use of reliability
has prevented your model from being validly specified. Les
Here is a point of a lot of possible confusion.
And a point of likely contention -- for some people.
Post by Daniel Yang
According to Gregory (2000), validity is defined to be that of a
psychological test.
And I view this style of definition as problematic. Is Gregory of
Gregory (2000) on SEMNET? If so please speak up and help clarify this so
Daniel does not get misled by either you or me.
Post by Daniel Yang
There are three different ways of accumulating validity evidence, namely,
content validity,
criterion-related validity, construct validity.
These are styles of "evidence" but they do not constitute
validity. They are the kinds of things that you can point to, and say "this
is evidence that seems consistent with a claim to having a valid model".
This is not the validity itself. These are merely styles of "evidence" --
things that can be offered as evidence. And these are not "exhaustive" --
they do not cover all the styles of evidence,. And they are not
"sufficient" -- because other evidence can result in a claim to
"invalidity" despite having satisfied one or more of these styles of
evidence regarding validity.
Post by Daniel Yang
Now, you seem to be talking about another thing: model-specification validity.
No this is not another thing. I am merely saying that to be valid
our model must match up with the worldly causal forces. This is pointing to
"getting our view of the world correct/proper" as the basis of validity.
This is not something different. The above versions of evidence seem like
evidence because they address/report-upon things that seem to be required
if we are to have "matched-up with the world".
Post by Daniel Yang
These are different things, I believe, at least different levels.
They are not really different, and they certainly are NOT levels.
They all are connected to the claim of properly modeling/representing the
world out there.
Post by Daniel Yang
I believe we must be careful to use and clearly define the term 'validity'
in our discussion, if we want the inferences based on discussion to be valid.
I am not sure definitions are the only issue Daniel. Are there any
definitions (clear, exact, precise words) that you could make up
for xxxxxx-validity that you would claim do NOT get at or speak to
validity despite the term "validity" in the entity defined
as "xxxxx-validity"? The clarity of the definitions is nice and helpful,
but the definitions themselves must swear-allegiance to properness
(world-matchingness) if they are to deserve/warrant the term "validity"
in "xxxxx-validity".
For a very clear and helpful discussion of this see Borsboom,
Mellenbergh and van Heerden (2004) "The concept of validity" Psychological
Review 111(4):1061-1071.
Post by Daniel Yang
As far as I know, (1-alpha) is a valid model specification -- the model can be estimated.
The fact that the model can be estimated is not telling you the
model is valid. If the model is estimated with (1-alpha) it is also likely
to be estimable with (1 - (1/2)alpha) and (1 - (1.3)alpha). These can't all
be "valid" -- even though they are all likely to result in estimable
models. So separate estimable, from valid. The serious part of this is
whether (1-alpha) corresponds to the world and hence deserves/warrants the
term "validity" as in "xxxxx-validity". The term validity is there (if we
temporarily forget that alpha is usually described as "reliability") -- but
is it warranted or backed up by a matching to the world. It is that
matching to the world and the worldly forces that grounds validity -- not
the appearance of the definition/term validity.
Post by Daniel Yang
So if I use well-validated questionnaires to collect the data for the indicators, I get valid data too.
No. You get data you anticipate will possess a bunch of
validity-matching features because it has gone through prior screening that
has provided a variety of kinds of evidence relevant to validity. The prior
evidence does not preclude some new and different evidence (here the exact-fit test is one of these kinds of evidence) from potentially reporting that the indicators are not valid despite the prior evidence. That is, just as failure of any one of the prior kinds of evidence would remove the indicators from the list of valid indicators, there are additional kinds of evidence that can remove the indicators from the list of valid indicators.
Post by Daniel Yang
I then specify the model by (1-alpha) style and I don't see why
the inferences made from this model are not appropriate,
It will not be "appropriate" if it does not match with the world.
Post by Daniel Yang
meaningful,
The model should be "understandable" even if it is wrong.
Post by Daniel Yang
and useful?
Daniel Yang
And if the model does not match with the world, using it will be
risky and point you in the direction of a lawyer's office if the "use"
results in harm to someone.
Les

===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730
Daniel Yang
2005-07-21 12:50:27 UTC
Permalink
Hi Wolfgang,

Thanks for the pointer to latent state-trait (LST) theory. I just read the two references you cited. I noticed a key concept: "the true scores ... do not characterize the person but the person in the situation" (Steyer et al., 1999, p. 395); that is, "a true score ... is the expectation (true mean) of the distribution of the Yik conditional on the person in the situation rather than the person" (pp. 393-394), where "the term 'situation' refers to the unobservable psychological conditions that might be relevant for the measurement of the construct considered" (p. 394), such as the hours of sleep (emotional situation) or the TV program watched (priming situation) the previous night.

But what if my interest lies precisely in the person in the situation? That would mean my measure, corrected for unreliability, is a perfectly valid indicator of the construct -- the latent state construct.

I see validity as an art of labeling: the LV is valid if its label is appropriate, meaningful, and useful. If researchers are interested in latent trait variables, they should not use latent state variables; otherwise, the result is invalid because of a label-construct mismatch. I don't see how validity can be readily achieved by something like (1-validity) unless appropriate, meaningful, and useful labels are assigned.

Daniel Yang
Victor Willson
2005-07-21 16:49:43 UTC
Permalink
To me the statement below regarding true scores (Steyer et al.) is inadequate. Going back to Lord and Novick, observed scores x were defined with respect to three subscripts: i = person, j = measure, and k = occasion. They then introduced an asterisk for any subscript to define the conditions, such as x(ij*), which is the observed score for person i on item j averaged (in expectation) across occasions. This might be used for a model in which stability of the measure across time is appropriate (trait stability). The true score for the observed scores then follows from the expectation of the observed score, but with respect to the subscripts as they are defined, and reliability follows from that.
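A compact rendering of the indexing Victor describes (reconstructed from the description above, so treat the exact symbols as illustrative):

```latex
x_{ijk}\ (\text{person } i,\ \text{measure } j,\ \text{occasion } k),
\qquad
x_{ij*} = \operatorname{E}_{k}\!\left[x_{ijk}\right],
\qquad
\rho_{xx'} = \frac{\operatorname{Var}(\tau)}{\operatorname{Var}(x)} ,
```

where the asterisk marks the subscript averaged (taken in expectation) over, tau is the true score defined by that expectation, and reliability is the ratio of true-score to observed-score variance.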
Post by Daniel Yang
Hi Wolfgang,
Thanks for the Latent-State-Trait (LST) theory. I just read the two
references you cited. I noticed a key concept: "the true scores ... do not
characterize the person but the person in the situation" (Steyer et al.,
1999, p.395), that is, "a true score ... is the expectation (true mean) of
the distribution of the Yik conditional on the person in the situation rather
than the person" (pp.393-394), where "the term 'situation' refers to the
unobservable psychological conditions that might be relevant for the
measurement of the construct considered" (p.394), such as the sleep hours
(emotional situation) or the TV program watched (priming situation) in the
previous night.
But what if my interest just lies in person in the situation? This means my
measure corrected for reliability is a perfectly valid indicator for the
construct, the latent state construct.
I see validity as an art of labeling: the LV is valid if its label is
appropriate, meaningful, and useful. If researchers are interested in latent
trait variables, they should not use latent state variables; otherwise, it's
invalid because of label-construct mismatch. I don't see validity can be
readily achieved by something like (1-validity) unless appropriate,
meaningful, and useful labels are assigned.
Daniel Yang
Daniel Yang
2005-07-22 09:06:18 UTC
Permalink
Hi Wolfgang,

You mentioned,
Above this, even if you maintain the position that reliable variance is a
reasonable estimate of true score variance, I would also agree with Scott
that Alpha may not be the best estimate of reliability, depending on what
kind of measure you use, so I do not think many researchers will be
comfortable with an absolute statement like "all path analyses should be set
up with error variances of single indicator variables set to (1-Alpha)".

Sorry, I just meant that a typical path analysis with only observed variables (i.e., X/Y), when applicable, could and should be set up with error variances set to (1 - reliability) times the observed variances in order to deal with relations among true scores.
I want to note that in your first posting in this thread you wrote: "I
understand the major difference between SEM and path analysis..." Many
researchers would say that this phrase is logically wrong. Structural
equation modeling is a general approach that encompasses path analysis as a
special case.

Sorry for that too. I just meant the major difference between observed- and
true-score analysis.

Daniel Yang
Daniel Yang
2005-07-22 14:40:21 UTC
Permalink
Hi Les,

Do you agree with Bollen's (1989) definitions of validity and reliability? I am curious. Bollen defined the validity of a measure x(i) of
ksi(j) to be "the magnitude of the direct structural relation between ksi(j)
and x(i)" (p.197), and the reliability of x(i) to be "the magnitude of the
direct relations that all variables (except delta's) have on x(i)" (p.221).
We need a common ground on which meaningful discussions can be built, that's
all.

Daniel Yang
Wolfgang Rauch
2005-07-25 15:27:48 UTC
Permalink
Hello Daniel,

thank you for the clarification and sorry for the delay.

[snip]
Post by Daniel Yang
Sorry, I just meant a typical path analysis with only observed variables
(i.e. X/Y), when applicable, could be and should be set up with error
variances set to (1-reliability) in order to deal with relations among true
scores.
[snip]
Post by Daniel Yang
Sorry for that too. I just meant the major difference between observed- and
true-score analysis.
In another posting you said that you may be only interested in the
person-in-situation value. What I think is still an issue (I might be
nit-picking here) is the term "true-score analysis". When talking about
"true scores", you should be clear about what the attribute "true" is
referring to. If you have a single indicator in one situation (and note
that I am not bound to the LST approach, there are reasonable
alternative definitions of what "true scores" are), you should be aware
that this one indicator gives you just one estimate of the true scores;
but it might be biased or even invalid. To illustrate: If you want to
measure construct X with questionnaire x, the individual values (answers
to the items) x_i might have a non-linear relation with X, or, to use a
more common example, X is multi-faceted and the questionnaire taps just
one of these facets. Now I have to be clear too: With this I do not mean
that using more than one indicator is always better or that with more
than one indicator one could automatically achieve a valid measurement.
But neither can you achieve "true score" analysis automatically by
changing parameter values in a SE-model. What you suggest might rather
be called "correcting for unreliability" or "correcting for
attenuation". But others on the list may have more to say on this issue.

Wolfgang
--
Dipl.-Psych. Wolfgang Rauch
Department of Psychology
Johann Wolfgang Goethe-Universität Frankfurt am Main
Psychological Research Methods and Evaluation
Tel. +49 (0) 69-798-22081
Fax +49 (0) 69-798-23847
***@psych.uni-frankfurt.de
http://www.uni-frankfurt.de/~rauchw
Daniel Yang
2005-07-26 00:45:18 UTC
Permalink
Dear Wolfgang,

Thanks for clarification too.

Yes, I agree that true score = reliable score = valid score + invalid score, where the valid score may be biased because of non-linearity (e.g., Olson's FACES II/III vs. FACES IV) or limitations in content validity. So the challenges are: (a) how to correct for invalidity after correcting for unreliability? (b) how to validly reflect reality in measurement (i.e., modeling valid latent variables)? Whether single or multiple indicators are used, I believe these are the common challenges, and a single-indicator LV only makes such challenges more salient.

Some additional answers to these challenges. For challenge (a), researchers can compare the unique validity variance (U_x(i)_ksi(j)) (Bollen, 1989, p. 200) and the squared multiple correlation coefficient for x(i), R-squared_x(i) (p. 221). For challenge (b), researchers can use more precise measurement. For example, Olson (2004) directly measures the extreme part in FACES IV.

Daniel Yang

----- Original Message -----
From: "Wolfgang Rauch" <***@PSYCH.UNI-FRANKFURT.DE>
To: <***@BAMA.UA.EDU>
Sent: Monday, July 25, 2005 11:27 PM
Subject: Re: Single-indicator latent variable
Post by Wolfgang Rauch
Hello Daniel,
thank you for the clarification and sorry for the delay.
[snip]
Post by Daniel Yang
Sorry, I just meant a typical path analysis with only observed variables (i.e. X/Y), when applicable, could be and should be set up with error variances set to (1-reliability) in order to deal with relations among true scores.
[snip]
Post by Daniel Yang
Sorry for that too. I just meant the major difference between observed- and true-score analysis.
Post by Wolfgang Rauch
In another posting you said that you may be only interested in the
person-in-situation value. What I think is still an issue (I might be
nit-picking here) is the term "true-score analysis". When talking about
"true scores", you should be clear about what the attribute "true" is
referring to. If you have a single indicator in one situation (and note
that I am not bound to the LST approach, there are reasonable
alternative definitions of what "true scores" are), you should be aware
that this one indicator gives you just one estimate of the true scores;
but it might be biased or even invalid. To illustrate: If you want to
measure construct X with questionnaire x, the individual values (answers
to the items) x_i might have a non-linear relation with X, or, to use a
more common example, X is multi-faceted and the questionnaire taps just
one of these facets. Now I have to be clear too: With this I do not mean
that using more than one indicator is always better or that with more
than one indicator one could automatically achieve a valid measurement.
But neither can you achieve "true score" analysis automatically by
changing parameter values in a SE-model. What you suggest might rather
be called "correcting for unreliability" or "correcting for
attenuation". But others on the list may have more to say on this issue.
Wolfgang
--
Dipl.-Psych. Wolfgang Rauch
Department of Psychology
Johann Wolfgang Goethe-Universität Frankfurt am Main
Psychological Research Methods and Evaluation
Tel. +49 (0) 69-798-22081
Fax +49 (0) 69-798-23847
http://www.uni-frankfurt.de/~rauchw
Wolfgang Rauch
2005-07-26 09:25:21 UTC
Permalink
I forgot to add the full reference in my previous post:

Borsboom, D. & Mellenbergh, G. J. (2002). True scores, latent variables,
and constructs: A comment on Schmidt and Hunter. Intelligence, 30, 505-514.
Wolfgang Rauch
2005-07-26 09:20:18 UTC
Permalink
Hi Daniel,

after re-reading my message from yesterday I realised that I myself had
gotten lost in terminology (and, maybe, translation) by confusing true
scores and validity. So, actually if you do a single indicator analysis
and set the error variance to (1 - estimate of reliability) you really
get a true score correlation. The point why I have been so insistent is
nicely put in a paper by Borsboom and Mellenbergh (2002): They point out
that too often true scores and construct scores get confused. True
scores rarely equal construct scores, and only under some conditions is
the true score correlation a reasonable estimate of construct
correlation. The concept of true scores stems from classical test theory
(CTT) which is not concerned with constructs and validity. But in most
instances researchers want to know about constructs and not about true
scores. Borsboom and Mellenbergh (2002) argue strongly for the use of
more sophisticated models: "However, it would be a step back, and not a
step forward, if [...] researchers [were persuaded] to use the
correction for attenuation, instead of putting effort into making tests
that can stand up to the demands of modern test theory and finding the
appropriate model for their constructs."

The point I want to stress is that one cannot magically invoke constructs or validity just by switching parameter values: a path model without latents is just the same as a path model with latents and the error variances of the single indicators set to zero. Setting error variances and loadings to different values changes the structural parameters in predictable ways, but one does not magically get constructs.
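The "predictable ways" can be seen in the simplest single-predictor case (generic notation; theta is the fixed error variance of the single indicator x):

```latex
\hat{\beta}_{\text{latent}} = \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x) - \theta},
\qquad
\theta = 0 \;\Rightarrow\; \hat{\beta}_{\text{latent}} = \hat{\beta}_{\text{observed}} .
```

Changing theta rescales the slope mechanically; nothing in that arithmetic certifies that the rescaled latent is the construct one had in mind.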

So, I absolutely agree with what you say below as challenge (b): Valid
measurement is not something that can be achieved by SEM alone.

Regards,
Wolfgang
Post by Daniel Yang
Yes, I agree that true score = reliable score = valid score + invalid score,
where the valid score may be biased because of non-linearity (e.g. Olson's
FACES II/III vs. FACES IV) or limitation in content validity. So the
challenges are (a) how to correct for invalidity after correcting for
unreliability? (b) how to validly reflect the reality in measurement (i.e.
modeling valid latent variables)? Whether single or multiple indicators, I
believe, these are the common challenges and single-indicator LV only makes
such challenges more salient.
Some additional answers to these challenges. For challenge (a), researchers can compare the unique validity variance (U_x(i)_ksi(j)) (Bollen, 1989, p. 200) and the squared multiple correlation coefficient for x(i), R-squared_x(i) (p. 221). For challenge (b), researchers can use more precise measurement. For example, Olson (2004) directly measures the extreme part in FACES IV.
Daniel Yang
--
Dipl.-Psych. Wolfgang Rauch
Department of Psychology
Johann Wolfgang Goethe-Universität Frankfurt am Main
Psychological Research Methods and Evaluation
Tel. +49 (0) 69-798-22081
Fax +49 (0) 69-798-23847
***@psych.uni-frankfurt.de
http://www.uni-frankfurt.de/~rauchw
Daniel Yang
2005-09-07 07:46:56 UTC
Permalink
Hi Les,

Glad to see you back. I am still highly curious about your answer to my
posting back on July 22, where I wrote:

Do you agree with Bollen's (1989) definitions of validity and reliability? I
am curious. Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197), and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221). We need
a common ground on which meaningful discussions can be built.

Best,
Daniel Yang

Les Hayduk
2005-09-08 21:36:27 UTC
Permalink
Daniel
I have not forgotten you. There are several messages from you, and
from other people (in my SEMNET backlog) that I will reply to as soon as
possible. I just can't do everything at once. This topic is important, but
I want to have time to do this justice when I turn to it. Please be patient.
Les
Post by Daniel Yang
Hi Les,
Glad to see you back. I am still highly curious about your answer to my
Do you agree with Bollen's (1989) definitions of validity and reliability? I
am curious. Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197), and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221). We need
a common ground on which meaningful discussions can be built.
Best,
Daniel Yang
===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730


Les Hayduk
2005-09-20 19:52:05 UTC
Permalink
Hi Daniel, Ken (Bollen) et al.

Here is a response to your posting of July 22 and re-posting of 9/7/2005
(snip to content)
Post by Daniel Yang
Do you agree with Bollen's (1989) definitions of validity and reliability?
I cringe at the thought of having to define validity, and at the
thought that people would actually attempt to diligently follow some
"definition" of validity. I recognize how some readers' thirsts for
definitions are foisted upon authors -- often by readers who think (and in my experience sometimes pretend) they "know nothing" without a definition (of validity), and who think they confidently know something (about validity) if they have memorized a definition. I have been forced, slowly but surely, to replace my understanding of the term validity with a sense of "proper model" -- as in proper, or world-matching, model specification. Validity cannot be defined without reference (sometimes implicit reference) to properness. The problem seems
to be that SEmodel properness can be so varied, and so unusual, as to defy
encapsulation in a single definition. A definition of validity that seems
to "work fine" in one context, can be seriously incompatible with another
(even fully properly specified) context. This has led me to focus on the
properness of model specification, as a more-general replacement for the
notion of validity. When I use the term validity, and I do use the term, I
routinely think that this is a short-hand reference to properness of an
item's specification, but it demands more than just the item -- it demands
properness of the latent or latents underlying the item, so that the
properness of the whole model sneaks in. You can't have properness if an
indicator is connected to a "slightly wrong" latent, and the latent is
likely to be "slightly wrong" if it is embedded in a problematic latent
model.
Post by Daniel Yang
I am curious. Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197), and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221).
We need a common ground on which meaningful discussions can be built.
Best,
Daniel Yang
We need a common ground, just not the above common ground. I will
copy the above statements by Ken Bollen and insert comments.
Post by Daniel Yang
Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197),
With single indicators the magnitude of the effect is set at 1.0
to scale the latent. The magnitude of this is not really reporting the
strength of an effect -- it is providing a scale to the latent.
The "directness" of this is problematic. I think Ken Bollen sensed
this, (see his statement regarding Chapters 2 and 3 a few lines below the
quote you provided from page 197). The claim to directness seems strange
when you realize that we know nothing about the "inner workings" of any
direct effect. We can not look "inside" a direct effect because a direct
effect is non-decomposable, it is not segmentable, it is not
take-apart-able. Now suppose we replace the direct effect with a properly
modeled "intervening variable" between the latent ksi and the indicator x.
We can now claim to know something about the previously non-decomposable entity. We can speak about structure (an indirect effect made up of two segments) that corresponds to the "original effect". Ken's definitional requirement of "directness" of the effect says x is no longer a valid
indicator of Ksi, despite the same magnitude of effect originating in Ksi
and leading to x, and despite our ability to now have something clear and
definite to say about the "formerly direct" effect. We now know how the
effect functions because we have located an intervening variable that shows
us something about the connection between Ksi and x, so we have more
confident knowledge of Ksi's connection to x, yet Ken's inclusion of
"directness" in his definition says we no longer have any validity since we
have lost the "directness" required by his definition! Well I do not think
we have lost validity -- I think we have more comfortable and confident
understanding with the indirect effect in the model. We have a more
detailed understanding of what is happening in the world that connects Ksi
to x, and this is measurement advancement/progress -- not a loss.
Post by Daniel Yang
and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221).
See "direct" again! Do indirect effects not also contribute to reliability? And do notice the English: "the magnitude", singular, and "all the variables", plural.
The switch to Rsquare (page 221) is supposed to somehow cover
this, but it can't. Rsquare is dependent on the actual variances of the
variables, and not merely on "effects". Look at the word "direct" in the
definition -- the only thing that is direct here are the EFFECTs running
from the Ksi's to x. If we move to Rsquare we include something else -- the
VARIANCES of the ksi's, not just their effects, and variance is NOT in
Ken's definition.
Now consider covariances between the Ksi variables as sources of
variance in the indicator. Once there are two or more causes of something
(in this case x) the coordinations/correlations/covariances between those
causes contribute VARIANCE into x (the effect). (If you do not understand
this, see my 1987 book page 20 equation 1.28, or my 1996 book page xvi,
under the figure/model in the second column.) Ken's term "direct" precludes
the variance in indicator x that arises from
coordination/covariance/correlation between the causative Ksi latents.
And notice that this definition makes reliability dependent on the
specific model specification. Consider "except delta's". Delta is the error
variable, but the error variable is merely what is NOT included in the
current model. If I add a variable in my model that was previously subsumed
within the error (delta) the new error variable is different (parallel to
how a regression error keeps changing as you add predictors). This makes
Ken's definition of reliability dependent on the specific "other variables"
included in the model -- not on just Ksi and x, but on whatever other
causes of x are included in the model versus their being dumped
collectively into the error variable delta. Are you happy with a sense of
reliability that changes depending on the inclusion/exclusion of specific
other variables in the model? That is what Ken's definition requires, but I
am not sure this is conducive to furthering "meaningful discussion".
The switch between "effects" (directedness) and proportions of
variance is a switch between a model's component parts and the IMPLICATIONS
of the model containing those parts. I view understanding the nature of the
implications, and the functioning of the implication structure, as being
much more fundamental than the definitions. If you understand the
implication structure, you will see limitations on, and qualification of,
the definitions. But the reverse is different. Having memorized the
definitions will not lead to understanding the implication structure. One
of the key differences between my 1987 book and Ken's 1989 book is that I
spent several pages (106-116) and considerable effort trying to FORCE an
understanding of the relevant implication structure upon my readers. Ken
provides much the same math (see his pages 323-325) but in his discussion I
do not hear the "urgency" or "import" that is warranted by the truly
fundamental place this "proof" occupies in a variety of discussions --
including discussions of measurement quality. The math is there, but I get
the impression that the only people who will recognize the
significance/centrality of this are those who already knew what they were
looking for. But enough of this for now. What do you think Daniel?
Les

===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton,
Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730

