Hi Daniel, Ken (Bollen) et al.
Here is a response to your posting of July 22 and re-posting of 9/7/2005
(snip to content)
Post by Daniel Yang
Do you agree with Bollen's (1989) definitions of validity and reliability?
I cringe at the thought of having to define validity, and at the
thought that people would actually attempt to diligently follow some
"definition" of validity. I recognize how some readers' thirst for
definitions gets foisted upon authors -- often by readers who think (and in
my experience sometimes pretend) they "know nothing" without a definition
(of validity), and who think they confidently know something (about
validity) if they have memorized a definition.
I have slowly but surely been forced to replace my understanding of
the term validity with a sense of "proper model" -- as in proper, or
world-matching, model specification. Validity cannot be defined without
reference (sometimes implicit reference) to properness. The problem seems
to be that SE model properness can be so varied, and so unusual, as to defy
encapsulation in a single definition. A definition of validity that seems
to "work fine" in one context can be seriously incompatible with another
(even fully properly specified) context. This has led me to focus on the
properness of model specification as a more general replacement for the
notion of validity.
When I use the term validity, and I do use the term, I routinely
treat it as a short-hand reference to the properness of an item's
specification, but it demands more than just the item -- it demands
properness of the latent or latents underlying the item, so that the
properness of the whole model sneaks in. You can't have properness if an
indicator is connected to a "slightly wrong" latent, and the latent is
likely to be "slightly wrong" if it is embedded in a problematic latent
model.
Post by Daniel Yang
I am curious. Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197), and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221).
We need a common ground on which meaningful discussions can be built.
Best,
Daniel Yang
We need a common ground, just not the above common ground. I will
copy the above statements by Ken Bollen and insert comments.
Post by Daniel Yang
Bollen defined the validity of a measure x(i) of ksi(j) to be
"the magnitude of the direct structural relation between ksi(j) and x(i)"
(p.197),
With single indicators the magnitude of the effect is set at 1.0
to scale the latent. That fixed value is not really reporting the
strength of an effect -- it is providing a scale for the latent.
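In equation form (a minimal sketch in generic LISREL-style notation,
not a quotation of Ken's page 197):

   x = 1.0 \xi + \delta ,   Var(x) = (1.0)^2 Var(\xi) + Var(\delta)

The fixed 1.0 hands the indicator's metric to the latent. It is a scaling
constraint, not an estimated strength of effect.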
The "directness" of this is problematic. I think Ken Bollen sensed
this, (see his statement regarding Chapters 2 and 3 a few lines below the
quote you provided from page 197). The claim to directness seems strange
when you realize that we know nothing about the "inner workings" of any
direct effect. We can not look "inside" a direct effect because a direct
effect is non-decomposable, it is not segmentable, it is not
take-apart-able. Now suppose we replace the direct effect with a properly
modeled "intervening variable" between the latent ksi and the indicator x.
We know can claim to know something about the previously non-decomposable
entity. We can speak about structure (an indirect effect made up of two
segments) that corresponds to the "original effect". Ken's defintional
requirement of "directness" of the effect says x is no longer a valid
indicator of Ksi, despite the same magnitude of effect originating in Ksi
and leading to x, and despite our ability to now have something clear and
definite to say about the "formerly direct" effect. We now know how the
effect functions because we have located an intervening variable that shows
us something about the connection between Ksi and x, so we have more
confident knowledge of Ksi's connection to x, yet Ken's inclusion of
"directness" in his definition says we no longer have any validity since we
have lost the "directness" required by his definition! Well I do not think
we have lost validity -- I think we have more comfortable and confident
understanding with the indirect effect in the model. We have a more
detailed understanding of what is happening in the world that connects Ksi
to x, and this is measurement advancement/progress -- not a loss.
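In equation form (a sketch using made-up labels a, b, and an eta for the
hypothetical intervening variable -- none of this is Ken's notation):

   Direct:    x = \lambda \xi + \delta
   Indirect:  \eta = a \xi + \zeta ,   x = b \eta + \delta

With \lambda = ab the magnitude of effect originating in ksi and arriving
at x is identical in the two specifications, yet only the first is
"direct", so only the first confers "validity" under the definition.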
Post by Daniel Yang
and the reliability of x(i) to be "the magnitude of the direct
relations that all variables (except delta's) have on x(i)" (p.221).
See "direct" again! Do indirect effects not also contribute to
reliability? And do notice the English: "the magnitude" singular, and "all
variables" plural.
The switch to Rsquare (page 221) is supposed to somehow cover
this, but it can't. Rsquare is dependent on the actual variances of the
variables, and not merely on "effects". Look at the word "direct" in the
definition -- the only things that are direct here are the EFFECTS running
from the ksi's to x. If we move to Rsquare we include something else -- the
VARIANCES of the ksi's, not just their effects, and variance is NOT in
Ken's definition.
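With a single ksi the dependence is easy to display (again a sketch in
generic notation):

   Rsquare of x = \lambda^2 Var(\xi) / ( \lambda^2 Var(\xi) + Var(\delta) )

Hold the direct effect \lambda fixed and alter Var(\xi) or Var(\delta):
the Rsquare moves, so the Rsquare cannot be a function of the "direct
relations" alone.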
Now consider covariances between the ksi variables as sources of
variance in the indicator. Once there are two or more causes of something
(in this case x) the coordinations/correlations/covariances between those
causes contribute VARIANCE to x (the effect). (If you do not understand
this, see my 1987 book page 20 equation 1.28, or my 1996 book page xvi,
under the figure/model in the second column.) Ken's term "direct" excludes
the variance in indicator x that arises from
coordination/covariance/correlation between the causative ksi latents.
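For two correlated causes the decomposition runs as follows (a sketch
paralleling the equations cited above, not the books' exact symbols):

   x = \lambda_1 \xi_1 + \lambda_2 \xi_2 + \delta
   Var(x) = \lambda_1^2 Var(\xi_1) + \lambda_2^2 Var(\xi_2)
            + 2 \lambda_1 \lambda_2 Cov(\xi_1,\xi_2) + Var(\delta)

The 2 \lambda_1 \lambda_2 Cov(\xi_1,\xi_2) term is genuine variance in x,
yet it belongs to no single direct relation.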
And notice that this definition makes reliability dependent on the
specific model specification. Consider "except delta's". Delta is the error
variable, but the error variable is merely what is NOT included in the
current model. If I add to my model a variable that was previously subsumed
within the error (delta), the new error variable is different (parallel to
how a regression error keeps changing as you add predictors). This makes
Ken's definition of reliability dependent on the specific "other variables"
included in the model -- not on just Ksi and x, but on whatever other
causes of x are included in the model versus their being dumped
collectively into the error variable delta. Are you happy with a sense of
reliability that changes depending on the inclusion/exclusion of specific
other variables in the model? That is what Ken's definition requires, but I
am not sure this is conducive to furthering "meaningful discussion".
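A sketch of that dependence (with a hypothetical extra cause z of x,
taken as uncorrelated with ksi so the smaller model is not misspecified):

   Before:  x = \lambda \xi + \delta          (z hidden inside delta)
            Rsquare = \lambda^2 Var(\xi) / Var(x)
   After:   x = \lambda \xi + \gamma z + \delta'
            Rsquare = ( \lambda^2 Var(\xi) + \gamma^2 Var(z) ) / Var(x)

Nothing about x or the world has changed -- only the partitioning between
modeled causes and delta -- yet the "reliability" of x changed.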
The switch between "effects" (directness) and proportions of
variance is a switch between a model's component parts and the IMPLICATIONS
of the model containing those parts. I view understanding the nature of the
implications, and the functioning of the implication structure, as being
much more fundamental than the definitions. If you understand the
implication structure, you will see limitations on, and qualification of,
the definitions. But the reverse does not hold: having memorized the
definitions will not lead to understanding the implication structure. One
of the key differences between my 1987 book and Ken's 1989 book is that I
spent several pages (106-116) and considerable effort trying to FORCE an
understanding of the relevant implication structure upon my readers. Ken
provides much the same math (see his pages 323-325) but in his discussion I
do not hear the "urgency" or "import" that is warranted by the truly
fundamental place this "proof" occupies in a variety of discussions --
including discussions of measurement quality. The math is there, but I get
the impression that the only people who will recognize the
significance/centrality of this are those who already knew what they were
looking for. But enough of this for now. What do you think, Daniel?
Les
===============================
Les Hayduk; Department of Sociology; University of Alberta; Edmonton, Alberta; Canada T6G 2H4
email: ***@ualberta.ca   fax: 780-492-7196   phone: 780-492-2730