Discussion:
Abduction, Deduction, Induction
Stanley Mulaik
2006-10-11 02:49:00 UTC
Permalink
SEM and abduction reasoning
Hello Stan
As I see it, SEM takes place within a hypothetico-deductive
framework. SEM goodness-of-fit is at best a minimum test for
empirical adequacy. It makes no explicit use of abductive
reasoning because it does not appeal to explanatory
criteria. However, this is not to say that one can't
supplement hypothetico-deductive theory appraisal by
appealing to such complementary criteria -- in fact
scientists often seem to do this.
I think SEM/hypothetico-deductive testing sharply contrasts
with an abductive perspective on theory evaluation. The
latter makes use of what philosophers of science call
'inference to the best explanation'. Paul Thagard's
(1992) theory of explanatory coherence is an effective
abductive method for the comparative appraisal of theories
in respect of their explanatory goodness. This method
appeals to explanatory criteria only; explanatory breadth,
not predictive success, is its criterion of empirical
adequacy. In addition, the theory of explanatory coherence
adopts a neural network conception of goodness-of-fit --
one that is understood as the overall coherence, or
harmony, of a set of elements.
Even though abductive reasoning does not formally connect to
SEM, I think it is plausible to suggest that SE modellers do
reason abductively when they go about their research (for
example they engage in explanatory reasoning when they
figure out the links between manifest and latent variables
in their measurement models, and similarly when they make
latent variable adjustments to their models as a result of
misspecification).
Brian
I agree with you, Brian. For others, abduction is only the first
stage in C. S. Peirce’s three-stage cycle of scientific inquiry.

The three stages are 1) abduction, 2) deduction, 3) induction.

“Abduction” is reasoning from present and past information
about some phenomenon to the best hypothesis one can think
of with which to explain the phenomenon. It is a hypothesis
formulating stage in which the scientist uses his/her
imagination or some other means of organizing past and even
current data about the phenomenon to formulate the
best-fitting and simplest model one can think of for the
current data. This may involve putting forth a number of
different hypotheses that one then compares for fit and
simplicity to the past and present data. Or one may proceed
to adjust and modify an initial model until one gets a best
fitting and simplest model. (Doing this is still
formulating and rejecting models/hypotheses because they
don’t fit the past and current data or are not the
simplest). The simplest and best fitting hypothesis or
model is chosen for further work as a viable candidate for
consideration in the later stages. No decision is made as
to whether the hypothesis is correct at this stage. One can
hardly do so, since by imagination one can formulate an
infinity of models and hypotheses to explain a given set of
data, given the freedom to adjust and modify the model until
one gets the best fit. At this stage the final candidate
hypothesis is a subjective product of the researcher and
not yet considered an objective explanation.

“Deduction” involves two stages. The first is a
constructive stage in which we seek to make the hypothesis
logically consistent and clear. It should also be logically
consistent with what we already know generally about the
world. Then we enter a second deductive stage in which we
seek to derive predictions (e.g. what kind of data should
we obtain with which to test our hypothesis and what should
we expect to observe?).

“Induction” is the third stage, in which hypotheses are
tested against the new data we have collected according to
what we derived in the deductive phase to be the kind of data
with which the hypothesis would be tested. It is absolutely
essential that one distinguish this data from the data used
in the abductive phase. It must not be data used in the
hypothesis formulating stage of abduction. It must be data
in which it would be logically possible to disconfirm the
hypothesis. Any data set used earlier to adjust and modify
hypotheses so that they would fit that data, would necessarily
fit that data, and it would not be logically possible then
to disconfirm the hypothesis against that data. It would not
then be a test, since with a test there must be a logical
possibility of disconfirming the hypothesis.

SEM in principle works at the inductive stage. It presumes
that one has gone through an abductive and deductive stage
in formulating one's model and selecting variables for study.

Peirce would consider a statistic like chi-square to be
an appropriate statistic for testing the hypothesis.
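[The chi-square statistic mentioned here can be made concrete. In covariance-structure modelling the test statistic is (N - 1) times the minimized maximum-likelihood discrepancy between the sample covariance matrix S and the model-implied matrix Sigma. A minimal sketch in Python -- the matrices below are invented for illustration, not taken from any study in this thread:]

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """ML fit function: F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p."""
    p = S.shape[0]
    Sigma_inv = np.linalg.inv(Sigma)
    return (np.log(np.linalg.det(Sigma)) + np.trace(S @ Sigma_inv)
            - np.log(np.linalg.det(S)) - p)

def chi_square(S, Sigma, N):
    """Likelihood-ratio test statistic T = (N - 1) * F_ML.
    Under a correctly specified model, T is asymptotically chi-square
    distributed with df = p(p+1)/2 - q, q being the free parameters."""
    return (N - 1) * ml_discrepancy(S, Sigma)

# Illustrative 2x2 sample covariance matrix.
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# A model that reproduces S exactly has essentially zero discrepancy.
print(abs(chi_square(S, S, N=200)) < 1e-8)   # True

# A misspecified Sigma (zero covariance) yields a positive statistic.
print(chi_square(S, np.eye(2), N=200) > 0)   # True
```

[The point of the sketch: the statistic grows with both the misfit F and the sample size N, which is why a "small" discrepancy can still be statistically significant at large N.]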

Now, I hinted at the outset that Peirce’s three-stage
cycle is continuously employed in on-going inquiry.

If the hypothesis is confirmed by predictions fitting the
observed data selected for testing, then the confirmed
hypothesis may be combined with other confirmed hypotheses
in formulating new hypotheses when confronted with new
phenomena. Going forth with a new cycle will put again
all preceding “confirmed” hypotheses--as well as any new
hypothesis formulated to combine the previous confirmed
hypotheses into a new hypothesis--to the test again.

If the hypothesis is disconfirmed, then one enters again
a new cycle, beginning with an abductive phase, and so forth.

A word about indices of approximation. They should not
be confused with confirmation of the hypothesis as would
occur with a non-significant chi-square. The fact that
one knows he/she has only a good approximation implies
that he/she does not have an exact fit to the
data. That’s what “only an approximation” means. That
simply is a piece of information that may be taken into
a new abductive phase of a new cycle of inquiry. It
provisionally suggests that maybe the current hypothesis
can be a viable candidate, given new modifications and
adjustments, based on attempts to understand the nature
of the lack of fit. But one must also be aware that perhaps
an entirely different kind of model may be needed. [I am
thinking of a simplex model versus a common factor analysis
model as a kind of case where the common factor model could
be a good approximation to data generated by a simplex,
but one must consider a different kind of model (simplex).
But whether it would be reasonable to do so would depend on
whether the additional feature of an ordering among the
variables in time or space or in nested composition should
be considered--since the common factor model takes no
cognizance of order among variables].

We also should consider again simplicity. Any alternative
model should not only fit, but fit with fewer estimated
parameters (simplicity), or to put it another way, with
more degrees of freedom than the current model.
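[The parsimony criterion Stan invokes reduces to a simple count: with p observed variables there are p(p+1)/2 distinct variances and covariances, and a model estimating q free parameters has p(p+1)/2 - q degrees of freedom. A small illustrative calculation, with hypothetical parameter counts:]

```python
def sem_df(p, q):
    """SEM degrees of freedom: distinct moments minus free parameters."""
    return p * (p + 1) // 2 - q

# Hypothetical example: 6 indicators, one-factor model estimating
# 6 loadings + 6 error variances = 12 free parameters (factor
# variance fixed to 1 for identification): 21 moments - 12 = 9 df.
print(sem_df(6, 12))   # 9

# A rival model freeing two extra parameters fits with fewer df:
print(sem_df(6, 14))   # 7
```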

A nice critical essay on Peirce's three-stage method
of scientific inquiry is given by Albert Atkin at

http://www.sinica.edu.tw/ioe/chinese/r2711/oldfiles/911109/paper/taiwan/2605.doc





Stan Mulaik

--------------------------------------------------------------
To unsubscribe from SEMNET, send email to ***@bama.ua.edu
with the body of the message as: SIGNOFF SEMNET
Search the archives at http://bama.ua.edu/archives/semnet.html
Les Hayduk
2006-10-12 01:25:41 UTC
Permalink
Hi Stan, Brian, et al.

(Brian had said)
Post by Stanley Mulaik
Hello Stan
As I see it, SEM takes place within a hypothetico-deductive
framework. SEM goodness-of-fit is at best a minimum test for
empirical adequacy.
Brian, what better model-level test is there? And if there is no
better test, surely we should be especially attentive to the results
reported by this test -- lest SEM be seen as being methodologically
deficient for not even doing what minimum testing is possible!
Post by Stanley Mulaik
It makes no explicit use of abductive
reasoning because it does not appeal to explanatory
criteria. However, this is not to say that one can't
supplement hypothetico-deductive theory appraisal by
appealing to such complementary criteria -- in fact
scientists often seem to do this.
I think SEM/hypothetico-deductive testing sharply contrasts
with an abductive perspective on theory evaluation. The
latter makes use of what philosophers of science call
'inference to the best explanation'. Paul Thagard's
(1992) theory of explanatory coherence is an effective
abductive method for the comparative appraisal of theories
in respect of their explanatory goodness. This method
appeals to explanatory criteria only; explanatory breadth,
not predictive success, is its criterion of empirical
adequacy. In addition, the theory of explanatory coherence
adopts a neural network conception of goodness-of-fit --
one that is understood as the overall coherence, or
harmony, of a set of elements.
Even though abductive reasoning does not formally connect to
SEM, I think it is plausible to suggest that SE modellers do
reason abductively when they go about their research (for
example they engage in explanatory reasoning when they
figure out the links between manifest and latent variables
in their measurement models, and similarly when they make
latent variable adjustments to their models as a result of
misspecification).
Brian
(Stan replied)
Post by Stanley Mulaik
I agree with you, Brian. For others, abduction is only the first
stage in C. S. Peirce's three-stage cycle of scientific inquiry.
If these stages are supposed to describe what we as
researchers do, they are not constraints on what we as researchers
do. Stan, is Peirce claiming researchers (SEM researchers in
particular) SHOULD follow/be-constrained-by these three stages, or
merely that these terms seem to be convenient as descriptors?
Post by Stanley Mulaik
The three stages are 1) abduction, 2) deduction, 3) induction.
If Peirce really means this is a "cycle" Stan should have
been able to begin with:
1) deduction, 2) induction, and 3) abduction. Do notice that if this
really is a cycle (with Stan's 1,2,3 version) and one gets to
"deduction" (2) one should NOT be able to get back to "abduction"
without going through "induction". Well, what do you think, would a
step of "induction" be required? Do you think this really is a
"cycle"? I don't.
Post by Stanley Mulaik
"Abduction" is reasoning from ***present and past information***
about some phenomenon to the best hypothesis one can think
of with which to explain the phenomenon.
Stan, is a model's significant failure to fit with the data
information (*** emphasis by Les) that a SEM researcher must respect
and investigate diagnostically? That is, would it be methodologically
adequate to either ignore a model's failure to fit, or to proceed without
gathering as much relevant information as possible about the possible
reasons for significant ill fit? I think it would be deficient for
anyone doing SEM to try to overlook or hide significant ill fit, or
to fail to gather whatever diagnostic information they can about the
potential sources of ill fit. Surely "abduction" would not condone
overlooking significant information, or being methodologically
deficient by failing to diagnostically investigate significant ill
fit! That is, is abduction an excuse for doing only a poor job of
considering the information, or does abduction require a careful and
attentive seeking of "information"?
Post by Stanley Mulaik
It is a hypothesis
The singular on "hypothesis" is problematic in the context
of SEM -- unless Stan thinks that a SEmodel constitutes a
singularity. The model fit provides a test, but there are multiple
things that can result in test-failure (not one thing) and that is
part of the complexity of model testing. And do notice that the
model's coefficients have tests attached to them -- but Stan's
singularity on "hypothesis" does not incline one to think of SEmodels
as having multiple kinds of testing connected to them.
Post by Stanley Mulaik
formulating stage in which the scientist uses his/her
imagination or some other means of organizing past and even
current data about the phenomenon to formulate the
best-fitting and simplest model one can think of for the current data.
Does this claim that the model OUTPUT that gives you
estimates/test-outcome/diagnostics is BEYOND the abductive "stage"
Stan? Your sentence can be heard as merely forming a model as a
hypothesis (a complex of hypotheses) -- all one gets to is the
hypothesis. But if you stress "current data" that says the
researcher must also respect the current output. But then what does
it mean to "formulate the best-fitting and simplest model". It does
NOT mean SELECT from the available models -- there is one model
output. "Formulate" does not mean "select". So it seems you are
saying abduction is just a way to say "I will reformulate the model
because it failed" (presumably you would not re-formulate the model
if it fit).
And no, a SEM researcher does NOT have to artificially
pretend there is one model they will proceed with. Have you not heard
the repeated recommendations to try to develop more than one SEmodel?
There are plenty of statements in the SEM literature recommending we
go into our next bit of SEM research with more than a single model in
mind. It is a trap and a "dead end" to artificially narrow down the
number of models that are investigated and "thought about" as one does SEM.
Post by Stanley Mulaik
This may involve putting forth a number of
different hypotheses, that one then compares for fit and
simplicity to the past and present data.
The idea of multiple models is fine, but it is problematic to
suggest we "compare for fit" (implicitly covariance fit) in the
context of SEM. The problem of comparing non-nested models should be
obvious, but this also contains Mulaik's Methodological Mistake --
yet again. It is problematic to apply degree of fit as a criterion
for retention of a model because the degree of fit cannot be trusted
to correspond to the degree of properness of the model's causal
specification. I would have thought that you understood my challenge
to your Scenario Stan, where you built in a small degree of
covariance ill fit, and I illustrated how two wrongly-directed
effects, and one missed effect could disrupt all the latent-level
claims made by your model! I illustrated how it was that small
covariance ill fit -- even in the presence of your scenario with its
many degrees of freedom, and near replication, and substantial N's,
and near-borderline p -- could still have a small degree of
covariance ill fit signal IMPORTANT model misspecifications.
SEMNETers, why do you think Stan does not recognize that
"comparing fit" (especially among significantly ill-fit models) is
not a reasonable basis for comparing SEmodels? Stan keeps making
Mulaik's Methodological Mistake, but you should be asking yourself
why he keeps doing this. Here he is merely trying to slip this into
the SEM world via supposedly "merely describing abduction" -- without
the slightest hint that his presentation of abduction is seriously
problematic if it is taken as applying to SEM.
Post by Stanley Mulaik
Or one may proceed
to adjust and modify an initial model until one gets a best fitting
and simplest model.
Well SEMNETers, were you listening to the difference between how Stan
proceeded with modification indices and how I proceeded with
modification indices? These were seriously different, and I still
think Stan's way of proceeding was methodologically deficient.
Post by Stanley Mulaik
(Doing this is still formulating and rejecting models/hypotheses because they
don't fit the past and current data or are not the simplest).
Stan, do you mean "don't fit" in the sense that you TESTED
for fit and found it did not hold? But your "or" is seriously
problematic even if we hear this as chi-square model fit testing.
Suppose the model FITS (and you can find nothing wrong with the
model) but it is not the simplest! Why would you "reject" the model?
Your "or" says simplicity alone is sufficient to reject -- and that
is simply not reasonable in the context of SEM.
Post by Stanley Mulaik
The simplest and best fitting hypothesis or
model is chosen for further work as a viable candidate for
consideration in the later stages.
Nonsense. If the world is complex, we ought to find that our
SEmodels are equally complex if they are to match up with, and tell
us about, that complex world. The idea of simplest is NOT an excuse
for permitting model misspecification. The viable candidate models
need not be simplest.
And do notice the term "later" -- my renumbering of the
"cycle above" makes abduction the LATER stage. So Stan is implicitly
backing away from the "cycle" idea as he presents something as
"later". Everything, and even the same thing, is "later" if there is
cycling through the entities. "Later" makes sense only in the absence
of cycling.
And do notice that a key part of this is the claim that one
is "chosen". Well suppose a researcher only has ONE model -- then
abduction would seem to be inapplicable because no "choice" would be
possible. But then again, that might be exactly what abduction is
supposed to do for Stan -- permit him to retain the one model that he
has rather than doing/considering a major revamping of the whole model.
Post by Stanley Mulaik
No decision is made as
to whether the hypothesis is correct at this stage.
Why is the decision not made at this stage if the model is
significantly inconsistent with the data Stan? What keeps the
researcher from potentially deciding that their current model is
wrong once they have evidence the model is inconsistent with the
data? SEMNETers, this is simply abduction-talk hampering our
literature by pretending that it is OK to keep hanging onto the same
old model, and to keep overlooking or looking past significant model ill
fit. Stan can't seem to think that it is possible to say/think and
proceed after saying: We have no currently viable model. He can't
permit himself to think that all the models we have (perhaps the only
model we have) are/is not viable.
Post by Stanley Mulaik
One can hardly do so, since by imagination one can formulate an
infinity of models and hypotheses to explain a given set of
data, given the freedom to adjust and modify the model until
one gets the best fit.
But if the "best fit" is still significantly poor fit, one
does have evidence speaking against the model. SEMNETers, do you hear
Stan's claim as being deceptive? It is deviously trying to tell you
to overlook evidence you should be investigating and honestly
reporting -- not overlooking or discounting the evidence. The model
should be modified NOT merely, or focusedly, just to get fit. The
task is to seek a model that is properly specified. The task is not
to explain the covariances, but to locate the structure of the world
that informs us about where the covariances come from. The
covariances end up being explained but explained by a model whose
structure respects the world's structure.
Post by Stanley Mulaik
At this stage the final candidate
hypothesis is a subjective product of the researcher and
not yet considered an objective explanation.
Of course a model that is significantly inconsistent with
the evidence will have a hard time claiming to be "objective"! No
surprise there.
Post by Stanley Mulaik
"Deduction" involves two stages. The first is a
constructive stage in which we seek to make the hypothesis
logically consistent and clear. It should also be logically
consistent with what we already know generally about the
world. Then we enter a second deductive stage in which we
seek to derive predictions (e.g. what kind of data should
we obtain with which to ***test*** our hypothesis and what should
we expect to observe?).
I think this is a close parallel to 1) setting up a model
and 2) noticing that a model (with its estimates) implies a
covariance matrix among the indicators (usually called sigma). Sigma
is a derived model prediction.
Stan, what test should SEMNETers use to test (***emphasis by
Les above) the model's implied/predicted sigma?
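[For concreteness, the implied sigma Les refers to is computed mechanically from the model's parameter matrices; for a confirmatory factor model it is Sigma = Lambda Phi Lambda' + Theta. A minimal sketch with invented values:]

```python
import numpy as np

# Hypothetical one-factor model with three indicators.
Lambda = np.array([[0.8], [0.7], [0.6]])   # factor loadings
Phi = np.array([[1.0]])                    # factor variance
Theta = np.diag([0.36, 0.51, 0.64])        # error variances

# Model-implied covariance matrix of the indicators.
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma, 2))
# Unit diagonal; implied covariances 0.56, 0.48, 0.42 are the
# products of the corresponding loadings.
```

[It is this derived Sigma, compared against the sample S, that the chi-square test evaluates.]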
Post by Stanley Mulaik
"Induction" is the third stage, in which hypotheses are
tested against the new data we have collected according to
what we derived in the deductive phase to be the kind of data
with which the hypothesis would be tested. It is absolutely
essential that one distinguish this data from the data used
in the abductive phase. It must not be data used in the
hypothesis formulating stage of abduction. It must be data
in which it would be logically possible to disconfirm the hypothesis.
SEMNETers, do you recall why "replication" is not a
convincing way to check out a SEmodel? The problem is that SEM
replications do NOT provide sufficient possibility to DISCONFIRM the
model (because of prior capitalizing on NON-chance covariance). Even
ridiculously formed models (e.g., add random effects until the model is
saturated, then drop 1/2 the coefficients whose estimates are very
close to zero) will result in a model that tends to replicate on
"new" but parallel data. In SEM, replication (replicate data) does
NOT provide a clear "logical possibility to disconfirm the
hypothesis". Replication has some minimal possibility to disconfirm,
but it is very limited and very sensitive to prior
real-covariance-prompted model modifications.
Post by Stanley Mulaik
Any data set used earlier to adjust and modify
hypotheses so that they would fit that data, would necessarily
fit that data, and it would not be logically possible then
to disconfirm the hypothesis against that data.
And "parallel" data sets are also severely hampered in their
ability to disconfirm SEmodels following "adjustments" and
data-prompted modifications.
Post by Stanley Mulaik
It would not then be a test, since with a test there must be a logical
possibility of disconfirming the hypothesis.
And that is why "replication" does not provide an adequate
"test" of a SEmodel -- there are clear and logical reasons why prior
data-prompted modifications severely reduce the "possibility of
disconfirming".
Post by Stanley Mulaik
SEM in principle works at the inductive stage. It presumes
that one has gone through an abductive and deductive stage
in formulating your model and selecting variables for study.
If there really is a "cycle" (recall above) why does this
presume we have not also gone through "inductive" previously?
Post by Stanley Mulaik
Peirce would consider a statistic like chi-square to be
an appropriate statistic for testing the hypothesis.
Would Stan consider chi-square to be an appropriate statistic for
testing SEmodels? Les would claim the chi-square test (possibly
normality adjusted) is the best available test. If Les and Peirce
consider chi-square appropriate, do you think Stan thinks it is not
the appropriate test?
But do consider Stan's word "like" in "like chi-square".
This is devious because it implies Stan can still be considering (or
attributing to Peirce) a style of testing that is quite at odds with
clean testing. It permits a style of testing that says "overlook
thiiississs much ill fit, and then test with a test that is LIKE
chi-square to see if the model has even worse fit than this". It is
called the RMSEA test with thiiissss big a value in the not-so-null
hypothesis. The RMSEA test is "like" the chi-square test, but it is
seriously deficient as a model test.
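[For reference, the RMSEA point estimate is built from the same chi-square and its degrees of freedom, which is why it is "like" the chi-square test; the not-so-null test then asks whether RMSEA exceeds some tolerated value such as .05. A sketch of the point estimate using the usual Steiger-Lind formula -- the numbers below are invented:]

```python
import math

def rmsea(chi2, df, N):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (N - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (N - 1)))

# Invented numbers: a chi-square that is significantly large for its
# df (180 on 120 df), yet yields a "close fit" RMSEA -- the situation
# Les objects to treating as acceptable.
print(round(rmsea(chi2=180.0, df=120, N=400), 4))   # 0.0354
```

[The sketch shows how a model can fail the chi-square test while still posting an RMSEA below conventional cutoffs, which is the crux of the disagreement here.]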
Post by Stanley Mulaik
Now, I hinted at the outset that Peirce's three-stage
cycle is continuously employed in on-going inquiry.
Enough comments on cycling above...
Post by Stanley Mulaik
If the hypothesis is confirmed by predictions fitting the
observed data selected for testing, then the confirmed
hypothesis may be combined with other confirmed hypotheses
in formulating new hypotheses when confronted with new
phenomena.
Now suppose the test says the hypothesis is disconfirmed --
the model fails? Should the SEM researcher pay careful attention to
this? Combine the disconfirmed and some confirmed hypotheses? Combine
it with other disconfirmed hypotheses? Try to pretend it was not
disconfirmed so it looks like only confirmed are being combined?
Post by Stanley Mulaik
Going forth with a new cycle will put again
all preceding "confirmed" hypotheses--as well as any new
hypothesis formulated to combine the previous confirmed
hypotheses into a new hypothesis--to the test again.
Test again makes sense only if you paid attention to the
test outcome last time, and only if you pay attention to the
next/coming test outcome. Stan, why did you NOT see the failure of
Study 4 in your scenario as having potentially seriously questioned
the structuring of the latent level of your model? Study 4 did
provide a new test whose failure was capable of undoing, or
questioning, all the prior testing -- but you failed to respect this
possibility. We can attribute the disrespecting of this evidence to
researcher bias, but why would an honest and attentive researcher not
seriously investigate the possibility that the latest test could
actually confront or challenge the prior (supposedly passed) tests?
Post by Stanley Mulaik
If the hypothesis is disconfirmed, then one enters again
a new cycle, beginning with an abductive phase, and so forth.
Stan, your Study 4 researcher should have done whatever diagnostic
investigations were possible to try to figure out why the Study 4
model failed. Would such diagnostics constitute being "abductive"
(inductive? deductive?) and why did you not clearly and consistently
recommend that the diagnostics be done?
Post by Stanley Mulaik
A word about indices of approximation. They should not
be confused with confirmation of the hypothesis as would
occur with a non-significant chi-square.
Yes, fit indices do NOT provide the kind of
confirm/disconfirm statements Stan depends on above. The researcher
needs a model test, and NOT even a test of approximation.
Post by Stanley Mulaik
The fact that
one knows he/she has only a good approximation implies
that he/she does not have an exact fit to the data.
This statement says "ONLY a good approximation" -- which
corresponds to DISCONFIRMATION. That is, Stan is stressing the word "only".
Post by Stanley Mulaik
That's what "only an approximation" means.
Yes, this is Stan's way of saying that if the model is
significantly inconsistent with the data the model is DISCONFIRMED
irrespective of the degree of approximation. SEMNETers, please
connect this to Stan's scenario and the failure of the Study 4 model
where Stan tried to argue that the degree of the approximation was
somehow able to discard or displace the significance of the ill fit.
Post by Stanley Mulaik
That simply is a piece of information that may be taken into
a new abductive phase of a new cycle of inquiry. It
provisionally suggests that maybe the current hypothesis
can be a viable candidate,
Model FAILURE provisionally suggests that the current
hypothesis/model is NOT a viable candidate,
Post by Stanley Mulaik
given new modifications and adjustments,
and may require major modifications and adjustments,
Post by Stanley Mulaik
based on attempts to understand the nature of the lack of fit.
based on diagnostic attempts to understand the nature of the lack of
fit -- diagnostics Stan refused to steadfastly REQUIRE of his Study 4
researcher!
Post by Stanley Mulaik
But one must also be aware that perhaps
an entirely different kind of model may be needed.
Next time put this FIRST and foremost Stan, and you will be well on
your way to agreeing with me.
Post by Stanley Mulaik
[I am thinking of a simplex model versus a common factor analysis
model as a kind of case where the common factor model could
be a good approximation to data generated by a simplex,
but one must consider a different kind of model (simplex).
But whether it would be reasonable to do so would depend on
whether the additional feature of an ordering among the
variables in time or space or in nested composition should
be considered--since the common factor model takes no
cognizance of order among variables].
I was indeed hearing Stan as still connecting to the Stan
and Les discussions. But the issue is not "order" as Stan pretends.
Consider the SimplexPLUS models, Stan; the same concern is present
there even if the "order" is no longer available as a "distracting excuse".
Post by Stanley Mulaik
We also should consider again simplicity. Any alternative
model should not only fit, but fit with fewer estimated
parameters (simplicity), or to put it another way, with
more degrees of freedom than the current model.
Bah Humbug. If the world is complex, our model should be
equally complex if it is to be proper. Stan is not sufficiently
god-like to be able to assuredly assert that the world will be
"simple". SEMNETers ought to simply strive for models that match the
world's structuring, whether that structuring is simple or complex,
in the area they are investigating.
Les
Post by Stanley Mulaik
A nice critical essay on Peirce's three-stage method
of scientific inquiry is given by Albert Atkin at
http://www.sinica.edu.tw/ioe/chinese/r2711/oldfiles/911109/paper/taiwan/2605.doc
Stan Mulaik
===============================
Les Hayduk; Department of Sociology; University of
Alberta; Edmonton, Alberta;
Canada T6G 2H4 email: ***@ualberta.ca fax:
780-492-7196 phone 780-492-2730


