
Seminar by Prof. J.O. Berger



Dipartimento di Statistica, Probabilita' e Statistiche Applicate
Universita' di Roma "La Sapienza"


Prof. James O. Berger (Duke University - Durham, NC - U.S.A.)
will give a seminar entitled:
                                              
Could Fisher, Jeffreys and Neyman Have Agreed On Testing?

Monday, 7 May 2001, 12:00, Room 34

Abstract: 

Fisher frequently advocated p-values as an intuitively reasonable
measure of evidence against a hypothesis, but quite disliked (fixed)
frequentist error probabilities or Bayesian posterior probabilities of
hypotheses. Neyman disliked p-values and posterior probabilities,
because they did not seem to have any interpretation as frequentist
error rates. Jeffreys felt that both p-values and frequentist error
probabilities were illogical, and argued for use of "objective"
posterior probabilities of hypotheses. Thus the three ad sharply opposed
perspectives on testing (in contrast to estimation where, for many
common situations, the procedures they
proposed were in essential agreement).


If one accepts p-values as providing a reasonable intuitive measure of
the strength of evidence against hypotheses, and conditions on this
strength of evidence to define (data-dependent) frequentist error
probabilities, one
can have the best of the Fisher and Neyman testing worlds. (Indeed, this
can be viewed as a way of directly converting p-values into frequentist
error probabilities.) Furthermore, the resulting frequentist error
probabilities turn out to be equal to the objective Bayesian posterior
probabilities of hypotheses
advocated by Jeffreys, so one actually has the best of all three worlds.
This suggests that testing can also become a foundationally harmonious
realm of statistics, along the way ending the harmful and all-too-common
practice of incorrectly interpreting p-values as error probabilities.
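One concrete calibration in this spirit, from related work by Sellke, Bayarri, and Berger (and not necessarily the exact construction presented in the talk), converts a p-value into a lower bound on the Bayes factor in favor of the null, B(p) = -e * p * ln(p) for p < 1/e, and from that into a data-dependent (conditional) frequentist error probability alpha(p) = (1 + 1/B(p))^-1. A minimal sketch:

```python
import math

def bayes_factor_bound(p):
    """Lower bound on the Bayes factor in favor of the null hypothesis,
    B(p) = -e * p * ln(p), valid for 0 < p < 1/e
    (the Sellke-Bayarri-Berger p-value calibration)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def conditional_error_probability(p):
    """Data-dependent frequentist error probability implied by the bound:
    alpha(p) = (1 + 1/B(p))**-1, interpretable as an objective posterior
    probability of the null hypothesis."""
    b = bayes_factor_bound(p)
    return 1.0 / (1.0 + 1.0 / b)

# A p-value of 0.05 corresponds to a conditional error probability of
# roughly 0.29 -- much weaker evidence than "5%" suggests.
print(round(conditional_error_probability(0.05), 3))
```

This illustrates the abstract's central point: the number obtained by conditioning on the strength of evidence is both a frequentist error probability and a Bayesian posterior probability, and it differs substantially from the raw p-value.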