The following calculations are based on Jeffreys’s (1961, p. 269) Theory of Probability and elaborate why he chose to use a symmetric proper weighting function $\pi_{1}(\delta)$ on the test-relevant parameter, the population effect size $\delta$.
First, to relate the observed $t$-value to the population effect size $\delta$ within $\mathcal{M}_{1}$, Jeffreys rewrote the likelihood of $d$ in terms of the effect size $\delta = \mu / \sigma$ and $\sigma$. To calculate the weighted likelihood of $d$ under $\mathcal{M}_{1}$ he then chose to set $\pi(\sigma) \propto \sigma^{-1}$. By assigning the same weighting function to $\sigma$ as was done for $\mathcal{M}_{0}$, we obtain:
(1)

\begin{align*}
p(d \, | \, \mathcal{M}_{1}) = (2 \pi)^{-{n \over 2}} \int_{0}^{\infty} \sigma^{-n-1} \int_{-\infty}^{\infty} \exp \left ( - {n \over 2} \left [ \Big ( {\bar{x} \over \sigma} - \delta \Big )^{2} + \Big ( {s \over \sigma} \Big )^{2} \right ] \right ) \pi_{1}(\delta ) \, \text{d} \delta \, \text{d} \sigma.
\end{align*}
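For readers who want the intermediate step, here is a brief reconstruction (not quoted from Jeffreys) of how the likelihood rewrites in terms of $\delta$ and $\sigma$. It uses the decomposition $\sum_{i=1}^{n} (x_{i} - \mu)^{2} = n (\bar{x} - \mu)^{2} + n s^{2}$, where $s^{2} = {1 \over n} \sum_{i=1}^{n} (x_{i} - \bar{x})^{2}$:

\begin{align*}
f(d \, | \, \mu, \sigma) = (2 \pi \sigma^{2})^{-{n \over 2}} \exp \left ( - {n \over 2 \sigma^{2}} \left [ (\bar{x} - \mu)^{2} + s^{2} \right ] \right ) = (2 \pi)^{-{n \over 2}} \sigma^{-n} \exp \left ( - {n \over 2} \left [ \Big ( {\bar{x} \over \sigma} - \delta \Big )^{2} + \Big ( {s \over \sigma} \Big )^{2} \right ] \right ),
\end{align*}

where the second equality substitutes $\mu = \sigma \delta$. Multiplying by $\pi(\sigma) \propto \sigma^{-1}$ and $\pi_{1}(\delta)$ and integrating out both parameters yields Eq. 1.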
The remaining task is to specify $\pi_{1}(\delta)$, the weighting function for the test-relevant parameter. Jeffreys proposed his weighting function $\pi_{1}(\delta)$ based on desiderata obtained from hypothetical, extreme data.
Predictive matching: Symmetric $\pi_{1}(\delta)$
The first “extreme” case Jeffreys discusses is when $n = 1$; this automatically yields $s = 0$ regardless of the value of the single datum $x$. Jeffreys noted that a single datum cannot provide support for $\mathcal{M}_{1}$, as any deviation of $x$ from zero can also be attributed to our lack of knowledge of $\sigma$. Hence, nothing is learned from only one observation and consequently the Bayes factor should equal 1 whenever $n = 1$.
To ensure that $\text{BF}_{10}(d) = 1$ whenever $n = 1$, Jeffreys (1961, p. 269) entered $n = 1$, thus, $\bar{x} = x$ and $s = 0$, into Eq. 1 and noted that $p(d \, | \, \mathcal{M}_{1})$ equals $p(d \, | \, \mathcal{M}_{0})$, if $\pi_{1}(\delta)$ is taken to be symmetric around zero. The proof assumes that $x > 0$ and uses the transformation $\xi = x / \sigma$, thus, $\sigma = x / \xi$ and $\sigma^{-2} \, \text{d} \sigma = - x^{-1} \, \text{d} \xi$, where the minus sign reverses the limits of integration. Hence, by symmetry of $\pi_{1}(\delta)$ we get
(2)

\begin{align*}
(2 \pi)^{1 \over 2} p(d \, | \, \mathcal{M}_{1}) = & \int_{0}^{\infty} \sigma^{-1} \int_{-\infty}^{\infty} \exp \left ( - {1 \over 2} (\xi - \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta \, \sigma^{-1} \text{d} \sigma \\
= & {1 \over x} \int_{0}^{\infty} \int_{-\infty}^{\infty} \exp \left ( - {1 \over 2} (\xi - \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta \, \text{d} \xi \\
= & {1 \over x} \Big [ \int_{\xi=0}^{\xi=\infty} \int_{\delta=0}^{\delta=\infty} \exp \left ( - {1 \over 2} (\xi - \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta \, \text{d} \xi \\
& + \int_{\xi=0}^{\xi=\infty} \int_{\delta=0}^{\delta=\infty} \exp \left ( - {1 \over 2} (\xi + \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta \, \text{d} \xi \Big ]
\end{align*}
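The split into the last two lines of Eq. 2 is where the symmetry assumption enters: as a brief reconstruction, breaking the inner integral at zero and substituting $-\delta$ for $\delta$ on the negative half-line, so that $\pi_{1}(-\delta) = \pi_{1}(\delta)$, gives

\begin{align*}
\int_{-\infty}^{0} \exp \left ( - {1 \over 2} (\xi - \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta = \int_{0}^{\infty} \exp \left ( - {1 \over 2} (\xi + \delta )^{2} \right ) \pi_{1}(\delta ) \, \text{d} \delta .
\end{align*}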
Substituting $-\xi$ for $\xi$ in the second term and then swapping the order of integration (Fubini) yields an inner integral in terms of $\xi$ that equals the normalisation constant of a normal distribution, that is,
(3)

\begin{align*}
p(d \, | \, \mathcal{M}_{1}) = & (2 \pi)^{-{1 \over 2}} {1 \over x} \Big [ \int_{\xi=-\infty}^{\xi=\infty} \exp \left ( - {1 \over 2} (\xi - \delta )^{2} \right ) \text{d} \xi \int_{\delta=0}^{\delta=\infty} \pi_{1}(\delta ) \, \text{d} \delta \Big ] \\
= & {1 \over x} \int_{0}^{\infty} \pi_{1}(\delta) \, \text{d} \delta = {1 \over 2 x}.
\end{align*}
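For comparison, the corresponding computation under the null model, which is left implicit above, proceeds with the same transformation $\xi = x / \sigma$; under $\mathcal{M}_{0}$ the effect size is fixed at $\delta = 0$, so that

\begin{align*}
p(d \, | \, \mathcal{M}_{0}) = (2 \pi)^{-{1 \over 2}} \int_{0}^{\infty} \sigma^{-2} \exp \left ( - {x^{2} \over 2 \sigma^{2}} \right ) \text{d} \sigma = (2 \pi)^{-{1 \over 2}} {1 \over x} \int_{0}^{\infty} \exp \left ( - {1 \over 2} \xi^{2} \right ) \text{d} \xi = {1 \over 2 x}.
\end{align*}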
Hence, for $\pi_{1}(\delta)$ symmetric around zero the weighted likelihood of $\mathcal{M}_{1}$ is then equal to that of the null model whenever $n = 1$, and the Bayes factor consequently equals 1, as desired.
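Finally, here is a minimal numerical sanity check of the identity $p(d \, | \, \mathcal{M}_{1}) = {1 \over 2x}$; it is not part of the original derivation. It uses a standard Cauchy density for $\pi_{1}(\delta)$ purely as an example of a proper weighting function that is symmetric around zero, together with an arbitrary observation $x = 1.7$:

```python
# Numerical check of p(d | M1) = 1/(2x) for n = 1 (so xbar = x and s = 0).
# pi_1 is a standard Cauchy density, chosen here only as an example of a
# proper weighting function that is symmetric around zero.
import numpy as np
from scipy import integrate, stats

x = 1.7  # the single observation; the proof assumes x > 0

def integrand(delta, sigma):
    # Eq. 1 with n = 1: (2 pi)^(-1/2) * sigma^(-2) * exp(-(x/sigma - delta)^2 / 2) * pi_1(delta)
    return ((2 * np.pi) ** -0.5 * sigma ** -2.0
            * np.exp(-0.5 * (x / sigma - delta) ** 2)
            * stats.cauchy.pdf(delta))

# Inner integral over delta in (-inf, inf), outer over sigma in (0, inf).
marginal, abserr = integrate.dblquad(integrand, 0.0, np.inf, -np.inf, np.inf)

print(marginal)      # approximately 0.2941
print(1 / (2 * x))   # 0.2941176...
```

Replacing stats.cauchy by any other proper density symmetric around zero, e.g. stats.norm, should leave the result unchanged, which is exactly the point of the predictive-matching argument above.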