
[python skill] Implementing hypothesis testing methods in Python

刀尔東 · 2018-08-03

Hello everyone! I have recently been learning how to run hypothesis tests in Python, so below is a short summary of the methods together with the statistical ideas behind them.

Hypothesis testing is a method in inferential statistics for testing a statistical hypothesis, where a "statistical hypothesis" is a scientific hypothesis that can be tested by observing a process modeled by a set of random variables. [1] Once we can estimate an unknown parameter, we naturally want to use the result to draw appropriate inferences about the parameter's true value.

A statistical hypothesis about parameters is a statement about one or more parameters. The statement whose correctness we want to test is the null hypothesis; it is usually chosen by the researcher and reflects a default view of the unknown parameter. The competing statement about the parameter is the alternative hypothesis, which expresses the opposing view held by the researcher running the test (in other words, the alternative hypothesis is usually what the researcher most wants to establish). In the frog example below, for instance, the null hypothesis is "the two frogs have the same mean strike force" and the alternative is "their mean strike forces differ".

Common hypothesis tests include the t-test, Z-test, chi-square test, F-test, and so on.

Reference: https://zh.wikipedia.org/wiki/%E5%81%87%E8%A8%AD%E6%AA%A2%E5%AE%9A

In short, hypothesis testing is a way of reasoning about data: depending on the experimental data and the question at hand, you pick a different procedure, such as a t-test, Z-test, chi-square test, F-test, etc.
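As a side note, for the classical tests just listed you usually do not need to code anything by hand: scipy.stats ships ready-made implementations. A minimal sketch (the sample arrays and counts below are made-up numbers, purely for illustration):

import numpy as np
from scipy import stats

# Two made-up samples (illustrative only)
sample_1 = np.array([0.71, 0.68, 0.75, 0.66, 0.73])
sample_2 = np.array([0.42, 0.45, 0.39, 0.48, 0.41])

# Two-sample t-test: do the two samples have the same mean?
t_stat, t_p = stats.ttest_ind(sample_1, sample_2)

# One-way F-test (ANOVA) across the groups
f_stat, f_p = stats.f_oneway(sample_1, sample_2)

# Chi-square goodness-of-fit test on made-up count data
chi_stat, chi_p = stats.chisquare([18, 22, 20, 40], f_exp=[25, 25, 25, 25])

print(t_p, f_p, chi_p)

The permutation and bootstrap tests below take a different, simulation-based route, which is what the rest of this post is about.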

--------------------------------------------------------------------------------------------------------------------

Permutation test on frog data

Before getting into the methods below, it is worth reviewing the earlier reposted article on the permutation test. It is one kind of hypothesis test, and a simple, fundamental one, so we start there.

A permutation test compares two complete samples of experimental data (n1, n2).

The main procedure is:

1. Concatenate n1 and n2, shuffle the pooled values (drawing without replacement), and split them back into two samples (n1', n2') with the same lengths as the originals:

import numpy as np


def permutation_sample(data1, data2):
    """Generate a permutation sample from two data sets."""
    # Concatenate the data sets: data
    data = np.concatenate((data1, data2))

    # Permute the concatenated array: permuted_data
    permuted_data = np.random.permutation(data)

    # Split the permuted array into two: perm_sample_1, perm_sample_2
    perm_sample_1 = permuted_data[:len(data1)]
    perm_sample_2 = permuted_data[len(data1):]

    return perm_sample_1, perm_sample_2

2. Repeat step 1 many times (say 10,000): for each permutation sample (n1', n2'), compute the mean of each half and take the difference of the two means (the test statistic):

def draw_perm_reps(data_1, data_2, func, size=1):
    """Generate multiple permutation replicates."""
    # Initialize array of replicates: perm_replicates
    perm_replicates = np.empty(size)

    for i in range(size):
        # Generate permutation sample
        perm_sample_1, perm_sample_2 = permutation_sample(data_1, data_2)

        # Compute the test statistic
        perm_replicates[i] = func(perm_sample_1, perm_sample_2)

    return perm_replicates


def diff_of_means(data_1, data_2):
    """Difference in means of two arrays."""
    # The difference of means of data_1, data_2: diff
    diff = np.mean(data_1) - np.mean(data_2)

    return diff

3. The many (10,000) replicates obtained this way approximate the distribution of the test statistic we would see if the two samples really came from one common population.
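Putting the three steps together, here is a quick self-contained check using the functions defined above (this snippet is my own addition, and the two arrays are made-up numbers, not the frog data):

import numpy as np

np.random.seed(42)  # fix the random shuffles so the result is reproducible

# Two made-up samples (illustrative only)
sample_1 = np.array([0.70, 0.90, 0.80, 1.10, 0.60, 0.95])
sample_2 = np.array([0.40, 0.50, 0.45, 0.60, 0.35, 0.55])

# Observed difference of means
observed_diff = diff_of_means(sample_1, sample_2)

# 10,000 permutation replicates of the difference of means
reps = draw_perm_reps(sample_1, sample_2, diff_of_means, size=10000)

# p-value: fraction of replicates at least as large as the observed difference
p_value = np.sum(reps >= observed_diff) / len(reps)
print('p-value =', p_value)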

Here is a concrete example:

Kleinteich and Gorb (Sci. Rep., 4, 5225, 2014) performed an interesting experiment with South American horned frogs. They held a plate connected to a force transducer, along with a bait fly, in front of them. They then measured the impact force and adhesive force of the frog's tongue when it struck the target.

Drs. Kleinteich and Gorb ran a series of experiments to study which factors affect the adhesive force of a frog's tongue.

Their measurements showed that Frog A (an adult frog) had a mean impact force of 0.71 Newtons (N), while Frog B (a juvenile) averaged 0.42 N. Is this 0.29 N gap just a fluke caused by the small number of measurements, or does age really affect the tongue's force?

The two scientists analyzed the data with a permutation test:

# Compute difference of mean impact force from experiment: empirical_diff_means
empirical_diff_means = diff_of_means(force_a, force_b)  # difference of the observed sample means

# Draw 10,000 permutation replicates: perm_replicates
perm_replicates = draw_perm_reps(force_a, force_b,
                                 diff_of_means, size=10000)  # distribution of the difference over 10,000 shuffles

# Compute p-value: p
p = np.sum(perm_replicates >= empirical_diff_means) / len(perm_replicates)  # fraction of replicates at least as large as the observed difference

# Print the result
print('p-value =', p)

output:

p-value = 0.0063

In this test the hypothesis is that Frog A and Frog B have identical distributions of strike forces, i.e., that age does not affect tongue force ("You will compute the probability of getting at least a 0.29 N difference in mean strike force under the hypothesis that the distributions of strike forces for the two frogs are identical."). The repeated shuffles give us the distribution of the difference in means that we would expect under that hypothesis, and the probability of a difference at least as large as (or more extreme than) the observed one is only about 0.6%. A difference this extreme is therefore very unlikely under the hypothesis, so we reject it and conclude that age does influence tongue force.

That is the basic permutation test.

-----------------------------------------------------------------------------------------------------------------------------------

Bootstrap hypothesis tests

-----------------------------------------------------------------------------------------------------------------------------------

A one-sample bootstrap hypothesis test

The scientists then continued studying the frogs:

Later in the project they measured another juvenile frog, Frog C (recall that Frog B is also a juvenile). Unfortunately, Frog C's raw measurements were lost; all that remains is its mean impact force, 0.55 N, while Frog B's mean is 0.4191 N. Without Frog C's raw data we cannot run a permutation test, so we cannot test whether Frog B and Frog C follow the same distribution (i.e., whether they behave like the same kind of frog). Are they the same? To make progress, the two scientists came up with a bold idea:

Since the full distributions cannot be compared, we test a weaker hypothesis instead: the mean strike force of Frog B is equal to that of Frog C.

Another juvenile frog was studied, Frog C, and you want to see if Frog B and Frog C have similar impact forces. Unfortunately, you do not have Frog C's impact forces available, but you know they have a mean of 0.55 N. Because you don't have the original data, you cannot do a permutation test, and you cannot assess the hypothesis that the forces from Frog B and Frog C come from the same distribution. You will therefore test another, less restrictive hypothesis: The mean strike force of Frog B is equal to that of Frog C.

To set up the bootstrap hypothesis test, you will take the mean as our test statistic. Remember, your goal is to calculate the probability of getting a mean impact force less than or equal to what was observed for Frog B if the hypothesis that the true mean of Frog B's impact forces is equal to that of Frog C is true. You first translate all of the data of Frog B such that the mean is 0.55 N. This involves adding the mean force of Frog C and subtracting the mean force of Frog B from each measurement of Frog B. This leaves other properties of Frog B's distribution, such as the variance, unchanged.
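The helper draw_bs_reps used below (and again in the last section) is never defined in this post; it comes from the DataCamp exercises. As a reference, here is my own minimal reconstruction based only on how it is called here — resample one array with replacement, apply a summary function, repeat — so the original DataCamp helper may differ in detail:

import numpy as np


def bootstrap_replicate_1d(data, func):
    """Draw one bootstrap replicate of a 1-D data set."""
    # Resample with replacement, keeping the original sample size
    bs_sample = np.random.choice(data, size=len(data))
    return func(bs_sample)


def draw_bs_reps(data, func, size=1):
    """Draw `size` bootstrap replicates of func applied to data."""
    bs_replicates = np.empty(size)
    for i in range(size):
        bs_replicates[i] = bootstrap_replicate_1d(data, func)
    return bs_replicates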

# Make an array of translated impact forces: translated_force_b
# (we are not "correcting" Frog B's data; we shift it so that its mean equals
# Frog C's 0.55 N, simulating a world in which the null hypothesis is true)
translated_force_b = force_b - np.mean(force_b) + 0.55

# Take bootstrap replicates of Frog B's translated impact forces: bs_replicates
bs_replicates = draw_bs_reps(translated_force_b, np.mean, 10000)

# Compute fraction of replicates that are less than the observed Frog B force: p
p = np.sum(bs_replicates <= np.mean(force_b)) / 10000

# Print the p-value
print('p = ', p)

output:

p =  0.0046

Restating the code in plain language: under our hypothesis we shift Frog B's data so that its mean becomes 0.55 N and store the result in translated_force_b. We are not claiming Frog B's measurements were collected incorrectly; we are constructing a data set in which the null hypothesis (mean force = 0.55 N) holds while Frog B's spread stays unchanged. Bootstrapping this shifted data 10,000 times gives the distribution of the sample mean under the null hypothesis, and the probability of a mean as extreme as (as low as) Frog B's actually observed mean is 0.46%. That is very unlikely, so we reject the hypothesis: Frog B's mean strike force is almost certainly not equal to Frog C's.

Note that this setup involves one sample with full data and one "sample" for which we only know a single summary statistic; the test is built around that known statistic.

-----------------------------------------------------------------------------------------------------------------------------------

A bootstrap test for identical distributions

This test uses the complete data of both samples and asks whether they come from the same distribution (Frog A and Frog B have identical distributions of impact forces):

# Compute difference of mean impact force from experiment: empirical_diff_means
empirical_diff_means = diff_of_means(force_a, force_b)

# Concatenate forces: forces_concat
forces_concat = np.concatenate((force_a, force_b))

# Initialize bootstrap replicates: bs_replicates
bs_replicates = np.empty(10000)

for i in range(10000):
    # Generate bootstrap sample
    bs_sample = np.random.choice(forces_concat, size=len(forces_concat))

    # Compute replicate
    bs_replicates[i] = diff_of_means(bs_sample[:len(force_a)],
                                     bs_sample[len(force_a):])

# Compute and print p-value: p
p = np.sum(bs_replicates >= empirical_diff_means) / len(bs_replicates)
print('p-value =', p)

output:

p-value = 0.0055

Walkthrough of the code:

1. Assume Frog A and Frog B share the same distribution, so their data can be pooled (forces_concat = np.concatenate((force_a, force_b)));

2. Draw a bootstrap sample with replacement from the pooled data, split it into two pieces with the lengths of force_a and force_b, compute the mean of each piece (one plausible pair of "true" means), and take their difference.

3. Repeat step 2 ten thousand times to obtain the distribution of that difference of means under the assumed hypothesis.

4. Compute the p-value: the fraction of replicates at least as extreme as the observed difference (empirical_diff_means = diff_of_means(force_a, force_b)).

The resulting probability is 0.55%, which is very small, so we reject the hypothesis: Frog A and Frog B most likely do not have the same distribution of impact forces.

As you can see, this bootstrap test starts from the same point as the permutation test: given the two samples and the assumption that they share one distribution, how likely is a difference in means as large as 0.29 N? So which of the two methods is better? The DataCamp instructors' answer is:

Testing the hypothesis that two samples have the same distribution may be done with a bootstrap test, but a permutation test is preferred because it is more accurate (exact, in fact).

In other words, the permutation test is the more trustworthy of the two.

But the permutation test also has its limits:

But therein lies the limit of a permutation test; it is not very versatile. We now want to test the hypothesis that Frog A and Frog B have the same mean impact force, but not necessarily the same distribution. This, too, is impossible with a permutation test.

When we only want to test whether Frog A and Frog B have the same mean, without requiring that they share the same distribution, the permutation test cannot help.

-----------------------------------------------------------------------------------------------------------------------------------

A two-sample bootstrap hypothesis test for difference of means

The idea here is to shift both samples so that they share the pooled mean, which makes the null hypothesis (equal means) true while leaving each frog's own distribution shape intact, and then bootstrap each shifted sample separately:

# Compute mean of all forces: mean_force
mean_force = np.mean(forces_concat)

# Generate shifted arrays
force_a_shifted = force_a - np.mean(force_a) + mean_force
force_b_shifted = force_b - np.mean(force_b) + mean_force

# Compute 10,000 bootstrap replicates from shifted arrays
bs_replicates_a = draw_bs_reps(force_a_shifted, np.mean, 10000)
bs_replicates_b = draw_bs_reps(force_b_shifted, np.mean, 10000)

# Get replicates of difference of means: bs_replicates
bs_replicates = bs_replicates_a - bs_replicates_b

# Compute and print p-value: p
p = np.sum(bs_replicates >= empirical_diff_means) / len(bs_replicates)
print('p-value =', p)

output:

p-value = 0.0043

As you can see, bootstrap analysis really is the more flexible tool: it can test not only whether two samples share the same distribution, but also whether they merely share the same mean.


Original post: https://blog.csdn.net/weixin_38760323/java/article/details/81369432
