assert tests an expression and raises an exception when the condition evaluates to false.
An assertion lets the program report the error immediately when a precondition is violated, instead of waiting for a crash to surface later at runtime.
import numpy as np

def masks_Unet(masks):
    assert len(masks.shape) == 4   # must be a 4D array
    assert masks.shape[1] == 1     # check that the channel dimension is 1
    im_h = masks.shape[2]
    im_w = masks.shape[3]
    # flatten each mask into one vector of pixels
    masks = np.reshape(masks, (masks.shape[0], im_h * im_w))
    # one-hot encode every pixel: channel 0 = background, channel 1 = foreground
    new_masks = np.empty((masks.shape[0], im_h * im_w, 2))
    for i in range(masks.shape[0]):
        for j in range(im_h * im_w):
            if masks[i, j] == 0:
                new_masks[i, j, 0] = 1
                new_masks[i, j, 1] = 0
            else:
                new_masks[i, j, 0] = 0
                new_masks[i, j, 1] = 1
    return new_masks
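A quick sanity check of masks_Unet on random data (the 48x48 patch size here is an assumption for illustration, not fixed by these notes):

masks = np.random.randint(0, 2, size=(10, 1, 48, 48)).astype(np.float32)
new_masks = masks_Unet(masks)
print(new_masks.shape)  # (10, 2304, 2): one two-class one-hot vector per pixel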
For example:
a = 5
assert a == 1  # raises AssertionError, because a != 1
Accuracy = (True positives + True negatives) / all samples
Precision = True positives / (True positives + False positives)
e.g. spam filtering calls for higher precision: it is better to let a few spam emails slip through than to block legitimate mail.
Recall = True positives / (True positives + False negatives)
e.g. tumor diagnosis and earthquake prediction call for higher recall: better to raise many false alarms than to miss a single true positive.
Specificity = TN / (TN + FP)
Sensitivity = TP / ( TP + FN )
Keeping the classifier itself unchanged, choosing a different decision threshold (Decision Rule) trades off the values of Sensitivity and Specificity.
A completely random binary classifier has an AUC of 0.5. Although each threshold yields a different FPR and TPR, a ROC curve with a larger area under it, closer to the top-left corner, indicates a more robust binary classifier.
Moreover, on each classifier's ROC curve one can find the probability cutoff that brings the metric one cares about to its best level.
F1 balances the model's precision and recall; it can be viewed as their harmonic mean, with a maximum of 1 and a minimum of 0.
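As a self-contained sketch, all of these metrics can be computed from the four confusion-matrix counts (the toy labels below are made up for illustration):

import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # toy ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # toy predictions

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

accuracy    = (tp + tn) / len(y_true)        # 0.75
precision   = tp / (tp + fp)                 # 0.75
recall      = tp / (tp + fn)                 # 0.75, same as sensitivity
specificity = tn / (tn + fp)                 # 0.75
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two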
print("random.randint(20)=",random.randint(20,30))
class Conv2D(Conv):
    def __init__(self,
                 filters,
                 kernel_size,
                 strides=(1, 1),
                 padding='valid',
                 data_format=None,
                 dilation_rate=(1, 1),
                 groups=1,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
up8 = Conv2D(128, 2, activation='relu', padding='same', data_format='channels_first')(UpSampling2D(size=(2, 2))(conv7))
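A minimal shape check of this upsample-then-convolve pattern (channels_last is used here for portability, while the U-Net line above uses channels_first; the tensor sizes are assumptions):

import tensorflow as tf

x = tf.random.normal((1, 28, 28, 256))             # (batch, h, w, channels)
up = tf.keras.layers.UpSampling2D(size=(2, 2))(x)  # (1, 56, 56, 256): doubles h and w
out = tf.keras.layers.Conv2D(128, 2, activation='relu', padding='same')(up)
print(out.shape)                                   # (1, 56, 56, 128): 'same' padding keeps h, w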
Layers are mainly added via the add function; these layers include, for example, Reshape, Permute, and Activation (in a layer's output shape, None is the batch dimension):

conv10 = keras.layers.core.Reshape((2, patch_height*patch_width))(conv9)
conv11 = keras.layers.core.Permute((2, 1))(conv10)
conv12 = keras.layers.core.Activation('softmax')(conv11)
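Tracing the shapes through these three layers (48x48 patches assumed for illustration):

import tensorflow as tf

patch_height, patch_width = 48, 48
x = tf.random.normal((4, 2, patch_height, patch_width))          # like conv9: (batch, 2, h, w)
r = tf.keras.layers.Reshape((2, patch_height * patch_width))(x)  # (4, 2, 2304)
p = tf.keras.layers.Permute((2, 1))(r)                           # (4, 2304, 2): one 2-class vector per pixel
s = tf.keras.layers.Activation('softmax')(p)                     # softmax over the last (class) axis
print(s.shape)                                                   # (4, 2304, 2), matching masks_Unet's targets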
e.g. when relu is used as the activation function:
>>> layer = tf.keras.layers.Activation('relu')
>>> output = layer([-3.0, -1.0, 0.0, 2.0])
>>> list(output.numpy())
[0.0, 0.0, 0.0, 2.0]
def compile(optimizer,
            loss=None,
            metrics=None,
            loss_weights=None,
            sample_weight_mode=None,
            weighted_metrics=None,
            target_tensors=None,
            **kwargs)
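For example, a plausible call for the two-class softmax output above (the optimizer and loss choices here are assumptions, not fixed by these notes):

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])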
10.25: U-Net training code:
model.fit(patches_imgs_train,
          patches_masks_train,
          epochs=N_epochs,
          batch_size=batch_size,
          verbose=1,
          shuffle=True,
          validation_split=0.1,  # one tenth of the data goes to the validation set
          callbacks=[checkpointer])
def fit(x=None,
        y=None,
        batch_size=None,
        epochs=1,
        verbose=1,
        callbacks=None,
        validation_split=0.,
        validation_data=None,
        shuffle=True,
        class_weight=None,
        sample_weight=None,
        initial_epoch=0,
        steps_per_epoch=None,
        validation_steps=None,
        validation_freq=1,
        max_queue_size=10,
        workers=1,
        use_multiprocessing=False,
        **kwargs)
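fit returns a History object whose history dict holds one value per epoch for each metric (the exact key names depend on what was passed to compile); a small sketch of inspecting it:

history = model.fit(patches_imgs_train, patches_masks_train,
                    epochs=N_epochs, batch_size=batch_size,
                    validation_split=0.1)
print(history.history.keys())   # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
print(history.history['loss'])  # training loss per epoch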
def evaluate(x=None,
             y=None,
             batch_size=None,
             verbose=1,
             sample_weight=None,
             steps=None,
             callbacks=None,
             max_queue_size=10,
             workers=1,
             use_multiprocessing=False)
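A sketch of calling it (patches_masks_test is an assumed name, following the naming above):

score = model.evaluate(patches_imgs_test, patches_masks_test, batch_size=2, verbose=1)
print('test loss:', score[0])      # the first entry is always the loss
print('test accuracy:', score[1])  # the rest follow the metrics list passed to compile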
predictions = model.predict(patches_imgs_test, batch_size=2, verbose=2)
print("predicted images size :")
print(predictions.shape)
print(predictions)
def predict(x,
            batch_size=None,
            verbose=0,
            steps=None)
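Since predictions has shape (n_patches, patch_height*patch_width, 2), matching the encoding produced by masks_Unet, a binary mask can be recovered with an argmax over the class axis (a sketch, with patch_height and patch_width assumed as above):

import numpy as np

pred_masks = np.argmax(predictions, axis=-1)                         # (n_patches, h*w): 0 = background, 1 = foreground
pred_masks = pred_masks.reshape((-1, 1, patch_height, patch_width))  # back to 4D mask arrays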
1. JSON file
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# load json and create model
from keras.models import model_from_json

json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()  # remember to close the file handle
loaded_model = model_from_json(loaded_model_json)
2. HDF5 file: model.save_weights()
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
# load weights into new model
model.load_weights("model.h5")
print("Loaded model from disk")
e.g. loading a saved architecture together with its saved weights:
model = model_from_json(open(path_experiment+name_experiment +'_architecture.json').read())
model.load_weights(path_experiment+name_experiment + '_'+best_last+'_weights.h5')