In the hands-on examples so far, we have simply used Keras's built-in loss functions, but sometimes we need to define a custom loss tailored to our actual requirements. This section covers that in detail.
Let's start with the code:
def customized_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = keras.models.Sequential([
    keras.layers.Dense(30, activation='relu',
                       input_shape=x_train.shape[1:]),
    keras.layers.Dense(1),
])
model.summary()
model.compile(loss=customized_mse,
              optimizer='adam',
              metrics=["mean_squared_error"])
First, define the loss function. Its two parameters are the ground-truth labels (y_true) and the model's predictions (y_pred), and it must return a scalar loss value; here that is the mean of the squared differences.
The network design is the same as before; if needed, you can dig back through the earlier posts.
The model's loss function is set through compile: just pass the function defined above as the loss argument. For the optimization algorithm we use Adam here, and for the evaluation metric, mean_squared_error.
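Since our custom loss is just MSE written by hand, we can sanity-check it against Keras's built-in mean_squared_error on a small tensor. A minimal sketch with made-up values:

```python
import tensorflow as tf

def customized_mse(y_true, y_pred):
    # mean of element-wise squared differences
    return tf.reduce_mean(tf.square(y_true - y_pred))

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.constant([1.5, 2.0, 2.0])

custom = customized_mse(y_true, y_pred).numpy()
builtin = tf.keras.losses.mean_squared_error(y_true, y_pred).numpy()
print(custom, builtin)  # both ≈ 0.4167, i.e. (0.25 + 0 + 1) / 3
```

If the two numbers match, the custom function is a drop-in replacement for the built-in loss.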
Do you still remember that Chapter 2 already worked through complete code for both classification and regression? That was quite a while ago, so let's review the full code here:
import matplotlib as mpl
import matplotlib.pyplot as plt
# draw plots inline in jupyter notebook
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
This dataset ships with sklearn; on the first run it is downloaded automatically from the internet, so that run will be slow, but subsequent runs are fast.
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
print(housing.DESCR)
print(housing.data.shape)
print(housing.target.shape)

import pprint
pprint.pprint(housing.data[0:5])
pprint.pprint(housing.target[0:5])
from sklearn.model_selection import train_test_split

x_train_all, x_test, y_train_all, y_test = train_test_split(
    housing.data, housing.target, random_state=7)
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train_all, y_train_all, random_state=11)
print(x_train.shape, y_train.shape)
print(x_valid.shape, y_valid.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
# custom loss function
def customized_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = keras.models.Sequential([
    keras.layers.Dense(30, activation='relu',
                       input_shape=x_train.shape[1:]),
    keras.layers.Dense(1),
])
model.summary()
model.compile(loss=customized_mse,
              optimizer='adam',
              metrics=["mean_squared_error"])
callbacks = [keras.callbacks.EarlyStopping(patience=5, min_delta=1e-2)]
history = model.fit(x_train_scaled, y_train,
                    epochs=100,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks=callbacks)
def plot_learning_curves(history):
    # set the figure size to 8 x 5
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    # show the grid
    plt.grid(True)
    # set_ylim sets the range of the y axis
    plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)
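One caveat worth noting with custom losses: if you save a model compiled with customized_mse and reload it later, keras.models.load_model needs the function passed in via custom_objects, otherwise Keras cannot deserialize the loss. A minimal, self-contained sketch (the file name and the tiny stand-in model here are just placeholders for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def customized_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# a tiny stand-in model compiled with the custom loss
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(loss=customized_mse, optimizer='adam')

model.save("tmp_custom_loss_model.h5")

# reload: map the serialized loss name back to the actual function
loaded = keras.models.load_model(
    "tmp_custom_loss_model.h5",
    custom_objects={"customized_mse": customized_mse})

# the reloaded model produces identical predictions
x = np.ones((2, 3), dtype="float32")
print(np.allclose(model.predict(x), loaded.predict(x)))
```

Without the custom_objects mapping, load_model would raise an error because the saved file only records the loss by name, not its implementation.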