TensorFlow in Practice: Backpropagation

  • Test environment
  1. Windows 10
  2. Anaconda3 (64-bit)
  3. Spyder
  • Required modules
  1. tensorflow (CPU)
  2. numpy

  • Neural network diagram
[Figure 1: schematic of the network — 2 input features, a 3-unit hidden layer, 1 output]
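Before the TensorFlow code, the forward pass through this network can be sketched in plain NumPy (the weight shapes mirror the script below; the concrete values here are illustrative only):

```python
import numpy as np

# Illustrative weights with the same shapes as w1 (2x3) and w2 (3x1) in the script.
rng = np.random.RandomState(1)
w1 = rng.randn(2, 3)
w2 = rng.randn(3, 1)

x = np.array([[0.2, 0.3]])  # one sample with 2 input features
a = x @ w1                  # hidden layer: (1,2) @ (2,3) -> (1,3)
y = a @ w2                  # output layer: (1,3) @ (3,1) -> (1,1)
print(a.shape, y.shape)
```

Note that there is no activation function, so the network computes a purely linear map — exactly as in the script below.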

  • Python code
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8  # number of samples fed to the network per training step
seed = 23455    # fixed random seed for reproducibility


rng = np.random.RandomState(seed)
X = rng.rand(32, 2)  # 32x2 matrix of random input samples
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]  # label is 1 when x0 + x1 < 1, else 0
print("X:\n", X)
print("Y:\n", Y)


x  = tf.placeholder(tf.float32, shape=(None, 2))  # placeholder for the input features
y_ = tf.placeholder(tf.float32, shape=(None, 1))  # placeholder for the ground-truth labels


w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))  # 2x3 weight matrix (input -> hidden)
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))  # 3x1 weight matrix (hidden -> output)


a = tf.matmul(x, w1)  # hidden layer (matrix multiply, no activation)
y = tf.matmul(a, w2)  # output layer


loss = tf.reduce_mean(tf.square(y - y_))  # mean squared error
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)  # gradient descent, learning rate 0.001


with tf.Session() as sess:  # the with-block opens a session and closes it automatically
    init_op = tf.global_variables_initializer()  # op that initializes all variables
    sess.run(init_op)  # run the initialization op

    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))

    STEPS = 3000  # number of training steps
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32  # cycling batch start index: 0, 8, 16, 24, 0, ...
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})  # feed one mini-batch
        if i % 500 == 0:  # report the loss every 500 steps
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})  # loss over the full dataset
            print("after %d training steps,loss on all data is %g" % (i, total_loss))

    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("test:[0.2,0.3],[0.9,0.7],[1.5,2.1]\nresult:\n",
          sess.run(y, feed_dict={x: [[0.2, 0.3], [0.9, 0.7], [1.5, 2.1]]}))  # predictions for three test inputs
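To make explicit what `GradientDescentOptimizer(...).minimize(loss)` computes for this particular two-layer linear network, here is a pure-NumPy sketch of the same training loop with the backward pass written out by hand. The gradient derivation is my own addition, and it trains full-batch rather than in mini-batches, so treat it as an illustration rather than a drop-in replacement:

```python
import numpy as np

seed = 23455
rng = np.random.RandomState(seed)
X = rng.rand(32, 2)
Y = np.array([[float(x0 + x1 < 1)] for (x0, x1) in X])

# Same shapes as the TF variables; the initial values here are illustrative.
w1 = np.random.RandomState(1).randn(2, 3)
w2 = np.random.RandomState(2).randn(3, 1)

lr = 0.001
losses = []
for step in range(3000):
    a = X @ w1                          # forward pass: hidden layer
    y = a @ w2                          # forward pass: output layer
    diff = y - Y
    losses.append(np.mean(diff ** 2))   # MSE, matching tf.reduce_mean(tf.square(y - y_))
    grad_y = 2.0 * diff / len(X)        # dL/dy for the mean-squared-error loss
    grad_w2 = a.T @ grad_y              # backprop through y = a @ w2
    grad_w1 = X.T @ (grad_y @ w2.T)     # backprop through a = X @ w1
    w1 -= lr * grad_w1                  # plain gradient-descent updates
    w2 -= lr * grad_w2

print("first loss:", losses[0], "last loss:", losses[-1])
```

The loss decreases step by step, just as in the TensorFlow run.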

  • Test results
X:
 [[ 0.83494319  0.11482951]
 [ 0.66899751  0.46594987]
 [ 0.60181666  0.58838408]
 [ 0.31836656  0.20502072]
 [ 0.87043944  0.02679395]
 [ 0.41539811  0.43938369]
 [ 0.68635684  0.24833404]
 [ 0.97315228  0.68541849]
 [ 0.03081617  0.89479913]
 [ 0.24665715  0.28584862]
 [ 0.31375667  0.47718349]
 [ 0.56689254  0.77079148]
 [ 0.7321604   0.35828963]
 [ 0.15724842  0.94294584]
 [ 0.34933722  0.84634483]
 [ 0.50304053  0.81299619]
 [ 0.23869886  0.9895604 ]
 [ 0.4636501   0.32531094]
 [ 0.36510487  0.97365522]
 [ 0.73350238  0.83833013]
 [ 0.61810158  0.12580353]
 [ 0.59274817  0.18779828]
 [ 0.87150299  0.34679501]
 [ 0.25883219  0.50002932]
 [ 0.75690948  0.83429824]
 [ 0.29316649  0.05646578]
 [ 0.10409134  0.88235166]
 [ 0.06727785  0.57784761]
 [ 0.38492705  0.48384792]
 [ 0.69234428  0.19687348]
 [ 0.42783492  0.73416985]
 [ 0.09696069  0.04883936]]
Y:
 [[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
w1:
 [[-0.81131822  1.48459876  0.06532937]
 [-2.4427042   0.0992484   0.59122431]]
w2:
 [[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]]
after 0 training steps,loss on all data is 5.13118
after 500 training steps,loss on all data is 0.429111
after 1000 training steps,loss on all data is 0.409789
after 1500 training steps,loss on all data is 0.399923
after 2000 training steps,loss on all data is 0.394146
after 2500 training steps,loss on all data is 0.390597


w1:
 [[-0.70006633  0.9136318   0.08953571]
 [-2.3402493  -0.14641267  0.58823055]]
w2:
 [[-0.06024267]
 [ 0.91956186]
 [-0.0682071 ]]
test:[0.2,0.3],[0.9,0.7],[1.5,2.1]
result:
 [[ 0.16510932]
 [ 0.76494515]
 [ 1.24338591]]
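The raw outputs above are not squashed into [0, 1], so they are not probabilities. A simple, entirely optional way to read them as class predictions is to threshold at 0.5 (this step is my own addition, not part of the original script):

```python
import numpy as np

# The three outputs printed above, copied from the test results.
result = np.array([[0.16510932], [0.76494515], [1.24338591]])
pred = (result > 0.5).astype(int)  # hypothetical 0/1 decision rule
print(pred.ravel())  # -> [0 1 1]
```

Note that the first test point [0.2, 0.3] has x0 + x1 < 1, so under the data-generating rule its label would be 1; a purely linear network with no activation can only approximate that rule.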
