Fix: topological sort failed with message: The graph couldn't be sorted in topological order

Problem:

2020-04-05 18:50:45.822828: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:704] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-04-05 18:50:45.825525: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:704] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-04-05 18:50:45.842137: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:704] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2020-04-05 18:50:45.843658: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:704] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.

After running into the error above, I searched online for a solution; the result that came up most often is: https://www.gitmemory.com/issue/tensorflow/tensorflow/24816/498134965

That thread includes a piece of code which, when run, reproduces the error above. The code is as follows:

import tensorflow as tf
import numpy as np
print(tf.__version__)
activation = tf.nn.relu
img_plh = tf.placeholder(tf.float32, [None, 3, 3, 3])
label_plh = tf.placeholder(tf.float32, [None])
layer = img_plh
buffer = []
ks_list = list(range(1, 10, 1)) + list(range(9, 0, -1))
for ks in ks_list:
    buffer.append(tf.layers.conv2d(layer, 9, ks, 1, "same", activation=activation))
layer = tf.concat(buffer, 3)
layer = tf.layers.conv2d(layer, 1, 3, 1, "valid", activation=activation)
layer = tf.squeeze(layer, [1, 2, 3])
loss_op = tf.reduce_mean(tf.abs(label_plh - layer))
optimizer = tf.train.AdamOptimizer()
train_op = optimizer.minimize(loss_op)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(train_op, {img_plh: np.zeros([2, 3, 3, 3], np.float32), label_plh: np.zeros([2], np.float32)})

Then, following the analysis given there, I tried to track down where the problem was. In the end, I found that the problematic part of the code above is where layer is run through multiple convolutions and the results are then concatenated together:

ks_list = list(range(1, 10, 1)) + list(range(9, 0, -1))
for ks in ks_list:
    x = tf.layers.conv2d(layer, 9, ks, 1, "same", activation=activation)
    buffer.append(x)
layer = tf.concat(buffer, 3)

Solution:

I rewrote this part so that each convolution result is concatenated with the accumulated result immediately after it is computed, instead of collecting all the results first. The modified code is below; after this change, the previous error no longer occurs.

ks_list = list(range(1, 10, 1)) + list(range(9, 0, -1))
ll = 0
for i in range(len(ks_list)):
    x = tf.layers.conv2d(layer, 9, ks_list[i], 1, "same", activation=activation)
    ll = x if i == 0 else tf.concat([ll, x], axis=3)
layer = ll
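
For completeness, below is a sketch of the full reproduction script with this fix dropped in. It is simply the two snippets above combined (TF 1.x API: tf.placeholder / tf.layers), with nothing else changed:

import tensorflow as tf
import numpy as np

print(tf.__version__)
activation = tf.nn.relu
img_plh = tf.placeholder(tf.float32, [None, 3, 3, 3])
label_plh = tf.placeholder(tf.float32, [None])
layer = img_plh

# Concatenate each convolution result right away instead of collecting all of them in a list first.
ks_list = list(range(1, 10, 1)) + list(range(9, 0, -1))
ll = 0
for i in range(len(ks_list)):
    x = tf.layers.conv2d(layer, 9, ks_list[i], 1, "same", activation=activation)
    ll = x if i == 0 else tf.concat([ll, x], axis=3)
layer = ll

layer = tf.layers.conv2d(layer, 1, 3, 1, "valid", activation=activation)
layer = tf.squeeze(layer, [1, 2, 3])
loss_op = tf.reduce_mean(tf.abs(label_plh - layer))
optimizer = tf.train.AdamOptimizer()
train_op = optimizer.minimize(loss_op)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(train_op, {img_plh: np.zeros([2, 3, 3, 3], np.float32),
                                 label_plh: np.zeros([2], np.float32)})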

My own failing code also concatenated multiple convolution results in one go; after I modified it following the approach above, the problem was solved.

Before trying the change above, I also experimented with shortening the ks_list list by different amounts. I found that when its length is <= 16 the error does not occur either, but once its length exceeds that value, the error above appears.
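
(That length experiment just means truncating ks_list before the loop; a minimal sketch is below. The threshold of 16 is only what I observed on my setup, not documented behavior.)

# Truncate the kernel-size list to control how many convolution branches get concatenated.
ks_list = (list(range(1, 10, 1)) + list(range(9, 0, -1)))[:16]  # 16 branches: no error in my test
# ks_list = list(range(1, 10, 1)) + list(range(9, 0, -1))       # full 18 branches: error appears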

I cannot be sure that this fix applies to every case where this error shows up; take it as a reference only.

 
