TensorFlow 2.0 Tensor Operations, Part 2

Some higher-level tensor operations.
1. Merging and Splitting

Merging:
The main ops are concat and stack. concat joins tensors along an existing axis, so all other axes must match; stack creates a new axis, so all shapes must match exactly.

In [1]: import tensorflow as tf
In [2]: import numpy as np
In [3]: a = tf.fill([4,20,6],6)
In [4]: b = tf.fill([2,20,6],6)

In [5]: tf.concat([a,b],axis=0).shape
Out[5]: TensorShape([6, 20, 6])
In [6]: tf.concat([a,b],axis=1).shape
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [4,20,6] vs. shape[1] = [2,20,6] [Op:ConcatV2] name: concat

In [7]: a = tf.fill([4,20,6],6)
In [8]: b = tf.fill([4,20,6],6)
In [9]: tf.stack([a,b],axis=0).shape
Out[9]: TensorShape([2, 4, 20, 6])
In [10]: tf.stack([a,b],axis=1).shape
Out[10]: TensorShape([4, 2, 20, 6])

In [11]: tf.stack([a,b],axis=-1).shape
Out[11]: TensorShape([4, 20, 6, 2])
In [12]: tf.stack([a,b],axis=-2).shape
Out[12]: TensorShape([4, 20, 2, 6])
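As a sanity check on the difference between the two: stack is equivalent to giving each input a new length-1 axis with expand_dims and then concatenating along it. A minimal sketch:

```python
import tensorflow as tf

a = tf.fill([4, 20, 6], 6)
b = tf.fill([4, 20, 6], 6)

# tf.stack creates a brand-new axis; it is the same as expanding each
# input with a length-1 axis and concatenating along that axis.
stacked = tf.stack([a, b], axis=0)
manual = tf.concat([tf.expand_dims(a, 0), tf.expand_dims(b, 0)], axis=0)

print(stacked.shape)                                   # (2, 4, 20, 6)
print(bool(tf.reduce_all(tf.equal(stacked, manual))))  # True
```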

Splitting:
unstack removes the target dimension entirely, so each returned tensor has one fewer axis — see the output below.
split keeps the dimension, and num_or_size_splits controls how many elements each piece gets along that axis.

In [13]: tf.unstack(a,axis=0).shape
AttributeError: 'list' object has no attribute 'shape'

In [15]: res = tf.unstack(a,axis=0)
In [16]: res.__len__()
Out[16]: 4    # the resulting list contains four tensors
In [17]: res[0].shape,res[1].shape,res[2].shape,res[3].shape
Out[17]:
(TensorShape([20, 6]),
 TensorShape([20, 6]),
 TensorShape([20, 6]),
 TensorShape([20, 6]))

In [18]: res = tf.split(a,axis=2,num_or_size_splits=2)
In [19]: res.__len__()
Out[19]: 2
In [20]: res[0].shape,res[1].shape
Out[20]: (TensorShape([4, 20, 3]), TensorShape([4, 20, 3]))
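A scalar num_or_size_splits produces equal pieces, but passing a list allows unequal ones, as long as the sizes sum to the length of the split axis. And since unstack is the inverse of stack, re-stacking the pieces restores the original tensor. A quick sketch:

```python
import tensorflow as tf

a = tf.fill([4, 20, 6], 6)

# A list of sizes gives unequal pieces; here 4 + 6 + 10 = 20, the
# length of axis 1.
parts = tf.split(a, num_or_size_splits=[4, 6, 10], axis=1)
print([p.shape.as_list() for p in parts])  # [[4, 4, 6], [4, 6, 6], [4, 10, 6]]

# unstack undoes stack: stacking the unstacked pieces restores a.
restored = tf.stack(tf.unstack(a, axis=0), axis=0)
print(bool(tf.reduce_all(tf.equal(restored, a))))  # True
```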
2. Data Statistics

Computing norms

In [21]: a = tf.reshape(tf.convert_to_tensor([0.,1.,2.,3.]),[2,2])

In [22]: a
Out[22]:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0., 1.],
       [2., 3.]], dtype=float32)>

In [23]: tf.norm(a)
Out[23]: <tf.Tensor: shape=(), dtype=float32, numpy=3.7416575>

In [24]: tf.norm(a,axis=0)
Out[24]: <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.       , 3.1622777], dtype=float32)>

In [25]: tf.norm(a,axis=1)
Out[25]: <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.       , 3.6055512], dtype=float32)>

In [26]: tf.norm(a,ord=1)
Out[26]: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
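The session above can be verified by hand: the default norm is the L2 (Frobenius) norm over all entries, and ord=1 sums absolute values. A quick check:

```python
import tensorflow as tf

a = tf.reshape(tf.convert_to_tensor([0., 1., 2., 3.]), [2, 2])

# Default norm: sqrt(0^2 + 1^2 + 2^2 + 3^2) = sqrt(14) ≈ 3.7416575
l2 = tf.sqrt(tf.reduce_sum(tf.square(a)))
print(float(tf.norm(a)), float(l2))

# ord=1 sums absolute values: |0| + |1| + |2| + |3| = 6
l1 = tf.reduce_sum(tf.abs(a))
print(float(tf.norm(a, ord=1)), float(l1))
```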

reduce_min/max/mean: the reduce prefix signals that a dimension is reduced away.

In [27]: a = tf.random.normal([3,6])

In [28]: a
Out[28]:
<tf.Tensor: shape=(3, 6), dtype=float32, numpy=...>  # random values

In [29]: tf.reduce_max(a), tf.reduce_min(a), tf.reduce_mean(a)
Out[29]:  # with no axis, the reduction runs over all entries: three scalars
(<tf.Tensor: shape=(), dtype=float32, numpy=...>,
 <tf.Tensor: shape=(), dtype=float32, numpy=...>,
 <tf.Tensor: shape=(), dtype=float32, numpy=...>)

In [31]: tf.reduce_max(a,axis=1), tf.reduce_min(a,axis=1), tf.reduce_mean(a,axis=1)
Out[31]:  # axis=1 reduces each row: three tensors of shape (3,)
(<tf.Tensor: shape=(3,), dtype=float32, numpy=...>,
 <tf.Tensor: shape=(3,), dtype=float32, numpy=...>,
 <tf.Tensor: shape=(3,), dtype=float32, numpy=...>)

argmax/argmin

In [32]: a.shape
Out[32]: TensorShape([3, 6])

In [33]: tf.argmax(a)
Out[33]: <tf.Tensor: shape=(6,), dtype=int64, numpy=...>  # argmax defaults to axis=0

In [34]: tf.argmax(a).shape
Out[34]: TensorShape([6])
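The shape (6,) above comes from the default axis=0: argmax walks down each of the six columns. With a concrete tensor the axis behavior is easy to see:

```python
import tensorflow as tf

a = tf.constant([[2, 9, 5],
                 [3, 5, 7],
                 [8, 5, 1]])

# Default axis=0: index of the maximum in each column (8, 9, 7).
print(tf.argmax(a).numpy())          # [2 0 1]
# axis=1: index of the maximum in each row (9, 7, 8).
print(tf.argmax(a, axis=1).numpy())  # [1 2 0]
```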
equal

In [35]: a = tf.reshape(tf.range(4),[2,2])
In [36]: b = tf.fill([2,2],1)

In [37]: tf.equal(a,b)
Out[37]:
<tf.Tensor: shape=(2, 2), dtype=bool, numpy=
array([[False,  True],
       [False, False]])>

unique

In [41]: a = tf.constant([2,3,2,3,5])

In [42]: tf.unique(a)
Out[42]: Unique(y=<tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 3, 5], dtype=int32)>, idx=<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 0, 1, 2], dtype=int32)>)
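The idx field maps every original element to its position in y, so gathering y with idx rebuilds the original tensor exactly:

```python
import tensorflow as tf

a = tf.constant([2, 3, 2, 3, 5])
res = tf.unique(a)

# y holds the unique values, idx the position of each original element
# in y; gather inverts the operation.
restored = tf.gather(res.y, res.idx)
print(restored.numpy())  # [2 3 2 3 5]
```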
3. Sorting

sort and argsort

In [43]: a = tf.random.shuffle(tf.range(5))

In [44]: a
Out[44]: <tf.Tensor: shape=(5,), dtype=int32, numpy=...>  # a random permutation of [0 1 2 3 4]

In [45]: tf.sort(a)
Out[45]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 2, 3, 4], dtype=int32)>

In [46]: tf.argsort(a)
Out[46]: <tf.Tensor: shape=(5,), dtype=int32, numpy=...>  # the indices that would sort a
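The relationship between the two: argsort returns the indices that would sort the tensor, so gathering with them reproduces tf.sort:

```python
import tensorflow as tf

a = tf.random.shuffle(tf.range(5))

# Gathering a with its argsort indices yields the sorted tensor.
idx = tf.argsort(a)
print(tf.gather(a, idx).numpy())  # [0 1 2 3 4]
print(bool(tf.reduce_all(tf.equal(tf.gather(a, idx), tf.sort(a)))))  # True
```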

top_k

In [47]: a = tf.reshape(tf.convert_to_tensor([2,9,5,3,5,7,8,5,1]),[3,3])

In [48]: res = tf.math.top_k(a,3)

In [49]: res.indices
Out[49]:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 0],
       [2, 1, 0],
       [0, 1, 2]], dtype=int32)>

In [50]: res.values
Out[50]:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[9, 5, 2],
       [7, 5, 3],
       [8, 5, 1]], dtype=int32)>
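A common use of top_k is computing top-k accuracy: a prediction counts as a hit if the true class appears among the k highest-scoring classes. A minimal sketch (the logits and labels below are made up for illustration):

```python
import tensorflow as tf

# Hypothetical prediction scores for 3 samples over 4 classes, plus labels.
logits = tf.constant([[0.1, 0.6, 0.2, 0.1],
                      [0.5, 0.1, 0.3, 0.1],
                      [0.2, 0.2, 0.2, 0.4]])
labels = tf.constant([1, 1, 3])

# Top-2 class indices per sample, shape (3, 2).
top2 = tf.math.top_k(logits, k=2).indices
# A hit if the true label appears anywhere in the top-2 row.
hits = tf.reduce_any(tf.equal(top2, labels[:, None]), axis=1)
acc = tf.reduce_mean(tf.cast(hits, tf.float32))
print(float(acc))  # 2 of 3 hits -> ≈ 0.667
```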

4. Padding and Tiling

tf.pad

In [51]: a = tf.fill([3,3],2)

In [52]: tf.pad(a,[[1,1],[2,2]])
Out[52]:
<tf.Tensor: shape=(5, 7), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 2, 2, 2, 0, 0],
       [0, 0, 2, 2, 2, 0, 0],
       [0, 0, 2, 2, 2, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=int32)>

In [53]: tf.pad(a,[[1,1],[0,0]])
Out[53]:
<tf.Tensor: shape=(5, 3), dtype=int32, numpy=
array([[0, 0, 0],
       [2, 2, 2],
       [2, 2, 2],
       [2, 2, 2],
       [0, 0, 0]], dtype=int32)>
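Each [before, after] pair in the paddings argument applies to one axis. By default the padding value is 0; the constant_values parameter changes it:

```python
import tensorflow as tf

a = tf.fill([3, 3], 2)

# Pad one column on each side of axis 1, filling with 9 instead of 0.
p = tf.pad(a, [[0, 0], [1, 1]], constant_values=9)
print(p.numpy())
# [[9 2 2 2 9]
#  [9 2 2 2 9]
#  [9 2 2 2 9]]
```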

tf.tile

In [54]: a = tf.reshape(tf.range(4),[2,2])

In [55]: tf.tile(a,[1,1])
Out[55]:
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[0, 1],
       [2, 3]], dtype=int32)>

In [56]: tf.tile(a,[1,2])
Out[56]:
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[0, 1, 0, 1],
       [2, 3, 2, 3]], dtype=int32)>

In [57]: tf.tile(a,[2,1])
Out[57]:
<tf.Tensor: shape=(4, 2), dtype=int32, numpy=
array([[0, 1],
       [2, 3],
       [0, 1],
       [2, 3]], dtype=int32)>
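Note that tf.tile physically copies the data. When the repetition is only along new or length-1 axes, tf.broadcast_to produces the same values without materializing copies up front, which is usually preferable:

```python
import tensorflow as tf

a = tf.reshape(tf.range(4), [2, 2])

# Repeating a three times along a new leading axis: tile copies,
# broadcast_to expresses the same result as a broadcast.
tiled = tf.tile(tf.reshape(a, [1, 2, 2]), [3, 1, 1])
broadcast = tf.broadcast_to(a, [3, 2, 2])
print(bool(tf.reduce_all(tf.equal(tiled, broadcast))))  # True
```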

5. Advanced Ops
