Summary of Common TensorFlow Functions

This article summarizes commonly used TensorFlow functions, compiled from material found online.
TensorFlow converts graph definitions into operations that execute in a distributed fashion, so as to make full use of the available computing resources (such as CPUs or GPUs). In general you do not need to specify explicitly whether to use the CPU or the GPU; TensorFlow detects this automatically. If a GPU is detected, TensorFlow uses the first GPU it finds for as many operations as possible. Parallel computation lets expensive algorithms run faster, and TensorFlow also implements complex operations efficiently. Most kernels are device-specific implementations, for example for the GPU.
Below are some of the important operations/kernels:
Operation group | Operations
Maths | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal
Array | Concat, Slice, Split, Constant, Rank, Shape, Shuffle
Matrix | MatMul, MatrixInverse, MatrixDeterminant
Neural Network | SoftMax, Sigmoid, ReLU, Convolution2D, MaxPool
Checkpointing | Save, Restore
Queues and synchronizations | Enqueue, Dequeue, MutexAcquire, MutexRelease
Flow control | Merge, Switch, Enter, Leave, NextIteration
1. TensorFlow arithmetic operations
tf.add(x, y, name=None): addition (x + y)
tf.sub(x, y, name=None): subtraction (x - y)
tf.mul(x, y, name=None): multiplication (x * y)
tf.div(x, y, name=None): division (x / y)
tf.abs(x, name=None): absolute value
tf.neg(x, name=None): negation (y = -x)
tf.sign(x, name=None): returns the sign, y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0
tf.inv(x, name=None): reciprocal (y = 1/x)
tf.square(x, name=None): square (y = x * x = x^2)
tf.round(x, name=None): rounds to the nearest integer
# 'a' is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
tf.sqrt(x, name=None): square root (y = \sqrt{x} = x^{1/2})
tf.pow(x, y, name=None): element-wise power
# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
tf.log(x, name=None): logarithm; with one input it computes the natural log (ln), with two inputs the second input is used as the base
tf.maximum(x, y, name=None): element-wise maximum (x > y ? x : y)
tf.minimum(x, y, name=None): element-wise minimum (x < y ? x : y)
tf.sin(x, name=None): sine
tf.tan(x, name=None): tangent
tf.atan(x, name=None): arctangent
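As a quick sanity check, the element-wise operators above can be evaluated inside a session. The following is a minimal sketch, assuming a pre-1.0 (roughly 0.12-era) TensorFlow release in which tf.sub and tf.mul still exist (they were later renamed tf.subtract and tf.multiply):

import tensorflow as tf

x = tf.constant([4.0, 9.0, -16.0])
y = tf.constant([2.0, 3.0, 4.0])

ops = {
    'add': tf.add(x, y),      # element-wise x + y
    'sub': tf.sub(x, y),      # element-wise x - y
    'mul': tf.mul(x, y),      # element-wise x * y
    'abs': tf.abs(x),         # |x|
    'pow': tf.pow(y, 2.0),    # y squared
}

with tf.Session() as sess:
    for name, op in sorted(ops.items()):
        print(name, sess.run(op))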
2. Tensor Transformations
2.1 Casting
tf.string_to_number(string_tensor, out_type=None, name=None): converts a string to a number
tf.to_double(x, name='ToDouble'): casts to 64-bit floating point
tf.to_float(x, name='ToFloat'): casts to 32-bit floating point
tf.to_int32(x, name='ToInt32'): casts to a 32-bit integer
tf.to_int64(x, name='ToInt64'): casts to a 64-bit integer
tf.cast(x, dtype, name=None): casts x (or x.values) to dtype
# tensor 'a' is [1.8, 2.2], dtype=tf.float
tf.cast(a, tf.int32) ==> [1, 2]  # dtype=tf.int32
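A minimal sketch of these casts in use, under the same older-API assumption as above:

import tensorflow as tf

a = tf.constant([1.8, 2.2], dtype=tf.float32)
b = tf.cast(a, tf.int32)       # truncates toward zero: [1, 2]
c = tf.to_float(b)             # back to float32: [1.0, 2.0]
n = tf.string_to_number(tf.constant(["3.5", "42"]))   # strings -> float32 by default

with tf.Session() as sess:
    print(sess.run([b, c, n]))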
2.2 Shapes and Shaping
tf.shape(input, name=None): returns the shape of the data
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
tf.size(input, name=None): returns the number of elements
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12
tf.rank(input, name=None): returns the rank (number of dimensions) of a tensor
Note: this rank is not the same as matrix rank; the rank of a tensor is the number of indices needed to uniquely address any of its elements, also called its "order", "degree", or "ndims"
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3
tf.reshape(tensor, shape, name=None): changes the shape of a tensor
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==>
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
# If one component of shape is -1, that dimension is inferred so the total size stays constant
# (the following example uses a different 't' with 18 elements)
# here -1 is inferred to be 9:
reshape(t, [2, -1]) ==>
[[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]]
tf.expand_dims(input, dim, name=None): inserts a dimension of size 1 into a tensor
# this operation requires -1 - input.dims() <= dim <= input.dims()
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]
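A minimal runnable sketch of the shape utilities above (same older-API assumption as before):

import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]]])

with tf.Session() as sess:
    print(sess.run(tf.shape(t)))                        # [2 2 3]
    print(sess.run(tf.size(t)))                         # 12
    print(sess.run(tf.rank(t)))                         # 3
    print(sess.run(tf.shape(tf.reshape(t, [3, -1]))))   # [3 4], the -1 is inferred
    print(sess.run(tf.shape(tf.expand_dims(t, 0))))     # [1 2 2 3]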
2.3 Slicing and Joining
tf.slice(input_, begin, size, name=None): slices a tensor, extracting part of input
inputs: may be a list, array, or tensor
begin: an n-element list; begin[i] is the offset (relative to 0) at which extraction starts along dimension i
size: an n-element list; size[i] is the number of elements to extract along dimension i
The following relations hold:
(1) i in [0, n]
(2) tf.shape(inputs)[0] = len(begin) = len(size)
(3) begin[i] >= 0, i.e. the start position in dimension i is non-negative
(4) begin[i] + size[i] <= tf.shape(inputs)[i]
# 'input' is
# [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3], [4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]], [[5, 5, 5]]]
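One of the slice calls above, evaluated end to end; a sketch under the same older-API assumption:

import tensorflow as tf

x = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

# begin=[1, 0, 0]: start at the 2nd outer block; size=[1, 2, 3]: take 1 block, 2 rows, 3 columns
s = tf.slice(x, [1, 0, 0], [1, 2, 3])

with tf.Session() as sess:
    print(sess.run(s))   # [[[3 3 3] [4 4 4]]]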
tf.split(split_dim, num_split, value, name='split'): splits a tensor into num_split tensors along one dimension
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]
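A short sketch of tf.split, assuming the pre-1.0 argument order tf.split(split_dim, num_split, value); in later releases the order changed to tf.split(value, num_or_size_splits, axis):

import tensorflow as tf

value = tf.zeros([5, 30])
split0, split1, split2 = tf.split(1, 3, value)   # 3 pieces along dimension 1

with tf.Session() as sess:
    print(sess.run(tf.shape(split0)))   # [ 5 10]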
tf.concat(concat_dim, values, name='concat'): concatenates tensors along one dimension (the rank does not change)
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
To concatenate tensors along a new axis (packing), use tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors]), which is equivalent to tf.pack(tensors, axis=axis)
tf.pack(values, axis=0, name='pack'): packs a list of rank-R tensors into one rank-(R+1) tensor (the rank increases by one)
# 'x' is [1, 4], 'y' is [2, 5], 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]   # packs along the first dimension
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
Equivalent to tf.pack([x, y, z]) = np.asarray([x, y, z])
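A sketch combining tf.concat and tf.pack, again with pre-1.0 signatures (tf.concat's arguments were swapped and tf.pack was renamed tf.stack in later releases):

import tensorflow as tf

t1 = tf.constant([[1, 2, 3], [4, 5, 6]])
t2 = tf.constant([[7, 8, 9], [10, 11, 12]])
x, y, z = tf.constant([1, 4]), tf.constant([2, 5]), tf.constant([3, 6])

with tf.Session() as sess:
    print(sess.run(tf.concat(0, [t1, t2])))        # shape [4, 3]
    print(sess.run(tf.concat(1, [t1, t2])))        # shape [2, 6]
    print(sess.run(tf.pack([x, y, z])))            # [[1 4] [2 5] [3 6]]
    print(sess.run(tf.pack([x, y, z], axis=1)))    # [[1 2 3] [4 5 6]]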
tf.reverse(tensor, dims, name=None): reverses a tensor along the given dimensions
dims is a list of bools whose size equals rank(tensor)
# tensor 't' is [[[[ 0, 1, 2, 3],
#                  [ 4, 5, 6, 7],
#                  [ 8, 9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]
# 'dims' is [False, False, False, True]
reverse(t, dims) ==>
[[[[ 3, 2, 1, 0],
   [ 7, 6, 5, 4],
   [11, 10, 9, 8]],
  [[15, 14, 13, 12],
   [19, 18, 17, 16],
   [23, 22, 21, 20]]]]
tf.transpose(a, perm=None, name='transpose'): permutes the dimensions of a tensor (axis transposition)
Reorders the tensor's dimensions according to the list perm; if perm is not given, it defaults to (n-1 ... 0), i.e. the dimensions are reversed
# 'x' is [[1 2 3], [4 5 6]]
tf.transpose(x) ==> [[1 4], [2 5], [3 6]]
# Equivalently:
tf.transpose(x, perm=[1, 0]) ==> [[1 4], [2 5], [3 6]]
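A sketch of tf.transpose together with the pre-1.0 boolean-mask form of tf.reverse:

import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])

with tf.Session() as sess:
    print(sess.run(tf.transpose(x)))                # [[1 4] [2 5] [3 6]]
    print(sess.run(tf.transpose(x, perm=[1, 0])))   # same result, perm made explicit
    print(sess.run(tf.reverse(x, [False, True])))   # reverse the last axis: [[3 2 1] [6 5 4]]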
tf.gather(params, indices, validate_indices=None, name=None): gathers the slices of params indicated by indices
tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None): one-hot encoding
indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0
off_value = 0.0
axis = -1
# Then output is [4 x 3]:
output =
[5.0 0.0 0.0]  // one_hot(0)
[0.0 0.0 5.0]  // one_hot(2)
[0.0 0.0 0.0]  // one_hot(-1)
[0.0 5.0 0.0]  // one_hot(1)
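A sketch running tf.one_hot and tf.gather together; the params values are made up purely for illustration:

import tensorflow as tf

indices = tf.constant([0, 2, -1, 1])
params = tf.constant([10, 20, 30, 40])   # hypothetical lookup table

with tf.Session() as sess:
    print(sess.run(tf.one_hot(indices, depth=3, on_value=5.0, off_value=0.0, axis=-1)))
    print(sess.run(tf.gather(params, [3, 0, 1])))   # [40 10 20]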
3. Matrix operations
tf.diag(diagonal, name=None): returns a diagonal tensor with the given diagonal values
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==>
[[1, 0, 0, 0]
 [0, 2, 0, 0]
 [0, 0, 3, 0]
 [0, 0, 0, 4]]
tf.diag_part(input, name=None): the inverse of the above, returns the diagonal of the input
tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None): matrix multiplication (can operate on batches of matrices)
tf.matrix_determinant(input, name=None): returns the determinant of a square matrix
tf.matrix_inverse(input, adjoint=None, name=None): returns the inverse of a square matrix; when adjoint is True, the inverse of the adjoint (conjugate transpose) of the input is computed
tf.cholesky(input, name=None): Cholesky decomposition of the input square matrix, i.e. factors a symmetric positive-definite matrix A into a lower-triangular matrix L times its transpose, A = LL^T
tf.matrix_solve(matrix, rhs, adjoint=None, name=None): solves a system of linear equations; matrix is square with shape [M, M], rhs has shape [M, K], and the output has shape [M, K]
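A small end-to-end sketch of the matrix routines above, using a symmetric positive-definite 2x2 example so that tf.cholesky is well defined:

import tensorflow as tf

a = tf.constant([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive-definite
rhs = tf.constant([[1.0], [2.0]])

with tf.Session() as sess:
    print(sess.run(tf.matmul(a, rhs)))            # [[8.] [8.]]
    print(sess.run(tf.matrix_determinant(a)))     # 4*3 - 2*2 = 8
    print(sess.run(tf.matrix_inverse(a)))
    print(sess.run(tf.cholesky(a)))               # lower-triangular L with a = L L^T
    print(sess.run(tf.matrix_solve(a, rhs)))      # x such that a x = rhs
    print(sess.run(tf.diag([1, 2, 3, 4])))        # 4x4 diagonal matrix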
4. Complex number operations
tf.complex(real, imag, name=None): combines two real tensors into a complex tensor
# tensor 'real' is [2.25, 3.25]
# tensor 'imag' is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
tf.complex_abs(x, name=None): computes the absolute value (magnitude) of a complex tensor
# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
tf.imag(input, name=None): returns the imaginary part of a complex tensor
tf.fft(input, name=None): computes the 1-D discrete Fourier transform; the input must be of type complex64
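A short sketch of the complex-number functions; tf.complex_abs and tf.fft are pre-1.0 names (later folded into tf.abs and tf.signal.fft):

import tensorflow as tf

real = tf.constant([2.25, 3.25])
imag = tf.constant([4.75, 5.75])
c = tf.complex(real, imag)              # complex64 tensor

with tf.Session() as sess:
    print(sess.run(c))                  # [2.25+4.75j  3.25+5.75j]
    print(sess.run(tf.complex_abs(c)))  # magnitudes: [5.2559...  6.6049...]
    print(sess.run(tf.imag(c)))         # [4.75  5.75]
    # note: some older builds register only a GPU kernel for FFT ops
    print(sess.run(tf.fft(c)))          # 1-D discrete Fourier transform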
5. Reduction
tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the sum of the tensor's elements, or sums along the axes given by reduction_indices
# 'x' is [[1, 1, 1]
#         [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the product of the tensor's elements, or products along the axes given by reduction_indices
tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the minimum of the tensor's elements
tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the maximum of the tensor's elements
tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the mean of the tensor's elements
tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the logical AND of the tensor's elements
# 'x' is
# [[True, True]
#  [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None): computes the logical OR of the tensor's elements
tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None): computes the element-wise sum of a list of tensors
# tensor 'a' is [[1, 2], [3, 4]]
# tensor 'b' is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
tf.cumsum(x, axis=0, exclusive=False, reverse=False, name=None): cumulative sum
tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
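A minimal sketch exercising a few of the reductions above (same older-API assumption; reduction_indices later became axis):

import tensorflow as tf

x = tf.constant([[1.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])
b = tf.constant([[True, True], [False, False]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x)))        # 6.0
    print(sess.run(tf.reduce_sum(x, 0)))     # [2. 2. 2.]
    print(sess.run(tf.reduce_mean(x, 1)))    # [1. 1.]
    print(sess.run(tf.reduce_all(b, 1)))     # [ True False]
    print(sess.run(tf.cumsum(tf.constant([1.0, 2.0, 3.0]))))   # [1. 3. 6.]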
6. Segmentation
data: a Tensor.
segment_ids: a Tensor; must be one of the following types:
