TensorFlow2 Advanced Operations (4): Tensor Sorting [sort (returns a new, sorted tensor), argsort (returns a tensor of the indices that would sort the input), top_k (returns the top-k elements), Top-k accuracy]


I. tf.sort: Returns a New, Sorted Tensor

tf.sort can also sort multi-dimensional Tensors. When sorting a multi-dimensional Tensor, the axis argument selects the dimension to sort along; axis defaults to -1, i.e. the last dimension is sorted.
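For example, with axis=0 each column is sorted instead of each row. A minimal sketch (the matrix values here are made up for illustration):

import tensorflow as tf

a = tf.constant([[3, 8, 1],
                 [6, 4, 7],
                 [2, 7, 5]])

print(tf.sort(a))          # axis=-1 (default): every row sorted ascending
print(tf.sort(a, axis=0))  # every column sorted ascending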

1. Sorting a Vector

import tensorflow as tf

a = tf.random.uniform([5], maxval=10, dtype=tf.int32)
print("a = ", a)
print("-" * 100)

b = tf.sort(a, direction='DESCENDING')
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 100)

Output:

a =  tf.Tensor([4 9 5 7 2], shape=(5,), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor([4 9 5 7 2], shape=(5,), dtype=int32)
--------------------------------------------------
b =  tf.Tensor([9 7 5 4 2], shape=(5,), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0

2. Sorting a Matrix

import tensorflow as tf

a = tf.random.uniform([3, 3], maxval=10, dtype=tf.int32)
print("a = ", a)
print("-" * 100)

b = tf.sort(a, direction='DESCENDING')
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 100)

Output:

a =  tf.Tensor(
[[3 8 8]
 [6 4 7]
 [2 7 5]], shape=(3, 3), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor(
[[3 8 8]
 [6 4 7]
 [2 7 5]], shape=(3, 3), dtype=int32)
--------------------------------------------------
b =  tf.Tensor(
[[8 8 3]
 [7 6 4]
 [7 5 2]], shape=(3, 3), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0

II. tf.argsort: Returns a Tensor of the Indices of the Sorted Order

1. Sorting a Vector

import tensorflow as tf

a = tf.constant([2, 0, 3, 4, 1])
print("a = ", a)
print("-" * 100)

b = tf.argsort(a, direction='DESCENDING')
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 100)

Output:

a =  tf.Tensor([2 0 3 4 1], shape=(5,), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor([2 0 3 4 1], shape=(5,), dtype=int32)
--------------------------------------------------
b =  tf.Tensor([3 2 0 4 1], shape=(5,), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0
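
The index tensor returned by tf.argsort can be fed back into tf.gather to recover the sorted values themselves. A minimal sketch reusing the vector from the example above:

import tensorflow as tf

a = tf.constant([2, 0, 3, 4, 1])
idx = tf.argsort(a, direction='DESCENDING')  # [3 2 0 4 1]
print(tf.gather(a, idx))                     # [4 3 2 1 0], same as tf.sort(a, direction='DESCENDING')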

2. Sorting a Matrix

import tensorflow as tf

a = tf.random.uniform([3, 3], maxval=10, dtype=tf.int32)
print("a = ", a)
print("-" * 100)

b = tf.argsort(a, direction='DESCENDING')
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 100)

Output:

a =  tf.Tensor(
[[5 0 2]
 [1 9 5]
 [5 6 7]], shape=(3, 3), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor(
[[5 0 2]
 [1 9 5]
 [5 6 7]], shape=(3, 3), dtype=int32)
--------------------------------------------------
b =  tf.Tensor(
[[0 2 1]
 [1 2 0]
 [2 1 0]], shape=(3, 3), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0
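
For matrices, the per-row indices from tf.argsort can be turned back into the row-wise sorted matrix with tf.gather and batch_dims=1. A minimal sketch (the matrix values are made up for illustration):

import tensorflow as tf

a = tf.constant([[5, 0, 2],
                 [1, 9, 5]])
idx = tf.argsort(a, direction='DESCENDING')  # per-row indices: [[0 2 1], [1 2 0]]
print(tf.gather(a, idx, batch_dims=1))       # [[5 2 0], [9 5 1]], same as tf.sort(a, direction='DESCENDING')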

III. top_k (tf.math.top_k()): Returns Only the Top-k Values and Their Indices

tf.math.top_k returns only the k largest entries of its input and their indices, packed into a named tuple with fields values and indices. Its signature:

tf.math.top_k(
    input,
    k=1,
    sorted=True,   # default True: the returned values are in descending order
    name=None
)

1. Top-k of a Vector

import tensorflow as tf

a = tf.random.uniform([5], maxval=10, dtype=tf.int32)
print("a = ", a)
print("-" * 100)

b = tf.math.top_k(input=a, k=3)
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 50)
print("b.values = ", b.values)
print("-" * 50)
print("b.indices = ", b.indices)
print("-" * 100)

Output:

a =  tf.Tensor([7 3 2 2 9], shape=(5,), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor([7 3 2 2 9], shape=(5,), dtype=int32)
--------------------------------------------------
b =  TopKV2(values=<tf.Tensor: shape=(3,), dtype=int32, numpy=array([9, 7, 3])>, indices=<tf.Tensor: shape=(3,), dtype=int32, numpy=array([4, 0, 1])>)
--------------------------------------------------
b.values =  tf.Tensor([9 7 3], shape=(3,), dtype=int32)
--------------------------------------------------
b.indices =  tf.Tensor([4 0 1], shape=(3,), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0
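
As a side note, when k equals the length of the vector, tf.math.top_k amounts to a full descending sort that also returns the corresponding indices. A minimal sketch (values made up for illustration):

import tensorflow as tf

a = tf.constant([7, 3, 2, 9])
full = tf.math.top_k(a, k=4)  # k equals the number of elements
print(full.values)            # [9 7 3 2], same as tf.sort(a, direction='DESCENDING')
print(full.indices)           # [3 0 1 2], same as tf.argsort(a, direction='DESCENDING')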

2. Top-k of a Matrix

For a matrix a, tf.math.top_k returns, for every row, the k largest values and their indices within that row.

import tensorflow as tf

a = tf.random.uniform([5, 5], maxval=10, dtype=tf.int32)
print("a = ", a)
print("-" * 100)

b = tf.math.top_k(input=a, k=3)
print("a = ", a)
print("-" * 50)
print("b = ", b)
print("-" * 50)
print("b.values = ", b.values)
print("-" * 50)
print("b.indices = ", b.indices)
print("-" * 100)

Output:

a =  tf.Tensor(
[[4 3 7 0 2]
 [2 0 6 8 5]
 [3 8 4 2 9]
 [1 5 9 0 6]
 [8 4 2 2 1]], shape=(5, 5), dtype=int32)
----------------------------------------------------------------------------------------------------
a =  tf.Tensor(
[[4 3 7 0 2]
 [2 0 6 8 5]
 [3 8 4 2 9]
 [1 5 9 0 6]
 [8 4 2 2 1]], shape=(5, 5), dtype=int32)
--------------------------------------------------
b =  TopKV2(values=<tf.Tensor: shape=(5, 3), dtype=int32, numpy=
array([[7, 4, 3],
       [8, 6, 5],
       [9, 8, 4],
       [9, 6, 5],
       [8, 4, 2]])>, indices=<tf.Tensor: shape=(5, 3), dtype=int32, numpy=
array([[2, 0, 1],
       [3, 2, 4],
       [4, 1, 2],
       [2, 4, 1],
       [0, 1, 2]])>)
--------------------------------------------------
b.values =  tf.Tensor(
[[7 4 3]
 [8 6 5]
 [9 8 4]
 [9 6 5]
 [8 4 2]], shape=(5, 3), dtype=int32)
--------------------------------------------------
b.indices =  tf.Tensor(
[[2 0 1]
 [3 2 4]
 [4 1 2]
 [2 4 1]
 [0 1 2]], shape=(5, 3), dtype=int32)
----------------------------------------------------------------------------------------------------

Process finished with exit code 0

IV. Top-k Accuracy

Application: classification accuracy

  • Suppose the predicted probabilities are pred = [0.1, 0.2, 0.3, 0.4], so the most likely class is 3.
  • Suppose the true label is 2.
  • tf.math.top_k(pred, 1) returns class 3 => top-1 accuracy is 0%.
  • tf.math.top_k(pred, 2) returns classes [3, 2] => if a hit anywhere in the top two predictions counts, the accuracy is 100% (see the sketch after this list).
  • ImageNet commonly reports Top-5 accuracy: with 1000 classes, a prediction counts as correct if the true label appears among the five highest-scoring classes.
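
A minimal sketch of the bullet-point example (the probabilities and the label 2 are taken from the list above; the variable names are illustrative):

import tensorflow as tf

pred = tf.constant([0.1, 0.2, 0.3, 0.4])  # predicted class probabilities
target = 2                                # true label

top1 = tf.math.top_k(pred, k=1).indices   # [3]    -> miss, top-1 accuracy 0%
top2 = tf.math.top_k(pred, k=2).indices   # [3, 2] -> hit,  top-2 accuracy 100%
print(target in top1.numpy(), target in top2.numpy())  # False True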

1. Top-5 Accuracy

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf

tf.random.set_seed(2467)


# output: the network's output scores
# target: the ground-truth labels
# topk: the k used for the top-k accuracy
def accuracy(output, target, topk):
    print("Computing accuracy:")
    print("\t", "-" * 50)
    batch_size = target.shape[0]
    print("\tbatch_size = ", batch_size)
    print("\t", "-" * 50)
    pred_top_k = tf.math.top_k(input=output, k=topk).indices  # indices of the top-k scores, i.e. the predicted classes
    print("\tpred_top_k = ", pred_top_k)
    print("\t", "-" * 50)
    pred_top_k = tf.transpose(pred_top_k, perm=[1, 0])  # transpose the index matrix to shape (k, batch_size)
    print("\tafter transpose: pred_top_k = ", pred_top_k)
    print("\t", "-" * 50)
    target_ = tf.broadcast_to(target, pred_top_k.shape)  # broadcast target to the same shape for comparison
    print("\ttarget_ = ", target_)
    print("\t", "-" * 50)

    equal_result = tf.equal(pred_top_k, target_)  # element-wise comparison of predictions and labels
    print("\tequal_result = ", equal_result)
    print("\t", "-" * 50)

    res = []
    equal_count_sum = tf.reduce_sum(tf.cast(equal_result, dtype=tf.float32))
    print("\tequal_count_sum = ", equal_count_sum)
    acc = float(equal_count_sum * (100.0 / batch_size))
    res.append(acc)

    return res


output = tf.random.normal([10, 6])  # 10 samples, 6 classes
output = tf.math.softmax(output, axis=1)  # softmax makes the 6 class probabilities of each sample sum to 1
print('output = \n', output.numpy())
print("-" * 200)

pred = tf.argmax(output, axis=1)
print('\nprob = ', pred.numpy())

# assume the ground-truth class labels are:
target = tf.constant([0, 2, 3, 4, 2, 4, 2, 3, 5, 5])  # labels in [0, 5] for the 10 samples
print('target = ', target.numpy())
print("-" * 200)

acc = accuracy(output, target, topk=5)  # call the accuracy function
print("-" * 200)

print('\ntop-5 acc:', acc)

Output:

output = 
 [[0.25310278 0.21715644 0.16043882 0.13088997 0.04334084 0.19507109]
 [0.05892418 0.04548917 0.00926314 0.14529602 0.66777605 0.07325139]
 [0.09742809 0.08304427 0.07460099 0.04067177 0.626185   0.07806987]
 [0.20478569 0.12294925 0.12010485 0.13751233 0.36418733 0.05046057]
 [0.11872064 0.31072393 0.12530337 0.15528883 0.21325873 0.07670453]
 [0.01519807 0.09672114 0.1460476  0.00934331 0.5649092  0.1677807 ]
 [0.04199061 0.18141054 0.06647632 0.6006175  0.03198383 0.07752118]
 [0.09226219 0.23460893 0.13022321 0.16295876 0.05362028 0.32632664]
 [0.07019574 0.08611772 0.10912607 0.10521299 0.2152082  0.4141393 ]
 [0.01882887 0.2659769  0.19122466 0.2410926  0.14920163 0.1336753 ]]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

prob =  [0 4 4 4 1 4 3 5 5 1]
target =  [0 2 3 4 2 4 2 3 5 5]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Computing accuracy:
	 --------------------------------------------------
	batch_size =  10
	 --------------------------------------------------
	pred_top_k =  tf.Tensor(
							[[0 1 5 2 3]
							 [4 3 5 0 1]
							 [4 0 1 5 2]
							 [4 0 3 1 2]
							 [1 4 3 2 0]
							 [4 5 2 1 0]
							 [3 1 5 2 0]
							 [5 1 3 2 0]
							 [5 4 2 3 1]
							 [1 3 2 4 5]], shape=(10, 5), dtype=int32)
	 --------------------------------------------------
	after transpose: pred_top_k =  tf.Tensor(
							[[0 4 4 4 1 4 3 5 5 1]
							 [1 3 0 0 4 5 1 1 4 3]
							 [5 5 1 3 3 2 5 3 2 2]
							 [2 0 5 1 2 1 2 2 3 4]
							 [3 1 2 2 0 0 0 0 1 5]], shape=(5, 10), dtype=int32)
	 --------------------------------------------------
	target_ =  tf.Tensor(
							[[0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]], shape=(5, 10), dtype=int32)
	 --------------------------------------------------
	equal_result =  tf.Tensor(
							[[ True False False  True False  True False False  True False]
							 [False False False False False False False False False False]
							 [False False False False False False False  True False False]
							 [False False False False  True False  True False False False]
							 [False False False False False False False False False  True]], shape=(5, 10), dtype=bool)
	 --------------------------------------------------
	equal_count_sum =  tf.Tensor(8.0, shape=(), dtype=float32)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

top-5 acc: [80.0]

Process finished with exit code 0

2. Top-k Accuracy

import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf

tf.random.set_seed(2467)


# output: the network's output scores
# target: the ground-truth labels
# topk: a tuple of the k values to report top-k accuracy for
def accuracy(output, target, topk=(1,)):
    print("Computing accuracy:")
    maxk = max(topk)
    print("\tmaxk = ", maxk)
    print("\t", "-" * 50)
    batch_size = target.shape[0]
    print("\tbatch_size = ", batch_size)
    print("\t", "-" * 50)
    pred_top_k = tf.math.top_k(input=output, k=maxk).indices  # indices of the top maxk scores, i.e. the predicted classes
    print("\tpred_top_k = ", pred_top_k)
    print("\t", "-" * 50)
    pred_top_k = tf.transpose(pred_top_k, perm=[1, 0])  # transpose the index matrix to shape (maxk, batch_size)
    print("\tafter transpose: pred_top_k = ", pred_top_k)
    print("\t", "-" * 50)
    target_ = tf.broadcast_to(target, pred_top_k.shape)  # broadcast target to the same shape as the index matrix for comparison
    print("\ttarget_ = ", target_)
    print("\t", "-" * 50)
    correct = tf.equal(pred_top_k, target_)  # element-wise comparison of predictions and labels
    print("\tcorrect = ", correct)
    print("\t", "-" * 50)
    res = []
    for k in topk:  # report the accuracy for each requested top-k value
        # correct[:k] keeps only the first k prediction rows
        correct_k = tf.reduce_sum(tf.cast(correct[:k], dtype=tf.float32))
        acc = float(correct_k * (100.0 / batch_size))
        res.append(acc)

    return res


output = tf.random.normal([10, 6])  # 10 samples, 6 classes
output = tf.math.softmax(output, axis=1)  # softmax makes the 6 class probabilities of each sample sum to 1
print('output = \n', output.numpy())
print("-" * 200)

pred = tf.argmax(output, axis=1)
print('\nprob = ', pred.numpy())

# assume the ground-truth class labels are:
target = tf.constant([0, 2, 3, 4, 2, 4, 2, 3, 5, 5])  # labels in [0, 5] for the 10 samples
print('target = ', target.numpy())
print("-" * 200)

acc = accuracy(output, target, topk=(1, 2, 3, 4, 5, 6))  # call the accuracy function
print("-" * 200)

print('\ntop-1-6 acc:', acc)

Output:

output = 
 [[0.25310278 0.21715644 0.16043882 0.13088997 0.04334084 0.19507109]
 [0.05892418 0.04548917 0.00926314 0.14529602 0.66777605 0.07325139]
 [0.09742809 0.08304427 0.07460099 0.04067177 0.626185   0.07806987]
 [0.20478569 0.12294925 0.12010485 0.13751233 0.36418733 0.05046057]
 [0.11872064 0.31072393 0.12530337 0.15528883 0.21325873 0.07670453]
 [0.01519807 0.09672114 0.1460476  0.00934331 0.5649092  0.1677807 ]
 [0.04199061 0.18141054 0.06647632 0.6006175  0.03198383 0.07752118]
 [0.09226219 0.23460893 0.13022321 0.16295876 0.05362028 0.32632664]
 [0.07019574 0.08611772 0.10912607 0.10521299 0.2152082  0.4141393 ]
 [0.01882887 0.2659769  0.19122466 0.2410926  0.14920163 0.1336753 ]]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

prob =  [0 4 4 4 1 4 3 5 5 1]
target =  [0 2 3 4 2 4 2 3 5 5]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Computing accuracy:
	maxk =  6
	 --------------------------------------------------
	batch_size =  10
	 --------------------------------------------------
	pred_top_k =  tf.Tensor(
							[[0 1 5 2 3 4]
							 [4 3 5 0 1 2]
							 [4 0 1 5 2 3]
							 [4 0 3 1 2 5]
							 [1 4 3 2 0 5]
							 [4 5 2 1 0 3]
							 [3 1 5 2 0 4]
							 [5 1 3 2 0 4]
							 [5 4 2 3 1 0]
							 [1 3 2 4 5 0]], shape=(10, 6), dtype=int32)
	 --------------------------------------------------
	after transpose: pred_top_k =  tf.Tensor(
							[[0 4 4 4 1 4 3 5 5 1]
							 [1 3 0 0 4 5 1 1 4 3]
							 [5 5 1 3 3 2 5 3 2 2]
							 [2 0 5 1 2 1 2 2 3 4]
							 [3 1 2 2 0 0 0 0 1 5]
							 [4 2 3 5 5 3 4 4 0 0]], shape=(6, 10), dtype=int32)
	 --------------------------------------------------
	target_ =  tf.Tensor(
							[[0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]
							 [0 2 3 4 2 4 2 3 5 5]], shape=(6, 10), dtype=int32)
	 --------------------------------------------------
	correct =  tf.Tensor(
							[[ True False False  True False  True False False  True False]
							 [False False False False False False False False False False]
							 [False False False False False False False  True False False]
							 [False False False False  True False  True False False False]
							 [False False False False False False False False False  True]
							 [False  True  True False False False False False False False]], shape=(6, 10), dtype=bool)
	 --------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

top-1-6 acc: [40.0, 40.0, 50.0, 70.0, 80.0, 100.0]

Process finished with exit code 0
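
For reference, Keras also ships a built-in metric that performs the same top-k check per sample. A minimal sketch with a small made-up batch (tf.keras.metrics.sparse_top_k_categorical_accuracy returns a per-sample 0/1 hit tensor, so its mean is the batch accuracy):

import tensorflow as tf

output = tf.constant([[0.10, 0.20, 0.30, 0.40],
                      [0.70, 0.15, 0.10, 0.05]])  # 2 samples, 4 classes
target = tf.constant([2, 3])                      # true labels

hits = tf.keras.metrics.sparse_top_k_categorical_accuracy(target, output, k=2)
print(float(tf.reduce_mean(hits) * 100.0))        # 50.0: sample 0 is a top-2 hit, sample 1 is not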


