
C#, Image Binarization (09): The Maximum Entropy Algorithm for Global Thresholding, with Source Code


The Max Entropy classifier is a probabilistic classifier that belongs to the family of exponential models. Unlike the Naive Bayes classifier discussed in the previous article, Max Entropy does not assume that the features are conditionally independent of one another.

Maximum entropy thresholding resembles the OTSU algorithm in that it assumes the image splits into two parts, background and foreground. Entropy measures information content: the more information an image carries, the larger its entropy. The maximum entropy algorithm therefore searches for the threshold at which the sum of the background entropy and the foreground entropy is largest. Each bar of the histogram records the frequency of the corresponding gray level in the image. The criterion can be stated compactly, as in the sketch below.
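
In symbols (this is Kapur's formulation of the criterion; the notation below is added for clarity and is not from the original post), let $h_i$ be the histogram count of gray level $i$ and $N$ the total number of pixels:

$$p_i = \frac{h_i}{N}, \qquad P_t = \sum_{i=0}^{t-1} p_i$$

$$H_b(t) = -\sum_{i=0}^{t-1} \frac{p_i}{P_t}\ln\frac{p_i}{P_t}, \qquad H_f(t) = -\sum_{i=t}^{255} \frac{p_i}{1-P_t}\ln\frac{p_i}{1-P_t}$$

$$t^* = \arg\max_{0 \le t \le 255}\,\bigl[H_b(t) + H_f(t)\bigr]$$

Note the renormalization by $P_t$ and $1-P_t$: each side's probabilities must be normalized within that side, otherwise $H_b(t)+H_f(t)$ collapses to the constant total entropy of the image and every threshold scores the same. The per-side sum in Current_Entropy below implements exactly this.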

Adapted from:

Seven common thresholding code samples (Otsu, maximum entropy, iterative, adaptive threshold, manual, iterative, basic global threshold): https://blog.csdn.net/xw20084898/article/details/17564957

For a survey of binarization algorithms, see:

C#, Image Binarization (01): A Survey of Binarization Algorithms and a Catalog of Twenty-Three Algorithms: https://blog.csdn.net/beijinghorn/article/details/128425225?spm=1001.2014.3001.5502

For the supporting functions, see:

C#, Image Binarization (02): C# Source Code for Some Basic Image-Processing Functions Used in Binarization: https://blog.csdn.net/beijinghorn/article/details/128425984?spm=1001.2014.3001.5502

using System;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Collections;
using System.Collections.Generic;
using System.Drawing.Imaging;

namespace Legalsoft.Truffer.ImageTools
{
    public static partial class BinarizationHelper
    {
        #region Gray-Scale Image Binarization, Global Algorithm: Maximum Entropy Threshold

        /// <summary>
        /// Maximum entropy thresholding:
        /// computes the entropy of one side (background or foreground)
        /// of the histogram for a given candidate threshold.
        /// Adapted from: https://blog.csdn.net/xw20084898/article/details/17564957
        /// </summary>
        /// <param name="histogram">256-bin gray-level histogram</param>
        /// <param name="cur_threshold">candidate threshold</param>
        /// <param name="state">false = background [0, threshold), true = foreground [threshold, 256)</param>
        /// <returns>entropy of the selected side</returns>
        private static double Current_Entropy(int[] histogram, int cur_threshold, bool state)
        {
            int start;
            int end;
            double cur_entropy = 0.0;
            if (state == false)
            {
                start = 0;
                end = cur_threshold;
            }
            else
            {
                start = cur_threshold;
                end = 256;
            }
            // Normalize by the pixel count of this side only (Kapur's method).
            // Normalizing by the whole-image sum would make the background and
            // foreground entropies add up to the same constant for every threshold.
            int sum = 0;
            for (int j = start; j < end; j++)
            {
                sum += histogram[j];
            }
            if (sum == 0)
            {
                return 0.0;
            }
            for (int j = start; j < end; j++)
            {
                if (histogram[j] == 0)
                {
                    continue;
                }
                double percentage = (double)histogram[j] / (double)sum;
                cur_entropy += -percentage * Math.Log(percentage);
            }
            return cur_entropy;
        }

        /// <summary>
        /// Maximum entropy thresholding:
        /// finds the threshold at which the sum of the background entropy
        /// and the foreground entropy is largest.
        /// </summary>
        /// <param name="data">gray-scale image as a 2D byte array</param>
        /// <returns>the maximum entropy threshold</returns>
        private static int Found_Maximum_Entropy(byte[,] data)
        {
            int[] histogram = Gray_Histogram(data);
            double maxentropy = -1.0;
            int max_index = -1;
            for (int i = 0; i < histogram.Length; i++)
            {
                double cur_entropy = Current_Entropy(histogram, i, true) + Current_Entropy(histogram, i, false);
                if (cur_entropy > maxentropy)
                {
                    maxentropy = cur_entropy;
                    max_index = i;
                }
            }
            return max_index;
        }

        /// <summary>
        /// Binarizes the gray-scale image in place
        /// at the maximum entropy threshold.
        /// </summary>
        /// <param name="data">gray-scale image as a 2D byte array</param>
        public static void Maximum_Entropy_Algorithm(byte[,] data)
        {
            int threshold = Found_Maximum_Entropy(data);
            Threshold_Algorithm(data, threshold);
        }

        #endregion
    }
}
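
A minimal usage sketch follows. Gray_Histogram and Threshold_Algorithm are the support functions from article (02) of this series; the bitmap-to-gray conversion and the file name lena.jpg below are hypothetical stand-ins added for illustration, not the series' actual helpers.

using System;
using System.Drawing;
using Legalsoft.Truffer.ImageTools;

public static class MaximumEntropyDemo
{
    public static void Main()
    {
        // Load an image and convert it to a 2D gray-scale array.
        // (Hypothetical conversion; the series provides its own helpers in article (02).)
        using (Bitmap bmp = new Bitmap("lena.jpg"))
        {
            byte[,] gray = new byte[bmp.Height, bmp.Width];
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color c = bmp.GetPixel(x, y);
                    // Integer luminance approximation of 0.299 R + 0.587 G + 0.114 B.
                    gray[y, x] = (byte)((c.R * 299 + c.G * 587 + c.B * 114) / 1000);
                }
            }
            // Binarize in place at the maximum entropy threshold.
            BinarizationHelper.Maximum_Entropy_Algorithm(gray);
        }
    }
}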

What is the Principle of Maximum Entropy?
The principle of maximum entropy is a model-building rule: among all probability distributions consistent with what is actually known (for example, a single moment such as the mean), select the most unpredictable one, the one with maximum entropy. The goal is to maximize uncertainty, or uninformativeness, in the prior assumption so that subjective bias is minimized in the model's results.

For example, if only the mean of a certain parameter is known (the average outcome over long-term trials), then a researcher could use almost any probability distribution to build the model. It is tempting to reach for the Normal distribution, since knowing the mean fills in one of its parameters. Under the maximum entropy principle, however, the researcher should choose the distribution that assumes the least beyond the known constraint: the one with the greatest entropy among all distributions matching the known mean, as worked out below.
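
A standard worked example (added here for concreteness; it is not part of the original text): among all densities on $[0,\infty)$ with a fixed mean $\mu$, the entropy maximizer is the exponential distribution.

$$\max_{f}\; -\int_0^\infty f(x)\ln f(x)\,dx \quad \text{s.t.} \quad \int_0^\infty f(x)\,dx = 1, \qquad \int_0^\infty x\,f(x)\,dx = \mu$$

$$\Longrightarrow\quad f^*(x) = \frac{1}{\mu}\,e^{-x/\mu}$$

The Normal distribution is the maximum entropy choice only under a different constraint set: fixed mean and fixed variance on the whole real line. Knowing the mean alone does not justify it.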

Common Probability Distribution Parameterizations in Machine Learning:
Whether a model is fitted under Bayesian or frequentist inference, it can yield vastly different results depending on which parametric distribution is employed.

Bernoulli distribution – one parameter (success probability p)
Beta distribution – two parameters (shapes α and β)
Binomial distribution – two parameters (trial count n, success probability p)
Exponential distribution – one parameter (rate λ)
Gamma distribution – two parameters (shape k, rate λ)
Geometric distribution – one parameter (success probability p)
Gaussian (normal) distribution – two parameters (mean μ, variance σ²)
Lognormal distribution – two parameters (μ and σ of the underlying normal)
Negative binomial distribution – two parameters (failure count r, success probability p)
Poisson distribution – one parameter (rate λ)