
Paper Digest | Solving the Variable Sized Bin Packing Problem (VSBPP) with Subset-Sum, Set Covering, and Genetic Algorithms (Java)


Reference: "Heuristics for the variable sized bin-packing problem", Mohamed Haouari, Mehdi Serairi, Computers & Operations Research, Volume 36, Issue 10, October 2009, Pages 2877-2884.

1 Problem Description

1.1 The Variable Sized Bin Packing Problem

The Variable Sized Bin Packing Problem (VSBPP) is an extension of the well-known one-dimensional bin packing problem. It is defined as follows:

There are m different bin types; each bin type i (i = 1, ..., m) has a fixed capacity Wi and a cost ci. The VSBPP asks to pack a set J of n items (item j has weight wj) into a series of bins so that the total cost is minimized.

We assume W1 < W2 < ... < Wm and c1 < c2 < ... < cm.
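To fix notation before moving on, here is a minimal sketch of how such an instance might be stored. It is our own illustration, not code from the post; the field names (including the original spelling bin_capacicty_W) simply mirror the data fields referenced by the code snippets below.

public class Data {
    public int bin_num_m;             // number of bin types m
    public int item_num_n;            // number of items n
    public double[] bin_capacicty_W;  // capacities W1 < W2 < ... < Wm (spelling kept from the original code)
    public double[] bin_cost_C;       // costs c1 < c2 < ... < cm
    public double[] item_weights_w;   // item weights w1, ..., wn

    public Data(double[] W, double[] c, double[] w) {
        this.bin_capacicty_W = W;
        this.bin_cost_C = c;
        this.item_weights_w = w;
        this.bin_num_m = W.length;
        this.item_num_n = w.length;
    }
}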

2 Constructing Solutions

Environment: IntelliJ IDEA + Windows 10. Language: Java.

2.1 The Subset-Sum Problem (SSP)

The Subset-Sum Problem (SSP) is defined by a set of positive integers S = {x1, x2, ..., xn} and a positive integer c; it asks whether there exists a subset S1 of S whose elements sum exactly to c.

Although the SSP is NP-hard, it can be solved efficiently in pseudo-polynomial time.

For more details, see the following reference:

https://xueshu.baidu.com/usercenter/paper/show?paperid=1h6a06s0x86b0jr0g87r0tk0wx737548&site=xueshu_se
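As an aside, the pseudo-polynomial claim can be made concrete with the classic dynamic program over achievable sums. The sketch below is our own illustration (it assumes integer weights; the code in this post instead solves the SSP exactly with CPLEX, as shown in the next section). It returns the largest achievable load not exceeding a capacity W, which is exactly the quantity the heuristics below need.

public class SubsetSumDP {
    // Pseudo-polynomial DP for the maximization version of the subset-sum problem.
    // reachable[s] == true means some subset of the weights sums to exactly s.
    // Runs in O(n * W) time, hence "pseudo-polynomial" in the capacity W.
    public static int maxLoad(int[] weights, int W) {
        boolean[] reachable = new boolean[W + 1];
        reachable[0] = true;
        for (int w : weights) {
            for (int s = W; s >= w; s--) {   // go downwards so each item is used at most once
                if (reachable[s - w]) reachable[s] = true;
            }
        }
        for (int s = W; s >= 0; s--) {
            if (reachable[s]) return s;      // largest achievable load not exceeding W
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(maxLoad(new int[]{2, 4, 5, 7, 9, 10}, 12)); // prints 12 (e.g. 2 + 10)
    }
}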

2.2 Constructive Heuristics

The four heuristics below are based on the subset-sum problem (SSP) introduced above and produce good initial solutions. We call them SSP1, SSP2, SSP3, and SSP4.

2.2.1 SSP1

SSP1 can be viewed as an extension of the subset-sum algorithm for the one-dimensional bin packing problem.

First, for each bin type i we introduce a maximum load zi. Note the difference from the bin capacity Wi: zi also depends on the current item set J.
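In formula form (reconstructed here from the description, since the original figure is not reproduced), the maximum load of bin type i with respect to the current unpacked item set J is

\[ z_i = \max\Big\{ \sum_{j \in S} w_j \;:\; S \subseteq J,\ \sum_{j \in S} w_j \le W_i \Big\}. \]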

I am sure you have already got the idea.

Computing zi requires solving the SSP introduced above; in the code this is done by the following function:

private ArrayList<Item> solveSSP(int i, ArrayList<Item> unpacked_itemsJ_, double[] maximum_load_z) {
    ArrayList<Item> selected_items_at_this_iteration = new ArrayList<>();
    try {
        IloCplex cplex = new IloCplex();
        cplex.setOut(null); //turn off display
        IloLinearNumExpr obj = cplex.linearNumExpr();
        String[] var_names = new String[unpacked_itemsJ_.size()];
        for (int j = 0; j < var_names.length; j++) {
            var_names[j] = "x" + unpacked_itemsJ_.get(j).index;
        }
        IloNumVar[] x = cplex.boolVarArray(unpacked_itemsJ_.size(), var_names);
        for (int j = 0; j < x.length; j++) {
            obj.addTerm(data.item_weights_w[unpacked_itemsJ_.get(j).index], x[j]);
        }
        cplex.addMaximize(obj);
        //subject to (2)
        IloLinearNumExpr expr = cplex.linearNumExpr();
        for (int j = 0; j < x.length; j++) {
            expr.addTerm(data.item_weights_w[unpacked_itemsJ_.get(j).index], x[j]);
        }
        cplex.addLe(expr, data.bin_capacicty_W[i]);
        if (cplex.solve()) {
            maximum_load_z[i] = cplex.getObjValue();
            for (int j = 0; j < x.length; j++) {
                if (cplex.getValue(x[j]) > 0.995) {
                    selected_items_at_this_iteration.add(unpacked_itemsJ_.get(j));
                }
            }
        } else {
            System.err.println("Failed to solve SSP for bin " + i);
            System.exit(-1);
        }
        cplex.clearModel();
        cplex.endModel();
        cplex.end();
    } catch (IloException e) {
        e.printStackTrace();
    }
    return selected_items_at_this_iteration;
}

This code uses the CPLEX library. CPLEX is designed to solve large-scale linear programming (LP), quadratic programming (QP), quadratically constrained programming (QCQP), and second-order cone programming (SOCP) problems, as well as the corresponding mixed-integer programming (MIP) variants. For an introduction, see our earlier post:

CPLEX: introduction, download and installation, Java environment configuration, and a brief API overview

The pseudocode of SSP1 is given in the paper; there, Si denotes the set of items selected when computing zi.

After the maximum load zi and the corresponding item set Si have been computed for every bin type, select the bin type i* with the smallest cost-to-maximum-load ratio (ci/zi), pack the corresponding item set Si* into a bin of that type, update the item set J, and repeat until all items are packed.

This yields a fairly good initial solution.

The code is as follows:

private void SSP1() {
    System.out.println("Starting SSP1.");
    //step 0: initialization
    ArrayList<Item> unpacked_itemsJ_ = new ArrayList<>(data.item_num_n);
    double[] maximum_load_z = new double[data.bin_num_m];
    ArrayList<Bin> packed_bins = new ArrayList<>();
    for (int i = 0; i < data.item_num_n; i++) {
        unpacked_itemsJ_.add(new Item(i));
    }
    //iterate until the unpacked item set J_ is empty
    while (!unpacked_itemsJ_.isEmpty()) {
        HashMap<Integer, ArrayList<Item>> selected_items_S = new HashMap<>();
        double minimal_ratio_i = Double.POSITIVE_INFINITY;
        int winner_index = -1;
        //step 1: compute maximum_load_z and selected_items_S for each bin
        for (int i = 0; i < data.bin_num_m; i++) {
            maximum_load_z[i] = 0.0000000001;
            selected_items_S.put(i,
                    solveSSP(i, unpacked_itemsJ_, maximum_load_z));
            if (maximum_load_z[i] == 0.0) maximum_load_z[i] = 0.0000000001;
            double ratio = data.bin_cost_C[i] / maximum_load_z[i];
            //step 2: compute minimal ratio i*
            if (ratio < minimal_ratio_i) {
                minimal_ratio_i = ratio;
                winner_index = i;
            }
        }
        //step 3: load the items of i* into the bin at winner_index
        packed_bins.add(new Bin(winner_index, selected_items_S.get(winner_index)));
        //step 4: remove the packed items from the unpacked item set J_
        for (Item item : selected_items_S.get(winner_index)) {
            unpacked_itemsJ_.remove(item);
        }
    }
    //store solutions
    //the first solution is #0
    Solution solution = new Solution(solution_counter, packed_bins);
    solutions.add(solution);
    solutions_GA.add(solution);
//        System.out.println(solutions.get(solution_counter));
}

2.2.2 SSP2

SSP2 is defined on top of SSP1.

Specifically, at the start of each iteration, only the bin types that can hold the largest unpacked item are considered.

In terms of the pseudocode, this amounts to inserting the check Wi >= max(wj) into Step 1.

private void SSP2() {        System.out.println("Starting SSP2.");        //step 0: initialization        ArrayList<Item> unpacked_itemsJ_ = new ArrayList<>(data.item_num_n);        double[] maximum_load_z = new double[data.bin_num_m];        ArrayList<Bin> packed_bins = new ArrayList<>();        for (int i = 0; i < data.item_num_n; i++) {            unpacked_itemsJ_.add(new Item(i));        }        //iterate until the unpacked item set J_ is empty        while (!unpacked_itemsJ_.isEmpty()) {            HashMap<Integer, ArrayList<Item>> selected_items_S = new HashMap<>();            double minimal_ratio_i = Double.POSITIVE_INFINITY;            int winner_index = -1;            //only those bins that can include the largest unpacked items are considered            double max_weight = Double.NEGATIVE_INFINITY;            for (Item item : unpacked_itemsJ_) {                double weight = item.weight;                if (weight > max_weight) max_weight = weight;            }            //step 1: compute maximum_load_z and selected_items_S for each bin            for (int i = 0; i < data.bin_num_m; i++) {                maximum_load_z[i] = 0.0000000001;                if (data.bin_capacicty_W[i] < max_weight) continue;                selected_items_S.put(i,                        solveSSP(i, unpacked_itemsJ_,maximum_load_z));                if(maximum_load_z[i] == 0.0) maximum_load_z[i] = 0.0000000001;                double ratio = data.bin_cost_C[i] / maximum_load_z[i];                //step 2: compute minimal ratio i*                if (ratio < minimal_ratio_i) {                    minimal_ratio_i = ratio;                    winner_index = i;                }            }            //step 3: load the items of i* into the bin at winner_index            packed_bins.add(new Bin(winner_index, selected_items_S.get(winner_index)));            //step 4: remove the packed items from the unpacked item set J_            for (Item item : selected_items_S.get(winner_index)) {                unpacked_itemsJ_.remove(item);            }        }        //store solutions        solution_counter++;        Solution solution = new Solution(solution_counter, packed_bins);        solutions.add(solution);        solutions_GA.add(solution);//        System.out.println(solutions.get(solution_counter));    }

2.2.3 SSP3

SSP3 is defined on top of SSP2.

SSP3 is stricter still: at the start of each iteration, it only considers bins whose selected item set Si contains the largest unpacked item.

In terms of the pseudocode, this is again an extra condition inserted into Step 1, only a stricter one; since the pseudocode is almost identical, we omit it here.

The code is as follows:

  private void SSP3() {        System.out.println("Starting SSP3.");        //step 0: initialization        HashMap<Integer, Item> unpacked_itemsJ_ = new HashMap<>(data.item_num_n);        double[] maximum_load_z = new double[data.bin_num_m];        ArrayList<Bin> packed_bins = new ArrayList<>();        for (int i = 0; i < data.item_num_n; i++) {            unpacked_itemsJ_.put(i, new Item(i));        }        //iterate until the unpacked item set J_ is empty        while (!unpacked_itemsJ_.isEmpty()) {            HashMap<Integer, ArrayList<Item>> selected_items_S = new HashMap<>();            double minimal_ratio_i = Double.POSITIVE_INFINITY;            int winner_index = -1;            //only those bins that can include the largest unpacked items are considered            //at each iteration the largest unpacked item is forced to be included in the set of items            //that are packed into the selected bin            double max_weight = Double.NEGATIVE_INFINITY;            int heaviest_item_index = -1;            for (Item item : unpacked_itemsJ_.values()) {                double weight = item.weight;                if (weight > max_weight) {                    max_weight = weight;                    heaviest_item_index = item.index;                }            }            //step 1: compute maximum_load_z and selected_items_S for each bin            for (int i = 0; i < data.bin_num_m; i++) {                maximum_load_z[i] = 0.00000000001;                if (data.bin_capacicty_W[i] < max_weight) continue;                selected_items_S.put(i,                        solveSSP3(i, heaviest_item_index, max_weight, unpacked_itemsJ_,maximum_load_z));                if(maximum_load_z[i] == 0.0) maximum_load_z[i] = 0.0000000001;                double ratio = data.bin_cost_C[i] / maximum_load_z[i];                //step 2: compute minimal ratio i*                if (ratio < minimal_ratio_i) {                    minimal_ratio_i = ratio;                    winner_index = i;                }            }            //step 3: load the items of i* into the bin at winner_index            packed_bins.add(new Bin(winner_index, selected_items_S.get(winner_index)));            //step 4: remove the packed items from the unpacked item set J_            for (Item item : selected_items_S.get(winner_index)) {                unpacked_itemsJ_.remove(item.index);            }        }        //store solutions        solution_counter++;        Solution solution = new Solution(solution_counter, packed_bins);        solutions.add(solution);        solutions_GA.add(solution);//        System.out.println(solutions.get(solution_counter));    }
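The function solveSSP3 called above is not shown in the post. Based on the description, it is the same CPLEX subset-sum model as solveSSP, except that the heaviest unpacked item is forced into the selected set. The following is only our reconstruction under that assumption, not the original code:

// Our reconstruction of solveSSP3 (not the original code): same model as solveSSP,
// but the heaviest unpacked item is forced into the bin.
private ArrayList<Item> solveSSP3(int i, int heaviest_item_index, double max_weight,
                                  HashMap<Integer, Item> unpacked_itemsJ_, double[] maximum_load_z) {
    ArrayList<Item> selected = new ArrayList<>();
    selected.add(unpacked_itemsJ_.get(heaviest_item_index)); // the heaviest item is always packed
    try {
        IloCplex cplex = new IloCplex();
        cplex.setOut(null);
        // decision variables only for the remaining items
        ArrayList<Item> others = new ArrayList<>();
        for (Item item : unpacked_itemsJ_.values()) {
            if (item.index != heaviest_item_index) others.add(item);
        }
        IloNumVar[] x = cplex.boolVarArray(others.size());
        IloLinearNumExpr obj = cplex.linearNumExpr();
        IloLinearNumExpr load = cplex.linearNumExpr();
        for (int j = 0; j < x.length; j++) {
            obj.addTerm(data.item_weights_w[others.get(j).index], x[j]);
            load.addTerm(data.item_weights_w[others.get(j).index], x[j]);
        }
        cplex.addMaximize(obj);
        // remaining capacity after the heaviest item has been placed
        cplex.addLe(load, data.bin_capacicty_W[i] - max_weight);
        if (cplex.solve()) {
            maximum_load_z[i] = cplex.getObjValue() + max_weight; // total load includes the forced item
            for (int j = 0; j < x.length; j++) {
                if (cplex.getValue(x[j]) > 0.995) selected.add(others.get(j));
            }
        }
        cplex.end();
    } catch (IloException e) {
        e.printStackTrace();
    }
    return selected;
}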

2.2.4 SSP4

SSP4 differs considerably from the previous three: its purpose is to produce m feasible solutions, which is why it contains one more loop level than the others.

The pseudocode is given in the paper; briefly, the i-th outer iteration works as follows:

0. Initialize the starting bin type k = i.

1. Select the items whose weight wj does not exceed the capacity Wk of bin type k; they form the set Jk.

2. Solve a one-dimensional bin packing problem on Jk (one-dimensional meaning the bin capacity is fixed, here at Wk), obtaining item sets Sk.

3. Check whether a bin of smaller capacity can still hold each Sk; if so, switch to that smaller bin.

4. Update the item set J by removing the packed items.

5. If unpacked items remain, move on to the next bin type and repeat from step 1.

6. Once all items are packed, store the resulting feasible solution Πi.

After all outer iterations have finished, there are m feasible solutions, and the best of them is selected.

The code is as follows:

* private void SSP4() {        System.out.println("Starting SSP4.");        ArrayList<ArrayList<Bin>> each_bin_packing = new ArrayList<>(data.bin_num_m);        for (int i = 0; i < data.bin_num_m; i++) {            //0: set J_ to J (all items are unpacked)            ArrayList<Item> unpacked_itemsJ_ = new ArrayList<>(data.item_num_n);            //store a solution at this iteration            ArrayList<Bin> packed_bins = new ArrayList<>();            for (int j = 0; j < data.item_num_n; j++) {                unpacked_itemsJ_.add(new Item(j, data.item_weights_w[j]));            }            int k = i;            boolean isJEmpty = false;            //step 1:            while (!isJEmpty) {                ArrayList<Bin> packed_bins_at_this_iteration = new ArrayList<>();                //set Jk - items that can be packed into bin of type k                ArrayList<Item> items_fit_for_bin_Jk = new ArrayList<>();                for (Item item : unpacked_itemsJ_) {                    if (item.weight <= data.bin_capacicty_W[k]) {                        items_fit_for_bin_Jk.add(item);                    }                }                //store items in non-increasing order                Queue<Item> itemQueue = new PriorityQueue<>(Collections.reverseOrder());                for (Item item : items_fit_for_bin_Jk) {                    itemQueue.offer(item);                }                //step 2: load items of maximum weight not exceeding the capacity                while (!itemQueue.isEmpty()) {                    double total_weight = 0;                    ArrayList<Item> selected_items_at_this_iteration = new ArrayList<>();                    boolean canPackMore = true;                    while (canPackMore) {                        Item item = itemQueue.poll();                        total_weight += item.weight;                        selected_items_at_this_iteration.add(item);                        if (itemQueue.size() >= 1) {                            Item next = itemQueue.peek();                            if (total_weight + next.weight > data.bin_capacicty_W[k]) canPackMore = false;                        } else canPackMore = false;                    }                    packed_bins_at_this_iteration.add(new Bin(k, selected_items_at_this_iteration));                }                //step 3: assign to each bin the smallest feasible capacity                if (k != 0) { //if k'th bin is 0, then there does not exist a smaller feasible packing                    for (Bin bin : packed_bins_at_this_iteration) {                        ArrayList<Item> items = bin.selected_items;                        double total_weight = 0;                        for (Item item : items) {                            total_weight += item.weight;                        }                        int smallest_bin_index = k;                        for (int j = data.bin_num_m - 1; j > -1; j--) {                            if (j == k) continue; //no need to check for the same bin                            if (total_weight > data.bin_capacicty_W[j]) break; //a smaller bin can not be found                            else smallest_bin_index = j;                        }                        //check if smallest bin index has been updated                        if (smallest_bin_index != k) bin.setBinIndex(smallest_bin_index);                    }                }                //step 4: update the unpacked item set                for (Bin bin : packed_bins_at_this_iteration) {                    for (Item item : 
bin.selected_items) {                        unpacked_itemsJ_.remove(item);                    }                }                //step 5: test if the set is empty                if (!unpacked_itemsJ_.isEmpty()) {                    k++;                    isJEmpty = false; //go to step 1                } else {                    isJEmpty = true; //go to step 6                }                for(Bin bin : packed_bins_at_this_iteration){                    packed_bins.add(bin);                }            }            //step 6: store the obtained solution            each_bin_packing.add(packed_bins);        }        //select least cost packing from the set each_bin_packing        double[] costs = new double[data.bin_num_m];        for (int i = 0; i < each_bin_packing.size(); i++) {            double total_cost = 0;            for (Bin bin : each_bin_packing.get(i)) {                total_cost += bin.bin_cost;            }            costs[i] = total_cost;        }        double least_cost = Double.POSITIVE_INFINITY;        int least_cost_bin_index = -1;        for (int i = 0; i < costs.length; i++) {            if (costs[i] < least_cost) {                least_cost = costs[i];                least_cost_bin_index = i;            }        }        //store the best obtained solution        solution_counter++;        Solution solution = new Solution(solution_counter, each_bin_packing.get(least_cost_bin_index));        solutions.add(solution);        solutions_GA.add(solution);//        System.out.println(solutions.get(solution_counter));    }

3 A Set-Covering-Based Heuristic

Before introducing the set-covering heuristic, let us first look at the set-partitioning formulation; the version presented below is tailored to the VSBPP.

3.1 The Set-Partitioning Formulation

For each bin type i, define Πi as the set of feasible packings of a bin of that type. Each feasible packing k ∈ Πi comes with a binary vector aik = (aik1, ..., aikn) (aikj indicates whether packing k contains item j) and a binary decision variable xik (indicating whether packing k is part of the solution). The formulation is as follows:
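Reconstructed here from the description above (the original figure is not reproduced; it should correspond to formulation (4)-(6) in the paper):

\begin{align}
\min \quad & \sum_{i=1}^{m} \sum_{k \in \Pi_i} c_i\, x_{ik} && (4) \\
\text{s.t.} \quad & \sum_{i=1}^{m} \sum_{k \in \Pi_i} a_{ikj}\, x_{ik} = 1, \qquad j = 1, \dots, n && (5) \\
& x_{ik} \in \{0, 1\}, \qquad i = 1, \dots, m,\ k \in \Pi_i && (6)
\end{align}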

The objective function (4) minimizes the total bin cost;

constraint (5) ensures that every item is packed exactly once;

constraint (6) states that the decision variables xik are binary.

However, the linear programming relaxation of the set-partitioning problem is usually difficult to solve, so for computational convenience we consider a set-covering formulation instead.

One difficulty remains: both the set-partitioning and the set-covering formulation require an enormous number of columns (feasible packings). To overcome this, a two-phase heuristic is used.

3.2 A Two-Phase Heuristic Based on Set Covering (SC)

In the first phase, for each bin type i we generate a set of good feasible packings, which we denote Π'i. Clearly this set is a subset of the Πi defined above, i.e. Π'i ⊆ Πi.

Ideally this set should not be too large (so that the set-covering problem can be solved efficiently), yet it should contain high-quality packings (these are what a high-quality near-optimal solution is built from). To achieve this, we reuse SSP4 from above, with a few modifications.

In step 2 of SSP4, instead of solving a one-dimensional bin packing problem, we use a randomized FFD (First-Fit Decreasing): at each step, one of the two heaviest unpacked items is chosen at random and assigned to the first open bin that can still hold it; if no open bin fits, a new bin is opened. This is repeated until all items are packed.

As careful readers will remember, the original SSP4 produces m feasible solutions; thanks to the randomness of the randomized FFD, each run produces a different set of m solutions. We therefore repeat the procedure iter times and obtain iter × m feasible solutions, from which a large pool of good feasible packings can be extracted.

The code for this part is as follows:

* private void probabilisticHeuristic() {        System.out.println("Starting Probabilistic Heuristic.");        for (int z = 0; z < iter; z++) {            ArrayList<ArrayList<Bin>> each_bin_packing = new ArrayList<>(data.bin_num_m);            for (int i = 0; i < data.bin_num_m; i++) {                //0: set J_ to J (all items are unpacked)                ArrayList<Item> unpacked_itemsJ_ = new ArrayList<>(data.item_num_n);                //store a solution at this iteration                ArrayList<Bin> packed_bins = new ArrayList<>();                for (int j = 0; j < data.item_num_n; j++) {                    unpacked_itemsJ_.add(new Item(j));                }                int k = i;                boolean isJEmpty = false;                //step 1:                while (!isJEmpty) {                    ArrayList<Bin> packed_bins_at_this_iteration = new ArrayList<>();                    //set Jk - items that can be packed into a bin of type k                    ArrayList<Item> items_fit_for_bin_Jk = new ArrayList<>();                    for (Item item : unpacked_itemsJ_) {                        if (item.weight <= data.bin_capacicty_W[k]) {                            items_fit_for_bin_Jk.add(item);                        }                    }                    //store items in non-increasing order                    Queue<Item> itemQueue = new PriorityQueue<>(Collections.reverseOrder());                    for (Item item : items_fit_for_bin_Jk) {                        itemQueue.offer(item);                    }                    ArrayList<Item> selected_items_at_this_iteration = new ArrayList<>();                    double total_weight = 0;                    //step 2: one item is randomly selected out of the two heaviest unpacked items                    //and is assigned to the first bin where it fits (or else into an empty bin)                    while (!itemQueue.isEmpty()) {                        boolean canPackMore = true;                        Item selected_item = null;                        while (canPackMore) {                            Item item1 = itemQueue.poll();                            Item item2 = null;                            selected_item = item1;                            if (itemQueue.size() >= 1) {                                double random = ThreadLocalRandom.current().nextDouble(0, 101.0);                                //select item 2 if random > 0.5                                if (random > 0.50) {                                    item2 = itemQueue.poll();                                    selected_item = item2;                                    itemQueue.offer(item1);                                }                            }                            total_weight += selected_item.weight;                            if (total_weight > data.bin_capacicty_W[k]) {                                canPackMore = false;                                //pack the previously selected items into a bin                                packed_bins_at_this_iteration.add(new Bin(k, selected_items_at_this_iteration));                                //selected item that does not fit goes to the next empty bin                                selected_items_at_this_iteration = new ArrayList<>();                                selected_items_at_this_iteration.add(selected_item);                                total_weight = selected_item.weight;                            } else {                                
selected_items_at_this_iteration.add(selected_item);                            }                            if (itemQueue.isEmpty()) {                                canPackMore = false;                                packed_bins_at_this_iteration.add(new Bin(k, selected_items_at_this_iteration));                            }                        }                    }                    //step 3: assign to each bin the smallest feasible capacity                    if (k != 0) { //if k'th bin is 0, then there does not exist a smaller feasible packing                        for (Bin bin : packed_bins_at_this_iteration) {                            ArrayList<Item> items = bin.selected_items;                            total_weight = 0;                            for (Item item : items) {                                total_weight += item.weight;                            }                            int smallest_bin_index = k;                            for (int j = data.bin_num_m - 1; j > -1; j--) {                                if (j == k) continue; //no need to check for the same bin                                if (total_weight > data.bin_capacicty_W[j]) break; //a smaller bin can not be found                                else smallest_bin_index = j;                            }                            //check if smallest bin index has been updated                            if (smallest_bin_index != k) bin.setBinIndex(smallest_bin_index);                        }                    }                    //step 4: update the unpacked item set                    for (Bin bin : packed_bins_at_this_iteration) {                        for (Item item : bin.selected_items) {                            unpacked_itemsJ_.remove(item);                        }                    }                    //step 5: test if the set is empty                    if (!unpacked_itemsJ_.isEmpty()) {                        k++;                        isJEmpty = false; //go to step 1                    } else {                        isJEmpty = true; //go to step 6                    }                    for(Bin bin : packed_bins_at_this_iteration){                        packed_bins.add(bin);                    }                }                //step 6: store the obtained solution                each_bin_packing.add(packed_bins);            }            //store the best obtained solutions            for (int i = 0; i < each_bin_packing.size(); i++) {                solution_counter++;                solutions.add(new Solution(solution_counter, each_bin_packing.get(i)));//                System.out.println(solutions.get(solution_counter));            }        }    }

Attentive readers will have noticed that these feasible packings are very likely to contain duplicates (in the paper, the solutions produced by SSP1-SSP4 are added to the pool as well, which creates even more duplicates). To avoid unnecessary computation, duplicate packings should be removed before the second phase. How can this be done efficiently? Think about it before reading on.
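One common way to do this (our own illustration; the post's code achieves the same effect with a label HashMap in generateHashVectors further below) is to build a canonical key per packing, for example the bin cost plus the sorted item indices, and keep only one packing per key:

// Sketch only: deduplicate packings using a canonical key
// (assumes the Solution/Bin/Item classes used elsewhere in this post).
private Collection<Bin> removeDuplicatePackings(List<Solution> solutions) {
    HashMap<String, Bin> unique = new HashMap<>();
    for (Solution solution : solutions) {
        for (Bin bin : solution.bins) {
            List<Integer> idx = new ArrayList<>();
            for (Item item : bin.selected_items) idx.add(item.index);
            Collections.sort(idx);
            String key = bin.bin_cost + ":" + idx;  // cost identifies the bin type, idx the packed items
            unique.putIfAbsent(key, bin);           // keep the first packing seen for each key
        }
    }
    return unique.values();
}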

In the second phase, solving the set-covering formulation (7)-(9) below yields a near-optimal solution.
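Reconstructed from the description (the original figure is not reproduced), the set-covering formulation over the generated pools Π'i reads:

\begin{align}
\min \quad & \sum_{i=1}^{m} \sum_{k \in \Pi_i'} c_i\, x_{ik} && (7) \\
\text{s.t.} \quad & \sum_{i=1}^{m} \sum_{k \in \Pi_i'} a_{ikj}\, x_{ik} \ge 1, \qquad j = 1, \dots, n && (8) \\
& x_{ik} \in \{0, 1\}, \qquad i = 1, \dots, m,\ k \in \Pi_i' && (9)
\end{align}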

Does it look familiar? Compare it with the formulation (4)-(6) above: constraint (8) is a relaxation of (5), with the equality = replaced by >=. This is precisely the difference between set covering and set partitioning.

The code of the two-phase heuristic is as follows:

private void twoPhaseHeuristic() {//        generateNewVectors();        generateHashVectors();        setCoveringProblem();    }private void generateHashVectors() {    HashMap<ArrayList<Double>,Bin> label_hashMap = new HashMap<>();    for(Solution solution : solutions){        for(Bin bin : solution.bins){            ArrayList<Double> labels = new ArrayList<>();            double[] label = bin.getLabel();//label是每个bin的标识值,相等就认为是同一个            for(Double d : label) labels.add(d);            label_hashMap.put(labels,bin);//这步就是在去重        }    }    for(Bin bin : label_hashMap.values()){        bin.addIncidence();//使标识item出现的数组的对应位为1    }    //force the best solution columns to be included in the incidence matrix    for(Bin bin : solutions.get(best_initial_solution_index).bins){        bin.addIncidence();    }}private void setCoveringProblem() {        try {            IloCplex cplex = new IloCplex();            IloLinearNumExpr obj = cplex.linearNumExpr();            for(Solution solution : solutions){                int uniqueID = 1;                for(Bin bin : solution.bins){                    bin.setVar(cplex, solution.solutionID,uniqueID);                    uniqueID++;                }            }            for(Solution solution : solutions){                for(Bin bin : solution.bins){                    obj.addTerm(bin.bin_cost,bin.getVar());                }            }            cplex.addMinimize(obj);            //subject to (8)            for (int j = 0; j < data.item_num_n; j++) {                IloLinearNumExpr expr = cplex.linearNumExpr();                for(Solution solution : solutions){                    for(Bin bin : solution.bins){                        for(Item item : bin.selected_items){                            if(item.index == j){                                expr.addTerm(bin.getIncidence()[item.index],bin.getVar());                            }                        }                    }                }                cplex.addEq(expr,1.0,"Item"+j);//                cplex.addGe(expr,1.0);//TODO: Ge sometimes produces infeasible solutions            }            cplex.setParam(IloCplex.DoubleParam.TiLim,T_max);            cplex.setOut(null); //turn off display            if(cplex.solve()){                ArrayList<Bin> selected_bins = new ArrayList<>();                for(Solution solution : solutions){                    for(Bin bin : solution.bins){                        if(cplex.getValue(bin.getVar()) > 0.995){                            selected_bins.add(bin);                        }                    }                }                Solution solution_SCP = new Solution(solution_counter++,selected_bins);                System.out.println(solution_SCP);                this.best_cost_cplex = cplex.getObjValue();                System.out.println("SCP Total Cost: " + best_cost_cplex);                System.out.println("vs");                System.out.println("Initial cost : " + solutions.get(best_initial_solution_index).calculateCost());            } else {                System.err.println("Failed to solve the Set-Covering Problem!");            }//            String output_path = "out" + File.separator + "SCP";//            if (!Files.exists(Path.of(output_path))) {//                try {//                    Files.createDirectory(Path.of(output_path));//                } catch (IOException e) {//                    e.printStackTrace();//                }//            }//            cplex.writeSolution(output_path + File.separator + 
"SCP"+"_sol.txt");//            cplex.exportModel(output_path + File.separator + "SCP"+"_model.lp");            cplex.clearModel();            cplex.endModel();            cplex.end();        } catch (IloException e) {            e.printStackTrace();        }    }}

4 Genetic Algorithm (GA)

Genetic algorithms are an old acquaintance of ours and are widely used in combinatorial optimization. Readers who are not yet familiar with them can have a look at our earlier posts:

Solving the hybrid flow-shop scheduling problem with a genetic algorithm (with C++ code)

Genetic Algorithm (with annotated code)

4.1 Solution Encoding

We define a packing sequence of the items, σ = (σ(1), ..., σ(n)), as a chromosome:

public class Chromosome {
    public ArrayList<Item> items = new ArrayList<>();
    public double fitness;
    public int solution_index;
    // the class has many more methods; they are omitted here
}

For each chromosome, a next-fit procedure computes a VSBPP solution, so each chromosome corresponds to exactly one solution. The procedure works as follows:

First, temporarily fix a list of q bins ω = (ω(1), ..., ω(q)). Starting from the first item and the first bin, items are placed into the current bin in sequence; when an item no longer fits, the current bin is closed and the next bin in the list is opened, and so on. Once all items have been placed, we obtain a feasible solution.
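A minimal sketch of this next-fit decoding (our own illustration; it assumes the Item and Bin classes and the data fields from the snippets above, and represents the bin list ω as an array of bin-type indices):

// Next-fit decoding of a chromosome (item sequence) against a fixed bin list omega.
// For simplicity we assume every item fits into the newly opened bin.
private ArrayList<Bin> nextFitDecode(ArrayList<Item> sequence, int[] omega) {
    ArrayList<Bin> bins = new ArrayList<>();
    int b = 0;                                    // index of the currently open bin in omega
    double load = 0;
    ArrayList<Item> current = new ArrayList<>();
    for (Item item : sequence) {
        if (load + item.weight > data.bin_capacicty_W[omega[b]]) {
            bins.add(new Bin(omega[b], current)); // close the current bin
            b++;                                  // open the next bin in the list
            current = new ArrayList<>();
            load = 0;
        }
        current.add(item);
        load += item.weight;
    }
    if (!current.isEmpty()) bins.add(new Bin(omega[b], current));
    return bins;
}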

The next goal is to minimize the cost of this feasible solution, i.e. to determine the best bin ordering. To this end, we define a directed acyclic graph G = (V, A):

Vertex set V:

● the item set J plus a dummy end node n+1.

Arc set A:

● the arc (σ(j), σ(j+1)) has weight equal to the cost of the smallest bin that can hold item σ(j);

● more generally, the arc (σ(j), σ(k)) exists whenever items σ(j), ..., σ(k-1) fit together into one bin, and its weight is the cost of the smallest such bin.

Quite easy to understand, isn't it? In this way, the problem of finding the minimum cost (i.e. the best bin ordering) for a given chromosome (item packing sequence) neatly turns into a Shortest Path Problem. Here is an example.

In this example there are six items (w = (2, 4, 5, 7, 9, 10)) and two bin types ((W, c) = ((12, 3), (18, 5))).

For this example, the optimal bin ordering is i2, i1, i1, and the corresponding (minimum) cost of the chromosome is 11.

We do not reproduce the shortest-path code here.
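Since the shortest-path code is not included in the post, here is a small self-contained sketch (ours, not the original implementation) of the underlying dynamic program: dist[k] is the minimum cost of packing the first k items of the sequence, and moving from j to k means that items j+1, ..., k of the sequence share one bin, whose cost is that of the smallest bin type able to hold them.

import java.util.Arrays;

public class SequenceCost {
    // W and c: bin capacities and costs, both sorted increasingly; seqWeights: item weights in chromosome order.
    public static double minCostOfSequence(double[] seqWeights, double[] W, double[] c) {
        int n = seqWeights.length;
        double[] dist = new double[n + 1];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[0] = 0;
        for (int j = 0; j < n; j++) {                 // items 1..j are already packed
            if (dist[j] == Double.POSITIVE_INFINITY) continue;
            double load = 0;
            for (int k = j + 1; k <= n; k++) {        // try packing items j+1..k into one bin
                load += seqWeights[k - 1];
                if (load > W[W.length - 1]) break;    // even the largest bin cannot hold them
                for (int i = 0; i < W.length; i++) {  // smallest (cheapest) bin type that fits this load
                    if (W[i] >= load) {
                        dist[k] = Math.min(dist[k], dist[j] + c[i]);
                        break;
                    }
                }
            }
        }
        return dist[n];                               // minimum cost of the whole sequence
    }

    public static void main(String[] args) {
        // One possible item order for the example above; the result 11.0 matches the cost quoted in the text.
        System.out.println(minCostOfSequence(new double[]{2, 7, 9, 10, 4, 5},
                new double[]{12, 18}, new double[]{3, 5}));
    }
}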

4.2 Crossover Operators

Crossover operators are used to derive new solutions from the current ones.

4.2.1 Two-Point Crossover

Two-point crossover draws two crossover points at random. The genes (items) outside the two points are copied from the first parent; the remaining genes are inserted into the offspring in the order in which they appear in the second parent.

4.2.2 Three-Point Crossover

Three-point crossover draws three crossover points at random, splitting the chromosome into four segments. The first and third segments of the offspring are copied from the first parent; the remaining genes are inserted in the order in which they appear in the second parent.

4.2.3 Similar-Job One-Point Crossover

Similar-job one-point crossover originally comes from permutation flow-shop scheduling. Here, "similar jobs" are genes that occupy the same position in both parents; these are first copied to the offspring, then a single crossover point is drawn at random and the rest works as above.

The code is fairly long; in it, the three crossover operators are each applied with a certain probability.

* private Chromosome crossover(Chromosome parent1, Chromosome parent2, double random) {    Chromosome child = new Chromosome();    HashMap<Integer, Item> child_items_HashMap = new HashMap<>();    HashMap<Integer, Item> child_items_ordered_HashMap = new HashMap<>();    ArrayList<Item> child_items_resultant_list = new ArrayList<>();    if(random > 0.7){        //similar-job-one-point crossover        int point = ThreadLocalRandom.current().nextInt(0, data.item_num_n);        ArrayList<Item> parent1_items = parent1.getItems();        ArrayList<Item> parent2_items = parent2.getItems();        child_items_HashMap = new HashMap<>();        child_items_ordered_HashMap = new HashMap<>();        //copy the identical items        for (int i = 0; i < data.item_num_n; i++) {            Item parent1_item = parent1_items.get(i);            Item parent2_item = parent2_items.get(i);            if(parent1_item.index == parent2_item.index                    && !child_items_HashMap.containsKey(parent1_item.index)){                child_items_HashMap.put(parent1_item.index, parent1_item);                child_items_ordered_HashMap.put(i,parent1_item);            }        }        int counter1 = 0;        //copy parent1 items before the point        for (int i = 0; i < point; i++) {            Item parent1_item = parent1_items.get(counter1);            if(!child_items_HashMap.containsKey(parent1_item.index)){                if(!child_items_ordered_HashMap.containsKey(i)){                    child_items_HashMap.put(parent1_item.index, parent1_item);                    child_items_ordered_HashMap.put(i,parent1_item);                    counter1++;                } else continue; //i++, while keeping counter at the same place            } else {                i--;                counter1++;            }        }        int counter2 = 0;        //copy parent2 items after the point        for (int i = point; i < data.item_num_n; i++) {            Item parent2_item = parent2_items.get(counter2);            if (!child_items_HashMap.containsKey(parent2_item.index)){                if(!child_items_ordered_HashMap.containsKey(i)){                    child_items_HashMap.put(parent2_item.index, parent2_item);                    child_items_ordered_HashMap.put(i,parent2_item);                    counter2++;                } else continue; //i++, while keeping counter at the same place            } else {                i--;                counter2++;            }        }    }    else if (random > 0.4) {        //three-point crossover        int point1 = ThreadLocalRandom.current().nextInt(0, data.item_num_n - 2);        //ensures that point2 is greater than point1        int point2 = ThreadLocalRandom.current().nextInt(point1 + 1, data.item_num_n - 1);        //ensures that point3 is greater than point2        int point3 = ThreadLocalRandom.current().nextInt(point2 + 1, data.item_num_n);        ArrayList<Item> parent1_items = parent1.getItems();        ArrayList<Item> parent2_items = parent2.getItems();        child_items_HashMap = new HashMap<>();        child_items_ordered_HashMap = new HashMap<>();        //the items outside point1 are copied into the offspring        for (int i = 0; i < point1; i++) {            child_items_HashMap.put(parent1_items.get(i).index, parent1_items.get(i));            child_items_ordered_HashMap.put(i, parent1_items.get(i));        }        for (int i = point2 + 1; i < point3; i++) {            child_items_HashMap.put(parent1_items.get(i).index, parent1_items.get(i));            
child_items_ordered_HashMap.put(i, parent1_items.get(i));        }        int counter = 0;        for (int i = point1; i <= point2; i++) {            Item parent2_item = parent2_items.get(counter);            //check if the child already contains parent2 item            if (child_items_HashMap.get(parent2_item.index) == null) {                child_items_HashMap.put(parent2_item.index, parent2_item);                child_items_ordered_HashMap.put(i, parent2_item);            } else {                i--;            }            counter++;        }        for (int i = point3; i < data.item_num_n; i++) {            Item parent2_item = parent2_items.get(counter);            //check if the child already contains parent2 item            if (child_items_HashMap.get(parent2_item.index) == null) {                child_items_HashMap.put(parent2_item.index, parent2_item);                child_items_ordered_HashMap.put(i, parent2_item);            } else {                i--;            }            counter++;        }    } else {        //two-point crossover        int point1 = ThreadLocalRandom.current().nextInt(0, data.item_num_n - 1);        //ensures that point2 is greater than point1        int point2 = ThreadLocalRandom.current().nextInt(point1 + 1, data.item_num_n);        ArrayList<Item> parent1_items = parent1.getItems();        ArrayList<Item> parent2_items = parent2.getItems();        child_items_HashMap = new HashMap<>();        child_items_ordered_HashMap = new HashMap<>();        //the items outside the selected two points are copied into the offspring        for (int i = 0; i < point1; i++) {            child_items_HashMap.put(parent1_items.get(i).index, parent1_items.get(i));            child_items_ordered_HashMap.put(i, parent1_items.get(i));        }        for (int i = (point2 + 1); i < data.item_num_n; i++) {            child_items_HashMap.put(parent1_items.get(i).index, parent1_items.get(i));            child_items_ordered_HashMap.put(i, parent1_items.get(i));        }        int counter = 0;        for (int i = point1; i <= point2; i++) {            Item parent2_item = parent2_items.get(counter);            //check if the child already contains parent2 item            if (child_items_HashMap.get(parent2_item.index) == null) {                child_items_HashMap.put(parent2_item.index, parent2_item);                child_items_ordered_HashMap.put(i, parent2_item);            } else {                i--;            }            counter++;        }    }    //---------------------------------    child_items_resultant_list = new ArrayList<>();    for (int i = 0; i < data.item_num_n; i++) {        child_items_resultant_list.add(child_items_ordered_HashMap.get(i));    }
    //add the dummy node    child_items_resultant_list.add(data.item_num_n, new Item(data.item_num_n, 0));    HashMap<Integer, Item> duplicateChecker = new HashMap<>();    for (Item item : child_items_resultant_list) {        duplicateChecker.put(item.index, item);    }    if (duplicateChecker.size() < data.item_num_n + 1)        System.err.println("Error! Item number is wrong in crossover().");    child.addItems(child_items_resultant_list);    return child;}

4.3 Mutation Operator

The mutation operator used here is in fact a neighborhood-search move, applied with a certain probability.

First we select a set of bins q'; then SSP3 is run on the item set J' contained in these bins. If a better packing of J' is found, it replaces the original bins.

The selection and replacement steps are explained in more detail below.

4.3.1 Selection

Compute the cost-to-actual-load ratio of each bin, then repeatedly add the bin with the largest ratio to the selected set, until the number of items contained in the selected bins exceeds an upper bound K (here K = 15).

4.3.2 Replacement

To describe the replacement step simply, each selected bin p is represented by the (unordered) set Bp of items it holds; the item sets (B1, ..., Bq') of all selected bins are then concatenated in arbitrary order back into the chromosome. Clearly, the cost of the resulting solution is less than or equal to that of the original one.

* private void mutate(Chromosome chromosome) {        boolean improved_solution = false;        //solution index obtained after solving shortest path problem        Solution solution = solutions.get(chromosome.getSolution_index());        //compute the ratio of the cost to the actual load for each bin        //sort bins in decreasing order        Queue<Bin> binQueue = new PriorityQueue<>(Collections.reverseOrder());        HashMap<Integer, Item> remaining_items = new HashMap<>();        for(Bin bin : solution.bins){            double load = 0;            for(Item item : bin.selected_items){                load += item.weight;                remaining_items.put(item.index,item);            }            bin.setRatio_GA(bin.bin_cost / load);            binQueue.offer(bin);        }        HashMap<Integer, Item> items_selected_subset_J_ = new HashMap<>();        ArrayList<Bin> selected_bins = new ArrayList<>();        double original_cost_total = 0;        int k = 0;        //stop when cardinality exceeds the preset upper bound k        while(k < parameters.item_UB_k){            Bin bin = binQueue.poll();            for(Item item : bin.selected_items){                items_selected_subset_J_.put(item.index,item);                remaining_items.remove(item.index);            }            original_cost_total += bin.bin_cost;            k = items_selected_subset_J_.size();            selected_bins.add(bin);        }        double best_cost = original_cost_total;        HashMap<Integer, Item> unpacked_itemsJ_ = new HashMap<>();        for(Item item : items_selected_subset_J_.values()){            unpacked_itemsJ_.put(item.index,item);        }        ArrayList<Bin> bins_SSP3 = new ArrayList<>();        //invoke SSP3 to solve a restricted VSBPP        bins_SSP3 = mutate(unpacked_itemsJ_);//这里的mutate是下面的方法//        bins_SSP3 = mutateSSP4(unpacked_itemsJ_);        double cost_ssp3 = 0;        for(Bin bin : bins_SSP3){            cost_ssp3 += bin.bin_cost;        }        if(cost_ssp3 < best_cost){//                System.out.println("Mutation produced a better cost!");            best_cost = cost_ssp3;            //replace the selected bins by a new solution            selected_bins.clear();            selected_bins.addAll(bins_SSP3);            improved_solution = true;        }        items_selected_subset_J_.clear();        //replacement of the mutated chromosome        if(improved_solution){            ArrayList<Item> items = new ArrayList<>();            for(Bin bin : selected_bins){                items.addAll(bin.selected_items);            }            items.addAll(remaining_items.values());            chromosome.clearItems();            chromosome.setFitness(0);            chromosome.setSolution_index(-1);            chromosome.addItems(items);            chromosome.addItems(new Item(data.item_num_n, 0)); //dummy node        }    }private ArrayList<Bin> mutate(HashMap<Integer, Item> given_item_set) {    //step 0: initialization    HashMap<Integer, Item> unpacked_itemsJ_ = new HashMap<>();    double[] maximum_load_z = new double[data.bin_num_m];    ArrayList<Bin> packed_bins = new ArrayList<>();    for(Item item : given_item_set.values()) unpacked_itemsJ_.put(item.index, item);    //iterate until the unpacked item set J_ is empty    while (!unpacked_itemsJ_.isEmpty()) {        HashMap<Integer, ArrayList<Item>> selected_items_S = new HashMap<>();        double minimal_ratio_i = Double.POSITIVE_INFINITY;        int winner_index = -1;        //only those bins that can include the largest 
unpacked items are considered        //at each iteration the largest unpacked item is forced to be included in the set of items        //that are packed into the selected bin        double max_weight = Double.NEGATIVE_INFINITY;        int heaviest_item_index = -1;        for (Item item : unpacked_itemsJ_.values()) {            double weight = item.weight;            if (weight > max_weight) {                max_weight = weight;                heaviest_item_index = item.index;            }        }        //step 1: compute maximum_load_z and selected_items_S for each bin        for (int i = 0; i < data.bin_num_m; i++) {            maximum_load_z[i] = 0.00000000001;            if (data.bin_capacicty_W[i] < max_weight) continue;            List<Integer> index_list = new ArrayList<>(unpacked_itemsJ_.size() - 1); //excluding the heaviest item            for (Item item : unpacked_itemsJ_.values()) {                int index = item.index;                if (index != heaviest_item_index) index_list.add(index);            }            Solver cplexSolver = new Solver(i,heaviest_item_index,max_weight,                    maximum_load_z,index_list,unpacked_itemsJ_,selected_items_S);            cplexSolver.run();            maximum_load_z = cplexSolver.getMaximum_load_z();            selected_items_S.put(i,cplexSolver.getSelected_items_at_this_iteration());            cplexSolver = null;            //----------------------------------            if(maximum_load_z[i] == 0.0) maximum_load_z[i] = 0.0000000001;            double ratio = data.bin_cost_C[i] / maximum_load_z[i];            //step 2: compute minimal ratio i*            if (ratio < minimal_ratio_i) {                minimal_ratio_i = ratio;                winner_index = i;            }        }        //step 3: load the items of i* into the bin at winner_index        packed_bins.add(new Bin(winner_index, selected_items_S.get(winner_index)));        //step 4: remove the packed items from the unpacked item set J_        for (Item item : selected_items_S.get(winner_index)) {            unpacked_itemsJ_.remove(item.index);        }    }    return packed_bins;}

About the code: many thanks to Anton for providing it. Since there is a lot of code, only the most important parts are shown in this post.

To obtain the complete code, or for questions and discussion, please head to the comments section.

END


Text: Wang Jinyuan (first-year undergraduate, School of Management, Huazhong University of Science and Technology)

Code: Anton (first-year PhD student, School of Management, Huazhong University of Science and Technology)

Advisor: Prof. Qin Hu (School of Management, Huazhong University of Science and Technology)

Layout: Cheng Xinyue

If you have any questions about the content, feel free to reach out. PS: some of the material comes from the web.


For further inquiries, please contact:

Prof. Qin Hu (School of Management, Huazhong University of Science and Technology, tigerqin1980@qq.com)

Wang Jinyuan (first-year undergraduate, School of Management, Huazhong University of Science and Technology, 1002448017@qq.com)

Moriakin Anton (first-year PhD student, School of Management, Huazhong University of Science and Technology, moryakin.anton@yandex.ru)

Cheng Xinyue (1293900389@qq.com)
