Article

Two-Stage Probe-Based Search Optimization Algorithm for the Traveling Salesman Problems

1 Department of Information and Computational Sciences, School of Mathematical Sciences and LMAM, Peking University, Beijing 100871, China
2 Mathematics Discipline, Science, Engineering and Technology School, Khulna University, Khulna 9208, Bangladesh
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1340; https://doi.org/10.3390/math12091340
Submission received: 29 February 2024 / Revised: 15 April 2024 / Accepted: 25 April 2024 / Published: 28 April 2024
(This article belongs to the Section Mathematics and Computer Science)

Abstract:
As a classical combinatorial optimization problem, the traveling salesman problem (TSP) has been extensively investigated in the fields of Artificial Intelligence and Operations Research. Due to being NP-complete, it is still rather challenging to solve both effectively and efficiently. Because of its high theoretical significance and wide practical applications, great effort has been undertaken to solve it from the point of view of intelligent search. In this paper, we propose a two-stage probe-based search optimization algorithm for solving both symmetric and asymmetric TSPs through the stages of route development and a self-escape mechanism. Specifically, in the first stage, a reasonable proportion threshold filter of potential basis probes or partial routes is set up at each step during the complete route development process. In this way, the poor basis probes with longer routes are filtered out automatically. Moreover, four local augmentation operators are further employed to improve these potential basis probes at each step. In the second stage, a self-escape mechanism or operation is further implemented on the obtained complete routes to prevent the probe-based search from being trapped in a locally optimal solution. The experimental results on a collection of benchmark TSP datasets demonstrate that our proposed algorithm is more effective than other state-of-the-art optimization algorithms. In fact, it achieves the best-known TSP benchmark solutions in many datasets, while, in certain cases, it even generates solutions that are better than the best-known TSP benchmark solutions.

1. Introduction

The traveling salesman problem (TSP) is a well-known combinatorial optimization problem, which can be expressed as the problem of finding the lowest-cost route through a given set of cities. It has been proven to be one of the most difficult NP-hard problems, i.e., NP-complete problems [1,2], so there is no known algorithm that solves it in polynomial time on conventional computer systems. Because of its wide applicability and computational complexity, many researchers have been attracted to investigating the TSP for effective and efficient solutions. Indeed, a variety of feasible optimization algorithms have been designed and exploited over the last few decades. According to [2,3], these algorithms can be categorized into two main streams: exact algorithms and heuristic algorithms. Each stream offers different tradeoffs in terms of solution quality, computational efficiency, and applicability to different problem instances. Exact algorithms guarantee optimal solutions, but their execution time increases exponentially with the problem size. They are only suitable for small TSPs and become impractical for medium- to large-scale problems, even when supercomputer systems are adopted in the computational process [4]. Therefore, exact algorithms may become computationally prohibitive. Heuristic algorithms, on the other hand, are designed with efficient search rules or systems to obtain good approximate solutions instead.
Heuristic optimization algorithms are more applicable in practice because of their ability to deal with large-scale TSPs. These algorithms, however, cannot guarantee an optimal solution but can provide a satisfactory solution at an affordable computational cost. They are generally designed with the help of specific domain knowledge and intuitive experience to construct a reasonable route solution. During the search process, they use greedy strategies to guide the search operation toward a better solution within a limited solution space. In this way, the solutions improve over successive iterations, although the search always remains local in character. Clearly, better heuristic algorithms require a deeper understanding of the solution domains and structures, from which high-quality solutions can be found effectively and efficiently. However, a heuristic algorithm may perform well on certain specific instances but not on others, and it is often expensive to adapt to new instances and problems. In general, heuristic algorithms can serve as building blocks for more sophisticated meta-heuristic approaches.
Meta-heuristic approaches are more generic and flexible than heuristic algorithms and are often used when heuristic approaches are insufficient or impractical. Indeed, they have been widely adopted for TSPs due to their ability to efficiently explore large solution spaces and find good approximate solutions. Most of these meta-heuristic algorithms are inspired by natural and biological behaviors and employ certain top-level strategies to solve TSPs. Actually, they can offer high-quality solutions at a relatively low computational cost. In such an approach, the search scheme is generally designed to guide some local improvement operators in an intelligent way so that a robust iterative generation process emerges through a proper balance between the diversification and intensification strategies in each search iteration. Diversification and intensification refer to exploration and exploitation of the solution search space, respectively. The strength of a meta-heuristic lies in the effectiveness of the employed intensification and diversification strategy, and its efficiency depends on the balance between global search reinforcement and convergent search in the promising region. However, meta-heuristics often suffer from premature convergence, which traps the search process in a locally optimal solution. Moreover, they involve many parameters that need to be tuned. In addition, most of these approaches yield probabilistic solutions due to the randomness in the process. It may be possible to enhance the effectiveness of meta-heuristics by combining two or more algorithms into a hybrid form. A hybrid algorithm often performs better; however, it necessitates a higher computational cost, and effective hybridization is difficult to achieve.
In 2016, a new conceptual computational model named the Probe Machine (PM) was proposed by Xu [5]. It is a completely parallel computing model in which the data placement mode is nonlinear. Motivated by the probe concept and data structure of the PM, we first designed a PM-based optimization algorithm for solving symmetric TSPs [6]. It is a route construction and search procedure coupled with a certain filtering mechanism. Specifically, it starts with all the possible sub-routes consisting of three cities, and then each potential sub-route is extended and enhanced step by step from its two ends until complete routes are finally formed. At the same time, the worst routes are filtered out according to the filtering proportion value. Its advantages are the clarity of the idea and its easy implementation. However, there is no scope for modifying and developing new possible routes from the existing potential routes, so certain potential routes may easily be missed.
To ameliorate this potential drawback, we further designed a dynamic route construction optimization algorithm for both symmetric and asymmetric TSPs through the integration of the probe concept and a local search mechanism [7]. This PM-based dynamic algorithm adopts route modification and development as the routes are built. In fact, the embedded local search operators are imposed consecutively on the retained potential routes to produce more potential routes before each subsequent expansion. In this paper, we extend and develop our previous study of the PM-based search optimization framework methodologically and theoretically. Specifically, we design a two-stage search optimization algorithm for solving both symmetric and asymmetric TSPs through the stages of probe-based route development and a self-escape mechanism. In the first stage, the key idea is to design a potential route filter in each step of the probe-based route development process such that at least a sufficient number of potential or valuable partial routes are retained in each step; hence, the probability of producing the best complete route is increased under limited computational resources. Moreover, certain local augmentation operators are further employed to extend and improve the retained potential partial routes in each step. In the second stage, a self-escape mechanism is implemented on the complete routes obtained from the first stage to prevent the above probe-based search from being trapped in a locally optimal solution. Therefore, a locally optimal solution can be escaped and the global optimization search capability can be enhanced. In fact, it is an effective search optimization framework in which the first stage dynamically constructs and develops a set of better complete routes step by step and the second stage self-escapes from stagnant routes (if possible). The main contributions of this work are summarized as follows:
  • A two-stage probe-based search optimization algorithm is designed for solving both symmetric and asymmetric TSPs through the stages of probe-based route development and a self-escape mechanism. The experimental results demonstrate that our proposed algorithm performs better than the other state-of-the-art algorithms with respect to the quality of the solution over a wide range of TSP datasets.
  • A proportion value threshold filter is designed and integrated into the probe-based search optimization framework to retain at least a sufficient number of potential routes in each step of the route development process. Actually, we set up an initial value for the proportion value of the potential partial route filtering in the first step and then dynamically adjust it in the following steps.
  • Four local augmentation operators are designed and employed on each of the developed potential routes in an efficient manner so that all the retained potential routes are further augmented and improved consecutively at each step.
  • A self-escape mechanism is further implemented on the obtained complete routes from the first stage to prevent the probe-based search from being trapped in a locally optimal solution.
  • A statistical analysis is conducted to validate the computed results of our proposed algorithm against the other benchmark optimization algorithms by using the Wilcoxon signed rank test.
The rest of this paper is organized as follows. The mathematical description of the TSP is provided in Section 2. Then, the concept of the probe is introduced in Section 3. We further describe our adopted proportion threshold filter in the probe-based search in Section 4 and our employed local augmentation operators in Section 5. Section 6 presents our proposed probe-based search optimization algorithm in detail. The experimental results and discussion are summarized in Section 7. Finally, we include a brief conclusion in Section 8.

2. TSP Mathematical Description

The TSP is a path planning optimization problem of finding the lowest-cost route through a given set of cities. The route must be designed in such a way that each city is visited once and only once and eventually returns to the starting city. It is known to be NP-complete, meaning that there is no known effective algorithm that can solve it for large instances in polynomial time. As the number of cities increases, the number of possible routes grows factorially, making it computationally infeasible to find the optimal solution through brute force for large instances. Despite its computational complexity, the TSP has attracted great attention from scientists and engineers due to its great value for practical applications and its connections to other optimization problems. Until now, no general method has been able to tackle this problem effectively [8].
The TSP was first mathematically formulated by Karl Menger in 1930 [9] and, since then, it has been extensively investigated in diverse applicable fields. Typical examples of the TSP real-life applications include transport routing, circuit design, X-ray crystallography, micro-chip production, scheduling, mission planning, aviation, logistics management, DNA sequencing, data association, image processing, pattern recognition, and many more [10,11,12]. Therefore, it is very important and valuable to design and implement an effective algorithm for the TSP solution.
The TSP can be represented as a graph-theoretic problem. Let $G_n = (C, E)$ be a directed graph, where $C = \{c_1, c_2, \ldots, c_n\}$ is the set of vertices (nodes) and $E = \{e_{ij} : e_{ij} = (c_i, c_j);\ (c_i, c_j) \in C \times C;\ i \neq j;\ i, j = 1, 2, \ldots, n\}$ is the set of edges. Each vertex (node) $c_i \in C$ denotes the position of a city, and each edge $e_{ij} \in E$ indicates the path from the $i$-th city to the $j$-th city. Moreover, a non-negative cost (distance) $d_{ij} \in \mathbb{R}^+$ is associated with each edge $e_{ij} \in E$, representing the edge weight of the graph. If $d_{ij} = d_{ji}$ for all $e_{ij} \in E$, the graph $G_n$ corresponds to a symmetric TSP, whereas the asymmetric TSP corresponds to the case with $d_{ij} \neq d_{ji}$ for at least one pair of edges $e_{ij}, e_{ji} \in E$ of the graph $G_n$. The aim of this problem is to construct a complete route $T$ through $n$ distinct cities such that the total traveling cost (distance) function $F(T)$ of the route is minimized, i.e., the fitness function $f(T)$ of the route is maximized. Let $T = (c_1, c_2, \ldots, c_n, c_1)$ with all distinct cities $c_i \in C$ be a complete route and $F(T)$ be its route cost (distance). Then, the objective function of the TSP can be formulated as follows [13]:
Generate a complete route $T = (c_1, c_2, \ldots, c_n, c_1)$ to minimize
$$F(T) = \sum_{i=1}^{n-1} d_{c_i c_{i+1}} + d_{c_n c_1}, \quad \text{i.e., to maximize} \quad f(T) = \frac{1}{F(T)}, \tag{1}$$
where $d_{c_i c_{i+1}}$ is the cost (distance) of the local path between the cities $c_i$ and $c_{i+1}$. If $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are the coordinates of $c_i$ and $c_{i+1}$, then $d_{c_i c_{i+1}}$ is calculated as the Euclidean distance:
$$d_{c_i c_{i+1}} = \sqrt{(x_i - x_{i+1})^2 + (y_i - y_{i+1})^2}. \tag{2}$$
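As a concrete illustration, the route cost $F(T)$ and fitness $f(T) = 1/F(T)$ defined above can be evaluated as follows; this is a minimal Python sketch, and the 4-city unit-square coordinates are made up purely for the demonstration.

```python
import math

def route_cost(tour, coords):
    """Total cost F(T) of a closed route: the sum of the Euclidean edge
    lengths, including the return edge from the last city to the first."""
    n = len(tour)
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
               for i in range(n))

def fitness(tour, coords):
    """Fitness f(T) = 1 / F(T), so a shorter route scores higher."""
    return 1.0 / route_cost(tour, coords)

# Illustrative 4-city instance: the corners of a unit square.
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(route_cost([0, 1, 2, 3], coords))  # → 4.0 (the square's perimeter)
```

A route that crosses itself, such as (0, 2, 1, 3), scores a strictly higher cost and hence a lower fitness than the perimeter tour.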

3. Probe Concept

The probe is conceptually a detection device or related operator that accurately recognizes a piece of a particular structure or pattern from the whole description of an object and implements certain operations, such as connection or transmission, between the detected structures. It has been extensively used for various purposes in diverse fields like medical, engineering, biology, computer science, electronics, information security, archaeology, and so on [5,14]. In the medical field, ultrasonic probes are utilized to generate acoustic signals and detect return signals. These probes are an essential component of ultrasound systems and work by emitting high-frequency sound waves into the body or material being examined and then receiving the echoes that bounce back. On the other hand, a short single-stranded DNA or RNA fragment (approximately 20 to 500 bp) is designed as a biological probe to detect its complementary DNA sequence or locate a particular clone. In addition, the probe concept is adopted in electronics to perform various electronic tests, while archaeologists use it to interpret the soil’s nature and to decide where and how to excavate.
As a computing model, the Probe Machine (PM) [5] was developed in which each probe is assumed to be an operator that makes a connection between any two pieces of fiber-tailed data or transmits information from one piece of fiber-tailed data to another if their tails are consistent. The probe in the PM accomplishes three different functions. First, it accurately finds any two target data pieces with perfectly matching adjacent edges or fiber tails. Then, it takes any pair of possible target data pieces from the database of available fiber-tailed data. Finally, it performs some well-defined operations to extend the fiber-tailed data step by step toward a problem solution. Motivated by the probe of the PM, we design a new kind of probe for our PM-based search optimization approach to solving TSPs. In our approach, the probes are assumed to be connection operators of city sequences or possible sub-routes that are consistent, so that the complete route can be produced step by step. The sub-route consisting of $(m-1)$ edges is referred to as the $m$-city basis probe. The outer two edges are called the wings of the natural probe. The actual probe performs two actions: it first finds the required adjacent edges and then generates the next possible basis probes based on the availability of these edges. Actually, each basis probe can use its wings to accurately detect and append two other different adjacent edges on the route to extend the current route solution. In this way, each basis probe is extended automatically at both ends in every step of the procedure, and this continues until all cities are included in the basis probe, i.e., until complete ($n$-city) basis probes are formed. Mathematically, to arrive at a complete probe search of an $n$-city problem, the procedure needs to execute $[\frac{n}{2}]$ steps, where
$$\left[\frac{n}{2}\right] = \begin{cases} \dfrac{n}{2}, & \text{if } n \text{ is even}; \\[4pt] \dfrac{n-1}{2}, & \text{if } n \text{ is odd}. \end{cases} \tag{3}$$
To facilitate the understanding of the network expansion mechanism of the probe, it is illustrated graphically in Figure 1. In the figure, a 5-city basis probe is hybridized with two 3-city basis probes and generates a 7-city basis probe. In our model, the first step generates 3-city basis probes, the second step generates 5-city basis probes, the third step generates 7-city basis probes, and, continuing in this way, the final step generates complete $n$-city basis probes. The number of cities visited step-wise by the probe operation is provided in Table 1. The sample basis probes in the figure are denoted by $p_{mks}$, $p_{ijklm}$, and $p_{ltj}$, respectively. Actually, the probe $p_{mks}$ is a sub-route consisting of three cities $(c_m, c_k, c_s)$ with the wings $\omega(p_{mks}) = \{p_{mks}^k, p_{mks}^s\}$. Similarly, the probe $p_{ltj}$ is a sub-route consisting of three cities $(c_l, c_t, c_j)$ having wings $\omega(p_{ltj}) = \{p_{ltj}^t, p_{ltj}^j\}$, while the probe $p_{ijklm}$ is a sub-route consisting of five cities $(c_i, c_j, c_k, c_l, c_m)$ with the wings $\omega(p_{ijklm}) = \{p_{ijklm}^l, p_{ijklm}^m\}$. For the expansion of the network of the 5-city basis probe $p_{ijklm}$, it searches for two 3-city basis probes through the wings $p_{ijklm}^l$ and $p_{ijklm}^m$. On the other hand, the wing $p_{mks}^k$ of the basis probe $p_{mks}$ and the wing $p_{ltj}^j$ of the basis probe $p_{ltj}$ are consistent with the wings of the probe $p_{ijklm}$. The other wings of the basis probes $p_{mks}$ and $p_{ltj}$ are not consistent with the basis probe $p_{ijklm}$. After finding such basis probes, the 5-city basis probe $p_{ijklm}$ hybridizes with these 3-city basis probes through the wings $p_{ijklm}^l$ and $p_{ijklm}^m$. This hybridization expands the current network of $p_{ijklm}$ and generates a 7-city basis probe comprising the cities $(c_i, c_j, c_k, c_l, c_m, c_s, c_t)$.
The new 7-city basis probe is denoted by $p_{ijklmst}$, and it also has two wings, namely $p_{ijklmst}^s$ and $p_{ijklmst}^t$. In this way, the hybridized probes extend their network, and the redundant probes that are not hybridized are left out.
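Under the simplifying assumption that hybridization just appends one unvisited city at each end of a probe (the full wing-matching bookkeeping is omitted), the step-wise growth and the step count of Equation (3) can be sketched as follows for an odd-$n$ instance:

```python
def steps_to_complete(n):
    """Number of expansion steps [n/2] for an n-city problem
    (Equation (3)): n/2 when n is even, (n-1)/2 when n is odd."""
    return n // 2 if n % 2 == 0 else (n - 1) // 2

def expand(probe, cities):
    """One simplified hybridization step: append one unvisited city to
    each end of a basis probe, producing all (len+2)-city probes."""
    unused = [c for c in cities if c not in probe]
    return [(left,) + probe + (right,)
            for left in unused for right in unused if left != right]

cities = tuple(range(7))
# Step 1: all 3-city basis probes (ordered triples of distinct cities).
probes = [(a, b, c) for a in cities for b in cities for c in cities
          if len({a, b, c}) == 3]
# Steps 2..[n/2]: grow every probe by two cities per step.
for _ in range(steps_to_complete(7) - 1):
    probes = [q for p in probes for q in expand(p, cities)]
print(len(probes[0]))  # → 7: every surviving probe visits all 7 cities
```

Without the filtering of Section 4, the probe population explodes combinatorially (5040 seven-city probes here), which is exactly why the proportion threshold filter is needed.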

4. Adopted Filtering Mechanism

In searching for the solution of a TSP, a “filter” typically refers to a technique or mechanism for reducing the search space or eliminating unpromising solutions during the optimization process. Such a technique can play a crucial role in improving the efficiency of algorithms that aim to determine optimal or near-optimal solutions to the TSP. Some of the key filtering concepts used in TSPs include bounding filters, dominance rules, symmetry-breaking filters, etc. These filters can significantly improve the efficiency of algorithms for solving the TSP, enabling the exploration of larger search spaces and the discovery of near-optimal solutions within a reasonable time frame.
The adopted filter of our approach is a proportion threshold filter that is set up rigorously in the probe-based search process. Actually, it assists the search operator in retaining at least the necessary number of potential partial routes in each step of the complete route development process. The potential routes of the current step are used to generate the possible basis routes of the next step during the route construction process. Therefore, the performance of the probe-based search process is highly influenced by the appropriate choice of the filtering proportion value. In fact, setting an appropriate proportion value for the generated routes is a rather challenging problem for the effectiveness of our approach. An inappropriate choice of the proportion value can trap the whole process, which not only yields a worse solution but also consumes a longer computational time. Specifically, a larger proportion value may provide more chances to produce a better optimal solution. However, this also means that the number of possible basis routes generated in the next step increases rapidly, and the computational cost increases too; sometimes, the model cannot even produce a feasible complete route. On this account, it is important to set up a reasonable proportion threshold filter in the working steps so that the worst generated routes are filtered out in each filtering process. Through theoretical analysis and experiments, we set a proportion value function $\psi_{2k+1}$ for the $k$-th step as follows:
$$\psi_{2k+1} = \frac{\gamma}{n + k^3}, \tag{4}$$
where $k$ denotes the step number, which ranges over $\{1, 2, \ldots, [\frac{n}{2}]\}$, $n$ stands for the number of cities contained in the test dataset, and $\gamma$ is a constant. Actually, the value of $\gamma$ depends on the value of $n$ and can be fitted by trial and error. It can be observed from the experiments that, in some cases, a smaller value of $\gamma$ is needed to provide a good solution, while, in other cases, a larger value of $\gamma$ is required to obtain a satisfactory solution. Therefore, the value of $\gamma$ is not biased, and it offers different tradeoffs in terms of solution quality, computational efficiency, and applicability to different problem instances. The experiments and theoretical analysis demonstrate that the value of $\gamma$ lies within the interval $\left[\frac{n+1}{n^3-3n^2+2n}, \frac{900n+900}{n^3-3n^2+2n}\right] \subset \mathbb{R}^+$, which is a good adjustment for computing a satisfactory solution in an acceptable time frame.
The design of the adopted filter with the threshold value in Equation (4) is based on the idea that, initially, the partial routes are too short to be clearly distinguished; as the steps proceed, the routes gradually become clearer. Thus, it is reasonable to set a large proportion value in the first step and then to reduce it in the following steps, i.e., $\psi_3 > \psi_5 > \cdots > \psi_{2[\frac{n}{2}]+1}$. From the experiments, it can be found that, as the step number $k$ increases, the number $n_{r_l}$ ($l = 3, 5, 7, \ldots, n$) of retained potential partial routes first increases for a few steps and then starts to decrease. From the step where the reduction begins, it decreases very rapidly in the remaining steps; in some cases, even one single route is retained. To mitigate this problem, we can keep the proportion value unchanged after 50% of the whole steps have been conducted. In addition, if the number $n_{r_l}$ of retained potential partial routes at any step is too small (e.g., $n_{r_l} = 1$), it is believed that the procedure has already fallen into the trap of a locally optimal solution. Once trapped, it cannot jump out, as the proportion value is still decreasing, and one single route is retained in the remaining steps. To avoid being left with one single possible route, the proportion value can be increased at that step. This increment is created in such a way that the number $n_{r_l}$ of retained potential routes belongs to the interval $[1, 100] \subset \mathbb{N}$. After that, it decreases as before in the remaining steps of the procedure. This strategy allows the proportion value to be increased in certain steps of the procedure. More precisely, we set an initial filtering proportion value in the first step, and the algorithm then dynamically adjusts it in the remaining steps. Therefore, the search process avoids being left with one single route, and hence the probability of producing a better complete route is increased.
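Reading Equation (4) as a schedule that decreases in the step number $k$, and freezing it after 50% of the steps as described above, the basic shape of the schedule can be sketched as follows; the concrete $\gamma$ value is arbitrary and chosen only for the demonstration.

```python
def proportion(k, n, gamma):
    """Filtering proportion for step k, read as gamma / (n + k**3):
    largest at k = 1 and shrinking as the partial routes grow."""
    return gamma / (n + k ** 3)

def schedule(n, gamma):
    """Proportion values for all [n/2] steps, frozen once 50% of the
    steps have been executed so the retained-route count cannot
    collapse toward a single route."""
    steps = n // 2 if n % 2 == 0 else (n - 1) // 2
    freeze = max(1, steps // 2)
    return [proportion(min(k, freeze), n, gamma)
            for k in range(1, steps + 1)]

vals = schedule(100, gamma=5.0)  # gamma is arbitrary here, demo only
print(all(a >= b for a, b in zip(vals, vals[1:])))  # → True (non-increasing)
```

The optional increase of the proportion value at a step where $n_{r_l}$ collapses (to keep it in $[1, 100]$) is an additional correction on top of this base schedule and is omitted from the sketch.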
The filtering mechanism can be explained through a concrete example. Suppose that we would like to solve a six-city symmetric TSP. In the first step of our approach, 60 basis sub-routes of three cities will be generated. If we set the proportion value to $\frac{1}{2}$ in the first step, then 30 potential 3-city sub-routes will be retained before leaving the first step, and the remaining 30 routes will be filtered out based on their fitness values. These retained potential basis routes are then modified and improved into good routes through the local augmentation operators, which are discussed in the next section. These good sub-routes are referred to as the root basis probes or sub-routes. In the second step, 5-city basis sub-routes will be generated using these 30 good basis routes. Suppose 180 basis sub-routes are produced in the second step; if we set $\frac{1}{3}$ as the proportion value in this step, then 60 basis sub-routes will be retained for route extension. These 60 basis sub-routes are used to construct 6-city complete routes, and, finally, the best 6-city complete route is selected from them.
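The counts in this example are easy to verify: with $n = 6$, each of the 6 cities can act as the center of $\binom{5}{2} = 10$ unordered neighbor pairs, giving 60 symmetric 3-city sub-routes, of which a proportion of $\frac{1}{2}$ survives the filter. The sketch below uses random edge costs purely for illustration.

```python
import random
from itertools import combinations

def three_city_probes(cities):
    """All symmetric 3-city basis sub-routes: a center city plus an
    unordered pair of adjacent cities."""
    return [(a, c, b) for c in cities
            for a, b in combinations([x for x in cities if x != c], 2)]

def filter_probes(probes, cost, proportion):
    """Retain the best `proportion` of the probes by sub-route cost."""
    return sorted(probes, key=cost)[: int(len(probes) * proportion)]

cities = list(range(6))
probes = three_city_probes(cities)       # 6 * C(5, 2) = 60 sub-routes
random.seed(0)                           # random costs, purely for the demo
d = {(i, j): random.random() for i in cities for j in cities}
cost = lambda p: d[p[0], p[1]] + d[p[1], p[2]]
kept = filter_probes(probes, cost, 0.5)  # 30 retained, 30 filtered out
print(len(probes), len(kept))  # → 60 30
```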

5. Employed Local Augmentation Operators

In our solution augmentation process, four types of local augmentation operators are employed consecutively on each retained potential basis probe or sub-route in each step of the route development procedure. These operators help to modify the existing sub-route iteratively and potentially improve its quality. Actually, the local augmentation operators are adopted in the first improvement manner; i.e., once an improvement is found, subsequent improvements are explored based on this improvement. In addition, a well-defined decision function for each operator is used to avoid generating a worse route and consuming a longer time. We briefly describe them in the following subsections, respectively. The pseudocode of improving potential basis probes or sub-routes through local augmentation operators is offered in Algorithm 1.
Algorithm 1: Pseudocode of the potential basis probe improvement procedure via local augmentation operators
Input: A retained potential basis probe $p_{1 \times l}$, $l = 5, 7, 9, \ldots, n$; the fitness value $f(p)$ of the basis probe; the distance matrix $d_{n \times n}$
Output: Improved basis probe $p^{im}$; fitness value $f(p^{im})$ of the improved basis probe
  • For each city $c_i \in p$ ($i = 1$ to $n-2$ for 2-opt; $i = 1$ to $n-1$ for reversion and swap; $i = 1$ to $n$ for insertion; for an $n$-city probe, $i - 1 = n$ when $i = 1$)
    • For each city $c_j \in p$ ($j = i+2$ to $n$ for 2-opt; $j = i+1$ to $n$ for reversion and swap; $j = 1$ to $n$ with $j \neq i$ for insertion; for an $n$-city probe, $j + 1 = 1$ when $j = n$)
      • Check the inequality of the carrying operator, as defined in Equations (5)–(8)
      • If the inequality is satisfied, then
        • Generate a new basis probe $p'$ from $p$ by applying the carrying operator
        • Compute the fitness value $f(p')$ of $p'$ by using Equation (1)
        • In the case of an asymmetric TSP, also calculate the fitness value of $p'$ in reverse order and take the better of $\{f(p'_{\text{original order}}), f(p'_{\text{reverse order}})\}$
        • If $f(p') > f(p)$, then
          • Update the old potential basis probe $p$ by assigning $p \leftarrow p'$
          • Update the fitness value $f(p)$ by assigning $f(p) \leftarrow f(p')$
        • End if
      • End if
    • End for
  • End for
  • Assign $p^{im} \leftarrow p$ and $f(p^{im}) \leftarrow f(p)$
  • Return $p^{im}$ and $f(p^{im})$

5.1. The 2-Opt Operator

The 2-opt operator eliminates two edges from an existing potential basis probe and reconnects the resulting segments to create a new good basis probe or possible route. Let $(c_i, c_{i+1})$, where $i = 1, 2, \ldots, n-2$, and $(c_j, c_{j+1})$, where $j = i+2, \ldots, n$, be two edges of a potential basis probe $p = (c_1, c_2, \ldots, c_i, c_{i+1}, \ldots, c_j, c_{j+1}, \ldots, c_l)$, where $l = 5, 7, 9, \ldots, n$. Then, the 2-opt operator reverses the local path between the cities $c_{i+1}$ and $c_j$ and generates a new good basis probe, denoted by $p'$ and defined by $p' = (c_1, c_2, \ldots, c_i, c_j, c_{j-1}, \ldots, c_{i+2}, c_{i+1}, c_{j+1}, \ldots, c_l)$. To avoid generating a worse basis probe by the 2-opt operator, the move is governed by the following decision inequality:
$$\text{2-opt}(i, j):\ \{d(c_i, c_{i+1}) + d(c_j, c_{j+1})\} > \{d(c_i, c_j) + d(c_{i+1}, c_{j+1})\}. \tag{5}$$
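A minimal sketch of the operator and its decision inequality (plain Python, 0-based indexing on an open sub-route; the unit-square instance and the index choices are illustrative assumptions):

```python
import math

def two_opt(route, i, j, d):
    """Apply 2-opt on an open sub-route: if removing edges (c_i, c_{i+1})
    and (c_j, c_{j+1}) and reconnecting is cheaper (Equation (5)),
    reverse the segment between them; otherwise leave the route alone."""
    removed = d[route[i]][route[i + 1]] + d[route[j]][route[j + 1]]
    added = d[route[i]][route[j]] + d[route[i + 1]][route[j + 1]]
    if removed > added:  # accept only an improving move
        return route[: i + 1] + route[i + 1 : j + 1][::-1] + route[j + 1 :]
    return route

# Demo on a unit square: 0=(0,0), 1=(1,0), 2=(0,1), 3=(1,1).
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
d = [[math.dist(a, b) for b in pts] for a in pts]
print(two_opt([0, 3, 1, 2], 0, 2, d))  # → [0, 1, 3, 2] (uncrossed path)
```

Applying the same move to the already-improved path leaves it unchanged, since the inequality no longer holds.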

5.2. Reversion Operator

The reversion operator first locates the positions of two different cities of a potential basis probe and then reverses the local path between these two cities. Consider a potential basis probe $p = (c_1, c_2, \ldots, c_{i-1}, c_i, c_{i+1}, \ldots, c_{j-1}, c_j, c_{j+1}, \ldots, c_l)$, where $l = 5, 7, 9, \ldots, n$, and two cut points $i$ ($i = 1, 2, \ldots, n-1$) and $j$ ($j = i+1, \ldots, n$) on $p$. The reversion operator generates a new good basis probe, denoted by $p'$ and defined by $p' = (c_1, c_2, \ldots, c_{i-1}, c_j, c_{j-1}, \ldots, c_{i+1}, c_i, c_{j+1}, \ldots, c_l)$. To determine whether the reversion is beneficial, we check the following inequality:
$$\text{Reversion}(i, j):\ \{d(c_{i-1}, c_i) + d(c_j, c_{j+1})\} > \{d(c_{i-1}, c_j) + d(c_i, c_{j+1})\}. \tag{6}$$
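The reversion move can be sketched in the same style: it reverses the inclusive segment $c_i, \ldots, c_j$, so the boundary edges $(c_{i-1}, c_i)$ and $(c_j, c_{j+1})$ enter the decision inequality. Here, 0-based indexing and interior positions are assumed, so both boundary neighbors exist; the unit-square instance is again illustrative only.

```python
import math

def reversion(route, i, j, d):
    """Reverse the inclusive segment route[i..j] when the boundary-edge
    inequality of Equation (6) holds; i and j are assumed interior, so
    the neighbors route[i-1] and route[j+1] exist."""
    removed = d[route[i - 1]][route[i]] + d[route[j]][route[j + 1]]
    added = d[route[i - 1]][route[j]] + d[route[i]][route[j + 1]]
    if removed > added:
        return route[:i] + route[i : j + 1][::-1] + route[j + 1 :]
    return route

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]  # same unit-square demo instance
d = [[math.dist(a, b) for b in pts] for a in pts]
print(reversion([0, 3, 1, 2], 1, 2, d))  # → [0, 1, 3, 2]
```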

5.3. Swap Operator

The swap operator simply exchanges the positions of two cities of a potential basis probe to create a new good basis probe. Let two positions $i$ ($i = 1, 2, \ldots, n-1$) and $j$ ($j = i+1, \ldots, n$) be selected from a potential basis probe $p = (c_1, c_2, \ldots, c_{i-1}, c_i, c_{i+1}, \ldots, c_{j-1}, c_j, c_{j+1}, \ldots, c_l)$, where $l = 5, 7, 9, \ldots, n$. The new good basis probe obtained by applying the swap operator to $p$ is denoted by $p'$ and defined by $p' = (c_1, c_2, \ldots, c_{i-1}, c_j, c_{i+1}, \ldots, c_{j-1}, c_i, c_{j+1}, \ldots, c_l)$. We accept the new basis probe if the following inequality holds:
$$\text{Swap}(i, j):\ \begin{cases} \{d(c_{i-1}, c_i) + d(c_i, c_{i+1}) + d(c_{j-1}, c_j) + d(c_j, c_{j+1})\} > \{d(c_{i-1}, c_j) + d(c_j, c_{i+1}) + d(c_{j-1}, c_i) + d(c_i, c_{j+1})\}, & j - i \neq 1; \\ \{d(c_{i-1}, c_i) + d(c_j, c_{j+1})\} > \{d(c_{i-1}, c_j) + d(c_i, c_{j+1})\}, & j - i = 1. \end{cases} \tag{7}$$
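A sketch covering both branches of the inequality, including the special case of adjacent positions ($j - i = 1$), where the shared edge $(c_i, c_j)$ must not be double-counted. As before, 0-based indexing and interior positions are assumptions of the demo.

```python
import math

def swap(route, i, j, d):
    """Exchange the cities at positions i and j when the inequality of
    Equation (7) holds, with the special case for adjacent positions
    (j - i = 1); i and j are assumed interior."""
    a, b = route[i], route[j]
    if j - i == 1:  # adjacent cities share the edge (c_i, c_j)
        removed = d[route[i - 1]][a] + d[b][route[j + 1]]
        added = d[route[i - 1]][b] + d[a][route[j + 1]]
    else:
        removed = (d[route[i - 1]][a] + d[a][route[i + 1]]
                   + d[route[j - 1]][b] + d[b][route[j + 1]])
        added = (d[route[i - 1]][b] + d[b][route[i + 1]]
                 + d[route[j - 1]][a] + d[a][route[j + 1]])
    if removed > added:
        route = route[:]
        route[i], route[j] = route[j], route[i]
    return route

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]  # unit-square demo instance
d = [[math.dist(a, b) for b in pts] for a in pts]
print(swap([0, 3, 1, 2], 1, 2, d))  # → [0, 1, 3, 2]
```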

5.4. Insertion Operator

The insertion operator first picks two positions $i$ ($i = 1, 2, \ldots, n$) and $j$ ($j = 1, 2, \ldots, n$) with $j \neq i$, and then the city $c_i$ is inserted into the position just behind $c_j$. Consider a potential basis probe $p = (c_1, c_2, \ldots, c_{i-1}, c_i, c_{i+1}, \ldots, c_{j-1}, c_j, c_{j+1}, \ldots, c_l)$, where $l = 5, 7, 9, \ldots, n$. The new basis probe generated by this operator is defined by $p' = (c_1, c_2, \ldots, c_{i-1}, c_{i+1}, \ldots, c_{j-1}, c_j, c_i, c_{j+1}, \ldots, c_l)$. In fact, the new basis probe is accepted if the following inequality holds:
Insertion ( i , j ) = { d ( c i 1 , c i ) + d ( c i , c i + 1 ) + d ( c j , c j + 1 ) } > { d ( c i 1 , c i + 1 ) + d ( c j , c i ) + d ( c i , c j + 1 ) } , i < j ; { d ( c j 1 , c j ) + d ( c i 1 , c i ) + d ( c i , c i + 1 ) } > { d ( c j 1 , c i ) + d ( c i , c j ) + d ( c i 1 , c i + 1 ) } , i > j .
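The two insertion cases can be sketched as follows (an illustrative 0-indexed Python sketch of our own, not the paper's implementation; note that after popping position $i$, inserting at index $j$ places $c_i$ after $c_j$ when $i < j$ and before $c_j$ when $i > j$, matching the two cases):

```python
def insertion_gain(route, dist, i, j):
    # Gain of removing c_i and re-inserting it next to c_j,
    # following the two cases (i < j and i > j) of the text.
    ci = route[i]
    if i < j:
        old = dist[route[i - 1]][ci] + dist[ci][route[i + 1]] + dist[route[j]][route[j + 1]]
        new = dist[route[i - 1]][route[i + 1]] + dist[route[j]][ci] + dist[ci][route[j + 1]]
    else:
        old = dist[route[j - 1]][route[j]] + dist[route[i - 1]][ci] + dist[ci][route[i + 1]]
        new = dist[route[j - 1]][ci] + dist[ci][route[j]] + dist[route[i - 1]][route[i + 1]]
    return old - new

def apply_insertion(route, dist, i, j):
    # Move c_i next to c_j only when the move shortens the probe.
    if insertion_gain(route, dist, i, j) > 0:
        route = route[:]
        ci = route.pop(i)
        route.insert(j, ci)  # same index works for both cases (see note above)
    return route
```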

6. Two-Stage Search Optimization Algorithm

In order to maintain a good balance between the effectiveness and efficiency of probe-based route development, we present a two-stage search optimization algorithm for solving both symmetric and asymmetric TSPs. Functionally, the first stage constructs and develops an appropriate set of good complete routes by extending the probes step by step, while the second stage is a self-escape loop that prevents the search from being trapped in a locally optimal solution. We describe the two stages in detail in the following two subsections, respectively.

6.1. Stage 1—Good Complete Route Development

In the first stage, we consider the partial routes as basis probes and extend them step by step to construct and develop a set of good complete routes in the end. In each step, a proportion-value threshold is set as a filter to discard the worst partial routes or basis probes. Moreover, four local augmentation operators are employed to modify and improve the existing partial routes. According to the number of cities in the given TSP, this stage consists of a certain number of steps (defined in Equation (3)) to complete the route development. Except for the first one, each step implements three operations: basis probe or partial route generation, basis probe filtering, and basis probe improvement. The working steps of the first stage can be described as follows:
Step-1: 3-city basis probes are generated by taking all possible local two-path routes with each city as a center. Actually, a local two-path route is generated with three cities, in which one city is central or internal and the other two cities are adjacent to it. We simply refer to it as a 3-city basis probe. Considering an $n$-city TSP with city list $L = \{c_1, c_2, \ldots, c_n\}$, where $n \geq 3$, we let $c_i \in L$ be the city located at the $i$-th position and $L(c_i)$ be the set of cities adjacent to $c_i$; i.e., $L(c_i) = L \setminus \{c_i\}$. Then, the set of possible local two-path routes with internal city $c_i$ $(i = 1, 2, \ldots, n)$ and adjacent cities $c_j$ and $c_k$ can be defined by $L_2(c_i)$ as follows:
$L_2(c_i) = \{ c_j\text{--}c_i\text{--}c_k \equiv p_{ijk} : c_j, c_k \in L(c_i);\; i \neq j, k;\; j \neq k;\; j = 1, 2, \ldots, n, \text{ and } k = 1, 2, \ldots, n \},$
where $p_{ijk}$ represents a 3-city basis probe whose two outer wings are the edges $e_{ij}$ and $e_{ik}$, respectively. Actually, it can utilize these two wings to extend and develop the route in the next step. In total, the set of possible 3-city basis probes is the union of all $L_2(c_i)$, provided as follows:
$\Omega_3 = \bigcup_{i=1}^{n} L_2(c_i) = \bigcup_{i=1}^{n} \{ p_{ijk} : c_j, c_k \in L(c_i);\; i \neq j, k;\; j \neq k;\; j = 1, 2, \ldots, n, \text{ and } k = 1, 2, \ldots, n \}.$
After generating all the possible basis probes, the quality or fitness function $f(p_{ijk})$ of basis probe $p_{ijk}$ can be defined and computed from its cost (distance). That is, the quality of a basis probe is inversely proportional to its cost (distance); i.e., a basis probe with a higher value of $f(p_{ijk})$ is fitter, and vice versa. Then, these basis probes can be ordered through the fitness function; e.g., the order of the fittest basis probe is 1, the order of the next fittest one is 2, etc. Finally, we retain certain potential basis probes from all the generated ones with the help of a threshold-value filter (defined in Equation (4)). These retained basis probes are considered the good and root probes of this step. Actually, root probes are 3-city potential basis probes that are kept for further route extension. Assuming that $\psi_3$ is the threshold value of the filter (or proportion-value threshold) at this step, the set of the retained good and root basis probes can be constructed as follows:
$\Pi_3 = \{ p_{ijk} : p_{ijk} \in \Omega_3;\; \mathrm{order}(p_{ijk}) \leq \psi_3 n_{p3};\; n_{p3} = |\Omega_3| \},$
where $n_{p3}$ denotes the number of generated 3-city basis probes in $\Omega_3$. For clarity, we let $n_{g3} = |\Pi_3|$ be the number of retained good and root basis probes in $\Pi_3$.
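The Step-1 generation and proportion-threshold filtering can be sketched as follows (illustrative Python of our own, assuming a symmetric distance matrix so that each unordered wing pair $\{j, k\}$ is generated once; names and data layout are assumptions, not the paper's implementation):

```python
from itertools import combinations

def generate_3city_probes(dist):
    # All local two-path routes c_j - c_i - c_k with center c_i; for a
    # symmetric distance matrix each unordered wing pair {j, k} is taken once.
    n = len(dist)
    probes = []
    for i in range(n):
        for j, k in combinations([m for m in range(n) if m != i], 2):
            probes.append(((j, i, k), dist[j][i] + dist[i][k]))
    return probes

def filter_probes(probes, psi):
    # Rank by cost (order 1 = fittest, i.e., cheapest) and keep the
    # top psi fraction, mimicking the proportion-threshold filter.
    ranked = sorted(probes, key=lambda pc: pc[1])
    keep = max(1, int(psi * len(ranked)))
    return ranked[:keep]
```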
Step-2: In this step, we generate 5-city basis probes from the retained good 3-city basis probes obtained in Step-1. Indeed, each retained good basis probe of Step-1 can be hybridized with other consistent 3-city basis probes to generate the next possible 5-city basis probes. Actually, the set of possible 5-city basis probes can be constructed through the connection operations as follows:
$\Omega_5 = \{ p_{ijklm} : p_{ijk} \in \Pi_3;\; p_{jli}, p_{kim} \in \Omega_3;\; l \neq m;\; l, m \neq i, j, k \},$
where $p_{ijklm} \equiv c_l\text{--}c_j\text{--}c_i\text{--}c_k\text{--}c_m$ indicates a 5-city basis probe, as it is a route of 5 different cities. The two outer wings of this basis probe are the edges $e_{lj}$ and $e_{km}$, respectively. As in Step-1, the quality or fitness function $f(p_{ijklm})$ of basis probe $p_{ijklm}$ can be defined and computed in the same way to order the basis probes. Once the basis probes are constructed and their orders are obtained, the lower-fitness basis probes are filtered out, while the good potential basis probes are retained according to the threshold value of the filter. Actually, assuming that the filtering threshold or proportion value at this step is $\psi_5$ (defined in Equation (4)), the set of possible potential basis probes can be constructed as follows:
$T_5 = \{ p_{ijklm} : p_{ijklm} \in \Omega_5;\; \mathrm{order}(p_{ijklm}) \leq \psi_5 n_{p5};\; n_{p5} = |\Omega_5| \},$
where $n_{p5}$ is the number of generated 5-city basis probes in $\Omega_5$. The number $n_{r5}$ of retained potential basis probes can be denoted as $n_{r5} = |T_5|$. For the possible improvement of each potential basis probe $p_{ijklm} \in T_5$, the four local search operators (explained in Section 5) are implemented on it consecutively. If any better or fitter basis probe is developed, the earlier basis probe is directly replaced by the better one. As a result, these retained potential probes are developed and improved. For clarity, we refer to the improved basis probes as the good basis probes. Therefore, the set of good 5-city basis probes can be constructed as follows:
$\Pi_5 = \{ p'_{ijklm} : p'_{ijklm} = LS(p_{ijklm});\; p_{ijklm} \in T_5;\; f(p'_{ijklm}) \geq f(p_{ijklm}) \},$
where $p'_{ijklm}$ denotes the improved basis probe of $p_{ijklm}$, $LS$ denotes the total operation of implementing the four local augmentation operators consecutively, and $f(p'_{ijklm})$ and $f(p_{ijklm})$ are the fitness values of the developed and original basis probes, respectively. Therefore, it can easily be found that $n_{g5} = |\Pi_5| \leq n_{r5} = |T_5|$, since each probe in $T_5$ contributes at most one improved probe to $\Pi_5$.
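A single probe-extension (hybridization) step can be sketched as follows: a retained probe is grown by attaching one new city to each wing, and only the two new wing edges change its cost (an illustrative sketch of our own, not the paper's implementation):

```python
def extend_probe(probe, cost, dist, l, m):
    # Hybridize: attach city l to the left wing and city m to the right wing,
    # producing a (len + 2)-city probe together with its updated cost.
    assert l != m and l not in probe and m not in probe
    new_probe = (l,) + tuple(probe) + (m,)
    new_cost = cost + dist[l][probe[0]] + dist[probe[-1]][m]
    return new_probe, new_cost
```

For example, extending the 3-city probe $c_1$--$c_0$--$c_2$ with wing cities $c_3$ and $c_4$ yields the 5-city probe $c_3$--$c_1$--$c_0$--$c_2$--$c_4$, and its cost only grows by the two new wing edges.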
Like in Step-2, the basis probe generation, filtering, and improvement operations are carried out in the remaining steps of the complete route development process. In general, after completing the filtering task of the $(k+1)$-th step, we obtain the set of $(2k+3)$-city potential basis probes, which can be denoted by $T_{2k+3}$ and constructed by
$T_{2k+3} = \{ p_{ijklm \cdots tsvuhg} : p_{ijklm \cdots tsvuhg} \in \Omega_{2k+3};\; \mathrm{order}(p_{ijklm \cdots tsvuhg}) \leq \Psi_{2k+3} n_{p,2k+3};\; n_{p,2k+3} = |\Omega_{2k+3}| \}.$
In Equation (15), $p_{ijklm \cdots tsvuhg}$ is a retained $(2k+3)$-city potential basis probe, $\Omega_{2k+3}$ denotes the set of possible generated basis probes at the $(k+1)$-th step, expressed by Equation (17), $n_{p,2k+3}$ denotes the number of basis probes in $\Omega_{2k+3}$, and $\Psi_{2k+3}$ is the filtering proportion value at the $(k+1)$-th step. Each retained potential basis probe $p_{ijklm \cdots tsvuhg}$ is further improved by the four local augmentation operators. Thus, the set of $(2k+3)$-city good basis probes is obtained at the end of the $(k+1)$-th step, which is provided by
$\Pi_{2k+3} = \{ p'_{ijklm \cdots tsvuhg} : p'_{ijklm \cdots tsvuhg} = LS(p_{ijklm \cdots tsvuhg});\; p_{ijklm \cdots tsvuhg} \in T_{2k+3};\; f(p'_{ijklm \cdots tsvuhg}) \geq f(p_{ijklm \cdots tsvuhg}) \},$
$\Omega_{2k+3} = \{ p_{ijklm \cdots tsvuhg} : p_{ijklm \cdots tsvu} \in \Pi_{2k+1};\; p_{vht}, p_{usg} \in \Omega_3;\; h \neq g;\; h, g \neq v, t, \ldots, l, j, i, k, m, \ldots, s, u \}.$
In Equation (16), $p'_{ijklm \cdots tsvuhg}$ is the improved basis probe obtained by implementing the four local search operators on the basis probe $p_{ijklm \cdots tsvuhg}$, and $f(p'_{ijklm \cdots tsvuhg})$ and $f(p_{ijklm \cdots tsvuhg})$ represent the fitness values of the improved and original basis probes, respectively.
In Equation (17), $p_{ijklm \cdots tsvuhg} \equiv c_h\text{--}\underbrace{c_v\text{--}c_t\text{--}\cdots\text{--}c_l\text{--}c_j\text{--}c_i\text{--}c_k\text{--}c_m\text{--}\cdots\text{--}c_s\text{--}c_u}_{(2k+1)\text{-city good probe}}\text{--}c_g$ is a $(2k+3)$-city generated basis probe having the outer wings $e_{vh}$ and $e_{ug}$, $\Pi_{2k+1}$ is the set of $(2k+1)$-city good basis probes obtained from the $k$-th step, and $p_{vht}$ and $p_{usg}$ represent 3-city basis probes in $\Omega_3$.
In this way, after executing the last step, the route development process has produced a set of good basis probes, $\Pi_n$, where each good basis probe consists of all $n$ cities; i.e., it is a complete route for the TSP. It should be noted that, in the last step for a TSP with an even number of cities, one city remains to be connected, and thus the basis probe uses either of its wings to include the remaining city properly.

6.2. Stage 2—Self-Escape Mechanism

After the first stage, we arrive at a set of good complete basis probes or routes as the search result of the TSP. However, there may be some stagnant complete basis probes that can be considered trapped in a locally optimal solution during our route development and search process. In order to alleviate this locally trapped search problem, we couple our general route development and search process (the first stage) with a self-escape mechanism, which is referred to as the second stage, i.e., Stage 2. Through this self-escape mechanism, we can enhance the diversity of the complete basis probes and further improve the search results. In fact, the self-escape mechanism was first introduced into the PSO algorithm by Wang et al. [15] in 2007. Recently, Wang et al. [16] applied it to promote the performance of the DSOS algorithm. Here, we utilize it to solve our locally trapped search problem: when a complete basis probe is trapped in a local optimum solution, we force it to jump out of the local optimum and search for a better complete basis probe. Specifically, this self-escape mechanism accomplishes two different tasks: first, identifying whether a complete basis probe is trapped in a local optimum, and, second, if it is, helping it escape so as to develop a new solution.
Let $p_l, p_{best} \in \Pi_n$, where $p_l$ $(l = 1, 2, \ldots, g_n)$ and $p_{best}$ represent a complete basis probe and the current best basis probe in $\Pi_n$, respectively. We judge that a complete basis probe $p_l$ is a local optimum solution if and only if the following inequality holds [15,16]:
$|\Gamma_l| < \frac{1}{g_n} \sum_{l=1}^{g_n} |\Gamma_l|,$
where
$\Gamma_l = E(p_l) \cap E(p_{best}),$
where $E(p_l)$ and $E(p_{best})$ denote the sets of edges in $p_l$ and $p_{best}$, respectively, $\Gamma_l$ denotes the set of common edges between $E(p_l)$ and $E(p_{best})$, $|\Gamma_l|$ denotes the number of common edges in $\Gamma_l$, and $g_n$ is the number of complete basis probes in $\Pi_n$. That is, if the inequality of Equation (18) holds, $p_l$ is believed to be trapped in a local optimum solution. In such a situation, we implement the self-escape mechanism on $p_l$ via the following 3-opt operator so that it can be transformed into promising basis probes.
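The common-edge criterion can be sketched as follows (illustrative Python of our own; the undirected-edge representation assumes a symmetric TSP, and names are not taken from the paper's implementation):

```python
def edge_set(route, symmetric=True):
    # Edges of a complete tour (closed ring); undirected for symmetric TSP.
    n = len(route)
    edges = set()
    for q in range(n):
        a, b = route[q], route[(q + 1) % n]
        edges.add(frozenset((a, b)) if symmetric else (a, b))
    return edges

def trapped_probes(probes, best):
    # A probe counts as trapped in a local optimum when it shares fewer
    # edges with the current best tour than the population average does.
    best_edges = edge_set(best)
    shared = [len(edge_set(p) & best_edges) for p in probes]
    avg = sum(shared) / len(shared)
    return [p for p, s in zip(probes, shared) if s < avg]
```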
We first remove 3 edges from the stagnant complete basis probe and divide it into three partial routes, since it is considered a ring route. We then reconnect these partial routes in different ways to generate new complete routes that may be better than the original route. For example, let $p_l$ be a stagnant complete basis probe, and select its three edges, namely $(c_i, c_{(i+1) \bmod n})$ with $i = 0, 1, 2, \ldots, n-1$; $(c_j, c_{(j+1) \bmod n})$, where $j = (i + m) \bmod n$ and $m = 1, 2, \ldots, n-3$; and $(c_k, c_{(k+1) \bmod n})$, where $k = (i + t) \bmod n$ and $t = m+1, \ldots, n-1$. Removing these 3 edges from $p_l$ produces 3 partial routes, namely $p_{lq}$, where $q = 1, 2, 3$. The reverse of partial route $p_{lq}$ is denoted by $\bar{p}_{lq}$. Then, a new complete probe can be generated by combining the $p_{lq}$ and $\bar{p}_{lq}$ in the following seven different ways (all indices taken mod $n$):
  • $\{ p_{l1}, \bar{p}_{l3}, \bar{p}_{l2} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_k, c_{k+1}) > d(c_i, c_k) + d(c_{i+1}, c_{k+1})$
  • $\{ p_{l1}, p_{l2}, \bar{p}_{l3} \}$, judged by the inequality $d(c_j, c_{j+1}) + d(c_k, c_{k+1}) > d(c_j, c_k) + d(c_{j+1}, c_{k+1})$
  • $\{ p_{l1}, \bar{p}_{l2}, p_{l3} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_j, c_{j+1}) > d(c_i, c_j) + d(c_{i+1}, c_{j+1})$
  • $\{ p_{l1}, \bar{p}_{l2}, \bar{p}_{l3} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_j, c_{j+1}) + d(c_k, c_{k+1}) > d(c_i, c_j) + d(c_{i+1}, c_k) + d(c_{j+1}, c_{k+1})$
  • $\{ p_{l1}, \bar{p}_{l3}, p_{l2} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_j, c_{j+1}) + d(c_k, c_{k+1}) > d(c_i, c_k) + d(c_{j+1}, c_{i+1}) + d(c_j, c_{k+1})$
  • $\{ p_{l1}, p_{l3}, \bar{p}_{l2} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_j, c_{j+1}) + d(c_k, c_{k+1}) > d(c_i, c_{j+1}) + d(c_k, c_j) + d(c_{i+1}, c_{k+1})$
  • $\{ p_{l1}, p_{l3}, p_{l2} \}$, judged by the inequality $d(c_i, c_{i+1}) + d(c_j, c_{j+1}) + d(c_k, c_{k+1}) > d(c_i, c_{j+1}) + d(c_k, c_{i+1}) + d(c_j, c_{k+1})$
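The seven gain tests above can each be evaluated in O(1) from the removed and candidate edges alone (illustrative Python of our own, not the paper's implementation; `tour` is a closed ring route and `dist` a distance matrix):

```python
def three_opt_gains(tour, dist, i, j, k):
    # Gains of the seven reconnections after removing the three edges
    # (c_i, c_{i+1}), (c_j, c_{j+1}), (c_k, c_{k+1}) from the ring route;
    # a positive gain means the corresponding reconnection shortens the tour.
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    e, f = tour[k], tour[(k + 1) % n]
    base3 = dist[a][b] + dist[c][d] + dist[e][f]
    return [
        dist[a][b] + dist[e][f] - dist[a][e] - dist[b][f],  # case 1
        dist[c][d] + dist[e][f] - dist[c][e] - dist[d][f],  # case 2
        dist[a][b] + dist[c][d] - dist[a][c] - dist[b][d],  # case 3
        base3 - (dist[a][c] + dist[b][e] + dist[d][f]),     # case 4
        base3 - (dist[a][e] + dist[d][b] + dist[c][f]),     # case 5
        base3 - (dist[a][d] + dist[e][c] + dist[b][f]),     # case 6
        base3 - (dist[a][d] + dist[e][b] + dist[c][f]),     # case 7
    ]
```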
We use cases 3, 6, and 7 for the symmetric TSP and all seven cases for the asymmetric TSP to generate new complete basis probes. The flowchart and pseudocode of our proposed two-stage search optimization algorithm are presented in Figure 2 and Algorithm 2, respectively.
Algorithm 2: Pseudocode of the proposed two-stage probe-based search optimization algorithm
Input: Distance matrix $d_{n \times n}$, problem size $n$, filter thresholds $\Psi_{2k+1}$
Output: Best complete probe $p_{best}$, fitness of the best complete probe $f(p_{best})$
  • For each city $c_i$, $i = 1$ to $n$
  •  Construct the 3-city basis probes $p_{ijk}$ with center $c_i$ and insert them into the set $\Omega_3$
  •  Calculate the fitness value $f(p_{ijk})$ of each basis probe $p_{ijk} \in \Omega_3$
  • End for
  • Determine the order of each basis probe $p_{ijk} \in \Omega_3$ on the basis of $f(p_{ijk})$
  • Retain potential basis probes based on the filter $\Psi_3$ and insert them into the set $\Pi_3$
  • For each step $k$, $k = 2$ to the maximum step $\lfloor n/2 \rfloor$
  •  Construct new basis probes $p^{2k+1}$ based on the retained potential basis probes $p^{2k-1} \in \Pi_{2k-1}$ and insert them into the set $\Omega_{2k+1}$
  •  Calculate the fitness value $f(p^{2k+1})$ of each new basis probe $p^{2k+1} \in \Omega_{2k+1}$
  •  Determine the order of each basis probe $p^{2k+1} \in \Omega_{2k+1}$ on the basis of $f(p^{2k+1})$
  •  Retain potential basis probes based on the filter $\Psi_{2k+1}$ and insert them into the set $T_{2k+1}$
  •  For each basis probe $p^{2k+1}_l \in T_{2k+1}$, $l = 1$ to $n_{r,2k+1} = |T_{2k+1}|$
  •   Improve the retained potential basis probe $p^{2k+1}_l$ by applying
      (i) the 2-opt operator, (ii) the insertion operator,
      (iii) the reversion operator, and (iv) the swap operator
  •   Let $p^{2k+1}_{l(im)}$ be the improved probe and $f(p^{2k+1}_{l(im)})$ be its fitness value
  •   If $f(p^{2k+1}_{l(im)}) > f(p^{2k+1}_l)$, then
  •    Update the retained potential basis probe by assigning $p^{2k+1}_l \leftarrow p^{2k+1}_{l(im)}$
  •    Update the fitness value by assigning $f(p^{2k+1}_l) \leftarrow f(p^{2k+1}_{l(im)})$
  •   End if
  •  End for
  •  The set $T_{2k+1}$ of retained potential basis probes, thus updated, is identified as the set of good basis probes $\Pi_{2k+1}$
  • End for
  • Apply the self-escape mechanism to each stagnant complete basis probe (if any) of $\Pi_{2\lfloor n/2 \rfloor + 1}$
  • Determine the best complete probe $p_{best}$ from $\Pi_{2\lfloor n/2 \rfloor + 1}$ and its fitness value $f(p_{best})$
  • Return the best complete probe $p_{best}$ and its fitness value $f(p_{best})$

7. Experimental Results and Analysis

In this section, various experiments are carried out to evaluate the performance of our proposed algorithm on typical benchmark TSP datasets with different numbers of cities [17,18,19,20]. The obtained experimental results are further compared with the best-known TSP benchmark results reported by the data library, as well as with the results obtained by the state-of-the-art algorithms. Finally, a rigorous statistical analysis is conducted to substantiate the advantages of our proposed algorithm over the other state-of-the-art optimization algorithms.

7.1. Experimental Configurations and Evaluation Protocol

To conduct the experiments on the datasets whose city size is up to 561, we use a desktop computer with an Intel Core i5-4590 3.30 GHz processor, 8 GB RAM, and a 64-bit Windows 10 operating system. For the other datasets, which require more computational resources, we run the algorithm on a Linux system with a 2-core GPU. The proposed algorithm is implemented in MATLAB R2019a for all the simulations. Two performance evaluation indicators, error (measured in %) and computational time (measured in seconds), are calculated to evaluate the performance of the proposed algorithm. The percentage deviation of the simulated result from the best-known result (i.e., the error) is computed by the following formula:
$\mathrm{Error}(\%) = \frac{Z(p) - Z(p^*)}{Z(p^*)} \times 100,$
where $Z(p)$ and $Z(p^*)$ denote the obtained solution and the best-known solution on a particular TSP dataset, respectively. For the execution time, we run the algorithm 10 times independently on each TSP dataset and compute the average and standard deviation (SD) of the times.
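For instance, the error computation can be written as follows (a trivial sketch; negative values indicate a solution better than the best-known one):

```python
def percent_error(obtained, best_known):
    # Error(%) = (Z(p) - Z(p*)) / Z(p*) * 100
    return (obtained - best_known) / best_known * 100.0
```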

7.2. Performance Evaluation and Analysis

In this subsection, we conduct experiments with our proposed algorithm on 83 symmetric and 18 asymmetric benchmark TSP test problems to evaluate its performance. The experimental results are summarized in Table 2, where the “BKS” column indicates the best-known solution reported by the data library, while the “Our Result” column indicates the solution obtained by our proposed algorithm. The “Difference” and “Error (%)” columns denote the deviation and the percentage deviation of the obtained result from the best-known result, respectively. The “computational time(s)” column reports the average execution time (in seconds) of the algorithm, with the SD value, over 10 runs. We boldface the names of the datasets for which the best-known solutions or new solutions are obtained by our proposed algorithm.
It can be observed from Table 2 that our proposed algorithm yields solutions very close to the best-known ones on the considered symmetric and asymmetric TSP test problems; in fact, the errors are very small. Our proposed algorithm has found exactly the best-known solution for each symmetric TSP dataset whose city size is up to 180 (except ch150), for some other symmetric TSP datasets such as tsp225, ts225, pr264, si535, and si1032, and for almost all the asymmetric TSP datasets (except ftv64, ft70, rbg323, and rbg358). Specifically, in some cases (shown separately in Table 3), it exhibits a strong global exploration capability and produces solutions better than the best-known ones, marked by negative percentage error entries in the table. Statistically, our proposed algorithm obtains exactly the best-known solution in 65.06% of the symmetric datasets (54 out of 83) and in 77.78% of the asymmetric datasets (14 out of 18). For the rest of the TSP test problems, the error is no more than 1.84% and 0.60% for the symmetric and asymmetric cases, respectively. In addition, the error lies in the interval [0.007%, 0.70%] in 26.73% (27 out of 101) of the cases, while in only 4.95% (5 out of 101) of the cases is it more than 1%. Furthermore, the average error over the entire 101 datasets is 0.14%, with a standard deviation of 0.38, which strongly demonstrates the outstanding performance of our proposed algorithm.
On the other hand, in terms of execution time, our proposed algorithm consumes a small amount of time to solve small-scale TSP problems. As the scale of the TSP problem expands ($n < 417$), its computational time may increase or decrease; for example, it takes 59.19 s to solve the gr96 problem, while it requires only 10.88 s for the brg180 problem. For the large-scale datasets ($n \geq 417$), the computational time increases rapidly with the problem scale. Although our proposed algorithm takes longer in the large-scale case, the quality of the solution is satisfactory (the maximum error is 1.84%), and its average computational time is acceptable. Therefore, we can consider our proposed algorithm a reliable search optimization method that provides a good-quality solution for a general TSP within an acceptable time frame. Most importantly, it is deterministic; i.e., running it on a given TSP dataset yields the same result every time. The new best routes found by our proposed algorithm, together with their route lengths, are displayed in Figure 3.

7.3. Performance Comparison

In this subsection, we further compare our proposed algorithm with the state-of-the-art optimization algorithms. Table 4 describes the details of our selected state-of-the-art optimization algorithms for solving the TSPs. Specifically, our proposed algorithm is first compared with the state-of-the-art optimization algorithms with a self-escape mechanism and then compared with the other state-of-the-art algorithms (without a self-escape mechanism). Actually, the experimental results of our proposed approach are compared with those of the comparative algorithms reported in the literature, displayed in Table 5, Table 6, Table 7 and Table 8. The average (of solution route length) on all the TSP problems or datasets and the number of BKSs found corresponding to each comparison are presented at the bottom of each compared algorithm. Indeed, the number of BKSs found indicates how many datasets in which the algorithm can exactly find the best-known solution.

7.3.1. Comparison with Self-Escape Mechanism-Based Algorithms

We begin by comparing our proposed algorithm with the state-of-the-art optimization algorithms that use the self-escape mechanism. In fact, the self-escape strategy or mechanism was adopted to solve symmetric TSPs [15,16]. In the self-escape hybrid DPSO (SEHDPSO) algorithm [15], the five nearest neighbors of each node were considered to escape the local optimum of the current best route. On the other hand, a swap-based randomized local search operator was coupled with the self-escape hybrid DSOS (ECSDSOS) algorithm [16] to find a satisfactory solution. However, these algorithms tend to produce longer routes or relatively worse solutions. To find better-quality solutions, we employ an efficient 3-opt operator in the self-escape mechanism stage of our proposed algorithm.
Specifically, Table 5 displays the comparison results of our proposed algorithm with self-escape mechanism-based algorithms such as the SEHDPSO and ECSDSOS algorithms. In comparison with the SEHDPSO algorithm, it can be seen from Table 5 that our proposed algorithm finds better solutions than the average as well as the best solutions of the SEHDPSO algorithm in almost all the TSP test problems or datasets. In fact, the average error of our proposed algorithm over 20 datasets is 0.013864 % , which is significantly better than the average error 0.20200 % and the best solution error 0.09200 % of the SEHDPSO algorithm. At the same time, our proposed algorithm captures the best-known solutions in more datasets (fourteen cases) than the SEHDPSO algorithm (one case and eight cases in the two versions). In comparison with the ECSDSOS algorithm, it can also be seen from Table 5 that the average error and BKS finding number of our proposed algorithm over 24 datasets are 0.20601 % and 11, respectively, while those of the ECSDSOS algorithm are, respectively, 0.89683 % and 0 (on the average solution version), and 0.45110 % and 9 (on the best solution version). In addition, our proposed algorithm yields new solutions on certain datasets (negative entries in the table), but neither of these two algorithms can find such a solution. It can be further observed from Table 5 that, for certain small-scale datasets, all three algorithms are capable of finding the best-known solution. However, as the scale becomes larger and larger, our proposed algorithm shows better performance than both the SEHDPSO and ECSDSOS algorithms.

7.3.2. Comparison with the Other State-of-the-Art Optimization Algorithms

We further compare our proposed algorithm with the other typical state-of-the-art optimization algorithms without a self-escape mechanism. Actually, those optimization algorithms were tested on different sets of datasets. Thus, we use different sets of datasets in each group to compare the proposed algorithm with them. Specifically, Table 6, Table 7 and Table 8 offer side-by-side comparisons between our proposed algorithm and 21 state-of-the-art TSP-solving optimization algorithms.
The performance comparison of our proposed algorithm with ASA-GS [21], GSA-ACO-PSO [22], HGA+2local [24], IVNS [29], HSIHM+2local [31], ABCSS [30], DSMO [35], MMA [2], and DSCA+LS [34] is illustrated in Table 6. According to Table 6, in comparison with ASA-GS, GSA-ACO-PSO, HGA+2local, IVNS, DSMO, and DSCA+LS for the symmetric TSP datasets, our proposed algorithm achieves better results than the average results of all six algorithms in almost all the datasets except in one case of ASA-GS, in four cases of GSA-ACO-PSO, and in one case of IVNS. On the other hand, in comparison with the best results of these algorithms, the performance of our proposed algorithm is only inferior in two cases out of forty-five with ASA-GS, in six cases out of twenty-five regarding GSA-ACO-PSO, no worse out of twenty-one with HGA+2local, in one case out of fifty-seven related to IVNS, no worse out of forty concerning DSMO, and in one case out of twenty-seven connected to DSCA+LS. On the remaining test datasets, our proposed algorithm achieves better or equal scores with the above six algorithms. In addition to these, our proposed algorithm traces the best-known solution in nineteen cases as compared with five out of forty-five for ASA-GS, in eleven cases as compared with thirteen out of twenty-five for GSA-ACO-PSO, in sixteen cases as compared with four out of twenty-one for HGA+2local, in twenty-seven cases as compared with twelve out of fifty-seven for IVNS, in twenty-one cases as compared with five out of forty for DSMO, and in eighteen cases as compared with seventeen out of twenty-seven for DSCA+LS. It is also notable that, for some small-scale datasets, our proposed algorithm yields similar scores to these comparative algorithms. However, as the scale becomes larger and larger, it finds better results than these six comparative optimization algorithms.
As demonstrated in Table 6, it is clear that, on most of the considered datasets, except three small-scale ones, namely ftv64, eil51, and berlin52, our proposed algorithm is better than HSIHM+2local and ABCSS for symmetric and asymmetric TSPs. On the test dataset ftv64, its solution is not better than the best solution of HSIHM+2local, but it is better than the average one. It is noticed that our proposed algorithm determines the best-known solution in 14 and 11 datasets, whereas HSIHM+2local and ABCSS provide such a solution in eight and eleven datasets, respectively. It is also observed that, for some small-scale datasets, all three algorithms determine the best-known solutions. As the scale becomes increasingly large, our proposed algorithm achieves greater global exploration capability than both the HSIHM+2local and ABCSS algorithms. As shown in Table 6, it is observed that our proposed algorithm outperforms the deterministic algorithm MMA for symmetric TSP datasets on nearly all the considered datasets. Indeed, MMA performs better on dj38 and eil76 and achieves the best-known solution in one case, while our proposed algorithm is better on 22 datasets and captures the best-known solutions in 12 cases. In addition, MMA is suitable only for small-size TSP datasets, whereas our proposed algorithm can be applied to both small- and relatively large-scale datasets. Therefore, it can be concluded that the performance of our proposed algorithm is better in 87.5% (21 out of 24) of the cases compared to MMA.
Table 7 and Table 8 demonstrate the comparison of our proposed algorithm with IBA [26], DWCA [28], DSOS [27], MCF-ABC [32], GA-PSO-ACO [23], DIWO [25], PRGA [33], MPSO [3], SCGA [37], VDWOA [36], DSSA [8], and DA-GVNS [38]. According to Table 7 and Table 8, in comparison with DSOS, GA-PSO-ACO, DIWO, MPSO, SCGA, and DSSA for the symmetric TSP datasets, our proposed algorithm provides better results than the average results of all six algorithms regarding almost all the datasets except in two cases of DSOS, in one case of GA-PSO-ACO, in one case of DIWO, and in four cases of DSSA. On the other hand, compared with the best results of these algorithms, the performance of our algorithm is inferior in five cases out of twenty-eight with DSOS, in four cases out of thirty regarding GA-PSO-ACO, in three cases out of nineteen related to DIWO, in three cases out of thirty-five with MPSO, no worse out of twelve concerning SCGA, and in five cases out of thirty-two connected to DSSA. On the remaining test datasets, our proposed algorithm achieves better or equal results with these six algorithms. In addition, our proposed algorithm obtains the best-known solution in fourteen cases as compared with eleven out of twenty-eight by DSOS, in eleven cases as compared with one out of thirty by GA-PSO-ACO, in eight cases as compared with zero out of nineteen by DIWO, in twenty-three cases as compared with twenty-five out of thirty-five by MPSO, in six cases as compared with one out of twelve by SCGA, and in nineteen cases as compared with twenty-two out of thirty-two by DSSA. In a word, the solutions obtained by our algorithm have better accuracy than those obtained by the above six comparative algorithms.
According to the results in Table 7 and Table 8, in comparison with IBA, DWCA, and DA-GVNS for symmetric and asymmetric test problems, our proposed algorithm produces better or equal solutions compared to the average as well as the best solutions of these three algorithms in each considered dataset by excluding two smaller datasets, namely eil51 and berlin52. Moreover, our proposed algorithm obtains the best-known solution in 23 cases as compared with 18 out of 29 by IBA, in 23 cases as compared with 14 out of 27 by DWCA, and in 41 cases as compared with 20 out of 68 by DA-GVNS. It is also observed that our proposed algorithm significantly outperforms these three algorithms regarding almost all the datasets. Specifically, regarding the large-scale datasets, it obtains more accurate results than these three comparative algorithms. As demonstrated in Table 7, in comparison with the best results of MCF-ABC, VDWOA, and PRGA for symmetric test problems, it is apparent that our proposed algorithm achieves worse solutions than those computed by VDWOA and PRGA on the datasets of oliver30, berlin52, st70, rat99, and kroA200. On the rest of the test datasets, it obtains similar or better results than the best of these algorithms. Actually, our proposed algorithm obtains the best-known solution in twenty-eight cases as compared with twenty-seven out of twenty-eight by MCF-ABC, in four cases as compared with two out of twelve by VDWOA, and in five cases as compared with one out of nine by PRGA. It is also evident that MCF-ABC has a strong capability regarding escaping the local optimum; nevertheless, this algorithm is not capable of producing a better solution, which is obtained by our algorithm in seven cases.
From the above discussion and analysis, we conclude that our proposed algorithm achieves better results, on both the average and the best solutions, than all 21 comparative algorithms on almost all TSP test datasets. Overall, its average route length is smaller than that of each comparative algorithm, and it attains the best-known solution on more datasets than any of them; in terms of solution quality, it is therefore superior to these benchmark optimization algorithms. In the literature, it is generally believed that meta-heuristic and hybrid optimization algorithms can provide good-quality solutions for TSPs; our proposed algorithm nevertheless outperforms such algorithms, including recently improved ones. Moreover, most of the benchmark algorithms (except IBA, DWCA, ABCSS, and DA-GVNS) require many fine-tuned parameters and address either the symmetric or the asymmetric TSP, whereas our proposed algorithm handles both cases by tuning only one parameter. Finally, our proposed algorithm contains no randomness, so running it multiple times on a dataset always produces the same result; it is deterministic, while most existing standard optimization algorithms are probabilistic and their solutions may change from run to run. This makes it particularly convenient for practical applications.
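For reference, the Difference and Error (%) columns reported in Tables 2 and 3 follow the usual percentage-deviation convention, Error (%) = 100 × (Our Result − BKS) / BKS, so a negative value indicates a route shorter than the best-known solution. A minimal sketch (the function name `percentage_error` is ours, not from the paper), checked against the wi29 row of Table 3:

```python
def percentage_error(bks, result):
    """Return (difference, error %) of a tour length against the best-known solution (BKS)."""
    difference = result - bks
    return difference, 100.0 * difference / bks

# wi29 from Table 3: BKS = 27,603, our result = 27,601
diff, err = percentage_error(27603.0, 27601.0)
print(diff, round(err, 4))  # prints: -2.0 -0.0072
```

A negative error of −0.0072% therefore means the reported tour improves on the best-known solution by 2 length units.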

7.4. Statistical Analysis

To assess the difference between the performance of our proposed algorithm and the comparative optimization algorithms, we finally conduct a rigorous statistical analysis in this subsection. Specifically, the popular Wilcoxon signed rank test [32,39] at the 95% confidence level is applied to statistically examine the superiority of the proposed algorithm over the other standard optimization algorithms. In this non-parametric test, we compare our proposed algorithm with each comparative optimization algorithm in turn. The test statistic is calculated by the following formula:
W_cal,N = min(W⁻, W⁺),
where N is the number of datasets tested by the two algorithms; if the two algorithms perform equally on a dataset, that dataset is ignored and N is adjusted accordingly. W⁻ denotes the sum of ranks of the datasets on which our proposed algorithm performs better than the comparative algorithm, while W⁺ denotes the sum of ranks of the datasets on which the comparative algorithm dominates our proposed algorithm. The rank of a dataset is defined by the ascending order of the absolute error difference between the two algorithms (i.e., rank 1 goes to the dataset with the smallest absolute difference, rank 2 to the next, and so on). min(W⁻, W⁺) returns the smaller of W⁻ and W⁺.
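The computation of the statistic can be sketched as follows. This is our own minimal illustration of the procedure described above (the function name and the sample deviations are made up; tie handling via average ranks is the standard convention of the test [39]), not the authors' code:

```python
def wilcoxon_statistic(proposed, comparative):
    """Compute W_cal,N = min(W-, W+) for paired percentage deviations."""
    # Pair up the deviations; datasets where both algorithms tie are dropped,
    # which reduces the effective N.
    diffs = [c - p for p, c in zip(proposed, comparative) if c != p]
    n = len(diffs)
    # Rank |differences| in ascending order, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average of the tied positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    # d > 0 means the comparative deviation is larger, i.e. the proposed
    # algorithm performed better on that dataset.
    w_minus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_plus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_minus, w_plus), w_minus, w_plus, n

# Made-up PDavg values on four datasets; one tie is dropped, so N = 3.
w_cal, w_minus, w_plus, n = wilcoxon_statistic([0.0, 0.1, 0.2, 0.0],
                                               [0.5, 0.1, 0.6, 0.3])
# Here every non-tied dataset favors the proposed algorithm,
# so W+ = 0 and W_cal,N = 0.
```

W_cal,N is then compared against the tabulated critical value W_cri,N to decide whether the null hypothesis can be rejected.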
The critical values corresponding to N effective datasets (W_cri,N) at different confidence levels can be found in [39]. If W_cal,N > W_cri,N, we fail to reject the null hypothesis (H₀: there is no significant difference between the performance of the two algorithms). On the other hand, if W_cal,N ≤ W_cri,N, the null hypothesis is rejected and the test concludes that there is a significant difference between the performance of the two algorithms. Since most of the comparative algorithms (except MMA) are probabilistic, we perform the statistical test separately on the percentage deviation of the average results (PDavg(%)) and the percentage deviation of the best results (PDbest(%)). The test results are summarized in Table 9, where '*' indicates that the test result is undetermined (a critical value at the 95% confidence level requires N ≥ 6) and '-' means that the original reference did not provide any results.
According to the test results in Table 9, on the test with PDavg(%), in all cases except ABCSS the statistical test yields W_cal,N ≤ W_cri,N with W⁻ > W⁺. That is, the test suggests that our proposed algorithm is significantly better than all the comparative algorithms except ABCSS, and it remains comparable with ABCSS since W⁻ > W⁺. Therefore, our proposed algorithm outperforms all the state-of-the-art comparative algorithms; in particular, it strictly dominates eight recent optimization algorithms, for which W⁺ = 0. On the other hand, on the test with PDbest(%), the results indicate that the performance of our proposed algorithm is significantly better than that of all the comparative algorithms except SEHDPSO and GSA-ACO-PSO, owing to W_cal,N ≤ W_cri,N and W⁻ > W⁺. Its advantage over SEHDPSO and GSA-ACO-PSO is not statistically significant; however, our proposed algorithm is still comparable with them since W⁻ > W⁺.

8. Conclusions

We have established a reliable two-stage optimization algorithm that deterministically solves both symmetric and asymmetric TSPs. It utilizes the probe concept to design local augmentation operators that generate and develop TSP routes dynamically, step by step, and uses a proportion threshold to filter out the worst routes automatically at each step. Furthermore, a self-escape mechanism is imposed on each stagnant complete route to enable further variation and improvement. Experiments on various real-world TSP datasets demonstrate that our proposed algorithm outperforms state-of-the-art optimization algorithms with respect to solution accuracy. In addition, it attains the best-known solution on a significant number of datasets and even finds a new best solution in certain cases (as shown in Table 3). Since our proposed algorithm is designed without randomness, it is deterministic, which offers a reproducibility advantage over existing algorithms. Moreover, it can deal with both symmetric and asymmetric TSPs by fitting only one parameter, while most existing standard optimization algorithms are designed with many fine-tuned parameters for either symmetric or asymmetric problems.
The main drawback of our proposed optimization algorithm is the computational time required to solve large-scale datasets; indeed, as the number of cities grows, a general computer system will eventually run out of memory and stop functioning. It would therefore be beneficial to extend the current work by reducing the computational time on large-scale datasets. In the future, we plan to improve the computational complexity by integrating our algorithm with the Delaunay Triangulation (DT) geometric concept, because DT provides a sparse set of candidate edges, rather than all edges, that are likely to appear in an optimal TSP solution, even though the triangulation itself need not contain a TSP route [40,41]. We also plan to apply this framework to other NP-complete problems, such as the vehicle routing problem, the job-shop scheduling problem, and the flow-shop scheduling problem.
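To illustrate the planned DT-based reduction, the sketch below extracts the unique candidate edge set from a list of triangles; for real instances the triangles could come from, e.g., scipy.spatial.Delaunay(points).simplices. This is our illustrative sketch (the function name is ours), not part of the proposed algorithm:

```python
from itertools import combinations

def candidate_edges(simplices):
    """Collect the unique undirected edges of a triangulation.

    `simplices` is an iterable of triangles given as 3-tuples of point
    indices, as produced by a Delaunay triangulation routine.
    """
    edges = set()
    for tri in simplices:
        for u, v in combinations(tri, 2):
            # store each undirected edge once, smaller index first
            edges.add((min(u, v), max(u, v)))
    return edges

# Two triangles on four points: only 5 of the 6 possible edges appear,
# since (0, 3) is not an edge of the triangulation.
edges = candidate_edges([(0, 1, 2), (1, 2, 3)])
```

The appeal for the TSP is that a planar Delaunay triangulation of n points has at most 3n − 6 edges, versus n(n − 1)/2 in the complete graph, so restricting the probe search to these candidates could cut both running time and memory.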

Author Contributions

Conceptualization, M.A.R. and J.M.; Methodology, M.A.R. and J.M.; Software, M.A.R.; Validation, M.A.R.; Formal analysis, M.A.R.; Investigation, M.A.R.; Resources, J.M.; Data curation, M.A.R.; Writing—original draft, M.A.R.; Writing—review & editing, J.M.; Visualization, M.A.R.; Supervision, J.M.; Project administration, J.M.; Funding acquisition, J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under grant 2019YFA0706401.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found in references [17,18,19,20].

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [Google Scholar] [CrossRef]
  2. Naser, H.; Awad, W.S.; El-Alfy, E.S.M. A multi-matching approximation algorithm for Symmetric Traveling Salesman Problem. J. Intell. Fuzzy Syst. 2019, 36, 2285–2295. [Google Scholar] [CrossRef]
  3. Yousefikhoshbakht, M. Solving the Traveling Salesman Problem: A Modified Metaheuristic Algorithm. Complexity 2021, 2021, 6668345. [Google Scholar] [CrossRef]
  4. Applegate, D.L.; Bixby, R.E.; Chvátal, V.; Cook, W.; Espinoza, D.G.; Goycoolea, M.; Helsgaun, K. Certification of an optimal TSP tour through 85,900 cities. Oper. Res. Lett. 2009, 37, 11–15. [Google Scholar] [CrossRef]
  5. Xu, J. Probe machine. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1405–1416. [Google Scholar] [CrossRef]
  6. Rahman, M.A.; Ma, J. Probe Machine Based Consecutive Route Filtering Approach to Symmetric Travelling Salesman Problem. In Proceedings of the Third International Conference on Intelligence Science (ICIS), Beijing, China, 2–5 November 2018; Springer: Cham, Switzerland, 2018; Volume 539, pp. 378–387. [Google Scholar]
  7. Rahman, M.A.; Ma, J. Solving Symmetric and Asymmetric Traveling Salesman Problems Through Probe Machine with Local Search. In Proceedings of the Fifteenth International Conference on Intelligent Computing (ICIC), Nanchang, China, 3–6 August 2019; Springer: Cham, Switzerland, 2019; Volume 11643, pp. 1–13. [Google Scholar]
  8. Zhang, Z.; Han, Y. Discrete sparrow search algorithm for symmetric traveling salesman problem. Appl. Soft Comput. 2022, 118, 108469. [Google Scholar] [CrossRef]
  9. Menger, K. Das botenproblem. Ergeb. Eines Math. Kolloquiums 1930, 2, 11–12. [Google Scholar]
  10. Banaszak, D.; Dale, G.; Watkins, A.; Jordan, J. An optical technique for detecting fatigue cracks in aerospace structures. In Proceedings of the ICIASF 99. 18th International Congress on Instrumentation in Aerospace Simulation Facilities. Record (Cat. No. 99CH37025), Toulouse, France, 14–17 June 1999; pp. 1–7. [Google Scholar]
  11. Matai, R.; Singh, S.P.; Mittal, M.L. Traveling salesman problem: An overview of applications, formulations, and solution approaches. Travel. Salesm. Probl. Theory Appl. 2010, 1, 1–24. [Google Scholar]
  12. Puchinger, J.; Raidl, G.R. Combining metaheuristics and exact algorithms in combinatorial optimization: A survey and classification. In International Work-Conference on the Interplay Between Natural and Artificial Computation; Springer: Berlin/Heidelberg, Germany, 2005; pp. 41–53. [Google Scholar]
  13. Khanra, A.; Maiti, M.K.; Maiti, M. Profit maximization of TSP through a hybrid algorithm. Comput. Ind. Eng. 2015, 88, 229–236. [Google Scholar] [CrossRef]
  14. Ultrasonic Probe. Available online: https://www.ndk.com/en/products/search/ultrasonic/ (accessed on 25 February 2024).
  15. Wang, W.F.; Liu, G.Y.; Wen, W.H. Study of a self-escape hybrid discrete particle swarm optimization for TSP. Comput. Sci. 2007, 34, 143–145. [Google Scholar]
  16. Wang, Y.; Wu, Y.; Xu, N. Discrete symbiotic organism search with excellence coefficients and self-escape for traveling salesman problem. Comput. Ind. Eng. 2019, 131, 269–281. [Google Scholar] [CrossRef]
  17. Kocer, H.E.; Akca, M.R. An improved artificial bee colony algorithm with local search for traveling salesman problem. Cybern. Syst. 2014, 45, 635–649. [Google Scholar] [CrossRef]
  18. Rajabi Bahaabadi, M.; Shariat Mohaymany, A.; Babaei, M. An efficient crossover operator for traveling salesman problem. Iran Univ. Sci. Technol. 2012, 2, 607–619. [Google Scholar]
  19. TSPLIB. Available online: http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/ (accessed on 25 February 2024).
  20. TSP Test Data. Available online: http://www.math.uwaterloo.ca/tsp/world/countries.html (accessed on 25 February 2024).
  21. Geng, X.; Chen, Z.; Yang, W.; Shi, D.; Zhao, K. Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search. Appl. Soft Comput. 2011, 11, 3680–3689. [Google Scholar] [CrossRef]
  22. Chen, S.M.; Chien, C.Y. Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst. Appl. 2011, 38, 14439–14450. [Google Scholar] [CrossRef]
  23. Deng, W.; Chen, R.; He, B.; Liu, Y.; Yin, L.; Guo, J. A novel two-stage hybrid swarm intelligence optimization algorithm and application. Soft Comput. 2012, 16, 1707–1722. [Google Scholar] [CrossRef]
  24. Wang, Y. The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem. Comput. Ind. Eng. 2014, 70, 124–133. [Google Scholar] [CrossRef]
  25. Zhou, Y.; Luo, Q.; Chen, H.; He, A.; Wu, J. A discrete invasive weed optimization algorithm for solving traveling salesman problem. Neurocomputing 2015, 151, 1227–1236. [Google Scholar] [CrossRef]
  26. Osaba, E.; Yang, X.S.; Diaz, F.; Lopez-Garcia, P.; Carballedo, R. An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems. Eng. Appl. Artif. Intell. 2016, 48, 59–71. [Google Scholar] [CrossRef]
  27. Ezugwu, A.E.S.; Adewumi, A.O. Discrete symbiotic organisms search algorithm for travelling salesman problem. Expert Syst. Appl. 2017, 87, 70–78. [Google Scholar] [CrossRef]
  28. Osaba, E.; Del Ser, J.; Sadollah, A.; Bilbao, M.N.; Camacho, D. A discrete water cycle algorithm for solving the symmetric and asymmetric traveling salesman problem. Appl. Soft Comput. 2018, 71, 277–290. [Google Scholar] [CrossRef]
  29. Hore, S.; Chatterjee, A.; Dewanji, A. Improving variable neighborhood search to solve the traveling salesman problem. Appl. Soft Comput. 2018, 68, 83–91. [Google Scholar] [CrossRef]
  30. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  31. Boryczka, U.; Szwarc, K. The Harmony Search algorithm with additional improvement of harmony memory for Asymmetric Traveling Salesman Problem. Expert Syst. Appl. 2019, 122, 43–53. [Google Scholar] [CrossRef]
  32. Choong, S.S.; Wong, L.P.; Lim, C.P. An artificial bee colony algorithm with a modified choice function for the Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 622–635. [Google Scholar] [CrossRef]
  33. Kaabi, J.; Harrath, Y. Permutation rules and genetic algorithm to solve the traveling salesman problem. Arab J. Basic Appl. Sci. 2019, 26, 283–291. [Google Scholar] [CrossRef]
  34. Tawhid, M.A.; Savsani, P. Discrete sine-cosine algorithm DSCA with local search for solving traveling salesman problem. Arab. J. Sci. Eng. 2019, 44, 3669–3679. [Google Scholar] [CrossRef]
  35. Akhand, M.; Ayon, S.I.; Shahriyar, S.; Siddique, N.; Adeli, H. Discrete Spider Monkey Optimization for Travelling Salesman Problem. Appl. Soft Comput. 2020, 86, 105887. [Google Scholar] [CrossRef]
  36. Zhang, J.; Hong, L.; Liu, Q. An Improved Whale Optimization Algorithm for the Traveling Salesman Problem. Symmetry 2021, 13, 48. [Google Scholar] [CrossRef]
  37. Deng, Y.; Xiong, J.; Wang, Q. A Hybrid Cellular Genetic Algorithm for the Traveling Salesman Problem. Math. Probl. Eng. 2021, 2021, 6697598. [Google Scholar] [CrossRef]
  38. Karakostas, P.; Sifaleras, A. A double-adaptive general variable neighborhood search algorithm for the solution of the traveling salesman problem. Appl. Soft Comput. 2022, 121, 108746. [Google Scholar] [CrossRef]
  39. Wilcoxon, F.; Katti, S.; Wilcox, R.A. Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test. Sel. Tables Math. Stat. 1970, 1, 171–259. [Google Scholar]
  40. Lau, S.K.; Shue, L.Y. Solving travelling salesman problems with an intelligent search approach. Asia Pac. J. Oper. Res. 2001, 18, 77–88. [Google Scholar]
  41. Lau, S.K. Solving Travelling Salesman Problems with Heuristic Learning Approach. Ph.D. Thesis, Department of Information System, University of Wollongong, Wollongong, Australia, 2002. [Google Scholar]
Figure 1. Illustration of probe hybridization mechanism: (a) 3-city basis probe; (b) 5-city basis probe; (c) 3-city basis probe; (d) resulting 7-city basis probe by the probe hybridization of (b) with (a,c).
Figure 2. Flowchart of the proposed two-stage probe-based search optimization algorithm.
Figure 2. Flowchart of the proposed two-stage probe-based search optimization algorithm.
Mathematics 12 01340 g002
Figure 3. The new best route with route length found by our proposed algorithm for the datasets (a) wi29, (b) ncit30, (c) pg88, (d) kroB100, (e) pr107, (f) pr136, (g) pr144, (h) kroB150, (i) u159, and (j) tsp225.
Table 1. The number of cities visited step-wise by the probe operation.
Step number:  1 | 2 | 3 | … | k | k + 1 | … | ⌊n/2⌋
# of cities:  3 | 5 | 7 | … | 2k + 1 | 2k + 3 | … | n
Table 2. Computational results of our proposed algorithm for 83 symmetric and 18 asymmetric benchmark TSP datasets.
S/N | Datasets | No. of Cities | BKS | Our Result | Difference | Error (%) | Computational Time (s)
Symmetric Travelling Salesman Problem (STSP)
1burma14143323.0003323.0000.000000.000000.00840000 ± 0.005700
2p0115291.0000291.00000.000000.000000.00710000 ± 0.002700
3F15151105.0001105.0000.000000.000000.12410000 ± 0.010400
4ulysses16166859.0006859.0000.000000.000000.04010000 ± 0.002800
5gr17172085.0002085.0000.000000.000000.00820000 ± 0.002300
6C202062,575.0062,575.000.000000.000000.15070000 ± 0.005400
7S212160,000.0060,000.000.000000.000000.87340000 ± 0.082000
8gr21212707.0002707.0000.000000.000000.06220000 ± 0.003800
9ulysses22227013.0007013.0000.000000.000002.26550000 ± 0.091800
10gr24241272.0001272.0000.000000.000000.66050000 ± 0.016800
11fri2626937.0000937.00000.000000.000000.02900000 ± 0.005500
12bays29292020.0002020.0000.000000.000000.03010000 ± 0.004600
13bayg29291610.0001610.0000.000000.000000.03540000 ± 0.004600
14wi292927,603.0027,601.00−2.0000−0.00720.19490000 ± 0.005300
15C303062,716.0062,716.000.000000.000000.30170000 ± 0.011600
16ncit303048,873.0048,872.00−1.0000−0.00200.56890000 ± 0.020600
17F323284,180.0084,180.000.000000.000000.34970000 ± 0.009600
18C404062,768.0062,768.000.000000.000000.55040000 ± 0.026900
19F414168,168.0068,168.000.000000.000001.75490000 ± 0.102800
20dantzig4242699.0000699.00000.000000.000000.27560000 ± 0.003400
21swiss42421273.0001273.0000.000000.000000.16430000 ± 0.006000
22gr48485046.0005046.0000.000000.000000.78070000 ± 0.021500
23att484810,628.0010,628.000.000000.000000.47480000 ± 0.014700
24hk484811,461.0011,461.000.000000.000000.71980000 ± 0.011700
25brazil585825,395.0025,395.000.000000.000000.13380000 ± 0.006700
26ncit64646400.0006400.0000.000000.000001.09990000 ± 0.009400
27pr7676108,159.0108,159.00.000000.000001.88750000 ± 0.015000
28pg88886548.0006544.000−4.0000−0.06115.79330000 ± 0.091300
29gr969655,209.0055,209.000.000000.0000059.1940000 ± 0.835700
30rd1001007910.0007910.0000.000000.000001.20260000 ± 0.025700
31kroA10010021,282.0021,282.000.000000.0000062.6235000 ± 2.465600
32kroB10010022,141.0022,139.08−1.9200−0.0087035.3965000 ± 0.767400
33kroC10010020,749.0020,749.000.000000.00000161.282400 ± 3.299000
34kroD10010021,294.0021,294.000.000000.000005.74360000 ± 0.040300
35kroE10010022,068.0022,068.000.000000.0000094.3792000 ± 1.119300
36lin10510514,379.0014,379.000.000000.00000189.325800 ± 1.374200
37pr10710744,303.0044,301.68−1.3200−0.003085.5968000 ± 0.555200
38gr1201206942.0006942.0000.000000.0000088.2677000 ± 2.551300
39pr12412459,030.0059,030.000.000000.000007.40980000 ± 0.450500
40ch1301306110.0006110.0000.000000.00000333.304540 ± 1.121356
41pr13613696,772.0096,770.92−1.0800−0.00112549.330100 ± 9.581700
42gr13713769,853.0069,853.000.000000.00000119.285700 ± 0.847900
43pr14414458,537.0058,535.22−1.7800−0.003040.88570000 ± 0.020300
44ch1501506528.0006530.9002.900000.04442589.790700 ± 8.477500
45kroA15015026,524.0026,524.000.000000.00000346.632200 ± 3.379500
46kroB15015026,130.0026,127.36−2.6400−0.0101730.277200 ± 2.632800
47pr15215273,682.0073,682.000.000000.00000198.222800 ± 3.427600
48u15915942,080.0042,075.67−4.3300−0.010284.60580000 ± 0.235000
49si17517521,407.0021,407.000.000000.00000526.824300 ± 3.132500
50brg1801801950.0001950.0000.000000.0000010.8813000 ± 0.573100
51qa1941949352.0009353.6601.660000.01775899.238500 ± 12.65310
52kroA20020029,368.0029,385.7217.72000.06033277.726900 ± 0.901200
53kroB20020029,437.0029,441.384.380000.01488217.566400 ± 4.342400
54gr20220240,160.0040,187.0027.00000.06723163.689300 ± 3.651700
55tsp2252253916.0003865.004−50.996−1.30225145.628300 ± 0.862400
56ts225225126,643.0126,643.00.000000.000001363.19560 ± 15.66020
57pr22622680,369.0080,374.335.330000.006631643.20400 ± 17.55140
58gr229229134,602.0134,658.056.00000.04160658.919400 ± 6.308800
59gil2622622378.0002389.05011.05000.464674659.63020 ± 55.62780
60pr26426449,135.0049,135.000.000000.000001981.23760 ± 22.60660
61a2802802579.0002587.8008.800000.341217792.27620 ± 67.94830
62pr29929948,191.0048,200.169.160000.019001261.25450 ± 17.86110
63lin31831842,029.0042,082.4253.42000.127101307.43090 ± 4.553100
64rd40040015,281.0015,307.1526.15000.171122434.54950 ± 343.0932
65fl41741711,861.0011,914.4553.45000.4506317,069.3660 ± 1581.106
66att53253227,686.0027,786.00100.0000.3611926,496.9020 ± 1888.922
67ali535535202,339.0203,016.0677.0000.3345960,731.8600 ± 2354.098
68si53553548,450.0048,450.000.000000.00000390,160.000 ± 912.5270
69pa5615612763.0002775.00012.00000.4343169,279.8030 ± 1121.304
70u57457436,905.0037,049.29144.2900.3909874,990.6660 ± 4201.612
71rat5755756773.0006851.73078.73001.1624174,209.5970 ± 2722.342
72p65465434,643.0034,646.833.830000.01106105,823.570 ± 1893.281
73d65765748,912.0049,127.83215.8300.44126102,080.410 ± 3084.854
74gr666666294,358.0295,988.01630.000.55375104927.200 ± 3639.270
75u72472441,910.0042,124.40214.4000.51157128,591.020 ± 3094.913
76rat7837838806.0008934.090128.0901.45458154466.440 ± 5676.663
77pr10021002259,045.0259,250.0205.0000.07914180,250.380 ± 6114.756
78si1032103292,650.0092,650.000.000000.00000235990.000 ± 1234.210
79pcb1173117356,892.0057,528.29636.2901.11842259,876.200 ± 13,507.02
80d1291129150,801.0051,618.54817.5401.60929329533.800 ± 12680.29
81rl13231323270199.0272,083.961884.960.69762646416.070 ± 9695.350
82fl1400140020,127.0020,315.84188.8400.93824737,727.33 ± 8046.5143
83d1655165562,128.0063,268.611140.611.8359012,166.5.30 ± 28,064.93
Asymmetric Travelling Salesman Problem (ATSP)
1br171739.0000039.000000.000000.000000.00890000 ± 0.002700
2ftv33341286.0001286.0000.000000.000000.19100000 ± 0.006800
3ftv35361473.0001473.0000.000000.000000.38450000 ± 0.009900
4ftv38391530.0001530.0000.000000.0000031.7042000 ± 0.396400
5p43435620.0005620.0000.000000.000000.18200000 ± 0.006900
6ftv44451613.0001613.0000.000000.0000028.6533000 ± 0.290400
7ftv47481776.0001776.0000.000000.0000051.4585000 ± 1.589600
8ry48p4814,422.0014,422.000.000000.0000033.9627000 ± 0.754800
9ft53536905.0006905.0000.000000.00000134.984900 ± 1.949300
10ftv55561608.0001608.0000.000000.0000035.1772000 ± 0.669600
11ftv64651839.0001850.00011.00000.5981518.8923000 ± 0.177000
12ftv70711950.0001950.0000.000000.0000094.5148000 ± 3.095900
13ft707038,673.0038,869.00196.0000.50681178.463400 ± 1.380800
14kro124p10036,230.0036,230.000.000000.000004.31640000 ± 0.094300
15rbg3233231326.0001331.0005.000000.377073268.51000 ± 178.4000
16rbg3583581163.0001164.0001.000000.085984293.80000 ± 451.2900
17rbg4034032465.0002465.0000.000000.000007500.67580 ± 278.5200
18rbg4434432720.0002720.0000.000000.000008742.91860 ± 366.8200
Average Percentage Error (SD): 0.137823 (0.381285)
Note: Best-known solutions and new solutions obtained by our proposed algorithm are set in bold.
Table 3. Best solutions found thus far by our proposed algorithm compared with the best-known solutions from data library.
S/N | Datasets | No. of Cities | BKS | Our Result | Difference | Error (%) | Computational Time (s)
1 | wi29 | 29 | 27,603.00 | 27,601.00 | −2.0000 | −0.0072 | 0.19490000 ± 0.005300
2 | ncit30 | 30 | 48,873.00 | 48,872.00 | −1.0000 | −0.0020 | 0.56890000 ± 0.020600
3 | pg88 | 88 | 6548.000 | 6544.000 | −4.0000 | −0.0611 | 5.79330000 ± 0.091300
4 | kroB100 | 100 | 22,141.00 | 22,139.08 | −1.9200 | −0.00870 | 35.3965000 ± 0.767400
5 | pr107 | 107 | 44,303.00 | 44,301.68 | −1.3200 | −0.0030 | 85.5968000 ± 0.555200
6 | pr136 | 136 | 96,772.00 | 96,770.92 | −1.0800 | −0.0011 | 2549.330100 ± 9.581700
7 | pr144 | 144 | 58,537.00 | 58,535.22 | −1.7800 | −0.0030 | 40.88570000 ± 0.020300
8 | kroB150 | 150 | 26,130.00 | 26,127.36 | −2.6400 | −0.0101 | 730.277200 ± 2.632800
9 | u159 | 159 | 42,080.00 | 42,075.67 | −4.3300 | −0.0102 | 84.60580000 ± 0.235000
10 | tsp225 | 225 | 3916.000 | 3865.004 | −50.996 | −1.3022 | 5145.628300 ± 0.862400
Note: The new solution obtained by our proposed algorithm is set in bold.
Table 4. List of state-of-the-art optimization algorithms of TSPs considered for comparison.
S/N | Abbreviation | Authors | Year | Name of the Optimization Algorithm
1 | SEHDPSO [15] | Wang et al. | 2007 | Self-escape hybrid discrete particle swarm optimization algorithm
2 | ASA-GS [21] | Geng et al. | 2011 | Adaptive simulated annealing algorithm with greedy search
3 | GSA-ACO-PSO [22] | Chen and Chien | 2011 | Genetic simulated annealing ant colony system with particle swarm optimization algorithm
4 | GA-PSO-ACO [23] | Deng et al. | 2012 | Hybrid swarm intelligence optimization algorithm based on the genetic algorithm, particle swarm optimization and ant colony optimization
5 | HGA+2local [24] | Wang | 2014 | Hybrid genetic algorithm with two local optimization strategies
6 | DIWO [25] | Zhou et al. | 2015 | Discrete invasive weed optimization algorithm
7 | IBA [26] | Osaba et al. | 2016 | Improved discrete bat algorithm
8 | DSOS [27] | Ezugwu and Adewumi | 2017 | Discrete symbiotic organisms search algorithm
9 | DWCA [28] | Osaba et al. | 2018 | Discrete water cycle algorithm
10 | IVNS [29] | Hore et al. | 2018 | Improving variable neighborhood search algorithm
11 | ABCSS [30] | Khan and Maiti | 2019 | A swap sequence based artificial bee colony algorithm
12 | ECSDSOS [16] | Wang et al. | 2019 | Discrete symbiotic organism search with excellence coefficients and self-escape mechanism
13 | HSIHM+2local [31] | Boryczka and Szwarc | 2019 | Harmony search algorithm with additional improvement of harmony memory
14 | MCF-ABC [32] | Choong et al. | 2019 | Artificial bee colony algorithm with modified choice function
15 | PRGA [33] | Kaabi and Harrath | 2019 | Permutation rules and genetic algorithm
16 | DSCA+LS [34] | Tawhid and Savsani | 2019 | Discrete sine-cosine algorithm with local search
17 | MMA [2] | Naser et al. | 2019 | A multi-matching approximation algorithm
18 | DSMO [35] | Akhand et al. | 2020 | Discrete spider monkey optimization algorithm
19 | VDWOA [36] | Zhang et al. | 2021 | An improved whale optimization algorithm
20 | SCGA [37] | Deng et al. | 2021 | A hybrid cellular genetic algorithm
21 | MPSO [3] | Yousefikhoshbakht | 2021 | A modified metaheuristic algorithm
22 | DSSA [8] | Zhang and Han | 2022 | Discrete sparrow search algorithm
23 | DA-GVNS [38] | Karakostas and Sifaleras | 2022 | A double-adaptive general variable neighborhood search algorithm
Table 5. Performance comparison of our proposed algorithm with the state-of-the-art optimization algorithms containing self-escape mechanism.
Comparison with SEHDPSO (left block) and with ECSDSOS (right block)
S/N | Datasets | Scale | BKS | SEHDPSO (2007) [15]: PDavg. (%), PDbest (%) | Our Error (%) — S/N | Datasets | Scale | BKS | ECSDSOS (2019) [16]: PDavg. (%), PDbest (%) | Our Error (%)
1pr7676108,1590.050000.020000.000001kroA10010021,282.00.047930.000000.00000
2kroB10010022,141.00.090000.0000−0.008702kroB10010022,141.00.049680.00452−0.00870
3kroC10010020,749.00.010000.00000.000003kroC10010020,749.00.019280.000000.00000
4kroD10010021,294.00.050000.12000.000004kroD10010021,294.00.187850.000000.00000
5rd1001007910.000.250000.30000.000005kroE10010022,068.00.244100.022660.00000
6lin10510514,379.00.030000.00000.000006pr10710744,303.00.221200.00000−0.0029
7pr10710744,303.00.180000.0000−0.00297pr12412459,030.00.201590.000000.00000
8pr12412459,030.00.230000.30000.000008pr13613696,772.00.528340.00930−0.00001
9bier127127118,2820.100000.00000.009749pr14414458,537.00.240870.00000−0.0030
10ch1301306110.000.290000.11000.0000010ch1501506528.000.454150.398280.04442
11kroA15015026,524.00.050000.00000.0000011pr15215273,682.00.254500.000000.00000
12kroB15015026,130.00.180000.1000−0.010112pr22622680,369.00.302360.000000.00663
13u15915942,080.00.000000.0000−0.0102813pr26426449,135.00.109090.000000.00000
14kroB20020029,437.00.250000.05000.0148814pr29929948,191.01.320750.616300.01900
15ts225225126,6430.080000.02000.0000015lin31831842,029.00.899380.480620.12710
16tsp2252253916.000.230000.0000−1.3022516rd40040015,281.01.177930.693670.17112
17a2802802579.000.150000.03000.3412117fl41741711,861.01.946720.767220.45063
18rd40040015,281.00.250000.16000.1711218pr439439107,217.02.287881.283380.20000
19p65465434,643.00.930000.41000.0100019u57457436,905.01.732290.972770.39000
20u72472441,910.00.640000.22000.5100020d65765748,912.01.691201.161270.44000
Average0.202000.09200−0.01386421u72472441,910.01.052490.517780.51000
BKS found/No. of datasets1/208/2014/2022pr10021002259,045.02.132051.209820.07000
23rl13231323270,199.01.840530.797560.69000
24d1655165562,128.02.581771.891261.84000
Average0.896830.451100.20601
BKS found/No. of datasets 0/249/2411/24
Table 6. Performance comparison of our proposed algorithm with the other state-of-the-art optimization algorithms (ASA-GS, GSA-ACO-PSO, HGA+2local, IVNS, HSIHM+2local, ABCSS, DSMO, MMA, and DSCA+LS).
Comparison with ASA-GS (2011) [21] — Comparison with IVNS (2018) [29] — Comparison with DSMO (2020) [35]
S/N | Datasets | Average | Best | Our Result (one such column group per comparison; Average and Best refer to the comparative algorithm's results)
1eil51428.87428.87428.871gr172085208520851burma1430.8730.8730.87
2berlin527544.377544.377544.362gr212707270727072ulysses1673.9973.9973.99
3st70677.11677.11677.113gr241272127212723ulysses2275.475.3175.31
4eil76544.37544.37544.374fri269379379374bayg299074.159074.159074.15
5pr76108,159108,159108,1595bays292020202020205eil51436.96428.86428.86
6rat991219.491219.241219.246dantzig426996996996berlin527633.67544.377544.36
7rd1007910.47910.479107swiss421273127312737st70702.64677.11677.11
8kroA10021,285.421,285.421,2828gr485046504650468eil76572.7558.68544.37
9kroB10022,139.122,139.122,139.089eil51428.98428.98428.879pr76111299.3108,159.4108,159
10kroC10020,750.820,750.820,74910berlin527544.367544.367544.3610gr96530.45518.38510.89
11kroD10021,301.021,294.321,29411brazil5825,592.7225,42525,39511rat991291.931225.561219.24
12kroE10022,112.322,106.322,06812st70677.11677.11677.1112kroA10022,024.2721,298.2121,282
13eil101640.51640.21641.3213eil76552.57545.39544.3713kroB10023,022.3722,30822,139.08
14lin10514,38314,38314,37914pr76108,159108,159108,15914rd1008377.768041.37910
15pr10744,301.744,301.744,301.6815rat991241.261240.381219.2415eil101674.4648.66641.32
16pr12459,030.759,030.759,03016rd1007918.367910.4791016lin105151141438314,379
17bier127118,349118,294118,293.5217kroA10021,695.7921,618.221,28217pr10745,666.9944,385.8644,301.68
18ch1306121.156110.72611018kroB10022,140.2022,139.0722,139.0818pr12462,443.4960,285.2159,030
19pr13697,078.996,966.396,770.9219kroB10020,809.2920,750.7620,74919pr13610287297,538.6896,770.92
20pr14458,545.658,535.258,535.2220kroD10021,490.6221,294.2921,29420gr137736.67709.48706.29
21ch1506539.86530.96530.9021kroE10022,193.822,174.622,06821kroA15028354.0927591.4426,524
22kroA15026,538.626,524.926,52422eil101648.27642.31641.3222kroB15027,576.1626,601.9426,127.36
23kroB15026,178.126,140.726,127.3623lin10514,395.6414,38314,37923pr15276,526.7774,243.9173,682
24pr15273,694.773,683.673,68224pr10744,314.9244,301.6844,301.6824u15942,598.342,598.342,075.67
25u15942,398.942,392.942,075.6725pr12459,051.8259,030.7459,03025rat1952488.552372.892342.25
26rat1952348.052345.222342.2526bier127119,006.39118,974.6118,293.5226d19816,270.4715,978.1315,868.04
27kroA20029,438.429,411.529,385.7227ch1306153.726140.66611027kroA20031,828.6430,481.3529,385.72
28kroB20029,513.129,504.229,441.3828pr13697,985.8497,979.1196,770.9228kroB20031,781.6230,716.529,441.38
29ts225126,646126,646126,64329pr14458,563.9758,535.2258,535.2229gr202508.81501.83486.96
30pr22680,687.480,542.180,374.3330ch1506644.956639.526530.9030tsp2254162.794013.683865.004
31gil2622398.612393.642389.0531kroA15026,947.1726,943.3126,52431pr22685,935.6983,587.9880,374.33
32pr26449,138.949,13549,13532kroB15026,537.0426,527.5726,127.3632gr2291730.461683.451660.12
33pr29948,326.448,269.248,200.1633pr15273,855.1173,847.673,68233gil2622627.872543.152389.05
34lin31842,383.742,306.742,082.4234u15942,467.6142,436.2342,075.6734pr29951,747.9950,579.8248,200.16
35rd40015,429.815,350.715,307.1535rat1952453.812450.142342.2535lin31845,460.2544,118.6642,082.42
36fl41712,043.811,940.411,914.4536d19816,079.2816,075.8415,868.0436linhp31845,730.5743,831.4442,529
37rat5756904.826872.116851.7337kroA20030,339.6730,300.5629,385.7237fl41712,950.7712,218.9811,914.45
38u72442,470.442,274.742,124.4038kroB20030,453.2230,447.3029,441.3838gr4312042.771993.151974.70
39rat7838982.198954.368934.0939pr22680,514.6480,469.3180,374.3339pr439116,379.2112,105.2107,431.43
40pr1002264,274263,512259,25040gil2622501.862492.852389.0540d49337,861.1436,844.6335,772
41pcb117357,820.557,760.657,528.2941pr26451,197.1451,155.3849,135Average26,930.4226,064.2925,490.64
42d129152,252.351,751.251,618.5442pr29950,373.1250,271.6948,200.16BKS found/No. of datasets3/405/4021/40
43rl1323273,444271,964272,083.9643lin31843,964.9343,924.0842,082.42Comparison with MMA(2019) [2]
44fl140020,782.220,647.420,315.8444rd40016,250.2116,155.9115,307.15S/NDatasetsBKSMMAOur Result
45d165564,155.963,635.963,268.6145fl41712,183.1412,180.7811,914.451wi2927,60328,387.027,601.0
Average45,273.6345,173.5845,026.8246pr439111,771.2111,750.3107,431.432dj3866566656.006659.40
BKS found/No. of datasets3/455/4519/4547pcb44250,800.2450,783.5551,3623eil51426430.000428.870
Comparison with GSA-ACO-PSO(2011) [22]48u57439,629.1139,573.8837,049.294berlin5275427574.007544.40
S/NDatasetsGSA-ACO-PSOOur Result49rat5757362.517349.816851.7305st70675691.000677.109
AverageBest50u72445,729.7145,725.3942,124.406eil76538540.000544.370
1eil51427.27427428.8751rat7839707.3649707.1668934.0907rat9912111239.001219.240
2berlin52754275427544.3652pr1002280,563.9280,368.2259,250.08kroA10021,28221,367.021,282.00
3ncit6464006400640053pcb117363,435.9563,354.8257,528.299kroB10022,14123,251.022,139.08
4eil76540.20538544.3754d129156,095.3356,088.3151,618.5410kroC10020,74921,461.020,749.00
5rd1007987.577910791055rl1323295,611.2295,607.3272,083.9611kroD10021,29422,066.021,294.00
6kroA10021,370.4721,28221,28256fl140021,085.9821,040.6520,315.8412kroE10022,06822,590.022,068.00
7kroB10022,282.8722,14122,139.0857d165570,337.2369,992.4963,268.6113eil101629641.000641.0000
8kroC10020,878.9720,74920,749Average39324.4939291.1237766.8214lin10514,37915,127.014,379.00
9kroD10021,620.4721,30921,294BKS found/No. of datasets10/5712/5727/5715pr12459,03059,824.059,030.00
10kroE10022,183.4722,06822,068Comparison with HSIHM+2local ( 2019 )  [31]16bier127118,282121,942.0118,293.52
11eil101635.23630641.32S/NDatasetsHSIHM+2localOur Result17ch13061106281.0006110.000
12lin10514,406.3714,37914,379AverageBest18xqf131564592.0000566.4200
13bier127119,421.83118,282118,293.521br1739393919ch15065286661.0006530.900
14ch1306205.63614161102ftv331320.571286128620kroA15026,52427,244.0026,524.00
15ch1506563.7065286530.903ftv351490.61473147321kroB15026,13027,155.0026,127.36
16kroA15026,899.2026,52426,5244ftv381547.131530153022u15942,08044,027.0042,075.67
17kroB15026,448.3326,13026,127.365p435620.275620562023qa19493529437.0009353.660
18kroA20029,738.7329,38329,385.726ftv441645.41613161324kroA20029,36830,450.0029,385.72
19kroB20030,035.2329,54129,441.387ftv471800.4317761776Average21,068.0420,467.65
20lin31843,002.9042,48742,082.428ry48p14,513.914,49514,422BKS found/No. of datasets1/2412/24
21rat5756933.8768916851.739ft537148.369836905Comparison with DSCA+LS(2019) [34]
22rat7839079.2389888934.0910ftv551625.1716081608S/NDatasetsDSCA+LSOur Result
23rl1323280,181.47277,642272,083.9611ftv641876.218461850AverageBest
24fl140021,349.6320,59320,315.8412ftv702027.83197719501pr76108,159108,159108,159
25d165565,621.1364,15163,268.6113ft7039,722.0339,21238,8692kroA10021,28221,28221,282
Average32,710.2332,346.2432,053.1814kro124p38,348.237,21336,2303kroB10022,14122,14122,139.08
BKS found/No. of datasets2/2513/2511/2515ftv1703393.07299929284kroC10020,74920,74920,749.00
Comparison with HGA+2local(2014) [24]16rbg3231555.4150213315kroD10021,30021,29421,294
S/NDatasetsHGA+2localOur Result17rbg3581424.63134211646kroE10022,06822,06822,068.00
AverageBest18rbg4032637.8259724657lin10514,37914,37914,379
1pr76108,255.94108,159.42108,15919rbg4432914.33285327208pr10744,30344,30344,301.68
2kroA10021,312.4521,285.4421,282Average6876.336734.956619.959pr12459,03059,03059,030
3kroC10020,812.2220,750.7620,749.00BKS found/No. of datasets1/198/1914/1910ch130612461116110
4kroD10021,344.6721,294.2921,294Comparison with ABCSS ( 2019 )  [30]11pr13697,164.696,92896,770.92
5lin10514,422.8914,382.9914,379S/NDatasetsABCSSOur Result12pr14458,53758,53758,535.22
6pr10744,341.6744,301.6844,301.68AverageBest13kroA15026,525.426,52426,524
7pr12459,094.1359,030.7359,0301gr1720852085208514kroB15026,134.826,13026,127.36
8ch1306130.2776110.7261102bays2920202020202015pr15273,68273,68273,682
9pr13697,019.29196,785.85296,770.923swiss4212731273127316kroB20029,467.429,44729,441.38
10pr14458,535.2258,535.2258,535.224eil51427.01427428.8717ts225126,709.8126,643126,643
11kroA15026,597.7826,524.8626,5245berlin52754275427544.3618tsp225391739163865.004
12kroB15026,335.8526,127.3526,127.366kroA10021,287.1921,28221,28219pr22680,380.480,36980,374.33
13pr15273,765.7073,683.6373,6827lin10514,379.1014,37914,37920pr26449,1354913549,135
14kroB20029,583.3829,450.5029,441.388pr12459,054.6459,03059,03021pr29948,306.848,25048,200.16
15ts225128,295.65128,141.92126,6439pr15273,691.6473,68273,68222lin31842,221.44216742,082.42
16tsp2253892.883878.663865.00410kroA20029,469.0029,45029,385.7223rd40015,422.615,40815,307.15
17pr22680,534.3980,436.0480,374.3311br1739393924fl41711,933.411,92011,914.45
18pr26449,163.2649,151.2249,13512ftv3312861286128625rat5756898.668816851.730
19pr29949,757.6649,462.4348,200.1613ry48p14,452.7914,42214,42226rat7839402.493438934.090
20lin31842,877.2442,624.3442,082.4214ftv551642.191629160827pr1002272,739.6272,323259,250
21rd40016,143.9616,049.5915,307.15
Average46581.7446484.1746285.36Average16332.0416324.7116318.93Average48819.0148782.1948264.81
BKS found/No. of datasets2/214/2116/21BKS found/No. of datasets6/1411/1411/14BKS found/No. of datasets11/2717/2718/27
Table 7. Performance comparison of our proposed algorithm with the other state-of-the-art optimization algorithms (IBA, DWCA, DSOS, MCF-ABC, GA-PSO-ACO, DIWO, PRGA, MPSO, SCGA, and VDWOA).
For each comparison, the table reports the competitor's overall Average and Best results, our result, and the number of best-known solutions (BKS) reached (the DIWO comparison is given as percentage deviation, PD, from the BKS):

Comparison | Datasets | Their Average | Their Best | Our Result | BKS found (theirs Avg, theirs Best vs. ours)
GA-PSO-ACO (2012) [23] | 30 | 39,483.78 | 39,202.19 | 38,770.12 | 0/30, 1/30 vs. 11/30
DIWO (2015) [25] | 19 | PDavg. 1.0050% | PDbest 0.6312% | 0.14578% | 0/19, 0/19 vs. 8/19
IBA (2016) [26] | 29 | 23,350.37 | 22,977.14 | 22,815.94 | 3/29, 18/29 vs. 23/29
DSOS (2017) [27] | 28 | 40,556.62 | 40,182.14 | 39,573.38 | 1/28, 11/28 vs. 14/28
DWCA (2018) [28] | 27 | 23,038.81 | 22,796.93 | 22,671.52 | 3/27, 14/27 vs. 23/27
MCF-ABC (2019) [32] | 28 | 39,430.19 | – | 39,426.18 | 27/28 vs. 28/28
PRGA (2019) [33] | 9 | 13,942.89 | – | 13,869.33 | 1/9 vs. 5/9
MPSO (2021) [3] | 35 | 37,822.37 | 37,336.03 | 37,074.15 | 13/35, 25/35 vs. 23/35
SCGA (2021) [37] | 12 | 36,101.99 | 36,045.30 | 36,024.09 | 1/12, 1/12 vs. 6/12
VDWOA (2021) [36] | 12 | 18,708.25 | – | 18,461.63 | 2/12 vs. 4/12
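The PDavg. (%) and PDbest (%) columns used in the DIWO comparison are percentage deviations of a tour length from the best-known solution (BKS); a negative value means the obtained tour is shorter than the recorded BKS. A minimal sketch of the computation (the eil51 figures below are taken from the tables; the function name is ours):

```python
def percent_deviation(tour_length: float, bks: float) -> float:
    """PD = (obtained - BKS) / BKS * 100; negative => better than the BKS."""
    return (tour_length - bks) / bks * 100.0

# eil51: BKS = 426, our best tour = 428.87  ->  PDbest of about 0.6737%
pd_best = percent_deviation(428.87, 426)
```

PDavg. is the same quantity computed from the average tour length over the independent runs, and PDbest from the single best run.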
Table 8. Performance comparison of our proposed algorithm with the other state-of-the-art optimization algorithms (DSSA and DA-GVNS).
Comparison with DSSA (2022) [8]

S/N | Datasets | DSSA Average | DSSA Best | Our Results
1 | att48 | 33,522 | 33,522 | 33,522
2 | eil51 | 426.6 | 426 | 428
3 | berlin52 | 7542 | 7542 | 7544.36
4 | st70 | 675.15 | 675 | 677.11
5 | pr76 | 108,159 | 108,159 | 108,159
6 | kroA100 | 21,290.2 | 21,282 | 21,282
7 | kroB100 | 22,173.1 | 22,141 | 22,139.08
8 | kroC100 | 20,770.5 | 20,749 | 20,749
9 | kroD100 | 21,319.05 | 21,294 | 21,294
10 | kroE100 | 22,091.9 | 22,068 | 22,068
11 | lin105 | 14,379 | 14,379 | 14,379
12 | pr107 | 44,322 | 44,303 | 44,301.68
13 | pr124 | 59,030 | 59,030 | 59,030
14 | ch130 | 6153.65 | 6110 | 6110
15 | pr136 | 97,302.35 | 96,920 | 96,770.92
16 | pr144 | 58,537 | 58,537 | 58,535.22
17 | ch150 | 6590.15 | 6528 | 6530.9
18 | kroA150 | 26,699.85 | 26,525 | 26,524
19 | kroB150 | 26,220.4 | 26,130 | 26,127.36
20 | pr152 | 73,731.35 | 73,682 | 73,682
21 | u159 | 42,262.75 | 42,080 | 42,075.67
22 | kroA200 | 29,682.15 | 29,459 | 29,385.72
23 | kroB200 | 29,850.55 | 29,564 | 29,441.38
24 | tsp225 | 3926.05 | 3916 | 3865.004
25 | pr226 | 80,369.2 | 80,369 | 80,374.33
26 | pr264 | 49,271.85 | 49,135 | 49,135
27 | lin318 | 42,742.7 | 42,495 | 42,082.42
28 | pr439 | 107,844.9 | 107,494 | 107,431.4
29 | pr1002 | 266,352.4 | 264,212 | 259,250
30 | pr299 | 48,605.05 | 48,409 | 48,200.16
31 | rat575 | 6961.7 | 6938 | 6851.73
32 | rat783 | 9163 | 9097 | 8934.09
Average | | 43,373.98 | 43,224.06 | 43,027.52
BKS found/No. of datasets | | 6/32 | 22/32 | 19/32

Comparison with DA-GVNS (2022) [38]

S/N | Datasets | BKS | DA-GVNS | Our Results
1 | br17 | 39 | 39 | 39
2 | ft53 | 6905 | 7011 | 6905
3 | ft70 | 38,673 | 39,585 | 38,869
4 | ftv33 | 1286 | 1286 | 1286
5 | ftv35 | 1473 | 1473 | 1473
6 | ftv38 | 1530 | 1535 | 1530
7 | ftv44 | 1613 | 1631 | 1613
8 | ftv47 | 1778 | 1788 | 1776
9 | ftv55 | 1608 | 1636 | 1608
10 | ftv64 | 1839 | 1895 | 1850
11 | ftv70 | 1950 | 2078 | 1950
12 | kro124p | 36,230 | 36,403 | 36,230
13 | p43 | 5620 | 5620 | 5620
14 | rbg323 | 1326 | 1451 | 1331
15 | rbg358 | 1163 | 1276 | 1164
16 | rbg403 | 2465 | 2481 | 2465
17 | rbg443 | 2720 | 2761 | 2720
18 | ry48p | 14,422 | 14,465 | 14,422
19 | bays29 | 2020 | 2020 | 2020
20 | bier127 | 118,282 | 119,122 | 118,293.52
21 | brazil58 | 25,395 | 25,395 | 25,395
22 | ch130 | 6110 | 6154 | 6110
23 | ch150 | 6528 | 6595 | 6530.9
24 | d1291 | 50,801 | 54,778 | 51,618.54
25 | d1655 | 62,128 | 67,292 | 63,268.61
26 | dantzig42 | 699 | 699 | 699
27 | fl417 | 11,861 | 12,019 | 11,914.45
28 | fl1400 | 20,127 | 21,858 | 20,315.84
29 | fri26 | 937 | 937 | 937
30 | gil262 | 2378 | 2451 | 2389.05
31 | gr17 | 2085 | 2085 | 2085
32 | gr21 | 2707 | 2707 | 2707
33 | gr24 | 1272 | 1272 | 1272
34 | gr48 | 5046 | 5046 | 5046
35 | kroA100 | 21,282 | 21,282 | 21,282
36 | kroB100 | 22,141 | 22,165 | 22,139.08
37 | kroC100 | 20,749 | 20,749 | 20,749
38 | kroD100 | 21,294 | 21,294 | 21,294
39 | kroE100 | 22,068 | 22,121 | 22,068
40 | kroA150 | 26,524 | 26,817 | 26,524
41 | kroB150 | 26,130 | 26,256 | 26,127.36
42 | kroA200 | 29,368 | 29,807 | 29,385.72
43 | kroB200 | 29,437 | 30,015 | 29,441.38
44 | lin105 | 14,379 | 14,390 | 14,379
45 | lin318 | 42,029 | 43,201 | 42,082.42
46 | pcb442 | 50,778 | 53,009 | 51,362
47 | pcb1173 | 56,892 | 61,725 | 57,528.29
48 | pr76 | 108,159 | 108,159 | 108,159
49 | pr107 | 44,303 | 44,303 | 44,301.68
50 | pr124 | 59,030 | 59,050 | 59,030
51 | pr136 | 96,772 | 97,062 | 96,770.92
52 | pr144 | 58,537 | 58,537 | 58,535.22
53 | pr152 | 73,682 | 73,839 | 73,682
54 | pr226 | 80,369 | 80,880 | 80,374.33
55 | pr264 | 49,135 | 49,880 | 49,135
56 | pr299 | 48,191 | 49,719 | 48,200.16
57 | pr439 | 107,217 | 112,600 | 107,431.43
58 | pr1002 | 259,045 | 277,867 | 259,250
59 | rat195 | 2323 | 2364 | 2342.25
60 | rat575 | 6773 | 7179 | 6851.73
61 | rat783 | 8806 | 9445 | 8934.09
62 | rd100 | 7910 | 7910 | 7910
63 | rd400 | 15,281 | 15,915 | 15,307.15
64 | rl1323 | 270,199 | 292,819 | 272,083.96
65 | swiss42 | 1273 | 1273 | 1273
66 | u159 | 42,080 | 42,168 | 42,075.67
67 | u574 | 36,905 | 39,583 | 37,049.29
68 | u724 | 41,910 | 44,814 | 42,124.4
Average | | | 34,162.38 | 33,068.18
BKS found/No. of datasets | | | 20/68 | 41/68
Table 9. Statistical test results based on the performance of our proposed algorithm against each state-of-the-art optimization algorithm.
The left block of columns reports the test with PDavg. (%), the right block the test with PDbest (%); each block lists N, W−, W+, Wcal,N, Wcri,N, and whether a significant difference was found. A dash means the test was not applicable; an asterisk means N was too small for a critical value.

Proposed Algorithm vs. | N | W− | W+ | Wcal,N | Wcri,N | Sign. Diff. | N | W− | W+ | Wcal,N | Wcri,N | Sign. Diff.
SEHDPSO (2007) [15] | 20 | 196 | 14 | 14 | 52 | yes | 17 | 118 | 35 | 35 | 34 | no
ASA-GS (2011) [21] | 41 | 844 | 17 | 17 | 279 | yes | 38 | 702 | 39 | 39 | 235 | yes
GSA-ACO-PSO (2011) [22] | 24 | 276 | 24 | 24 | 81 | yes | 18 | 115 | 56 | 56 | 40 | no
GA-PSO-ACO (2012) [23] | 30 | 461 | 4 | 4 | 137 | yes | 30 | 441 | 54 | 54 | 137 | yes
HGA+2local (2014) [24] | 20 | 210 | 0 | 0 | 52 | yes | 19 | 189 | 1 | 1 | 46 | yes
DIWO (2015) [25] | 19 | 188 | 2 | 2 | 46 | yes | 19 | 168 | 22 | 22 | 46 | yes
IBA (2016) [26] | 27 | 375 | 3 | 3 | 107 | yes | 16 | 125 | 11 | 11 | 29 | yes
DSOS (2017) [27] | 26 | 348 | 3 | 3 | 98 | yes | 22 | 199 | 54 | 54 | 65 | yes
DWCA (2018) [28] | 25 | 322 | 3 | 3 | 89 | yes | 16 | 124 | 12 | 12 | 29 | yes
IVNS (2018) [29] | 46 | 1063 | 18 | 18 | 361 | yes | 44 | 972 | 18 | 18 | 327 | yes
ABCSS (2019) [30] | 9 | 33 | 12 | 12 | 5 | no | 4 | 6 | 4 | 4 | * | *
ECSDSOS (2019) [16] | 24 | 300 | 0 | 0 | 81 | yes | 18 | 168 | 3 | 3 | 40 | yes
HSIHM+2local (2019) [31] | 18 | 171 | 0 | 0 | 40 | yes | 11 | 65 | 1 | 1 | 10 | yes
MCF-ABC (2019) [32] | 8 | 36 | 0 | 0 | 3 | yes | – | – | – | – | – | –
PRGA (2019) [33] | – | – | – | – | – | – | 9 | 41 | 4 | 4 | 5 | yes
DSCA+LS (2019) [34] | 19 | 190 | 0 | 0 | 46 | yes | 16 | 133 | 3 | 3 | 29 | yes
MMA (2019) [2] | 23 | 270 | 6 | 6 | 73 | yes | 23 | 270 | 6 | 6 | 73 | yes
DSMO (2020) [35] | 37 | 703 | 0 | 0 | 221 | yes | 34 | 595 | 0 | 0 | 182 | yes
VDWOA (2021) [36] | – | – | – | – | – | – | 12 | 68 | 10 | 10 | 13 | yes
SCGA (2021) [37] | 10 | 55 | 0 | 0 | 8 | yes | 8 | 36 | 0 | 0 | 3 | yes
MPSO (2021) [3] | 25 | 300 | 3 | 3 | 89 | yes | 18 | 137 | 34 | 34 | 40 | yes
DSSA (2022) [8] | 28 | 376 | 30 | 30 | 116 | yes | 21 | 181 | 50 | 50 | 58 | yes
DA-GVNS (2022) [38] | 50 | 1275 | 0 | 0 | 434 | yes | – | – | – | – | – | –
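The columns of Table 9 follow the standard Wilcoxon signed-rank procedure: nonzero paired differences are ranked by absolute value, W− and W+ are the rank sums of negative and positive differences, Wcal,N = min(W−, W+), and a significant difference is declared when Wcal,N does not exceed the critical value Wcri,N for the given N. A small pure-Python sketch of these statistics (our own helper, not the authors' code):

```python
def wilcoxon_ranks(diffs):
    """Return (N, W_minus, W_plus, W_cal) for a list of paired differences.

    Zero differences are dropped; tied absolute values receive average
    ranks, as in the standard Wilcoxon signed-rank test.
    """
    d = [x for x in diffs if x != 0]
    n = len(d)
    # Rank |d| in ascending order, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    return n, w_minus, w_plus, min(w_minus, w_plus)
```

Since every rank is assigned to exactly one difference, W− + W+ = N(N + 1)/2, which is a quick consistency check on the tabulated values.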
Rahman, M.A.; Ma, J. Two-Stage Probe-Based Search Optimization Algorithm for the Traveling Salesman Problems. Mathematics 2024, 12, 1340. https://doi.org/10.3390/math12091340
