
OPTIMIZATION HEURISTIC SOLUTIONS, HOW GOOD CAN
THEY BE?
WITH EMPIRICAL APPLICATIONS IN LOCATION
PROBLEMS
XIANGLI MENG
SCHOOL OF TECHNOLOGY AND BUSINESS STUDIES
DALARNA UNIVERSITY
BORLÄNGE, SWEDEN
MARCH 2015
ISBN: 978-91-89020-94-8
Abstract
Combinatorial optimization problems are one of the most important
types of problems in operational research. Heuristic and metaheuristic
algorithms are widely applied to find a good solution. However, a
common problem is that these algorithms do not guarantee that the
solution will coincide with the optimum and, hence, many solutions to
real world OR-problems are afflicted with an uncertainty about the
quality of the solution.
The main aim of this thesis is to investigate the usability of statistical
bounds to evaluate the quality of heuristic solutions applied to large
combinatorial problems. The contributions of this thesis are both
methodological and empirical. From a methodological point of view, the
usefulness of statistical bounds on p-median problems is thoroughly
investigated. The statistical bounds have good performance in providing
informative quality assessment under appropriate parameter settings.
Also, they outperform the commonly used Lagrangian bounds. It is
further demonstrated that the statistical bounds are comparable with
the deterministic bounds in quadratic assignment problems.
As to empirical research, environmental pollution has become a worldwide
problem, and transportation can cause a great amount of pollution. A
new method for calculating and comparing the CO2-emissions of online
and brick-and-mortar retailing is proposed. It leads to the conclusion that
online retailing has significantly lower CO2-emissions. Another problem
is that the Swedish regional division is under revision, and the effect of
borders on public service accessibility concerns both residents and
politicians. The analysis shows that borders hinder the optimal
location of public services, and consequently the highest achievable
economic and social utility may not be attained.
Sammanfattning
Combinatorial optimization problems are one of the most important types of
problems in operations research (OR). Heuristic and metaheuristic algorithms
are widely applied to find solutions of high quality. A common problem,
however, is that these algorithms do not guarantee optimal solutions, and
there can thus be uncertainty about the quality of solutions to applied
operations research problems. The main aim of this thesis is to investigate
the usability of statistical confidence intervals for evaluating the quality
of heuristic solutions when they are applied to large combinatorial
optimization problems. The contributions of this thesis are both
methodological and empirical. From a methodological point of view, the
usefulness of statistical confidence intervals for a location problem (the
p-median problem) has been investigated. Statistical confidence intervals
work well for providing information about the quality of the solution when
the problems are implemented appropriately. Statistical confidence intervals
also outperform the intervals obtained by the commonly used Lagrangian
relaxation. The thesis further shows that the method of statistical
confidence intervals performs well compared with many other deterministic
intervals in a more complex optimization problem, the quadratic assignment
problem.

The p-median problem and the statistical confidence intervals have been
applied empirically to calculate and compare the CO2-emissions from
transport induced by online and traditional retailing, which shows that
online retailing causes considerably lower CO2-emissions. Another location
problem analyzed empirically is how changes in the regional administrative
division of Sweden, a current and ongoing societal discussion, affect
citizens' accessibility to public services. The analysis shows that regional
administrative borders have led to a suboptimal placement of public
services. There is thus a risk that the socio-economic utility of these
services is suboptimal.
Acknowledgment
Four score and five years ago, my parents brought forth into this world a new
boy, conceived in their smartness and affection, and dedicated to
academic research. Now I am engaged in a PhD, testing whether those
statistical bounds, so effective and so distinguished, can long be
applicable. I am met on a great battlefield of that academic war. I have
come to dedicate a portion of that field, as a thriving area for those who
gave their work so that research might flourish.
Many years later, as I faced this thesis, I was to remember that
distant afternoon when my supervisor, Kenneth Carling, asked me why I
chose to pursue a PhD. At that time, I was just a young man who wanted to enjoy
himself. Now, your marvelous guidance has enlightened me to become a
good researcher, a good runner, a fine medicine ball thrower, and most
importantly, a good person. This experience of working with you, is
supercalifragilisticexpialidocious.
The guidance from my second supervisor, Johan Håkansson, is a box of
chocolates: you never know what you are gonna get. Your supervision
helped me greatly in many ways and always gave me pleasant surprises.
With you there to supervise me on the thesis, I feel secure, just as
when you were there with me in downhill skiing.
Hard work leads to a good career, intelligence leads to pleasant moments,
and fine virtues lead to a happy life. Lars Rönnegård shows me a perfect
combination of these three. Beyond that, you show me there is no limit
to how nice a man can be. You taught me not only the three virtues, but
also how to be a virtuous man in almost all aspects.
Working under you is a pleasure, an experience that I will truly treasure.
As the first boss in my life, Anders Forsman, you set the bar too high.
It would be really difficult for me to find another boss as patient as you,
as considerate as you, or as responsible as you. I am grateful for
everything you have done for me, as well as every signature you signed
for me, which cost a fortune.
It is great to have you, Song William Wei, to assist my work and to host
the warm dinners with your family. In a place twenty thousand leagues over the sea
away from China, the dinners taste like home, and so does your support.
I developed the statistical bounds theories further by standing on the
shoulders of Pascal Rebreyend and Mengjie Han. With your excellent
work, you have built up strong shoulders, and my research benefits
greatly. I am glad you two were there.
Catia Cialani, you always bring us smiles as well as the sweet
Italian calories. Thank you for the sweets and the lovely conversations and
meals. We are truly lucky to have you here. You make life truly happy, and I
wish you a happy life!
I deeply thank the person(s) who took Moudud Alam and Daniel
Wikström into this department. It requires a smart and hardworking
person to recruit another smart and hardworking person. Moudud, your
quick reaction and strong memory are amazing. You set a splendid
example for me not only in work but also in Blodomloppet. Daniel, you
warm up the atmosphere and make every big and small conversation
interesting. It is so enjoyable to work with you that I cannot imagine how
the work would be done without you.
In this journey, the PhD students have been my great companions.
During the hard times when I got stuck on my papers, it was always
comforting to hear that other students also got trapped in their research. Xiaoyun
Zhao, you have the nicest characteristics of people from our province:
hardworking, smart, honest, and kind to others; every time I ask a
favor of you, I know that I can count on it. I am lucky to share an office with
you. Dao Li, even though we have different ideas on almost everything,
you are always generous in offering me your help. Xia Shen, it is always
delightful to have you visit us, and we miss your presence here. Kristin
Svenson, your stories and topics are delightful and insightful, we enjoy
that a lot. Majbritt Felleki, you are not talkative most of the time, but
every word you say shines with intelligence. Yujiao Li, thanks for hosting
delicious dinners for us. Mevludin Memedi, when I work here at nights
and on weekends and holidays, it makes me happy to see you coming to
work as well.
Ola Nääs, you are always nice and smiling, and seeing you is always a
pleasure. It feels so warm every time I come to you for favors. Thanks
for the things you have done for me. Rickard Stridbeck, I still remember
the full mark I had in Statistical Inference exam. It was my happiest
moment in years. You are the very person who witnessed me changing
from a master student who is talkative but not good at language into a
PhD student who is talkative but not good at language. Thanks for
bearing my English. Siril Yella, your corrections for my first paper
greatly improved its quality. I could not have made it without your help. I
am very grateful for that.
Special thanks to Professor Sune Karlsson; I have always admired you, and I
will never forget what you have done for me.
To myself: this PhD is not the end, it is not even the beginning of the
end. But it is, perhaps, the end of the beginning. From now on, I will be
strong, and I will not depend on anything, or anyone, except Kenneth,
Johan, Lars, Anders, William, Pascal, Catia, Daniel, Moudud, Xiaoyun,
Kristin, Sune, everyone in this university, and every good person in the
world. If the sky in which you did not manage to fly yesterday is still
there, what are you waiting for?
Wish you sleep like a baby. So say we all!
Xiangli Meng
2015-03-16
Contents
1. Introduction ................................................................................................... 1
2. Combinatorial problems and data processing ................................................ 3
2.1 Combinatorial problems .......................................................................... 3
2.2. Data processing ...................................................................................... 4
3. Methodological analysis of statistical bounds ............................................... 4
3.1 Theories of statistical bounds .................................................................. 5
3.2 Experimental design ................................................................................ 8
4. Results ........................................................................................................... 8
4.1. SOET on p-median problems ................................................................. 8
4.2 SOET on Quadratic Assignment problem ............................................. 10
4.3. Data processing .................................................................................... 10
4.4. Empirical location optimization problems ............................................11
4.4.1 CO2-emissions induced by online and brick-and-mortar retailing ......11
4.4.2 On administrative borders and accessibility to public services: The
case of hospitals in Sweden ......................................................................... 13
5. Future research ............................................................................................ 15
6. Paper list: ..................................................................................................... 16
7. Reference ..................................................................................................... 17
1. Introduction
Operations research or operational research, which has its origins in
military planning before World War II, provides optimized solutions of
complex decision-making problems in pursuit of improved efficiency. Its
significant benefits have been recognized in many fields and have thus been
widely studied. Combinatorial optimization problems, abbreviated as
combinatorial problems, are one of the most important types of problems
in operational research. They aim at finding the optimal solutions in a
finite solution set. In many combinatorial problems, the number of
solutions is too large, rendering exhaustive search impossible. Therefore,
heuristic and metaheuristic algorithms are widely applied to find a good
solution. However, a common problem is that these algorithms do not
guarantee that the solution will coincide with the optimum and, hence,
many solutions to real world OR-problems are afflicted with an
uncertainty about the quality of the solution. One common strategy is to
use algorithms providing deterministic bounds, such as Lagrangian
relaxation (Fisher, 2004), and Branch and bound (Land and Doig, 1960).
This strategy is popular and reasonable for many problems, but it
confines the choice of heuristic algorithms, and its performance (largely)
depends on the choice of parameters. Another less common but potentially
useful approach is to employ statistical bounds, or statistical
optimum estimation techniques (SOETs), which estimate the value
of the minimum, based on a sample of heuristic solutions, and place
confidence limits around it. However, the investigation of the usage of
SOETs is quite limited and many questions remain unanswered; thus
the application of SOETs is hindered.
The main aim of this thesis is to investigate the usability of statistical
bounds to evaluate the quality of heuristic solutions applied to large
combinatorial problems.
The results are applied to two widely studied empirical location
problems in social research. The first is to analyze the environmental
effect of online and brick-and-mortar retailing regarding the CO2-emissions
they induce, and the second is to investigate the effect of borders
on accessibility to public services. To do so, the data processing issues
for location models are also discussed.
The contributions of this thesis are both methodological and empirical.
From a methodological point of view, the usefulness of statistical bounds
on p-median problems is thoroughly investigated. The statistical bounds
have good performance in providing informative quality assessment
under appropriate parameter settings. Also, they outperform the
commonly used Lagrangian bounds. It is further demonstrated that the statistical
bounds are comparable with the deterministic bounds in
quadratic assignment problems. The location problems need a complete and
accurate road network graph, which cannot be derived directly from the original
database. Suggestions regarding how to transform a large road network
database into a usable graph are proposed using Sweden as an example. This
leads to a graph containing 188325*1938 nodes covering over 5 million
residents.
As to empirical research, environmental pollution has become a worldwide
problem, and transportation can cause a great amount of pollution. A
new method for calculating and comparing the CO2-emissions of online
and brick-and-mortar retailing is proposed. It leads to the conclusion that
online retailing has significantly lower CO2-emissions. Another problem
is that the Swedish regional division is under revision, and the effect of
borders on public service accessibility concerns both residents and
politicians. The analysis shows that borders hinder the optimal
location of public services, and consequently the highest achievable
economic and social utility may not be attained.
The content of the research is presented as follows: Section 2 introduces
combinatorial problems; Section 3 describes the methodology of
statistical bounds; Section 4 describes all the results of this thesis; and
Section 5 discusses future research.
2. Combinatorial problems and data processing
2.1 Combinatorial problems
Combinatorial problems aim at finding an optimal object from a finite
set of objects. There are many different combinatorial problems, such as
travelling salesman problems and scheduling problems. In this thesis,
two of the important combinatorial problems are considered: p-median
problems and quadratic assignment problems (QAPs). The p-median
problem is a widely used location problem in many areas, such as
transportation and location planning, see Reese (2006) for a complete
survey. It deals with the challenge of allocating P facilities to a
population geographically distributed in Q demand points, so that the
population’s average or total distance to its nearest service center is
minimized. The distances considered are network distance and time.
Like many other combinatorial problems, the p-median problem is NP-hard (Kariv and Hakimi, 1979), and further shown to be NP-complete
(Garey and Johnson, 2000).
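To make the p-median objective concrete, the short sketch below evaluates one feasible solution on a toy distance matrix. The function name, the distances, and the population weights are illustrative assumptions, not material from the thesis.

```python
import numpy as np

def p_median_objective(dist, demand, facilities):
    """Population-weighted total distance from each demand point to its nearest open facility.

    dist: (Q, N) matrix of network distances from Q demand points to N candidate nodes.
    demand: length-Q vector of population weights.
    facilities: indices of the P nodes chosen as facilities.
    """
    nearest = dist[:, facilities].min(axis=1)   # distance to the closest open facility
    return float(np.dot(demand, nearest))

# Toy illustration (hypothetical numbers): 4 demand points, 5 candidate nodes, P = 2.
dist = np.array([[2., 5., 9., 4., 7.],
                 [6., 1., 3., 8., 5.],
                 [4., 7., 2., 6., 3.],
                 [8., 3., 5., 1., 6.]])
demand = np.array([10, 20, 15, 5])
print(p_median_objective(dist, demand, [1, 3]))  # g(z_p) for one feasible solution z_p
```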
The quadratic assignment problem (QAP) is a classical combinatorial
problem in operational research. Besides location planning and
transportation analysis, it fits quite a variety of situations, such as control
panel design, archaeological analysis, and chemical reaction analysis
(Burkard, et al., 1997). It is formulated as follows: consider two N-dimensional square matrices $A = (a_{ij})_N$ and $B = (b_{ij})_N$; find a permutation $(x_1, x_2, \ldots, x_N)$ of the integers from 1 to N that minimises the objective function $g = \sum_{i=1}^{N}\sum_{j=1}^{N} a_{ij}\, b_{x_i x_j}$. The QAPs are also NP-hard
and difficult to solve, especially when N > 15 (Loiola, et al. 2007).
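As a sketch of the objective function just defined, the snippet below evaluates g for one permutation of a toy QAP instance; the matrices are made-up data, not taken from QAPLIB.

```python
import numpy as np

def qap_objective(A, B, perm):
    """Evaluate g = sum_{i,j} a_ij * b_{x_i x_j} for a 0-indexed permutation perm."""
    # Reorder B's rows and columns by the permutation and sum the elementwise product with A.
    return float(np.sum(A * B[np.ix_(perm, perm)]))

# Toy instance with N = 3 (hypothetical data).
A = np.array([[0, 2, 3], [2, 0, 1], [3, 1, 0]])
B = np.array([[0, 5, 2], [5, 0, 4], [2, 4, 0]])
print(qap_objective(A, B, [2, 0, 1]))
```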
These two combinatorial problems are classical and widely applied.
SOETs have not been studied systematically for these problems. Thus
this thesis investigates the usage of SOETs on heuristic solutions, for
both combinatorial problems.
2.2. Data processing
The p-median problems are often studied with data from problem
libraries, which have a well-organized road network matrix. In real-world applications, the data sets usually do not provide such a complete
road network. Solving p-median problems requires complete and
accurate road networks with the corresponding population data in a
specific area. The Euclidean distance, which is one choice, would
underestimate the real distance and lead to poor location choices
(Carling, et al., 2012). To address that problem, using network distance
is suggested. However, the original network data sets, which researchers
have obtained, usually do not fit and cannot be used directly. The
data sets need to be cleaned and transformed to a connected road
network graph before being used for finding the best locations for
facilities. Many challenging problems can arise in this process, such as
incomplete information or isolated sub-graphs. Dealing with these
problems inappropriately could produce an inaccurate graph and then
lead to poor solutions. Especially for data sets on a large scale,
inappropriate treatment can cause significant trouble, and could lead to
seriously deviated results. Thus, properly addressing the problems in
the data set, and the transformation process, is crucial.
thesis proposes a procedure of converting the road network database to a
road graph which could be used for localization problems.
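One of the cleaning steps discussed later (Table 2) is to round node coordinates to a 2-meter grid so that nearly coincident road endpoints collapse into one crossing. A minimal sketch of that idea follows; the function name, grid handling, and coordinates are illustrative assumptions, not the thesis implementation.

```python
def snap_to_grid(x, y, grid=2.0):
    """Snap a crossing's coordinates (in meters) to the nearest multiple of `grid`,
    so that endpoints that differ by less than the grid spacing map to the same node key."""
    return (round(x / grid) * grid, round(y / grid) * grid)

# Two endpoints less than a meter apart become the same node after snapping.
print(snap_to_grid(100.3, 50.2) == snap_to_grid(100.9, 49.4))  # True
```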
3. Methodological analysis of statistical bounds
Following the pioneering work of Golden and Alt (1979) on statistical
bounds of travelling salesman problems and quadratic assignment
problems, some research was carried out in the 1980s, but since
then the statistical approach has received little attention. Giddings, et al.,
(2014) summarize the current state of research and application of
SOETs on operational problems and give the following framework. A
problem class 𝒥 is a group of problem instances 𝛪, so 𝛪 ∈ 𝒥. This class
contains well-known combinatorial optimization problems, such as TSP,
Knapsack, and Scheduling. A heuristic 𝐻 is a solution method with a
given random number seed. The heuristic class ℋ is the collection of
possible heuristics. 𝑛 is the number of replicates arising from unique
random number seeds. The SOETs consist of all the combination sets for
𝒥 × ℋ 𝑛 . For a complete investigation of the SOET performance, all the
restrictive types 𝐼 × 𝐻 𝑛 need to be checked.
The usefulness of SOETs, discussed for instance by Derigs (1985),
presumably still applies and therefore deserves a critical examination
before application, which is the focus of the theoretical analysis in this
thesis. For a specific combination set of 𝐼 × 𝐻 𝑛 , the application of
SOETs on p-median location problems is examined. Then the analysis is
extended to the genetic algorithm solutions of quadratic assignment problems.
Table 1 summarizes the combinations analysed.
Table 1. I × H^n combination cases analysed

I \ H        Simulated Annealing    Vertex Substitution    Genetic Algorithm
p-median     Yes                    Yes                    No
QAP          No                     No                     Yes
3.1 Theories of statistical bounds
From a statistical point of view, solving a p-median problem means
identifying the smallest value of a distribution. A few notations that are
used here are provided for the sake of clarity. Such notations are:
z_p = feasible solution of locating P facilities in N nodes.
A = the set of all feasible solutions, $A = \{z_1, z_2, \ldots, z_{\binom{N}{P}}\}$.
g(z_p) = the value of the objective function at solution z_p.
θ = min_A g(z_p).
θ̂ = an estimator of θ.
n = the number of runs of the heuristic algorithm with random starting values.
x̃_i = the heuristic solution of the ith run, i = 1, 2, ..., n.
x̃_(i) = the ith order statistic of the n heuristic solutions.
Since the aim of p-median problems is to identify the unknown
minimum of the objective function, 𝜃, and the corresponding solution
𝑧𝑝 , from a statistical point of view, the distribution of the objective
function needs to be checked before further analysis. The problems in
OR-lib (Beasley, 1990) are employed for analysis. A large random
sample is drawn from each problem. The distributions of most problems
in the OR-library mimic the Normal distribution. Only when P is small,
slight skewness is exhibited. Thus, the objective function can be
regarded as approximately Normally distributed truncated in the left tail,
namely the minimum 𝜃. For a good 𝜃̂, feasible solutions whose values
are close to 𝜃 would be required. For 𝜃 far out in the tail, a huge subset
of A is required. For many of the OR-library p-median problems, the
minimum is at least 6 standard deviations away from the mean, requiring
a subset of a size of 1/Φ(−6) ≈ 10^9 (Φ is the standard Normal
distribution function) to provide an expectation of obtaining feasible
solutions close to θ.
Good SOETs require a high quality sample and good techniques.
Previous research shows that repeated heuristic solutions mimic a
random sample in the tail, so long as the starting values are picked at
random (McRoberts, 1971 and Golden and Alt, 1979). Thereby, random
values in the tail can be obtained with much less computational effort,
and used for estimating θ. As for the estimation techniques, there are two
general approaches to SOETs: the Jackknifing approach (JK) and the
extreme value theory approach (EVT). The JK-estimator is derived as

$\hat{\theta}_{JK} = \sum_{m=1}^{M+1} (-1)^{(m-1)} \binom{M+1}{m} \tilde{z}_{(m)}$

where M is the order and $\tilde{z}_{(m)}$ is the m-th smallest value in the sample.
Dannenbring (1977) and Nydick and Weiss (1988) suggest using the first
order, i.e. 𝑀 = 1, for point estimating the minimum. The upper bounds
of the JK estimators are the minimum of x̃_i. The lower bound is
suggested to be $[\hat{\theta}_{JK} - 3\sigma^{*}(\hat{\theta}_{JK})]$, where $\sigma^{*}(\hat{\theta}_{JK})$ is the standard
deviation of 𝜃̂𝐽𝐾 obtained from bootstrapping the n heuristic solutions
(1,000 bootstrap samples are found to be sufficient). The scalar of 3 in
computing the lower bound renders the confidence level to be 99.9%,
under the assumption that the sampling distribution of the JK-estimator
is Normal.
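A minimal sketch of the JK point estimate and its bootstrap lower bound as described above; the sample of heuristic objective values and the seed handling are hypothetical.

```python
import numpy as np
from math import comb

def jackknife_min(z, M=2):
    """M-th order Jackknife point estimate of the minimum from heuristic solutions z."""
    z_sorted = np.sort(np.asarray(z, dtype=float))
    return sum((-1) ** (m - 1) * comb(M + 1, m) * z_sorted[m - 1] for m in range(1, M + 2))

def jk_bounds(z, M=2, n_boot=1000, seed=None):
    """Upper bound: best solution found. Lower bound: JK estimate minus 3 bootstrap standard deviations."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    point = jackknife_min(z, M)
    boot = [jackknife_min(rng.choice(z, size=len(z), replace=True), M) for _ in range(n_boot)]
    lower = point - 3.0 * np.std(boot, ddof=1)
    return point, lower, z.min()

# Hypothetical sample of n = 10 heuristic objective values.
z = [1025, 1031, 1027, 1040, 1029, 1033, 1026, 1045, 1030, 1028]
print(jk_bounds(z, M=2))
```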
The extreme value theory (EVT) approach assumes the heuristic
solutions to be extreme values from different random samples, and they
follow the Weibull distribution (Derigs, 1985). The confidence interval is
derived by the characteristic of the Weibull distribution. The estimator
for 𝜃 is 𝑧̃(1) , which is also the upper bound of the confidence interval.
The Weibull lower bound is $[\tilde{z}_{(1)} - \hat{b}]$ at a confidence level of $(1 - e^{-n})$, where $\hat{b}$ is the estimated shape parameter of the Weibull distribution. The
Weibull approach has been commonly accepted. However, based on the
extreme value theory, when the parent distribution is Normal, the
extreme values follow the Gumbel distribution. Therefore the EVT
approach is completed by considering the Gumbel estimators. It has the
same point estimator and upper bound as the Weibull estimator, but
different lower bounds. The Gumbel lower bound was derived by its
theoretic percentile $[\mu - \sigma \ln(-\ln(1-\alpha))]$, where μ and σ are the
location and shape parameters of the Gumbel distribution and α is the
confidence level.
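The sketch below computes the Weibull point estimate and lower bound exactly as stated here, using the simple shape-parameter estimator quoted in Paper I (Derigs, 1985); the sample values are hypothetical and the 0-based indexing is an implementation detail.

```python
import math
import numpy as np

def weibull_bound(z):
    """Point estimate/upper bound: smallest heuristic solution x_(1).
    Lower bound: x_(1) - b_hat, at confidence level 1 - exp(-n)."""
    x = np.sort(np.asarray(z, dtype=float))
    n = len(x)
    k = math.floor(0.63 * (n + 1)) - 1            # 0-based index of x_[0.63(n+1)]
    b_hat = x[k] - (x[0] * x[-1] - x[1] ** 2) / (x[0] + x[-1] - 2 * x[1])
    return x[0] - b_hat, x[0], 1 - math.exp(-n)   # (lower, upper, confidence)

z = [1025, 1031, 1027, 1040, 1029, 1033, 1026, 1045, 1030, 1028]  # hypothetical sample
print(weibull_bound(z))
```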
3.2 Experimental design
To verify the usefulness of SOETs, full factorial experiments are conducted.
The test problems used are from the OR-library (Beasley, 1990). There
are 40 p-median problems in total with known θ. The experiments
consist of 4 estimators, 40 complexities, 2 heuristics, 4 sample sizes, and
3 computing times. The four estimators considered are the 1st and the 2nd
order JK-estimators, the Weibull estimator and the Gumbel estimator.
Forty problems with distinct complexities are considered. Two heuristics
considered are Simulated Annealing (SA, Al-Khedhairi, 2008), and
Vertex Substitution (VS, Densham and Rushton 1992). They are known
to be among the most commonly used heuristic approaches in solving p-median problems (Reese, 2006), and the ideas of these two heuristics are
quite different from each other. Four sample sizes are considered: 3, 10,
25, and 100. Three computing times are considered: 2·(V/100), 20·(V/100), and 60·(V/100)
seconds per replicate, where V is the number of nodes of the test
problem; computing time refers to the CPU time of one processor on an
Intel i5-2500 at 3.30 GHz. To compare SOETs with deterministic
bounds, Lagrangian relaxation (LR, Daskin, 1995) is employed as a
benchmark; it is run with the same computing time to obtain the
deterministic bounds. To assess the performance, the following statistics
are considered: average relative bias ((bias/θ)·100%), coverage rate
(the proportion of intervals that cover θ), and the proportion of SOET
intervals shorter than the length of the deterministic bounds.
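For completeness, a small sketch of how these three assessment statistics can be computed from experimental output; the numbers are invented for illustration.

```python
import numpy as np

def assess(theta, points, lowers, uppers, det_lengths):
    """Average relative bias (%), coverage rate, and share of SOET intervals
    shorter than the corresponding deterministic bounds."""
    points, lowers, uppers, det_lengths = map(np.asarray, (points, lowers, uppers, det_lengths))
    rel_bias = np.mean((points - theta) / theta) * 100.0
    coverage = np.mean((lowers <= theta) & (theta <= uppers))
    shorter = np.mean((uppers - lowers) < det_lengths)
    return rel_bias, coverage, shorter

# Hypothetical results for five replicates of one test problem with theta = 1000.
print(assess(1000, [1004, 998, 1010, 1002, 996],
             [990, 985, 995, 992, 980], [1012, 1005, 1020, 1008, 1001],
             [40, 35, 50, 30, 45]))
```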
4. Results
4.1. SOET on p-median problems
The experiments lead to the following results regarding SOETs on p-median problems.
(1) The SOETs are quite informative, given that the heuristic solutions
derived are close enough to the optimum. A statistic named SR is
proposed for evaluating whether this condition is satisfied. The statistical
bounds will cover the optimum almost certainly if SR is smaller than the
threshold 4.
(2) Comparing the performances of different SOET estimators, the 2nd
order JK-estimator and the Weibull estimator have better performance
with smaller bias and statistical bounds covering the optimum with
higher probability. When SR<4, the bounds cover the optimum almost
certainly. The Gumbel estimator and the 1st order Jackknife estimator
perform worst.
(3) A small sample size n, such as 3, leads to unstable intervals, but 10
heuristic solutions provide almost equally good statistical bounds as
100 heuristic solutions. Thus, having more than 10 heuristic
solutions would have only a small effect on the functioning of the SOETs.
(4) Different heuristics do not affect the performance of statistical
intervals. The solutions derived by Simulated Annealing are not
significantly different from those derived by Vertex Substitution. The
performance of point estimators and statistical bounds are almost the
same as long as SR<4.
(5) Under the same computing time, statistical intervals give better
results than deterministic intervals derived by Lagrangian relaxation. The
statistical intervals have much shorter lengths in most of the cases while
covering the optimum almost certainly.
(1), (2), and (4) are novel conclusions and cannot be traced back to similar
previous research results. (3) is analogous with Brandeau and Chiu
(1993), which states 𝑛 = 10 would obtain as good solutions as 𝑛 =
2000, and statistical bounds yield better lower bounds than the available
analytical bounds. (5) coincides with Brandeau and Chiu (1993), and
Derigs (1985).
4.2 SOET on Quadratic Assignment problem
The SOETs work well with the p-median problems. Next, the research is
generalized to another combination set in the SOET framework 𝒥 × ℋ 𝑛 ,
namely the Genetic algorithm on Quadratic assignment problems
(QAPs). The Genetic algorithm is one of the most widely used
algorithms in solving operational problems, including QAPs (Loiola, et
al. 2007). It is known to be able to find good solutions consistently,
while computationally affordable and exceptionally robust to different
problem characteristics or implementation (Tate and Smith, 1995). It is
the leading algorithm that researchers seek to solve QAPs although the
quality of the solutions remains ambiguous.
The functioning of different SOETs is examined with similar procedures
to those in the p-median problems. The 1st, 2nd, 3rd, and 4th order JK-estimators and the Weibull estimator are compared with regard to bias,
coverage rate, length of interval, and the functioning of SR. Then the SOET
is compared with deterministic bounds. The following conclusions are
derived. The Jackknife estimators have better performance than the
Weibull estimators, and when the number of heuristic solutions is as
large as 100, higher order JK-estimators perform better than lower order
ones. Compared with the deterministic bounds, the SOET lower bound
performs significantly better than most deterministic lower bounds and is
comparable with the best deterministic ones. One disadvantage of
SOETs is that they have a certain probability of not covering the
optimum which should be considered in practical usage.
4.3. Data processing
The problems encountered are quite common in location models. The
problems are listed in Table 2, together with their treatments. The
reasons for choosing these treatments and their effects are given in
detail in the thesis. By solving all these problems, a road network is
derived with high accuracy and can be used in location model analysis.
Other researchers may not have the same data set, but this approach will
provide them with appropriate suggestions for deriving good network
graphs.
Table 2 Problems encountered in data processing and their treatments
Problem: Missing crossing / connectivity information
Treatment: Round the coordinates to the nearest multiple of 2 meters and add virtual edges into the graph

Problem: Super large graph
Treatment: Remove the dead ends and divide the whole network into small segments

Problem: Long time to calculate distances
Treatment: Add the Fibonacci heap into the Dijkstra algorithm
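The last treatment concerns the one-to-many shortest-path computations needed for network distances. A minimal Dijkstra sketch is given below; the thesis speeds this up with a Fibonacci heap, whereas this illustration uses Python's built-in binary heap (heapq), and the toy graph is hypothetical.

```python
import heapq

def dijkstra(adj, source):
    """Shortest network distances from `source` to all reachable nodes.
    adj maps node -> list of (neighbor, edge_length) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                              # stale entry left in the heap
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny hypothetical road graph: node -> [(neighbor, meters), ...]
adj = {0: [(1, 400), (2, 900)], 1: [(0, 400), (2, 300)],
       2: [(0, 900), (1, 300), (3, 500)], 3: [(2, 500)]}
print(dijkstra(adj, 0))   # {0: 0.0, 1: 400.0, 2: 700.0, 3: 1200.0}
```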
4.4. Empirical location optimization problems
The theoretical research of SOETs can be quite useful in empirical
location optimization problems. The location problems have wide
empirical applications, especially in transportation and resource
planning. With SOETs, it is possible to find solutions close enough to the
optimal solutions (henceforth the derived solutions are referred to as optimal
solutions). The optimal solutions serve as a benchmark to see the
effects of, for instance, logistic planning under the best scenario. Thus
the thesis considers two applications regarding transportation and
resource allocation issues. The first is to compare CO2-emissions induced
by online and brick-and-mortar retailing, and the second to investigate
the effect of administrative borders on accessibility to public services. The
SOETs are employed to find the solution that is sufficiently close to the
optimum.
4.4.1 CO2-emissions induced by online and brick-and-mortar retailing
Environmental aspects are of high priority and much research is devoted
to meeting the challenges of climate change and sustainability, as well as
related environmental issues. The environmental impact of the retail
industry on CO2-emissions should not be underestimated. The primary
aim of this study is to calculate and compare the environmental impact
of buying a standard electronic product online with buying the same
product in a brick-and-mortar store. The focus is on consumer
electronics as this category of consumer products is the largest category
in online retailing in Sweden and presumably is leading the way for
online shopping of other consumer products in the future.
As demonstrated by Carling, Håkansson, Jia (2013), and Jia, Carling,
Håkansson (2013), customers tend to take the shortest route from their
home to the brick-and-mortar shopping areas, and this route is highly
correlated with the route that minimizes CO2-emissions. Thus,
shopping-related trips by the shortest route can approximate CO2-emissions. This scenario fits the p-median model well and therefore
SOETs can be used to derive a solution sufficiently close to the
optimum. By finding the environmentally best locations of brick-and-mortar stores and post offices, it is possible to compare the CO2-emissions resulting from the two retailing methods. The comparison is
conducted in a representative geographical area of Sweden. The CO2-emissions from both the supply and demand side are computed, and then
aggregated to calculate the total carbon footprint. In addition, potential
long-term general equilibrium effects of increased online retailing such
as the exit and/or relocation of mortar-and-brick stores, and potential
effects on consumer demography, are also analyzed. The following
conclusions are derived.
(1) Online shopping induces much lower CO2-emissions compared with
brick-and-mortar stores. The estimated yearly reduction of CO2-emissions from e-tailing of consumer electronics amounts to 28 million kg
in Sweden, due to the recent emergence of this form of distribution.
(2) When the locations of stores are optimized, the CO2-emissions from
brick-and-mortar shopping will decrease, but will still be much larger
than those of online shopping.
(3) Online shopping will retain its environmental effect for a long time.
The conclusions are shown to be stable under most assumptions by sensitivity
checks. The only exception is when over 80% of consumers first visit
the brick-and-mortar store and thereafter order goods online; in this case
the advantage of online retailing would be completely offset.
The method of calculating and comparing the CO2-emissions is another
contribution of this paper. It provides a new way to compare the
emissions efficiently and fairly. The idea and the structure of the
method could be used in other similar research.
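To illustrate how such a comparison can be assembled from p-median output, the sketch below aggregates consumer-trip emissions for two scenarios from the distance to the nearest facility. The emission factor, trip frequency, and all numbers are illustrative placeholders, and the thesis additionally adds supply-side (delivery) emissions for the online case.

```python
import numpy as np

def trip_emissions_kg(dist_to_nearest_km, population, grams_per_km=150.0):
    """Round-trip shopping emissions in kg CO2, assuming one car trip per resident
    along the shortest route to the nearest facility (all assumptions are illustrative)."""
    round_trip_km = 2.0 * np.asarray(dist_to_nearest_km, dtype=float)
    return float(np.dot(population, round_trip_km) * grams_per_km / 1000.0)

# Hypothetical demand points: distance (km) to the nearest store vs. nearest post office.
population = np.array([1200, 800, 500])
dist_store = np.array([12.0, 25.0, 40.0])          # brick-and-mortar scenario
dist_post_office = np.array([3.0, 5.0, 8.0])       # online scenario (parcel pick-up)
print(trip_emissions_kg(dist_store, population), trip_emissions_kg(dist_post_office, population))
```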
4.4.2 On administrative borders and accessibility to public services: The
case of hospitals in Sweden
An administrative border might hinder the optimal allocation of a given
set of resources by restricting the flow of goods, services, and people.
Thereby resources are commonly trapped in suboptimal locations, which
may reduce the efficiency of using these resources. A core part of EU policy
has been to promote cross-border transaction of goods, services, and
labor towards a common European market. There is also a growing
amount of cross-border cooperation of public authorities in Europe,
while public services in the EU are still normally confined by national or
regional borders. As an illustration, López et al. (2009) discuss the
funding of Spanish rail investments, in light of their having substantial
spill-over in French and Portuguese regions bordering Spain. This thesis studies
how regional borders affect the spatial accessibility to hospitals within
Sweden. Since Swedish regions are comparable in geographical size to
many European countries, such as Belgium, Denmark, Estonia,
Slovenia, Switzerland, and the Netherlands, as well as to provinces in Italy
and Spain and states in Germany with self-governing health care,
the results are informative regarding the effect of Europe's internal borders
on the accessibility of health care. To be specific, three issues are
addressed. The first is the effect of borders on inhabitants' spatial
accessibility to hospitals. The second is the quality of the location of
hospitals, and the resulting accessibility. The third is accessibility in
relation to population dynamics.
Sweden, for several reasons, is a suitable case for a borderland study of
accessibility to hospitals. Firstly, we have access to good data on the
national road network, and a precise geo-coding of inhabitants, hospitals,
and regional borders. Secondly, hospital funding, management, and
operation are confined by the regional borders. Thirdly, after 200 years
of a stable regional division of the country, a substantial re-organization
of the regions is due.
To measure the accessibility, the average travel distance/time to the
nearest hospital is employed. Compared with other location models
commonly used for optimizing spatial accessibility of hospitals (Daskin
and Dean, 2004), the p-median model fits this scenario best in finding
the best locations of hospitals. The SOETs are used in finding the best
locations without borders in Sweden nationwide. With the age data, the
dynamics of the results are checked. The experiments lead to the
following conclusions.
(1) Half of the inhabitants are unaffected by removing borders. Border
inhabitants are affected and gain moderate benefits.
(2) Most hospitals are currently located in optimal locations. A small
number of hospitals need to be relocated and significant benefits should
be derived from that.
(3) The results are robust to the population dynamics.
These findings imply that administrative borders only marginally worsen
accessibility. However, borders will hinder the optimal location of public
services. As a consequence, in particular in borderlands, the highest
achievable economic and social utility may not be attained. For this
reason, it seems sensible that EU policy has been to promote cross-border transactions of goods, services, and labor towards a common
European market. Public services have, however, been exempted from
the free flow of services and largely confined by national and regional
borders.
5. Future research
As shown above, the SOETs are a potentially useful method of assessing
the quality of heuristic solutions. Much more research however remains
to be done. Future research relates to the following three issues: (1) p-median problems and QAPs are shown to have differences in SOET
functioning. Thus, the SOETs need to be adjusted according to different
problems. (2) The reason SOETs perform differently across problems has to
be explained. Is it because of the characteristics of different problems? If
so, what characteristics cause the difference? (3) Mathematical support
of SOETs is necessary. The heuristic processes can be treated as
stochastic processes, therefore the solutions might be derived by a
mathematical approach and provide us with some information, such as,
asymptotic distributions.
As to empirical research, many issues depending on different problems
need to be explored, especially as data sets are becoming ever larger.
Analyzing location problems with very large data sets would be
important. Big data analysis techniques would hopefully be
incorporated into solving these problems.
6. Paper list:
I. Carling, K., Meng, X., (2015). On statistical bounds of heuristic
solutions to location problems. Journal of Combinatorial Optimization,
10.1007/s10878-015-9839-0.
II. Carling, K., Meng, X., (2014). Confidence in heuristic solutions?.
Journal of Global Optimization, to appear.
III. Meng, X., (2015). Statistical bounds of genetic solutions to quadratic
assignment problems. Working papers in transport, tourism, information
technology and microdata analysis, 2015:02
IV. Meng, X., Rebreyend, P., (2014), On transforming a road network database
to a graph for localization purpose. International Journal on Web Services
Research, to appear.
V. Carling, K., Han, M., Håkansson, J., Meng, X., Rudholm, N., (2014).
Measuring CO2 emissions induced by online and brick-and-mortar
retailing (No. 106). HUI Research.
VI. Meng, X., Carling, K., Håkansson, J., Rebreyend, P., (2014). On
administrative borders and accessibility to public services: The case of
hospitals in Sweden. Working papers in transport, tourism, information
technology and microdata analysis, 2014:15.
Papers not included:
Meng, X., Carling, K., (2014). How to Decide Upon Stopping a
Heuristic Algorithm in Facility-Location Problems?. In Web Information
Systems Engineering–WISE 2013 Workshops (pp. 280-283). Springer
Berlin Heidelberg.
Meng, X., He, C., (2012). Testing Seasonal Unit Roots in Data at Any
Frequency, an HEGY approach. Working papers in transport, tourism,
information technology and microdata analysis, 2012:08.
Meng, X., (2013). Testing for Seasonal Unit Roots when Residuals
Contain Serial Correlations under HEGY Test Framework. Working
papers in transport, tourism, information technology and microdata
analysis, 2013:03.
7. References
Al-Khedhairi, A., (2008). Simulated annealing metaheuristic for solving
p-median problem. International Journal of Contemporary Mathematical
Sciences, 3:28, 1357-1365.
Beasley, J.E., (1990), OR library: Distributing test problems by
electronic mail, Journal of the Operational Research Society, 41:11, 1067-1072.
Burkard, R. E., Karisch, S. E., Rendl, F. (1997), QAPLIB–a quadratic
assignment problem library. Journal of Global Optimization, 10(4), 391-403.
Carling, K., Han, M., Håkansson, J. (2012). Does Euclidean distance
work well when the p-median model is applied in rural areas?. Annals of
Operations Research, 201(1), 83-97.
Carling, K, Håkansson, J, and Jia, T (2013) Out-of-town shopping and
its induced CO2-emissions, Journal of Retailing and Consumer Services,
20:4, 382-388.
Daskin, M.S., (1995). Network and discrete location: models,
algorithms, and applications. New York: Wiley.
Daskin M S, Dean L K, (2004). Location of health care facilities. In
operations research and health care (Springer, US) pp 43-76.
Densham, P.J, Rushton, G., (1992). A more efficient heuristic for solving
large p-median problems. Papers in Regional Science, 71, 307-329.
Fisher, M.L., (2004), The Lagrangian relaxation method for solving
integer programming problems. Management Science, 50(12_supplement), 1861-1871.
Garey, M.R., Johnson, D.S, (2002). Computers and intractability, 29;
W.H. Freeman, New York.
Giddings, A.P., Rardin, R.L, Uzsoy, R, (2014). Statistical optimum
estimation techniques for combinatorial problems: a review and critique.
Journal of Heuristics, 20, 329-358.
Golden, B.L., Alt, F.B., (1979). Interval estimation of a global optimum
for large combinatorial optimization, Operations Research, 33:5, 1024-1049.
Hakimi, S.L., (1964). Optimum locations of switching centers and the
absolute centers and medians of a graph, Operations Research, 12:3,
450-459.
Hakimi, S.L., (1965). Optimum Distribution of Switching Centers in a
Communication Network and Some Related Graph Theoretic Problems,
Operations Research, 13:3, 462-475.
Han, M., Håkansson, J., Rebreyend, P. (2013). How do different densities in a
network affect the optimal location of service centers? Working papers in
transport, tourism, information technology and microdata analysis, ISSN 1650-5581; 2013:15.
Jia, T, Carling, K, Håkansson, J, (2013). Trips and their CO2 emissions induced
by a shopping center, Working papers in transport, tourism, information
technology and microdata analysis, 2013:02.
Kariv, O., Hakimi, S.L., (1979). An algorithmic approach to network
location problems. part 2: The p-median, SIAM Journal of Applied
Mathematics, 37, 539-560.
Kotz, S., Nadarajah, S., (2000). Extreme value distributions, theory and
applications, Imperial College Press.
Land, A.H., Doig, A.G. (1960), An automatic method of solving discrete
programming problems. Econometrica: Journal of the Econometric
Society, 497-520.
Loiola, E.M., Abreu, N.M.M, Boaventura-Netto, P.O., Hahn.P., Querido,
T., (2007). A survey for the quadratic assignment problem. European
Journal of Operational Research 176.2, 657-690.
McRobert, K.L., (1971). A search model for evaluating combinatorially
explosive problems, Operations Research, 19, 1331-1349.
Reese, J. (2006), Solution methods for the p‐median problem: An
annotated bibliography. Networks, 48(3), 125-142.
Tate, D.M., Smith, A.E., (1995). A genetic approach to the quadratic
assignment problem. Computers & Operations Research, 22(1), 73-83.
PAPER I
This paper is published by Journal of Combinatorial Optimization. We
acknowledge the journal and Springer for this publication.
J Comb Optim
DOI 10.1007/s10878-015-9839-0
On statistical bounds of heuristic solutions to location
problems
Kenneth Carling · Xiangli Meng
© Springer Science+Business Media New York 2015
Abstract Combinatorial optimization problems such as locating facilities frequently
rely on heuristics to minimize the objective function. The optimum is often sought
iteratively; a criterion is therefore necessary to be able to decide when the procedure attains such an optimum. Pre-setting the number of iterations is dominant in OR
applications, however, the fact that the quality of the solution cannot be ascertained
by pre-setting the number of iterations makes it less preferable. A small and almost
dormant branch of the literature suggests usage of statistical principles to estimate
the minimum and its bounds as a tool to decide upon the stopping criteria and also to
evaluate the quality of the solution. In the current work we have examined the functioning of statistical bounds obtained from four different estimators using simulated
annealing. P-median test problems taken from Beasley’s OR-library were used for the
sake of testing. Our findings show that the Weibull estimator and 2nd order Jackknife
estimators are preferable and that the required sample size is about 10. It should
be noted that reliable statistical bounds are found to depend critically on a sample
of heuristic solutions of high quality; we have therefore provided a simple statistic
for checking the quality. The work finally concludes with an illustration of applying
statistical bounds to the problem of locating 70 post distribution centers in a region in
Sweden.
Keywords p-Median problem · Simulated annealing · Jackknife · Discrete
optimization · Extreme value theory
K. Carling · X. Meng (B)
School of Technology and Business Studies, Dalarna university, 791 88 Falun, Sweden
e-mail: xme@du.se
1 Introduction
Consider the problem of finding a solution to min f () where the complexity of the
function renders analytical solutions infeasible. If a solution is found by a heuristic,
how can the quality of the heuristic solution be assessed? The issue can be exemplified
by the common p-median problem. The p-median problem deals with the challenge of
allocating P facilities to a population geographically distributed in Q demand points
such that the population’s average or total distance to its nearest service center is
minimized. Hakimi (1964) considered the task of locating telephone switching centers
and has further shown (Hakimi 1965) that the optimal solution of the p-median model
existed at the nodes in a given network. If N is the number of nodes, then there are
$\binom{N}{P}$ possible solutions for a p-median problem. A substantial amount of research has
been devoted to finding efficient (heuristic) algorithms to solve the p-median model
(see Handler and Mirchandani 1979 and Daskin 1995 as examples); bearing in mind
that enumerating all the solutions is not possible as the problem size grows.
In this particular work a common heuristic known as simulated annealing1 has been
investigated and is based on the implementation reported by Levanova and Loresh
(2004).2 The virtue of simulated annealing, as with other heuristics, is that the algorithm
will iterate towards a good solution, not necessarily the actual optimum.
The prevailing practice is to run the heuristic algorithm for a pre-specified number of
iterations or until improvements in the solution become infrequent. But given a specific
problem, such practice does not readily lend itself to the determination of quality of
the solution. One approach to assess the quality therefore is to seek deterministic
bounds for the minimum employing techniques such as Lagrangian Relaxation (see
Beasley 1993). Such an approach is popular and reasonable for many problems, but
the deterministic bounds depend on the chosen parameters and are available for only
a limited set of heuristic algorithms.
An alternative approach is to employ statistical bounds. In short, the statistical
approach is to estimate the value of the minimum based on a sample of heuristic
solutions and put confidence limits around it. Golden and Alt (1979) did pioneering
work on statistical bounds followed by others in the 1980s, but thereafter the statistical
approach has received little attention. Akyüz et al. (2012) state to the best of their
knowledge that the statistical approach had not been used in location problems since
1993.
However, the usefulness of statistical bounds, as discussed for instance by Derigs
(1985), presumably still applies and they therefore deserve a critical examination.
A few open questions relevant to statistical bounds and their implementation are as
follows. How to estimate the minimum? How and according to which principle should
the bounds be derived? What is the required sample size? Are they reliable? Are
they computationally affordable? Does the choice of heuristic matter? How do they
perform in various OR-problems and are they competitive with deterministic bounds?
1 Simulated annealing is one of the most common solution methods to the p-median problem according
to Reese’s (2006) review.
2 Simulated annealing was implemented in R (www.r-project.org) and the code is attached in the Appendix.
To address all these questions at once would be an insurmountable task, and therefore
we limit the analysis to the first four questions in connection with the p-median
problem. More specifically, the aim of the current work is to test by experimentation
whether if statistical bounds can provide information on the optimum in p-median
problems solved by simulated annealing.
The remainder of the paper is organized as follows. Section 2 presents a review of the
suggested methods for statistically estimating the minimum of the objective function,
as well as bounds for the minimum. A few remarks on the issue are also added for
the sake of completeness. Section 3 compares the different methods by applying them
to uncapacitated p-median test problems of a known optimum of varying complexity.
Test problems available from the Beasley’s OR-library (Beasley 1990) were used.
Section 4 illustrates the problem of locating post distribution centers in a region in
mid-Sweden. The paper finally presents concluding remarks.
2 Statistical estimation of the minimum and its bounds
From a statistical point of view, solving a p-median problem means identifying the
smallest value of a distribution. The notation that is used throughout the paper is
provided for the sake of clarity. Such notation is:
z_p = feasible solution of locating P facilities in N nodes.
A = the set of all feasible solutions, $A = \{z_1, z_2, \ldots, z_{\binom{N}{P}}\}$.
g(z_p) = the value of the objective function at solution z_p.
θ = min_A g(z_p).
θ̂ = an estimator of θ.
n = the number of runs of the heuristic algorithm with random starting values.
x̃_i = the heuristic solution of the i-th run, i = 1, 2, ..., n.
x̃_(i) = the i-th order statistic of the n heuristic solutions.
x̃i = the heuristic solution of the i th run i = 1, 2, . . . , n.
x̃(i) = the i th order statistic of the n heuristic solutions.
The aim is to identify the unknown minimum of the objective function, θ , and the
corresponding solution z p . Since p-median problems usually are complex, most of
the time one can only hope to get a solution near to θ (Levanova and Loresh 2004).
Statistical bounds would add information about the solution by, ideally, providing an
interval that almost certainly covers θ .
Figure 1 gives an example of the distribution of feasible solutions to a p-median
problem, namely the 14th problem in the OR-library (Beasley 1990). One million z p ’s
are drawn at random from A and the histogram of the corresponding g(z p ) is given.
This empirical distribution as well as the distributions of most of the other problems in
the OR-library mimics the Normal distribution. However, this large sample is almost
useless for identifying θ . The crux is the required size of the subset of A. The objective
function in this p-median problem might be regarded as approximately Normal with
a truncation in the left tail being the minimum θ . For a good θ̂ , feasible solutions
whose values approach θ would be required. For θ far out in the tail, a huge subset
of A is required. For many of the OR-library p-median problems, the minimum is
at least 6 standard deviations away from the mean, requiring a subset of a size of
1/Φ(−6) ≈ 10^9 (Φ is the standard Normal distribution function) to render hope of
having feasible solutions close to θ.

[Fig. 1 Sample distribution of the 14th problem in the OR-library]
Work reported earlier has pointed out that if the starting values are picked at random,
repeated heuristic solutions mimic a random sample in the tail (McRobert 1971 and
Golden and Alt 1979). Thereby random values in the tail can be obtained with much
less computational effort and used for estimating θ . The fact that a good θ̂ needs
not only a random sample in the tail but also a good estimation method led to the
consideration of four point and interval estimators where the interval estimator gives
the statistical bounds. The first two estimators follow from the extreme value theory
(EVT). According to this theory, the distribution of the extreme value x̃i will be the
Weibull distribution if g(z_p) follows a skewed (or a uniform) distribution (Derigs
1985). This is the assumption and the estimator conventionally used in the interest of
obtaining statistical bounds for various problems in operations research.
We have, however, taken large random subsets of A for the 40 problems in the
OR-library. We have found that g(z p ) is typically symmetrically distributed, and only
slightly skewed in instances in which P is small. Consequently, the extreme values
might be better modelled by the Gumbel distribution due to their proximity to the
Normal distribution (see Kotz and Nadarajah 2000 (p. 59)). Hence, the EVT approach
is completed by considering the possibility that the distribution is Gumbel. Finally,
to loosen the distributional assumptions of the EVT approach, two non-parametric
estimators relying on the Jackknifing and bootstrapping methods are considered. As
a side-remark, the computational cost of calculating statistical bounds for all four
estimators is trivial compared with solving the p-median problem and we therefore
ignore this cost.
The Weibull point estimator is θ̂W = x̃(1) , which is also the statistical upper bound.
The lower bound, with a confidence of (1 − e−n ), is (x̃(1) − b̂) where b̂ is the estimated
shape parameter of the Weibull distribution (Wilson et al. 2004). There are several
ways of estimating the shape parameter, including the maximum likelihood estimation
technique. We found the following simple estimator to be fast, stable, and giving good
results: $\hat{b} = \tilde{x}_{[0.63(n+1)]} - (\tilde{x}_{(1)}\tilde{x}_{(n)} - \tilde{x}_{(2)}^{2})/(\tilde{x}_{(1)} + \tilde{x}_{(n)} - 2\tilde{x}_{(2)})$, where [0.63(n + 1)]
is the floor of the value of the function (Derigs 1985).
The Weibull and the Gumbel (θ̂G ) estimators have the same point estimator and
upper bound, but different lower bounds. The Gumbel lower bound was derived by its
theoretic percentile [μ − σ ln (− ln (1 − α))] where μ and σ are the location and shape
parameters of the Gumbel distribution while α is the confidence level. The Gumbel
parameters are estimated by the moments as $\hat{\sigma} = \sqrt{6\,\mathrm{var}(\tilde{x}_i)}/\pi$ and $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} \tilde{x}_i - 0.57722\,\hat{\sigma}$, and are based on the details provided in Kotz and Nadarajah (2000, p. 12).
In the Weibull approach the confidence level is determined by n. To render the Weibull
and the Gumbel approach comparable in terms of confidence level we will let $\alpha = e^{-n}$.
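A small sketch of the Gumbel moment estimates just defined; the sample is hypothetical, and the use of the sample variance (ddof=1) is an assumption not stated in the paper.

```python
import numpy as np

def gumbel_moment_estimates(x):
    """Moment estimates of the Gumbel parameters as given above:
    sigma_hat = sqrt(6*var(x))/pi and mu_hat = mean(x) - 0.57722*sigma_hat."""
    x = np.asarray(x, dtype=float)
    sigma_hat = np.sqrt(6.0 * np.var(x, ddof=1)) / np.pi
    mu_hat = x.mean() - 0.57722 * sigma_hat
    return mu_hat, sigma_hat

# Hypothetical sample of heuristic objective values.
print(gumbel_moment_estimates([1025, 1031, 1027, 1040, 1029, 1033, 1026, 1045, 1030, 1028]))
```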
Finally, as briefly discussed in the literature, Jackknifing (hereafter JK) may be used
for point estimation of the minimum as

$\hat{\theta}_{JK} = \sum_{i=1}^{M+1} (-1)^{(i-1)} \binom{M+1}{i} \tilde{x}_{(i)}$

where M is the order (Quenouille 1956). Earlier relevant work suggested usage of
the first order for point estimating the minimum (Dannenbring 1977; Nydick JR and
Weiss 1988). The rationale for this suggestion is a lower mean square error of the first
order JK-estimator compared with higher orders, in spite of a larger bias (Robson and
Whitlock 1964). Both first and second order JK-estimators were used in the current
work. The point-estimators are $\hat{\theta}_{JK}^{(1)} = 2\tilde{x}_{(1)} - \tilde{x}_{(2)}$ and $\hat{\theta}_{JK}^{(2)} = 3\tilde{x}_{(1)} - 3\tilde{x}_{(2)} + \tilde{x}_{(3)}$.
The upper bounds of both JK-estimators are identical to the other two estimators. As a
lower statistical bound for the JK-estimators, we suggest using the bootstrap method
by Efron (1979). We define the lower bound as [θ̂ J K − 3σ ∗ (θ̂ J K )] where σ ∗ (θ̂ J K ) is
the standard deviation of θ̂ J K obtained from bootstrapping the n heuristic solutions
(we found 1000 bootstrap samples to be sufficient). With a scalar of 3 in computing
the lower bound, the confidence level is 99.9 % provided that the sampling distribution
of the JK-estimator is Normal.
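A corresponding sketch (again not code from the paper) of the Jackknife point estimates and the bootstrapped lower bound, with x a vector of n heuristic solutions and jk.bounds a hypothetical helper name:
# Jackknife point estimate (order 1 or 2) with a bootstrap lower bound (illustrative sketch)
jk.bounds <- function(x, order = 2, B = 1000) {
  jk <- function(y) {
    ys <- sort(y)
    if (order == 1) 2 * ys[1] - ys[2] else 3 * ys[1] - 3 * ys[2] + ys[3]
  }
  est <- jk(x)
  boot <- replicate(B, jk(sample(x, length(x), replace = TRUE)))   # bootstrap the estimator
  list(point = est,
       lower = est - 3 * sd(boot),    # about 99.9 % confidence if the estimator is Normal
       upper = min(x))                # upper bound shared with the Weibull and Gumbel approaches
}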
3 Experimental evaluation of the estimators
The two EVT-estimators Weibull and Gumbel (θ̂W, θ̂G), together with the JK-estimators θ̂JK and the accompanying bounds, are justified by different arguments
and there is no way to deem one superior to the others unless they are put to test. The
OR-library’s 40 test problems were used as it is crucial to know the exact minimum
θ of the problems used for comparison. Moreover, the problems vary substantially in
(statistical) complexity, which we define here as ((μg(z p ) − θ )/σg(z p ) ) with μg(z p ) and
σg(z p ) being the expectation and the standard deviation of g(z p ), as their minimum
are varyingly far out in the tail of the distribution of g(z p ).
The estimators’ ability to generate intervals that cover the problems’ minimum as
well as the lengths of the intervals were investigated. Details concerning the calculation
of the estimators and the bounds are given in Sect. 2, except for the size of n and the
required number of iterations of the heuristic algorithm. Based on pre-testing the
heuristic algorithm, we decided to evaluate the estimators after 1000, 10,000, and, for
the more complex problems, 100,000 iterations. In the early literature on statistical
Table 1 Description of 6 problems from the OR-library

Problem   θ      μg(zp)   σg(zp)   Complexity
P11       7696   10,760   1195     2.56
P2        4093   6054     506      3.88
P36       9934   13,436   735      4.77
P13       4374   6293     276      6.95
P33       4700   6711     186      10.81
P30       1989   3335     90       14.93
bounds little was said on the size of n, whereas Akyüz et al. (2012) and Luis et al.
(2009) advocate n to be at least 100. We have examined n = 3, 10, and 100, and would
have considered even larger values of n had the experimental results made it necessary.
3.1 The complexity of the test problems
In Table 1 the known minimum θ as well as the estimates of μg(z p ) and σg(z p ) are
presented. The complexity of the problems is computed with the mean and the
standard deviation estimated on a random subset of A of size 1,000,000. Instead
of following the original order of the test problems, Table 1 gives the problems in
ascending order of complexity (see also Table 5). The complexity varies between 2.56
for problem P11 to 14.93 for problem P30.
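As a quick check of the definition, the complexity of P11 follows directly from the entries of Table 1: (10,760 − 7696)/1195 ≈ 2.56, and analogously for the other problems.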
All estimators require an efficient heuristic that produces random solutions in the
tail of the distribution g(z p ). In the experiments random solutions x̃i are consistently
generated in the tail by employing the simulated annealing (SA) heuristic. SA has been
found capable of providing good solutions to the problems in the OR-library (Chiyoshi
and Galvão 2000). For each problem we run SA for 10,000 iterations (or 100,000
iterations for 22 test problems with a complexity of 6.95 and higher). For each problem
we run SA 100 times with unique random starting values.
Consider the issue of determining whether the solutions are in the tail of the distribution g(z p ). A sample of solutions obtained after running only a few iterations
is expected to have solutions far from the optimum with great variation due to the
starting points being selected at random. However, after many iterations the solutions
are expected to be concentrated near the optimum. Hence, a measure of the similarity
of the heuristic solutions might be indicative of whether the sample is in the tail near
to θ or not. A natural measure of similarity is the standard deviation of the heuristic solutions, σ (x̃i ).3 The variability in heuristic solutions is however sensitive to the
metric (or scale) of the problem. One candidate for rescaling the variation in heuristic
solutions is θ , but that parameter is of course unknown in practice. It is replaced by
an estimator of it instead. The JK-estimators are the best point estimators of θ and the
3 The sample standard deviation is actually negatively biased. The bias is negligible unless n is small (Cureton
1968). In the experiments, the bias of the sample standard deviation is of practical importance only in the
case of n = 3.
Fig. 2 SR as a function of the test problems' complexity (ln); separate curves for 1000, 10,000, and 100,000 iterations
second order estimator is slightly less biased than the first order. Nonetheless, θ̂JK^(1)
was chosen for standardization since it has a smaller variation than the second order
point estimator. In the following the statistic (hereafter referred to as SR) given by the
ratio 1000σ(x̃i)/θ̂JK^(1) is considered for the purpose of checking if the solutions are
in the tail of the distribution.
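As a small sketch (assuming x holds the n heuristic solutions), SR could be computed as:
# SR = 1000 * standard deviation of the heuristic solutions / first order JK estimate
sr <- function(x) {
  xs <- sort(x)
  jk1 <- 2 * xs[1] - xs[2]      # first order Jackknife point estimate
  1000 * sd(x) / jk1
}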
Figure 2 shows SR as a function (regression lines imposed) of the problems’ complexity evaluated after 1000, 10,000, and 100,000 iterations when applicable. Some
points that are worth noting in this light are as follows. A more complex, i.e. a more
difficult, p-median problem will have a greater variation in the heuristic solutions
because many of the solutions obtained are far from the minimum. As the number
of iterations increases, the SR generally decreases, as more and more solutions are
approaching the minimum.
3.2 The bias of point-estimation
An initial check to see if the estimators can point-estimate the minimum was conducted.
Bearing in mind the computational cost, it was decided to run simulated annealing
100 times for 100,000 iterations for each of the 40 test problems. To examine the
statistical properties of the estimators, re-samples with replacement were drawn from
the sample of 100 heuristic solutions.
Table 2 contains a few results that were achieved. A complete list of the
results can be found in the appendix Tables 6, 7 and 8. The table shows the results for
the heuristic after 10,000 iterations and for n = 10 and n = 100. It was evident that
n = 3 typically led to failure both in estimating the minimum and in setting the
statistical bounds; this case is hence not discussed thoroughly hereafter.
For the three simpler problems the bias is about 0 meaning that the minimum is
Table 2 Bias of the estimators evaluated after 10,000 iterations

Problem  Complexity  n    θ     θ̂JK^(1)−θ  θ̂JK^(2)−θ  θ̂W,G−θ  SR
P11      2.56        10   7696  0           0           0        0.20
P11      2.56        100  7696  0           0           0        0.44
P2       3.88        10   4093  0           0           0        1.46
P2       3.88        100  4093  0           0           0        1.46
P36      4.77        10   9934  4           16          0        1.45
P36      4.77        100  9934  1           1           1        3.32
P13      6.95        10   4374  15          23          14       5.88
P13      6.95        100  4374  15          15          15       6.19
P33      10.81       10   4700  118         129         116      5.19
P33      10.81       100  4700  115         115         117      5.34
P30      14.93       10   1989  84          92          82       7.14
P30      14.93       100  1989  80          81          82       7.38
well estimated by all the four approaches. For the three more complex problems all
approaches are biased and over-estimate the minimum to the same degree. Hence,
none of the approaches seems superior in terms of estimating the minimum. The bias
persists when n = 100 is considered, indicating that there is no apparent gain in
increasing n beyond 10. By increasing the number of iterations to 100,000 the bias of
the 13th problem is eliminated and reduced for the two most complex problems in a
similar way for all estimators (see Appendix Tables 6, 7, 8).
Although it was expected to find the Weibull and Gumbel estimators of the minimum
to be positively biased, it was disheartening to find the bias of the Jackknife-estimators
to be significant. The practical value of the estimators lies in their ability to provide intervals containing the minimum. We therefore examine the intervals’ coverage
percentage of the minimum θ .
3.3 Do the intervals cover the optimum?
In Table 3 the same subset of test problems as in Table 2 is shown. The intervals
obtained from the 2nd order Jackknife and the Weibull approaches are presented.
To compute the coverage percentage of the Weibull approach for Problem 11 we
took a sample with replacement of 10 (or 100) of the 100 heuristic solutions to the
problem. Thereafter we computed the lower bound and checked if the minimum of
7696 was within the lower bound and the best solution (upper bound). The procedure
was repeated 1000 times; the proportion of times the interval covered the minimum
was 1.00 for this problem. A complete list of results and estimators for the other test
problems is presented in the appendix. The intervals given in the table are the means
of the lower and the upper bounds over the 1000 replications. The coverage and bounds for the other estimators
were computed in an identical manner. With a theoretical confidence level of almost 1
Table 3 Bounds and coverage of Jackknife (2nd order) and Weibull after 10,000 iterations

Problem  Complexity  n    θ     CoverageJK  IntervalJK    CoverageW  IntervalW     SR
P11      2.56        10   7696  1.00        [7696,7696]   1.00       [7696,7696]   0.20
P11      2.56        100  7696  1.00        [7696,7696]   1.00       [7696,7696]   0.44
P2       3.88        10   4093  1.00        [4079,4093]   1.00       [4083,4093]   1.46
P2       3.88        100  4093  1.00        [4093,4093]   1.00       [4081,4093]   1.46
P36      4.77        10   9934  0.96        [9853,9950]   1.00       [9887,9950]   1.45
P36      4.77        100  9934  0.98        [9908,9935]   1.00       [9867,9935]   3.32
P13      6.95        10   4374  0.91        [4329,4397]   0.99       [4355,4397]   5.88
P13      6.95        100  4374  0.23        [4374,4389]   1.00       [4344,4389]   6.19
P33      10.81       10   4700  0.20        [4737,4829]   0.03       [4773,4829]   5.19
P33      10.81       100  4700  0.00        [4793,4817]   0.00       [4758,4817]   5.34
P30      14.93       10   1989  0.19        [2019,2081]   0.03       [2047,2081]   7.14
P30      14.93       100  1989  0.00        [2051,2071]   0.00       [2034,2071]   7.38
for all the interval estimators, regardless of whether n equals 10 or 100, the coverage
percentage of the experiments should also be almost 100 per cent.
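As a minimal sketch of the resampling scheme described above (assuming solutions holds the 100 heuristic solutions of a problem, theta its known minimum, and weibull.bounds the hypothetical helper sketched in Sect. 2):
# Estimated coverage of the Weibull interval for one test problem (illustrative sketch)
coverage.weibull <- function(solutions, theta, n = 10, reps = 1000) {
  hits <- replicate(reps, {
    x <- sample(solutions, n, replace = TRUE)   # draw n of the 100 solutions with replacement
    b <- weibull.bounds(x)
    b$lower <= theta && theta <= b$upper        # does the interval cover the minimum?
  })
  mean(hits)                                    # proportion of replications covering theta
}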
This is the case for the simpler test problems, but certainly not for more complex
problems. On the contrary, for the most complex problem (P30) all of the 1000 intervals
exceeded the minimum for n = 100. It is reasonable to infer that x̃i will converge when
the solutions come close to the optimum, and consequently the standard deviation of
them would also decrease as discussed above. By dividing the standard deviation of
x̃i by θ̂JK^(1), a measure of similarity amongst the solutions is obtained, allowing one
to observe how far out in the tail and how close to the optimum the solutions are
(reported in Tables 2 and 3 as column SR). The first three problems all have small
SR. Correspondingly, the bias of the point-estimators is small and the coverage of the
intervals is close to 1. For the last three problems being more complex, SR and the
bias is large and the coverage percentage is poor. Hence, a large SR indicates that x̃i :s
are not sufficiently near to the optimum for a reliable estimation of the minimum and
bounds of it. Additional iterations of the heuristic might improve the situation.
The number of iterations is further increased for problem P13 and the other problems with an even higher complexity. Table 4 gives again the coverage and the bounds
of Weibull and the 2nd order Jackknife approach where the procedure is identical to the
one described in relation to Table 3 with the exception that the number of iterations is
100,000. The problems P13 and P33 have a value of SR far below 5 after 100,000 iterations (Table 4). As a result the intervals generally cover the minimum. The large number
of iterations is on the other hand possibly insufficient for the most complex problem
(P30), as SR is about 5 and the intervals often fail to cover the actual minimum.
3.4 The coverage and SR
An examination of the intervals’ coverage for a few test problems is insufficient for
drawing any general conclusions about the relationship between coverage and SR.
Table 4 Bounds and coverage of Jackknife (2nd order) and Weibull after 100,000 iterations

Problem  Complexity  n    θ     CoverageJK  IntervalJK    CoverageW  IntervalW     SR
P13      6.95        10   4374  1.00        [4370,4374]   1.00       [4370,4374]   0.91
P13      6.95        100  4374  1.00        [4374,4374]   1.00       [4369,4374]   0.93
P33      10.81       10   4700  0.84        [4681,4714]   0.99       [4694,4714]   2.07
P33      10.81       100  4700  0.80        [4690,4707]   1.00       [4682,4707]   2.12
P30      14.93       10   1989  0.49        [1982,2018]   0.44       [1995,2018]   4.71
P30      14.93       100  1989  0.28        [1990,2009]   0.96       [1984,2009]   4.83
Fig. 3 Coverage as a function of SR for the Gumbel, JK (1st), JK (2nd), and Weibull estimators, n = 100
Figure 3 shows the coverage as a function of SR for all four estimators and n = 100.
The first thing to note is that the coverage decreases drastically from about the nominal
level of one to about 0.5 once SR is around 4 or higher. For SR below 4 both the
Weibull and the Gumbel estimator have excellent coverage whereas the two Jackknife
estimators are less reliable with the first order Jackknife-estimator being the worst.
Lastly, the functions depicted in the figure were estimated based on all the experiments
and at all levels of iterations: 1000, 10,000, and (for about half of the test
problems) 100,000. In estimating the functions we did not find any indication of
different patterns depending on the number of iterations. Hence we think the results
can be interpolated for iterations in the range of 1000–100,000, and probably also
extrapolated beyond 100,000 iterations.
However, running the heuristic algorithm in 100 parallel processes is computationally costly and it is therefore worthwhile to check how the estimators manage to
uncover the minimum using a smaller sample of heuristic solutions. In Figs. 4 and 5
the coverage as a function of SR for the two lower levels of n is shown. It is evident
Fig. 4 Coverage as a function of SR for the Gumbel, JK (1st), JK (2nd), and Weibull estimators, n = 10
Fig. 5 Coverage as a function of SR for the Gumbel, JK (1st), JK (2nd), and Weibull estimators, n = 3
from the two figures that the Gumbel estimator is quite poor, particularly when its
parameters are estimated on 10 or fewer observations. The Weibull estimator works
well in the case of n = 10. The second order Jackknife estimator is the best estimator
for n ≤ 10 and gives decent coverage even in the case of n = 3 as long as SR is below
4.
Coverage is not the only factor of interest in deciding between n = 10 and n = 100.
A greater value of n will reduce the bias and the choice might also affect the length
Fig. 6 Empirical distribution of objective function for the Swedish Post problem
of the interval. However, the work of Brandeau and Chiu (1993) suggests the bias
reduction to be marginal. The investigation was limited to the Weibull estimator on
the 36 test problems for which SR was below 4 upon running 10,000 or 100,000
iterations. The bias was on average 0.04 % larger for n = 10 than for n = 100, with 0.2 %
as a maximum; relative to the optimum, n = 10 produced slightly shorter intervals in
75 % of the test problems and slightly longer intervals in the remaining 25 %.
To sum up the findings in this section: extensive experimentation on the 40 test
problems from the OR-library reveals that the Gumbel estimator and the first order
Jackknife estimator are inferior to the alternatives. A sample of n = 10 seems sufficient for a reliable estimation of statistical bounds for the minimum, given that SR
is modest (around 4 or less). If the SR is large, then no estimator provides reliable
bounds. A complete list of all the results can be found in the appendix.
4 An illustrating problem
Finally, the application of the estimators to a practical location problem is illustrated.
The problem of allocating 71 post distribution centers to 6,735 candidate nodes in
Dalarna, Sweden is investigated. The landscape of the region and its population of
277,725 inhabitants, distributed in 15729 demand points, is described by Carling et al.
(2012). Han et al. (2013) provide a detailed description of the road network and suggest
that network distance is used as the distance measure. The average distance in meters on
the road network to the nearest postal center for the population was considered as the
objective function in this particular case. A distribution of the objective function is
also provided to convey the complexity of the problem.
A random sample of 1,000,000 is drawn and the empirical distribution of g(z p ) is
shown in Fig. 6. The distribution is slightly skewed to the right, but still approximately
Normal. To evaluate the complexity of the problem, θ̂JK^(2) was used to estimate the
minimum θ, whereas the mean and variance of g(zp) were derived from the random
sample of 1,000,000. The complexity of 5.47 corresponds to an intermediate OR-library problem.
Fig. 7 The Swedish Post problem and SR by the number of iterations. The best solution (%) and the Jackknife (1st order) point-estimate (%) relative to the best solution after 300,000 iterations
Drawing on the experimental results above, with n = 10 the heuristic algorithm
was set to run until SR went well below 4 with a check after every 10,000 iterations.
Figure 7 shows the evolution of SR (as well as the best heuristic solution and the
Jackknife estimator of the minimum) as a function of the number of iterations. It was
decided to stop the heuristic processes after 300,000 iterations where SR was equal to
3.12 and the statistical bounds were quite tight.
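A sketch of this stopping rule, with run.sa denoting a hypothetical helper that advances one SA process by a given number of iterations and returns its current best value, and sr as sketched earlier:
# Run n = 10 SA processes, checking SR after every 10,000 iterations (illustrative sketch)
n <- 10
best <- rep(NA, n)                       # best objective value per process
repeat {
  for (i in 1:n) best[i] <- run.sa(process = i, iters = 10000)   # hypothetical helper
  if (sr(best) < 4) break                # stop once the solutions are similar enough
}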
Upon stopping the heuristic algorithm we used the sample of the 10 heuristic solutions to compute statistical bounds for the minimum using all the four estimators. The
point estimators were θ̂JK^(1) = 2973, θ̂JK^(2) = 2972, and θ̂W,G = 2975, with the latter
being the upper bound. The lower bounds were 2964, 2956, 2959, and 2966 for first
and second order Jackknife, Weibull, and Gumbel estimators, respectively. Hence, all
estimators suggest an interval of at most 20 m, which is a tolerable error for this
application.
The problem was also addressed by using Lagrangian Relaxation (LR) to obtain
deterministic bounds (Daskin 1995).4 LR was run for the computing time it took to run
300,000 iterations for SA on the problem. The deterministic upper and lower bounds
were 3275 and 2534 with no improvement after half of the computing time.
4 Han et al. (2013) implemented LR for the test problems and, by pre-testing, found the mean of the
columns of the distance matrix divided by eight to yield good starting values for the algorithm. Han's
implementation of LR is mimicked.
5 Concluding remarks
In the current work, the problem of knowing when a solution provided by a heuristic
is close to optimal was investigated. Deterministic bounds may sometimes be applicable
and may be tight enough to shed light on the problem. We have studied
statistical bounds, which are potentially of more general applicability. We have also studied
the occasionally used Weibull estimator as well as two variants of the Jackknife estimator which, to our knowledge, have never been used for obtaining bounds in location
problems. Furthermore, we have given arguments for an alternative EVT estimator,
namely the Gumbel estimator, and examined its performance.
Derigs (1985) made a number of concluding observations upon studying statistical
bounds with regard to TSP and QAP. We think that most of his conclusions are still
valid, except one. Derigs stated “The Weibull approach leads to a proper approach”.
We have demonstrated that none of the estimators, including Weibull, is reliable
unless the sample of heuristic solutions used for deriving the bounds is
of high quality. To assess the quality we considered using SR, which is the standard
deviation of the n solutions divided by the Jackknife point-estimator. The experiments
suggest that SR exceeding 4 causes unreliable bounds. The threshold of 4 should be
understood as a conservative choice as the statistical bounds might be reliable in some
cases where SR exceeds 4.
The estimators performed similarly with a slight advantage for the second order
Jackknife and the Weibull estimator. In fact, if one cannot afford to run many processes
in parallel to get large n, the second order Jackknife is the first choice. We did address
the question of the size of n. There is not much research on the size of n, but previous
researchers have indicated that n needs to be at least 100. Our results suggest this size
to be overly pessimistic; in fact, most estimators provided reliable bounds at n equal to
10 (and the second order Jackknife provided fairly reliable bounds even at n equal to 3).
We have limited our study to location problems by means of the p-median problems
in the OR-library for which the optimum is known, and a real world p-median problem.
It seems that g(z p ) closely follows the Normal distribution in these cases. Other
combinatorial problems may imply, from a statistical perspective, objective functions
of a more complicated kind such as multi-modal or skewed distributions. Moreover,
our results seem to hold for all of the OR-library problems, which represent a substantial
variation in kind and complexity. However, we consistently used simulated annealing
as the heuristic. We do not think this choice is crucial for our findings since the heuristic
only serves to obtain a sample in the tail of the distribution and any heuristic meeting
this requirement should work. However, it goes without saying that extending the
variation in combinatorial problems and heuristic algorithms used would expand the
knowledge about the value of statistical bounds in combinatorial optimization. In other
words, it seems worthwhile for future research to address the auxiliary questions posed
in the introduction: Does the choice of heuristic matter? How do statistical bounds
perform in various OR-problems and are they competitive with deterministic bounds?
As for the last question, there are some indications that statistical bounds are competitive with deterministic bounds. Derigs (1985) compared statistical bounds to deterministic bounds for 12 Travelling Salesman Problems and 15 Quadratic Assignment
Problems. With regard to the latter, he found statistical bounds tighter (and thus more
informative) than the deterministic bounds. Brandeau and Chiu (1993) found similar
results on a subset of location problems. We are conducting additional experiments
on statistical bounds and the preliminary results suggest statistical bounds to be competitive (Carling and Meng 2014).
Acknowledgments We are grateful to participants at INFORMS Euro 2013 in Rome, two anonymous
reviewers, and Siril Yella for useful comments on an earlier version. Financial support from the Swedish
Retail and Wholesale Development Council is gratefully acknowledged.
Appendix
Table 5 Description of the other 34 problems of the OR-library

Problem  θ       μg(zp)   σg(zp)  Complexity
P1       5819    8426     877     2.97
P16      8162    11,353   1033    3.09
P6       7824    10,522   869     3.10
P26      9917    13,644   1133    3.29
P21      9138    12,906   1067    3.52
P38      11,060  15,078   1143    3.52
P31      10,086  13,960   1077    3.60
P35      10,400  14,179   1085    3.81
P7       5631    7930     598     3.84
P3       4250    6194     500     3.89
P27      8307    11,428   727     4.29
P17      6999    9819     631     4.47
P22      8579    11,699   676     4.62
P12      6634    9387     586     4.70
P39      9423    12,988   736     4.84
P32      9297    12,687   699     4.85
P4       3034    4618     320     4.95
P5       1355    2376     197     5.18
P8       4445    6604     356     6.07
P9       2734    4250     202     7.51
P18      4809    6769     248     7.92
P10      1255    2278     127     8.02
P23      4619    6586     220     8.94
P14      2968    4501     168     9.12
P28      4498    6369     188     9.95
P19      2845    4327     144     10.32
P15      1729    2896     109     10.67
P24      2961    4486     134     11.42
P37      5057    7246     188     11.65
P20      1789    3108     112     11.73
P40      5128    7329     179     12.32
P29      3033    4559     118     12.93
P25      1828    3131     95      13.64
P34      3013    4617     112     14.36
Table 6 Results for the estimators in the computer experiments, n = 3. For each test problem and number of iterations (1000, 10,000 and, for the more complex problems, 100,000) the table reports SR together with the bias of the point estimator and the coverage and length of the interval for the JK (1st order), JK (2nd order), Weibull, and Gumbel approaches.
Table 7 Results for the estimators in the computer experiments, n = 10. For each test problem and number of iterations (1000, 10,000 and, for the more complex problems, 100,000) the table reports SR together with the bias of the point estimator and the coverage and length of the interval for the JK (1st order), JK (2nd order), Weibull, and Gumbel approaches.
Table 8 Results for the estimators in the computer experiments, n = 100. For each test problem and number of iterations (1000, 10,000 and, for the more complex problems, 100,000) the table reports SR together with the bias of the point estimator and the coverage and length of the interval for the JK (1st order), JK (2nd order), Weibull, and Gumbel approaches.
R-code of simulated annealing
# Simulated annealing for the p-median problem.
# Distance.matrix is assumed to be a given N x N matrix of distances
# between demand points (rows) and candidate locations (columns).
N <- 100                 # number of candidate locations
p <- 5                   # number of facilities
n <- 100                 # number of SA processes (replicates with random starts)
ni <- 10000              # number of iterations per SA process
heuristic.solution <- matrix(0, nrow = ni, ncol = n)   # best value found so far, per iteration and process
heuristic.location <- matrix(0, nrow = p, ncol = n)    # best locations found, per process
Store.solution <- numeric()
Best.solution <- numeric()
Store.location <- matrix(0, nrow = ni, ncol = p)
Best.location <- numeric()
for (i in 1:n) {
  select.location <- sample(1:N, p, replace = FALSE)   # random starting locations
  objective.function <- sum(apply(Distance.matrix[, select.location], 1, min))
  iteration <- 0; Temperature <- 400; beta <- 0.5; count <- 0   # initial parameter setting
  while (iteration < ni) {
    sam <- sample(1:p, 1)                               # pick one facility to move
    substitution <- sample((1:N)[-select.location[sam]], 1)
    store.selection <- select.location
    select.location[sam] <- substitution
    updated.objective.function <- sum(apply(Distance.matrix[, select.location], 1, min))
    if (updated.objective.function <= objective.function) {
      objective.function <- updated.objective.function; beta <- 0.5; count <- 0
    }
    if (updated.objective.function > objective.function) {
      delta <- updated.objective.function - objective.function
      unif.number <- runif(1, 0, 1)
      if (unif.number < exp(-delta / Temperature)) {    # accept a worse move with prob exp(-delta/T)
        objective.function <- updated.objective.function; beta <- 0.5; count <- 0
      }
      if (unif.number >= exp(-delta / Temperature)) {   # otherwise revert the move
        count <- count + 1; select.location <- store.selection
      }
    }
    iteration <- iteration + 1
    Temperature <- Temperature * 0.95                   # geometric cooling
    Store.solution[iteration] <- objective.function
    Best.solution[iteration] <- min(Store.solution[1:iteration])
    Store.location[iteration, ] <- select.location
    Best.location <- Store.location[min(which(Store.solution == Best.solution[iteration])), ]
  }
  heuristic.solution[, i] <- Best.solution
  heuristic.location[, i] <- Best.location
}
References
Akyüz MH, Öncan T, Altınel IK (2012) Efficient approximate solution methods for the multi-commodity
capacitated multi-facility Weber problem. Comput Oper Res 39(2):225–237
Beasley JE (1990) OR library: distributing test problems by electronic mail. J Oper Res Soc 41(11):1067–
1072
Beasley JE (1993) Lagrangian heuristics for location problems. Eur J Oper Res 65:383–399
Brandeau ML, Chiu SS (1993) Sequential location and allocation: worst case performance and statistical
estimation. Locat Sci 1:289–298
Carling K, Han M, Håkansson J (2012) Does Euclidean distance work well when the p-median model is
applied in rural areas? Ann Oper Res 201(1):83–97
Carling K, Meng X (2014) Confidence in heuristic solutions? Working papers in transport,
tourism, information technology and microdata analysis. http://du.diva-portal.org/smash/record.jsf?
pid=diva2%3A727755&dswid=-6054
Chiyoshi FY, Galvão RD (2000) A statistical analysis of simulated annealing applied to the p-median
problem. Ann Oper Res 96:61–74
Cureton EE (1968) Unbiased estimation of the standard deviation. Am Stat 22(1):22
Dannenbring DG (1977) Procedures for estimating optimal solution values for large combinatorial problems.
Manag Sci 23(12):1273–1283
Daskin MS (1995) Network and discrete location: models, algorithms, and applications. Wiley, New York
Derigs U (1985) Using confidence limits for the global optimum in combinatorial optimization. Oper Res
33(5):1024–1049
Efron B (1979) Bootstrap methods: another look at the Jackknife. Ann Stat 7(1):1–26
Golden BL, Alt FB (1979) Interval estimation of a global optimum for large combinatorial problems. Naval
Res Logist Q 26(1):69–77
Hakimi SL (1964) Optimum locations of switching centers and the absolute centers and medians of a graph.
Oper Res 12(3):450–459
Hakimi SL (1965) Optimum distribution of switching centers in a communication network and some related
graph theoretic problems. Oper Res 13(3):462–475
Handler GY, Mirchandani PB (1979) Location on networks: theorem and algorithms. MIT Press, Cambridge
Han M, Håkansson J, Rebreyend P (2013) How do different densities in a network affect the optimal location
of service centers?. Working papers in transport, tourism, information technology and microdata
analysis 2013:15
Kotz S, Nadarajah S (2000) Extreme value distributions, theory and applications. Imperial College Press,
London
Levanova T, Loresh MA (2004) Algorithm of ant system and simulated annealing for the p-median problem.
Autom Remote Control 65:431–438
Luis M, Sahli S, Nagy G (2009) Region-rejection based heuristics for the capacitated multi-source Weber
problem. Comput Oper Res 36:2007–2017
McRoberts KL (1971) A search model for evaluating combinatorially explosive problems. Oper Res 19:1331–
1349
Nydick RL, Weiss HJ (1988) A computational evaluation of optimal solution value estimation procedures.
Comput Oper Res 5:427–440
Quenouille MH (1956) Notes on bias in estimation. Biometrika 43:353–360
Reese J (2006) Solution methods for the p-median problem: An annotated bibliography. Networks 48:125–
142
Robson DS, Whitlock JH (1964) Estimation of a truncation point. Biometrika 51:33–39
Wilson AD, King RE, Wilson JR (2004) Case study on statistically estimating minimum makespan for flow
line scheduling problems. Eur J Oper Res 155:439–454
PAPER II
This paper has been accepted by the Journal of Global Optimization. We acknowledge the
journal and Springer for this publication.
Confidence in heuristic solutions?
Authors: Kenneth Carling and Xiangli Meng
Abstract: Solutions to combinatorial optimization problems frequently
rely on heuristics to minimize an intractable objective function. The
optimum is sought iteratively and pre-setting the number of iterations
dominates in operations research applications, which implies that the
quality of the solution cannot be ascertained. Deterministic bounds offer a
means of ascertaining the quality, but such bounds are available for only a
limited number of heuristics and the length of the corresponding interval
may be difficult to control in an application. A small, almost dormant,
branch of the literature suggests using statistical principles to derive
statistical bounds for the optimum. We discuss alternative approaches to
derive statistical bounds. We also assess their performance by testing them
on 40 test p-median problems on facility location, taken from Beasley’s
OR-library, for which the optimum is known. We consider three popular
heuristics for solving such location problems: simulated annealing, vertex
substitution, and Lagrangian relaxation where only the last offers
deterministic bounds. Moreover, we illustrate statistical bounds in the
location of 71 regional delivery points of the Swedish Post. We find
statistical bounds reliable and much more efficient than deterministic
bounds provided that the heuristic solutions are sampled close to the
optimum. Statistical bounds are also found computationally affordable.
Key words: p-median problem, deterministic bounds, statistical bounds,
jackknife, discrete optimization, extreme value theory
* Kenneth Carling is a professor in Statistics and Xiangli Meng is a PhD-student in
Micro-data analysis at the School of Technology and Business Studies, Dalarna University,
SE-791 88 Falun, Sweden.
** Corresponding author. E-mail: xme@du.se. Phone: +46-23-778509.
1. Introduction
Consider the challenge of finding a solution to the minimum of an
intractable objective function of a combinatorial optimization problem.
This challenge arises frequently in operations research (OR) dealing with
issues like automation, facility location, routing, and scheduling. Usually a
good, but biased, solution (called heuristic solution) is sought by applying
one of the many iterative (heuristic) methods available. None of the
heuristics guarantees that the solution will coincide with the optimum and,
hence, many solutions to real world OR-problems are plagued with an
uncertainty about the quality of the solution.
It goes without saying that many heuristic users find this uncertainty
unappealing and try to take measures to reduce it. There are four relatively
widespread measures for doing so. The first is to carefully choose the
heuristic amongst them thoroughly tested on various problems of similar
kind (see e.g. Taillard, 1995), the second is to set the number of iterations
large or run until no improvements are encountered (see e.g. Levanova,
and Loresh, 2004), the third is to use methods that provide deterministic
bounds (see e.g. Beasley, J.E., 1993), and the fourth is to compute
statistical bounds (see e.g. Gonsalvez, Hall, Rhee, and Siferd, 1987). We
assume that the first measure (as well as reasonable data-reduction of the
problem) is taken by the user and we will not discuss it further. The second
measure is common practice, whereas the third measure occasionally is
taken. Deterministic bounds are informative on the quality of the solution,
but often at a high computational cost. The fourth measure is rarely taken
by heuristic users in spite of its potential powerfulness (Derigs, 1985).
One reason for heuristic users hesitating in taking the fourth measure is the
lack of systematic investigation of statistical bounds. The aim of this paper
is therefore to systematically investigate statistical bounds for the
fundamental location-allocation problem being the p-median problem. In
the investigation we vary heuristic method, sample size, estimator of the
bounds, the number of iterations (computing time), and the test problems
leading to variations in complexity. We present results on the reliability of
the statistical bounds and the length of the interval compared with the
length of intervals of deterministic bounds. The computer experiment is
accompanied with an illustrating problem concerning the location of
regional delivery points of the Swedish Post.
This paper is organized as follows: section two presents statistical
optimum estimation techniques. In the third section we present the p-median problem and discuss three solution methods used in the
investigation. In section four details about the experiments are given
concerning the implementation of the solution methods, factors varied in
the experiment and factors kept constant, as well as outcome variables of
interest. The fifth section gives the results and the sixth section illustrates
the use of statistical bounds in a practical problem of locating 71 delivery
points of the Swedish Post in a region. The seventh section concludes the
paper.
2. Statistical optimum estimation techniques
First some notations used throughout the paper: 𝑧𝑘 = the k:th feasible
solution to the combinatorial problem; 𝐴 = the set of all feasible solutions,
𝐴 = {𝑧1 , 𝑧2 , … , 𝑧𝐾 }; 𝑔(𝑧𝑘 ) = the value of the objective function of solution
𝑧𝑘 (in the following briefly referred to as solution); 𝜃 = min𝐴 𝑔(𝑧𝑘 ). The
challenge is to identify 𝜃 and the corresponding solution 𝑧𝑘 . The
abbreviation SOET (statistical optimum estimation techniques) was
recently introduced by Giddings, Rardin, and Uzsoy (2014) and it refers to
techniques both for point and interval estimation of 𝜃.
In short, the statistical approach is to estimate the value of the minimum
based on a sample of heuristic solutions and put bounds (or confidence
limits) around it. Golden and Alt (1979) did pioneering work on statistical
bounds followed by others in the 1980:s, but thereafter the statistical
approach has been dormant. In fact, Akyüz, Öncan, and Altınel (2012)
state that the statistical approach had not been used in location problems
since 1993 to the best of their knowledge. Nor has our literature review
found any such application.
There are four approaches to estimating statistically the optimum
according to the review article of Giddings et al (2014). The dominating
one is based on Extreme Value Theory (EVT) and another is the
truncation-point approach which we will refer to as the Weibull (W)
estimator and the Jackknife (JK) estimator, respectively. The Weibull
estimator is usually the only one based on the EVT approach.
Recently, Carling and Meng (2014) propose the Gumbel estimator, also based
on the EVT approach, but it is outperformed by the Weibull estimator. Giddings
et al (2014) also mention the limiting-distribution approaches as well as
the multinomial approaches. Neither of them is used nor seems promising
and we will disregard them.
As a point of departure in discussing the SOET:s, we note that even a very
large random sample of the random quantity 𝑔(𝑧𝑘 ) is almost useless for
identifying 𝜃. The crux is that a sample of the gigantic size of 109 will be
hard pressed to contain a feasible solution close to 𝜃 even for
combinatorial optimization problems of modest size (Meng and Carling,
2014). Fortunately, as several authors have pointed out, if the starting
values are picked at random, repeated heuristic solutions mimic a random
sample in the tail (see e.g. McRoberts, 1971, and Golden and Alt, 1979).
Thereby random values in the tail can be obtained at a much less
computational effort and used for estimating 𝜃. To discuss the estimation
of 𝜃 and statistical bounds for it we need additional notation: 𝜃̂ = a point
estimator of 𝜃; 𝑛 = the number of replicates of the heuristic algorithm with
random starting values. The attained minimum value of the objective
function in the ith replicate will be denoted 𝑥̃𝑖, and referred to as a heuristic solution.
The Weibull point estimator of 𝜃 is 𝜃̂𝑊 = 𝑥̃(1) where 𝑥̃(1) =
min(𝑥̃1 , 𝑥̃2 , … , 𝑥̃𝑛 ) which is the best (smallest) heuristic solution in all the
n replicates. The JK-estimator is introduced by Quenouille (1956):
𝜃̂𝐽𝐾 = ∑_{i=1}^{M+1} (−1)^(i−1) (M+1 choose i) 𝑥̃(𝑖)
where M is the order. Dannenbring (1977) and Nydick and Weiss (1988)
suggest using the first order, i.e. M = 1, for point estimating the
minimum. The first order JK-estimator is more biased than higher order
ones, but its mean square error is lower compared with higher orders as
shown by Robson and Whitlock (1964). Carling and Meng (2014)
however consider both the first and the second order JK-estimator and
note that the second order JK-estimator performs quite well. The JK point
estimators are 𝜃̂𝐽𝐾^(1) = 2𝑥̃(1) − 𝑥̃(2) and 𝜃̂𝐽𝐾^(2) = 3𝑥̃(1) − 3𝑥̃(2) + 𝑥̃(3),
respectively.
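For instance, with hypothetical heuristic solutions 𝑥̃(1) = 100, 𝑥̃(2) = 103 and 𝑥̃(3) = 108, the first order estimate is 2(100) − 103 = 97 and the second order estimate is 3(100) − 3(103) + 108 = 99, both extrapolating below the best solution found.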
As for interval estimation, the upper bounds of the Weibull and the
Jackknife estimators are the same, i.e., 𝑥̃(1) . However their lower bounds
differ. The Weibull lower bound with a confidence of 100(1 − 𝑒 −𝑛 )% is
[𝑥̃(1) − 𝑏̂] where 𝑏̂ is the estimated shape parameter of the Weibull
distribution (Wilson, King, and Wilson, 2004). There are several ways of
estimating the shape parameter, including the maximum likelihood
estimation technique. We and others (e.g. Derigs, 1985) have found the
following simple estimator to be fast, stable, and giving good results:
𝑏̂ = 𝑥̃([0.63(𝑛+1)]) − (𝑥̃(1)𝑥̃(𝑛) − 𝑥̃(2)²)/(𝑥̃(1) + 𝑥̃(𝑛) − 2𝑥̃(2)), where
[0.63(𝑛 + 1)] means the integer part of the value of the function.
The Jackknife estimator, and extensions of it (Dannenbring, 1977), have
only served the purpose of point estimating 𝜃. However, Meng and
Carling (2014) suggest a lower bound computed by means of
bootstrapping the point estimator (Efron, 1979). The lower bound is
[𝜃̂𝐽𝐾 − 3𝜎 ∗ (𝜃̂𝐽𝐾 )] where 𝜎 ∗ (𝜃̂𝐽𝐾 ) is the standard deviation of 𝜃̂𝐽𝐾 obtained
from bootstrapping the n heuristic solutions (1,000 bootstrap samples were
sufficient for estimation of 𝜎 ∗ (𝜃̂𝐽𝐾 )). By multiplying 𝜎 ∗ (𝜃̂𝐽𝐾 ) by 3, the
confidence is 99.9% provided that the sampling distribution of the JK-estimator is approximately Normal.
Giddings et al (2014) provide a critique of the SOET:s which deserves to
be re-stated briefly. Firstly, all the estimators presume a continuous
distribution of the heuristic solutions for being theoretically justified.
However, the discrete, feasible solutions amount to a finite K and a strong
heuristic method might produce a sample of clustered, discrete heuristic
solutions. Thus, this assumption is, strictly speaking, false.1
1 We believe less complex problems to be more amenable to discreteness and
consequent improper statistical bounds, whereas highly complex problems have a large
number of service and demand points rendering the parent distribution almost
continuous. Exact solutions are usually feasible for non-complex problems, while
deterministic or statistical bounds are critical for complex problems.
Secondly, clustering of heuristic solutions suggests a violation of the independence
assumption upon which the probability statement of the interval hinges.
Although several authors (see e.g. Wilson et al, 2004) have proposed
goodness-of-fit tests for checking the assumed Weibull distribution and the
assumed independence, one cannot expect the power of the tests to be high
from a small sample in the extreme tail of the distribution. A failure of
rejecting the null hypotheses of the Weibull distribution and independence
is perhaps a stronger indication of low power of the test than an indication
of the correctness of the assumptions.
A conclusion from the critique of Giddings et al. (2014) is that the theoretically perceived statistical properties of the SOETs are possibly misleading and that their applicability is specific to the heuristic and the combinatorial problem, due to the improper assumptions. As a consequence, the theoretical properties need to be checked empirically for various problems and heuristic methods. The contribution of this paper is that we check empirically the theoretical properties of SOETs for the p-median problem when it is solved by two of its most used solution methods.
3. The p-median problem and heuristic methods
Location theory is an important part of operations research and it is
concerned with the issue of locating facilities and assigning demand points
to them in some desired way. Continuous problems like the original Weber problem deal with location in the plane, whereas discrete problems deal with location on networks with vertices (or nodes) connected by edges.
The p-median problem is the most important of the four primary discrete
location problems, with the other three being the p-center, the
uncapacitated facility location, and the quadratic assignment problems
(Reese, 2006). A nice virtue of operations research is the vast availability
of test problems for which the optimum either is known or consistently is
updated as improved solutions emerge. Consequently, the SOETs can be
checked with respect to known optima. We focus on the p-median problem
because of its fundamental role in location theory and the large number of
existing test problems in the OR-library (Beasley, 1990).
The problem is to locate P facilities to serve a demand geographically distributed over Q demand points, such that the weighted average (or total) distance from the demand points to their nearest service center is minimized.2

Footnote 2: The p-median problem is NP-hard (Kariv and Hakimi, 1979). In fact, it is also NP-complete and therefore it is to be expected that solving a p-median problem is intractable (Garey and Johnson, 2000).

Hakimi (1964) considered the task of locating telephone switching centers and showed later (Hakimi, 1965) that, in a network, the optimal solution of the p-median model exists at the nodes of the network. If V is the number of nodes, then there are $K = \binom{V}{p}$ feasible solutions for a p-median problem. For the largest test problem we consider in the computer experiment, 𝐾 ≈ 2.5 ∗ 10^164.
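For concreteness, a small R sketch of the (optionally weighted) p-median objective, assuming D is a Q × V matrix of distances from each demand point to each candidate node, w a vector of demand weights, and z the indices of the p chosen facility nodes; names are ours, and the Appendix code evaluates the unweighted version of the same quantity:

# Sketch: weighted p-median objective for a candidate solution z.
pmedian.objective <- function(D, z, w = rep(1, nrow(D))) {
  nearest <- apply(D[, z, drop = FALSE], 1, min)  # distance to nearest chosen facility
  sum(w * nearest)                                # weighted total distance over demand points
}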
As enumerating all feasible solutions becomes impossible as the problem size grows, much research has been devoted to efficient methods for solving the p-median model (see Handler and Mirchandani, 1979, and Daskin, 1995, as examples). Reese (2006) reviews the literature on solution methods3 to the p-median problem. Lagrangian relaxation (LR) is the most used method and, since it gives deterministic bounds, we will use it as a benchmark for the SOETs (Beasley, 1993).
In the class of heuristic methods, vertex substitution (VS) is the most
frequent. VS starts with a (random) 𝑧𝑘 , say, and seeks a local improvement
by examining all local adjacent nodes for the first facility, then the second
and so on, until it reaches the last facility, upon which an iteration is
completed. After reaching the last facility, it returns to the first facility and
repeats the procedure until no further improvement is encountered. The
algorithm has no inherent randomness as it updates the solution according
to a deterministic scheme. Randomness in the heuristic solution comes
from either selecting the starting nodes, 𝑧𝑘 , at random or by randomizing
the order of examination of the nodes in 𝑧𝑘 .
Another class (the largest) of solution methods to the p-median problem is metaheuristics, dominated by genetic algorithms4 and simulated annealing (SA) (Reese, 2006). SA does not apply a deterministic scheme in its search for a good heuristic solution. Hence, randomness in the heuristic solution comes both from random starting points and from inherent randomness in the algorithm. SA starts with a randomly picked 𝑧𝑘, selects at random one facility in that 𝑧𝑘, and evaluates a move of it to another node picked at random. The facility is moved if the move implies an improvement, but it may also be moved with a small probability in spite of no improvement. One random pick, including the evaluation, constitutes an iteration. SA runs until a pre-specified (usually large) number of iterations has been performed.

Footnote 3: Reese (2006) uses the term solution methods to indicate any approach to finding (approximately) the optimum. Heuristics and meta-heuristics are a part of all solution methods in his sense; however, we will use solution method and heuristic interchangeably in what follows.

Footnote 4: See for instance Michalewicz and Janikow (1991).
We limit the study to VS, SA, and LR. There are of course details regarding the implementation of the algorithms, and we therefore implement all of them in the statistical language R and provide the code in the Appendix in the interest of making our study replicable.5 The virtue of SA is that the algorithm gradually iterates towards the optimum, since the inherent randomness implies that eventually all feasible solutions will be evaluated. Hence, by increasing the number of iterations, better solutions are to be expected. VS and the primal iterates of LR are more susceptible to being trapped in local minima, thereby never approaching the optimum however large the number of iterations. Such a fundamental difference between the characteristics of SA and VS may very well have implications for the functioning of the SOETs and for the comparison with LR.

Footnote 5: www.r-project.org.
4. The computational experiment
The first factor varied in the computational experiment is the estimator.
The point and the interval Weibull-estimator are defined in Section 2
except for the confidence level. The confidence level is kept at 𝛼 = 0.9987 throughout the experiments, which means that the lower bound is calculated as [𝑥̃₍₁₎ − 𝑏̂/𝑠], where 𝑠 = (−𝑛/ln 𝛼)^(1/𝑎̂) and 𝑎̂ = (𝑥̃₍₁₎𝑥̃₍ₙ₎ − 𝑥̃₍₂₎²)/(𝑥̃₍₁₎ + 𝑥̃₍ₙ₎ − 2𝑥̃₍₂₎) is an estimator of the shape parameter of the Weibull distribution (Giddings et al., 2014). The
first and the second Jackknife estimators are also defined in Section 2
where 𝜎 ∗ (𝜃̂𝐽𝐾 ) is calculated from 1,000 bootstrap samples of the n
heuristic solutions.
The second factor is the complexity of the p-median test problems, in this context usually understood as the number of feasible solutions. Carling and Meng (2014) propose a different definition of complexity. They show that the normal distribution gives a close fit to the distribution of 𝑔(𝑧𝑘) for most of the 40 test problems and define complexity as the distance between the optimum and the estimated center of 𝑔(𝑧𝑘), measured in standard deviations. With this definition of complexity, the test problems range from 2.97 to 14.93.
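A minimal R sketch of this complexity measure, assuming values holds a large random sample of objective-function values 𝑔(𝑧𝑘) and theta the optimum (or an estimate of it); names are ours:

# Sketch: complexity = distance between the optimum and the center of g(z_k),
# expressed in standard deviations of g(z_k).
complexity <- function(values, theta) {
  (mean(values) - theta) / sd(values)
}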
The third factor is the variation in heuristic methods. The implementations
of these methods are explicitly given by the R-code provided in the
Appendix. Our implementation of the solution methods follows closely
Densham and Rushton (1992) (VS), Al-Khedhairi (2008) (SA), and
Daskin (1995) (LR)6.
In the early literature on statistical bounds, little was said about the size of n, whereas Akyüz et al. (2012) and Luis, Sahli, and Nagy (2009) advocate n to be at least 100. However, Brandeau and Chiu (1993) as well as Carling and Meng (2014) find 𝑛 = 10 to work well. As the fourth factor varied in the computational experiments, we examine 𝑛 = 10 and 25. However,
running the heuristics SA and VS is computationally costly and repeating
them is of course even more costly. For this reason we run the algorithms
100 times per test problem to obtain 100 heuristic solutions. Thereafter,
heuristic solutions are sampled with replacement from the set of 100
solutions. LR is run only once per experimental combination as its initial
values are deterministically determined by the distance matrix of the test
problem.
The fifth, and last, factor is the computing time allotted to the solution methods. The time per iteration varies with the complexity of the test problems and it is therefore reasonable to assign more computing time to the more difficult problems. We run the algorithms for 2 ∗ (𝑉/100), 20 ∗ (𝑉/100), and 60 ∗ (𝑉/100) seconds per replicate where, again, 𝑉 is the number of nodes of the test problem and computing time refers to the CPU time of one processor on an Intel i5-2500 at 3.30 GHz. The number of nodes varies from 100 to 900 in the test problems, so the computing time of the algorithms varies between 2 seconds and 9 minutes.
Footnote 6: Han (2013) implemented LR for the test problems and, by pre-testing, found the mean of the columns of the distance matrix divided by eight to yield good starting values for the algorithm. In our implementation of the algorithm, we follow Han's approach.

We present results on the reliability of the SOETs as the relative bias (𝜃̂ − 𝜃)/𝜃, the proportion of intervals whose lower and upper bounds contain the optimum (hereafter referred to as coverage), the length of the intervals, and the proportion of intervals that are shorter than the intervals defined by the deterministic bounds.
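As an illustration, these outcome measures can be computed as in the following R sketch, assuming vectors theta.hat, lower and upper (one element per experimental combination), the corresponding optima theta, and the lengths det.length of the deterministic intervals; all names are ours:

# Sketch: outcome measures used to summarize the experiment.
relative.bias <- (theta.hat - theta) / theta                   # relative bias of the point estimates
coverage      <- mean(lower <= theta & theta <= upper)         # proportion of intervals covering theta
int.length    <- upper - lower                                 # lengths of the statistical intervals
prop.shorter  <- mean(int.length < det.length)                 # proportion shorter than the LR intervals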
5. Results
Our computer experiment is a full factorial experiment of 3 × 40 × 2 ×
2 × 3 (estimator, complexity, heuristic, n, and computing time) resulting
in 1440 experimental combinations for which bias, coverage, length of
interval, and the proportion of intervals shorter than the length of the
interval defined by the deterministic bounds are outcome variables. 7 We
begin by checking for which experimental combinations it is meaningful
to compute statistical bounds (and refer to them as used combinations in
the following). Thereafter we check the bias of the three estimators in
point estimating the optimum. Finally, we give results on the coverage of
the intervals as well as their length.
5.1. Statistical bounds and the quality of heuristic solutions
Monroe and Sielken (1984) observe that the heuristic solutions need to be
close to the optimum for the computing of statistical bounds to be
meaningful. Carling and Meng (2014) suggest the statistic SR, given by the ratio $1000\,\sigma(\tilde{x}_i)/\hat{\theta}_{JK}^{(1)}$, as a measure of concordance and as a check that the heuristic solutions are in the tail near 𝜃. Figure 1 shows how coverage decreases with SR (for the roughly 75 per cent of experimental combinations with SR below 15). For SR above 15 the coverage approaches zero (not shown). Carling and Meng (2014) found SR = 4 to be an operational threshold in the case of SA, but the threshold also seems to apply to vertex substitution in ensuring that the statistical bounds correspond reasonably well to the stipulated confidence level.
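A minimal R sketch of SR and the threshold check, reusing jk.point from Section 2 and assuming x holds the n heuristic solutions:

# Sketch: the concordance statistic SR and the operational threshold SR <= 4.
SR <- 1000 * sd(x) / jk.point(x, M = 1)
bounds.meaningful <- (SR <= 4)   # compute statistical bounds only when this holds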
As a consequence of a large value of SR implying improper statistical bounds, we only examine the outcome of the treatment combinations for which 𝑆𝑅 ≤ 4. There are 480 experimental combinations per estimator,7 of which 247 are kept for analysis. Combinations of short computing times and high complexity, especially for vertex substitution, tend to fail the threshold check. Figure 2 illustrates the non-linear relationship, with LOWESS curves, between SR (in logarithms) and complexity for the three levels of computing time (Cleveland, 1979). Of the 247 combinations used in the analysis, 83 are vertex substitution (5, 38, and 40 with short, intermediate, and long computing time, respectively) and 164 are simulated annealing (35, 57, and 72 with short, intermediate, and long computing time, respectively).

Footnote 7: The complete outcome matrix may be requested from the authors by contacting xme@du.se.
The partial failure of vertex substitution to pass the threshold check is
typically a consequence of the algorithm managing only a few iterations in
the allotted computing time. For instance, the average number of iterations
was 2.6 for the combinations possessing short computing time. We
considered extending the allotted computing time in the experiment, but
discarded this option as the allotted computing time generally was
sufficient for both SA and LR.
Figure 1: Coverage as a function of SR. The Jackknife estimator is denoted by the solid line (1st order) and the dashed line (2nd order), and the Weibull estimator is denoted by the dotted line.
One factor in the experiment is the heuristic. We found, by formal testing
with Analysis of Variance (ANOVA), this factor to be unrelated to the
outcomes of the experiment for used combinations. Outcomes of SA and
VS are therefore pooled in the following when results related to the used
combinations are presented.
Figure 2: Experimental combinations removed from the analysis (SR = 4, solid reference line); ln(1+SR) against complexity. Short computing time (solid line, asterisks), intermediate computing time (dashed line, squares), and long computing time (dotted line, pluses).
5.2. The bias of point-estimation
Before studying the properties of the statistical bounds, we first examine
the estimators’ ability to point-estimate the minimum. Table 1 gives the
bias of the estimators focusing on the case 𝑛 = 25. Theoretically the 2nd
order Jackknife estimator has the smallest bias followed by the 1st order
Jackknife estimator, while the Weibull point-estimate is simply equal to
𝑥̃(1) and thereby always larger than (or possibly equal to) the optimum. In
practice, the three estimators are comparable in terms of bias. For a given
computing time, solutions obtained by SA are much closer to the optimum
compared with VS, although solutions are consistently improving with time
for both methods. LR, on the other hand, finds a decent solution after a
short computing time, and trivially improves with added computing time.
Looking specifically at the combinations for which 𝑆𝑅 ≤ 4, the bias is small as the heuristic solutions are clustered near the optimum. The bias of LR is positive, particularly for the used combinations with a long computing time. These combinations have a high proportion of the most complex test problems. Although not shown in the table, the general conclusions regarding bias also apply to the case 𝑛 = 10.
Table 1: Average relative bias (%) in the point estimation of 𝜃. 𝑛 = 25.

Method                 Time     E-C a)   JK 1st   JK 2nd   Weibull   LR c)
SA                     Short    40       1.7      1.6      1.8
                       Interm.  40       0.4      0.4      0.4
                       Long     40       0.1      0.1      0.2
VS                     Short    40       22.2     21.8     23.4
                       Interm.  40       16.0     15.7     16.7
                       Long     40       13.0     12.8     13.5
LR                     Short    40                                    3.4
                       Interm.  40                                    3.3
                       Long     40                                    3.3
Used combinations b)   Short    19       -0.00    -0.00    0.01       0.31
                       Interm.  47       0.02     0.02     0.03       1.18
                       Long     56       0.04     0.04     0.05       1.74

Note: a) Number of experimental combinations. b) SA and VS pooled due to similarities. c) Results based on one replicate due to the deterministic outcome of the algorithm.
5.3. Coverage and length of intervals
Ideally, the estimators should provide statistical bounds (or intervals) that
contain the unknown parameter of interest, 𝜃 at the pre-set confidence
level. If the condition is fulfilled, then a short interval is desirable. We
begin by examining the coverage of 𝜃 by the intervals.
Table 2 gives the coverage of the three estimators for the used
experimental combinations. There are two things to note in the table in
addition to the fact that the coverage is near to 100 per cent. The first is
that the coverage is similar for 𝑛 = 10 and 𝑛 = 25. The second is that the
Weibull estimator almost always covers the optimum while the Jackknife
estimators fail occasionally. From these findings we conclude that the Weibull estimator is preferable, and we offer further evidence for the finding of Brandeau and Chiu (1993) that a large number of replicates is redundant.
Table 2: Proportion (%) of intervals containing 𝜃.

Replicates   Time     E-C a)   JK 1st   JK 2nd   Weibull
n=10         Short    21       98.8     99.6     100.0
             Interm.  48       98.1     99.2     99.8
             Long     56       96.4     98.7     99.7
n=25         Short    19       99.3     99.8     100.0
             Interm.  47       98.3     99.4     100.0
             Long     56       95.6     97.9     100.0

Note: a) Experimental combinations. SA and VS pooled due to similarities.
The 1st order Jackknife estimator gives intervals about 60 per cent as long as those of the Weibull estimator, whereas the 2nd order Jackknife estimator and the Weibull estimator are comparable with regard to the length of the intervals. However, we require the estimator to give intervals with a
factual coverage corresponding to the asserted confidence level, a
condition the 1st order Jackknife estimator fails to meet.
Table 3: A comparison of statistical (Weibull estimator) and deterministic bounds (LR). Counts of experimental combinations.

                UBW < UBLR   UBW = UBLR   UBW > UBLR   Total
LBW > LBLR      92           91           9            192
LBW < LBLR      48           1            6            55
Total           140          92           15           247
To assess the precision offered by the Weibull intervals, it is useful to compare them with the deterministic bounds given by LR. Table 3 gives a comparison of the upper bounds (UB) and lower bounds (LB) as obtained by the Weibull estimator and LR for the 247 used experimental combinations. In 6 + 1 of the combinations the intervals of the deterministic bounds are contained in the intervals8 of the Weibull statistical bounds, whereas the opposite holds in 92 + 91 of the combinations. The latter happens as a consequence of the Weibull point estimator being a slightly better predictor of the optimum than the upper bound of LR, but more importantly because the statistical lower bound is often closer to the optimum than the deterministic lower bound. The latter finding conforms to what Brandeau and Chiu (1993) reported.

Footnote 8: We computed the average of the Weibull upper and lower bounds based on the 1,000 bootstrap samples.
The fact that the statistical intervals are contained in the deterministic
intervals does not imply that the difference in length is of practical
significance. We therefore compute the relative length of the (average) statistical interval to the deterministic interval.
Figure 3 shows the relative length of the interval as a function of
complexity, as the other factors were found to have no impact. Imposed is
a resistant line, in order to accommodate some extreme outliers, depicting
the relationship between relative length and complexity (Velleman, 1980).
For all levels of complexity, the median relative length is 0.02, while the average is 0.25, the latter being strongly affected by outliers coming from combinations with short computing time.
Figure 3: Relative length of the statistical intervals to the deterministic intervals as a function of the complexity. Resistant line imposed due to a few extreme outliers.
To conclude the results section, reliable statistical bounds require heuristic solutions of good quality (i.e. 𝑆𝑅 ≤ 4). Given high quality, the Weibull estimator gives proper intervals that almost always cover the optimum and that are substantially tighter than the deterministic intervals. The required
number of replicates is as modest as 10. Hence, statistical bounds are
reliable, efficient and computationally affordable.
6. An illustrative case
In this section, we illustrate statistical bounds applied to a practical
location problem concerning allocating several distribution centers of the
Swedish Post in one region in Sweden. In this real location problem the
minimum is unknown. The problem is to locate the 71 distribution centers of the Swedish Post at nodes chosen among the 6,735 candidate nodes in the network of Dalarna in mid-Sweden. The landscape of the region and its population of 277,725 inhabitants, distributed over 15,729 demand points, are described by Carling, Han, and Håkansson (2012).
The objective is to minimize the sum over all the demand points of the
distance between the demand point and the nearest postal center. The
minimization is done over all possible locations of postal centers on the
6,735 candidate nodes in the network. Han, Håkansson, and Rebreyend (2013) provide a detailed description of the road network and argue that network distance, rather than Euclidean distance, should be used as the distance measure. Hence, we measure distance in meters on the road network.
Figure 4: Empirical distribution of objective function for the Swedish Post
problem.
To appreciate the complexity of this illustrative problem, we provide the
distribution of the objective function. As Carling and Meng (2014) did for
the OR-lib problems, we draw a random sample of 1 million solutions and show the empirical distribution of 𝑔(𝑧𝑘) in Figure 4. The distribution is slightly skewed to the right, but still approximately normal. To evaluate the complexity of the problem, we use $\hat{\theta}_{JK}^{(2)}$ as the estimate of the minimum 𝜃, and the mean and variance of 𝑔(𝑧𝑘) are derived from the random sample of 1 million solutions. The complexity is 5.47 and the case is therefore comparable to the median of the 40 OR-lib problems.
Drawing on the experimental results above, we set 𝑛 = 10 and run the SA
heuristic algorithm until SR reaches 4 or less. Furthermore, we focus on
the statistical bounds of the Weibull estimator. We checked SR in steps of
5 minutes both for SA and LR. Figure 5 shows the evolution of LR’s
smallest upper and largest lower bounds as a function of time. Within 90
minutes, the lower bound of LR has stabilized at 2,534 whereas the upper
bound reaches 3,275 after 125 minutes. When we decided to stop running the algorithms at 200 minutes, the LR solution was 3,275 meters as the inhabitants' average distance to their respective closest postal centers.
However, the LR-solution is imprecise as the gap between the bounds is
741 meters. The Jackknife 1st order point-estimate of the optimum is 2,973
as evaluated after 200 minutes, and this estimate is imposed in Figure 5 as
a reference line to the unknown optimum.
Figure 5: The solutions to the Swedish Post problem by computing time. Upper and lower bounds in long and short dashed lines (LR) and short dashed lines (SA). The embedded graph is a magnification. Jackknife (1st order) point-estimate as solid reference line.
The SR approaches 4 gradually, but slowly. It took 150 minutes of
computing time of SA for the ratio to come down to 4, and the statistical
upper and lower bounds are thereafter computed and depicted in Figure 5.
To show clearly the evolution of the statistical bounds at 150 minutes and
onwards, a magnifying graph is embedded in Figure 5. Up to termination
of the algorithm, the gap between the upper and the lower statistical bound
is decreasing and reaches 16 meters. Since 16 meters is a tolerable error
for this application, there is no need to continue the search for a better
solution, and we content ourselves with a solution of 2,975 meters with a lower
bound of 2,959 meters. We have also computed the statistical bounds
using the JK-estimators with results differing only by a few meters.
7. Concluding discussion
We have considered the problem of determining when a solution provided
by a heuristic is close to optimal. Deterministic bounds may sometimes be applicable and tight enough to shed light on the problem. We have, however, studied statistical bounds, which are potentially of more general applicability. We have examined the Weibull estimator as well as two
variants on the Jackknife estimator. Furthermore, we have varied the
number of replicates, the allotted computing time of the heuristic
algorithms, the complexity of the combinatorial problem as well as the
heuristic algorithms. We find statistical bounds to be reliable and much
more efficient than deterministic bounds provided that the heuristic
solutions are sampled close to the optimum, an issue further addressed
below. Furthermore, statistical bounds may be computed based on a small
number of replicates (𝑛 = 10), implying a modest computational cost upon exploiting parallel computing.
We have, however, restricted the experiment to one type of combinatorial
optimization problem, namely the p-median problem being the most
common location problem in the OR-literature. Derigs (1985) made a
number of concluding observations upon studying statistical bounds with
regard to the Travelling Salesman Problem (TSP) and Quadratic
Assignment Problem (QAP). It appears that most of the conclusions are
valid, except one. Derigs stated that “The Weibull approach leads to a proper approach”. We have, however, demonstrated that none of the estimators, including the Weibull, is reliable unless the sample of heuristic solutions used for deriving the bounds is of high quality. To assess the
quality we suggest using SR which is the standard deviation of the n
solutions divided by the Jackknife point-estimator. The experiments
suggest that SR exceeding 4 causes unreliable bounds. Nevertheless, a
systematic study of statistical bounds on various classes of combinatorial
optimization problems is warranted before advocating a general usage of
statistical bounds in the estimation of optima.
Figure 6: Skewness (left panel) and relative (%) bias of the average solution (right panel) as a function of SR. Combinations with SR equal to zero are removed and only the combinations for which 𝑛 = 25 are shown.
The empirically discovered threshold of SR less than four may seem mysterious. We have noted, however, that the distribution of the sample of solutions tends to be normal until the solutions come close to the optimum. The left panel of Figure 6 shows the skewness of the sample of solutions as a function of SR. For high values of SR, skewness is typically about zero and kurtosis is about three. Once the sample of solutions comes close to the optimum, the skewness (and kurtosis) increases (i.e. the sample of solutions becomes right-skewed) while the variation in the solutions decreases (i.e. SR becomes smaller). The right panel of Figure 6 shows the relative bias of the average solution of the sample. For large SR, the bias is also large. However, at the threshold the bias is only about one per cent. Hence, reliable statistical bounds seem to require a solution method yielding solutions within one per cent deviation from the optimum. The bias might be a better statistic than SR for deciding the reliability of the statistical bounds, but it requires, of course, knowing the (unknown) optimum.
An inherent difficulty in executing unbiased computer experiments on
heuristic algorithms is the issue of their implementation. We have tried to
render the comparison fair by running the algorithms on the same
computers for the same computing time in the same R environment. We
have also supplied the code which means that it is straightforward to
replicate the experiment under alternative implementations including other
parameter settings of the algorithms. For the current implementation of the
algorithms, the wide gap between the upper and lower bounds of LR did not seem to tighten with computing time; on the contrary, the bounds of the algorithm stabilized quickly in most experimental combinations. Vertex
substitution was less successful than SA in fulfilling the requirement of SR
below 4, but in the experimental combinations when both methods
fulfilled the requirement the statistical bounds were very similar.
Acknowledgement
We are grateful to Mengjie Han, Johan Håkansson, Daniel Wikström, and
two anonymous reviewers for comments on previous versions of the
paper. Financial support from the Swedish Retail and Wholesale
Development Council is gratefully acknowledged.
References
Akyüz, M.H., Öncan, T., Altınel, I.K., (2012). Efficient approximate solution
methods for the multi-commodity capacitated multi-facility Weber problem.
Computers & Operations Research 39:2, 225-237.
Al-Khedhairi, A., (2008). Simulated annealing metaheuristic for solving p-median problem. International Journal of Contemporary Mathematical Sciences, 3:28, 1357-1365.
Beasley, J.E., (1990). OR library: Distributing test problems by electronic mail,
Journal of Operational Research Society, 41:11, 1067-1072.
Beasley, J.E., (1993). Lagrangian heuristics for location problems. European
Journal of Operational Research, 65, 383-399.
Brandeau, M.L., Chiu, S.S., (1993). Sequential location and allocation: worst
case performance and statistical estimation. Location Science, 1:4, 289-298.
Carling, K., Han, M., Håkansson, J., (2012). Does Euclidean distance work
well when the p-median model is applied in rural areas? Annals of
Operations Research, 201:1, 83-97.
Carling, K., Meng, X., (2014). On statistical bounds of heuristic solutions to
location problems. Working papers in transport, tourism, information
technology and microdata analysis, 2014:10.
Cleveland, W.S., (1979). Robust Locally Weighted Regression and Smoothing
Scatterplots, Journal of the American Statistical Association, 74, 829–836.
Dannenbring, D.G., (1977). Procedures for estimating optimal solution values for large combinatorial problems, Management Science, 23:12, 1273-1283.
Daskin, M.S., (1995). Network and discrete location: models, algorithms, and
applications. New York: Wiley.
Densham, P.J, Rushton, G., (1992). A more efficient heuristic for solving
large p-median problems. Papers in Regional Science, 71, 307-329.
Derigs, U, (1985). Using confidence limits for the global optimum in
combinatorial optimization. Operations research, 33:5, 1024-1049.
Efron, B., (1979). Bootstrap methods: Another look at the Jackknife. Annals
of statistics, 7:1, 1-26.
Garey, M.R., Johnson, D.S, (2002). Computers and intractability, 29; W.H.
Freeman, New York.
Giddings, A.P., Rardin, R.L, Uzsoy, R, (2014). Statistical optimum estimation
techniques for combinatorial problems: a review and critique. Journal of
Heuristics, 20, 329-358.
Golden, B.L., Alt, F.B., (1979). Interval estimation of a global optimum for
large combinatorial optimization. Operations Research, 33:5, 1024-1049.
Gonsalvez, D.J., Hall, N.G., Rhee, W.T., Siferd, S.P., (1987). Heuristic
solutions and confidence intervals for the multicovering problem, European
Journal of Operational Research, 31:1, 94-101.
Hakimi, S.L., (1964). Optimum locations of switching centers and the
absolute centers and medians of a graph. Operations Research, 12:3, 450-459.
Hakimi, S.L., (1965). Optimum Distribution of Switching Centers in a
Communication Network and Some Related Graph Theoretic Problems.
Operations Research, 13:3, 462-475.
Han, M., (2013). Heuristic Optimization of the p-median Problem and
Population Re-distribution. Dalarna Doctoral Dissertations, 2013:01.
Han, M., Håkansson, J., Rebreyend, P., (2013). How do different densities in a
network affect the optimal location of service centers?. Working papers in
transport, tourism, information technology and microdata analysis, 2013:15.
Handler, G.Y., Mirchandani, P.B., (1979). Location on networks: Theorem
and algorithms. MIT Press, Cambridge, MA.
Kariv, O., Hakimi, S.L., (1979). An algorithmic approach to network location
problems. part 2: The p-median, SIAM Journal of Applied Mathematics, 37,
539-560.
Levanova, T., and Loresh, M.A., (2004). Algorithm of ant system and
simulated annealing for the p-median problem, Automation and Remote
Control, 65, 431-438.
Luis, M., Sahli, S., Nagy, G., (2009). Region-rejection based heuristics for the
capacitated multi-source Weber problem. Computers & Operations Research,
36, 2007-2017.
McRoberts, K.L., (1971). A search model for evaluating combinatorially
explosive problems. Operations Research, 19, 1331-1349.
Meng, X., Carling, K., (2014). How to Decide Upon Stopping a Heuristic
Algorithm in Facility-Location Problems?. In Web Information Systems
Engineering–WISE 2013 Workshops, Lecture Notes in Computer Science,
8182, 280-283, Springer, Berlin/Heidelberg.
Michalewicz, Z., Janikow, C.Z. (1991). Genetic algorithms for numerical
optimization. Statistics and Computing, 1, 75-91.
Monroe, H.M., Sielken, R.L., (1984). Confidence limits for global optima
based on heuristic solutions to difficult optimization problems: a simulation
study. American Journal of Mathematical and Management Sciences, 4, 139-167.
Nydick, R.L., Weiss, H.J., (1988). A computational evaluation of optimal
solution value estimation procedures. Computers & Operations Research, 5,
427-440.
Quenouille, M.H., (1956). Notes on bias in estimation. Biometrika, 43, 353-360.
Reese, J., (2006). Solution methods for the p-median problem: An annotated
bibliography. Networks, 48:3, 125-142.
Robson, D.S., Whitlock, J.H., (1964). Estimation of a truncation point.
Biometrika, 51, 33-39.
Taillard, E. D., (1995). Comparison of iterative searches for the quadratic
assignment problem. Location science, 3(2), 87-105.
Velleman, P.F., (1980). Definition and Comparison of Robust Nonlinear Data
Smoothing Algorithms. Journal of the American Statistical Association, 75,
609-615.
Wilson, A.D., King, R.E., Wilson, J.R., (2004). Case study on statistically
estimating minimum makespan for flow line scheduling problems. European
Journal of Operational Research, 155, 439-454.
Appendix: Table A1
Table A1: Description of the 40 problems of the OR-library.
Problem   𝜃           𝜇𝑔(𝑧𝑝)      𝜎𝑔(𝑧𝑝)    Complexity
P11       7696        10760       1195      2.56
P1        5819        8426        877       2.97
P16       8162        11353       1033      3.09
P6        7824        10522       869       3.10
P26       9917        13644       1133      3.29
P21       9138        12906       1070      3.52
P38       11060       15078       1143      3.52
P31       10086       13960       1077      3.60
P35       10400       14179       1085      3.81
P7        5631        7930        598       3.84
P2        4093        6054        506       3.88
P3        4250        6194        500       3.89
P27       8307        11428       727       4.29
P17       6999        9819        631       4.47
P22       8579        11699       676       4.62
P12       6634        9387        586       4.70
P36       9934        13436       735       4.77
P39       9423        12988       736       4.84
P32       9297        12687       699       4.85
P4        3034        4618        320       4.95
P5        1355        2376        197       5.18
P8        4445        6604        356       6.07
P13       4374        6293        276       6.95
P9        2734        4250        202       7.51
P18       4809        6769        248       7.92
P10       1255        2278        127       8.02
P23       4619        6586        220       8.94
P14       2968        4501        168       9.12
P28       4498        6369        188       9.95
P19       2845        4327        144       10.32
P15       1729        2896        109       10.67
P33       4700        6711        186       10.81
P24       2961        4486        134       11.42
P37       5057        7246        188       11.65
P20       1789        3108        112       11.73
P40       5128        7329        179       12.32
P29       3033        4559        118       12.93
P25       1828        3131        95        13.64
P34       3013        4617        112       14.36
P30       1989        3335        90        14.93
Appendix: R-code
Simulated Annealing:
V=100   # Number of candidate nodes (dependent on test problem)
p=5     # Number of facilities (dependent on test problem)
n<-100  # Number of replicates (dependent on experimental combination)
ni<-10000 # Number of iterations per replicate (dependent on experimental combination)
# Distance.matrix is assumed to hold the distances between demand points (rows)
# and candidate nodes (columns) of the test problem.
heuristic.solution<-matrix(0, nrow=ni, ncol=n)
heuristic.location<-matrix(0, nrow=p, ncol=n)
Store.solution<-numeric()
Best.solution<-numeric()
Store.location<-matrix(0, nrow=ni, ncol=p)
Best.location<-numeric()
for (i in 1:n){
  select.location<-sample(1:V,p,replace=F)        # random starting solution
  objective.function<-sum(apply(Distance.matrix[,select.location],1,min))
  iteration<-0; Temperature<-400; beta<-0.5; count<-0   # initial parameter setting
  while (iteration<ni){
    sam<-sample(1:p,1)                            # pick one facility at random
    substitution<-sample((1:V)[-select.location[sam]],1)   # candidate node to move it to
    store.selection<-select.location
    select.location[sam]<-substitution
    updated.objective.function<-sum(apply(Distance.matrix[,select.location],1,min))
    if (updated.objective.function<=objective.function) {
      objective.function<-updated.objective.function; beta<-0.5; count<-0 }
    if (updated.objective.function>objective.function){
      delta<-updated.objective.function-objective.function
      unif.number<-runif(1,0,1)
      if (unif.number<exp(-delta/Temperature)) {          # accept a worse move with small probability
        objective.function<-updated.objective.function; beta<-0.5; count<-0 }
      if (unif.number>=exp(-delta/Temperature)) {
        count<-count+1; select.location<-store.selection } }
    iteration<-iteration+1
    Temperature<-Temperature*0.95                 # cooling schedule
    Store.solution[iteration]<-objective.function
    Best.solution[iteration]<-min(Store.solution[1:iteration])
    Store.location[iteration,]<-select.location
    Best.location<-Store.location[min(which(Store.solution==Best.solution[iteration])),]
  }
  heuristic.solution[,i]<-Best.solution
  heuristic.location[,i]<-Best.location
}
Vertex Substitution:
# Uses V, p, ni and Distance.matrix as defined above.
iteration=0
current.P.node<-sample(1:V,p,replace=F)        # random starting solution
minimum.P.list<-current.P.node
temp<-numeric()
tbq1<-numeric()                                # best objective value after each iteration
min.obj.function=sum(apply(Distance.matrix[,current.P.node],1,min))
candidate.node<-setdiff(1:V,current.P.node)    # nodes not currently holding a facility
while(iteration<ni){
  for(i in 1:p){                               # examine each facility in turn
    for(j in seq_along(candidate.node)){       # try moving it to every candidate node
      for(k in 1:p){
        if(k==i){ temp[k]=candidate.node[j]
        }else{ temp[k]=current.P.node[k]} }
      curOFV=sum(apply(Distance.matrix[,temp],1,min))
      if(curOFV<min.obj.function){             # keep the best substitution found so far
        min.obj.function=curOFV
        minimum.P.list<-temp } } }
  current.P.node=minimum.P.list
  candidate.node<-setdiff(1:V,current.P.node)  # refresh the candidate list
  iteration=iteration+1
  tbq1[iteration]<-min.obj.function
}
Lagrangian Relaxation:
N<-nrow(ma1)                       # number of nodes; ma1 holds the distance matrix of the test problem
objective.f.matrix<-indicator.matrix<-matrix(0,N,N)
Distance.matrix<-ma1
lambda<-numeric()
V<-numeric()                       # node values of the Lagrangian subproblem
alpha<-2
lb.cu<-lb.pr<-ub.cu<-ub.pr<-NULL
Lower.bound<--1e13;Upper.bound<-1e13
counter<-0;counter.up<-0;iter<-1
location.m<-sample(1:N,p,replace=F)
lambda<-apply(Distance.matrix,1,mean)/8   # starting multipliers (Han, 2013)
run.time=0
start.time<-as.numeric(Sys.time())
while(run.time<40){                # allotted computing time in seconds
  objective.f.matrix<-Distance.matrix-lambda
  indicator.matrix<-(objective.f.matrix<0)
  for (j in 1:N){V[j]<-sum(objective.f.matrix[,j][indicator.matrix[,j]])}
  location<-order(V)[1:p]          # p most attractive nodes in the relaxed problem
  Lower.cu<-sum(lambda)+sum(V[location])
  if(Lower.cu>Lower.bound){Lower.bound<-Lower.cu}
  if(Lower.cu<Lower.bound){counter<-counter+1}  # count iterations without lower-bound improvement
  lb.cu<-c(lb.cu,Lower.cu);lb.pr<-c(lb.pr,Lower.bound)
  Upper.cu<-sum(apply(Distance.matrix[,location],1,min))  # primal (feasible) solution value
  if(Upper.cu<Upper.bound){Upper.bound<-Upper.cu;counter.up<-0}
  if(Upper.cu>=Upper.bound){counter.up<-counter.up+1}
  ub.cu<-c(ub.cu,Upper.cu);ub.pr<-c(ub.pr,Upper.bound)
  location.m<-cbind(location.m,location)
  violation<-sum((apply(indicator.matrix[,location],1,sum)-1)^2)
  if(counter==5){alpha<-alpha/2    # halve the step-size parameter after 5 non-improving steps
    counter<-0}
  update<-alpha*(Upper.bound-Lower.cu)/violation
  multi<-(apply(indicator.matrix[,location],1,sum)-1)*update
  lambda<-pmax(0,lambda-multi)     # subgradient update of the multipliers
  iteration.time<-as.numeric(Sys.time())
  run.time<-iteration.time-start.time
  iter=iter+1
}
node<-which(ub.pr==Upper.bound)[1]+1
optimal<-sum(apply(Distance.matrix[,location.m[,node]],1,min))
c(Upper.bound,Lower.bound)
PAPER III
Statistical bounds of genetic solutions to quadratic assignment
problems
Author: Xiangli Meng
Abstract: Quadratic assignment problems (QAPs) are commonly solved
by heuristic methods, where the optimum is sought iteratively. Heuristics
are known to provide good solutions but the quality of the solutions, i.e.,
the confidence interval of the solution is unknown. This paper uses
statistical optimum estimation techniques (SOETs) to assess the quality of
Genetic algorithm solutions for QAPs. We examine the functioning of
different SOETs regarding biasness, coverage rate and length of interval,
and then we compare the SOET lower bound with deterministic ones. The
commonly used deterministic bounds are confined to only a few
algorithms. We show that, the Jackknife estimators have better
performance than Weibull estimators, and when the number of heuristic
solutions is as large as 100, higher order JK-estimators perform better than
lower order ones. Compared with the deterministic bounds, the SOET
lower bound performs significantly better than most deterministic lower
bounds and is comparable with the best deterministic ones.
Key words: quadratic assignment problem, genetic algorithm, Jack-knife,
discrete optimization, extreme value theory
* PhD student in Microdata Analysis at the School of Technology and Business Studies, Dalarna University, SE-791 88 Falun, Sweden. E-mail: xme@du.se. Phone: +46 23-778509.
1. Introduction
Combinatorial problems in operational research have been widely studied due to their significant utility in improving the efficiency of many real-world problems. However, many combinatorial problems are NP-hard and
enumerating all possible solutions becomes impossible when the problem
size increases. Many studies have been devoted to developing efficient
(meta-) heuristic algorithms to solve the problem and provide a good
solution. A shortcoming of heuristic solutions is that it is difficult to assess
the quality of the solution, i.e., the difference between the heuristic
solution and the exact optimum is unknown. One common strategy is to
use algorithms providing deterministic bounds, such as Lagrangian
relaxation (Fisher 2004) and Branch and bound (Land and Doig, 1960).
This strategy is popular and reasonable for many problems, but it confines
the choice of heuristic algorithms, and its performance (largely) depends
on the choice of parameters. For many widely used algorithms, such as the Genetic algorithm and Simulated Annealing, the quality of their solutions remains vague relative to deterministic algorithms.
An alternative strategy for assessing the quality of heuristic solutions is to use statistical bounds, also referred to as statistical optimum estimation techniques (SOETs). The idea of SOETs is that parallel heuristic processes with random starting values will result in random heuristic solutions, thereby providing a random sample close to the optimum. Statistical theories, such as nonparametric theory and extreme value theory, are then applied to this random sample to estimate the optimum and provide confidence intervals. Pioneering work was done by Derigs (1985) on travelling salesman problems (TSPs) and quadratic assignment problems (QAPs). It is shown that statistical bounds are competitive with deterministic ones for both problems and have more potential in QAP than in TSP. After Derigs (1985) there has been some research devoted to developing SOETs, but many questions remain unanswered, hindering the wide application of SOETs. Giddings et al. (2014) summarize the current state of research on and application of SOETs to operational problems. A problem class 𝒥 is a group of problem instances 𝛪, so 𝛪 ∈ 𝒥. This class contains well-known combinatorial optimization problems such as TSP, Knapsack, and Scheduling. A heuristic 𝐻 is a combination of computer instructions of the solution method with a given random number
seed. The heuristic class ℋ is the collection of possible heuristics. 𝑛 is the
number of replicates arising from unique random number seeds. The
SOETs consist of all the combination sets of 𝒥 × ℋⁿ. For a complete investigation of the SOET performance, all the restricted types 𝐼 × 𝐻ⁿ need to be checked.
For a specific combination set of 𝐼 × 𝐻 𝑛 , Carling and Meng (2014a,
2014b) examine the application of SOETs on p-median problems. They
compare the performance of SOETs systematically regarding different
heuristics 𝐻 and number of replicates 𝑛 and give the following
conclusions:
(1) The SOETs are quite informative given that the heuristic solutions
derived are close enough to the optimum. A statistic named SR (standard
deviation ratio) is proposed for evaluating whether this condition is
satisfied. The statistical bounds will cover the optimum almost certainly if
SR is smaller than the threshold 4.
(2) Comparing the performances of different SOET estimators, the 2nd
order Jackknife estimator and the Weibull estimator have better
performance in having smaller bias and providing statistical bounds
covering the optimum. When SR<4, the bounds cover the optimum almost
certainly. The Gumbel estimator and the 1st order Jackknife estimator
perform worse.
(3) A small sample size n, e.g., 𝑛 = 3, leads to unstable intervals, but 10 heuristic solutions provide almost equally good statistical bounds as 100 heuristic solutions. Thus, having more than 10 heuristic processes has only a small effect on the functioning of the SOETs.
(4) Different heuristics do not affect the performance of the statistical intervals. The solutions derived by Simulated Annealing are not significantly different from those derived by Vertex Substitution. The performance of the point estimators and statistical bounds is almost the same as long as SR < 4.
(5) Under the same computing time, statistical intervals give better results
than deterministic intervals derived by Lagrangian relaxation. The
statistical intervals have much shorter lengths in most of the cases while
almost certainly covering the optimum.
Conclusions (1), (2) and (4) are novel, i.e., they cannot be traced back to similar research results, while conclusion (3) is analogous to Brandeau and Chiu (1993), who state that 𝑛 = 10 obtains as good solutions as 𝑛 = 2000 and that statistical bounds yield better lower bounds than the available analytical bounds. Conclusion (5) coincides with Brandeau and Chiu (1993) and Derigs (1985). These conclusions provide us with an
effective way of deriving useful statistical intervals. However, Carling and
Meng have only conducted the analysis on p-median problems, and left
the validity of SOETs unverified on many other operational problems in 𝒥.
The focus of this paper is therefore to analyse the performance of different
SOETs on another important combinatorial problem, namely the quadratic
assignment problem.
The quadratic assignment problem (QAP) is a classical combinatorial problem in operational research. It is formulated as follows. Consider two N-dimensional square matrices 𝐴 = (𝑎ᵢⱼ)_N and 𝐵 = (𝑏ᵢⱼ)_N; find a permutation (𝑧₁, 𝑧₂, … , 𝑧_N) of the integers from 1 to 𝑁 that minimises the objective function:

$$g = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij}\, b_{z_i z_j}$$
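For concreteness, a minimal R sketch of this objective, assuming A and B are the two N × N matrices and z an integer vector holding the permutation (names are illustrative):

# Sketch: QAP objective for a permutation z of 1:N.
qap.objective <- function(A, B, z) {
  sum(A * B[z, z])     # equivalent to sum_i sum_j a_ij * b_{z_i z_j}
}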
The QAP has many real-world applications in operational and economic problems; see Loiola et al. (2007) for a detailed introduction. The QAP is known to be a difficult problem to solve, especially when 𝑁 > 15.
As stated above, heuristics such as the Genetic algorithm (Tate and Smith, 1995), Simulated Annealing (Wilhelm and Ward, 1987) and Tabu search (Misevicius, 2005) have been proposed for retrieving good solutions, but the quality of these solutions is unknown and cannot be assessed. Deterministic lower bounds are available for only a few algorithms and rely on parameter choices, e.g., Adams et al. (2007). Derigs (1985) compares the Weibull lower bound, as a representative of the SOETs, with deterministic lower bounds from the Branch and Bound algorithm, and concludes that the Weibull bounds outperform the deterministic ones. That research shows the potential of SOETs, but the confined experimental design provides neither sufficient support for the usage of SOETs nor suggestions on how to apply them.
Presumably the usefulness of the SOETs still applies, as Derigs argued, but a critical, systematic examination on QAPs is needed and application advice needs to be formulated, which is the focus of this paper. This paper aims at studying the usefulness of SOETs for one combination in the SOET framework 𝒥 × ℋⁿ, namely the Genetic algorithm on QAPs. The Genetic algorithm is one of the most widely used algorithms for solving operational problems, including QAPs (Loiola et al., 2007). It is known to find good solutions consistently, while being computationally affordable and exceptionally robust to different problem characteristics and implementations (Tate and Smith, 1995). It is the leading algorithm that researchers turn to for solving QAPs, although the quality of its solutions remains ambiguous. Thus, our concern is to apply SOETs and assess how well they gauge the quality of Genetic solutions. This paper is organized as follows. In Section 2 we investigate
the features of QAPs. Section 3 reviews and proposes methods for
statistically estimating the minimum of the objective function as well as
the corresponding bounds. Section 4 presents the results and analysis.
Section 5 makes a comparison between the SOET lower bounds and the
deterministic ones. The last section concludes this paper.
2. Complexity measure of quadratic assignment problem
First we introduce the notations that will be used throughout the paper.
𝑍𝑖 = feasible solution of a QAP with N dimensions, 𝑖 = 1,2, … , 𝑁!.
𝑍 = the set of all feasible solutions 𝑍 = {𝑍1 , 𝑍2 , … , 𝑍𝑁! }.
𝑔(𝑍𝑖 ) = the value of objective function for solution 𝑍𝑖 .
𝜃 = min𝑍 𝑔(𝑍𝑖 ).
Before going into the comparison of different estimators, the
characteristics of the problems need to be investigated. Here we focus on
the complexity of the problems, which is interpreted here as the difficulty for algorithms to reach 𝜃. Conventionally, the complexity of a QAP is determined by the number of dimensions N. This makes sense since the number of possible solutions to the QAP is 𝑁!, which determines the size of the population of solutions. Yet, the size of the solution population is not the only factor that influences the complexity of the problems; in fact, the structures of the matrices A and B also play an influential role. For example, if one matrix has most elements equal to 0, the complexity of the problem should be comparably smaller.
Carling and Meng (2014a) propose a new way of measuring the
complexity of the p-median problems in experimental cases. They find
that the objective function values for the p-median problems are
approximately normally distributed, therefore they propose measuring
complexity of a problem by the number of standard deviations that the
optimal value lies away from the mean, i.e., ((𝜇𝑔(𝑍) − 𝜃)/𝜎𝑔(𝑍) ), where
𝜇𝑔(𝑍) is the mean of 𝑔(𝑍) and 𝜎𝑔(𝑍) the standard deviation. 𝜇𝑔(𝑍) and 𝜎𝑔(𝑍)
are estimated by drawing a large random sample of solutions. This method provides a good way of measuring the complexity of solving a problem, since reaching 𝜃 becomes harder when it lies further out in the tail; hence the problem is more complex. Although this method is not practically useful, since 𝜃 is unknown in real problems, it is quite helpful in assessing the performance of SOETs in experiments. Therefore, we follow this approach and check the complexity of the QAPs.
The test problems used are from QAPLIB (Burkard et al., 1997). The
QAPLIB provides QAP test problems with various N, A and B. One
important benefit of QAPLIB is that it has known 𝜃 for most problems.
We choose 40 problems with N varying between 12 and 100 and then check their complexity.
Figure 1: Sample distribution of the 14th problem in the OR-library.
Table 1: Description of the problem complexity of the QAPLIB.

Problem   N    𝜃           𝜇𝑔(𝑧𝑝)         𝜎𝑔(𝑧𝑝)       Complexity
tai64c    64   1855928     2955590.56     346815.9     3.17
bur26c    26   5426795     5941725.45     104825.6     4.91
nug16b    16   1240        1727.95        81.45        5.99
nug24     24   3488        4766.84        159.25       8.03
tai80b    80   818415043   1242911278     30709661     13.82
lipa60b   60   2520135     3272434.13     18091.48     41.58
Figure 1 gives the empirical distribution of a random sample for the
problem bur26c. One million random solutions are generated and
collected. The value of the QAP objective function is approximately normally distributed. The distributions of the other test problems match that in Figure 1. The complexity, together with the sample mean and sample standard deviation, is given for 6 problems in Table 1. The full results are given in Appendix I. The complexity of the problems varies from 3 to 41. It is easy to reach an optimum which lies only 3 standard deviations away from the mean, while it is rather difficult to reach an optimum which lies 41 standard deviations away from the mean.
3. Statistical estimation of the minimum and its bounds
There are two approaches within the SOETs which provide estimators of the minimum and its bounds, based on different statistical theories: first, the truncation-point approach, and second, the extreme value theory approach. Both approaches require the sample to be randomly selected. However, as Meng and Carling (2014) show, the performance of SOETs requires a randomly selected sample containing values close to 𝜃, in which case the size of that sample would need to be enormously large and infeasible to retrieve. As several researchers have pointed out, if the starting values are selected at random, parallel heuristic solutions simulate a random sample in the tail (see e.g. McRoberts, 1971, and Golden and Alt, 1979). In other words, we can get the desired random sample with much less effort. We denote by 𝑧̃ᵢ the heuristic solution of the 𝑖th, 𝑖 = 1, 2, … , 𝑛, heuristic process, and use these solutions to compare the functioning of the SOETs.
In the truncation-point approach, the most commonly used method is the Jackknife estimator introduced by Quenouille (1956):

$$\hat{\theta}_{JK} = \sum_{m=1}^{M+1} (-1)^{(m-1)} \binom{M+1}{m}\, \tilde{z}_{(m)}$$

where M is the order and 𝑧̃₍ₘ₎ is the 𝑚th smallest value in the sample. Dannenbring (1977) and Nydick and Weiss (1988) suggest using the first order, i.e. 𝑀 = 1, for point estimating the minimum. The upper bound of the JK estimators is the minimum of the 𝑧̃ᵢ, and the lower bound is [𝜃̂_JK − 3𝜎*(𝜃̂_JK)], where 𝜎*(𝜃̂_JK) is the standard deviation of 𝜃̂_JK obtained from bootstrapping the n heuristic solutions (1,000 bootstrap samples are found to be sufficient). The scalar of 3 in computing the lower bound renders the confidence level 99.9% under the assumption that the sampling distribution of the JK-estimator is Normal. The 1st order JK-estimator is more biased than the higher-order ones, but its mean square error is lower, as shown by Robson and Whitlock (1964). Carling and Meng (2014a) checked the performance of the 1st and 2nd order JK-estimators, finding that the 2nd order JK-estimator performs better by providing a higher coverage rate at the cost of a slightly longer interval. The smaller bias of the 2nd order JK-estimator improves the performance of the estimator. Therefore, it is reasonable to wonder whether JK-estimators of even higher order would provide better estimation results. To check this, we extend to the 3rd and 4th order JK-estimators in our experiments.
The extreme value theory (EVT) approach assumes the heuristic solutions to be extreme values from different random samples, following the Weibull distribution (Derigs, 1985). The confidence interval is derived from the characteristics of the Weibull distribution. The estimator of 𝜃 is 𝑧̃₍₁₎, which is also the upper bound of the confidence interval. The Weibull lower bound is [𝑧̃₍₁₎ − 𝑏̂] at a confidence level of (1 − 𝑒⁻ⁿ), where 𝑏̂ is the estimated shape parameter of the Weibull distribution. Derigs (1985) provides a simple and fast way of estimating the parameter:

$$\hat{b} = \tilde{z}_{([0.63(n+1)])} - \frac{\tilde{z}_{(1)}\tilde{z}_{(n)} - \tilde{z}_{(2)}^{2}}{\tilde{z}_{(1)} + \tilde{z}_{(n)} - 2\tilde{z}_{(2)}}$$

where [0.63(𝑛 + 1)] denotes the integer part of 0.63(𝑛 + 1).
As stated in the Introduction, Carling and Meng (2014a, 2014b) argue that SOETs work when the 𝑧̃₍ᵢ₎ are close enough to 𝜃. They propose the statistic $SR = 1000\,\sigma(\tilde{z}_i)/\hat{\theta}_{JK}^{(1)}$ to evaluate whether that condition is satisfied. The statistic mimics a standardization of the standard deviation across different heuristic solutions. 𝑆𝑅 < 4 indicates that the Weibull and JK intervals cover the optimum almost certainly. It has proved useful for p-median problems, and we will check its functioning for QAPs.
4. Experimental evaluation of SOETs
With the SOETs introduced above, we design experiments to investigate
their usefulness on Genetic solutions of QAPs. The implementation of
Genetic algorithm follows Tate and Smith (1995). The reproduction and
mutation proportions are 25% and 75% respectively. The same 40
problems as used for complexity analysis are chosen for experiments. 100
genetic processes with 1000 iterations each are carried out for each
problem. With this number of iterations, we have some problems with
solutions close to optimum and some far from the optimum, this gives us
diversified information for SOET performance in different situations.
The first factor tested is the effect of estimators, where the Weibull
estimator together with four JK-estimators are considered. The second
factor considered is the effect of 𝑛, where we vary n to be 10 and 100. The
third factor tested is the effect of complexity. These three factors result in a
40 × 1 × 2 × 5 set of experiment combinations, where 40 indicates the 40
problem instances in 𝒥, 1 indicates the single heuristic in ℋ, 2 indicates the
two sample sizes 𝑛, and 5 indicates the five estimators. To assess the
performance of the estimators, we first draw a random
sample of size n with replacement from the 100 solutions, and then
calculate the estimators and confidence intervals. The procedure is
repeated 1,000 times for every combination. We then compute the average
relative bias ($\frac{\text{bias}}{\theta} \times 100\%$), the coverage rate
(the proportion of intervals that cover $\theta$), and the average relative
length of the interval ($\frac{\text{length}}{\theta} \times 100\%$). These
three indicators are used to evaluate the performance of the estimators
under different circumstances. The performance of the SR statistic will also
be checked. The results of the experiments are reported below with figures,
and the details are provided in Appendix II.
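The resampling evaluation itself is straightforward; a skeleton in C (ours, not the code used in the thesis) is sketched below. The interval constructor shown is only a toy stand-in; in the actual experiments it would be replaced by the JK or Weibull intervals described above, and sol would hold the 100 stored solution values of a problem with known optimum theta.

```c
#include <stdio.h>
#include <stdlib.h>

/* Interval constructor type: from a resample (sorted ascending, size n)
   produce a lower and an upper bound for theta. */
typedef void (*interval_fn)(const double *z, int n, double *lo, double *hi);

/* Toy placeholder interval, for illustration only: upper = best value found,
   lower = best value minus three times the gap to the second best. */
static void toy_interval(const double *z, int n, double *lo, double *hi) {
    (void)n;
    *hi = z[0];
    *lo = z[0] - 3.0 * (z[1] - z[0]);
}

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Resample n values (with replacement) from the nsol stored heuristic
   solutions R times; tally coverage rate and average relative length. */
void evaluate(const double *sol, int nsol, int n, int R, double theta, interval_fn f) {
    double *z = malloc(n * sizeof(double));
    int covered = 0;
    double rel_len = 0.0;
    for (int r = 0; r < R; r++) {
        for (int i = 0; i < n; i++) z[i] = sol[rand() % nsol];
        qsort(z, n, sizeof(double), cmp_double);
        double lo, hi;
        f(z, n, &lo, &hi);
        if (lo <= theta && theta <= hi) covered++;
        rel_len += (hi - lo) / theta * 100.0;
    }
    printf("coverage %.2f, mean relative length %.2f%%\n",
           (double)covered / R, rel_len / R);
    free(z);
}
```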
4.1. The relative bias of estimators
First we check the performance regarding bias. It is reasonable to expect
the Jackknife-type estimators to have smaller bias than the Weibull
estimator. Figure 1 confirms this by giving the Lowess smoothing line
(Cleveland, 1979) of the relative bias for the five estimators when the
sample size n is 10 and 100. The difference between the four JK-estimators
is, however, marginal for both levels of n. When the complexity of the
problem exceeds 25, the biases of all the estimators increase sharply. The
2nd and 3rd order JK-estimators have only a slight advantage over the 1st
and 4th order ones under n = 10; the mean difference between the 1st and 3rd
order JK-estimators is merely 0.23% of θ. When the sample size increases to
100, the advantage of the JK-estimators over the Weibull estimator remains,
but the differences between JK-estimators of different order diminish. The
1st and 2nd order JK-estimators reduce their relative bias by 0.3% and 0.1%
of θ respectively, while there is almost no drop for the 3rd and 4th order
JK-estimators.
[Figure 1 here: two panels (sample size 10 and sample size 100); x-axis: complexity of the problem (0-40); y-axis: relative bias of the estimators (%); Lowess lines for the 1st, 2nd, 3rd and 4th order JK-estimators and the Weibull estimator.]
Figure 1. Lowess lines of the relative bias of the 5 estimators for sample sizes 10 and 100.
4.2. Interval coverage rate and relative length
Next, we check the coverage rate of the 5 estimators when n = 10 and 100.
Figure 2 gives the Lowess smoothing line of the results, and Table 2 gives
the mean and median coverage rates together with the interval lengths. For
both levels of n, the coverage rates of all 5 estimators decline sharply as
the complexity of the problems increases. The JK-estimators again outperform
the Weibull estimator with a higher coverage rate. Among the JK-estimators,
the 4th order performs better than the other three orders except when the
complexity goes beyond 30, while the 1st order JK-estimator has the worst
performance. Due to the deterioration of the 4th order JK-estimator's
coverage rate, the mean difference between the 1st and 4th order
JK-estimators is 20% when n = 10, and 14% when n = 100.
[Figure 2 here: two panels (sample size 10 and sample size 100); x-axis: complexity of the problem (0-40); y-axis: coverage rate of the estimators (0-1); Lowess lines for the 1st, 2nd, 3rd and 4th order JK-estimators and the Weibull estimator.]
Figure 2. Coverage percentage of 1st order, 2nd, 3rd and 4th order Jackknife estimator, and Weibull estimator under sample size 10 and 100.
[Figure 3 here: two panels (sample size 10 and sample size 100); x-axis: complexity of the problem (0-40); y-axis: interval length of the estimators; Lowess lines for the 1st, 2nd, 3rd and 4th order JK-estimators and the Weibull estimator.]
Figure 3. Interval length of 1st order, 2nd, 3rd and 4th order Jackknife estimator, and Weibull estimator under sample size 10 and 100.
Table 2: Coverage rate and relative length of the 5 estimators.

            Coverage Rate (%)                   Relative Length (%)
            Sample size 10    Sample size 100   Sample size 10    Sample size 100
Estimator   Mean   Median     Mean   Median     Mean    Median    Mean   Median
1st JK      66     85         67     98         3.01    1.27      0.92   0.14
2nd JK      73     95         71     99         5.24    2.50      1.63   0.24
3rd JK      80     100        75     100        9.18    4.69      2.99   0.45
4th JK      86     100        81     100        16.36   8.74      5.62   0.92
Weibull     66     90         66     90         2.59    1.61      2.48   1.69
As for the relative length of the intervals, Figure 3 shows no clear
tendency in the relationship between the relative length and the complexity
of the problems. The Weibull intervals are the shortest, with a mean of
around 2.5% of θ for both n = 10 and n = 100. The length of the JK intervals
almost doubles when the order increases by 1 and is strongly affected by the
sample size: the lengths for n = 10 are almost three times those for
n = 100. When n = 10, the 1st order JK interval has a mean length of 3% of
θ, slightly longer than the Weibull interval at the same coverage rate. The
2nd order JK-estimator has a higher coverage rate together with a longer
interval, and the situation deteriorates for the 3rd and 4th order
JK-estimators, which have much longer confidence intervals. When n = 100,
the Weibull estimator performs almost the same, while the JK-estimators
improve: the 3rd order JK-estimator then has a similar mean length and a
much shorter median length than the Weibull estimator, but a 9 percentage
points higher average coverage rate. Thus, the 1st and 2nd order
JK-estimators are suggested when the sample size is small, and the 3rd and
4th order JK-estimators when the sample size is large.
4.3. SR performance
Next, the performance of the statistic SR is checked. Based on the analysis
above, we focus on two cases: the 2nd order JK-estimator when n = 10 and the
4th order JK-estimator when n = 100. Figure 4 gives the scatter plot and
Lowess line of the coverage rate against SR. For both cases, a small SR
close to 0 does not guarantee a high coverage rate, while an SR as large as
60 may correspond to a coverage rate as high as 100%. Problems with high
complexity are more likely to have the different heuristic runs trapped in
the same or similar suboptima, leading to a deceptively small SR. For easy
problems, on the other hand, a small SR does indicate a coverage rate close
to 1. Figure 5 shows the instances with SR < 7 for both cases. The threshold
7 is chosen because it is the integer part of the smallest SR at which the
coverage rate for easy problems drops below 0.95, for both sample sizes and
all 5 intervals. The size of the circle indicates the complexity of the
problem. The problems with negligible coverage rates all have complexities
over 17. The performance of SR is therefore related to the complexity of the
problem: for easy problems a small SR supports that the confidence interval
covers θ, but not for difficult problems. No clear pattern for the
functioning of SR can be concluded, and the application of SR remains an
open question.
[Figure 4 here: two panels (sample size 10: 2nd order JK-estimator; sample size 100: 4th order JK-estimator); x-axis: SR (0-80); y-axis: coverage rate (0-1); scatter plot with Lowess line, bandwidth 0.8.]
Figure 4. Scatter plot between SR and coverage rate of 2nd order JK-estimator for n=10 and 4th order JK-estimator for n=100.
[Figure 5 here: two panels (sample size 10; sample size 100); x-axis: SR (0-5); y-axis: coverage rate (0-1); circle size proportional to problem complexity.]
Figure 5. Scatter plot between SR<7 and coverage rate of 2nd order JK-estimator (left), and between SR<7 and coverage rate of 4th order JK-estimator (right). The size of the circle stands for the complexity of the problem.
5. Lower bound comparison
As a final step in assessing the quality of the solutions, it is of great
interest to compare the SOET lower bound with the common approach, namely
the deterministic bounds. Several such lower bounds have been proposed, and
Loiola et al. (2007) collect lower bounds for a number of problems. The
deterministic bounds considered are: the Gilmore-Lawler bound (GLB62), from
Gilmore (1962); the interior-point bound (RRD95), from Resende et al.
(1995); the 1-RLT dual ascent bound (HG98), from Hahn and Grant (1998); the
dual-based bound (KCCEB99), from Karisch et al. (1999); the quadratic
programming bound (AB01), from Anstreicher and Brixius (2001); the SDP bound
(RS03), from Sotirov and Rendl (2003); the lift-and-project SDP bound
(BV04), from Burer and Vandenbussche (2006); and the Hahn-Hightower 2-RLT
dual ascent bound (HH01), from Adams et al. (2007). To incorporate our
results into their framework, we compare the SOET lower bound obtained with
the techniques of the previous section with the deterministic ones. The
heuristic solutions are derived by running 100 Genetic processes with 3,000
iterations each; almost all processes stopped improving after 2,500
iterations, with very few exceptions. We then derive the SOET lower bound
from the 4th order JK-estimator. The lower bounds are given in Table 3. To
assess the performance conveniently, we calculate the average absolute
relative deviation of the lower bounds, i.e.,
$\frac{|\text{lower bound} - \text{optimum}|}{\text{optimum}} \times 100\%$,
and report it in the last row of Table 3.
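For instance, for Had16 in Table 3 the statistical (JK4) lower bound is 3704 against the optimum 3720, giving
$$\frac{|3704 - 3720|}{3720} \times 100\% \approx 0.43\%,$$
whereas the Gilmore-Lawler bound of 3358 for the same instance deviates by $|3358 - 3720|/3720 \times 100\% \approx 9.7\%$.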
Out of the 15 problems, SOET has the best lower bound for 2, while HH01 and
BV04 share the remaining 13 best lower bounds. The SOET lower bounds perform
better than the first six deterministic lower bounds but are surpassed by
BV04 and HH01, which are acknowledged to be the best deterministic lower
bounds. The average absolute deviations for SOET, HH01 and BV04 are small,
6%, 3% and 3% respectively, and markedly smaller than those of the other
deterministic bounds. Out of the 15 problems, the SOET lower bounds cover 14
of the optima. The SOET lower bound is therefore competitive with the best
deterministic bounds and shows great potential in applications, even though
it may fail to cover the optimum with a small probability.
Table 3: Statistical lower bounds and deterministic lower bounds.

Problem   Optimum   GLB62     HG98      KCCEB99   AB01      RS03      BV04      HH01      JK4
Had16     3720      3358      3558      3553      3595      3699      3672      3720      3704
Had18     5358      4776      5083      5078      5143      5317      5299      5358      5261
Had20     6922      6166      6571      6567      6677      6885      6811      6922      6810
Kra30a    88900     68360     75853     75566     68572     77647     86678     86247     83867
Kra30b    91420     69065     76562     76235     69021     81156     87699     87107     88601
Nug12     578       493       523       521       498       557       568       578       515
Nug15     1150      963       1039      1033      1001      1122      1141      1150      1143
Nug20     2570      2057      2179      2173      2290      2451      2506      2508      2266
Nug30     6124      4539      4793      4785      5365      5803      5934      5750      5857
Rou15     354210    298548    323943    323589    303777    333287    350207    345210    317782
Rou20     725520    559948    642058    641425    607822    663833    695123    699390    679441
Tai20a    703482    580674    617206    616644    585139    663730    671685    675870    656794
Tai25a    1167256   962417    1006749   1005978   983456    1041337   1112862   1091653   1084665
Tai30a    1818146   1504688   1566309   1565313   1518059   1652186   1706875   1686290   2210730
Tho30     149936    90578     99995     99855     124684    136059    142814    136708    145616
Bias.%              19.08     12.99     13.17     13.65     61.92     2.94      3.26      6.34

Source: Loiola et al. (2007) except for the last column and last row. In the original table the best lower bound for each problem is set in bold.
6. Concluding discussion
In this paper, we analyse the performance of SOETs on QAPs. Based on the
framework proposed by Giddings et al. (2014), the paper extends the work of
Derigs (1985) by systematically verifying the usefulness of SOETs and
comparing them with deterministic bounds, and it extends the work of Carling
and Meng (2014a, 2014b) by testing on QAPs. We tested 5 estimators on 40
problems with different sample sizes. In our analysis, SOETs can be useful
in providing informative intervals covering the optimum. The JK-estimators
perform better than the Weibull estimator: when the sample size is small,
the 2nd order JK-estimator is suggested, and when the sample size is large,
the 4th order JK-estimator is suggested. The statistic SR does not provide
accurate information, especially when the solutions are trapped in suboptima
for complex problems. The comparison between the SOET lower bound and the
deterministic ones shows that SOET performs close to the best deterministic
lower bounds. Thus, SOETs have great potential for accurately assessing the
quality of heuristic solutions.
References
Adams, W.P., Guignard, M., Hahn, P.M., Hightower, W.L., (2007), A level-2
reformulation-linearization technique bound for the quadratic assignment
problem, European Journal of Operational Research 180, 983-996.
Anstreicher, K.M., Brixius, N.W., (2001), Solving quadratic assignment
problems using convex quadratic programming relaxations. Optimization
Methods and Software, 16(1-4), 49-68.
Burer, S., Vandenbussche, D, (2006), Solving lift-and-project relaxations of
binary integer programs. SIAM Journal on Optimization, 16(3), 726-750.
Burkard, R.E., Karisch, S.E., Rendl, F., (1997), QAPLIB – A Quadratic
Assignment Problem Library, Journal of Global Optimization 10, 391-403
Carling, K., Meng, X., (2014a), On statistical bounds of heuristic solutions to
location problems. Working papers in transport, tourism, information
technology and microdata analysis, 2014:10.
Carling, K., Meng, X., (2014b), Confidence in heuristic solutions. Working
papers in transport, tourism, information technology and microdata analysis,
2014:12.
Cleveland, W.S., (1979), Robust Locally Weighted Regression and Smoothing
Scatterplots, Journal of the American Statistical Association, 74, 829–836.
Dannenbring, D.G., (1977), Procedures for estimating optimal solution
values for large combinatorial problems, Management Science, 23:12, 1273-1283.
Derigs, U, (1985), Using confidence limits for the global optimum in
combinatorial optimization. Operations research, 33:5, 1024-1049.
Fisher, M.L., (2004), The Lagrangian relaxation method for solving integer
programming problems. Management Science, 50(12_supplement), 1861-1871.
Giddings, A.P., Rardin, R.L, Uzsoy, R, (2014), Statistical optimum estimation
techniques for combinatorial problems: a review and critique. Journal of
Heuristics, 20, 329-358.
Gilmore, P.C., (1962), Optimal and suboptimal algorithms for the quadratic
assignment problem. Journal of the Society for Industrial & Applied
Mathematics, 10(2), 305-313.
Golden, B.L., Alt, F.B., (1979), Interval estimation of a global optimum for
large combinatorial problems, Naval Research Logistics Quarterly, 26, 69-77.
Hahn, P., Grant, T., (1998), Lower bounds for the quadratic assignment
problem based upon a dual formulation. Operations Research, 46(6), 912-922.
Karisch, S.E., Cela, E., Clausen, J., Espersen, T., (1999), A dual framework
for lower bounds of the quadratic assignment problem based on linearization.
Computing, 63(4), 351-403.
Land, A.H., Doig, A.G. (1960), An automatic method of solving discrete
programming problems. Econometrica: Journal of the Econometric Society,
497-520.
Loiola, E.M., Abreu, N.M.M., Boaventura-Netto, P.O., Hahn, P., Querido, T.,
(2007), A survey for the quadratic assignment problem. European Journal of
Operational Research, 176(2), 657-690.
McRobert, K.L., (1971), A search model for evaluating combinatorially
explosive problems, Operations Research, 19, 1331-1349.
Meng, X., Carling, K., (2014), How to Decide Upon Stopping a Heuristic
Algorithm in Facility-Location Problems?. In Web Information Systems
Engineering–WISE 2013 Workshops, Lecture Notes in Computer Science,
8182, 280-283, Springer, Berlin/Heidelberg.
Misevicius, A., (2005), A tabu search algorithm for the quadratic assignment
problem. Computational Optimization and Applications, 30:95-111.
Nydick, R.L., Jr., and Weiss, H.J., (1988), A computational evaluation of
optimal solution value estimation procedures, Computers & Operations
Research, 5, 427-440.
Resende, M.G.C., K.G. Ramakrishnan, and Z. Drezner, (1995), Computing
lower bounds for the quadratic assignment problem with an interior point
algorithm for linear programming, Operations Research 43, 781-791.
Robson, D.S., and Whitlock, J.H., (1964), Estimation of a truncation point,
Biometrika, 51, 33-39.
Sotirov, R., Rendl, F., (2003), Bounds for the Quadratic Assignment Problem
Using the Bundle Method. Department of Mathematics, University of
Klagenfurt, Austria, Tech. Rep.
Tate, D.M., Smith, A.E. (1995), A genetic approach to the quadratic
assignment problem. Computers & Operations Research, 22(1), 73-83.
Wilhelm, M.R., Ward, T.L. (1987), Solving quadratic assignment problems by
‘simulated annealing’. IIE transactions, 19(1), 107-119.
Appendix I
Table A1: Description of the 40 problems of the QAPLIB.
Problem   n     θ            μ_g(z_p)       σ_g(z_p)     Complexity
els19     19    17212548     58712359.92    10526387     3.94
chr12a    12    9552         45123.58       8833.06      4.03
esc16c    16    160          249.32         20.05        4.45
chr15a    15    9896         61378.34       11444.91     4.5
had12     12    1652         1888.27        50.18        4.71
chr18b    18    1534         4601.64        618.07       4.96
bur26d    26    3821225      4211133.26     75035.14     5.2
nug14     14    1014         1363.54        65.77        5.31
had16     16    3720         4226.97        85.82        5.91
chr20b    20    2298         10708.88       1415.13      5.94
had18     18    5358         5990.01        105.86       5.97
scr20     20    110030       226272.64      19391.22     5.99
chr20a    20    2192         10707.62       1406.16      6.06
nug18     18    1930         2565.41        98.92        6.42
chr25a    25    3796         19877.53       2494.68      6.45
ste36a    36    9526         22750.27       2022.71      6.54
had20     20    6922         7764.79        125.98       6.69
tai35b    35    283315445    516046202      33413753     6.97
nug27     27    5234         7128.04        233.68       8.11
tai40b    40    637250948    1132975203     56180695     8.82
nug30     30    6124         8132.35        212.94       9.43
kra30b    30    91420        137016.96      4621.8       9.87
kra30a    30    88900        134657.56      4487.12      10.2
kra32     32    88900        137137.22      4716.35      10.23
lipa20a   20    3683         3942           24.79        10.45
sko81     81    90998        108443.68      1004.45      17.37
tai60a    60    7208572      8518524.44     70989.41     18.45
sko90     90    115534       136878.06      1143.79      18.66
wil100    100   273038       299759.06      1367.71      19.54
lipa50a   50    62093        64035.91       98.86        19.65
sko100c   100   147862       174507.74      1304.2       20.43
tai80a    80    13557864     15624432.79    94608.26     21.84
lipa70a   70    169755       173757.85      168.43       23.77
lipa40b   40    476581       621324.38      5245.29      27.59
Appendix II
Table A2: Relative bias, coverage rate and relative length of 1st JK, 2nd JK and 3rd JK
estimators when 𝑛 = 10.
CV stands for coverage rate, RB stands for relative bias in percentage, RL stands for
relative length in percentage
                 1st JK               2nd JK               3rd JK
Problem   SR     CV    RB     RL      CV    RB     RL      CV    RB     RL
bur26d    0.28   0.98  0.00   0.01    0.99  0.00   0.02    1.00  0.00   0.04
lipa70a   0.3    0.00  0.95   0.08    0.00  0.94   0.14    0.00  0.94   0.24
had16     0.41   1.00  0.00   0.03    1.00  0.00   0.06    1.00  0.00   0.15
esc16c    0.44   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
bur26c    0.49   0.97  0.00   0.02    0.99  0.00   0.03    1.00  0.00   0.05
lipa50a   0.56   0.00  1.13   0.14    0.00  1.12   0.24    0.03  1.12   0.42
wil100    1.56   0.02  1.59   0.52    0.15  1.53   0.88    0.33  1.48   1.47
lipa60b   1.79   0.00  19.55  0.71    0.00  19.48  1.21    0.00  19.43  2.03
tai80a    1.81   0.00  4.25   0.45    0.00  4.23   0.77    0.00  4.22   1.32
had18     1.97   1.00  -0.01  0.12    1.00  0.01   0.24    1.00  0.01   0.54
tai64c    2.17   0.97  -0.02  0.27    0.99  -0.01  0.49    1.00  -0.01  0.93
had12     2.33   1.00  0.00   0.04    1.00  0.00   0.12    1.00  0.00   0.35
had20     2.35   1.00  0.00   0.10    1.00  0.01   0.24    1.00  0.00   0.58
tai60a    3.26   0.01  3.94   0.99    0.08  3.84   1.69    0.20  3.77   2.83
sko81     3.52   0.03  2.64   0.89    0.13  2.57   1.52    0.35  2.52   2.59
sko100c   3.81   0.00  3.04   0.94    0.08  2.97   1.61    0.32  2.93   2.74
sko90     3.96   0.00  2.80   0.90    0.08  2.78   1.53    0.34  2.78   2.67
els19     5.18   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.01
lipa20a   7.89   0.88  -0.35  3.75    0.88  -0.46  6.43    0.90  -0.32  10.90
nug18     8.71   0.86  0.00   1.75    0.96  -0.11  3.02    1.00  -0.18  5.22
nug30     8.85   0.72  0.47   1.51    0.91  0.41   2.60    0.99  0.37   4.60
nug27     9.51   0.88  -0.16  2.22    0.96  -0.23  3.80    1.00  -0.22  6.59
nug24     10.22  0.93  -0.14  2.04    0.98  -0.17  3.52    1.00  -0.17  6.25
nug14     10.48  0.99  -0.10  1.31    1.00  -0.03  2.41    1.00  0.00   4.77
kra30b    11.7   0.84  0.22   2.33    0.95  0.11   4.01    0.99  0.07   7.00
kra32     11.73  0.53  1.21   2.74    0.76  0.96   4.71    0.95  0.77   8.08
nug16b    12.46  0.99  -0.06  1.65    0.99  0.15   3.41    1.00  0.20   7.52
kra30a    12.77  0.40  1.28   3.26    0.70  0.95   5.62    0.95  0.68   9.55
tai80b    14.51  0.43  2.60   3.23    0.66  2.38   5.55    0.91  2.21   9.61
chr18b    14.54  1.00  -0.03  0.73    1.00  0.05   1.56    1.00  0.02   3.64
scr20     14.85  0.99  -0.17  1.93    1.00  -0.03  3.52    1.00  0.05   6.92
tai35b    20.02  0.94  0.18   1.73    0.98  0.23   3.05    1.00  0.30   5.72
ste36a    25.24  0.81  1.19   4.81    0.94  0.98   8.31    1.00  0.83   14.68
tai40b    28.25  0.83  0.33   6.37    0.93  -0.09  10.85   0.98  -0.26  18.47
chr12a    31.36  1.00  -0.01  1.23    1.00  0.06   3.13    1.00  0.00   8.07
chr20b    36.85  0.36  6.58   6.92    0.72  6.52   12.03   0.95  6.57   21.72
lipa40b   47.08  0.74  -0.79  28.43   0.79  -3.25  48.22   0.82  -3.32  78.48
chr20a    60.44  0.61  3.76   15.68   0.80  2.09   26.80   0.96  0.90   45.06
chr15a    61.88  1.00  -0.09  2.18    1.00  0.14   4.61    1.00  0.05   10.83
chr25a    79.02  0.68  8.01   18.56   0.86  7.16   31.68   0.97  7.00   54.64
Table A3: Relative bias, coverage rate and relative length of 4th JK and Weibull estimator
when 𝑛 = 10.
CV stands for coverage rate, RB stands for relative bias in percentage, RL stands for
relative length in percentage
                 4th JK               Weibull
Problem   SR     CV    RB     RL      CV    RB     RL
bur26d    0.28   1.00  0.00   0.08    0.99  0.00   0.02
lipa70a   0.3    0.01  0.94   0.42    0.00  0.97   0.08
had16     0.41   1.00  0.00   0.34    1.00  0.00   0.04
esc16c    0.44   1.00  0.00   0.01    1.00  0.00   0.00
bur26c    0.49   1.00  0.00   0.11    1.00  0.00   0.02
lipa50a   0.56   0.17  1.11   0.74    0.01  1.17   0.15
wil100    1.56   0.60  1.45   2.45    0.04  1.72   0.48
lipa60b   1.79   0.00  19.38  3.42    0.00  19.72  0.19
tai80a    1.81   0.05  4.23   2.33    0.00  4.35   -0.15
had18     1.97   1.00  -0.01  1.19    1.00  0.00   0.20
tai64c    2.17   1.00  0.01   1.83    1.00  0.02   0.37
had12     2.33   1.00  0.02   0.94    1.00  0.00   0.15
had20     2.35   1.00  -0.01  1.36    1.00  0.00   0.26
tai60a    3.26   0.46  3.72   4.75    0.02  4.17   0.84
sko81     3.52   0.70  2.47   4.48    0.04  2.84   0.98
sko100c   3.81   0.68  2.91   4.73    0.01  3.26   0.91
sko90     3.96   0.80  2.80   4.74    0.02  2.99   0.84
els19     5.18   1.00  0.00   0.05    1.00  0.00   0.00
lipa20a   7.89   0.95  -0.12  18.70   0.54  0.40   1.59
nug18     8.71   1.00  -0.21  9.18    0.93  0.40   1.73
nug30     8.85   1.00  0.39   8.31    0.85  0.78   1.70
nug27     9.51   1.00  -0.18  11.59   0.90  0.33   2.28
nug24     10.22  1.00  -0.17  11.34   0.97  0.24   2.19
nug14     10.48  1.00  0.03   9.54    1.00  0.08   1.76
kra30b    11.7   1.00  0.09   12.43   0.92  0.71   2.45
kra32     11.73  1.00  0.62   14.00   0.59  1.79   2.97
nug16b    12.46  1.00  0.18   16.11   0.96  0.01   1.89
kra30a    12.77  0.99  0.46   16.34   0.41  2.05   3.83
tai80b    14.51  0.99  2.06   16.89   0.39  3.31   3.40
chr18b    14.54  1.00  0.03   8.22    1.00  0.00   1.64
scr20     14.85  1.00  0.11   13.73   0.99  0.06   2.42
tai35b    20.02  1.00  0.39   11.11   0.98  0.47   2.33
ste36a    25.24  1.00  0.73   26.50   0.88  2.20   4.75
tai40b    28.25  1.00  -0.25  31.94   0.90  1.68   11.50
chr12a    31.36  1.00  0.14   19.43   1.00  0.00   3.43
chr20b    36.85  1.00  6.77   40.20   0.50  7.83   6.78
lipa40b   47.08  0.84  -1.87  127.79  0.33  5.04   -0.74
chr20a    60.44  1.00  0.18   76.18   0.63  7.20   15.66
chr15a    61.88  1.00  -0.01  24.74   1.00  0.03   4.81
chr25a    79.02  1.00  7.25   96.12   0.75  11.82  19.80
Table A4: Relative bias, coverage rate and relative length of 1st JK, 2nd JK and 3rd JK
estimators when 𝑛 = 100.
CV stands for coverage rate, RB stands for relative bias in percentage, RL stands for
relative length in percentage

                 1st JK               2nd JK               3rd JK
Problem   SR     CV    RB     RL      CV    RB     RL      CV    RB     RL
bur26d    0.31   0.00  0.94   0.03    0.00  0.94   0.05    0.00  0.94   0.09
lipa70a   0.43   0.98  0.00   0.00    0.98  0.00   0.00    0.99  0.00   0.00
had16     0.51   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
esc16c    0.56   0.00  1.09   0.10    0.00  1.09   0.18    0.00  1.09   0.31
bur26c    0.61   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
lipa50a   0.97   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
wil100    1.6    0.00  1.45   0.33    0.02  1.45   0.55    0.12  1.45   0.95
lipa60b   1.84   0.00  4.22   0.11    0.00  4.22   0.19    0.00  4.22   0.36
tai80a    1.85   0.00  19.35  0.49    0.00  19.37  0.85    0.00  19.39  1.54
had18     2.16   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
tai64c    2.22   1.00  0.00   0.00    1.00  0.00   0.01    1.00  0.00   0.02
had12     2.36   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
had20     2.47   1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
tai60a    3.35   0.00  3.55   0.83    0.00  3.51   1.41    0.14  3.49   2.39
sko81     3.63   0.00  2.34   0.63    0.02  2.31   1.07    0.38  2.30   1.81
sko100c   3.92   0.00  2.85   0.44    0.00  2.84   0.74    0.01  2.85   1.29
sko90     3.97   0.00  2.70   0.32    0.00  2.71   0.55    0.00  2.72   0.97
els19     8.31   1.00  0.00   0.02    1.00  0.00   0.07    1.00  0.00   0.22
lipa20a   8.75   0.99  -0.02  0.37    0.99  0.01   0.72    1.00  0.02   1.47
nug18     9.02   0.44  0.35   0.37    0.76  0.35   0.68    0.97  0.35   1.28
nug30     9.76   1.00  0.00   0.05    1.00  0.00   0.13    1.00  0.00   0.33
nug27     10.46  1.00  0.00   0.07    1.00  0.00   0.14    1.00  0.00   0.33
nug24     10.62  1.00  0.00   0.01    1.00  0.00   0.03    1.00  0.00   0.08
nug14     12.13  0.81  -0.34  3.25    0.89  -0.54  5.46    0.95  -0.54  8.99
kra30b    12.15  0.80  0.14   0.40    0.98  0.17   0.72    1.00  0.19   1.37
kra32     12.58  1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
nug16b    13.12  0.85  -0.23  3.86    0.85  -0.16  6.70    0.86  0.11   11.76
kra30a    14.35  1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
tai80b    14.89  0.40  1.83   1.84    0.65  1.76   3.17    0.86  1.76   5.47
chr18b    15.14  1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
scr20     15.17  1.00  0.00   0.01    1.00  0.00   0.02    1.00  0.00   0.04
tai35b    20.9   0.20  0.22   0.17    0.56  0.21   0.29    0.91  0.21   0.54
ste36a    25.61  0.74  0.74   1.62    0.87  0.79   2.82    0.97  0.87   5.05
tai40b    29.2   1.00  -0.01  1.27    1.00  0.07   2.36    1.00  0.11   4.74
chr12a    31.36  1.00  -0.01  1.23    1.00  0.06   3.13    1.00  0.00   8.07
chr20b    37.55  0.00  6.36   1.11    0.00  6.41   1.94    0.07  6.47   3.60
lipa40b   50.75  1.00  -0.04  1.71    1.00  0.08   3.73    1.00  0.12   8.61
chr20a    64.2   0.96  -0.52  8.63    0.99  -0.49  14.91   1.00  -0.32  26.65
chr15a    64.72  1.00  0.00   0.00    1.00  0.00   0.00    1.00  0.00   0.00
chr25a    81.99  0.54  5.17   7.38    0.77  4.93   12.49   0.91  4.99   21.35
Table A5: Relative bias, coverage rate and relative length of 4th JK and Weibull
estimators when 𝑛 = 100.
CV stands for coverage rate, RB stands for relative bias in percentage, RL stands for
relative length in percentage

                 4th JK               Weibull
Problem   SR     CV    RB     RL      CV    RB     RL
bur26d    0.31   0.00  0.94   0.16    0.01  0.97   0.08
lipa70a   0.43   1.00  0.00   0.01    0.99  0.00   0.01
had16     0.51   1.00  0.00   0.00    1.00  0.00   0.05
esc16c    0.56   0.00  1.10   0.55    0.01  1.17   0.15
bur26c    0.61   1.00  0.00   0.00    1.00  0.00   0.02
lipa50a   0.97   1.00  0.00   0.00    1.00  0.00   0.00
wil100    1.6    0.56  1.46   1.68    0.05  1.72   0.31
lipa60b   1.84   0.00  4.22   0.69    0.00  4.35   0.31
tai80a    1.85   0.00  19.38  2.85    0.00  19.72  0.56
had18     2.16   1.00  0.00   0.00    1.00  0.00   0.21
tai64c    2.22   1.00  0.00   0.06    0.99  0.02   0.37
had12     2.36   1.00  0.00   0.00    1.00  0.00   0.27
had20     2.47   1.00  0.00   0.00    1.00  0.00   0.14
tai60a    3.35   0.62  3.48   4.10    0.03  4.18   0.86
sko81     3.63   0.55  2.30   3.09    0.03  2.83   0.82
sko100c   3.92   0.23  2.88   2.27    0.02  3.24   0.86
sko90     3.97   0.09  2.74   1.75    0.01  2.99   0.89
els19     8.31   1.00  0.00   0.63    0.53  0.34   1.77
lipa20a   8.75   1.00  0.01   3.00    0.92  0.41   1.71
nug18     9.02   0.99  0.36   2.46    0.85  0.78   1.66
nug30     9.76   1.00  0.00   0.82    0.92  0.28   2.39
nug27     10.46  1.00  0.00   0.74    0.97  0.27   2.10
nug24     10.62  1.00  0.00   0.22    1.00  0.08   1.76
nug14     12.13  0.96  -0.46  14.96   0.56  1.80   2.64
kra30b    12.15  1.00  0.21   2.65    0.90  0.72   2.35
kra32     12.58  1.00  0.00   0.00    0.97  0.01   1.96
nug16b    13.12  0.92  0.37   20.99   0.41  2.09   3.56
kra30a    14.35  1.00  0.00   0.00    1.00  0.00   0.01
tai80b    14.89  0.97  1.83   9.56    0.38  3.35   4.16
chr18b    15.14  1.00  0.00   0.00    1.00  0.00   1.63
scr20     15.17  1.00  0.00   0.10    0.99  0.07   2.47
tai35b    20.9   0.99  0.21   1.02    0.98  0.47   2.27
ste36a    25.61  1.00  0.92   9.26    0.89  2.24   5.51
tai40b    29.2   1.00  0.10   9.61    0.90  1.69   6.23
chr12a    31.36  1.00  0.14   19.43   1.00  0.00   3.25
chr20b    37.55  0.45  6.53   6.86    0.48  7.98   8.14
lipa40b   50.75  1.00  0.25   19.53   0.33  5.51   5.50
chr20a    64.2   1.00  -0.14  48.59   0.65  7.33   9.11
chr15a    64.72  1.00  0.00   0.00    1.00  0.04   5.05
chr25a    81.99  0.98  5.16   37.22   0.72  12.12  17.97
PAPER IV
This paper has been accepted by the International Journal on Web Services Research; we
acknowledge the journal for this publication.
On transforming a road network database to a graph for
localization purpose
Authors: Xiangli Meng and Pascal Rebreyend1
Abstract: The problem of finding the best facility locations requires a complete
and accurate road network with the corresponding population data in a
specific area. However, the data obtained from road network databases
usually do not fit this usage directly. In this paper we propose a procedure
for converting a road network database into a road graph which can be used
for localization problems. Several challenging problems arise in the
transformation process which are also commonly met in other databases, and
procedures for dealing with those challenges are proposed. The data come
from the National road data base in Sweden. The graph derived is cleaned and
reduced to a level suitable for localization problems. The residential
points are also processed in order to match the graph. The reduction of the
graph is done while maintaining the accuracy of the distance measures in the
network.
Key words: road network, graph, population, GIS.
1 Corresponding author. E-mail: prb@du.se. Phone: +46-23-778921.
1. Introduction
Consider the common location-allocation model (the p-median problem), which
allocates P facilities to a population geographically distributed over Q
demand points such that the population's average or total distance to its
nearest service facility is minimized. Solving such problems is based on
computing the distance between candidate locations and all, or a subset, of
the points representing where people live. The candidate locations should be
connected in a graph, and the corresponding distance matrix should be
suitable for further calculations.
A popular approach is to avoid the trouble of transforming the original data
into a connected road network and instead use the Euclidian distance between
the candidate locations and the residential points. Thus, all points are
assumed to be connected pairwise via straight lines. However, the Euclidian
distance may be inaccurate, especially if two points are not directly
connected and require detours, in which case the Euclidian distance
underestimates the real distance. The quality of the distance measure thus
deteriorates, particularly in an area with many natural barriers.
Consequently, using the Euclidian distance may lead to poor locations from
the model.
Han et al. (2013) show that using the road network to compute distances
instead gives a good treatment of the problem. The road network distance
gives an accurate estimate of the distance between two points, and it
reflects the road connections and natural barriers. In their analysis,
although the complexity of the problem leads to sub-optimal solutions, the
road network distance solutions still outperform the Euclidian ones.
However, even though the use of road network distances is essential in
solving the models, the original network data sets researchers obtain
usually do not fit and cannot be used directly. Those data sets need to be
cleaned and transformed into a connected road network graph before being
used for finding the best locations of facilities. Many challenging problems
can pop up in this process, such as incomplete information or isolated
subgraphs. Dealing with these problems inappropriately would produce an
inaccurate graph and then lead to bad solutions. For data sets at a large
scale, proper treatment of the problems in the data set and in the
transformation process is even more crucial.
Despite the nontrivial role an accurate road network graph plays in solving
location models, few research papers give proper illustrations of how to
deal with the troubles in the process of deriving it. Especially for
large-scale data for the p-median models, there is no research investigating
how to deal with the challenges in the transformation process, which is the
focus of this paper. In our case, we are interested in locating public
facilities, such as hospitals and public services, at the scale of the
country of Sweden. The objective function we want to minimize is the average
distance between residents' homes and the closest facilities nationwide. The
scale of the problem is very large, making the quality of the graph and the
distance matrix vital.
The original data we use come from the National road data base (NVDB),
provided by the National Swedish Road Agency (Trafikverket). It is a very
detailed data set consisting of over 34 million data points with auxiliary
information such as speed limits and directions. Many commonly met
challenges appear in the transformation process, and proper methods are
needed to handle them before obtaining the graph for solving the p-median
models. The main challenges we encountered are as follows.
a) Filling in the missing crossing and connectivity information properly.
b) Reducing a very large graph to a manageable scale while keeping the
distance information as accurate as possible.
c) Calculating the distances between different nodes in an affordable time.
The three problems above are commonly met in deriving large graphs for
location-allocation models. Inappropriate filling of missing information
results in an incorrect graph, giving wrong distances. In our data set,
crossing and connection information are missing and need to be filled in
properly: a wrongly filled crossing would connect two separate roads
'together', while adding connections to isolated subgraphs incorrectly
usually underestimates distances. Reducing the large graph is necessary
since our data set is at the national level, so the scale itself causes
trouble for further analysis. Because p-median problems call for the
distance between any pair of nodes, the distance matrix corresponding to the
graph has very large dimensions, and it would be beyond the ability of
computers to process it with heuristic algorithms or other methods. Thus it
is crucial, and tricky, to reduce the data to an affordable level without
losing too much accuracy. The commonly used Dijkstra algorithm can give the
distance between any pair of nodes, but it consumes a very long time, making
it inconvenient for researchers to adjust and difficult to derive a large
distance matrix. The algorithm therefore needs to be refined to obtain the
distance matrix in an affordable time.
Sweden is about 450,000 km2 in area with a population of 10 million; it is
fairly large in Europe and the world. The Swedish road network is quite
advanced and connects almost all parts of the country. Deriving a graph at
such a large scale is quite innovative and can provide helpful techniques to
researchers, and the methods we propose for dealing with the challenges are
indicative for data at similar scales. We have also tried our approach on
different regions of Sweden separately, and it worked well, which shows that
our methods adapt well to data at smaller scales. The region sizes are
comparable to many other areas: the smallest regions in Sweden are of the
size of Luxembourg, the middle-sized ones are comparable with Belgium and
the German states, and the largest has a size similar to Hungary or
Portugal. This makes our approach widely applicable and helpful when dealing
with data sets in many other countries and states.
The method we propose here is intended for the p-median models. It
transforms a large, detailed network data set into a connected graph and
provides a detailed distance matrix which can be used for finding the best
locations in the models. It is also helpful in deriving usable graphs for
other location-allocation models, since these use the same type of connected
graph and distance matrix in most cases, such as warehouse location problems
(Beasley, 1993) and maximum cover problems (Kariv & Hakimi, 1979b). Our
method has wider application also in circumstances other than location
models, for example the classical operational research problems of the
travelling salesman (Reinelt, 1991), the quadratic assignment problem
(Burkard et al., 1991), and the vehicle routing problem (Laporte, 1992). The
scales of those problems are becoming larger and larger and correspondingly
lead to large connected graphs and distance matrices, which makes our
techniques beneficial.
The outline of the paper follows our transformation process. First we
introduce the basic information of the data set. Then we give the details of
filling in the missing information, e.g., missing crossings and connectivity
information. After obtaining a detailed network, we incorporate the
population data into the road network. Then we reduce the size of the graph
to a manageable scale. Finally we derive the distance matrix of the graph
with both the Dijkstra algorithm and an improved version of it. The paper is
organized as follows. Section 2 introduces the road data structure and our
pre-processing procedure; Section 3 introduces our process for dealing with
population data; Section 4 presents the graph reduction; Section 5 presents
the experiments with distance matrices; Section 6 gives the conclusions.
2. Road data
The road network data set we used has been provided by the National Swedish
Road Agency (Trafikverket). The data come from the National road data base
(NVDB), which is operated by the Swedish Transport Agency, the Swedish
Transport Administration and a few other departments. The data set has a few
good characteristics. First, it is a very detailed data set that includes
almost all the roads in the nation's network. Second, the data set reflects
the true road situation with respect to natural barriers and detours; thus
the distances obtained from the data set are quite accurate. Third, it
includes much auxiliary information, such as the speed limits and altitude
of the roads, making it possible to get the correct road lengths and the
travelling time required. Fourth, the data set is updated frequently. The
road network data for the whole country, together with its previous
versions, have been used for other research as well as business purposes,
such as road maintenance, traffic management and navigation; see Lundgren
(2000) for an example. Since location problems have different requirements
on graphs, the previous transformation techniques for the NVDB data do not
fit our framework. Few studies in location problems use the road network
distance at this large scale. Although not nationwide, some studies use part
of the data for the p-median and gravity p-median problems; see Carling et
al. (2012) for an example. However, the whole data set has many more
problems than data covering only a single smaller area, so applying their
treatment directly is not sufficient for our purpose.
2.1 Terminologies and basic information of the raw data
First we introduce the terminologies used throughout the paper.
Edge: a small part of a road having the same parameters, such as the same
road ID, the same speed limit, etc. It is represented by one or several
connected polylines.
Node: a point on an edge that indicates a common vertex of two or more
edges, or indicates the graphical character of the edge shape. Each node has
a 3-dimensional coordinate indicating its location and altitude.
Residential point: the centre of a square of habitation. It represents the
residents living in the square and is the demand point in the p-median
model.
Candidate location: a point representing a possible location for setting up
a facility. It is the supply point in the p-median model.
The raw data set we get is a shapefile describing the road network.
Shapefiles are used in Geographical Information Systems (GIS) to represent
spatial data. The basic entities in the shapefile are edges represented by
polylines. The first task is to read all these edges and find the
connections between them. For each edge, the direction and the speed limit
are provided. The raw data consist of 34,180,478 nodes. Among them, 19 are
not taken into account since they are outside the country. After removing
duplicates, we end up with 28,061,739 different nodes and 31,406,511 edges.
The average length of an edge is 21 meters, with a maximum of 3,602 meters;
26,206 edges are longer than 200 meters and only 844 of them are longer than
500 meters. The high number of short edges is due to the fact that the
database is used to represent the geographical information associated with a
road, and short edges aim to reflect the road directions precisely;
therefore, long edges represent straight road parts. Speed limits are kept
in this step for estimating travelling time later. Table 1 provides the
length of roads under different speed limits. As we can see, more than 80%
of the Swedish road network has a speed limit of 70 km per hour.
70 km per hour.
Speed limit
(km/hour)
60
Length (km)
(km/h)
20
(total:183
679,808)
24
30
18,958
40
4,535
50
51,104
(km/h)
70
(total:1,726
679,808)
547,866
80
23,962
90
19,581
100
8,137
110
3,977
120
753
2.2 Environmental setting.
In our work, all programming has
been done in C, using gcc in a
Linux Environment (64 bits) on a
Desktop computer with 32GB of
ram with an Intel I7-3770 CPU.
Parsing the shapefile has been
done by using the library called
shapelib. The code is available
upon request. During the process,
the UNIX process needs around
6GB of memory to run with the
whole Swedish road network.
Thus 32GB ram is not mandatory
here. The computer requirement
is low in our experiment. The
process could also be processed
in a lower speed CPU with
smaller ram and produce results
in acceptable time. A personal
computer even a laptop is
sufficient.
The
software
environment is also quite
standard regarding C language
under Linux Environment. These
Table 1. Lengths of roads on
different speed limits.
Speed limit
(km/hour)
5
Length (km)
7
settings
generalize
adaptability of our methods.
the
2.3 Data structure
A key to an efficient program in C is to have good data structures which
provide a good compromise between speed and storage. In our case, all the
processes, from reading the road network to finalizing the reduced graph,
need to store information about edges and their connections. Since our main
goal is to represent a graph, we have created two main structure types in C,
one to represent an edge and the other to represent a node. The node
structure contains information such as the coordinates and the ID of its
edge(s). The edge structure contains its length and the converted travelling
time.
But we also need a data structure by which we can quickly find all nodes in
a small given area. This is needed already from the start, when we create
the graph from the road network. In the process, we start with an empty
graph and add nodes into it; for each node we want to add, we need to know
whether a node already exists at, or close to, the given location. For this
purpose, we have decided to use a grid to be able to quickly find all nodes
in a given part of the country. A 2-D grid is applied on the map. The grid
consists of cells of size 500 meters by 500 meters, and each cell contains
the list of nodes belonging to it. Thus, we have 1,295 cells along the
X-coordinate and 3,043 along the Y-coordinate. All the cells have exactly
the same rectangular shape.
2.4 Filling missing crossings
The first challenge we encounter is the lack of direct crossing information.
This is a problem not only in NVDB but also in many other databases. The
missing crossings are due to a shortcoming of this kind of geographical
storage method: it identifies roads by the IDs of their nodes, and if no
node happens to lie on a crossing, no information about the crossing is
stored. In our case, we have to identify the crossings ourselves. Obviously,
two different edges going through exactly the same node make a crossing at
this node. But only a few real crossings can be detected by this approach,
because in most cases two points from different edges differ by a few
centimetres. The chosen approach is, for each node, to round the coordinates
to the nearest multiple of 2 meters (for all three axes). The choice of 2
meters comes from the general size of a one-way street as well as the height
needed for a bridge or tunnel: a threshold larger than 2 meters would
connect separate edges together, and a threshold smaller than 2 meters would
fail to identify many real crossings.
After these steps, we end up with 27,922,796 different nodes and a graph,
since we are able to identify crossings. The next step is then to find the
different strongly connected components (we have directional edges) in the
graph. Finding strongly connected components is a well-known graph problem;
using Tarjan's algorithm (see Tarjan, 1972), we have identified 3,057
components. The mainland is clearly recognized as the component with the
highest number of nodes. 559 components have more than 10 inhabitants, 171
more than 100, and 111 more than 200 persons.
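A minimal C sketch of the rounding step (ours; the struct and key layout are illustrative assumptions, not the paper's actual code) could look like this:

```c
#include <stdint.h>
#include <math.h>

/* A node position in meters (x, y, z = altitude). */
typedef struct { double x, y, z; } position;

/* Round one coordinate to the nearest multiple of 2 meters. */
static double round2m(double v) {
    return 2.0 * round(v / 2.0);
}

/* Build a key from the rounded coordinates: two nodes from different edges
   that share the same key are treated as the same crossing node.
   The bit widths chosen here are assumptions about the coordinate ranges. */
uint64_t crossing_key(position p) {
    int64_t x = (int64_t)(round2m(p.x) / 2.0);
    int64_t y = (int64_t)(round2m(p.y) / 2.0);
    int64_t z = (int64_t)(round2m(p.z) / 2.0);
    return ((uint64_t)(x & 0x1FFFFF) << 43) |
           ((uint64_t)(y & 0x1FFFFF) << 22) |
           ((uint64_t)(z & 0x3FFFFF));
}
```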
2.5 Filling missing connectivity
The graph we have now represents the road network and its structure well.
But when looking at the whole country we face another challenge: not all
nodes are connected to each other. Although most parts of the graph are
connected, some parts are disjoint from the rest. Those parts cannot be
neglected: otherwise some facilities of the location models would be trapped
in these isolated parts, unable to serve residents outside them even when
they are closer, while the residents inside such a part could only go to the
facility within it, no matter how far away it is. The solutions would then
be far from the optimal ones. This is again a common problem in network data
sets besides the Swedish one, since many data sets need to be updated
according to road changes, and it is easy to lose some information in this
process. The difference is that data sets may vary in the percentage of
missing connections, depending on the quality of the data set.
Some components are disconnected from the main part for two main reasons.
The first is that Sweden has islands without any bridge to the mainland, for
example the second biggest component in our case, which represents the
island of Gotland and its 236,235 inhabitants. This is also the case for a
lot of small islands, both in the archipelago and along the coasts, as well
as small ones in lakes. In most cases, communications to the mainland are by
ferry lines. It is of course possible to use the Euclidian distance directly
to fill in the connection; however, that would largely underestimate
distances and travel times. Instead, we use the ferry line or other
transportation time and convert it to a distance.
In order to take into account the people living on these "islands", we add
virtual edges to the graph representing ferry lines or other means of
transportation. A virtual edge is added between the closest pair of nodes
between the mainland and the "island". The time spent on that edge is the
actual transportation time of the ferry line or other transportation, and
the distance is calculated correspondingly. We can note that in the case of
Sweden, most real islands have direct transportation to the mainland. For
the isolated parts on the mainland, we simply add straight edges between the
two closest nodes. We can also mention that those parts of the network will
not affect the results much, since the virtual edges added are short (often
less than 100 m).
The other reason for missing connections is wrong or inaccurate values in
the database, especially regarding the altitude. This can be detected when
the distance is shorter than a threshold or when the connection between the
components is only possible in one direction. We choose here to have a
generic solution instead of using a threshold, because previous research on
this data set gives us no information about suitable threshold settings, and
an inappropriate threshold would lead to over- or under-detection.
3. Population
3.1 Population data
Another important factor in our location models is the population. The
population data are required to be geo-coded so that we are able to identify
the inhabitants in each grid cell and find the distance from the inhabitants
to the facilities. We have obtained census population data from Statistics
Sweden (SCB) for the year 2012 for people between 20 and 64 years old. The
data are well organized and do not need pre-processing.
In the population data, the residents of Sweden are represented by a set of
residential points. Each point represents the number of people living in a
square with the residential point as its centre. The squares are usually
500 m by 500 m, but variations occur between big cities and sparsely
populated areas. The residential points are derived by aggregating all the
residents in the same square. This means that all persons in the same square
are assumed to have the centre of the square as their starting point, and
their distances to the same facility are the same. In total we have 188,325
residential points representing 5,411,373 persons; each residential point
thus represents on average almost 29 persons, and the most populated
residential point represents 2,302 persons.
It should be mentioned that we use the population to represent the demand
for the facilities in the location-allocation models, so we use the original
data directly in our graph. This representation makes sense when it comes to
locating public facilities and services, like retail centres and hospitals.
But when the facilities are not targeted at the general public, the
population will not fit in this data frame, and the demand should instead be
based on the target group. Moreover, even for locating public facilities,
the population is sometimes not exactly equal to the demand, because the
demand for public facilities may vary among different types of people, and
different weights may be given to different residents. For example, when
locating training centres we may put a smaller weight on babies, but when
locating child clinics the babies should have the highest weight. The
specific methods for deriving the demand in location models usually require
auxiliary information about the population, such as age and income. Here we
do not address this problem and only propose how to connect the population
data points to the graph; location problems with special requirements should
be handled in a similar way.
3.2 Matching residential points with nodes in the network
Each residential point needs to be connected to the closest road node in
order to get its distance to the candidate locations. We look for the
shortest distance between the residential point and the closest point of a
road segment. We have 497 points, representing 1,424 persons, that are more
than one kilometre from the closest road node (respectively 6,031 points and
33,779 persons for more than 500 meters). The approximations made here are
due to inaccuracy in the data provided. Since most of the residential points
represent the aggregated population of a 500 by 500 meter square, we may
have an error of at most 500·√2⁄2, which is about 353 meters (Euclidian
distance), between where a person actually lives and the residential point
used to represent this person. This is a basic assumption of the p-median
models and leads to errors in the graph.
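A minimal C sketch of the matching step (ours; names, the 3x3 search window, and the assumption that grid coordinates start at the origin are illustrative, not the paper's actual code) shows how the 500 m grid can be used to find the nearest road node for a residential point:

```c
#include <float.h>

typedef struct { double x, y; } point;

/* One grid cell holds the indices of the road nodes that fall inside it. */
typedef struct { int *node_idx; int count; } cell;

/* Nearest road node to a residential point p, searching the 3x3 block of
   cells around p; this is enough when a node exists within one cell size,
   otherwise the search radius would have to be widened. */
int nearest_node(point p, const point *nodes, const cell *grid,
                 int ncols, int nrows, double cell_size) {
    int cx = (int)(p.x / cell_size), cy = (int)(p.y / cell_size);
    int best = -1;
    double best_d2 = DBL_MAX;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            int gx = cx + dx, gy = cy + dy;
            if (gx < 0 || gy < 0 || gx >= ncols || gy >= nrows) continue;
            const cell *c = &grid[gy * ncols + gx];
            for (int k = 0; k < c->count; k++) {
                int i = c->node_idx[k];
                double ddx = nodes[i].x - p.x, ddy = nodes[i].y - p.y;
                double d2 = ddx * ddx + ddy * ddy;
                if (d2 < best_d2) { best_d2 = d2; best = i; }
            }
        }
    return best;   /* -1 means no node was found in the neighbouring cells */
}
```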
Current and Schilling (1987) point out that there are three types of error
source, A, B and C, in this kind of aggregation process. Source A comes from
the difference in distance between a real population location inside a grid
square and a facility outside that square when we "relocate" the point to
the centre of the square, i.e., the persons in the square do not actually
live at its centre. Source B is the distance between the real population
location and a facility located in the same square, which is treated as 0 in
the modelling process although it is not 0 in reality. Source C arises for
residential points near the border of a square, which may be assigned to a
more distant facility because the centre of the square happens to be closer
to that facility. With the grid defined above, the maximum error for a
person from Sources A and B is at most 353 meters. That might matter when
locating facilities in a small but highly populated area, but it is a small
distance error when locating facilities at the national level. Carling et
al. (2012) report that the mean distance people travel to hospitals in the
Dalarna part of Sweden is around 40 kilometres. Thus, in our case, the
Source A and B errors will not have a large influence on the quality of our
graph and will not affect further research results. The only exception is if
researchers locate a large number of facilities in the country, so that the
resulting average distance to a facility is not large enough to neglect an
error as large as 353 meters; then some correction method should be applied.
For details of such methods, see Current and Schilling (1987). As to Source
C errors, they will mostly occur in squares that have similar distances to
two or more facilities, in which case the difference between the two
facilities should be within 353 meters. Considered at the level of the whole
country, this should not influence the results significantly.
4 Graph reduction
At this point, we have a strongly connected graph representing our road
network. However, the graph is substantially large due to the scale of our
problem and the detailed information in the network. The problem with this
large graph is that it extensively increases the work in the later
calculations for finding the best facility locations; the time and
computational effort consumed would be enormous and beyond what is
acceptable. This problem exists for almost all large data sets, including
the Swedish one. However, reducing the graph to a smaller scale loses
information, so the challenge is to keep as much of the useful information
as possible during the graph reduction.
Since the goal of our work is to minimize distances between where people
live and where facilities are located, we only need to keep the information
needed to compute distances between people's residences and candidate
locations. According to Hakimi (1964), in the case of the p-median problem,
locating the facilities at the demand points gives the best solutions;
locating facilities outside these locations, somewhere along a segment, only
provides sub-optimal solutions. Consequently, our graph contains information
which is not used to compute these distances, and we can therefore remove
it. In general, to compute distances in the network, we will use the
Dijkstra algorithm and a modified version of it to compute the distance from
a point (typically a residential point) to all candidate locations.
In this part, we explain which information can be removed, how to remove it,
and the benefits.
4.1 Removing dead ends
In our graph, we have nodes of degree 1, i.e. nodes which are connected to
only one neighbouring node. Such nodes can be called dead-end nodes. If such
a node is associated with some residents, we should obviously keep it in our
system. If not, we can remove the node from our graph, since it will never
be used to compute the distance matrix. Obviously, the neighbour of a
removed node can end up being a new dead end. Therefore, our algorithm
analyses all nodes, and if a node is a dead end, we recursively analyse and
remove the new dead ends until the graph becomes stable.
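A minimal C sketch of this pruning (ours; the node record is an illustrative simplification of the paper's structures) is given below.

```c
/* Adjacency via a simple neighbour list per node (illustrative only). */
typedef struct {
    int *adj;          /* indices of neighbouring nodes     */
    int  deg;          /* current number of live neighbours */
    int  has_residents;
    int  removed;
} gnode;

static void detach(gnode *nodes, int u, int v) {
    /* remove u from v's adjacency list */
    for (int k = 0; k < nodes[v].deg; k++)
        if (nodes[v].adj[k] == u) {
            nodes[v].adj[k] = nodes[v].adj[--nodes[v].deg];
            return;
        }
}

/* Iteratively remove dead ends (degree-1 nodes without residents)
   until no further change occurs, i.e. the graph is stable. */
void prune_dead_ends(gnode *nodes, int n) {
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int u = 0; u < n; u++) {
            if (!nodes[u].removed && nodes[u].deg == 1 && !nodes[u].has_residents) {
                int v = nodes[u].adj[0];
                detach(nodes, u, v);   /* v may now become a new dead end */
                nodes[u].removed = 1;
                nodes[u].deg = 0;
                changed = 1;
            }
        }
    }
}
```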
thus are denoted as degree 2.
Our algorithm works as follow:
For each node, we check if this
node
represents
some
population's residence. If not, we
check if this node is not a
crossing by checking if this node
connect exactly 2 different edges.
If so, we will analyse the
direction of edges and remove
them to disconnect this node
from the graph. Then, we add in
our graph a new edge between
the two neighbours. The distance
and traveling time of this new
edge is the corresponding sum of
the two removed edges. We
repeat this process until not
further reduction is possible.
By applying this, we can remove
9,873,764 nodes. They account
for more than 35% of the whole.
This is a huge reduction to our
graph. Also by doing this we
barely lose any information in the
population or road networks. In
practice, these deleted nodes
mainly represent small roads on
the country side, where people
move away but the road
information is still stored in the
data base.
The speed limit information here
are affected after this procedure
since the two aggregated edges
may have different speed limits.
Therefore we drop them and keep
the travelling time information.
At the stage, we can remove
10,347,191 nodes.
4.2 Removing useless node of
degree 2.
Many nodes in the graph do not
represent the structure of the road
but indicate the road shape
(curves, hills, etc.). We are not
interested in such information but
only the accurate distance by the
networks between nodes as well
as the traveling time. Thus these
nodes can be reduced and we can
get a smaller graph without
losing any accuracy in distance
and time. Nodes like this usually
are connected to 2 neighbours,
By applying these two graph reductions we remove 20,220,955 nodes (72% of the nodes), and the graph now has only 7,701,841 nodes, while all useful information regarding travelling distance and time is kept.
5. Distance matrix and optimized Dijkstra method.
At this stage we derive the distance matrix using the graph obtained above. The first experiments are to compute the distance matrix between the 188,325 residential points and a set of candidate locations. As a first trial, we established 1,938 candidate locations throughout Sweden; each is associated with the closest node in the graph.
The graph itself provides the distance between some pairs of nodes, but not all of them. The route from a node to a non-adjacent one can be chosen in many ways, and the corresponding distances differ; we aim to find the shortest. In the first part of the experiment we used the classical Dijkstra algorithm (Dijkstra, 1959) with a non-optimized queue. This method requires on average 12 days of computation.
The Dijkstra algorithm is efficient, and one run of it computes the distances from (or to) a point to (or from) all other nodes. The algorithm starts from the source node and explores the surrounding nodes iteratively; each time, the closest unvisited node is chosen. A way to speed up the computations, aside from an efficient C implementation, is to use a better data structure for the queue in the algorithm; the Fibonacci heap is used for this purpose. Adding the Fibonacci heap to the search process of the Dijkstra algorithm greatly shortens the computing time, to 1 day. As a further optimization, we can add an early stopping criterion and stop once the distances to all nodes within a certain radius have been computed.
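As an illustration, a minimal single-source variant of this procedure, written with Python's binary-heap queue (heapq) in place of the Fibonacci heap and including the optional early-stopping radius, might look as follows; the adjacency-list layout is an assumption made for the example.

    import heapq

    def shortest_distances(adjacency, source, max_dist=float('inf')):
        # adjacency: dict node -> list of (neighbour, edge_length) pairs
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float('inf')):
                continue            # stale queue entry
            if d > max_dist:
                break               # early stopping: remaining nodes are farther away
            for neighbour, length in adjacency.get(node, []):
                nd = d + length
                if nd < dist.get(neighbour, float('inf')):
                    dist[neighbour] = nd
                    heapq.heappush(heap, (nd, neighbour))
        return dist

One such run per source point gives one row of the distance matrix.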
After the matrix was derived, we made several test experiments to check its correctness. We picked out a few pairs of nodes and compared our distances with an online distance database; the results are quite close to the online distances, which indicates that we have derived an accurate distance matrix.
6. Conclusions.
Dealing with real data means facing various problems which need to be checked carefully; wrong or inaccurate data is often the reason for an incorrect analysis. After deriving clean and accurate data, another problem is to reconstruct the information we need from it. In our example, we have mainly rebuilt the structure of the road network by identifying crossings based on a 2-meter approximation. The approach we propose has been used both on the whole of Sweden and as a prototype on the province of Dalarna, and in both cases the results are encouraging. The missing information of some nodes has been added by linking them to the nearest neighbour, and very little error is generated by doing so. It should be noted that the nodes on Gotland have no bridge or tunnel connecting them to the mainland, so we use a virtual link based on the ship speed and the time required to reach the mainland. In this procedure we actually underestimate the distance between the mainland and Gotland, in the sense that people also need some time for transferring and waiting between ships and cars. However, we do not have further information on the (average) time for the passengers. The graph would make more sense if we had those time data and added them to the virtual link.
Our approach is built to be as general as possible. The graph obtained after identifying crossings and connecting the whole network cannot be handled efficiently by computers. Therefore, the second goal of our approach is to reduce the number of nodes as much as possible. It should be mentioned that it is possible to skip the distance matrix altogether, store only the edges of the graph, and search for the shortest distance between 2 nodes whenever it is needed. However, in our trials that approach takes a very long time to process; it leaves us little flexibility in choosing good heuristic methods, and we cannot apply the algorithms in an efficient way. By reducing the graph, we reduce the computational time by a huge factor, leading to more efficient algorithms and better results.
In our data file we have the speed limit for each segment of the road, so that factor is taken into account in our process. We obtain both a distance matrix and a travel-time matrix. The two derivations differ only slightly, so we combine them into a single process. It would also be interesting to see whether there are differences between the location solutions obtained with the two matrices.
References
Beasley, J. E. (1993). Lagrangean heuristics for location problems. European Journal of Operational Research, 65(3), 383-399.
Burkard, R. E., Karisch, S., & Rendl, F. (1991). QAPLIB - A quadratic assignment problem library. European Journal of Operational Research, 55(1), 115-119.
Carling, K., Han, M., Håkansson, J. (2012). Does Euclidean distance work well when the p-median model is applied in rural areas? Annals of Operations Research, 201:1, 83-97.
Current, J., Schilling, D. (1987). Elimination of Source A and B errors in p-median location problems. Geographical Analysis, 19, 95-110.
Dijkstra, E.W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1, 269-271.
Hakimi, S.L. (1964). Optimum locations of switching centers and the absolute centers and medians of a graph. Operations Research, 12:3, 450-459.
Han, M., Håkansson, J., Rebreyend, P. (2013). How do different densities in a network affect the optimal location of service centers? Working papers in transport, tourism, information technology and microdata analysis, ISSN 1650-5581; 2013:15.
Huff, D.L. (1964). Defining and estimating a trade area. Journal of Marketing, 28, 34-38.
Huff, D.L. (1966). A programmed solution for approximating an optimum retail location. Land Economics, 42, 293-303.
Kariv, O., & Hakimi, S.L. (1979a). An algorithmic approach to network location problems. Part 2: The p-median. SIAM Journal on Applied Mathematics, 37, 539-560.
Kariv, O., Hakimi, S. L. (1979b). An algorithmic approach to network location problems. I: The p-centers. SIAM Journal on Applied Mathematics, 37(3), 513-538.
Laporte, G. (1992). The vehicle routing problem: An overview of exact and approximate algorithms. European Journal of Operational Research, 59(3), 345-358.
Lundgren, M-L. (2000). The Swedish National Road Database - Collaboration Enhances Quality. Proceedings of the Seventh World Congress on Intelligent Transport Systems, 6-9 November, Turin, Italy.
Mladenović, N., Brimberg, J., Hansen, P., Moreno-Pérez, J.A. (2007). The p-median problem: A survey of metaheuristic approaches. European Journal of Operational Research, 179(3), 927-939.
Reinelt, G. (1991). TSPLIB - A traveling salesman problem library. ORSA Journal on Computing, 3(4), 376-384.
Tarjan, R. E. (1972). Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2), 146-160. doi:10.1137/0201010
PAPER V
Measuring transport related CO2 emissions induced by online
and brick-and-mortar retailing
Kenneth Carling, Mengjie Han, Johan Håkansson, Xiangli Meng, Niklas
Rudholm
Abstract
We develop a method for empirically measuring the difference in transport
related carbon footprint between traditional and online retailing (“e-tailing”) from the entry point to a geographical area to the consumer residence.
The method only requires data on the locations of brick-and-mortar stores,
online delivery points, and residences of the region’s population, and on
the goods transportation networks in the studied region. Such data are
readily available in most countries. The method has been evaluated using
data from the Dalecarlia region in Sweden, and is shown to be robust to all
assumptions made. In our empirical example, the results indicate that the
average distance from consumer residence to a brick-and-mortar retailer is
48.54 km in the studied region, while the average distance to an online
delivery point is 6.7 km. The results also indicate that e-tailing increases
the average distance traveled from the regional entry point to the delivery
point from 47.15 km for a brick-and-mortar store to 122.75 km for the
online delivery points. However, as professional carriers transport the
products in bulk to stores or online delivery points, which is more efficient
than consumers’ transporting the products to their residences, the results
indicate that consumers switching from traditional to e-tailing on average
reduce their transport CO2 footprints by 84% when buying standard
consumer electronics products.
Keywords: E-tailing; Spatial distribution of firms and consumers; p-median model; Emission measurement; Emission reduction
JEL codes: D22, L13, L81, R12
1. Introduction

Kenneth Carling is a professor in Statistics, Mengjie Han is a PhD in Microdata
Analysis, Johan Håkansson is a professor in Human Geography, Xiangli Meng is a PhD
student in Microdata Analysis, and Niklas Rudholm is professor in Economics at the
School of Technology and Business Studies, Dalarna University, SE-791 88 Falun, Sweden.
Niklas Rudholm also works at HUI Research, Stockholm, Sweden. Corresponding author:
Niklas Rudholm, e-mail:nru@du.se, phone: +46-70-6254627.
Environmental considerations are at the center of the agenda for
politicians in many countries and much research is devoted to meet the
challenges of climate change, sustainability, and related environmental
issues. The environmental impact of retailing on CO2 emissions should not
be underestimated. In Great Britain, the average consumer over 16 years
old made 219 shopping trips and travelled a total of 926 miles for
shopping in 2006 (DfT 2006). Considering that most of these trips were
reportedly made by car, and that transport vehicle miles travelled is the
main variable determining CO2 emissions, ways to reduce car use for
shopping are sought (Cullinane 2009).
In a Swedish setting, Carling et al. (2013a) studied the environmental
optimality of retail locations, finding that current retail store locations
were suboptimal. The suboptimal location of retailers generated on
average 22% more CO2 emissions than did a case in which they were
optimally located. Furthermore, in a related study, Carling et al. (2013b)
used GPS data to track 250 Swedish consumers for two months. In that
study, the authors compared downtown, edge-of-town, and out-of-town
shopping in terms of the CO2 emissions caused by shopping trips. They
concluded that downtown and edge-of-town shopping were comparable in
transport CO2 emissions, but that out-of-town shopping produced
approximately 60% more emissions from transportation.
As traditional brick-and-mortar shopping entails substantial environmental
impact, it would be pertinent to compare the CO2 emissions from
transportation induced by brick-and-mortar shopping with those of online
shopping that requires physical distribution. Few recent empirical studies
(e.g., Edwards et al. 2010; Wiese et al. 2012) analyze the impact of online
shopping on the environment. Wiese et al. (2012) studied the CO2 effects
of online versus brick-and-mortar shopping for clothing in Germany; their
main finding is that, although online shopping usually induces lower CO2
emissions, the opposite is true when the distances involved are moderate.
In a study of the carbon footprint of the “last-mile” deliveries of
conventionally versus online-purchased goods, Edwards et al. (2010)
found that neither home delivery of online purchases nor conventional
shopping trips had an absolute CO2 advantage, though home delivery of
online-bought goods likely entailed lower CO2 emissions unless the
conventional shopping trips were made by bus.
In this paper, we address the issue of emissions along the entire supply
chain from entry point to the studied region to consumer residence for all
major suppliers of the product under study.
Our study aims primarily to develop an empirical method for measuring
the transportation CO2 footprint of brick-and-mortar versus e-tailing that
calls for physical distribution from the entry point to a region or country to the
consumer residence. 1 This method will then be used to calculate and
compare the environmental impact of buying a standard electronics
product online with buying the same product in a brick-and-mortar store in
the Dalecarlia region in Sweden. In addition, the actual locations of brick-and-mortar stores and online delivery points in the region will be
compared with the locations that would minimize CO2 emissions.
Our paper contributes to the literature in the following ways. First, contrary to previous studies, the method developed makes it possible to study all transport related emissions from entry into a region of interest to the consumer residences within it. Previous studies have either analyzed the transport related emissions within one retail chain (Wiese et al. 2012) or focused on the carbon footprint of the “last-mile” deliveries of conventionally versus online-purchased products (Edwards et al. 2010). Second, our method allows for simulations of how different locations of both brick-and-mortar stores and online delivery points, as well as different logistic solutions for the distribution of the goods, affect emissions. As such, our method could also be used when constructing environmentally friendly retail networks that minimize consumer travel. Third, the method also allows for simulations of changes in how attractive a consumer finds a brick-and-mortar store relative to online shopping, and in the consumer’s willingness to travel to shop for the product under study.
1 Note also that this implies that the development of theory or a conceptual framework
is outside the scope of this paper. The interested reader is referred to Cullinane (2009)
for the outline of a conceptual framework regarding how e-tailing affects the
environment.
We will focus on consumer electronics, as these consumer products
constitute the largest e-tailing category in Sweden (HUI Research 2014),
presumably leading the way to online shopping for other consumer
products in the future. Consumer electronics are in the vast majority of
cases imported into Sweden,2 and pre-shipping via an entry port is
required before a product reaches a consumer’s residence, regardless of
whether the product is bought online or in a store. Consequently, the
product’s route on the Swedish transportation network to the consumer’s
residence can be identified. In brick-and-mortar shopping, the route
extends from the entry port via the store to the consumer’s residence,
while in online shopping, it extends from the entry port via the Swedish
Post distribution points to the residence. Part of the route is covered by
professional carriers, such as Swedish Post, and other parts of the route are
covered by the consumer. We focus on the CO2 emissions of the complete
route from regional entry point to consumer residence.
The study concerns the Dalecarlia region in central Sweden containing
approximately 277,000 consumers, whose residences are geo-coded. The
region contains seven brick-and-mortar consumer electronic stores and 71
delivery points for online purchases. Consumers reach the stores or
delivery points via a road network totaling 39,500 km. Mountains in the
west and north of the region restrict the number of gateways into the
region to three from the south and east, limiting the routing choices of
professional carriers. The region is representative of Sweden as a whole in
terms of the use of e-tailing and shares many geographical, economic, and
demographic characteristics with, for example, Vermont in the USA.
This paper is organized as follows. Section 2 thoroughly describes online
shopping in Sweden in 2012 and 2013. Section 3 gives details of the data
and the heuristic algorithm used in finding optimal locations. Section 4
presents the empirical analysis, which starts by calculating the
environmental damage induced by buying a standard consumer electronics product online versus in a local brick-and-mortar store. The results are also aggregated to the whole of Sweden for e-tailing in general as well as for consumer electronics products. Section 5 presents a sensitivity analysis incorporating all assumptions imposed to arrive at the results presented in section 4. Finally, section 6 concludes the paper.

2 There are a few producers of consumer electronics that still manufacture their products in Sweden, but in most cases R&D and design are done in Sweden while production is located in low-wage countries like China. Consumer electronics is also the industry that has had the most rapid outsourcing of production in the Swedish economy, with textile manufacturing and rubber manufacturing as the only industries with nearly as much of their production outsourced to low-wage countries (Lennartsson and Lindholm, 2004).
2. Online and brick-and-mortar retailing of consumer
electronics in Sweden
In this section, we start by describing e-tailing in Sweden for consumer
electronics and in general. We then describe the delivery system from e-tailers to their consumers. Finally, we discuss the brick-and-mortar
retailing of consumer electronics in Sweden and the Dalecarlia region.
First, e-tailing is dependent on Internet access, possessed by
approximately 90% of Swedish households. In addition, most workplaces
have Internet access, making e-tailing available to the vast majority of the
Swedish population. In the last quarter of 2012, 73% of a random sample
of Swedish consumers reported having bought consumer products online
in the previous three months, and 63% of the sample reported that they
would buy products online in the coming three months (HUI Research,
2014).3 Moreover, 90% of respondents reported having shopped online at
some time, the main cited reasons for online shopping being that it is
simple, cheap, and increases the consumer’s product selection. Most
online consumers use their desktop or laptop computer for online
shopping, but one in five reported having used a smart phone or tablet for
online shopping in 2012 (HUI Research 2014).
3 The information about Swedish online shopping comes from e-barometern 2012 and
2013. e-barometern is a yearly report on Swedish online shopping behavior produced by
HUI Research (a Swedish research and consultancy firm working mainly in the retail
trade industry), Posten AB (the Swedish Post), and Svensk Distanshandel (a federation of
commercial enterprises in the online retail industry). The questions asked differ
somewhat between years, so some statistics are from e-barometern 2012 reporting
statistics for 2011 (HUI Research 2013) and others are from e-barometern 2013
reporting statistics for 2012 (HUI Research 2014).
As stated above, we are studying the online and brick-and-mortar markets
for consumer electronics. We chose electronics as the studied market
because it is the largest e-tailing category in Sweden with sales of SEK 8.8
billion in 2013 (HUI Research 2014). Clothing is the next largest category
with SEK 7.2 billion in sales followed by books (SEK 3.3 billion),
furniture (SEK 1.2 billion), and sporting goods (SEK 1.0 billion). The
fastest growing categories are sporting goods, furniture, and electronics,
with annual growth rates of 28%, 19%, and 15%, respectively, in 2013. A
sample of consumers was asked in a survey what products, if any, they
bought online in 2013: 44% reported having bought books online, 40%
clothing, 25% computers and computer accessories, and 21% other home
electronics products (HUI Research 2014). There are some gender
differences in e-tailing, books being the main category for women and
computers and computer accessories for men (HUI Research 2013).
Though the sales growth rate is impressive for sporting goods, this
category is starting at a low level. As consumer electronics will continue
to be one of the most important e-tailing categories for the foreseeable
future, it was chosen for the present analysis.4
Swedish Post delivers most
e-tail packages in rural areas in northern Sweden, where over ten packages
per year per household are delivered in many northern municipalities. 5
The three municipalities with the most packages delivered are Storuman,
Jokkmokk, and Gällivare, all located in northern Sweden and all
averaging 11.4–12.0 packages delivered per year per household. In
contrast, in most municipalities in southern Sweden, particularly the three
main cities, fewer than seven packages are delivered per year per
household. In the municipalities of Malmö, Gothenburg, and Stockholm,
5.9–6.1 packages are delivered per household and year. The Dalecarlia
region lies between the extremes of Sweden with seven to nine packages
delivered per household and year by Swedish Post, with two exceptions: in the municipalities of Malung and Sälen, in the remote north of the region, over ten packages are delivered per household and year, while in Borlänge, in the center of the region and with a well-developed retail trade, fewer than seven packages are delivered per household and year (HUI Research 2013). As such, the Dalecarlia region as a whole can be considered representative of most of Sweden, except, perhaps, for the major cities and the remote far north.

4 This paper examines the environmental impact of transportation related to the retailing of consumer electronics, not the import or manufacturing of such products. It should, however, be noted that approximately 80% of the environmental impact of consumer electronics comes from manufacturing rather than transporting them (Weber et al. 2007).
5 The Post is not the sole provider of this type of service in Sweden; firms such as DB Schenker and DHL are also active in the market. However, the Post is the market leader in the Swedish market, and also the only provider of the type of statistics reported in the section above.
In Sweden, the consumer is offered a choice of delivery points for picking
up online purchases. Swedish Post, handling most e-tail packages, offers
consumers a list of delivery points, the nearest the consumer’s residence
being the suggested primary alternative. The opening hours of these
outlets are usually 9.00–20.00. As pointed out by Cairns (2005), delivering
products to intermediate points with a longer pickup time window for the
consumer permits more efficient delivery, possibly reducing peak-period
congestion. The vast majority (85–90%) of surveyed consumers chose to
pick up products at the proposed nearest outlet, and the consumer’s
preferred pickup time at the outlet was Monday to Friday after 18.00 (HUI
Research 2014).
Fifty percent of online shoppers reported having ever returned an online
purchase, and 77% reported the experience of doing this as good or very
good (HUI Research 2014). The return process usually entailed the
consumer returning the package to the outlet where it was picked up, and
the only product category for which consumers mention a good return
system as important for their purchase decision is clothes (HUI Research
2014). It should also be noted that Swedish e-tailers are not overly
exposed to foreign competition, though increased competition from abroad
is expected in the future. However, 40% of surveyed consumers reported
never having bought anything from a foreign e-tailer, and an additional
40% reported having bought products from foreign retailers only once per year or less often (HUI Research 2014).
Brick-and-mortar consumer electronics retailing in Sweden has a total
annual turnover of approximately SEK 35 billion, but the sector’s
profitability is not that impressive. In summer 2011, the Swedish brick-and-mortar electronics retail chain Onoff filed for bankruptcy, and its
stores were taken over by its competitor Expert. However, less than a year
later, Expert also filed for bankruptcy, meaning that two large, nationwide
retail chains in consumer electronics have exited the market. In addition,
several other chains are reporting weak profits. Meanwhile, Elgiganten,
which has both brick-and-mortar stores and e-tailing for consumer
electronics, is currently the best performing chain in Sweden. It is
therefore conceivable that brick-and-mortar electronics retailers may leave
certain local geographic markets in Sweden due to competition from e-tailers, increasing the potential environmental benefits of e-tailing.
In 2012, there were seven brick-and-mortar consumer electronics stores in
Dalecarlia (see Fig. 1a), all parts of consumer electronics retail chains.
Two of the chains (Elgiganten and Euronics) had three stores each, while
one chain (SIBA) had only one store. Most of the stores are located in
Dalecarlia’s major towns. The largest town, Borlänge, is the only town
with two stores. One chain, Euronics, has a somewhat different
localization pattern than do the other two chains, with two of its three
stores located in smaller settlements (i.e., Malung and Svärdsjö) in the
region.
3. Data and method
In this paper, we identify the shortest route and transport mode the product
follows on its way from regional entry port, via the retailer, to consumer
residence, and calculate the emissions induced by this transport. To do so,
we draw on data from the Dalecarlia region in central Sweden, and impose
several identifying assumptions (labeled using Roman numerals) that are
scrutinized by means of sensitivity analysis in section 5. Note that the
method developed here could be used to measure emissions in any setting
where the entry points into the studied region, emissions per kilometer for
the transport method used, location of the final destination, and available
transport network are known. The method may therefore also have
important uses outside the retail sector.
There are several reasons for choosing the Dalecarlia region when
evaluating the measurement method developed. To perform a thorough
investigation of how the various assumptions and data requirements affect
the model output, we need access to data as detailed as possible regarding
the location of people’s residences, the region’s road network, and all
other necessary measurements.
First, in Dalecarlia, the population’s residences are geo-coded in 250 ×
250-m squares, meaning that the actual residential location may err by 175
m at most.6 Fig. 2a shows the geographical distribution of the population:
considering that consumer electronics is a broad category of products
appealing to almost everyone, it is reasonable to regard anyone in the
population irrespective of disposable income, gender, and age as a
potential buyer of such products. 7 The population totals 277,000 people
whose residency is represented by 15,729 squares whose center
coordinates are known (census data from Statistics Sweden as of 2002).
The population density is high in the southeast part of the region, along the
two main rivers, and around a lake in the center of the region. The western
and the northern parts of the region are sparsely populated.
Second, previous work has carefully examined the road network in the
region, so potential pitfalls encountered in working with these large
databases are known (Carling, Han, and Håkansson 2012; Han,
Håkansson, and Rebreyend 2013; Carling, Han, Håkansson, and
Rebreyend 2014). Fig. 1b depicts the road network of Dalecarlia (actually
the national roads only, as showing the many local streets and private
roads in the dataset would clutter the map). The network was constructed
using the national road database (NVDB), a digital database representing
the Swedish road network and maintained by the National Transport
Administration (Trafikverket). The database contains national roads, local
streets, and private roads (both government subsidized and unsubsidized);
the version used here was extracted in 2010, representing the network of
that time. Furthermore, attributes of the road segments, such as their
position, length, and nominal speed limits, are also given (for details, see
Han, Håkansson, and Rebreyend 2013). A very realistic travel distance for
a potential consumer can be derived by calculating the distance along the
road system from the home to any point where either a brick-and-mortar
store or a Swedish Post delivery point is located.
6 In a 250 × 250-m square, the longest distance from the center point to the edge is √(125² + 125²) ≈ 175 m.
7 We are using individuals rather than households as the unit of analysis since this is the
type of data we have access to from Statistics Sweden. However, the example product
in our paper is a computer, and we believe that in most cases such a product is today
bought and used by an individual rather than shared within a household.
Fig. 1 about here.
Third, an in-depth study of consumer shopping trip behavior was
conducted in Borlänge, a centrally located city in the region (Carling,
Håkansson, and Jia, 2013b; Jia, Carling, and Håkansson 2013). Some 250
volunteer car owners were tracked for two months using GPS. Typical
travel behavior for trips to a store selling durable goods was to drive the
shortest route from the home to store, implying the lowest possible CO2
emissions. Consequently, we approximated shopping-related trips using
the shortest route in the following analysis.
Fourth, there are only three gateways8 into the region, meaning that it is
relatively straightforward to obtain information about how consumer
electronics products arrive there and are then distributed to consumer
residences, irrespective of whether the purchase is made online or at a
brick-and-mortar store. 9 Fig. 1a also shows the current location of the
seven existing brick-and-mortar consumer electronic stores and the 71
delivery points for products purchased online.
Altogether, the region’s road network is represented by 1,964,801
segments joined in about 1.5 million nodes. This means that a consumer
can follow a myriad of potential routes to get to the store or delivery point.
Based on previous work, we stipulate that the consumer takes the shortest
route (Jia et al. 2013). However, identifying the shortest route given this
vast number of alternatives is challenging in itself. We follow the
convention of using the algorithm proposed by Dijkstra (1959) to find the
shortest distance between all node pairs in the road system, an effort that is very time-consuming but done only once. The algorithm, in its naïve form, specifies a starting node and identifies all its adjacent nodes. Thereafter, it finds the second-order nodes adjacent to the starting node and computes the distances to them via the adjacent nodes; then the third-order nodes are identified and the distances to them computed via the first- and second-order nodes. This process continues until all node pairs of interest have been assigned a distance. In other words, the algorithm starts with nearby nodes and calculates stepwise the distances between nodes farther and farther apart. Finally, a (non-symmetric) matrix of road distances between all node pairs is obtained, in which the rows of the matrix refer to the nodes of the residences and the columns to the nodes of the stores or delivery points. Zhan and Noon (1998) confirmed that the algorithm successfully identifies the shortest route in a network.

8 Mountains in the west and north of the region limit the number of gateways into the region to three from the south and east, limiting the routing choices of professional carriers (cf. Figure 1b). Although there are two airports in the region, neither of them is used or is suitable for freight shipments. The brick-and-mortar retailers have confirmed that all their shipments are by truck. However, Swedish Post might occasionally use train for partial shipments of the products. In such cases, our approach overestimates the CO2 emissions induced by online shopping, as we assume truck transport.
9 The important point for us is to identify the point where the distribution network starts to differ between online and brick-and-mortar stores. This would also be the case if there were local production in the studied region, so this would not in principle affect the method developed, except that in such cases we would have to identify where in the Dalecarlia region the distribution network from producer to online or brick-and-mortar retailers started to differ.
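To make the construction of this matrix concrete, a minimal sketch of assembling the residence-by-outlet distances from repeated single-source runs could be the following; shortest_paths_from stands for any single-source shortest-path routine, and the data layout is an assumption made for the example.

    def distance_matrix(road_graph, residence_nodes, outlet_nodes, shortest_paths_from):
        # shortest_paths_from(graph, source) -> dict of network distances from source.
        # One single-source run per outlet (there are far fewer outlets than residences);
        # for a directed network, run from each residence or on the reversed graph instead,
        # depending on the direction of travel required.
        matrix = {}
        for outlet in outlet_nodes:
            dist = shortest_paths_from(road_graph, outlet)
            for residence in residence_nodes:
                # rows: residences, columns: stores or delivery points
                matrix[(residence, outlet)] = dist.get(residence, float('inf'))
        return matrix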
Though road distance is not the same as CO2 emissions, we nevertheless
assume a perfect correlation between the two. We do this despite being
aware that other factors, such as speed, time, acceleration, deceleration,
road and weather conditions, and driver and vehicle types, are being
ignored. Stead (1999), based on data from the 1989–1991 National Travel
Survey, suggested using road distance as a proxy for vehicle emissions
because of the ease of collecting and computing it. Previous work in
Dalecarlia indicates that, while intersections and arterial roads imply
higher emissions, emissions crucially depend on road distance (Carling,
Håkansson, and Jia, 2013b; Jia, Carling, and Håkansson 2013). It is an
approximation to replace CO2 emissions with road distance, though it is a
fairly good one, as we can demonstrate in the sensitivity analysis
presented in section 5.
To calculate the CO2 emissions we assume the following. First, the
consumer drives a gasoline-powered Toyota Avensis 1.8 with CO2
emissions of 0.15 kg per km,10 making the trip solely to pick up a consumer electronics product (e.g., a computer or a small stereo) and return to his or her residence. The product is sold in a 0.3 × 0.6 × 0.6 m box weighing up to 10 kg. The product is transported by a professional carrier using a Scania truck and a trailer with a standard loading volume of 100 m3, respecting the Swedish restriction of 24 tons of load per vehicle. The Scania truck runs on diesel, emits 1.08 kg per km of CO2 (according to the producer; see www.scania.com), and is loaded to 60% of its capacity with identical products, such that the consumer’s product constitutes one of 600 in the load and is responsible for approximately 0.002 kg per km of CO2. Emissions when on- and offloading the product and when moving it indoors are neglected, and emissions from transporting the product to the region’s boundary from the manufacturer are assumed to be the same irrespective of whether it is purchased online or in a store; they are thus set to zero in the calculations. Moreover, we stipulate that each person in Dalecarlia is equally likely to purchase the product, i.e., that there is no geographical variation in the likelihood of purchase.

10 This emission rate is according to the EU norm for testing car emissions and refers to driving on a mixture of urban and non-urban roads. In 2012, newly registered cars in Sweden emitted 0.14 kg per km of CO2, whereas the existing car fleet in Sweden emitted somewhat more CO2.
The online-purchased products are assumed to first arrive at the region’s
six Swedish Post distribution centers via the shortest route upon entering
the region through gateway B from Stockholm where parcels are sorted
(see Fig. 1b). They are then transported to the 71 delivery points, again via
the shortest routes.11 For a product purchased in a store, we assume that
the product arrives at the store from the boundary of the region via the
shortest route.
The companies were not particularly willing to disclose their logistics
solutions. We do know that these firms only have one distribution center
within Sweden, and that this is not located within the region under study.
We therefore assume that the product enters through one of the three
gateways such that the gateway implies the shortest distance to the store
(see Fig. 1b). This assumption is conservative, as it might underestimate
the product’s actual transporting distance to the store if the retailer’s
logistics solution does not use the shortest route.
11 This assumption minimizes emissions; it could be that some other type of logistics distribution system, such as a spoke-and-hub system, is used. We have not been able to get any information on the precise nature of the logistics system used; however, reloading is costly and it thus seems unlikely that, for example, a spoke-and-hub system is at work within the region.
The current locations of stores and delivery points, shown in Fig. 1a, are
presumably suboptimal and potentially subject to reconsideration. We
therefore use the p-median model to find the best possible store locations
from an environmental perspective. Hakimi (1964) developed the p-median model to find the optimal location of switching centers in a
network, which is a discrete location problem on a map with spatially
distributed demand points (Hakimi 1965; Daskin 1995). In the p-median
model, the demand points are assumed to be assigned to the nearest
facilities. The distance is weighted by the mass of the demand points, in
this case, the number of residents at a point. The goal is to locate p centers
or facilities such that the average individual distance is minimized.
Consequently, it is impossible to find more environmentally friendly retail
outlet locations than the solution to the p-median model under our
assumptions that consumers take the shortest routes and choose the nearest
stores or online delivery points and that road distance and CO2 emissions
are perfectly correlated.
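In symbols, a standard statement of the model (not quoted verbatim from the paper) is

    \min_{S \subseteq V,\ |S| = p} \sum_{i \in V} w_i \min_{j \in S} d(i, j),

where V is the set of demand nodes, w_i is the number of residents at node i, d(i, j) is the network distance from node i to candidate location j, and S is the set of p facility locations to be chosen.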
The p-median problem is NP-hard and, unless the combinatorial problem is modest, it is in practice impossible to find an exact solution. Instead, an approximate solution is sought using a heuristic
algorithm. In this paper, we use simulated annealing (SA) because it
generally provides good solutions to p-median problems, is flexible
enough to be controlled, and has worked well on other p-median problems
in similar contexts (Chiyoshi and Galvao 2000).
Han et al. (2013) give details of SA implementation. The algorithm starts
with a configuration of p facilities picked at random. One facility is picked
at random and is examined to determine whether the average distance is
reduced by moving the facility to any of its neighboring nodes. If so, this
configuration is accepted as an improvement and the previous step of
randomly selecting a facility and searching its neighborhood is repeated. If
not, the original configuration is kept with a preset probability and a
poorer configuration is selected with one minus this probability. This
gradual movement away from the original configuration continues until
the average distance is near the minimum. We use the Carling and Meng
(2014) approach to obtain confidence intervals for the minimum distance,
to ensure that we are only meters away from the best possible solution.
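For illustration, a heavily simplified sketch of one such search is given below; the neighbourhood here is any candidate node rather than the facility’s adjacent nodes used in the actual implementation, and the data structures and parameters are assumptions made for the example.

    import random

    def average_distance(facilities, dist, weights):
        # population-weighted average distance to the nearest chosen facility
        total = sum(w * min(dist[i][j] for j in facilities) for i, w in weights.items())
        return total / sum(weights.values())

    def anneal_p_median(candidates, dist, weights, p, worse_accept_prob=0.3, iters=10000):
        current = random.sample(candidates, p)
        obj = average_distance(current, dist, weights)
        best, best_obj = list(current), obj
        for _ in range(iters):
            trial = list(current)
            k = random.randrange(p)
            # move one randomly chosen facility to another candidate node
            trial[k] = random.choice([c for c in candidates if c not in current])
            trial_obj = average_distance(trial, dist, weights)
            # accept improvements; accept a poorer configuration with a preset probability
            if trial_obj < obj or random.random() < worse_accept_prob:
                current, obj = trial, trial_obj
                if obj < best_obj:
                    best, best_obj = list(current), obj
        return best, best_obj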
For clarity, we end this section by gathering together all identifying
assumptions discussed above and on which the results build. Three
assumptions are related to the measurement method as such: (i) the road
distance and CO2 emissions are perfectly correlated; (ii) the number of
brick-and-mortar stores is fixed during the studied period; and (iii) the
consumer population is stable during the studied period. These three
assumptions, along with knowledge of the locations of brick-and-mortar
stores, online delivery points, and residences of the population of the
region, and of the transportation networks used to transport goods in the
studied region, are all the methodological assumptions and data required
to use the model.
However, the model also requires assumptions about human behavior,
which can of course be altered in infinite ways. In this paper, we will test
robustness to seven additional assumptions regarding consumer behavior
and three additional assumptions regarding producer behavior.
There are several additional assumptions about consumer behavior. (iv)
Online-purchased products are picked up at the delivery point nearest the
consumer’s residence, as confirmed by the surveys (HUI Research 2013
and 2014) cited in section 2. (v) Consumers in Dalecarlia take the shortest
route from their residence to the brick-and-mortar store or online delivery
point, as suggested by a previous study (Jia et al. 2013). (vi) Consumers
always pick up the product by car and drive a car emitting 0.15 kg per km.
According to the National Transport Administration, new cars in Sweden
emitted on average 0.138 kg per km in 2012. Although precise figures are
lacking, the older fleet of cars would typically have higher emissions,
making 0.138 kg per km an underestimation of the overall average
emissions. (vii) The region’s consumers are equally likely to purchase a
given product. (viii) The consumers either purchase the product on visiting
a brick-and-mortar store or purchase it online. (ix) The consumers are
indifferent to whether they shop in a store or online. (x) The consumers
shopping at a brick-and-mortar store choose the nearest one.
There are three assumptions about producer behavior. (xi) The truck is
loaded to 60% of its capacity. (xii) Online-purchased products arrive at the
delivery point by first going via the shortest route to one of the six
distribution centers and then via the shortest route from the distribution
center to the delivery point. This is essentially how Swedish Post
described their logistics solution to us, although they were unwilling to go
into detail. (xiii) A product destined for a brick-and-mortar outlet arrives at
the store via the shortest route from the nearest of the three gateways into
the region. The sensitivity to all these assumptions will be scrutinized in
section 5.
4. Empirical analysis of CO2 emissions induced by
consumers shopping
To set the scene, consider a stereotypical consumer electronics product,
such as a desktop computer or small stereo. 12 Such a physical product
needs to be transported to the consumer’s residence, typically by car,
inducing marginal freight trips for delivery to the consumer and causing
additional environmental damage. On the other hand, it is a marginal
product for delivery by the professional carrier, as its volume and weight
are marginal to standard trucks. Of course, some consumer electronics
products (e.g., books and DVDs) are tiny and easily transported by
consumers walking, biking, or riding a bus from the store. However, in
Sweden these products would also typically be delivered by ordinary mail
to the consumer’s residential mailbox. 13 Hence, we believe that the
environmental impact of the transport of these tiny products can be
abstracted from. Also note that when it comes to high value consumer
electronics products as a computer or a stereo, the consumer is likely to
choose to pick up the product at a delivery point where the likelihood of
theft is negligible.
12 It should be noted that some low-end TV sets and other electronics products have at
times been sold at some of the largest retail food outlets (e.g., ICA Maxi and Coop
Forum) in Dalarna, but due to the low profit margins on consumer electronics the
sections containing these types of products have in most cases decreased in size or been
removed from these stores. The sales of such products are thus considered to be limited
and are excluded from our analysis, but it should be noted that if there are additional
brick-and-mortar stores selling the type of consumer electronics products being
considered in this paper, this would reduce the difference in emissions between online
and brick-and-mortar stores.
13 Of course, at some point online retailing could expand to the point at which Swedish
Post would be required to add additional delivery trips. Although online retailing is
expanding, the analysis of such effects is outside the scope of the present paper.
Table 1 shows the consumer’s travel distance on the road network from
home to the nearest store and back. The average distance to the 7 current
brick-and-mortar stores in Dalecarlia is 48.5 km, with considerable
variation between consumers. For 5% of consumers, the nearest store is
within walking distance (under 2.6 km), while for another 5%, the nearest
store is over 162 km from home. Obviously, the postal delivery points are
much more conveniently located, approximately 25% of consumers
having to travel under 2.1 km to the nearest delivery point, with an
average of 6.7 km for consumers overall. Assuming that CO2 emissions
approximately coincide with distance travelled, the average consumer
induces only 14% of the CO2 emissions when buying the product online rather
than at a store.
Table 1 also shows the hypothetical situation when stores and delivery
points are optimally located according to the p-median model. A first
observation is that the postal delivery points are currently nearly optimally
located, as the mean distance differs by under 0.7 km between the current
and hypothetical locations. Note also that, comparing the current with the
optimal online delivery points, the travel distance to the current locations
is less than the optimal one for consumers living in urban areas in the
region, while the opposite is true for consumers in rural areas.
The brick-and-mortar stores could, from the environmental and consumer
perspectives, be better located. Optimally locating the 7 stores would
reduce the average consumer’s trip from 48.5 to 28.8 km, a 41% reduction.
Optimally locating the brick-and-mortar stores would generally most
benefit the quartile of consumers today living farthest from a store, but
optimal locations would reduce travel distance for all percentiles.
Table 1 about here.
The consumer’s trip to pick up the product represents a substantial part of
the transport effort; the other part is transporting the product to the pickup
point, whether store or postal delivery point. Table 2 shows the distance
the product travels from entry into the region to the store or delivery point.
The values in the table are calculated assuming travel via the shortest route
and derived assigning equal weight to all outlets. The average distance
from regional boundary to store is 47 km, whereas the average distance is
123 km to the delivery point. Three unsurprising things can be noted from
Table 2. First, products purchased online must travel farther to the pickup
point than do ones sold in stores. Second, professional carriers usually
carry products farther than do consumers (cf. Table 1). Third, optimally
locating stores from the consumer perspective would mean longer-distance
transport to the stores for the professional carriers (averaging 62 km).
Table 2 about here.
However, a consumer carrying a product in a car induces much higher
CO2 emissions per travelled kilometer than does a professional carrier
bringing many product units on the same trip. Hence, the values in Tables
1 and 2 cannot simply be added. Following Wiese et al. (2012), we started
by calculating total CO2 emissions from traditional brick-and-mortar
stores and then turned to CO2 emissions from e-tailers. Wiese et al. (2012)
analyzed one German clothing retailer, comparing two selected brick-and-mortar stores with an e-tailing system. The retail chain provided
information about distances from the central warehouse to the two stores,
type of transportation used, the quantity delivered to the stores, and the
delivery frequency, making it possible to calculate the supply chain’s
environmental impact. The demand side environmental impact was
investigated using a consumer survey administered to customers of the
chain’s brick-and-mortar stores. The questionnaire provided information
about customer postal code, type of customer transport, and number of
products bought at the store.
We instead use information about the location of all individual residences,
brick-and-mortar electronics stores, and Swedish Post delivery points in
Dalecarlia.14 In addition, we know the layout (i.e., the different types of
roads and the speed limits) of the road network connecting the brick-and-mortar stores and the outlet depots to the individual household residences. We believe that the total environmental impact of online and brick-and-mortar retailing can be calculated with more precision than previously.
14 The Swedish Post is the market leader in the delivery of goods bought online, and we
therefore use the locations of the online delivery points that the Post uses in our
analysis. There are also other firms active in the market, and the main competitors to
the Post in the Swedish market are DB Schenker and DHL. It should, however, be noted
that the delivery points in the Dalecarlia region for products purchased online are in the
majority of cases co-located for the three main firms (the Post, DB Schenker and DHL).
Table 3 about here.
Table 3 shows the average total CO2 emissions per purchase of a standard
consumer electronics product (e.g., a desktop computer or small stereo).
Purchasing the product in a brick-and-mortar store induces on average 7.4
kg of CO2 emissions. This is substantially more than in the case of e-tailing, where the average is 1.2 kg of CO2, implying 84% lower
emissions. Many consumers (about 50% according to Table 1) live near a
delivery point and may prefer to pick up the product on foot, rather than
by car as assumed above (vi). The fourth and fifth columns in Table 3
show the resulting emissions if every consumer within 2 km of an outlet
walks to pick up the product. This behavior is probably not that common if
a desktop computer or small stereo is assumed to be the product. However,
other small electronics may conveniently be carried while walking, in
which case the difference in induced emissions would be greater (1.0/7.4
meaning 86% lower emissions).
As mentioned in the third section, several brick-and-mortar stores were
recently closed due to bankruptcy. Such unplanned closures will lead to
brick-and-mortar stores being poorly located relative to consumers, so
there is room for the brick-and-mortar stores to be better located. Table 4
again shows the average total CO2 emissions per purchase of the standard
consumer electronics product, but assuming stores and delivery points to
be located so as to minimize average CO2 emissions per purchase. In this
case, seven optimally located brick-and-mortar stores would still lead to
four-times-higher CO2 emissions per product than would the online
alternative. It is clear that e-tailing is environmentally preferable to brick-and-mortar retailing, even if it were possible to locate the brick-and-mortar stores optimally from an environmental perspective.
Table 4 about here.
What does this effect of e-tailing in terms of reduced CO2 emissions
amount to at a national level? Consumer electronics retailing totals SEK
44 billion annually, of which approximately SEK 8.8 billion constituted
online purchases in 2013 (HUI Research 2014). Consumer electronics
constitutes almost 25% of e-tailing, so when Swedish Post delivers eight
products purchased online per household per year in Sweden, two of these
packages can be expected to contain consumer electronics. 15 Statistics
Sweden estimated the number of households in Sweden in 2011 at
approximately 2.24 million. Consequently, approximately 4.5 million
consumer electronics packages were delivered in Sweden due to e-tailing.
If we assume that consumer electronics items purchased in brick-and-mortar stores are comparable to those bought online, then consumers took
home approximately 22.5 million packages from consumer electronics
stores. Before 2005, when e-tailing was nearly nonexistent in Sweden,
these 27 million packages would have induced 27 ∗ 7.4 = 200 million kg
of CO2. Today, they instead induce 22.5 ∗ 7.4 + 4.5 ∗ 1.2 = 172 million
kg of CO2 thanks to the availability of e-tailing. In the unlikely event of
brick-and-mortar stores being completely replaced by e-tailing, total emissions would drop to 27 ∗ 1.2 ≈ 32 million kg of CO2. Such an exercise in aggregation should, of course, be considered
only indicative, but nevertheless illustrates that further growth in e-tailing
might have more than a trivial impact on the environment.
5. Robustness of the measurement method
To estimate the average CO2 emissions per purchased consumer
electronics product, several identifying assumptions were imposed. 16 Here
we look at the sensitivity of the results to each of these assumptions. We begin with the method-related assumptions, then investigate the consumer behavior assumptions, and finally the producer behavior assumptions.

15 A fraction of the packages are probably delivered directly to the consumer’s residence, thereby inducing even less CO2 emissions. It is hard to say how large this fraction is. However, as an indication, consumers report that at least 70% of them prefer to have a cell phone delivered to the delivery point rather than directly to their residence (HUI Research, 2014).
16 It has been suggested that we should also try to numerically calculate the impact of returns on our results. Unfortunately, we do not have any reliable numbers on how common returns are when it comes to consumer electronics products. However, note that returns will only affect our results if there is a difference in how common returns are when the product is bought online as opposed to in a brick-and-mortar store, or if the returned product is only transported part of the way for one or the other of the two retailing solutions being compared. Otherwise, the impact of a return is similar to one additional purchase, the only difference being that the product is now transported from the consumer’s residence to either the brick-and-mortar store or the online delivery point, and back through the logistics chain. Arbitrarily assuming that 10% of the purchases are returned when buying online and 3% when buying in a brick-and-mortar store, and that the products are delivered at least back to the point of entry into the region in both cases, the impact on emissions can be calculated: simply multiply the emissions in Table 3 for online purchases by 1.10 and those for the brick-and-mortar stores by 1.03.
Assumption (i): The first assumption concerns the relationship between
CO2 emissions and road distance. Carling et al. (2013b) found that
emissions peaked at intersections and on arterial streets in urban areas due
to non-constant velocity. The CO2 emissions of travelling to a delivery
point could be underestimated, as such travel would usually occur in urban
areas where constant speed is difficult to maintain. In towns and near
intersections, the speed limit is usually 50 km per h or lower. To check
assumption (i), we elaborate on the CO2 emissions for travelling on urban
roads and streets by assigning higher emissions to road segments with
speed limits of 50 km per h and below. On these segments, we increase the
CO2 emissions of cars by 50% and trucks by 100%, as the latter are even
more sensitive to varying driving speed.
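As an illustration of this check, a minimal sketch of emission-weighted route length is given below; the multipliers follow the stipulation above, while the per-segment data layout and the example trip are assumptions made for the illustration.

    def route_emissions(segments, kg_per_km, urban_multiplier):
        # segments: list of (length_km, speed_limit_kmh) tuples along the route
        # urban_multiplier: extra factor applied where the speed limit is 50 km/h or below
        total = 0.0
        for length_km, speed_limit in segments:
            factor = urban_multiplier if speed_limit <= 50 else 1.0
            total += kg_per_km * factor * length_km
        return total

    # example: a car trip (0.15 kg/km) with two urban segments penalized by 50%
    trip = [(1.2, 50), (14.0, 90), (0.8, 30)]
    print(route_emissions(trip, kg_per_km=0.15, urban_multiplier=1.5))  # about 2.55 kg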
Considerable transport effort related to shopping occurs on urban roads
with speed limits of 50 km per h and below. On average, consumers in
Dalecarlia patronizing online delivery points travel on such roads for
66.3% of the distance travelled, while 36.0% of such consumers travel
exclusively on them. Trucks and consumers travelling to brick-and-mortar
stores as well as trucks travelling to online delivery points travel more on
inter-urban roads and are therefore less exposed to urban roads inducing
speed fluctuations. Nonetheless, their exposure to urban roads is nontrivial, calling assumption (i) into question.
Table 5 compares products purchased in brick-and-mortar stores and
online when CO2 emissions are stipulated to be higher on urban roads in
the region. As seen in the table, this stipulation increases emissions in
urban areas, making the online solution somewhat less attractive than the
brick-and-mortar one relative to the baseline results. However, the
differences are too small to significantly change our results, so we deem
our original measurements robust to the assumption that distance equals
emissions.
Table 5 about here.
Assumptions (iii) and (vii)17: Assumption (iii) was that the population of
the studied region remained stable during the studied period, while
assumption (vii) was that all residents of the region were equally likely to
purchase the product. Age is an important part of the consumer profile that
we cannot access, so we may have to allow for heterogeneity between age
groups. Age is highly correlated to income, for example, but can also be
used to model geographical redistribution likely to represent future
demographic changes in the region, i.e., assumption (iii). This is because
people born into older cohorts largely live in rural areas, whereas people
born into younger cohorts are more concentrated in urban areas. Due to
this spatially skewed age distribution, there is an ongoing process of birth
deficits and population decrease in rural areas and the opposite in many
urban areas (e.g., Håkansson 2000). Table 6 shows results comparable to
those in Table 3, but weighted by age. Elderly consumers (≥65 years old)
have a weight of 0.5, young consumers (≤15 years) a weight of 1.5, and
those in between a weight of 1. Note that these changes can be seen as
altering both the population composition and likelihood of purchasing,
testing both assumptions (iii) and (vii) at once. Although young consumers
are now considered three times more likely to purchase electronics than
are old consumers, the values in Table 6 are almost identical to those in
Table 3, so we conclude that the results are insensitive to these
assumptions.
Table 6 about here.
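A minimal sketch of such an age-weighted average, assuming hypothetical consumer records and the weights used in Table 6, could look as follows:

# Sketch: age-weighted mean emissions per purchase, using the weights of Table 6
# (0.5 for consumers aged 65 or older, 1.5 for 15 or younger, 1.0 otherwise).
# The consumer records are hypothetical.
def age_weight(age):
    if age >= 65:
        return 0.5
    if age <= 15:
        return 1.5
    return 1.0

consumers = [(70, 9.2), (34, 3.1), (12, 1.8), (55, 11.4)]   # (age, kg CO2 per purchase)
weighted_mean = (sum(age_weight(a) * e for a, e in consumers)
                 / sum(age_weight(a) for a, _ in consumers))
print(round(weighted_mean, 2))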
Assumption (iv): One assumption regarding consumer behavior (iv) is that
online-purchased products are picked up at the nearest delivery point. This
assumption has been confirmed in most cases in Sweden via the surveys
cited in section 2 (HUI Research 2013 and 2014), in which 85–90% of
surveyed consumers selected the outlet nearest their residence.
Assumption (v): This assumption, that consumers in the Dalecarlia region
take the shortest routes from their residences to the brick-and-mortar
stores or online delivery points,18 was supported by a study cited in section
3 (Jia et al. 2013). Researchers compared actual travelling routes with the
shortest routes to a shopping center, finding that only 5 of 500 investigated
shopping trips did not take the shortest routes.

17 Assumption (ii) will be tested together with assumptions (ix) and (x), below.
18 The method suggested can also be used if more consumers than in the studied
Swedish region travel from work to the delivery points, with the added data
requirement that we then also need to know where the consumer works and the
additional distance traveled to pick up the package. However, as demonstrated by Jia et
al. (2013), such behavior is unimportant in our empirical setting.
Assumption (vi): The calculations presented above assumed that the
consumer drives a car emitting 0.15 kg per km of CO2, roughly equaling
the emissions of a Toyota Avensis. According to the National Transport
Administration, new cars in Sweden emitted on average 0.138 kg per km
in 2012, while the older fleet of cars typically had higher emissions,
making 0.138 kg per km an underestimation of the overall average
emissions. What is important here is that the total emissions for each
purchase are calculated as follows:
Total emissions = (consumer’s car emissions per km × km driven by
consumer) + (distributer’s truck emissions per km × km driven by
distributer)
(1)
As can be seen from equation (1), the car’s emissions can be changed at
will and the total emissions recalculated, since this is only a scale factor
for the total emissions of car travel. Note also that the same holds if we
want to investigate how a change in truck emissions or choice of travel
route (i.e., distance traveled) affects total emissions.
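A minimal sketch of equation (1), with illustrative distances and truck factor (only the 0.15 and 0.138 kg per km car figures come from the text), is given below:

# Minimal sketch of equation (1). Swapping the per-km factors or the distances
# reproduces the kind of sensitivity checks described in the text; the truck
# factor and both distances are illustrative placeholders.
def total_emissions(car_kg_per_km, km_by_consumer, truck_kg_per_km, km_by_truck):
    return car_kg_per_km * km_by_consumer + truck_kg_per_km * km_by_truck

print(total_emissions(0.150, 20.0, 0.05, 100.0))   # car factor from the text
print(total_emissions(0.138, 20.0, 0.05, 100.0))   # newer, lower-emission car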
Assumption (viii)19: We assume that consumers made the purchase either
at the store or online. According to Cullinane (2009), however, if people
browse online and shop in brick-and-mortar stores, some shopping
journeys can be saved, but if they browse in the stores and shop online,
additional travel will likely be incurred. Moreover, the RAC Foundation
(2006) reports that almost 80% of surveyed consumers travel to brick-and-mortar stores to compare products. We accordingly repeated the analysis,
but stipulated that each online purchase was preceded 80% of the time by
a trip to a brick-and-mortar store to physically assess the product and its
substitutes. Under this behavioral assumption, we find that online
shopping would induce 7.89 kg of CO2 on average, comparable to the
exclusively brick-and-mortar store case. The environmental benefits of
online shopping would be completely offset if as many as 80% of
consumers behaved in this way; in fact, more detailed analysis indicated
that if 71% or more of consumers behaved in this way, the environmental
benefits of online shopping would be offset. It should be noted that in
Sweden in 2013, only 6% of consumers buying consumer electronics
online reported first visiting a brick-and-mortar store and then purchasing
the product online, while 32% reported first researching what product to
buy online and then purchasing the product from a brick-and-mortar store
(HUI Research 2014).

19 Assumption (vii) was investigated together with assumption (iii), above.
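The break-even reasoning can be sketched as follows; the emissions of the extra store trip are a hypothetical input here, so the sketch only roughly mimics the reported 71% threshold rather than reproducing it:

# Sketch of the break-even reasoning behind assumption (viii): if a share f of
# online purchases is preceded by a car trip to a store, expected online
# emissions rise by f times the emissions of that extra trip. The extra-trip
# figure below is a placeholder.
def breakeven_share(e_online, e_store, e_extra_trip):
    """Share f at which e_online + f * e_extra_trip equals e_store."""
    return (e_store - e_online) / e_extra_trip

print(round(breakeven_share(e_online=1.22, e_store=7.44, e_extra_trip=8.3), 2))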
Assumptions (ix), (x), and (ii): Table 7 shows the results of simulations in
which certain customer behavior assumptions are imposed. The
identifying assumptions (ix) and (x) concern how attractive a consumer
finds a brick-and-mortar store relative to online shopping and the
consumer’s propensity to travel to shop for consumer electronics. In this,
we are applying the idea of a gravity model as proposed in an operational
research setting by Drezner and Drezner (2007), which in turn draws on
work in the marketing literature (particularly Huff 1964). Drezner and
Drezner (2007) specify the probability that a consumer residing at q will
patronize a facility located at p as

P(p) = \frac{A_p e^{-\beta d_p}}{\sum_{p \in P} A_p e^{-\beta d_p}},

where A_p is the attractiveness of the facility, \beta is the parameter of the
exponential distance decay function,20 and d_p is the shortest distance between
residence and facility. We adapt this probability to the context such that the
probability of patronizing brick-and-mortar store p is

P(p) = \frac{A_p e^{-\beta d_p}}{\sum_{p \in P} A_p e^{-\beta d_p} + A_e e^{-\beta d_e}},

where A_e = 1 is the normed attractiveness of online shopping and d_e is the
shortest distance to the nearest delivery point for online-purchased products.
To understand this specification, consider a consumer who can choose between
one brick-and-mortar store and one delivery point for online-purchased products
and who lives equidistant from the two outlets. The attractiveness parameter for
the brick-and-mortar store then describes how much more likely the consumer is
to choose the brick-and-mortar over the online alternative. For example,
A_p = 2 means that the consumer would patronize the brick-and-mortar store two
times out of three.21

20 The exponential function and the inverse distance function dominate the literature, as
discussed by Drezner (2006).
21 One argument for a high attractiveness of the brick-and-mortar stores can be
co-location of retailing, giving the consumer access to several stores for the product in
question in a limited geographical area.
In the analysis, we consider three values of \beta = 1.0, 0.11, 0.035, the first
referring to a situation in which the consumer is very likely to choose the
nearest store or delivery point, the second being the estimated parameter
value based on Californian visitors to shopping malls (Drezner 2006), and
the third being the estimated value based on Swedes' self-reported trips to
buy durable goods (Carling et al. 2012). The values of \beta can be converted
into average distances travelled to a store of 1, 9, or 30 km. Furthermore,
we let A_p = 1.0, 2.0, 5.0 represent the brick-and-mortar stores, including
the case of consumers indifferent to whether they see the product in the
store or online (A_p = 1.0) and that of a consumer who finds it much more
attractive to see and touch the product physically (A_p = 5.0). Table 7
shows how the market share of the brick-and-mortar stores increases due
to their attractiveness when the market share is computed as the expected
number (implied by the model) of consumers patronizing any brick-and-mortar
store divided by the number of consumers. Focusing on the case in
which consumers are willing to consider travelling to stores other than the
nearest one (\beta = 0.035), we note that the market share of brick-and-mortar
stores increases from 55% if consumers find them as attractive as
online shopping (A_p = 1.0) to 83% if consumers find them much more
attractive than online shopping (A_p = 5.0). Considering that \beta = 0.035 is
the most likely estimate in Sweden and that brick-and-mortar stores
currently sell approximately 80% of all purchased consumer electronics,
one may conjecture from Table 7 that Swedish consumers currently regard
brick-and-mortar shopping as about two to five times more attractive than
online shopping, on average.
The last column of the table gives the average CO2 emissions per
consumer and purchase. In calculating the emissions, we take into account
that the consumer will shop at various brick-and-mortar stores and
sometimes shop online. The formula is

\sum_{p=1}^{P} P(p) (d_p e_c + \tilde{d}_p e_t) + (1 - \sum_{p=1}^{P} P(p)) (d_e e_c + \tilde{d}_e e_t),

where e_c and e_t are the CO2 emissions per kilometer driven by consumer cars
and delivery trucks, respectively, \tilde{d}_p is the road distance the truck
travels to store p, and \tilde{d}_e is the road distance the truck travels to
the online delivery point. The formula therefore gives the consumer's expected
CO2 emissions for repeated purchases. An increased likelihood to travel for
shopping implies a higher market share for brick-and-mortar stores, which in
turn leads to a dramatic increase in CO2 emissions. Consider, for example, the
case when brick-and-mortar and online shopping are equally attractive to
consumers, i.e., A_p = 1.0. If consumers are unwilling to travel (\beta = 1),
they will almost always shop online and pick up their purchases at the nearest
delivery points, as that implies the least travelling with resulting low CO2
emissions of 1.23 kg. If they are likely to travel (\beta = 0.035), then they
will sometimes shop online, sometimes at stores near their residences, and
sometimes at stores far from their residences. As a result, their travelling
will on average be extensive, resulting in high CO2 emissions of 5.95 kg.
Table 7 about here.
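A minimal sketch of these calculations, with hypothetical store distances and an assumed truck emission factor (only the 0.15 kg per km car factor and the \beta and A_p values come from the text), could read:

import math

# Sketch of the gravity-model choice probabilities and the expected emissions
# per purchase behind Table 7. Store and truck distances and the truck emission
# factor are hypothetical; A_e = 1 is the normed attractiveness of online
# shopping and beta the distance-decay parameter.
def choice_probabilities(store_dist, A_p, online_dist, beta, A_e=1.0):
    w_stores = [A_p * math.exp(-beta * d) for d in store_dist]
    w_online = A_e * math.exp(-beta * online_dist)
    denom = sum(w_stores) + w_online
    return [w / denom for w in w_stores], w_online / denom

def expected_emissions(store_dist, truck_store_dist, online_dist, truck_online_dist,
                       probs_stores, prob_online, e_car=0.15, e_truck=0.05):
    stores = sum(p * (d * e_car + dt * e_truck)
                 for p, d, dt in zip(probs_stores, store_dist, truck_store_dist))
    online = prob_online * (online_dist * e_car + truck_online_dist * e_truck)
    return stores + online

store_dist = [12.0, 35.0, 60.0]         # consumer's road distance to each store (km)
truck_store_dist = [40.0, 55.0, 80.0]   # truck's road distance to each store (km)
p_stores, p_online = choice_probabilities(store_dist, A_p=2.0, online_dist=4.0, beta=0.035)
print(expected_emissions(store_dist, truck_store_dist, 4.0, 120.0, p_stores, p_online))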
Some of the results presented in Table 7 are illustrated in Fig. 2, which
indicates the geographical areas dominated by brick-and-mortar shopping.
The left panel presents the case in which \beta = 0.11 and A_p = 1, showing
that most of the region, except for the centermost areas surrounding the
brick-and-mortar stores, is served by e-tailing. In the right panel,
consumers supposedly are likely to travel for shopping (\beta = 0.035) and
find brick-and-mortar stores more attractive than online shopping
(A_p = 2), so the more densely populated areas of the region are served
chiefly by brick-and-mortar shopping.
Fig. 2 about here.
We also elaborate on the closure of brick-and-mortar stores to check the
sensitivity of assumption (ii) by stepwise removing, one at a time, the
store with the smallest market share. For example, Table 8 presents the
situation after closing the two stores attracting the smallest shares of
consumers. Although store closure leads to a smaller market share for
brick-and-mortar shopping, the general pattern found in Table 7 remains.
Table 8 about here.
Assumption (xi): The truck is assumed to be loaded to 60% of its capacity,
though the loading could be lower or higher. We therefore check the
sensitivity to this assumption by stipulating that the truck is loaded to 30%
of its capacity, which might be the case if the truck typically returns empty
from the delivery points. We also consider an 80% loading, corresponding
to efficient distribution and a good solution to the travelling salesman
problem, in which the truck finds an efficient route to pass all scheduled
delivery points. Table 9 shows that varying the loadings only modestly
affects the CO2 emissions induced by selling a standard electronics
product at a brick-and-mortar store. The assessment of the online-purchased
product's emissions is somewhat more sensitive to the stipulated loading, but
the difference in emissions between brick-and-mortar- and online-purchased
products remains large.
Table 9 about here.
Assumption (xii): Online-purchased products arrive at the delivery point
by first going via the shortest route to one of the six distribution centers
and then via the shortest route from the distribution center to the delivery
point. This is essentially how Swedish Post described their logistics
solution to us, although they were unwilling to go into detail.
Assumption (xiii): A product sold at a brick-and-mortar outlet comes to the
store via the shortest route from the nearest of the three gateways into the
region (xiii).
Assumptions (xii) and (xiii) may be flawed and could in that case lead to
underestimated CO2 emissions. From our analysis, we know the distances
traveled via the shortest routes from points of entry into the region to
consumer residences, and use these to calculate total emissions in
accordance with equation (1). If interested, one could use equation (1) to
introduce longer transportation routes for both the consumer and/or
retailer distribution networks and recalculate the total emissions. The
equation could for instance be used if one suspected that consumers often
used multi-purpose trips when shopping, in which case we would
introduce only shorter routes specifically reflecting the marginal transport
effort related to shopping. However, multi-purpose shopping trips are not
that common in Sweden (Jia et al. 2013) and are only relevant when
comparing online and brick-and-mortar shopping if behavior differs
systematically between the two types of shopping.
6. Discussion
Retailing creates an environmental impact that should not be
underestimated. In Great Britain, the average consumer made 219
shopping trips and travelled a total of 926 miles for retail purposes in 2006
(DfT 2006). Meanwhile, in a Swedish setting, Carling et al. (2013a)
reported that the current location of retailers in the Dalecarlia region of
Sweden was suboptimal, and that suboptimal retailer locations generated
on average 22% more CO2 emissions than did optimal locations.
An empirical literature (e.g., Wiese et al. 2012; Edwards et al. 2010)
analyzes the environmental impact of online shopping. However, this
literature has focused on the emissions induced by consumers traveling to
and from brick-and-mortar stores or online delivery points, and has not
compared any but the “last-mile” environmental impacts of online versus
brick-and-mortar retailing.
This paper sought to develop a method for empirically measuring the CO2
footprint of brick-and-mortar retailing versus e-tailing from the point of
entry into a region (e.g., country, county, or municipality) to the consumer's
residence. The method developed was then used to calculate and compare
the environmental impacts of buying a standard electronics product online
and in a brick-and-mortar store in the Dalecarlia region in Sweden. The
method developed only requires knowledge of the road network of the
studied region, the location of the residences of the population (measured
as precisely as possible), and the locations of the brick-and-mortar outlets
and e-tailer delivery points. The method also requires several assumptions
that need scrutiny to determine whether the method is robust to changes in
the underlying assumptions. This was done thoroughly in this study, and
the results indicate that the method developed is very robust to changes in
the underlying assumptions.
The results indicate that e-tailing results in a substantial reduction in CO2
emissions from consumer travel. The average distance from a consumer
residence to a brick-and-mortar electronics retailer is 48.54 km in the
Dalecarlia region, while the average distance to an online delivery point is
only 6.7 km. As such, making the purchase online will lead to only 14% of
the consumer travel emissions that would have resulted from purchasing
the product in a brick-and-mortar store. It should also be noted that the
online delivery points in the Dalecarlia region are well located relative to
consumer residences. The actual delivery point locations differ from those
that would minimize CO2 emissions caused by consumer travel by under
0.7 km. The results also indicate that e-tailing causes the distance traveled
from regional entry point to delivery point (i.e., brick-and-mortar store or
online delivery point) to increase. On average, the product travels 47.15
km to the brick-and-mortar store versus 122.75 km to the online delivery
point.
However, one must recall that a product carried in a consumer car induces
much higher CO2 emissions than does the same product delivered by a
professional carrier transporting many units simultaneously. As such, we
have also calculated the total CO2 emissions from regional entry point to
consumer residence for the two options, i.e., e-tailing or brick-and-mortar
stores. The results indicate that purchasing the product in a brick-and-mortar store on average causes 7.4 kg of CO2 to be emitted along the
whole chain from regional entry point to consumer residence, while
purchasing the same product online only induces on average 1.2 kg of CO2
emissions. As such, consumers in the Dalecarlia region who switch from
buying the product in a store to buying the same product online on average
reduce their transport CO2 emissions by approximately 84%.
This is a case study and more cases are needed. It would be interesting to
see how the results in this study would hold for other cases, especially in
more densely populated areas in Sweden, as well as in other countries
where other shopping behavior could be observed, and where the
distribution of the goods is done differently. It would also be of interest to
further investigate how good the transportation work done by professional
carriers is from a CO2 emissions perspective, and redo the analysis in this
paper taking the sub-optimality of the carrier routes into account. In the
meanwhile, we have outlined a method for following the products from their
entry into a region to the consumer residence, which seems to be a fruitful way
to compare transport-related CO2 emissions induced by brick-and-mortar
retailing with emissions from online shopping.
Acknowledgments
The authors would like to thank Sven-Olov Daunfeldt, Oana Mihaescu,
Pascal Rebreyend and participants at the 8th HUI Workshop in Retailing
(Tammsvik, Sweden, January 16-17, 2014) and the 21st EIRASS
Conference on Recent Advances in Retailing and Services Science
(Bucharest, Romania, July 7-19, 2014) for valuable comments and
suggestions. This study was financed by a Dalarna University internal
grant, and the funding source had no involvement in study design, data
collection and analysis, or the decision to submit the article for
publication.
References
Cairns, S., (2005), Delivering supermarket shopping: more or less traffic?,
Transport Reviews: A Transnational Transdisciplinary Journal, 25, 51-84.
Carling, K., Han, M., and Håkansson, J., (2012), Does Euclidean distance work
well when the p-median model is applied in rural areas?, Annals of Operations
Research, 201:1, 83-97.
Carling, K., Han, M., Håkansson, J., and Rebreyend, P., (2012), An empirical test
of the gravity p-median model, Working papers in transport, tourism,
information technology and microdata analysis, 2012:10.
Carling, K., Han, M., Håkansson, J. and Rebreyend, P., (2014), Distance measure
and the p-median problem in rural areas, Annals of Operations Research, Online
July 27.
Carling, K., Håkansson, J. and Jia, T., (2013b), Out-of-town shopping and its
induced CO2-emissions, Journal of Retailing and Consumer Services, 20:4, 382-388.
Carling, K., Håkansson, J. and Rudholm, N., (2013a), Optimal retail location and
CO2-emissions, Applied Economics Letters, 20:14, 1357-1361.
Carling, K., and Meng, X., (2014), On statistical bounds of heuristic solutions to
location problems, Working papers in transport, tourism, information technology
and microdata analysis, 2014:10.
Chiyoshi, F.Y. and Galvão, R.D., (2000), A statistical analysis of simulated
annealing applied to the p-median problem, Annals of Operations Research 96,
61-74.
Cullinane, S., (2009), From bricks to clicks: The impact of online retailing on
transport and the environment, Transport Reviews, 29, 759-776.
Daskin, M., (1995), Network and discrete location, Wiley, New York.
DfT, (2006), National Transport Survey: 2006, London, TSO.
Dijkstra, E.W., (1959), A note on two problems in connexion with graphs,
Numerische Mathematik, 1, 269–271.
Drezner, T., (2006), Derived attractiveness of shopping malls, IMA Journal of
Management Mathematics, 17, 349-358.
Drezner T. and Drezner Z., (2007), The gravity p-median model, European
Journal of Operational Research, 179, 1239-1251.
Edwards, J.B., McKinnon, A.C. and Cullinane, S.L., (2010), Comparative analysis
of the carbon footprints of conventional and online retailing: A "last mile"
perspective, International Journal of Physical Distribution & Logistics
Management, 40, 103-123.
Hakimi, S.L., (1964), Optimum locations of switching centers and the absolute
centers and medians of a graph, Operations Research 12:3, 450-459.
Hakimi, S.L., (1965), Optimum distribution of switching centers in a
communications network and some related graph theoretic problems, Operations
Research 13, 462-475.
Han, M., Håkansson, J. and Rebreyend, P., (2013), How do different densities in
a network affect the optimal location of service centers?, Working papers in
transport, tourism, information technology and microdata analysis, 2013:15.
Huff, D.L., (1964), Defining and estimating a trade area, Journal of Marketing,
28, 34–38.
HUI Research, (2013), e-barometern, Årsrapport 2012,
http://www.hui.se/statistik-rapporter/index-och-barometrar/e-barometern.
HUI Research, (2014), e-barometern, Årsrapport 2013,
http://www.hui.se/statistik-rapporter/index-och-barometrar/e-barometern.
Håkansson, J., (2000), Impact of migration, natural population change and age
composition on the redistribution of the population in Sweden 1970 – 1996,
CYBERGEO, 123.
Jia, T., Carling, K. and Håkansson, J., (2013), Trips and their CO2 emissions
induced by a shopping center, Journal of Transport Geography, 33, 135-145.
Lennartsson, D. and Lindholm, P., (2004) Utflyttning av produktion inom den
svenska industrin/Outsourcing of production in Swedish industry. Statistics
Sweden (in Swedish).
RAC Foundation, (2006), Motoring Towards 2050: Shopping and Transport
Policy, RAC, London.
Stead, D., (1999), Relationships between transport emissions and travel patterns
in Britain, Transport Policy, 6, 247–258.
Weber, C.L., Matthews, H.S., Corbett, J.J. and Williams, E.D., (2007), Carbon
emissions embodied in importation, transport and retail of electronics in the U.S.:
A growing global issue, in: Proceedings of the 2007 IEEE Symposium on
Electronics and the Environment, 174-179.
Wiese, A., Toporowski, W. and Zielke, S., (2012), Transport-related CO2 effects
of online and brick-and-mortar shopping: A comparison and sensitivity analysis
of clothing retailing, Transportation Research D, 17, 473-477.
Zhan, F.B. and Noon, C.E., (1998), Shortest path algorithms: an evaluation using
real road networks, Transportation Science, 32:1, 65-73.
Table 1. Consumers’ return travel distance on the road network from home
to nearest brick-and-mortar store and online delivery point (in km),
showing current and p-median optimal locations.
Percentile:
5
25
50
75
95
Mean
St. dev.
Current location
Brick-andOnline
mortar
delivery
stores
points
2.64
0.80
7.12
2.08
25.86
3.64
77.96
7.88
162.10
22.58
48.54
6.70
55.72
8.46
Optimal location
Brick-andOnline
mortar
delivery
stores
points
1.64
0.92
5.20
2.12
16.46
3.70
40.12
7.56
88.80
17.68
28.76
5.98
36.96
7.14
Table 2. Distance products travel on the road network to brick-and-mortar
stores and online delivery points (in km).

              Current location                        Optimal location
Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online delivery
              stores             points              stores             points
5             14.23              22.25               18.00              29.15
25            39.86              75.75               51.20              78.39
50            52.63              104.89              115.45             107.97
75            –                  168.53              –                  192.93
95            –                  253.17              –                  272.25
Mean          47.15              122.75              62.28              130.81
St. dev.      40.61              70.54               47.24              76.49
Table 3. CO2 emissions (in kg) induced by transporting a typical product
from the regional boundary to the consumer's home via current outlets.

Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online
              stores             points              (incl. walking)a   (incl. walking)a
5             0.48               0.29                0.08               0.06
25            1.13               0.51                1.13               0.18
50            3.96               0.78                3.96               0.41
75            11.79              1.42                11.79              1.42
95            24.58              3.65                24.58              3.65
Mean          7.44               1.22                7.40               1.05
St. dev.      8.41               1.30                8.44               1.39
a It is assumed that all consumers within 2 km of the outlet walk to pick up the product
and while doing so produce no CO2 emissions.
Table 4. CO2 emissions (in kg) induced by transporting a typical product
from the regional boundary to the consumer's home via outlets that are
environmentally optimally located.

Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online
              stores             points              (incl. walking)a   (incl. walking)a
5             0.56               0.28                0.10               0.06
25            1.39               0.52                1.39               0.18
50            3.13               0.77                3.13               0.41
75            6.50               1.29                6.50               1.29
95            13.44              3.04                13.44              3.04
Mean          4.77               1.12                4.73               0.95
St. dev.      5.31               1.13                5.35               1.23
a It is assumed that all consumers within 2 km of the outlet walk to pick up the product
and while doing so produce no CO2 emissions.
Table 5. CO2 emissions (in kg) induced by transporting a typical product
from the regional boundary to the consumer's home via current outlets;
greater CO2 emissions assumed in urban areas.

              Baseline                                Higher urban emissions
Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online delivery
              stores             points              stores             points
5             0.48               0.29                0.75               0.43
25            1.13               0.51                1.87               0.79
50            3.96               0.78                4.99               1.21
75            11.79              1.42                13.26              1.95
95            24.58              3.65                26.92              4.38
Mean          7.44               1.22                8.67               1.64
St. dev.      8.41               1.30                9.05               1.45
Table 6. CO2 emissions (in kg) induced by transporting a typical product
from the regional boundary to the consumer's home via current outlets;
consumers weighted by age.

Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online
              stores             points              (incl. walking)a   (incl. walking)a
5             0.48               0.29                0.08               0.06
25            1.14               0.50                1.14               0.18
50            3.98               0.77                3.98               0.41
75            11.87              1.40                11.87              1.40
95            24.58              3.61                24.58              3.61
Mean          7.46               1.21                7.42               1.04
St. dev.      8.40               1.29                8.44               1.38
a It is assumed that all consumers within 2 km of the outlet walk to pick up the product
and while doing so produce no CO2 emissions.
Table 7. Market share of brick-and-mortar stores and average CO2
emissions (in kg) induced by transporting a typical product from the
regional boundary to the consumer's home via current outlets; seven
brick-and-mortar stores.

A_p    \beta     Market share (%)    CO2 emissions
1      1         11.53               1.23
1      0.11      30.56               1.85
1      0.035     55.43               5.95
2      1         16.87               1.24
2      0.11      41.70               2.16
2      0.035     69.06               7.44
5      1         24.86               1.27
5      0.11      55.10               2.66
5      0.035     82.68               9.20
Table 8. Market share of brick-and-mortar stores and average CO2
emissions (in kg) induced by transporting a typical product from the
regional boundary to the consumer's home via current outlets; 5
brick-and-mortar stores.

A_p    \beta     Market share (%)    CO2 emissions
1      1         8.60                1.23
1      0.11      25.42               1.79
1      0.035     49.42               5.51
2      1         12.78               1.24
2      0.11      34.47               2.07
2      0.035     62.45               7.08
5      1         19.32               1.27
5      0.11      45.38               2.53
5      0.035     76.40               9.17
Table 9. CO2 emissions (in kg) induced by transporting a typical product
from the regional boundary to the consumer's home via current outlets;
trucks loaded to two capacity levels.

              30% loading                             80% loading
Percentile    Brick-and-mortar   Online delivery     Brick-and-mortar   Online delivery
              stores             points              stores             points
5             0.58               0.40                0.46               0.25
25            1.21               0.69                1.12               0.46
50            4.04               1.01                3.94               0.72
75            11.88              1.65                11.77              1.35
95            24.84              3.93                24.51              3.60
Mean          7.54               1.43                7.41               1.16
St. dev.      8.46               1.34                8.40               1.29
Figure 1. a) Consumer residences and current locations of brick-and-mortar
stores; b) national road network and current locations of postal
distribution centers and delivery points.
Figure 2. Market areas when (a) \beta = 0.11 and A_p = 1 and when (b)
\beta = 0.035 and A_p = 2 for current brick-and-mortar stores.
PAPER VI
On administrative borders and accessibility to public services:
The case of hospitals in Sweden.
Authors1: Xiangli Meng, Kenneth Carling, Johan Håkansson2, Pascal
Rebreyend
Abstract: An administrative border might hinder the optimal allocation of
a given set of resources by restricting the flow of goods, services, and
people. In this paper we address the question: Do administrative borders
lead to poor accessibility to public services such as hospitals? In answering
the question, we have examined the case of Sweden and its regional
borders. We have used detailed data on the Swedish road network, its
hospitals, and its geo-coded population. We have assessed the population’s
spatial accessibility to Swedish hospitals by computing the inhabitants’
distance to the nearest hospital. We have also elaborated several scenarios
ranging from strongly confining regional borders to no confinements of
borders and recomputed the accessibility. Our findings imply that
administrative borders only marginally worsen accessibility.
Key words: hospitals, optimal location, network distance, travel time,
location model
1 School of Technology and Business Studies, Dalarna University, Borlänge, Sweden
2 Corresponding author: e-mail: jhk@du.se

1. Introduction
A national, regional or any other administrative border might be
considered a barrier to the free flow of goods, services, and people, and
can thereby hinder the optimal allocation of a given set of resources. As a
consequence, in particular in borderlands, the highest achievable economic
and social utility may not be attained. Van Houtum (2000) gives an
extensive review of the study of borders with an emphasis on the EU and
its internal borders. In spite, or maybe because, of the globalization
process, the recent upsurge of research on borders is discussed by
Andersson et al (2002). While not all borderland studies view a border as a
barrier, it is widely held that borders reduce trade and are a demarcation of
the labor market. In fact, a core part of the EU policy has been to promote
cross-border transaction of goods, services, and labor towards a common
European market. There is also a growing amount of cross-border
cooperation between public authorities in Europe. However, it is still too early to
regard such cooperation as defining new territorial entities and joint
regional policies (e.g. Perkmann, 2007; Popescu, 2008; Harguindéguy and
Bray, 2009). Public services in the EU are still normally confined by
national or regional borders. As an illustration, López et al (2009) discuss
the funding of Spanish rail investments in light of them having substantial
spill-overs in French and Portuguese regions bordering Spain.
Similar to transport infrastructure, health care is often under public control
in the EU. In this paper, we examine how regional borders affect the
spatial accessibility to hospitals within Sweden. Since Swedish regions are
comparable in geographical size to many European countries such as
Belgium, Denmark, Estonia, Slovenia, Switzerland, and the Netherlands
as well as provinces in Italy and Spain and states in Germany with
self-governed health care, we believe the results will be informative about
the effect of Europe's internal borders on the accessibility of health care.
To be specific, we address three issues. The first is the effect of borders on
inhabitants’ spatial accessibility to hospitals. The second is the quality of
the location of hospitals and the resulting accessibility. The third is
accessibility in relation to population dynamics.
Sweden, for several reasons, is a suitable case for a borderland study of
accessibility to hospitals. Firstly, we have access to good data of the
national road network and a precise geo-coding of the inhabitants, the
hospitals, and the regional borders. Secondly, hospital funding,
management, and operation are confined by the regional borders. Thirdly,
after 200 years of a stable regional division of the country a substantial
re-organization of the regions is due.
The paper is organized as follows: In Section 2, the institutional settings of
the Swedish health care and the regional re-organization are discussed
jointly with a short review on location models and their application in
analyzing populations’ spatial access to health care. Section 3 presents
data, defines the distance measures, and provides some descriptive
statistics of key variables. Furthermore, a sketch of how health care is
organized in Sweden is given jointly with maps of Sweden that put the
location model into the empirical context. In Section 4 the experimental
design leading to a ‘what-if’ analysis and the optimization method are
described. Results are presented in Section 5, and the paper ends with a
concluding discussion in Section 6.
2. Swedish health care, accessibility, and location models
Health care in Sweden is organized and tax funded at a regional level
because it is the regions’ primary responsibility. The health care is
politically controlled and the population can respond to its management by
democratic channels such as elections and (less often) referendums. The
regional division of Sweden has remained stable for more than 200 years,
but it is currently subject to a major revision. The primary reason for the
revision is that many regions as a consequence of population dynamics
and historical decisions are locked into suboptimal solutions within the
region. This makes it difficult to operate health care efficiently, which
leads to long queues and high production costs (see e.g. McKee and Healy,
2002).
Health care service depends to a large extent on face-to-face activities and
hence the spatial accessibility for the population is a key concern. Central
to the supply of health care are the hospitals. Drawing on efficiency
arguments, the trend in Sweden and elsewhere (Hope 2011) has been a
concentration of hospitals in fewer locations with a possible consequent
decrease in spatial accessibility for the population. The concentration
seems to go hand in hand with urbanization, but it is counteracted by
suburbanization, counter-urbanization and urban sprawl from the 1960s.
The net outcome on the accessibility for the population is unclear due to
these counteracting forces. Nonetheless, the concentration of health care
has led to a growing number of people questioning its management. For
instance, in the Swedish region Västerbotten, a referendum regarding a
political proposal of further concentration of health care was held on
September 8th, 2013. In the referendum, about 90% of the
voters rejected the proposal.
The direction of the regional revision of Sweden is clear; the number of
regions shall decrease from the present 21 regions to about 6 to 8 regions.
The reason behind the revision is that larger regions imply greater
populations, which allows greater potential to organize health care
efficiently. As for spatial accessibility, such revision would reduce the
presumed negative border effect, but not necessarily lessen the
sub-optimality of solutions within the regions. Because of this, some political
parties, and most notably the health minister, have argued for hospitals to
be organized and managed on a national level. A key fact in the debate on
administrative level of health care in Sweden ought to be spatial
accessibility for the population under the alternatives, a fact that up to now
is missing. Furthermore, to the best of our knowledge, there is no
international study on the potential impact of a national administrative
revision on the population's spatial accessibility to hospitals.
There are, however, many studies that measure and describe a population’s
spatial accessibility to health care usually in a confined area (e.g. Higgs,
2004; Perry and Gesler, 2000; Shi et al, 2012; Tanser et al, 2006). These
studies did not provide, as a benchmark, the best possible spatial
accessibility. To do so, an analytic procedure that, for instance, minimizes
the average distance to the health care is necessary. To address such a
general location problem the p-median model is commonly used (see e.g.
Hakimi, 1964; Reese, 2006). The p-median model intends to find an
optimal solution for the location of supply points that minimizes the
average distance to the population’s nearest supply point. This model has
been applied to solve location problems of hospitals (see e.g. Daskin and
Dean, 2004; Wang, 2012). Unfortunately, the p-median problem is NP-hard, forcing most applications to address rather small problems of limited
spatial reach. The largest p-median problem solved that we are aware of is
based on synthetically generated data consisting of 89,600 nodes (Avella et al,
2012). Avella et al's (2012) problem is modest relative to a problem of
optimizing spatial accessibility on a national level assuming geo-coded
data with high geographical resolution.
It is, therefore, an open question whether it is possible to derive the
benchmark of the best possible spatial accessibility for the population on a
national level. We shall attempt to do so using about 5,400,000 inhabitants
and their residence geocoded in about 190,000 squares each of which is
500 by 500 meters. The inhabitant will be assumed to patronize the nearest
of Sweden’s 73 hospitals by travelling along the shortest route on
Sweden’s very extensive road network of about 680,000 kilometers.
The p-median model is not the only location model relevant for optimizing
spatial accessibility of hospitals. In a literature review by Daskin and Dean
(2004) and more recently by Wang (2012), several location-allocation
models for finding optimal location of health care facilities were described
and summarized. The location models optimize facility locations
according to different objectives. One common location model is the
location set covering problem (LSCP) which minimizes the number of
facilities covering the whole demand (Toregas and ReVelle, 1972).
Relative to the p-median model, the LSCP model would lead to a change
in the number of hospitals compared with the present situation, thereby
indicating merging of current hospitals or adding of new hospitals.
Another commonly used model was developed by Church and ReVelle
(1974) who go in another direction by maximizing the demand covered
within a desired distance or time threshold (maximum covering location
problem, MCLP). Relative to the p-median model, the MCLP model puts
little weight on inhabitants in remote areas implying a drastic deterioration
in accessibility for them. Yet another model is the center model described
by Wang (2012) with the objective of minimizing the maximum distance
to the nearest facility. The center model is perhaps best suited for
emergency service planning as it, compared with the p-median model,
gives heavy weight to the remote inhabitants and downplays the huge
demand of densely populated areas.
To locate health care facilities of different hierarchical levels such as
hospitals with specialized care and local health centers, the hierarchical
type models have been proposed (Michael et al, 2002; Narula, 1986).
Hierarchical location models locate p hospitals for health care with
services on different levels simultaneously. Hierarchical location models
are computationally very heavy which makes them most suitable for
solving problems where the number of facilities and nodes for possible
location is small.
Although the alternative location models are interesting, we will focus on
the best possible spatial accessibility in the sense of minimizing the
average distance to the nearest hospital for the population. In other words,
the p-median model will be used. Furthermore, we will only consider
homogenous hospitals meaning that hierarchical location models are
unwarranted.
3. Data and descriptive statistics
Sweden is about 450,000 km2. Figure 1 depicts the country’s 21 regions.
The size of the regions ranges from 3,000 km2 (the island Gotland) to the
northernmost region Norrbotten of 97,000 km2 with an average regional
size of 21,000 km2. To put the geographical size of the regions of Sweden
in the European perspective, it may be noted that the smallest regions are
of the size of Luxembourg, the middle sized are comparable with Belgium
and German states, and the largest are comparable with Hungary and
Portugal.
We have access to high quality, geo-coded data of the Swedish inhabitants
as of 2008. They are geo-coded in squares of 500 by 500 meters. All
inhabitants within a certain square are geo-coded to the center (point) of
the corresponding square where the center is taken to be the demand point
in the ensuing location analysis. The inhabitants are distributed in 188,325
squares making up approximately 10 percent of the country’s area. The
population used in the analysis is all the inhabitants in the age of 20 to 64
years and it amounts to 5,411,573.3

3 The restriction to the working population is a consequence of the data having been
gathered for labor market related studies.
Figure 1a shows the distribution of the population. The population is
asymmetrically distributed in the country due to natural conditions such as
climate, variation in altitude, quality of the soil, access to water and so
forth. The greater part of the population lives in the southern part of the
country and along the coast of the northern part. While the population
density of Sweden (20 inh./km2) is very low compared with other
European countries, the variation in population density between the
regions is substantial. The western part of northern Sweden is very
sparsely populated with a population density below one inhabitant per
square kilometer, whereas many regions in the southern parts have a
population density of about 50 inh./km2 with an extreme of 350 inh./km2.
A hospital is a complex producer of health care and consequently its
definition is nontrivial as discussed by Mckee and Healy (2002). For this
study we have accepted a conventional classification of health care in
Sweden used for hospital ranking in Sweden 2010 (Sveriges bästa sjukhus,
2010). This classification identifies 73 hospitals in Sweden.4 The hospitals
are located in 69 of the 1,938 settlements5 (depicted in Figure 1b). Two
settlements – the two largest cities in the country, Stockholm and
Gothenburg – contain three hospitals each. In the search for the optimal
location of hospitals, each of the 1,938 settlements is considered as a
candidate for locating a hospital.

4 It goes without saying that a small part of the health care is highly specialized and not
offered everywhere. The national government funds and exercises the power to decide
the location of such health care, but we shall abstract from it due to its rarity.
5 Only settlements with more than 200 inhabitants according to the census of 1995 are
considered in the location analysis.
Figure 1a-c: Distribution of the population (a), settlements (b), and regions,
current hospitals, and major national roads (c).
Figure 1c illustrates the locations of the 73 hospitals in Sweden. The
number of hospitals in Sweden is low compared with other European
countries. There are about 0.75 hospitals per 100,000 inhabitants in
Sweden. The overall average for Europe is 2.6 hospitals per 100,000
inhabitants with a range from 1 (the Netherlands) to 6 (Finland) (Hope
2011). In spite of Sweden’s dissimilarity to other European countries in
this respect, the expenditure on health care in Sweden is similar to that of
other European countries, at about 10 per cent of GDP.
The population size of the regions is about 300,000 inhabitants and
consequently about three hospitals per region are to be expected. In fact, this is
the case with three exceptions being the markedly more populated regions
surrounding the cities Stockholm, Gothenburg, and Malmo. These regions
have 6-9 hospitals and a population exceeding 1,300,000 inhabitants.
Figure 2: National roads and their speed limit.
As mentioned before, the inhabitants may travel between the residence
and the hospital along some 680,000 kilometers of roads. National roads
maintained by the state are the most important roads in the road network
and they make up 15 per cent of Sweden’s road network. We have
retrieved the road network information from the national road data base
(NVDB). In Figure 2 the national roads are visualized. There are
31,000,000 road segments stored in NVDB. Each segment is stored along
with other attributes such as speed limit. The speed limit varies between 5
and 120 km/h with 80 percent of the road segments having a speed limit of
70 km/h. From Figure 2 it may be noticed that national roads with a speed
limit below 80 km/h dominate in the rural areas while national roads with
higher speed limits connect the larger towns by a sparse network. Within
urban areas the speed limit is usually 50 km/h or lower. We have processed
the data into a country wide road network to enable both the computing of
travel distance and travel time between the 188,325 demand points and the
1,938 candidate nodes for hospital location (Meng and Rebreyend, 2014).
While there is some latitude for the inhabitants to select the hospital to
patronize within the region, it is safe to assume that the chosen hospital is
that nearest to the residence and that the shortest route to the hospital is
taken. This means that the shortest route between the hospitals and the
demand points needs to be identified. To do so, we have used the
algorithm originally proposed by Dijkstra (1959). At the onset, the
algorithm identifies and sets all nodes (i.e. settlements and demand points)
as unvisited and assigns them infinity as distance. The algorithm begins
with a starting node. This node is marked as visited and receives the
distance 0. The distance of all its neighbors is then updated. The algorithm
thereafter iterates over the unvisited nodes. At each step the unvisited
node with the lowest current distance from the starting node is picked. The
node is marked as visited (and then its distance is the lowest distance to
the starting node) and the distance of each of its neighbors to the starting
node is updated if needed. The algorithm can stop at this stage if the node
is the destination node. In our case, we continue the algorithm until all
nodes are marked as visited since we need distances from one point to all
the others. The resulting Origin-Destination (OD) matrix was created on a
Dell Optiplex 9010 with an Intel Core I7-3770 (3.4 GHz), 32 GB of RAM,
and a Linux operating system. It took 12.5 hours to generate the matrix.
The final OD matrix is of the dimension 1,938 by 188,227 representing the
candidate nodes of locating hospitals and the demand points in Sweden. 98
demand points were lost in the generation of the OD matrix due to
residences without access to the road network.
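A compact sketch of the shortest-path step described above, using a priority-queue variant of Dijkstra's algorithm on a toy graph (in the study, the graph is the national road network and the run is repeated for each of the 1,938 candidate settlements to build the OD matrix), is:

import heapq

# Sketch: Dijkstra's algorithm from one source node to all other nodes.
# The toy graph is illustrative only.
def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbour, edge_length)."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)               # settled: d is the shortest distance to node
        for neighbour, length in graph[node]:
            if d + length < dist[neighbour]:
                dist[neighbour] = d + length
                heapq.heappush(heap, (d + length, neighbour))
    return dist

toy = {"a": [("b", 2.0), ("c", 5.0)], "b": [("a", 2.0), ("c", 1.0)], "c": [("a", 5.0), ("b", 1.0)]}
print(dijkstra(toy, "a"))               # {'a': 0.0, 'b': 2.0, 'c': 3.0}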
4. Experimental design
As stated in the introduction, we intend to address three issues. The first
one is the effect of borders on inhabitants’ spatial accessibility to hospitals.
The second is the accessibility to hospitals without restrictions of borders
and where hospitals are optimally located. The third is accessibility in
relation to population dynamics.
In addressing the first issue, we first compute the population’s distance to
the nearest hospital along the shortest route. In this computation, the
inhabitants may only patronize a hospital in their residential region. In the
alternative scenario, the inhabitants may patronize hospitals in any region
in which case boundaries implied by the borders are removed. Thus, we
also compute the distance when the inhabitant may patronize the nearest
hospital of any region.
The second issue to be addressed is the location of the current 73 hospitals.
Are they located in a way that yields the best possible accessibility, subject
to the restriction of 73 hospitals in the country? To answer the question
we identify the optimal location of the 73 hospitals, where by optimality is
meant a location of the hospitals such that the population's distance to the
nearest hospital (irrespective of regional borders) is minimized.
To find the optimal location of hospitals we use the p-median model. It
can be stated as:

Minimize  \frac{1}{R} \sum_{i=1}^{N} \sum_{j=1}^{M} h_i d_{ij} x_{ij}
s.t.      \sum_{j=1}^{M} x_{ij} = 1,  i = 1, 2, \ldots, N
          x_{ij} \le y_j
          \sum_{j=1}^{M} y_j = p

where R is the number of inhabitants, I is the set of N demand nodes indexed
by i, J is the set of M candidate locations (i.e. settlements) indexed by j, h_i
is the number of inhabitants in demand point i, d_{ij} is the distance of the
shortest route between demand point i and candidate location j, and p is
the number of hospitals to be located. Furthermore, x_{ij} equals one if the
demand point i is assigned to a hospital at location j and zero otherwise,
whereas y_j equals one if a hospital is located at point j and zero otherwise.
The distance is measured both as travel distance in meters and travel time
in seconds in the road network. Often Euclidean distance is used as a
distance measure, but it has been found to be unreliable (Bach, 1981;
Carling et al, 2012; 2014).
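For a given OD matrix, evaluating the p-median objective for a candidate set of hospital locations can be sketched as follows (array sizes are toy-sized, not the study's 1,938 by 188,227):

import numpy as np

# Sketch: the p-median objective for a set of open hospital locations, given an
# OD matrix of shortest-route distances. Assigning every demand point to its
# nearest open facility satisfies the model's constraints.
def p_median_objective(od, population, open_sites):
    """od: (candidates x demand points) distances; population: counts per demand point."""
    nearest = od[open_sites, :].min(axis=0)    # distance to the nearest open facility
    return float((population * nearest).sum() / population.sum())

rng = np.random.default_rng(0)
od = rng.uniform(1, 100, size=(20, 500))       # toy OD matrix
pop = rng.integers(1, 200, size=500)           # toy population counts
print(p_median_objective(od, pop, open_sites=[3, 7, 11]))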
The p-median model assigns the inhabitants to the nearest hospital without
considering the maximum capacity of a hospital. In this case, this might
lead to absurdly large hospitals in Stockholm and Gothenburg since their
large and concentrated populations are represented by one single
settlement each. To overcome the problem in the implementation of the
p-median model, we comply with the current situation by assigning three
hospitals in the same candidate location in Stockholm and Gothenburg.
Solving the p-median problem is a nontrivial task as the problem is NP-complete (see Kariv & Hakimi, 1979), implying that enumeration of all
possible solutions is infeasible. Much research has been devoted to
developing efficient (heuristic) algorithms to solve the p-median model (see
Daskin, 1995; Handler and Mirchandani, 1979). We solve the problem by
using a common heuristic solution method known as simulated annealing
(SA) (see Levanova and Loresh, 2004). Alternative solution methods for
the p-median model are extensively discussed by Reese (2006).
The virtue of simulated annealing, as of other heuristic methods, is that the
algorithm iterates towards a good solution, though not necessarily the
optimum, since a stopping point must be given. As a consequence, it is
unknown whether the solution is close to the optimal solution or not.
However, it has been shown that statistical confidence intervals derived from
the Weibull estimator may be used for estimating the uncertainty of the
solution with regard to the optimum (Carling and Meng, 2014). We run SA
until the confidence intervals are very tight – in matters of travel time it
amounts to some seconds.
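A rough sketch of such a simulated annealing run for the p-median model is given below; the cooling schedule and iteration budget are placeholders, and in the study the runs continue until the Weibull-based intervals are tight:

import math
import random

# Sketch: simulated annealing for the p-median model. Swap one open location
# for a closed candidate, always accept improvements, and accept deteriorations
# with a temperature-dependent probability.
def simulated_annealing(objective, n_candidates, p, iters=5000, t0=1.0, cooling=0.999):
    current = random.sample(range(n_candidates), p)
    cur_val = objective(current)
    best, best_val = list(current), cur_val
    temp = t0
    for _ in range(iters):
        candidate = list(current)
        candidate[random.randrange(p)] = random.choice(
            [j for j in range(n_candidates) if j not in current])
        val = objective(candidate)
        if val < cur_val or random.random() < math.exp(-(val - cur_val) / temp):
            current, cur_val = candidate, val
            if val < best_val:
                best, best_val = list(candidate), val
        temp *= cooling
    return best, best_val

Together with the objective sketch above, a call such as simulated_annealing(lambda s: p_median_objective(od, pop, s), n_candidates=20, p=3) returns a candidate configuration and its value; repeated independent runs would supply the sample of solution values on which such confidence intervals can be estimated.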
As far as the third issue is concerned, identifying an optimal location of
hospitals is done at a specific point in time. Is it likely that this optimum
is robust to population dynamics? To address this question the population
is divided by age and the optimal location of hospitals is identified for
both the younger part of the population (20-39) and the older part (50-64).
The dissimilarity of the two solutions is thereafter examined.
In sum, the experiments related to the aim of the paper examine the
current situation to a number of counterfactual scenarios with regional
borders removed, national (and optimal) allocation of hospitals, and
redistribution of the population.
5. Results
5.1 The effect of removing regional borders
Table 1 shows the average and the median distance to the nearest of the
current 73 hospitals. The inhabitants have on average 17.9 kilometers to
their nearest hospital within the region while the median distance is 11.3
kilometers. The time it takes to travel the distance in the road network,
assuming attained velocity to be the speed limit, is on average 15 minutes
and 18 seconds while the median value is 11:06 minutes.
If the population were free to patronize hospitals irrespective of regional
borders, the distance would decrease somewhat. For instance, the
inhabitants would on average have the distance to a hospital shortened by
0.6 kilometers or by 25 seconds. The resulting improvement in
accessibility would be about 3 percent.
The majority of the inhabitants would be unaffected by the removal of
regional borders, a fact that follows from the median distance being
(almost) identical in the current and the counterfactual situation.
Table 1: The inhabitants' distance to the nearest hospital within the region as well
as within Sweden.

                 Within the region          Within Sweden
Measure          Mean        Median         Mean        Median
Distance (km)    17.9        11.3           17.3        11.3
Time (min)       15:18       11:06          14:53       11:08

Unsurprisingly, a fraction of the population living close to the regional
borders would benefit from them being removed. To examine the size of
this fraction of the population, we have computed each inhabitant's
shortening of the distance to a hospital as a consequence of the removal of
regional borders. Figure 3 gives the shortening in distance (in percent) to
the nearest hospital. The figure shows that a majority of the inhabitants (55
per cent) would be unaffected as their nearest hospital is already located in
their region of residence. However, 45 per cent of the inhabitants
would be better off by having the opportunity of patronizing a hospital in a
neighboring region. This opportunity would be of marginal importance
though, as the shortening in distance is at most some 10 per cent.
As a result, the removal of regional borders has little effect on improving
the accessibility to hospitals in Sweden. Most inhabitants would be
unaffected, but those affected would be subject to a modest improvement
in accessibility.
Figure 3: Rate of shortening in distance (in percent) to the nearest hospital due to
removal of regional borders, for percentiles of the population; shown with the distance
measured in kilometres and in minutes.
5.2 The effect of optimal location of hospitals
The present spatial accessibility to hospitals is rather poor as the average
distance between inhabitants and hospitals is 17.9 kilometers. Is this a
result of the current 73 hospitals being poorly located with regard to the
population?
Table 2: The inhabitants' distance to the nearest current and optimally located
hospital.a

                 Current location       Optimal location       Shortening (%)
Measure          Mean       Median      Mean       Median      Mean       Median
Distance (km)    17.3       11.3        16.2       10.4        6.4        8.0
Time (min)       14:53      11:08       13:54      10:00       6.6        10.3
Note: a) The 99% confidence intervals for the mean values are (16.16-16.20 km)
and (13:51-13:54 min).
Table 2 gives the average and the median distance to all current 73
hospitals for the population being unrestricted by regional borders (cf
Table 1). It also shows the inhabitants’ distance to the nearest of 73
optimally located hospitals. The location of the 73 optimally located
hospitals is depicted in Figure 4a. Finally, the table gives the resulting
shortening of the distance as a consequence of hospitals being optimally
located. The shortening in distance is a modest 5-10 per cent, with the
median indicating greater relative improvement for the inhabitants already
closest to the hospitals.
Table 3: Relocation towards optimality. Number of hospitals, inhabitants affected,
and their distance to a hospital.

                 Hospitals    Inhabitants    Mean distance to nearest hospital    Shortening
Measure          relocated    affected       Current        Optimal               (%)
Distance (km)    17           1,323,599      25.9           21.6                  16.7
Time (min)       22           1,747,908      19:47          16:44                 15.4
The scenario that the health care would be under national control with a
resulting relocation of the 73 hospitals towards an optimal location is not
far-fetched. Who would be affected if the scenario were to be realized?
Table 3 gives some answers to this question. First of all, most of the
current hospitals are optimally located. Only 17 (22 if optimized with
respect to time) of the current hospitals would require relocation for
attaining a maximum accessibility for the population under 73 hospitals
(see Table 3). Secondly, a substantial proportion of the population, 24 per
cent (31 per cent if optimized with respect to time), would be affected by
the relocation towards optimal accessibility. Thirdly, the inhabitants
affected by the relocation would have improvement in accessibility to
hospitals with about 16 per cent.
54
Figure 4a-b: Current and optimal 73 hospitals as well as inhabitants with
improved and worsened accessibility as a consequence of optimal hospital
configuration (a) and optimally located hospitals for inhabitants of 20-39 years
and 50-64 years, respectively (b).
Relocation towards optimality would result in some inhabitants being closer
to a hospital than presently and some being further away.
To illustrate the underlying gross effect, inhabitants with improved and
inhabitants with worsened accessibility are separated (Table 4). The
magnitude of the improvement and the worsening is similar, but the
number of inhabitants positively affected is about twice the number
negatively affected. Figure 4a visualizes the locations of the positively and
negatively affected inhabitants (if optimized with respect to travel
distance). In general, the relocation towards optimality implies a slight
relocation from one town to the neighboring one.
To conclude on the issue of relocation towards optimality, the current locations of the 73 hospitals are not far from an optimal solution with regard to the population's accessibility to hospitals. Finding an optimal configuration of hospitals seems to be an exercise in carefully fine-tuning the locations within the regions.
Table 4: Number of inhabitants with affected accessibility by relocation of hospitals towards optimality, their distance to the nearest hospital, and change in distance.

Improved accessibility (mean distance to a hospital)
Measure            Inhabitants    Current    Optimal    Difference
Distance (km)      846,519        32.1       18.7       -13.4
Time (min)         1,163,453      23:16      14:34      -8:42

Worsened accessibility (mean distance to a hospital)
Measure            Inhabitants    Current    Optimal    Difference
Distance (km)      477,083        15.0       26.7       11.7
Time (min)         547,450        12:15      21:02      8:47
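Tables 3 and 4 follow from comparing, inhabitant by inhabitant, the nearest-hospital distance under the current and the optimal configuration. A minimal sketch of that bookkeeping, assuming the same hypothetical dist matrix and pop weights as above, is given below; the split into improved and worsened groups mirrors Table 4, and the number of relocated hospitals is the set difference between the two configurations.

```python
import numpy as np

def relocation_summary(dist, pop, current_sites, optimal_sites):
    """Tally relocated hospitals and the inhabitants whose accessibility
    improves or worsens when moving from the current to the optimal sites."""
    d_cur = dist[:, list(current_sites)].min(axis=1)
    d_opt = dist[:, list(optimal_sites)].min(axis=1)

    def group(mask):
        return {"inhabitants": float(pop[mask].sum()),
                "mean_current": float(np.average(d_cur[mask], weights=pop[mask])),
                "mean_optimal": float(np.average(d_opt[mask], weights=pop[mask]))}

    return {"hospitals_relocated": len(set(current_sites) - set(optimal_sites)),
            "improved": group(d_opt < d_cur),   # upper panel of Table 4
            "worsened": group(d_opt > d_cur)}   # lower panel of Table 4
```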
5.3 Robustness of optimal location to population dynamics
What effect do population dynamics have on the optimal locations? How
much will a change in the spatial distribution of the population affect the
accessibility to the optimal hospitals where the optimum is identified for a
particular population at hand? We identify the optimum for two groups of inhabitants: one group aged 50-64 years and one aged 20-39 years. For each of these two sets of optimally located hospitals, we compute the accessibility for the inhabitants aged 20-39 years.
Figure 4b shows the locations of the 73 hospitals optimized with respect to travel time for the two groups. The configurations for the two groups are similar, and 58 of the hospital locations coincide. The figure indicates that the younger population would require more hospitals around Stockholm and Gothenburg at the cost of fewer hospitals in the northwestern part of the country. The requirement is, however, not very critical. The younger population today has on average 13:31 minutes to the nearest hospital. An optimal location of hospitals for them would only reduce the time to 12:25 minutes. How much worse off would the younger population be if they had to accept a configuration of hospitals optimized for the older group? The answer is less than 1 per cent, or 5 seconds, since their travel time would increase to 12:30 minutes. Thus, an optimal location of hospitals seems to be robust to a long-term spatial redistribution of the population.
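The robustness check is a cross-evaluation: the 73 sites are optimized for one age group, and the resulting configuration is then evaluated with the demand weights of the other group. A sketch, reusing the hypothetical evaluate and vertex_substitution helpers introduced after Table 2 (all data arrays below are placeholders, not the study's data):

```python
# Reuses the hypothetical evaluate() and vertex_substitution() sketched after Table 2.
# time_matrix, pop_20_39, pop_50_64, current_sites and candidates are placeholders.
sites_older, _ = vertex_substitution(time_matrix, pop_50_64, current_sites, candidates)
sites_younger, _ = vertex_substitution(time_matrix, pop_20_39, current_sites, candidates)

own_mean, _, _ = evaluate(time_matrix, pop_20_39, sites_younger)   # about 12:25 in the paper
cross_mean, _, _ = evaluate(time_matrix, pop_20_39, sites_older)   # about 12:30 in the paper
```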
5.4 Miscellaneous results
Returning to the issue of a reformation of the regional division in Sweden, what effect may it have on the spatial accessibility to health care? It is clear that the removal of borders is inconsequential. However, there is scope for some improvement by optimizing the location of hospitals. Is such improvement likely to follow from merging neighboring regions?
Figure 5 shows two parts of Sweden. The left panel shows the region surrounding Gothenburg, known as Västra Götaland (the dark gray area), hereafter simply referred to as the Gothenburg region. The right panel shows the Stockholm region as well as its neighboring regions (the light gray area). The Gothenburg region is a forerunner in the regional reformation. In 1998 three regions near Gothenburg were merged into the Gothenburg region, and as a consequence the hospitals of the three previously independent regions came under the control of one region. Stockholm and the neighboring regions depicted in Figure 5 are candidates for being merged into a single region (hereafter the Stockholm region).
Figure 5: The factual Gothenburg region (dark gray area) and the hypothetical
Stockholm region (light gray area).
If the reformation of the regional borders were to have any effect on the interregional and suboptimal location of hospitals, then the hospital locations in the Gothenburg region ought to be better than those in the Stockholm region, which has not yet been formed. This is checked by fixing the hospitals in all regions at their current locations, except in the Gothenburg and the Stockholm regions, where the hospitals may be relocated to the optimum within the region. The population is free to patronize hospitals in any region. Recall from Table 1 that the average travel time for the population to the current hospitals was 14:53 minutes. If the hospitals in the Stockholm region were optimally located, the average travel time would decrease to 14:39 minutes. If the hospitals in the Gothenburg region were optimally located, the average travel time would similarly decrease to 14:38 minutes. Hence, there is no reason to expect that the formation of an extended Stockholm region would generate a better location of hospitals in that region than today.
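The Gothenburg/Stockholm experiment amounts to a restricted p-median search in which hospitals outside the studied region are kept at their current sites and only the sites inside the region may move, while the objective is still evaluated over the whole population. A sketch of such a restricted one-swap search, under the same assumptions as the earlier snippets (hypothetical names, illustrative heuristic only):

```python
import numpy as np

def optimize_within_region(dist, pop, current_sites, region_candidates):
    """Relocate only the hospitals whose candidate sites lie in the studied
    region; all other hospitals stay at their current locations. The objective
    is the population-weighted mean distance for the whole country."""
    region_candidates = list(region_candidates)
    fixed = [s for s in current_sites if s not in region_candidates]
    movable = [s for s in current_sites if s in region_candidates]

    def objective(mov):
        nearest = dist[:, fixed + mov].min(axis=1)   # nearest open hospital anywhere
        return np.average(nearest, weights=pop)

    best = objective(movable)
    improved = True
    while improved:                                  # one-swap search, region only
        improved = False
        for i in range(len(movable)):
            for c in region_candidates:
                if c in movable:
                    continue
                trial = movable[:i] + [c] + movable[i + 1:]
                obj = objective(trial)
                if obj < best - 1e-9:
                    movable, best, improved = trial, obj, True
    return fixed + movable, best
```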
The various experiments, so far, have indicated that any regional
reformation will have little impact on the spatial accessibility to hospitals.
One may wonder: how is the ongoing trend of concentration towards fewer hospitals, in Sweden as elsewhere, affecting spatial accessibility? To address this question we have considered two scenarios. Out of the 73 hospitals in Sweden, 48 are labelled emergency hospitals with a slightly higher level of specialized care. The first scenario is that the 25 non-emergency hospitals would close and the country would be left with the current 48 emergency hospitals to serve the population. The average travel time would as a result increase by 26 per cent. The second scenario is that Sweden had twice as many hospitals as today, thereby being more similar to other European countries in terms of health care. In this scenario, the average travel time would decrease by almost half (39 per cent). In conclusion, the key to spatially accessible health care is the number of hospitals.
6. Conclusion
National, regional and other administrative borders might be considered barriers to the free flow of goods, services, and people. These barriers hinder the optimal allocation of a given set of resources. As a consequence, in particular in borderlands, the highest achievable economic and social utility may not be attained. For this reason, it seems sensible that EU policy has been to promote cross-border transactions of goods, services, and labor towards a common European market. Public services have, however, been exempted from the free flow of services and remain largely confined by national and regional borders. The present EU policy is, however, addressing the confinement of public services. So it is interesting to ask: do Europeans suffer from poor accessibility to public services due to internal borders?
In this paper we have attempted to address this question by studying the effect of administrative borders within Sweden on the population's spatial accessibility to one prominent public service, namely hospital care. We have elaborated several scenarios, ranging from strongly confining regional borders to no confinement by borders, as well as a long-term population redistribution. Our findings imply that the borders only marginally worsen accessibility. Instead, the key to good spatial accessibility to hospital care is the number of hospitals. However, this number is likely to decrease further due to the ongoing concentration of hospitals.
While we believe that the case of Sweden can be extrapolated to a
European setting, it would be interesting to replicate the study on a
European level.
Acknowledgements
We are grateful to Hasan Fleyeh for comments on an earlier version of this
paper.