Journées de l'optimisation 2018
HEC Montréal, Québec, Canada, May 7–9, 2018
WA10 Clustering
May 9, 2018, 10:30 – 12:10
Room: Sony (48)
Chaired by Daniel Aloise
4 presentations

10:30 – 10:55
A typology of logistics service providers in Canada
This work analyzes the content of 100 websites of Canadian logistics service providers (LSPs) in order to identify a variety of value propositions, defined as the services offered and the outcomes promised to customers. Applying clustering techniques to this data makes it possible to construct a typology of LSPs.

10:55 – 11:20
Towards station-level demand prediction for effective rebalancing in bike-sharing systems
Bike-sharing systems are today an efficient means of transportation. The proposed model uses temporal and weather features to model the network behavior. The model extracts the main behaviors, characterizes them, and rebuilds a prediction station-wise. Applied to the Montreal network, the model loses 20% fewer trips than the operator.
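The abstract does not specify the model's form; purely as an illustration, a station-level baseline that conditions expected demand on temporal and weather features might look like the sketch below. All names, the feature set, and the data layout are assumptions, not the authors' implementation.

```python
from collections import defaultdict

def fit_station_baseline(trips):
    """Fit a per-station baseline: mean departures per (station, hour, weather)
    bucket, standing in for the temporal and weather features the abstract cites.

    `trips` is a list of (station, hour, weather, departures) records.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for station, hour, weather, departures in trips:
        key = (station, hour, weather)
        sums[key] += departures
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def predict(model, station, hour, weather, default=0.0):
    """Predict expected departures for a station under given conditions."""
    return model.get((station, hour, weather), default)

# Example usage with a hypothetical station name:
history = [("Berri", 8, "dry", 30), ("Berri", 8, "dry", 34), ("Berri", 8, "rain", 12)]
model = fit_station_baseline(history)
predict(model, "Berri", 8, "dry")  # mean of the two dry observations: 32.0
```

A rebalancing policy would then compare such per-station predictions against current dock fill levels to decide where to move bikes.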

11:20 – 11:45
Less is more: Basic variable neighborhood search heuristic for balanced minimum sum-of-squares clustering
Balanced clustering addresses the problem of finding homogeneous and well-separated subsets of equal cardinality from a set of data points. We present a basic variable neighborhood search heuristic for balanced minimum sum-of-squares clustering. Computational experiments and statistical tests show that the proposed algorithm outperforms the current state-of-the-art algorithm.
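The VNS heuristic itself is not given in the abstract, but the sum-of-squares criterion and the balance-preserving swap neighborhood such a search would explore can be sketched as follows. This is an illustrative first-improvement descent, not the authors' algorithm; swapping points between clusters keeps all cardinalities unchanged.

```python
import itertools

def sum_of_squares(points, clusters):
    """Sum over clusters of squared Euclidean distances to the cluster centroid."""
    total = 0.0
    for members in clusters:
        pts = [points[i] for i in members]
        centroid = [sum(coord) / len(pts) for coord in zip(*pts)]
        for p in pts:
            total += sum((a - b) ** 2 for a, b in zip(p, centroid))
    return total

def swap_descent(points, clusters):
    """First-improvement descent on the swap neighborhood: exchange one point
    between two clusters whenever that lowers the criterion. Because swaps
    preserve cluster sizes, the balance constraint holds throughout."""
    best = sum_of_squares(points, clusters)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(clusters)), 2):
            for i in clusters[a]:
                for j in clusters[b]:
                    new = [list(c) for c in clusters]
                    new[a][new[a].index(i)] = j
                    new[b][new[b].index(j)] = i
                    cost = sum_of_squares(points, new)
                    if cost < best - 1e-12:
                        clusters, best, improved = new, cost, True
                        break
                if improved:
                    break
            if improved:
                break
    return clusters, best
```

A VNS would wrap this descent with perturbations of increasing size (multi-point swaps) to escape local optima.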

11:45 – 12:10
A scalable algorithm for the solution of large clustering problems
Clustering consists of finding homogeneous and well-separated subsets, called clusters, from a set of given objects. The literature presents numerous clustering criteria to be maximized for separation and minimized for homogeneity. In this paper, we propose a global optimization method for clustering problems with respect to clustering criteria that satisfy three simple properties. We exemplify the use of our method on the diameter minimization clustering problem, which is strongly NP-hard. Our algorithm can solve problems containing more than 500,000 objects while consuming only moderate amounts of time and memory. The size of the problems that can be solved using our algorithm is two orders of magnitude larger than the largest problems solved by the previous state-of-the-art exact methods.
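The diameter criterion mentioned in the abstract can be stated compactly: a cluster's diameter is its largest pairwise distance, and the objective is to minimize the largest diameter over all clusters. A minimal sketch of evaluating this criterion for a given partition (the paper's optimization method is not reproduced here):

```python
import math

def diameter(points, cluster):
    """Diameter of one cluster: the largest pairwise Euclidean distance
    among its members (0.0 for singletons)."""
    return max((math.dist(points[i], points[j])
                for i in cluster for j in cluster if i < j), default=0.0)

def max_diameter(points, clusters):
    """The criterion to be minimized: the largest cluster diameter."""
    return max(diameter(points, c) for c in clusters)

# Four collinear points: the natural 2-partition has max diameter 1.0,
# while crossing the gap inflates it to 10.0.
pts = [(0, 0), (1, 0), (10, 0), (11, 0)]
max_diameter(pts, [[0, 1], [2, 3]])  # 1.0
max_diameter(pts, [[0, 2], [1, 3]])  # 10.0
```

Evaluating the criterion is cheap; the hardness lies in searching over partitions, which is why the strongly NP-hard problem is a meaningful benchmark for an exact global method.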