10:30 AM - 10:55 AM
Quadratic Programming Algorithm
The aim of this study is to compute the maximum of a convex quadratic function under linear constraints when the optimal solution is a vertex (as in convex quadratic programming). We propose an algorithm closely analogous to the simplex method for the linear case. One important result of this work is that, when moving from one vertex to a neighbouring one, the change in the objective function does not depend on the values of its cross-product terms. Another result is a new expression of the objective function at each iteration, which yields a quadratic system equivalent to the original one. Note that this algorithm is also valid for linear programming problems. As with the simplex method, the complexity of this algorithm does not depend explicitly on the sizes of the numbers in the problem instances.
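The abstract rests on the fact that a convex function attains its maximum over a polytope at a vertex. A toy illustration of that fact (not the vertex-to-vertex algorithm proposed in the talk) is brute-force vertex enumeration over a box, which is feasible in low dimension:

```python
import numpy as np
from itertools import product

def max_over_box_vertices(Q, c, lo, hi):
    """Maximize the convex quadratic f(x) = x^T Q x + c^T x over the box
    lo <= x <= hi by evaluating f at every vertex of the box; for a convex
    f the maximum over the polytope is attained at one of these vertices.
    Q is assumed positive semidefinite so that f is convex."""
    best_val, best_x = -np.inf, None
    # each vertex picks either the lower or the upper bound per coordinate
    for v in product(*zip(lo, hi)):
        x = np.array(v, dtype=float)
        val = x @ Q @ x + c @ x
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```

For example, with `Q` the 2x2 identity and `c = 0` over the box `[-1, 2] x [-1, 2]`, the maximum of `x1^2 + x2^2` is attained at the vertex `(2, 2)`. The talk's algorithm avoids such exhaustive enumeration by moving between neighbouring vertices, simplex-style.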
10:55 AM - 11:20 AM
Effects of the LLL Reduction on Integer Least Squares Estimation
To estimate an integer parameter vector in a linear model, a typical method is to solve an integer least squares (ILS) problem. The most widely used approach to solving an ILS problem is the so-called sphere decoding method. It has been observed that using the well-known LLL reduction as preprocessing can make a sphere decoder faster and can improve the success probability of the Babai point, a suboptimal solution. In this talk we rigorously show that both observations are true in theory.
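As background, the Babai point mentioned above can be computed by rounding component-wise during back substitution after a QR factorization. A minimal sketch (a standard construction, not the talk's analysis; the example matrix is hypothetical):

```python
import numpy as np

def babai_point(B, y):
    """Babai's nearest-plane estimate for the ILS problem
    min over integer z of ||y - B z||. After B = Q R the problem becomes
    min ||c - R z|| with c = Q^T y, and Babai rounds each component
    during back substitution on the upper-triangular R."""
    Q, R = np.linalg.qr(B)
    c = Q.T @ y
    n = B.shape[1]
    z = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # subtract the contribution of already-fixed components, then round
        z[k] = np.round((c[k] - R[k, k + 1:] @ z[k + 1:]) / R[k, k])
    return z.astype(int)
```

On noise-free data the Babai point recovers the integer vector exactly; with noise and an ill-conditioned basis it can fail, which is where LLL reduction of `B` as preprocessing helps.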
11:20 AM - 11:45 AM
Nearest Facet of a Higher Dimensional Convex Hull to an Inner Point
For a convex polytope in H-representation, finding the facet nearest to an inner point can be solved using linear programming. For a convex hull or a convex polytope in V-representation, however, this task is time-consuming. By applying a simplex with a suitable radius centred at the point, we can prune facets and vertices that are far away from it. The remaining polytope, generated by the surviving vertices and the intersection points of the original polytope with the simplex, is expected to be less complex than the original one.
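For the H-representation case mentioned above, the distance from an inner point to each facet hyperplane has a closed form, so the nearest facet follows directly. A minimal sketch (a standard computation, not the pruning scheme proposed in the talk):

```python
import numpy as np

def nearest_facet(A, b, x):
    """For a polytope {x : A x <= b} in H-representation and an inner
    point x, return (index, distance) of the nearest facet hyperplane.
    The distance to facet i is (b_i - a_i . x) / ||a_i||, which is
    non-negative whenever x is feasible."""
    d = (b - A @ x) / np.linalg.norm(A, axis=1)
    i = int(np.argmin(d))
    return i, d[i]
```

For the unit square `-1 <= x1, x2 <= 1` and the inner point `(0.5, 0.2)`, the nearest facet is `x1 <= 1` at distance `0.5`. For a V-representation, no such closed form is available, which motivates the pruning approach of the talk.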
11:45 AM - 12:10 PM
Alleviating the Dependence of the EM Algorithm on the Choice of Initial Values Using the EMVNS Algorithm
Clustering is widely used in almost every discipline to discover natural groups in large data bases. Probabilistic model-based clustering techniques, and in particular finite Gaussian mixture models (FGMM), have shown attractive results in a wide range of applications. Maximum likelihood estimation of the FGMM parameters is usually performed with the Expectation-Maximization (EM) algorithm. However, the choice of initial values can severely affect both the time to convergence and the ability of the algorithm to find a global maximum. In practice, the data modelled by an FGMM are often high-dimensional, with no guarantee that the global maximum has a large basin of attraction, so a robust method for such complex problems is needed. In contrast to other methods based on local search, variable neighbourhood search (VNS) provides a powerful and simple-to-implement tool that obtains better results than competing methods. Moreover, the use of appropriate neighbourhood structures in VNS not only improves the maximization of the likelihood function but also reduces the running time.
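The initialization sensitivity of EM discussed above can be seen even in a stripped-down setting. The sketch below (a deliberately minimal 1-D, two-component EM with fixed unit variances and equal weights, not the EMVNS algorithm of the talk) converges to the true means from a good start but stalls at a degenerate point from a symmetric bad start:

```python
import numpy as np

def em_gmm_1d(x, mu_init, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Weights and variances are held fixed (equal weights, unit variance)
    so only the means are updated; this is enough to exhibit the
    dependence on the initial values."""
    mu = np.array(mu_init, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibilities under unit-variance Gaussians
        d = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu
```

Starting from `[-1, 11]` on data drawn around 0 and 10, EM recovers means near `[0, 10]`; starting from the symmetric point `[5, 5]`, the responsibilities stay at 0.5 forever and both means collapse to the overall mean. Restarting from varied neighbourhoods, as in VNS, is one way to escape such traps.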