15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, July 12–14, 2017
Dynamic Control and Optimization with Applications I
13 July 2017, 09:45 – 11:00
Room: Saine Marketing
Chaired by Kok Lay TEO
3 presentations
-
09:45 - 10:10
Reaction Diffusion Equations and their Optimal Control with Potential Applications to Biomedical Problems
-
10:10 - 10:35
Dynamic Illumination Optical Flow Computing for Sensing Multiple Mobile Robots from a Drone
In this paper, we consider a motion sensing problem motivated by the International Aerial Robotics Competition (IARC) Mission 7, in which an aerial robot is required to detect and estimate the motion of mobile ground vehicles. Dense optical flow computing is first employed to obtain a velocity field from image sequences. Region growing on the optical flow field is then used to extract moving objects from the background, so that motion estimation is achieved while both the camera and the objects are moving. In addition, classical optical flow techniques fail in the competition setting, since the arena is subject to illumination changes such as flashlights and reflections. To deal with this problem, brightness constancy relaxation and intensity normalization are combined in the optical flow algorithm. Experimental results demonstrate robustness against varying illumination: the proposed approach provides motion estimates of acceptable accuracy both on benchmark data sets (the sphere and taxi image sequences) and on image sequences captured by micro aerial vehicles. A minimal sketch of such a pipeline follows below.
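The sketch below is not the authors' implementation; it only illustrates the pipeline the abstract describes, using OpenCV's Farneback dense flow as a stand-in for the paper's optical flow method. The intensity normalization, the median-based camera-motion removal, and the magnitude threshold are all illustrative assumptions.

```python
import cv2
import numpy as np

def normalize_intensity(frame):
    """Zero-mean, unit-variance normalization to suppress global illumination changes
    (a simple proxy for the brightness constancy relaxation in the abstract)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray = (gray - gray.mean()) / (gray.std() + 1e-8)
    # Rescale to 8-bit range for the flow routine.
    return cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def dense_flow(prev_frame, next_frame):
    """Dense optical flow field between two intensity-normalized frames."""
    prev_g = normalize_intensity(prev_frame)
    next_g = normalize_intensity(next_frame)
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    return cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def moving_object_mask(flow, mag_threshold=2.0):
    """Seed mask for region growing: pixels whose flow deviates from the dominant
    (camera-induced) motion by more than an assumed threshold."""
    residual = flow - np.median(flow.reshape(-1, 2), axis=0)  # remove camera motion
    mag = np.linalg.norm(residual, axis=2)
    return (mag > mag_threshold).astype(np.uint8)
```

The mask would then seed a region-growing step over the flow field to delineate each moving vehicle, with per-object velocity estimated from the flow inside each grown region.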
-
10:35 - 11:00
Reinforcement Learning Approach for Optimal Control Problem with Model-Reality Differences
In this paper, a reinforcement learning approach to solving optimal control problems is discussed. In our approach, adjustable parameters are added to the model used so that the differences between the real plant and the model can be measured. During the iterative procedure, the optimal solution of the model is updated by a reinforcement learning algorithm so as to approximate the true optimal solution of the original optimal control problem, in spite of model-reality differences. For illustration, the solution of linear and nonlinear optimal control problems is demonstrated; the iterative scheme is sketched below. In conclusion, the efficiency of the proposed approach is clearly demonstrated.
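The following sketch only mirrors the loop structure the abstract outlines: add an adjusted parameter to the model, measure the plant-model mismatch along the controlled trajectory, and re-solve the model-based control problem until the two agree. The plant, the model, the one-step control rule, and the gradient-style update of the parameter are all hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np

def real_plant(x, u):
    # Stand-in for the true (unknown to the controller) dynamics.
    return 0.9 * x + 0.5 * u + 0.1 * np.sin(x)

def model_step(x, u, alpha):
    # Simplified linear model plus the adjusted parameter alpha.
    return 0.9 * x + 0.5 * u + alpha

def solve_model_control(alpha, x0, horizon=20, q=1.0, r=0.1):
    # Greedy one-step surrogate for the model-based optimal control solve:
    # at each step, u minimizes q*x_next^2 + r*u^2 under the model.
    x, us = x0, []
    for _ in range(horizon):
        u = -(0.5 * q * (0.9 * x + alpha)) / (0.5**2 * q + r)
        us.append(u)
        x = model_step(x, u, alpha)
    return us

x0, alpha, lr = 1.0, 0.0, 0.5
for it in range(30):
    us = solve_model_control(alpha, x0)
    # Measure the model-reality difference along the trajectory.
    x_real = x_model = x0
    gap = 0.0
    for u in us:
        x_real, x_model = real_plant(x_real, u), model_step(x_model, u, alpha)
        gap += x_real - x_model
    # Reinforcement-style parameter update (assumed learning rule).
    alpha += lr * gap / len(us)

print("adjusted parameter after convergence:", round(alpha, 4))
```

As the adjusted parameter absorbs the average plant-model mismatch, the control computed on the corrected model approaches the optimum of the real problem, which is the behavior the abstract claims for the iterative procedure.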