15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017


Dynamic Control and Optimization with Applications I

Jul 13, 2017 09:45 AM – 11:00 AM

Location: Saine Marketing

Chaired by Kok Lay TEO

3 Presentations

  • 09:45 AM - 10:10 AM

    Reaction Diffusion Equations and their Optimal Control with Potential Applications to Biomedical Problems

    • NasirUddin Ahmed, presenter, University of Ottawa

  • 10:10 AM - 10:35 AM

    Dynamic Illumination Optical Flow Computing for Sensing Multiple Mobile Robots from a Drone

    • Chao Xu, presenter, Zhejiang University

    In this paper, we consider a motion sensing problem motivated by the International Aerial Robotics Competition (IARC) Mission-7, in which an aerial robot is required to detect mobile ground vehicles and estimate their motion. Dense optical flow computation is first employed to obtain a velocity field from image sequences. Region growing on the optical flow field is then used to separate moving objects from the background, so that motion estimation is achieved while both the camera and the objects are moving. Classical optical flow techniques fail in the competition setting because of illumination changes such as flashlights and reflections in the arena. To address this problem, brightness constancy relaxation and intensity normalization are combined in the optical flow algorithm. Experimental results demonstrate robustness against varying illumination: the proposed approach provides motion estimates of acceptable accuracy on both benchmark data sets (the sphere and taxi image sequences) and image sequences captured by micro aerial vehicles.

  • 10:35 AM - 11:00 AM

    Reinforcement Learning Approach for Optimal Control Problem with Model-Reality Differences

    • Sie Long Kek, presenter, Universiti Tun Hussein Onn Malaysia
    • Wah June Leong, Universiti Putra Malaysia
    • Kok Lay Teo, Curtin University of Technology

    In this paper, a reinforcement learning approach to solving the optimal control problem is discussed. In our approach, adjusted parameters are introduced into the model so that the differences between the real plant and the model can be measured. During the iterative procedure, the optimal solution of the model is updated by the reinforcement learning algorithm so as to approximate the correct optimal solution of the original optimal control problem, despite model-reality differences. The approach is illustrated on both linear and nonlinear optimal control problems, and its efficiency is demonstrated.
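
The intensity normalization used in the second talk's abstract to cope with illumination changes can be sketched as a toy example: normalizing each frame to zero mean and unit variance before estimating flow makes a brightness-constancy-based solver insensitive to global gain and offset changes. The minimal dense Lucas-Kanade solver below is an illustrative stand-in, not the authors' actual algorithm.

```python
import numpy as np

def normalize_intensity(frame):
    """Zero-mean, unit-variance normalization; suppresses global
    illumination (gain/offset) changes between frames."""
    f = frame.astype(np.float64)
    std = f.std()
    return (f - f.mean()) / (std if std > 0 else 1.0)

def lucas_kanade_flow(prev, curr, window=5):
    """Minimal dense Lucas-Kanade optical flow on normalized frames.
    Returns an (h, w, 2) array of per-pixel (u, v) velocity estimates."""
    prev = normalize_intensity(prev)
    curr = normalize_intensity(curr)
    # Spatial gradients of the first frame, temporal difference between frames
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    half = window // 2
    h, w = prev.shape
    flow = np.zeros((h, w, 2))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            # Least-squares solve of Ix*u + Iy*v = -It over the window
            A = np.stack([ix, iy], axis=1)
            uv, *_ = np.linalg.lstsq(A, -it, rcond=None)
            flow[y, x] = uv
    return flow
```

Because both frames are normalized independently, multiplying one frame by a constant (a "flashlight" brightening) leaves the flow estimate unchanged, which is the effect the abstract attributes to its preprocessing.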