09h45 - 10h10
Solving the generator maintenance scheduling problem in hydropower systems with uncertain water inflows
For power generation companies, preventive maintenance of generators is essential for efficient and reliable operation. In the hydropower industry specifically, non-linear relationships between the system variables and the uncertainty of water inflows make it challenging to find optimal maintenance schedules. However, the structure of the problem can be exploited to separate maintenance scheduling decisions from operational decisions. Following this idea, we present a solution method for a mixed-integer linear optimization formulation of the problem. Using real-world data, we compare our approach with standard solution techniques.
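The separation the abstract describes can be illustrated with a deliberately small sketch (not the authors' formulation): once a maintenance window is fixed, the operational subproblem separates by period and solves in closed form, so a master loop only has to search over the maintenance decision. All numbers below (prices, capacity, producible energy) are invented for illustration; the real problem is a mixed-integer linear program under inflow uncertainty.

```python
# Toy instance (all data invented): one generator over six periods must
# undergo one two-period maintenance outage. With the outage window fixed,
# the best operation in each period is simply to generate as much as
# capacity and inflow energy allow, valued at that period's price.

prices = [30, 50, 40, 20, 25, 60]       # electricity price per period ($/MWh)
capacity = 100.0                        # generator capacity (MWh per period)
energy = [80, 120, 90, 60, 70, 110]     # energy producible from inflows (MWh)

def operational_profit(start, duration=2):
    """Best operating profit when maintenance occupies periods [start, start+duration)."""
    profit = 0.0
    for t, price in enumerate(prices):
        cap = 0.0 if start <= t < start + duration else capacity
        profit += price * min(cap, energy[t])
    return profit

# "Master" step: enumerate the feasible maintenance windows, keep the best.
best = max(range(len(prices) - 1), key=operational_profit)
```

The point of the sketch is the structure: the inner function is a trivially solvable operational problem, and only the outer enumeration carries the combinatorial maintenance decisions.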
10h10 - 10h35
Numerical methods for solving mid-term hydropower optimization with stochastic dynamic programming
In this talk, we present new strategies to reduce the computation time required to optimize hydropower system management with stochastic dynamic programming (SDP). In the SDP algorithm, the original problem is decomposed into a sequence of small-scale nonlinear subproblems. Exploiting the concavity of the hydropower production functions, which can be accurately approximated by linear functions, we demonstrate the benefits of a successive linear programming (SLP) algorithm. We compare the SLP algorithm with other state-of-the-art nonlinear optimization solvers; the numerical results show that computation time can be reduced significantly without loss of accuracy.
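The SDP decomposition described above can be sketched as follows: the horizon splits into one small subproblem per stage and storage level, solved backwards in time. The grids, inflow scenarios, and concave production curve below are invented toy values, and each subproblem is solved here by simple enumeration rather than the SLP algorithm of the talk.

```python
import numpy as np

# Toy mid-term model (all numbers assumed): one reservoir, a discretized
# storage grid, a few equiprobable inflow scenarios per stage, and a concave
# production function. SDP replaces the full-horizon problem by one small
# subproblem per (stage, storage level).
levels = np.linspace(0.0, 100.0, 21)         # storage grid (hm^3)
releases = np.linspace(0.0, 30.0, 31)        # candidate releases per stage
inflows = [np.array([5.0, 10.0, 20.0])] * 4  # inflow scenarios, 4 stages

def production(r):
    # Concave energy-vs-release curve (illustrative only).
    return 10.0 * np.sqrt(r)

T = len(inflows)
value = np.zeros((T + 1, len(levels)))       # terminal value of stored water = 0

for t in range(T - 1, -1, -1):               # backward recursion over stages
    for i, s in enumerate(levels):
        best = -np.inf
        for r in releases:
            # Release chosen before observing the inflow; keep it only if
            # storage stays within bounds for every scenario.
            nxt = s - r + inflows[t]
            if nxt.min() < levels[0] or nxt.max() > levels[-1]:
                continue
            # Expected immediate energy + interpolated future value of water.
            total = np.mean(production(r) + np.interp(nxt, levels, value[t + 1]))
            best = max(best, total)
        value[t, i] = best
```

Each inner maximization is the "small subproblem" of the abstract; in the talk these are nonlinear programs, and the concavity of the production function is what makes successive linear approximations accurate.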
10h35 - 11h00
Direct Policy Search Gradient Method using Inflow Scenarios
In the field of water reservoir operation design, stochastic dynamic programming (SDP) has been used extensively. This method imposes a time decomposition of the stochastic process to build the value functions, which makes it difficult to accurately model the complex spatio-temporal correlation of the streamflows. Direct Policy Search (DPS) methods instead optimize directly over the policy parameters within a given family of functions. A candidate parametrization is evaluated by simulation over an ensemble of scenarios; no model of the random process is needed, since the stochastic process is represented implicitly through simulation. So far, however, DPS has been slow to converge because it is typically paired with inefficient optimization methods such as evolutionary algorithms. We propose a method that uses a gradient estimate to perform the optimization step. The gradient is estimated with the REINFORCE algorithm, which does not require the derivative of the objective function, only the derivative of the policy function; in our case, the policy function is a radial basis function network. The sensitivity of DPS and improvements to the method are presented on the real case of Kemano (British Columbia, Canada). A comparison with SDP is performed, and the results suggest that DPS is superior thanks to its better uncertainty modeling, with computation times of the same order of magnitude.
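A minimal sketch of a REINFORCE-based policy gradient for a scenario-simulated reservoir policy, in the spirit of the approach above. The single-reservoir dynamics, RBF centers and widths, reward function, and all numeric values are assumptions for illustration, not the Kemano model; the release clipping slightly biases the score and is ignored here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Radial basis function (RBF) features of the storage level (centers/width assumed).
centers = np.linspace(0.0, 100.0, 5)
width = 25.0

def features(s):
    return np.exp(-((s - centers) ** 2) / (2.0 * width ** 2))

def reward(u):
    # Concave value of released water (illustrative, not a real plant curve).
    return 10.0 * np.sqrt(max(u, 0.0))

def simulate(theta, inflow_scenario, sigma=2.0, s0=50.0):
    """One episode with stochastic releases u_t ~ Normal(theta . phi(s_t), sigma).
    Returns the total reward and the accumulated REINFORCE score."""
    s, total, score = s0, 0.0, np.zeros_like(theta)
    for q in inflow_scenario:
        phi = features(s)
        mu = float(theta @ phi)
        u = rng.normal(mu, sigma)
        u = min(max(u, 0.0), s)                # clip to feasible releases
        total += reward(u)
        score += (u - mu) / sigma ** 2 * phi   # grad of log-density wrt theta
        s = min(s - u + q, 100.0)              # spill above reservoir capacity
    return total, score

def evaluate(theta, scenarios, s0=50.0):
    """Mean reward of the deterministic policy u_t = clip(theta . phi(s_t))."""
    total = 0.0
    for sc in scenarios:
        s = s0
        for q in sc:
            u = min(max(float(theta @ features(s)), 0.0), s)
            total += reward(u)
            s = min(s - u + q, 100.0)
    return total / len(scenarios)

# No inflow model is needed: uncertainty enters only through this ensemble.
scenarios = [rng.uniform(5.0, 20.0, size=12) for _ in range(50)]

theta = np.zeros(len(centers))
for _ in range(150):
    returns, scores = [], []
    for sc in scenarios:
        R, g = simulate(theta, sc)
        returns.append(R)
        scores.append(g)
    baseline = np.mean(returns)                # variance-reduction baseline
    grad = np.mean([(R - baseline) * g for R, g in zip(returns, scores)], axis=0)
    theta += 0.005 * grad                      # gradient-ascent step
```

Note that only the derivative of the policy's log-density appears in the score; the reward function itself is treated as a black box evaluated by simulation, which is the key property of REINFORCE exploited in the talk.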