10th International Conference on Computational Management

HEC Montréal, 1 — 3 May 2013

FB3 Operations Management

May 3, 2013 02:00 PM – 03:30 PM

Location: St-Hubert

Chaired by Mohammad Sadegh Pakkar

3 Presentations

  • 02:00 PM - 02:30 PM

    Imperfect Production with Learning and Forgetting Effects

    • Zahra Sadeghigivi, presenter, Ryerson University

The Wright learning curve was developed to capture improvements in a production environment where all produced items (aircraft material) conform to quality standards. In many other industrial processes, however, generating imperfect-quality (defective) items is inevitable, and these items are reworked. If the rework process is itself imperfect, a defective item is scrapped. Jaber and Guiffrida (2004) developed a quality learning curve (QLC) for production processes that generate defective items requiring rework. They showed that the learning curve can be convex under some conditions; the convexity implies an optimal batch size that minimizes unit time (cost) on the learning curve. Their study assumed a constant rate of generating defective items. In a follow-up study, Jaber and Guiffrida (2008) assumed that a production process can be interrupted to restore its quality to an in-control state. Although interruptions reduce the number of defectives generated in a lot, they come at the cost of increased production downtime. The works of Jaber and Guiffrida (2004, 2008) share a major limitation: they consider only a single (the very first) production lot (cycle), so the effects of forgetting on the production and rework processes were not captured. Forgetting hinders the learning process when workers are transferred to other jobs or take breaks. This research addresses that limitation by incorporating forgetting into the production and rework processes of the QLC model of Jaber and Guiffrida (2004, 2008). Numerical examples illustrate the behaviour of the developed model for different values of the learning rates, the ratio of rework time to production time, the forgetting intensity, the probability of the process going out of control, and the length of a break. Results indicate that the performance function of the process is convex and improves with faster learning, frequent process restorations, and shorter breaks.
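The interaction of learning and forgetting described above can be sketched numerically. The sketch below is a minimal illustration, not the authors' QLC model: the first-unit time, the 80% learning rate, and the retention fraction after a break are all assumed values.

```python
import math

def wright_time(n, t1=10.0, learning_rate=0.8):
    """Time to produce the n-th unit under Wright's learning curve.

    t1: time for the first unit; learning_rate: e.g. 0.8 means an 80%
    curve (each doubling of cumulative output cuts unit time to 80%).
    """
    b = -math.log2(learning_rate)  # learning exponent
    return t1 * n ** (-b)

def experience_after_break(u, forgetting_intensity=0.3):
    """Illustrative forgetting: a break erodes part of the accumulated
    experience u (in equivalent units produced). The retention fraction
    here is an assumption of this sketch, not the paper's formulation."""
    return u * (1.0 - forgetting_intensity)

# Produce two lots of 50 units with a break (and forgetting) in between.
units, total_time = 0.0, 0.0
for lot in range(2):
    for _ in range(50):
        units += 1
        total_time += wright_time(units)
    units = experience_after_break(units)  # forgetting between lots

print(round(total_time, 2))
```

A longer break (larger forgetting intensity) pushes the worker back down the curve, so the second lot takes longer; this is the trade-off the numerical examples explore.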

  • 02:30 PM - 03:00 PM

    An Integrated Approach Based on DEA and AHP: A Case Study for Financial Performance Assessment

    • Mohammad Sadegh Pakkar, presenter, Faculty of Management, Laurentian University

This research proposes a theoretical framework to assess the performance of Decision Making Units (DMUs) by integrating the Data Envelopment Analysis (DEA) and Analytic Hierarchy Process (AHP) methodologies. Accordingly, we consider two sets of weights for input and output categories under a hierarchical structure of inputs and outputs. The first set of weights represents the best attainable level of efficiency for each DMU in comparison to the other DMUs. The second set of weights reflects the priority weights of input and output categories for all DMUs, obtained using AHP, within the DEA framework. Normalizing the data set makes the weights comparable on a unified scale.
    We assess the performance of each DMU in terms of its relative closeness to the priority weights of input and output categories for all DMUs. For this purpose, we develop a parametric distance model to measure the deviations between the two sets of weights. By increasing the value of this parameter over a defined range of efficiency loss, we explore how far the deviations can be reduced to achieve the desired goals of the decision maker (DM). This may result in different ranking positions for each DMU in comparison to the other DMUs.
    To highlight the usefulness of the proposed approach, a case study for assessing the financial performance of eight listed companies in the steel industry of China is carried out.

    Keywords: Data Envelopment Analysis; Analytic Hierarchy Process; Performance; Weights of Input and Output Categories; Distance Model.
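A toy version of the efficiency-loss/weight-deviation trade-off can be sketched as follows. The DMU data, the DEA-optimal weights, the AHP priority weights, and the linear blending of the two weight sets are all illustrative assumptions, not the paper's parametric distance model:

```python
def efficiency(outputs, inputs, w_out, w_in):
    """Ratio-form efficiency of a DMU: weighted outputs over weighted inputs."""
    return (sum(w * o for w, o in zip(w_out, outputs)) /
            sum(w * i for w, i in zip(w_in, inputs)))

def deviation(w_a, w_b):
    """Total absolute deviation between two weight vectors."""
    return sum(abs(a - b) for a, b in zip(w_a, w_b))

# Illustrative, normalized data for one DMU (made-up numbers).
outputs, inputs = [0.8, 0.6], [0.5, 0.7]
w_out_dea, w_in_dea = [0.7, 0.3], [0.9, 0.1]  # DEA-optimal weights (assumed)
w_out_ahp, w_in_ahp = [0.4, 0.6], [0.5, 0.5]  # AHP priority weights (assumed)

best = efficiency(outputs, inputs, w_out_dea, w_in_dea)
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    # Blend the two weight sets; t = 0 is pure DEA, t = 1 is pure AHP.
    w_out = [(1 - t) * a + t * b for a, b in zip(w_out_dea, w_out_ahp)]
    w_in = [(1 - t) * a + t * b for a, b in zip(w_in_dea, w_in_ahp)]
    eff = efficiency(outputs, inputs, w_out, w_in)
    dev = deviation(w_out + w_in, w_out_ahp + w_in_ahp)
    print(f"t={t:.2f}  efficiency loss={best - eff:+.3f}  deviation={dev:.3f}")
```

As the blending parameter grows, the deviation from the AHP priorities shrinks while the DMU gives up some of its best attainable efficiency, which is the trade-off the parametric distance model formalizes.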

  • 03:00 PM - 03:30 PM

    Minimizing Maintenance Cost under Reliability Constraints

    • Marcus Poggi, presenter, Pontifícia Universidade Católica do Rio de Janeiro
    • Bruno Flach, IBM Research
    • Thuener Silva, PUC-Rio

    Given a production environment where machines fail with a probability that is a function of their operating time since the last maintenance, one is interested in minimizing total maintenance costs while ensuring a desired reliability level. In this context, reliability is defined as the probability of meeting or exceeding a pre-defined level of total available production capacity over a given time period, so that demands are fulfilled by their due dates.

    The problem is modeled over a discrete horizon as a space-time network, where the availability of a machine is represented by an arc linking two instants, with capacity corresponding to its productivity. The probabilistic availability of each machine during each time period determines whether the arc exists in the corresponding scenarios. Deciding on the maintenance activities to be performed in a time period modifies the failure probability of the associated machine's production arcs in all subsequent instants.

    The problem is formulated as a mixed-integer non-linear programming (MINLP) problem and belongs to the class of stochastic programming problems with endogenous uncertainty, i.e., those in which the probability distribution of the random parameters is decision-dependent. The proposed approach includes a convexification technique for polynomials of binary variables, an efficient cut-generation algorithm, and the incorporation of importance-sampling concepts into the stochastic programming framework, so as to allow the solution of reasonably sized instances of the problem.
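The role of scenarios in this setting can be illustrated with a small Monte Carlo sketch. The failure-probability function, capacities, demand level, and maintenance schedule below are assumptions made for illustration; the paper's approach is an exact MINLP formulation with cut generation and importance sampling, not plain simulation:

```python
import random

def fail_prob(periods_since_maint, base=0.05, growth=0.1):
    """Assumed failure probability that increases with operating time
    since the last maintenance (illustrative functional form)."""
    return min(1.0, base + growth * periods_since_maint)

def simulate_reliability(maint_schedule, horizon=10, n_machines=3,
                         capacity=10.0, required=20.0,
                         n_scenarios=10_000, seed=0):
    """Monte Carlo estimate of the probability that total available
    capacity meets the requirement in every period of the horizon.

    maint_schedule: set of (machine, period) pairs; a serviced machine
    is down for that period but its wear counter resets to zero."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_scenarios):
        since = [0] * n_machines  # periods since last maintenance
        feasible = True
        for t in range(horizon):
            cap = 0.0
            for m in range(n_machines):
                if (m, t) in maint_schedule:
                    since[m] = 0      # maintenance resets wear, machine down
                    continue
                if rng.random() >= fail_prob(since[m]):
                    cap += capacity   # machine survives this period
                since[m] += 1
            if cap < required:
                feasible = False      # demand missed in this scenario
                break
        ok += feasible
    return ok / n_scenarios

# Staggered maintenance (assumed schedule): one machine serviced at a time.
print(simulate_reliability({(0, 3), (1, 5), (2, 7)}, n_scenarios=2_000))
```

A maintenance decision here changes the probability of the scenarios themselves, which is exactly the endogenous-uncertainty feature that makes the exact formulation hard.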
