18th INFORMS Computing Society (ICS) Conference

Toronto, Canada, 14–16 March 2025

ML and AI in Healthcare

Mar 16, 2025 08:30 AM – 10:00 AM

Location: South Sitting

Chaired by Parvin Malekzadeh

4 Presentations

  • 08:30 AM - 08:52 AM

    Causal Forecasting for Optimal Healthcare Resource Allocation

    • Brandon Mossop, presenter

    This talk explores the use of causal forecasting to improve machine learning (ML) performance and enable optimal resource allocation. By modelling the spread of COVID-19 in Ontario, Canada, using both epidemiological and mobility data, we demonstrate how a causal forecasting framework can provide better predictive performance and explainability than traditional association-based approaches. Specifically, our work identifies the key causal drivers of infection rates, enabling policymakers and practitioners to make data-driven decisions that target the most influential factors. For instance, we found a two-week lagged relationship between grocery store visits and subsequent rises in COVID-19 cases, suggesting that these visits were a driver of disease spread. Understanding these causal links allows optimal deployment of resources, targeting key drivers rather than relying on spurious associations. Experimental results show that ML models trained with causal relationships outperform association-based models, leading to superior robustness and generalization.
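
    A minimal sketch of the kind of lagged causal screening described above, using synthetic data and a Granger-causality test from statsmodels; the 14-day lag, variable names, and synthetic series are illustrative assumptions, not the authors' actual data or pipeline.

        # Illustrative check for a lagged dependence between mobility and case counts.
        # Synthetic series stand in for the Ontario epidemiological/mobility data.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n_days = 300
        grocery_visits = rng.normal(100, 10, n_days)                            # hypothetical mobility index
        cases = np.roll(grocery_visits, 14) * 0.8 + rng.normal(0, 5, n_days)    # cases follow visits with a 14-day lag
        df = pd.DataFrame({"cases": cases[14:], "grocery_visits": grocery_visits[14:]})

        # Test whether lagged grocery visits help predict cases (column order: effect, candidate cause).
        results = grangercausalitytests(df[["cases", "grocery_visits"]], maxlag=14)
        p_value_at_14 = results[14][0]["ssr_ftest"][1]
        print(f"p-value for a 14-day lagged effect: {p_value_at_14:.4f}")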

  • 08:52 AM - 09:14 AM

    Automating Diagnostic Form Population for Better Patient Care

    • Tasnim Ahmed, presenter
    • Salimur Choudhury, Queen's University

    We explore how automating diagnostic requisition forms can ease administrative workloads and enhance patient management within the Canadian healthcare system. This study proposes an LLM-based framework to automate form population, reducing the time burden of manual form-filling. The framework comprises two phases. In the first phase, we collect diverse forms from multiple hospitals, featuring varied layouts and question types, and use layout-aware OCR to extract clinical questions. Each extracted question is represented as an object, generated by a structured LLM-based parsing module, that includes the question text, its category (text, binary, MCQ, or numerical), and, where applicable, a set of options. In the second phase, we generate synthetic patient data using Synthea and implement a hybrid retrieval-augmented generation (RAG) pipeline to retrieve relevant information. This information is fed into an LLM-based QA system to answer the questions. Empirical evaluations indicate that our method reduces manual overhead while maintaining high accuracy, allowing medical personnel to devote more attention to patient-centric tasks.
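
    A minimal sketch of the question-object representation described in phase one, together with a placeholder for the downstream QA step; the class names, fields, and example question are illustrative assumptions rather than the exact schema or prompts used in the study.

        # Illustrative structure for a clinical question extracted from a requisition form.
        from dataclasses import dataclass
        from enum import Enum
        from typing import List, Optional

        class QuestionCategory(str, Enum):
            TEXT = "text"
            BINARY = "binary"
            MCQ = "mcq"
            NUMERICAL = "numerical"

        @dataclass
        class FormQuestion:
            text: str                              # question text extracted via layout-aware OCR
            category: QuestionCategory             # expected answer type on the form
            options: Optional[List[str]] = None    # populated only for MCQ-style questions

        def build_qa_prompt(question: FormQuestion, retrieved_context: str) -> str:
            """Placeholder for the QA step: combine the question with patient context
            retrieved by the hybrid RAG module before sending it to an LLM."""
            prompt = f"Context:\n{retrieved_context}\n\nQuestion: {question.text}"
            if question.options:
                prompt += "\nOptions: " + ", ".join(question.options)
            return prompt

        # Hypothetical usage with a synthetic patient record snippet.
        q = FormQuestion("Is the patient currently on anticoagulants?", QuestionCategory.BINARY)
        print(build_qa_prompt(q, "Medications: apixaban 5 mg twice daily."))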

  • 09:14 AM - 09:36 AM

    Dynamic Sequential and Interconnected Policies for Collaborative Supply Networks Using Multi-Objective Reinforcement Learning

    • Niloofar Gilani Larimi, presenter, University of Victoria
    • Adel Guitouni, University of Victoria

    Effective resource allocation in dynamic, uncertain environments is crucial for mitigating disruptions in critical supply networks such as healthcare. This study introduces a dual-phase framework that combines a sequential component for proactive ordering with an interconnected component for reactive inventory pooling through transshipment policies. Using deep reinforcement learning with the proximal policy optimization algorithm, it addresses non-stationary demand and stochastic lead times in multi-echelon, multi-product networks. Unlike static approaches, this scalable method, validated through a case study, balances cost efficiency and service equity. Monte Carlo sampling is used to explore scenarios with high confidence and low error. To enhance realism, transshipment feasibility is assessed based on capacity, product type, and travel time between nodes. Results show that combining proactive and reactive policies reduces shortages by nearly half with minimal variability. Centrally located end users serve as backups during emergencies, offering policymakers strategies to enhance resilience and improve critical decision-making.
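
    A minimal sketch of the kind of transshipment feasibility screen mentioned above, checking capacity, product compatibility, and travel time; the node attributes, thresholds, and example values are illustrative assumptions, not the case-study parameters.

        # Illustrative feasibility check before allowing a transshipment between two nodes.
        from dataclasses import dataclass
        from typing import Dict, Set

        @dataclass
        class Node:
            spare_capacity: int                 # units the sending node can release
            handled_products: Set[str]          # product types the node can store and ship
            travel_time_to: Dict[str, float]    # travel time in hours to other nodes, keyed by node id

        def transshipment_feasible(sender: Node, receiver_id: str, product: str,
                                   quantity: int, max_travel_hours: float) -> bool:
            """Screen a candidate transshipment on capacity, product type, and travel time."""
            if product not in sender.handled_products:
                return False
            if sender.spare_capacity < quantity:
                return False
            travel = sender.travel_time_to.get(receiver_id)
            return travel is not None and travel <= max_travel_hours

        # Hypothetical example: release 50 units of vaccine only if they can arrive within 12 hours.
        depot = Node(spare_capacity=200, handled_products={"vaccine", "ppe"},
                     travel_time_to={"clinic_a": 8.0, "clinic_b": 20.0})
        print(transshipment_feasible(depot, "clinic_a", "vaccine", 50, max_travel_hours=12.0))  # True
        print(transshipment_feasible(depot, "clinic_b", "vaccine", 50, max_travel_hours=12.0))  # False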

  • 09:36 AM - 09:58 AM

    Reinforcement Learning for Optimizing Physician Effort Allocation in Emergency Departments

    • Parvin Malekzadeh, presenter, Rotman School of Management, University of Toronto, Toronto, Ontario
    • Dmitry Krass, Rotman School of Management, University of Toronto, Toronto, Ontario
    • Opher Baron, Rotman School of Management, University of Toronto, Toronto, Ontario

    In emergency departments (EDs), physicians must allocate effort between new patients awaiting initial assessment and in-system patients needing reassessment/care completion. We study how this effort allocation impacts both overall ED service quality and physician throughput (i.e., number of patients who have received both initial and secondary assessments within a shift). Physicians often aim to maximize throughput, driven by financial incentives and continuity of care concerns, while EDs prioritize minimizing overall waiting times. These competing objectives necessitate identifying optimal strategies for physicians and EDs, and an understanding of their interactions and trade-offs. We model the problem as a finite-horizon Markov decision process in a two-station tandem queueing system, capturing stochastic arrivals, service times, and patient abandonment. Using dynamic programming and reinforcement learning, we identify optimal strategies (policies) for physicians, the ED system, and their Pareto-optimal combination.
    Our findings demonstrate threshold-based policies and offer actionable insights to optimize effort allocation in healthcare.
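
    A minimal backward-induction sketch of a finite-horizon model of this flavour on a toy discretized state space, where each period the physician either assesses a new patient or completes care for an in-system patient; the horizon, arrival probability, costs, and rewards are illustrative assumptions, not the calibrated parameters of the study, and abandonment is omitted for brevity.

        # Toy two-station tandem queue solved by backward induction.
        # State (n1, n2): patients awaiting initial assessment and reassessment/completion.
        import numpy as np

        T = 20                   # decision periods in a shift (assumption)
        MAX_Q = 10               # cap on each queue length for the toy state space
        P_ARRIVAL = 0.6          # probability of a new arrival each period (assumption)
        WAIT_COST = 1.0          # per-patient, per-period waiting cost
        COMPLETION_REWARD = 5.0  # reward for completing a patient's care (throughput proxy)

        V = np.zeros((T + 1, MAX_Q + 1, MAX_Q + 1))               # value-to-go, V[T] = 0
        policy = np.zeros((T, MAX_Q + 1, MAX_Q + 1), dtype=int)   # 0 = initial assessment, 1 = completion

        def action_value(t, n1, n2, action):
            """Immediate reward plus expected value-to-go for one action in state (n1, n2) at period t."""
            if action == 0 and n1 > 0:        # initial assessment moves a patient downstream
                n1_next, n2_next, reward = n1 - 1, min(n2 + 1, MAX_Q), 0.0
            elif action == 1 and n2 > 0:      # completion discharges an in-system patient
                n1_next, n2_next, reward = n1, n2 - 1, COMPLETION_REWARD
            else:                             # chosen queue is empty: physician idles this period
                n1_next, n2_next, reward = n1, n2, 0.0
            reward -= WAIT_COST * (n1 + n2)
            # Expectation over a possible new arrival to station 1.
            v_arrival = V[t + 1, min(n1_next + 1, MAX_Q), n2_next]
            v_no_arrival = V[t + 1, n1_next, n2_next]
            return reward + P_ARRIVAL * v_arrival + (1 - P_ARRIVAL) * v_no_arrival

        for t in range(T - 1, -1, -1):
            for n1 in range(MAX_Q + 1):
                for n2 in range(MAX_Q + 1):
                    values = [action_value(t, n1, n2, a) for a in (0, 1)]
                    policy[t, n1, n2] = int(np.argmax(values))
                    V[t, n1, n2] = max(values)

        # Inspect the computed mid-shift policy over the (n1, n2) grid.
        print(policy[T // 2])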
