Thomas Runarsson - Academia.edu

Papers by Thomas Runarsson

Towards an evolutionary guided exact solution to elective surgery scheduling under uncertainty and ward restrictions

The problem of constructing surgery schedules with limited downstream ward capacity is formulated as a mathematical model with probabilistic constraints. This exact model becomes computationally intractable for mathematical programming solvers as the number of patients increases. An evolutionary algorithm is used to restrict the size of the search space, making the problem tractable again and effectively guiding the solvers towards an exact and feasible solution. Solutions are validated using Monte Carlo simulations during the evolutionary search. The optimization problem is inspired by real challenges faced by many hospitals today and is tested on real-life hospital data.
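A minimal sketch of the Monte Carlo validation step described above, in Python. The schedule representation, the geometric length-of-stay model and the helper name `ward_violation_probability` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def ward_violation_probability(day_of_surgery, los_mean, ward_capacity,
                               horizon=14, n_sim=10_000):
    """Estimate P(ward occupancy exceeds capacity on any day) for a candidate
    schedule. day_of_surgery[i] is the day patient i is operated on and
    los_mean[i] the expected length of stay (geometric stays are an
    illustrative assumption, not the paper's model)."""
    violations = 0
    for _ in range(n_sim):
        occupancy = np.zeros(horizon, dtype=int)
        los = rng.geometric(1.0 / np.asarray(los_mean))        # sampled stays
        for day, stay in zip(day_of_surgery, los):
            occupancy[day:min(day + stay, horizon)] += 1       # bed occupied
        violations += np.any(occupancy > ward_capacity)
    return violations / n_sim

# toy candidate schedule proposed by the evolutionary search
schedule = [0, 0, 1, 2, 2, 3, 4]           # surgery day per patient
expected_stay = [3, 2, 4, 1, 2, 3, 2]      # expected length of stay in days
print(ward_violation_probability(schedule, expected_stay, ward_capacity=4))
```

A candidate is accepted by the search only if this estimated probability stays below the chosen risk level.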

On the mechanical stability of porous coated press fit titanium implants: A finite element study of a pushout test

Journal of Biomechanics, 2008

Pushout tests can be used to estimate the shear strength of the bone-implant interface, and numerous such experimental studies have been published in the literature. Despite this, researchers are still some way off from developing accurate numerical models to simulate implant stability. In the present work a specific experimental pushout study from the literature was simulated using two different bone-implant interface models. The implant was a porous coated Ti-6Al-4V implant retrieved 4 weeks postoperatively from a dog model. The purpose was to find out which of the interface models could replicate the experimental results using physically meaningful input parameters. The results showed that a model based on partial bone ingrowth (ingrowth stability) is superior to an interface model based on friction and prestressing due to press fit (initial stability). Even though the present study is limited to a single experimental setup, the authors suggest that the presented methodology can be used to investigate implant stability in other experimental pushout models. This would eventually enhance the much needed understanding of the mechanical response of the bone-implant interface and help to quantify how implant stability evolves with time.

Asynchronous Parallel (1+1)-CMA-ES for Constrained Global Optimisation

The global search performance of an asynchronous parallel (1 + 1) evolution strategy using full covariance matrix adaptation for constrained optimization is presented. Although the (1 + 1)-CMA-ES may be a poor global optimizer, it is shown that within this parallel framework the global search performance can be enhanced significantly. This is achieved even when all individual (1 + 1) strategies use the same initial search point. The focus is on constrained global optimization using a recently developed (1 + 1) evolution strategy for this purpose.
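A minimal sketch of the parallel multi-start idea, assuming a simplified (1+1)-ES with the 1/5th success rule and a penalty for constraint violation; the full covariance matrix adaptation of the referenced strategy, and a truly asynchronous migration scheme, are omitted for brevity.

```python
import numpy as np
from multiprocessing import Pool

def penalised_sphere(x):
    """Toy constrained objective: sphere plus a penalty for the illustrative
    constraint sum(x) >= 1 (a stand-in for the paper's test problems)."""
    violation = max(0.0, 1.0 - x.sum())
    return float(x @ x) + 1e3 * violation

def one_plus_one_es(seed, x0, sigma=0.5, iters=2000):
    """Simplified (1+1)-ES with the 1/5th success rule (no covariance update)."""
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), penalised_sphere(x0)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)
        fy = penalised_sphere(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5          # expand step size on success
        else:
            sigma *= 0.9          # shrink on failure (~1/5th rule)
    return fx, x

if __name__ == "__main__":
    x0 = np.full(10, 5.0)                       # identical start point for all strategies
    with Pool(4) as pool:                       # independent strategies run in parallel
        results = pool.starmap(one_plus_one_es, [(s, x0) for s in range(8)])
    print("best objective:", min(f for f, _ in results))
```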

Stochastic Master Surgical Scheduling Under Ward Uncertainty

Springer Proceedings in Mathematics & Statistics, 2020

In this work, we address the elective surgery scheduling problem and the risk of last-minute cancellations. This risk is associated with the likelihood of operating rooms going into overtime and of ward beds exceeding their limit. The risk of overtime is constrained by considering only feasible combinations of operating-room-day schedules. To account for feasibility, we restrict the number of surgeries assigned to each combination and force it to maintain the correct ratio between in- and out-patients for each operator. Furthermore, the probability of running into overtime is bounded and verified using Monte Carlo simulation. The risk of exceeding the ward limit is handled by a mixed-integer programming model in which the probability of exceeding the available downstream ward beds is bounded. The approach is inspired by real challenges and tested on real-life hospital data.
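A sketch of how the overtime probability of one operating-room-day combination could be bounded with Monte Carlo simulation. The log-normal duration model, the session length and the function name `overtime_probability` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def overtime_probability(durations_mean, durations_std, session_length=480.0,
                         n_sim=20_000):
    """Monte Carlo estimate of the probability that the surgeries assigned to
    one operating-room day overrun the session."""
    m = np.asarray(durations_mean, dtype=float)
    s = np.asarray(durations_std, dtype=float)
    # log-normal parameters matching the given means and standard deviations
    sigma = np.sqrt(np.log(1.0 + (s / m) ** 2))
    mu = np.log(m ** 2 / np.sqrt(s ** 2 + m ** 2))
    samples = rng.lognormal(mu, sigma, size=(n_sim, m.size))
    return float(np.mean(samples.sum(axis=1) > session_length))

# a candidate combination of surgeries for one operating-room day (minutes)
p = overtime_probability(durations_mean=[120, 90, 150, 60],
                         durations_std=[30, 20, 40, 15])
print(f"estimated overtime probability: {p:.3f}")   # accept only if p <= alpha
```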

Improving Curriculum Timetabling Models Using Clustering

This work describes how clustering can aid in the modelling of the curriculum timetabling problem. The practical timetabling problem cannot be solved to proven optimality in any reasonable time. A clustering technique is used to construct additional constraints that reduce the size of the feasible search space and improve the quality of the timetables found within a reasonable computational time. The approach is illustrated on a real-world timetabling problem using a state-of-the-art commercial solver.
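One way such clustering-derived constraints could look, sketched with scikit-learn: courses are clustered by their curriculum-membership vectors and pairs within a cluster become candidate additional constraints. The feature encoding and the constraint form are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=3)

# toy data: rows are courses, columns are curricula; entry 1 means the course
# belongs to that curriculum (values and sizes are purely illustrative)
membership = rng.integers(0, 2, size=(40, 8))

# group courses with similar curriculum profiles
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(membership)

# derive candidate constraints: courses in the same cluster share many curricula,
# so pairs within a cluster are natural candidates for added conflict constraints
extra_constraints = []
for c in range(6):
    members = np.flatnonzero(labels == c)
    extra_constraints += [(int(i), int(j)) for k, i in enumerate(members)
                          for j in members[k + 1:]]
print(f"{len(extra_constraints)} candidate pairwise constraints")
```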

Proceedings of the 9th international conference on Parallel Problem Solving from Nature

Active model learning for the student nurse allocation problem

2022 IEEE Symposium Series on Computational Intelligence (SSCI), Dec 4, 2022

Calibration of Automatic Seizure Detection Algorithms

2022 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)

Neonatal seizure detection algorithms: The effect of channel count

Current Directions in Biomedical Engineering

The number of electrodes used to acquire neonatal EEG signals varies between institutions. Therefore, tools for automatic EEG analysis, such as neonatal seizure detection algorithms, need to be able to handle different electrode montages in order to find widespread use. The aim of this study was to analyse the effect of montage on neonatal seizure detector performance. A full 18-channel montage was compared to reduced 3- and 8-channel montages using a convolutional neural network for seizure detection. Sensitivity decreased by 10-18% for the reduced montages while specificity was mostly unaffected. Electrode artefacts and artefacts associated with biological rhythms caused incorrect classification of non-seizure activity in some cases, but these artefacts were filtered out in the 3-channel montage. Other types of artefacts had little effect. Reduced montages result in some reduction in classifier accuracy, but the performance may still be acceptable. Recording artefacts had a limi...
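A minimal sketch of a montage-agnostic detector in PyTorch: only the first convolution depends on the number of EEG channels, so the same architecture can be instantiated for 18-, 8- and 3-channel montages. The architecture, window length and layer sizes are illustrative and not the network used in the study.

```python
import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    """Minimal 1-D CNN over multi-channel EEG windows; only the first
    convolution depends on the montage size (number of channels)."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)        # seizure vs. non-seizure logit

    def forward(self, x):                         # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# the same architecture instantiated for the full and the reduced montages
for montage in (18, 8, 3):
    model = SeizureCNN(montage)
    window = torch.randn(2, montage, 1024)        # two 1024-sample EEG windows
    print(montage, "channels ->", model(window).shape)
```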

Discovering dispatching rules from data using imitation learning: A case study for the job-shop problem

Journal of Scheduling, 2017

Dispatching rules can be automatically generated from scheduling data. This paper demonstrates that the key to learning an effective dispatching rule is the careful construction of the training data, $\{\mathbf{x}_i(k), y_i(k)\}_{k=1}^K \in \mathscr{D}$, where (i) the features of partially constructed schedules $\mathbf{x}_i$ should reflect the induced data distribution $\mathscr{D}$ for when the rule is applied, which is achieved by updating the learned model in an active imitation learning fashion; (ii) $y_i$ is labelled optimally using a MIP solver; and (iii) the data need to be balanced, as the set is unbalanced with respect to the dispatching step $k$. Using the guidelines set by our framework, the design of custom dispatching rules for a particular scheduling application becomes more effective. In the presented study three different job-shop distributions are considered. The machine learning approach is based on preference learning, i.e. which dispatch (post-decision state) is preferable to another.
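A sketch of the preference-learning step on synthetic data: the MIP-optimal dispatch at each step is paired against every alternative, and a linear model is fitted on the feature differences. The feature set and the active (imitation-learning) data-collection loop are omitted; all names and sizes here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=4)

# Synthetic stand-in for dispatching data: at each step k there are several
# candidate dispatches, each described by a feature vector (e.g. processing
# time, work remaining, slack), and one of them is the MIP-optimal choice.
n_steps, n_candidates, n_features = 200, 5, 6
features = rng.normal(size=(n_steps, n_candidates, n_features))
optimal = rng.integers(0, n_candidates, size=n_steps)   # labels from a MIP solver

# Preference learning: turn "optimal beats alternative" into pairwise
# difference vectors and fit a linear ranking model on them.
diffs, labels = [], []
for k in range(n_steps):
    for j in range(n_candidates):
        if j == optimal[k]:
            continue
        d = features[k, optimal[k]] - features[k, j]
        diffs += [d, -d]            # add both orientations to balance the classes
        labels += [1, 0]
ranker = LogisticRegression().fit(np.array(diffs), np.array(labels))

# The learned rule dispatches the candidate with the highest linear score.
scores = features[0] @ ranker.coef_.ravel()
print("chosen dispatch at step 0:", int(np.argmax(scores)))
```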

On imitating Connect-4 game trajectories using an approximate n-tuple evaluation function

2015 IEEE Conference on Computational Intelligence and Games (CIG), 2015

The effect of game trajectories on learning after-state evaluation functions for the game Connect-4 is investigated. The evaluation function is approximated using a linear function of n-tuple features. The learning is supervised by an AI game engine, called Velena, within a preference learning framework. A different distribution of game trajectories is generated when the learned approximate evaluation function is applied, which may degrade the performance of the player. A technique known as the DAgger method is used to address this problem. Furthermore, the opponent's playing strategy is a source of new game trajectories; random play is introduced to the game to model this behaviour. The way random play is introduced again forms different game trajectories and results in various strengths of play being learned. An empirical study of a number of techniques for generating game trajectories is presented and evaluated.
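A minimal sketch of a linear n-tuple after-state evaluation for Connect-4. The choice of tuples (horizontal 4-cell windows) and the random initial weights are illustrative assumptions; the learning itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

ROWS, COLS, STATES = 6, 7, 3          # empty / player 1 / player 2

# Illustrative n-tuples: the four cells of every horizontal 4-in-a-row window.
# The tuples used in the paper may be chosen differently.
n_tuples = [[(r, c + i) for i in range(4)]
            for r in range(ROWS) for c in range(COLS - 3)]

# one weight per (tuple, local pattern) pair; a pattern is the base-3 index of its cells
weights = [rng.normal(scale=0.01, size=STATES ** 4) for _ in n_tuples]

def evaluate(board):
    """Linear n-tuple evaluation of an after-state board (6x7, entries 0/1/2)."""
    value = 0.0
    for tup, w in zip(n_tuples, weights):
        index = 0
        for (r, c) in tup:
            index = index * STATES + board[r, c]    # base-3 encoding of the pattern
        value += w[index]
    return value

board = np.zeros((ROWS, COLS), dtype=int)
board[5, 3] = 1                                     # a single stone in the centre column
print("after-state value:", evaluate(board))
```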

Evolutionary Learning of Weighted Linear Composite Dispatching Rules for Scheduling

Proceedings of the International Conference on Evolutionary Computation Theory and Applications, 2014

Bounding the Likelihood of Exceeding Ward Capacity in Stochastic Surgery Scheduling

Applied Sciences, Aug 27, 2022

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

An Evolutionary Approach to the Discovery of Hybrid Branching Rules for Mixed Integer Solvers

An evolutionary algorithm is used to search for problem-specific branching rules within the branch-and-bound framework. For this purpose an instance generator is used to create training data for an integer programming problem, in particular the multi-dimensional 0/1 knapsack problem. An extensive experimental study illustrates that new and more effective rules can be found using evolutionary computation.
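A toy illustration of the idea, assuming an ordinary 0/1 knapsack for brevity instead of the multi-dimensional variant: a small branch-and-bound counts nodes under a hybrid branching score, and a simple evolutionary loop tunes the score's weights on generated instances. Everything here, including the two features of the score, is an illustrative assumption.

```python
import numpy as np

def bnb_nodes(values, weights, capacity, w):
    """Count branch-and-bound nodes on a 0/1 knapsack instance when the
    branching item is picked by the hybrid score w[0]*density + w[1]*weight."""
    best, nodes = 0.0, 0

    def upper_bound(value, cap, free):
        # fractional (LP-relaxation) bound over the remaining free items
        ub = value
        for i in sorted(free, key=lambda j: values[j] / weights[j], reverse=True):
            if weights[i] <= cap:
                ub += values[i]; cap -= weights[i]
            else:
                ub += values[i] * cap / weights[i]
                break
        return ub

    def recurse(value, cap, free):
        nonlocal best, nodes
        nodes += 1
        best = max(best, value)
        if not free or upper_bound(value, cap, free) <= best:
            return
        # hybrid branching rule whose weights the evolutionary algorithm tunes
        i = max(free, key=lambda j: w[0] * values[j] / weights[j] + w[1] * weights[j])
        rest = [j for j in free if j != i]
        if weights[i] <= cap:
            recurse(value + values[i], cap - weights[i], rest)   # include item i
        recurse(value, cap, rest)                                # exclude item i

    recurse(0.0, capacity, list(range(len(values))))
    return nodes

rng = np.random.default_rng(seed=6)
instances = []
for _ in range(4):                                    # small generated instances
    v = rng.integers(10, 100, size=12).astype(float)
    wt = rng.integers(5, 50, size=12).astype(float)
    instances.append((v, wt, 0.5 * wt.sum()))

def fitness(w):
    return float(np.mean([bnb_nodes(v, wt, c, w) for v, wt, c in instances]))

# simple (5+5) evolutionary loop over the two rule weights
pop = [rng.normal(size=2) for _ in range(10)]
for _ in range(15):
    pop.sort(key=fitness)
    pop = pop[:5] + [p + 0.3 * rng.normal(size=2) for p in pop[:5]]
best_w = min(pop, key=fitness)
print("evolved weights:", best_w, "average nodes:", fitness(best_w))
```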

Detection of grapes in natural environment using feedforward neural network as a classifier

2016 SAI Computing Conference (SAI), 2016

Detection of grapes in real-life images is important in various viticulture applications. A grape detector based on an SVM classifier, in combination with a HOG descriptor, has proven to be very efficient in the detection of white varieties in high-resolution images. Nevertheless, the high time complexity of this approach was not suitable for real-time applications, even when a detector with a simplified structure was used. Thus, we examined the possibility of applying the simplified version to images of lower resolution. For this purpose, we designed a method that searches for the detector settings giving the best trade-off between time complexity and performance. In order to provide precise evaluation results, we formed new extended datasets. We discovered that even when applied to low-resolution images, the simplified detector, with an appropriate setting of all tunable parameters, was competitive with other state-of-the-art solutions. We concluded that the detector is suitable for real-time detection of grapes in real-life images.
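A minimal HOG-plus-linear-SVM detection sketch using scikit-image and scikit-learn on synthetic patches. Patch size, HOG parameters, window stride and the decision threshold are all illustrative settings of the kind the described search would tune.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(seed=7)

def hog_features(patch):
    """HOG descriptor for a grayscale patch (parameters are illustrative)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# synthetic stand-ins for labelled grape / background patches (64x64 grayscale)
positives = [rng.random((64, 64)) for _ in range(50)]
negatives = [rng.random((64, 64)) for _ in range(50)]
X = np.array([hog_features(p) for p in positives + negatives])
y = np.array([1] * 50 + [0] * 50)
clf = LinearSVC(C=1.0).fit(X, y)

# sliding-window detection on a low-resolution image
image = rng.random((256, 320))
step, size = 32, 64
for r in range(0, image.shape[0] - size + 1, step):
    for c in range(0, image.shape[1] - size + 1, step):
        score = clf.decision_function([hog_features(image[r:r + size, c:c + size])])[0]
        if score > 0.5:                      # detection threshold is a tunable parameter
            print("candidate grape window at", (r, c), "score", round(float(score), 2))
```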

Ensemble learning using individual neonatal data for seizure detection

IEEE Journal of Translational Engineering in Health and Medicine

Sharing medical data between institutions is difficult in practice due to data protection laws and official procedures within institutions. Therefore, most existing algorithms are trained on relatively small electroencephalogram (EEG) data sets, which is likely to be detrimental to prediction accuracy. In this work, we simulate the case where data cannot be shared by splitting a publicly available data set into disjoint sets representing data in individual institutions. Methods and procedures: We propose to train a (local) detector in each institution and aggregate their individual predictions into one final prediction. Four aggregation schemes are compared, namely the majority vote, the mean, the weighted mean and the Dawid-Skene method. The method was validated on an independent data set using only a subset of EEG channels. Results: The ensemble reaches accuracy comparable to a single detector trained on all the data when a sufficient amount of data is available in each institution. Conclusion: The weighted mean aggregation scheme showed the best performance; it was only marginally outperformed by the Dawid-Skene method when the local detectors approach the performance of a single detector trained on all available data. Clinical impact: Ensemble learning allows training of reliable algorithms for neonatal EEG analysis without the need to share potentially sensitive EEG data between institutions.
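A sketch of three of the four aggregation schemes (the Dawid-Skene method is omitted) applied to synthetic detector outputs; the weights are assumed to come from, for example, local validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(seed=8)

# per-institution detector outputs: seizure probabilities for 10 EEG segments
# from 4 local detectors (values are synthetic stand-ins)
probs = rng.random((4, 10))
weights = np.array([0.2, 0.3, 0.1, 0.4])      # e.g. proportional to validation accuracy

majority = (np.mean(probs > 0.5, axis=0) > 0.5).astype(int)    # majority vote
mean_agg = (probs.mean(axis=0) > 0.5).astype(int)              # mean of probabilities
weighted = (weights @ probs / weights.sum() > 0.5).astype(int) # weighted mean

print("majority vote :", majority)
print("mean          :", mean_agg)
print("weighted mean :", weighted)
```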

Physiological

Feature test-retest reliability is proposed as a useful criterion for the selection or exclusion of features in time series classification tasks. Three sets of physiological time series are examined: EEG and ECG recordings together with measurements of neck movement. Comparisons of reliability estimates from test-retest studies with measures of feature importance from classification tasks suggest that low reliability can be used to exclude irrelevant features prior to classifier training. By removing features with low reliability, an unnecessary degradation of classifier accuracy may be avoided.
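A minimal sketch of a reliability-based feature filter on synthetic test-retest data, using the per-feature Pearson correlation across subjects as a simple stand-in for a formal reliability coefficient such as the ICC; the 0.7 threshold is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(seed=9)

# synthetic test-retest data: the same 20 features extracted for 30 subjects
# in two recording sessions (real data would come from repeated measurements)
session_1 = rng.normal(size=(30, 20))
session_2 = 0.8 * session_1 + 0.2 * rng.normal(size=(30, 20))   # mostly reliable
session_2[:, :5] = rng.normal(size=(30, 5))                      # 5 unreliable features

# per-feature test-retest reliability as the correlation across subjects
reliability = np.array([np.corrcoef(session_1[:, j], session_2[:, j])[0, 1]
                        for j in range(session_1.shape[1])])

keep = reliability > 0.7          # exclude low-reliability features before training
print(f"kept {keep.sum()} of {keep.size} features")
```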

Deep Preference Neural Network for Move Prediction in Board Games

Communications in Computer and Information Science, 2018

The training of deep neural networks for move prediction in board games using comparison training is studied. Specifically, the aim is to predict moves for the game Othello from championship tournament game data. A general deep preference neural network is presented, based on a twenty-year-old model by Tesauro. The problem of over-fitting becomes an immediate concern when training deep preference neural networks, and it is shown how dropout can combat this problem to a certain extent. It is also illustrated how classification test accuracy does not necessarily correspond to move accuracy, and the key difference between preference training and single-label classification is discussed. The careful use of dropout, coupled with richer game data, produces an evaluation function that is a better move predictor but does not necessarily produce a stronger game player.
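A minimal comparison-training sketch in PyTorch: a shared evaluation network with dropout scores the expert move and a non-expert alternative from the same position, and a logistic loss pushes the expert's score higher. The architecture, board encoding and random stand-in data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# shared evaluation network scoring a single after-state (8x8 Othello board,
# flattened); architecture and layer sizes are illustrative only
evaluator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 1),
)
optimiser = torch.optim.Adam(evaluator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# comparison training: the expert (tournament) move should score higher than
# a non-expert alternative from the same position; boards here are random stand-ins
expert_moves = torch.randn(256, 64)
other_moves = torch.randn(256, 64)

for _ in range(100):
    margin = evaluator(expert_moves) - evaluator(other_moves)   # score difference
    loss = loss_fn(margin, torch.ones_like(margin))             # expert preferred
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
print("final preference loss:", float(loss))
```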

Approximating Probabilistic Constraints for Surgery Scheduling Using Neural Networks

Machine Learning, Optimization, and Data Science, 2019

The problem of generating surgery schedules is formulated as a mathematical model with probabilistic constraints. The approach presented uses modern machine learning to approximate the model's probabilistic constraints. The technique is inspired by models that use slacks in capacity planning. Essentially, a neural network is used to learn linear constraints that replace the probabilistic constraints. The data used to learn these constraints are verified and labeled using Monte Carlo simulations. The solutions discovered iteratively during the optimization procedure also produce new training data, and the neural network continues its training on these data until the discovered solution is verified to be feasible. The stochastic surgery model studied is inspired by real challenges faced by many hospitals today and is tested on real-life data.

Learning Probabilistic Constraints for Surgery Scheduling Using a Support Vector Machine

The problem of generating surgery schedules is formulated as a mathematical model with probabilistic constraints. The approach presented is a new method for tackling probabilistic constraints using machine learning. The technique is inspired by models that use slacks in capacity planning. Essentially, support vector classification is used to learn a linear constraint that replaces the probabilistic constraint. The data used to learn this constraint are labeled using Monte Carlo simulations. These data are discovered iteratively during the optimization procedure and added to the training set. The linear support vector classifier is then updated during the search until a feasible solution is discovered. The stochastic surgery model presented is inspired by real challenges faced by many hospitals today and is tested on real-life data.
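A sketch of the surrogate-constraint idea shared by this paper and the neural-network variant above, using scikit-learn: candidate schedules are labelled feasible or infeasible by Monte Carlo simulation and a linear SVM is fitted to them, yielding a linear constraint that can stand in for the chance constraint. The noise model, capacity and feature encoding are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(seed=10)

def feasible_by_monte_carlo(loads, capacity=480.0, alpha=0.1, n_sim=2000):
    """Label a candidate schedule (expected workloads in one room-day) as feasible
    if the estimated probability of exceeding capacity is below alpha. The noise
    model is an illustrative assumption."""
    totals = loads.sum() + rng.normal(0.0, 0.15 * loads.sum(), size=n_sim)
    return float(np.mean(totals > capacity)) <= alpha

# candidate schedules found so far during the search (synthetic stand-ins):
# each row holds the expected durations of the surgeries assigned to one day
candidates = rng.uniform(40.0, 160.0, size=(200, 4))
labels = np.array([feasible_by_monte_carlo(x) for x in candidates], dtype=int)

# learn a linear surrogate of the probabilistic constraint
svc = LinearSVC(C=10.0).fit(candidates, labels)
w, b = svc.coef_.ravel(), float(svc.intercept_[0])

# the surrogate "w @ x + b >= 0" can now replace the chance constraint in the
# MIP; newly discovered solutions are re-checked by Monte Carlo and, if
# mislabelled, appended to the training set before refitting.
x_new = rng.uniform(40.0, 160.0, size=4)
print("surrogate says feasible:", bool(w @ x_new + b >= 0),
      "| Monte Carlo says feasible:", feasible_by_monte_carlo(x_new))
```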
