Post-doctoral Fellow
DISI, Università di Bologna
Tel. +39 051 20 93270
I am a fixed-term Assistant Professor at the DISI department of the University of Bologna, working on Combinatorial Optimization and Decision Support Systems. In particular, my research activity focuses on hybrid optimization methods based on heterogeneous techniques such as Constraint Programming, (Mixed) Integer Linear (and Non-Linear) Programming, and Machine Learning. My main application fields are resource allocation and scheduling problems, cyclic scheduling (e.g., for control system design), and scheduling problems in the presence of uncertainty.
More recently, I have started to work with prof. Michela Milano on a methodology to solve optimization problems over complex systems by embedding Machine Learning models within optimization models: we called it "Empirical Model Learning". If you are interested, you can find more information here, or you can check some code for embedding Neural Networks in CP.
I have to say that I like this "Empirical Model Learning" idea very much, and not just because it is something I am working on. I genuinely like the idea that, by hybridizing CP (or another optimization technique) and learning, we can tackle problems that would be impossible to address using either technique alone.
Some shared resources (some refer to past activities...):
I have recently started to share code via git repositories and Virtual Machines. Currently, I have:
I'm a post-doctoral fellow at DEIS, University of Bologna, since April 2010, within the Artificial Intelligence group. I got my PhD degree at the University of Bologna in April 2010, with the thesis "Hybrid Methods for Resource Allocation and Scheduling Problems in Deterministic and Stochastic Environments". My research activity is related to Constraint Programming and its integration with Integer Programming techniques; in particular, my focus is on resource allocation and scheduling problems, with applications to embedded system design based on multiprocessor platforms.

I got a Master Degree in Computer Science at the University of Bologna, with the dissertation "Allocation and Scheduling of Conditional Task Graphs on MultiProcessor Systems on Chip: a stochastic, constraint based, approach", and grade 110/110 cum laude. My bachelor degree is from the same university, with the dissertation "Development of a support web site for scientific research" and grade 110/110.

From April 2006 to September 2006 and in January 2007 I did some research at the Intelligent Information Systems Institute of Cornell University (Ithaca, NY). From June 2009 to December 2010 I took an internship at the École Polytechnique Fédérale de Lausanne (EPFL), in the LSI lab.
The pages in this section contain code and data (typically: results or instances) from some of the research topics I have worked on. I have recently started to use git repositories and Virtual Machines to share code. Currently, I have:
You can find some code for Neuron Constraints in the section about Empirical Model Learning.
This is the generator used to build instances for the works on uncertain durations and for those making use of Logic-based Benders' Decomposition (i.e., Conditional Task Graphs and multi-stage LBBD). A paper describing some of the features of a flexible graph generator was presented at the ICAPS07 workshop "Scheduling a scheduling competition". The generator is designed to produce Task Graphs representing "realistic applications"; options are customized via a configuration file, passed to the software as a command line argument. The generator can label arcs and nodes in the graph with a user-defined set of attributes; attribute values are obtained by combining randomly generated numbers and the values of other attributes, which makes it possible to state (e.g.) that a task's maximum duration is always larger than its minimum duration. The code for the generator is available for download. Be careful, however: it is not well polished and there is no documentation at the moment. UPDATE: there's a guy at our lab working on a brand new graph generator; I'll let you know when it is available.
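To give an idea of the attribute mechanism, here is a minimal Python sketch (this is not the actual generator; the function name and the attribute rules are made up for the example) showing how a derived attribute can be defined on top of a random one, so that a task's maximum duration never falls below its minimum:

```python
import random

def generate_task_graph(n_tasks, edge_prob=0.3, seed=42):
    """Generate a random DAG with attribute-labelled nodes (sketch).

    Attribute values combine random numbers with other attribute
    values: here max_duration is derived from min_duration, so the
    maximum is guaranteed to be at least the minimum.
    """
    rng = random.Random(seed)
    nodes = {}
    for t in range(n_tasks):
        min_dur = rng.randint(1, 10)
        # derived attribute: built on top of min_dur, hence >= min_dur
        max_dur = min_dur + rng.randint(0, 5)
        nodes[t] = {"min_duration": min_dur, "max_duration": max_dur}
    # arcs go only from lower to higher index: acyclic by construction
    arcs = [(i, j) for i in range(n_tasks) for j in range(i + 1, n_tasks)
            if rng.random() < edge_prob]
    return nodes, arcs
```

The same pattern extends to any attribute that must respect a relation with another one (e.g., resource requirements bounded by a platform parameter).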
Here you can access data and code related to the paper:
A. Bonfietti, M. Lombardi, M. Milano, "De-cyclic Cyclic Scheduling Problems", Proc. of ICAPS 2013
[NEW!] I have just released the code of the PCP solver we used for the experimentation, with all the MCS detection algorithms implemented.
This page contains links to pseudo-code and experimental results for an MCS detection method we proposed, based on the solution of a minimum flow problem. In the experimentation we compare:
All approaches are tested within a (relatively) classical CP Depth First Search approach (referred to as CP) and an optimization approach based on the destructive computation of increasing lower bounds (referred to as SWEEP). More details can be found in this paper.
Here is the pseudo-code for the minimum flow computation and for the extraction of the maximum weight independent set (if the graph has been built by splitting the nodes with non-null resource requirements):
Beware, while the real code has been extensively tested, the pseudo code has not.
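For readers unfamiliar with the notion: a Minimal Critical Set is an inclusion-minimal set of pairwise-overlapping activities whose total resource demand exceeds the capacity. The following Python sketch enumerates MCSs by brute force; it is meant only to clarify the definition and is unrelated to the (much more efficient) minimum-flow method above:

```python
from itertools import combinations

def minimal_critical_sets(demand, capacity, can_overlap):
    """Brute-force MCS enumeration, for illustration only.

    demand: dict mapping task -> resource requirement
    can_overlap: set of frozensets {a, b} for pairs of tasks that may
                 run concurrently
    An MCS is a set of pairwise-overlapping tasks whose total demand
    exceeds the capacity, minimal with respect to set inclusion.
    """
    tasks = sorted(demand)
    result = []
    for size in range(2, len(tasks) + 1):
        for subset in combinations(tasks, size):
            # all tasks in the set must be able to overlap in time
            if any(frozenset(p) not in can_overlap
                   for p in combinations(subset, 2)):
                continue
            total = sum(demand[t] for t in subset)
            if total <= capacity:
                continue  # not critical
            # minimal: dropping any task brings demand within capacity
            if all(total - demand[t] <= capacity for t in subset):
                result.append(frozenset(subset))
    return result
```

For instance, with three mutually overlapping tasks of demand 2 on a capacity-3 resource, every pair is an MCS, while the full triple is critical but not minimal.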
The following list contains links to tables with the results for single approaches and pairwise comparisons; note that, since comparisons are done on an instance basis, their data cannot be deduced from the single-approach tables. All tables are in CSV format and use Linux EOL.
The following list is a selection of summary results; they are much more readable than the previous ones.
I have been working for some time on model learning and Neuron Constraints. You can find more details (including code) here.
This page contains the code of a solver to build optimized robust schedules for problems where tasks have fixed or even uncertain durations. The approach couples an off-line optimizer and a simple on-line scheduling policy, and was mostly developed in the context of the FP7 Project SMECY.
The off-line optimizer is based on the Precedence Constraint Posting technique, i.e. a "schedule" is in fact a set of precedence constraints that make the original graph conflict free. The solver searches for feasible schedules by branching over Minimal Critical Sets, which can be identified using a number of different techniques.
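The overall search scheme can be sketched as follows; the callback names here are made up for the example, and the real solver obviously adds far more machinery (propagation, MCS detection strategies, bound computation):

```python
def pcp_search(find_mcs, resolvers, post, undo):
    """Skeleton of Precedence Constraint Posting search (sketch).

    find_mcs()     -> a Minimal Critical Set, or None if the current
                      graph is already conflict free
    resolvers(mcs) -> candidate precedence constraints (i, j), each
                      resolving the MCS by forcing i before j
    post / undo    -> add / retract a precedence constraint
    Returns True iff a conflict-free set of precedences is found.
    """
    mcs = find_mcs()
    if mcs is None:
        return True                  # no MCS left: graph conflict free
    for (i, j) in resolvers(mcs):
        post(i, j)
        if pcp_search(find_mcs, resolvers, post, undo):
            return True
        undo(i, j)                   # backtrack, try another resolver
    return False
```

The "schedule" returned by such a search is precisely the set of posted precedence constraints, which is what makes the approach robust to duration variability.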
The on-line scheduling policy can be arbitrarily chosen, provided that some basic specifications are satisfied. In detail, we support two broad classes of on-line schedulers:
The code is known to compile on Ubuntu Linux distributions. Dependencies include the Boost library and the IBM ILOG CP suite (version 1.5 or above), available for free for academic use via the IBM Academic Initiative. The code is released with a detailed README file, containing instructions on how to build and run the software. Moreover, we include a subset of Resource Constrained Scheduling Problem instances from the PSPLIB library, in the input format required by the PCP scheduler.
Here are the links to all available resources:
Here we provide the instances and the Empirical System Model for two Thermal-Aware Workload Dispatching Problems on multicore CPUs. You can contact Michele Lombardi via email for a detailed description of the data format.
In this project, we investigated the impact of information extracted from sampling and diving on the solution of Constraint Satisfaction Problems (CSPs). A sample is a complete assignment of variables to values taken from their domains according to a given distribution. Diving consists in repeatedly performing depth first search attempts with random variable and value selection, constraint propagation enabled and backtracking disabled; each attempt is called a dive and, unless a feasible solution is found, it is a partial assignment of variables (whereas a sample is a --possibly infeasible-- complete assignment).

While the probability of finding a feasible solution via sampling or diving is negligible if the problem is difficult enough, samples and dives are very fast to generate and, intuitively, even when they are infeasible, they can provide some statistical information on the search space structure. The aim of this work is to understand to what extent it is possible to support the CSP solving process with information derived from sampling and diving. In particular, we are interested in extracting from samples and dives precise indications on the quality of individual variable-value assignments with respect to feasibility.

We formally prove that even uniform sampling could provide a precise evaluation of the quality of single variable-value assignments; as expected, this requires huge sample sizes and is therefore not useful in practice. On the contrary, diving is much better suited for assignment evaluation purposes. We undertake a thorough experimental analysis on a collection of Partial Latin Square and Car Sequencing instances to assess the quality of the information provided by dives. Dive features are identified and their impact on search is evaluated. The results show that diving provides information that can be fruitfully exploited.
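A single dive, as defined above, can be sketched in a few lines of Python. The `propagate` callback (a made-up name standing in for the solver's propagation engine) is assumed to return the pruned domains, or `None` on failure; a toy all-different propagator is included for the demo:

```python
import random

def dive(variables, domains, propagate, seed=0):
    """One dive: depth first descent with random variable and value
    selection, propagation enabled and backtracking disabled.
    Returns the assignment reached when a dead end occurs (partial),
    or a complete assignment if the dive happens to be feasible."""
    rng = random.Random(seed)
    assignment = {}
    doms = {v: set(d) for v, d in domains.items()}
    while True:
        unbound = [v for v in variables if v not in assignment]
        if not unbound:
            return assignment         # lucky dive: complete assignment
        var = rng.choice(unbound)
        assignment[var] = rng.choice(sorted(doms[var]))
        doms[var] = {assignment[var]}
        doms = propagate(assignment, doms)
        if doms is None:
            return assignment         # failure: stop, no backtracking

def alldiff(assignment, doms):
    """Toy all-different propagator: remove each assigned value from
    every other domain; fail if any domain becomes empty."""
    new = {v: set(d) for v, d in doms.items()}
    for v, val in assignment.items():
        for w in new:
            if w != v:
                new[w].discard(val)
    return None if any(not d for d in new.values()) else new
```

On a trivial two-variable all-different instance every dive completes; on hard instances, most dives stop at a partial assignment, and it is exactly those partial assignments that carry the statistical information exploited in the project.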
Here are the instances used for the experimentation and part of the results:
Further details are available upon request: the download size would be large and, at this point in time, too many explanations would be required to make everything available on a web page.
I defended my PhD thesis in April 2010; the title was "Hybrid Methods for Resource Allocation and Scheduling Problems in Deterministic and Stochastic Environments" and the full text is available for download.