Lagrangian Duality for Constrained Deep Learning

This repository provides the implementation of the Lagrangian Dual Framework (LDF) described in Lagrangian Duality for Constrained Deep Learning. It focuses on applications in fairness and transprecision computing.

About

The project develops a Lagrangian duality framework for learning applications that feature complex constraints. Such constraints arise in many science and engineering domains, where the task amounts to learning optimization problems which must be solved repeatedly and include hard physical and operational constraints. The framework also considers applications where the learning task must enforce constraints on the predictor itself, either because they are natural properties of the function to learn or because it is desirable from a societal standpoint to impose them.

The paper demonstrates experimentally that Lagrangian duality brings significant benefits for these applications. In energy domains, the combination of Lagrangian duality and deep learning can be used to obtain state-of-the-art results for predicting optimal power flows in energy systems and optimal compressor settings in gas networks. In transprecision computing, Lagrangian duality can complement deep learning to impose monotonicity constraints on the predictor without sacrificing accuracy. Finally, Lagrangian duality can be used to enforce fairness constraints on a predictor and obtain state-of-the-art results when minimizing disparate treatment.
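At a high level, LDF augments the training loss with the constraint violation, weighted by a Lagrangian multiplier that is itself updated by dual (subgradient) ascent. The snippet below is a minimal sketch of this loop, not the repository's exact code: the model, the data, and the `constraint_violation` function are hypothetical placeholders.

```python
import torch

# Hypothetical placeholder for an application-specific constraint:
# returns a non-negative measure of how much the predictions violate it.
def constraint_violation(y_pred):
    return torch.relu(y_pred - 1.0).mean()

model = torch.nn.Linear(10, 1)                       # stand-in predictor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, dual_lr = 0.0, 0.1                              # multiplier and dual step size

for epoch in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)   # stand-in batch
    y_pred = model(x)
    violation = constraint_violation(y_pred)
    # Primal step: minimize the loss augmented with the weighted violation.
    loss = torch.nn.functional.mse_loss(y_pred, y) + lam * violation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dual step: increase the multiplier in proportion to the violation.
    lam += dual_lr * violation.item()
```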

In the following, we present two example applications: deep learning with group fairness constraints and transprecision computing.

<a name="fair"></a>Fairness

One popular fairness definition is Demographic Parity (DP), which requires that the percentage of positive prediction outcomes be similar across groups (see Eq. 10 in the paper). Please check the "fairness" subfolder and the Demo_For_Bank_Data.ipynb example to see how our model captures DP fairness when learning the classifier.
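The snippet below is a minimal sketch, in the spirit of the repository but not its exact code, of the DP gap that serves as the constraint term: the absolute difference between the positive-prediction rates of two groups. The `dp_violation` function and its arguments are our own illustration.

```python
import torch

def dp_violation(logits, group):
    """DP gap: difference in positive-prediction rates between two groups.

    `group` is a 0/1 tensor marking the protected attribute; both groups
    are assumed non-empty in the batch.
    """
    probs = torch.sigmoid(logits).squeeze(-1)
    rate_0 = probs[group == 0].mean()   # positive rate in group 0
    rate_1 = probs[group == 1].mean()   # positive rate in group 1
    return (rate_0 - rate_1).abs()      # drive this gap toward 0
```

This gap can then be weighted by a Lagrangian multiplier and added to the classification loss, as in the generic training loop sketched above.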

Example

<code>python3 run.py</code>

<a name="trans"></a>Transprecision Computing

Transprecision computing is the idea of reducing energy consumption by reducing the precision (i.e., the number of bits) of the variables involved in a computation. It is especially important in low-power embedded platforms, which arise in many contexts such as smart wearables and autonomous vehicles. Increasing precision typically reduces the error of the target algorithm, but it also increases the energy consumption, which is a function of the maximal number of bits used.

The objective is to design a configuration d_l, i.e., a mapping from the variables involved in a computation to their precision. The sought configuration should balance precision and energy consumption, given a bound on the error introduced by the loss in precision with respect to the highest-precision configuration.

However, given a configuration, computing the corresponding error can be very time-consuming, so the task considered in this paper is to learn the mapping between configurations and errors. This learning task is non-trivial, since the precision-error space is non-smooth and non-linear.

The samples (d_l, y_l) in the dataset represent, respectively, a configuration d_l and its associated error y_l, obtained by running the configuration d_l for a given computation. The problem O(d_l) specifies the error obtained when using configuration d_l. Importantly, transprecision computing expects a monotonic behavior: higher-precision configurations should generate more accurate results (i.e., a smaller error). Therefore, the structure of the problem requires the learning task to use a dominance relation <= between instances of the dataset. More precisely, d_1 <= d_2 holds if<br> \forall i \in [N]: x1_i <= x2_i,<br> where N is the number of variables involved in the computation and x1_i, x2_i are the precision values of the i-th variable in d_1 and d_2, respectively. A sketch of this relation is given below.
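The snippet below illustrates, under our own naming (it is not the repository's exact code), the dominance relation and a monotonicity penalty that can be enforced through the Lagrangian loss: whenever d_1 <= d_2 componentwise, the error predicted for d_2 should not exceed the one predicted for d_1.

```python
import torch

def dominates(d1, d2):
    # d1 <= d2 holds iff every precision value in d1 is at most the
    # corresponding value in d2.
    return bool((d1 <= d2).all())

def monotonicity_violation(model, d1, d2):
    # Non-negative penalty incurred when the higher-precision configuration
    # d2 is predicted to have a larger error than the dominated d1.
    if not dominates(d1, d2):
        return torch.tensor(0.0)
    return torch.relu(model(d2) - model(d1)).sum()
```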

We provide an example of loading the synthetic dataset and running the different models in <code><i>transprecision_computing/Example_Running_Notebook.ipynb</i></code>.

Example

<code>python3 run_experiments.py</code>

Requirements

python==3.7
torch==1.3.1

Cite As

@inproceedings{Fioretto:ECML20,
    title     = {A Lagrangian Dual Framework for Deep Neural Networks with Constraints Optimization},
    author    = {Ferdinando Fioretto and Pascal {Van Hentenryck} and Terrence {W.K. Mak} and Cuong Tran and Federico Baldo and Michele Lombardi},
    booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases ({ECML-PKDD})},
    year      = {2020},
    series    = {Lecture Notes in Computer Science},
    volume    = {12461},
    pages     = {118--135},
    publisher = {Springer},
}

Contact

Ferdinando Fioretto ffiorett@syr.edu<br> Cuong Tran cutran@syr.edu