Home


Introduction

LLY-DML is part of the LILY Project and focuses on the optimization of parameter-based quantum circuits. It enhances the efficiency of quantum algorithms by fine-tuning the parameters of quantum gates. DML stands for Differentiable Machine Learning, emphasizing the use of gradient-based optimization techniques to improve the performance of quantum circuits.

LLY-DML is available on the LILY QML platform, making it accessible for researchers and developers.

For inquiries or further information, please contact: info@lilyqml.de.

Contributors

| Role | Name | Links |
| --- | --- | --- |
| Project Lead | Leon Kaiser | ORCID, GitHub |
| Inquiries and Management | Raul Nieli | Email |
| Supporting Contributors | Eileen Kühn | GitHub, KIT Profile |
| Supporting Contributors | Max Kühn | GitHub |

Table of Contents

  1. Quantum ML-Gate: L-Gate
  2. Objective of Training
  3. Optimization Methods
  4. Public Collaboration

Quantum ML-Gate: L-Gate

The L-Gate is a pivotal component in quantum machine learning circuits, designed to meet specific requirements for effective parameter optimization. It integrates input parameters with optimization parameters, allowing for a seamless flow of data and control. Here are the key properties and design aspects of the L-Gate:

Key Properties

  1. Parameter Optimization:
    The L-Gate must enable the optimization of parameters, allowing for fine-tuning that enhances the performance of quantum algorithms. This optimization is achieved by merging input parameters with optimization parameters to create a dynamic and responsive system.

  2. Full Bloch Sphere Utilization:
    The design of the L-Gate ensures that the entire Bloch sphere is accessible. This feature allows for a complete range of quantum state manipulations, providing flexibility and precision in quantum operations.

  3. Integration of Input and Optimization Parameters:
    The L-Gate represents a machine learning gate that combines input parameters with optimization parameters. This integration is crucial for adapting to various quantum learning tasks and achieving desired outcomes.

L-Gate Structure

The structure of the L-Gate is represented as follows:

TP₀ --- IP₀ --- H --- TP₁ --- IP₁ --- H --- TP₂ --- IP₂

The sequence of training parameters (TP) and input parameters (IP), interspersed with Hadamard gates (H), facilitates the desired operations, ensuring that the L-Gate functions effectively as a machine learning gate.
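
As a rough sketch of how this sequence could be assembled, the example below builds one L-Gate on a single qubit as a parameterized circuit. Qiskit and the choice of RZ rotations for the TP and IP blocks are assumptions made for illustration; the actual decomposition used in LLY-DML may differ.

```python
# Illustrative only: one L-Gate as a parameterized Qiskit circuit.
# The TP/IP blocks are modeled as RZ rotations; LLY-DML's real gate set may differ.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def append_l_gate(qc: QuantumCircuit, qubit: int, prefix: str) -> QuantumCircuit:
    """Append the sequence TP0 - IP0 - H - TP1 - IP1 - H - TP2 - IP2 to `qc`."""
    tp = [Parameter(f"{prefix}_tp{k}") for k in range(3)]  # training parameters
    ip = [Parameter(f"{prefix}_ip{k}") for k in range(3)]  # input parameters
    for k in range(3):
        qc.rz(tp[k], qubit)
        qc.rz(ip[k], qubit)
        if k < 2:
            qc.h(qubit)
    return qc

qc = QuantumCircuit(1)
append_l_gate(qc, qubit=0, prefix="q0_g0")
```

With adjacent RZ gates merged, this pattern reduces to an RZ-RX-RZ Euler decomposition (since H·RZ·H = RX), so arbitrary single-qubit states are reachable, which is consistent with the full-Bloch-sphere requirement above.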


Objective of Training

Training always uses a multi-qubit system in which each qubit carries multiple L-Gates. The gates are trained so that, for a given input combined with the tuning phases, they produce a well-defined state. The system thus learns to associate a specific input with a fixed state of the system.

Visual Representation of a Multi-Qubit System

     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐
q_0: | TP₀,₀ | --- | IP₀,₀ | --- |  H₀   | --- | TP₀,₁ | --- | IP₀,₁ | 
     └───────┘     └───────┘     └───────┘     └───────┘     └───────┘
     ┌───────┐     ┌───────┐     ┌───────┐ 
     |  H₀   | --- | TP₀,₂ | --- | IP₀,₂ |
     └───────┘     └───────┘     └───────┘

     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐
q_1: | TP₁,₀ | --- | IP₁,₀ | --- |  H₁   | --- | TP₁,₁ | --- | IP₁,₁ | 
     └───────┘     └───────┘     └───────┘     └───────┘     └───────┘
     ┌───────┐     ┌───────┐     ┌───────┐ 
     |  H₁   | --- | TP₁,₂ | --- | IP₁,₂ |
     └───────┘     └───────┘     └───────┘

     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐     ┌───────┐
q_2: | TP₂,₀ | --- | IP₂,₀ | --- |  H₂   | --- | TP₂,₁ | --- | IP₂,₁ | 
     └───────┘     └───────┘     └───────┘     └───────┘     └───────┘
     ┌───────┐     ┌───────┐     ┌───────┐ 
     |  H₂   | --- | TP₂,₂ | --- | IP₂,₂ |
     └───────┘     └───────┘     └───────┘

Explanation

Training Parameter Matrix (TP)

<div align="center"> <table style="border-collapse: collapse; border: none; text-align: center; font-size: 18px;"> <tr> <td style="border: none;">TP<sub>0,0</sub></td> <td style="border: none;">TP<sub>0,1</sub></td> <td style="border: none;">TP<sub>0,2</sub></td> </tr> <tr> <td style="border: none;">TP<sub>1,0</sub></td> <td style="border: none;">TP<sub>1,1</sub></td> <td style="border: none;">TP<sub>1,2</sub></td> </tr> <tr> <td style="border: none;">TP<sub>2,0</sub></td> <td style="border: none;">TP<sub>2,1</sub></td> <td style="border: none;">TP<sub>2,2</sub></td> </tr> </table> </div>

Input Parameter Matrix (IP)

<div align="center"> <table style="border-collapse: collapse; border: none; text-align: center; font-size: 18px;"> <tr> <td style="border: none;">IP<sub>0,0</sub></td> <td style="border: none;">IP<sub>0,1</sub></td> <td style="border: none;">IP<sub>0,2</sub></td> </tr> <tr> <td style="border: none;">IP<sub>1,0</sub></td> <td style="border: none;">IP<sub>1,1</sub></td> <td style="border: none;">IP<sub>1,2</sub></td> </tr> <tr> <td style="border: none;">IP<sub>2,0</sub></td> <td style="border: none;">IP<sub>2,1</sub></td> <td style="border: none;">IP<sub>2,2</sub></td> </tr> </table> </div>

Process Description

During the training of these gates, data in the form of matrices is applied to the IP gates. The TP gates are then optimized to achieve the desired state transformation. The input matrix feeds specific values into the IP gates, which correspond to the data that the system processes. The training matrix allows the TP gates to adjust their parameters to align with the desired outcomes, effectively learning how to map inputs to specific quantum states.
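
A minimal sketch of this training step is given below, assuming a hypothetical `run_circuit(tp, ip)` helper that binds both matrices to the multi-qubit circuit and returns the probability of the desired target state; the gradient is estimated with central finite differences purely for illustration, not as LLY-DML's actual method.

```python
# Illustrative training loop: IP is held fixed, TP is adjusted so that the
# circuit produces the target state with high probability.
import numpy as np

def train_tp(tp, ip, run_circuit, lr=0.1, steps=200, eps=1e-3):
    tp = np.array(tp, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(tp)
        # Central finite differences with respect to every entry TP[q, k].
        for idx in np.ndindex(*tp.shape):
            shift = np.zeros_like(tp)
            shift[idx] = eps
            grad[idx] = (run_circuit(tp + shift, ip)
                         - run_circuit(tp - shift, ip)) / (2 * eps)
        tp += lr * grad  # gradient ascent on the target-state probability
    return tp
```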

Optimizer Classes in the Quantum Circuit System

1. Optimizer Class

Methods:
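
The methods of this class are not documented here. As an illustration of the underlying update rule only, a minimal gradient-descent sketch follows; class and method names are placeholders, not necessarily the project's API.

```python
# Plain gradient descent: move the parameters against the gradient of the loss.
import numpy as np

class Optimizer:
    def __init__(self, learning_rate=0.01):
        self.lr = learning_rate

    def step(self, params, grads):
        return np.asarray(params) - self.lr * np.asarray(grads)
```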

2. OptimizerWithMomentum Class

Methods:
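
As an illustration only: a momentum variant accumulates a velocity term that smooths successive gradients. The sketch below uses placeholder names and is not the project's actual implementation.

```python
# Gradient descent with momentum: the velocity carries past gradients forward.
import numpy as np

class OptimizerWithMomentum:
    def __init__(self, learning_rate=0.01, momentum=0.9):
        self.lr, self.momentum = learning_rate, momentum
        self.velocity = None

    def step(self, params, grads):
        grads = np.asarray(grads)
        if self.velocity is None:
            self.velocity = np.zeros_like(grads)
        self.velocity = self.momentum * self.velocity - self.lr * grads
        return np.asarray(params) + self.velocity
```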

3. AdamOptimizer Class

Methods:
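
For illustration, an Adam-style update keeps running estimates of the first and second gradient moments so that each parameter gets its own effective step size. The sketch below uses placeholder names and standard default hyperparameters, not the project's actual implementation.

```python
# Adam update: bias-corrected moment estimates scale each parameter's step.
import numpy as np

class AdamOptimizer:
    def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = learning_rate, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, params, grads):
        grads = np.asarray(grads)
        if self.m is None:
            self.m, self.v = np.zeros_like(grads), np.zeros_like(grads)
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads**2
        m_hat = self.m / (1 - self.beta1**self.t)
        v_hat = self.v / (1 - self.beta2**self.t)
        return np.asarray(params) - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```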

4. GeneticOptimizer Class

Methods:

5. PSOOptimizer Class (Particle Swarm Optimization)

Methods:

6. BayesianOptimizer Class

Methods:

7. SimulatedAnnealingOptimizer Class

Methods:
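
For illustration, simulated annealing perturbs the parameters at random and accepts worse candidates with a probability that shrinks as a temperature value cools, which helps escape local minima. The function name and defaults below are assumptions, not the project's actual implementation.

```python
# Simulated annealing over a loss function of the circuit parameters.
import numpy as np

def simulated_annealing(loss, params, steps=1000, t0=1.0, cooling=0.995, scale=0.1):
    current = np.asarray(params, dtype=float)
    current_loss = loss(current)
    best, best_loss, temp = current.copy(), current_loss, t0
    for _ in range(steps):
        candidate = current + np.random.normal(scale=scale, size=current.shape)
        cand_loss = loss(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_loss < current_loss or np.random.rand() < np.exp((current_loss - cand_loss) / temp):
            current, current_loss = candidate, cand_loss
            if cand_loss < best_loss:
                best, best_loss = candidate.copy(), cand_loss
        temp *= cooling
    return best
```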

8. QuantumNaturalGradientOptimizer Class

Methods:
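
For illustration, a quantum-natural-gradient update preconditions the ordinary gradient with the (pseudo-)inverse of the quantum Fisher information matrix, so steps follow the geometry of the parameterized state rather than raw parameter space. Computing that matrix is circuit-specific and is taken as given in the sketch below, which is not the project's actual implementation.

```python
# One quantum-natural-gradient step, given a precomputed Fisher information matrix.
import numpy as np

def qng_step(params, grads, fisher, learning_rate=0.01):
    natural_grad = np.linalg.pinv(fisher) @ np.asarray(grads)
    return np.asarray(params) - learning_rate * natural_grad
```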

Public Collaboration

We welcome and encourage public collaboration on this GitHub project. If you're interested in contributing, there are several ways you can get involved:

1. Contact the Team

If you have questions or suggestions, feel free to reach out to our team at any time. We're eager to hear your thoughts and are open to discussions about potential improvements or new ideas for the project.

2. Explore the Repository

Dive into the repository to understand the current state of the project. You'll find detailed documentation and examples that will help you get up to speed quickly. We recommend checking out the following resources:

3. Pick a Task or Feature

Identify tasks or features that interest you and feel free to take them on. You can find a list of tasks or features in our issue tracker, where we regularly update the project's current needs and priorities. Here’s how you can proceed:

4. Join the Community

Engage with other contributors in the project's discussions. This is a great way to exchange ideas, ask questions, and collaborate on solutions. You can:

5. Contributing Guidelines

We have a set of guidelines to help you contribute effectively:

By contributing, you become a part of our community, helping us improve and expand the project. We value every contribution and look forward to collaborating with you!

If you're ready to start, head over to our contributing guide for detailed instructions. Together, we can make this project even better!