
<img src='./image/logo.png' width="400" style="max-width: 100%;" >

# audioFlux

<!-- shields.io -->


<!--[![PyPI Downloads](https://img.shields.io/pypi/dm/audioflux.svg?label=Pypi%20downloads)](https://pypi.org/project/audioflux/)-->


<!--[![codebeat badge](https://codebeat.co/badges/0e21a344-0928-4aee-8262-be9a41fa488b)](https://codebeat.co/projects/github-com-libaudioflux-audioflux-master) ![](https://img.shields.io/badge/pod-v0.1.1-blue.svg)-->

**audioflux** is a deep learning tool library for audio and music analysis and feature extraction. It supports dozens of time-frequency analysis transform methods and hundreds of corresponding time-domain and frequency-domain feature combinations. These features can be fed to deep learning networks for training and used to study a variety of audio tasks such as Classification, Separation, Music Information Retrieval (MIR), ASR, etc.

<!-- **`audioflux`** has the following features: - Systematic and multi-dimensional feature extraction and combination can be flexibly used for various task research and analysis. - High performance, core part C implementation, FFT hardware acceleration based on different platforms, convenient for large-scale data feature extraction. - It supports the mobile end and meets the real-time calculation of audio stream at the mobile end. -->
## New Features

## Table of Contents

## Overview

audioFlux is based on a data-stream design. Algorithm modules are structurally decoupled, so features across multiple dimensions can be extracted quickly and efficiently. The following diagram shows the main feature architecture.

<img src='./image/feature_all.png'> <!--<img src='./feature_all.pdf'>-->

You can combine features across multiple dimensions, train them with different deep learning networks, and study various audio tasks such as Classification, Separation, MIR, etc.

<img src='./image/flow.png'>

The main functionality of audioFlux consists of the transform, feature and mir modules.

### 1. Transform

For the time–frequency representation, the main transform algorithms are:

<!-- &emsp -->

The above transforms support all of the following frequency scale types:

The following transforms do not support multiple frequency scale types and are only used as standalone transforms:

For detailed transform functions, descriptions, and usage, see the documentation.
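
As a rough sketch of combining a transform with different frequency scale types (the `BFT` transform and `SpectralFilterBankScaleType` are used here; the audio path and parameter values are illustrative assumptions, not required settings):

```python
import numpy as np
import audioflux as af
from audioflux.type import SpectralFilterBankScaleType

# Read audio data and sample rate (replace the path with your own file)
audio_arr, sr = af.read('your_audio.wav')

# Run the same short-time transform (BFT) with different frequency scales;
# Mel and Bark are shown here purely as examples.
for scale_type in (SpectralFilterBankScaleType.MEL,
                   SpectralFilterBankScaleType.BARK):
    bft_obj = af.BFT(num=128, radix2_exp=12, samplate=sr,
                     scale_type=scale_type)
    spec_arr = np.abs(bft_obj.bft(audio_arr))  # magnitude spectrogram
    print(scale_type, spec_arr.shape)
```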

Synchrosqueezing (reassignment) is a technique for sharpening a time-frequency representation; it includes the following algorithms:

### 2. Feature

The feature module contains the following algorithms:

<!-- harmonic pitch class profiles(HPCP) -->
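
As a hedged sketch of pairing a frequency-domain feature (spectral centroid) with a transform (the `Spectral` class, `get_fre_band_arr`, `set_time_length`, and `centroid` names are assumed from the documentation):

```python
import numpy as np
import audioflux as af
from audioflux.type import SpectralFilterBankScaleType

audio_arr, sr = af.read('your_audio.wav')  # replace with your own file

# Mel spectrogram via the BFT transform
bft_obj = af.BFT(num=128, radix2_exp=12, samplate=sr,
                 scale_type=SpectralFilterBankScaleType.MEL)
spec_arr = np.abs(bft_obj.bft(audio_arr))

# Spectral centroid computed on the same spectrogram
# (class/method names assumed from the documentation)
spectral_obj = af.Spectral(num=bft_obj.num,
                           fre_band_arr=bft_obj.get_fre_band_arr())
spectral_obj.set_time_length(spec_arr.shape[-1])
centroid_arr = spectral_obj.centroid(spec_arr)
```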

### 3. MIR

The mir module contains the following algorithms:
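
Pitch estimation is one representative mir task; the following is an illustrative sketch only, assuming a YIN-style pitch class named `PitchYIN` whose `pitch` method returns per-frame frequency and confidence arrays:

```python
import audioflux as af

audio_arr, sr = af.read('your_audio.wav')  # replace with your own file

# YIN-based pitch estimation (class name and return values assumed)
pitch_obj = af.PitchYIN(samplate=sr)
fre_arr, value1_arr, value2_arr = pitch_obj.pitch(audio_arr)
print(fre_arr.shape)  # per-frame fundamental frequency estimates (Hz)
```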

## Installation


The library is cross-platform and currently supports Linux, macOS, Windows, iOS and Android systems.

### Python Package Install

To install the audioFlux package, Python >= 3.6 is required; use the released Python package.

Using PyPI:

```
$ pip install audioflux
```

Using Anaconda:

```
$ conda install -c tanky25 -c conda-forge audioflux
```
<!--Read installation instructions: https://audioflux.top/install-->
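
After installing with either method, a quick import check verifies that the package is available (this assumes the package exposes a `__version__` attribute, as most Python packages do):

```python
# Quick post-install check; __version__ is assumed to be exposed
import audioflux as af
print(af.__version__)
```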

### Other Build

## Quickstart

More example scripts are provided in the Documentation section.
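
As a minimal quickstart sketch (a mel spectrogram via the `BFT` transform; the audio path is a placeholder and the parameter values are only illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
import audioflux as af
from audioflux.type import SpectralFilterBankScaleType

# Read audio data and sample rate (replace the path with your own file)
audio_arr, sr = af.read('your_audio.wav')

# 128-band mel spectrogram using the BFT transform
bft_obj = af.BFT(num=128, radix2_exp=12, samplate=sr,
                 scale_type=SpectralFilterBankScaleType.MEL)
spec_arr = np.abs(bft_obj.bft(audio_arr))

# Display in dB using plain matplotlib
plt.imshow(20 * np.log10(spec_arr + 1e-8), origin='lower', aspect='auto')
plt.title('Mel Spectrogram')
plt.xlabel('Frame')
plt.ylabel('Mel bin')
plt.show()
```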

## Benchmark

Server hardware:

- CPU: AMD Ryzen Threadripper 3970X 32-Core Processor
<img src='./docs/image/benchmark/linux_amd_1.png' width="800" >

More detailed performance benchmarks are provided in the Benchmark module.

## Documentation

Documentation of the package can be found online:

https://audioflux.top

## Contributing

We are more than happy to collaborate on and receive your contributions to audioFlux. If you want to contribute, please fork the latest git repository and create a feature branch. Submitted pull requests should pass all continuous integration tests.

You are also more than welcome to suggest improvements: ask for help, report a bug, request a feature, ask a general question, or propose new algorithms. <a href="https://github.com/libAudioFlux/audioFlux/issues/new">Open an issue</a>

## Citing

If you want to cite audioFlux in a scholarly work, please use the following:

## License

The audioFlux project is available under the MIT License.