Quick Links: Installation | Documentation
# Turi Create
Turi Create simplifies the development of custom machine learning models. You don't have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app.
- Easy-to-use: Focus on tasks instead of algorithms
- Visual: Built-in, streaming visualizations to explore your data
- Flexible: Supports text, images, audio, video and sensor data
- Fast and Scalable: Work with large datasets on a single machine
- Ready To Deploy: Export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps
With Turi Create, you can accomplish many common ML tasks:
| ML Task | Description |
| --- | --- |
| Recommender | Personalize choices for users |
| Image Classification | Label images |
| Drawing Classification | Recognize Pencil/Touch Drawings and Gestures |
| Sound Classification | Classify sounds |
| Object Detection | Recognize objects within images |
| One Shot Object Detection | Recognize 2D objects within images using a single example |
| Style Transfer | Stylize images |
| Activity Classification | Detect an activity using sensors |
| Image Similarity | Find similar images |
| Classifiers | Predict a label |
| Regression | Predict numeric values |
| Clustering | Group similar datapoints together |
| Text Classifier | Analyze sentiment of messages |
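Every task above is exposed through a small, task-focused `create` API. As a rough sketch of that pattern for a recommender (the dataset name and the `user_id`/`item_id` column names here are hypothetical, not taken from this guide):

```python
import turicreate as tc

# Hypothetical interaction data with 'user_id' and 'item_id' columns
actions = tc.SFrame('user_item_interactions.sframe')

# Turi Create chooses a suitable recommender model from the data
model = tc.recommender.create(actions, user_id='user_id', item_id='item_id')

# Top recommended items for each user seen during training
recommendations = model.recommend()
```

The image classifier example below follows the same create, predict, and export flow.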
## Example: Image classifier with a few lines of code
If you want your app to recognize specific objects in images, you can build your own model with just a few lines of code:
```python
import turicreate as tc

# Load data
data = tc.SFrame('photoLabel.sframe')

# Create a model
model = tc.image_classifier.create(data, target='photoLabel')

# Make predictions
predictions = model.predict(data)

# Export to Core ML
model.export_coreml('MyClassifier.mlmodel')
```
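Before exporting, it is common to hold out part of the data to gauge accuracy. A minimal sketch reusing the same `photoLabel.sframe` data (the 80/20 split fraction is just an illustrative choice):

```python
import turicreate as tc

# Load the same data used above
data = tc.SFrame('photoLabel.sframe')

# Split into training and test sets
train_data, test_data = data.random_split(0.8)

# Train on the training portion only
model = tc.image_classifier.create(train_data, target='photoLabel')

# Check held-out accuracy before exporting to Core ML
metrics = model.evaluate(test_data)
print(metrics['accuracy'])
```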
It's easy to use the resulting model in an iOS application:
<p align="center"><img src="https://docs-assets.developer.apple.com/published/a2c37bce1f/689f61a6-1087-4112-99d9-bbfb326e3138.png" alt="Turi Create" width="600"></p>Supported Platforms
Turi Create supports:
- macOS 10.12+
- Linux (with glibc 2.10+)
- Windows 10 (via WSL)
## System Requirements
Turi Create requires:
- Python 2.7, 3.5, 3.6, 3.7, 3.8
- x86_64 architecture
- At least 4 GB of RAM
## Installation
For detailed instructions for different varieties of Linux, see LINUX_INSTALL.md. For common installation issues, see INSTALL_ISSUES.md.
We recommend using virtualenv to use, install, or build Turi Create.
```shell
pip install virtualenv
```
The method for installing Turi Create follows the standard Python package installation steps. To create and activate a Python virtual environment called `venv`, follow these steps:
```shell
# Create a Python virtual environment
cd ~
virtualenv venv

# Activate your virtual environment
source ~/venv/bin/activate
```
Alternatively, if you are using Anaconda, you may use its virtual environment:
```shell
conda create -n virtual_environment_name anaconda
conda activate virtual_environment_name
```
To install Turi Create within your virtual environment:
```shell
(venv) pip install -U turicreate
```
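A quick way to confirm the installation (a minimal sketch, not part of the official instructions) is to import the package and build a small SFrame:

```python
import turicreate as tc

# Print the installed version
print(tc.__version__)

# Exercise the core SFrame data structure
sf = tc.SFrame({'id': [1, 2, 3], 'label': ['a', 'b', 'c']})
print(sf)
```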
## Documentation
The package User Guide and API Docs contain more details on how to use Turi Create.
## GPU Support
Turi Create does not require a GPU, but certain models can be accelerated 9-13x by utilizing a GPU.
| Linux | macOS 10.13+ | macOS 10.14+ discrete GPUs, macOS 10.15+ integrated GPUs |
| --- | --- | --- |
| Activity Classification | Image Classification | Activity Classification |
| Drawing Classification | Image Similarity | Object Detection |
| Image Classification | Sound Classification | One Shot Object Detection |
| Image Similarity | Style Transfer | |
| Object Detection | | |
| One Shot Object Detection | | |
| Sound Classification | | |
| Style Transfer | | |
macOS GPU support is automatic. For Linux GPU support, see LinuxGPU.md.
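Within a Python session, the number of GPUs used by the models above can be tuned through `turicreate.config.set_num_gpus`; the sketch below is illustrative rather than a required setup step:

```python
import turicreate as tc

# Use all available GPUs for supported models
tc.config.set_num_gpus(-1)

# Or force CPU-only execution
tc.config.set_num_gpus(0)
```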
## Building From Source
If you want to build Turi Create from source, see BUILD.md.
## Contributing
Prior to contributing, please review CONTRIBUTING.md and do not provide any contributions unless you agree with the terms and conditions set forth in CONTRIBUTING.md.
We want the Turi Create community to be as welcoming and inclusive as possible, and have adopted a Code of Conduct that we expect all community members, including contributors, to read and observe.