Jeeliz FaceFilter

JavaScript/WebGL lightweight and robust face tracking library designed for augmented reality face filters

This JavaScript library detects and tracks the face in real time from the camera video feed captured with WebRTC. It then becomes possible to overlay 3D content for augmented reality applications. We provide various demonstrations using the main WebGL 3D engines. The release versions of these 3D engines are included in this repository so that each demo works with a known engine version (they are in /libs/<name of the engine>/).

This library is lightweight and does not include any 3D engine or third-party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether a face is detected or not, and if so, its position, its scale and its rotation Euler angles. But thanks to the featured helpers, examples and boilerplates, you can quickly deal with a higher-level context (head motion tracking, face filters, face replacement...). We continuously add new demonstrations, so stay tuned!


Table of contents

<p align="center"> <img src='https://user-images.githubusercontent.com/11960872/37533324-cfa3e516-2941-11e8-99a9-96a1e20c80a3.jpg' /> </p>

Features

Here are the main features of the library:

Architecture

Demonstrations and apps

Included in this repository

These demonstrations are included in this repository, so they are released under the FaceFilter license. You will probably find among them the perfect starting point to build your own face-based augmented reality application:


Some videos of the demos are available on YouTube. You can also subscribe to the Jeeliz YouTube channel or to the @WebARRocks Twitter account to be kept informed of our cutting-edge developments.

Third party

These amazing applications rely on this library for face detection and tracking:


If you have developed an application or a fun demo using this library, we would love to see it and add a link here! Just contact us on Twitter @WebARRocks or LinkedIn.

Specifications

Here we describe how to use this library. Although we plan to add new features, it will stay backward compatible.

Get started

On your HTML page, you first need to include the main script between the <head> and </head> tags:

 <script src="dist/jeelizFaceFilter.js"></script>

Then you should include a <canvas> HTML element in the DOM, between the <body> and </body> tags. The width and height properties of the <canvas> element should be set: they define the resolution of the canvas, and the final rendering will be computed at this resolution. Be careful not to enlarge the canvas too much through its CSS properties without increasing its resolution, otherwise it may look blurry or pixelated. We advise fixing the resolution to the actual canvas size. Do not forget to call JEELIZFACEFILTER.resize() if you resize the canvas after the initialization step. We strongly encourage you to use our helper, /helpers/JeelizResizer.js, to set the width and height of the canvas (see the Optimization/Canvas and video resolutions section).

<canvas width="600" height="600" id='jeeFaceFilterCanvas'></canvas>
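
For example, if you resize the canvas dynamically after initialization, a minimal sketch keeping the resolution in sync (reusing the id above) could be:

window.addEventListener('resize', function(){
  const canvas = document.getElementById('jeeFaceFilterCanvas');
  canvas.width = canvas.clientWidth;   // match the resolution to the CSS size
  canvas.height = canvas.clientHeight;
  JEELIZFACEFILTER.resize();           // notify the library of the new size
});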

This canvas will be used by WebGL both for the computation and the 3D rendering. When your page is loaded you should launch this function:

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
  callbackReady: function(errCode, spec){
    if (errCode){
      console.log('AN ERROR HAPPENED. ERROR CODE =', errCode);
      return;
    }
    // [init scene with spec...]
    console.log('INFO: JEELIZFACEFILTER IS READY');
  }, //end callbackReady()

  // called at each render iteration (drawing loop)
  callbackTrack: function(detectState){
    // Render your scene here
    // [... do something with detectState]
  } //end callbackTrack()
});

Optional init arguments

{
  'videoElement' // not set by default. <video> element to use
   // WARN: if you specify this parameter:
   //       1. all other video settings will be ignored
   //       2. you fully handle the video on your side
   //       3. if the video comes from a webcam device, make sure that
   //          the initialization happens after the `loadeddata` event of the `videoElement`,
   //          otherwise the face detector will yield very low `detectState.detected` values
   //          (to be safer, also await the first `timeupdate` event)

  'deviceId'            // not set by default
  'facingMode': 'user', // to use the rear camera, set to 'environment'

  'idealWidth': 800,  // ideal video width in pixels
  'idealHeight': 600, // ideal video height in pixels
  'minWidth': 480,    // min video width in pixels
  'maxWidth': 1920,   // max video width in pixels
  'minHeight': 480,   // min video height in pixels
  'maxHeight': 1920,  // max video height in pixels
  'rotate': 0,        // rotation in degrees. Possible values: 0, 90, -90, 180
  'flipX': false      // whether to flip the video horizontally. Default: false
},

If the user has a mobile device in portrait display mode, the width and height of these parameters are automatically swapped for the first camera request. If that request fails, we retry with the width and height swapped back.
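
For example, here is a hedged sketch selecting the rear camera (assuming, as in the JeelizResizer example later in this document, that these options are grouped under the videoSettings init parameter):

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  videoSettings: {
    'facingMode': 'environment', // rear camera
    'idealWidth': 1280,          // illustrative resolution hints
    'idealHeight': 720
  },
  callbackReady: function(errCode, spec){ /* ... */ },
  callbackTrack: function(detectState){ /* ... */ }
});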

Error codes

The initialization callback function (callbackReady in the code snippet) will be called with an error code (errCode). It can have these values:

The returned objects

We detail here the arguments of the callback functions, like callbackReady or callbackTrack. For memory optimization purposes, the references of these objects do not change, so you should copy their property values if you want to keep them unchanged outside the callback function scopes.
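
For example, to keep a value outside the callback scope, copy it instead of storing the detectState reference (a minimal sketch, assuming the detected property described below):

let lastDetectionScore = 0; // copied value, safe to read outside the callback

// inside JEELIZFACEFILTER.init({...}):
callbackTrack: function(detectState){
  // copy the property value; do NOT store detectState itself,
  // since the same object is reused at each iteration:
  lastDetectionScore = detectState.detected;
}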

The initialization returned object

The initialization callback function (callbackReady in the code snippet) is called with a second argument, spec, if there is no error. spec is a dictionary with these properties:

The detection state

At each render iteration a callback function is executed (callbackTrack in the code snippet). It has one argument (detectState), which is a dictionary with these properties:

In multiface detection mode, detectState is an array. Its size is equal to the maximum number of detected faces, and each element of the array has the format described just above.

Miscellaneous methods

After the initialization (i.e. after callbackReady has been called), these methods are available:

Optimization

1 or 2 Canvas?

You can either:

  1. Use 1 <canvas> with 1 WebGL context, shared by facefilter and THREE.js (or another 3D engine),
  2. Use 2 separate <canvas> elements, aligned using CSS, 1 canvas for AR, and the second one to display the video and to run this library.

Option 1 is often more efficient, but the newest versions of THREE.js are not well suited to sharing their WebGL context, and some weird bugs can occur. So we strongly advise using 2 separate canvases.
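
For option 2, a minimal layout sketch (element ids are illustrative) overlays a transparent 3D canvas on top of the FaceFilter canvas using CSS absolute positioning:

<div style="position: relative; width: 600px; height: 600px">
  <!-- FaceFilter canvas: displays the video and runs the detection: -->
  <canvas id='jeeFaceFilterCanvas' width="600" height="600" style="position: absolute; top: 0; left: 0"></canvas>
  <!-- 3D engine canvas, overlaid on top with a transparent background: -->
  <canvas id='threeCanvas' width="600" height="600" style="position: absolute; top: 0; left: 0; z-index: 1"></canvas>
</div>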

Canvas and video resolutions

We strongly recommend using the JeelizResizer helper to size the canvas to the display size, in order not to compute more pixels than required. This helper also computes the best camera resolution, which is the closest one to the actual canvas size. If the camera resolution is too high compared to the canvas resolution, your application will be unnecessarily slowed down, because refreshing the WebGL texture for each video frame is quite costly. And if the video resolution is too low compared to the canvas resolution, the image will be blurry. You can take a look at the THREE.js boilerplate to see how it is used. To use the helper, you first need to include it in the HTML code:

<script src="https://appstatic.jeeliz.com/faceFilter/JeelizResizer.js"></script>

Then in your main script, before initializing Jeeliz FaceFilter, you should call it to size the canvas to the best resolution and to find the optimal video resolution:

JeelizResizer.size_canvas({
  canvasId: 'jeeFaceFilterCanvas',
  callback: function(isError, bestVideoSettings){
    JEELIZFACEFILTER.init({
      videoSettings: bestVideoSettings,
      // ...
      // ...
    });
  }
});

Take a look at the source code of this helper (in helpers/JeelizResizer.js) to get more information.

Misc

A few tips:

Multiple faces

It is possible to detect and track several faces at the same time. To enable this feature, you only have to specify the optional init parameter maxFacesDetected. Its maximum value is 8. Keep in mind that if you track, for example, 8 faces at the same time, the detection will be slower because there is 8 times less computing power available per tracked face. If you have set this value to 8 but only 1 face is detected, it should not slow things down much compared to single face tracking.

If multiple face tracking is enabled, the callbackTrack function is called with an array of detection states (instead of a single detection state). The detection state format stays the same, as in the sketch below.
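
Here is a minimal sketch combining the maxFacesDetected init option with the array handling (the 0.6 detection threshold is illustrative):

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  maxFacesDetected: 4, // track up to 4 faces simultaneously
  callbackTrack: function(detectStates){
    // in multiface mode, detectStates is an array:
    detectStates.forEach(function(detectState, faceIndex){
      if (detectState.detected > 0.6){
        // [... update the content attached to face number faceIndex]
      }
    });
  }
});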

You can use our THREE.js multiple faces detection helper, helpers/JeelizThreeHelper.js, to get started and test this example. The main script has only 60 lines of code!

Multiple videos

To create a new JEELIZFACEFILTER instance, you need to call:

const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();

Be aware that:

Check out this demo to see how it works: source code, live demo.
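
As a hedged sketch (canvas ids are illustrative, and each instance needs its own video source), running two instances side by side looks like:

const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();

// each instance is bound to its own canvas:
JEELIZFACEFILTER.init({ canvasId: 'faceFilterCanvas0', NNCPath: '../../../neuralNets/' /* ... */ });
JEELIZFACEFILTER2.init({ canvasId: 'faceFilterCanvas1', NNCPath: '../../../neuralNets/' /* ... */ });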

Changing the 3D engine

It is possible to use a 3D engine other than BABYLON.JS or THREE.JS. If you have accomplished this work, we would be interested in adding your demonstration to this repository (or linking to your code). Just open a pull request.

The 3D engine can either share the WebGL context and the canvas with FaceFilter, or use a second canvas overlaid on the FaceFilter canvas (the FaceFilter canvas is then only used to render the video). In the first case, the WebGL context is created by Jeeliz FaceFilter. We strongly encourage the second approach, even if the first one may be slightly more optimized.
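
For the second approach, here is a minimal THREE.js sketch (the canvas id is illustrative): the 3D engine renders to its own transparent canvas, overlaid on the FaceFilter one.

// THREE.js renders to its own canvas, overlaid on the FaceFilter canvas.
// alpha: true keeps the background transparent so the video stays visible below.
const renderer = new THREE.WebGLRenderer({
  canvas: document.getElementById('threeCanvas'), // illustrative id
  alpha: true
});
renderer.setSize(600, 600); // match the FaceFilter canvas resolution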


Changing the neural network

Since July 2018 it is possible to change the neural network. When calling JEELIZFACEFILTER.init({...}), set the NNCPath value to the path of a specific neural network file:

  JEELIZFACEFILTER.init({
    NNCPath: '../../neuralNets/NN_LIGHT_1.json'
    // ...
  })

It is also possible to provide the content of the neural network model JSON file directly, using the NNC property instead of NNCPath.
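
For example, a minimal sketch fetching the model manually and passing its parsed content (the path is the one used in this repository):

fetch('../../../neuralNets/NN_DEFAULT.json')
  .then(function(response){ return response.json(); })
  .then(function(neuralNetworkModel){
    JEELIZFACEFILTER.init({
      NNC: neuralNetworkModel, // model content, instead of NNCPath
      // ... other init parameters
    });
  });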

We provide several neural network models:

Using module

/dist/jeelizFaceFilter.module.js is exactly the same as /dist/jeelizFaceFilter.js except that it works as a JavaScript module, so you can import it directly using:

import 'dist/jeelizFaceFilter.module.js'

or using require (see issue #72):

const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER;

faceFilter.init({
  // you can also provide the canvas directly
  // using the canvas property instead of canvasId:
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
  callbackReady: function(errCode, spec){
    if (errCode){
      console.log('AN ERROR HAPPENED. ERROR CODE =', errCode);
      return;
    }
    // [init scene with spec...]
    console.log('INFO: JEELIZFACEFILTER IS READY');
  }, //end callbackReady()

  // called at each render iteration (drawing loop)
  callbackTrack: function(detectState){
      // Render your scene here
      // [... do something with detectState]
  } //end callbackTrack()
});

Integration

With a bundler

If you use this library with a bundler (typically Webpack or Parcel), you should first use the module version.

Note that, with the standard library, we load the neural network model (specified by the NNCPath initialization parameter) using AJAX, for the following reasons:

With a bundler, it is a bit more complicated. It is easier to load the neural network model using a classical import or require call and to provide it using the NNC init parameter:

const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER
const neuralNetworkModel = require('./neuralNets/NN_DEFAULT.json')

faceFilter.init({
  NNC:  neuralNetworkModel, // instead of NNCPath
  // ... other init parameters
});

You can check out the amazing work of @jackbilestech, jackbilestech/jeelizFaceFilter, if you are interested in using this library in a NPM / ES6 / Webpack environment.

With JavaScript frontend frameworks

With REACT and THREE Fiber

Since October 2020, there is a React/THREE Fiber/Webpack boilerplate in /reactThreeFiberDemo path.

See also

We don't officially cover integration with mainstream JavaScript frontend frameworks (React, Vue, Angular) here. Feel free to submit a pull request to add a boilerplate or a demo for a specific framework. Here are some submitted issues dealing with React integration:

You can also take a look at these Github code repositories:

Native

It is possible to run a JavaScript application using this library inside a WebView for native app integration. On iOS, camera access is disabled inside the WKWebView component before iOS 14.3. If you want your application to run on devices with iOS <= 14.2, you have to implement a hack to stream the camera video into the WebView using WebSockets.

This hack has not been implemented in this repository, but in a similar Jeeliz library, Jeeliz Weboji. Here are the links:

But it is still a dirty hack introducing a bottleneck. It runs pretty well on a high-end device (tested on an iPhone XR), but it is better to stick to a full web environment.

There is also a GitHub issue detailing how to embed the library into a WebView component for React Native. It is for Android only:

Unity

Since September 2023, Marks has developed a Unity plugin to create Face filters using Unity and export them for the web. You can buy it on the Unity asset store here: Augmented Reality WebGL - Face Tracking Virtual Try On

Hosting

This library requires the user's camera video feed through the MediaStream API, so your application should be hosted by an HTTPS server (even with a self-signed certificate). It won't work at all over insecure HTTP, even locally with some web browsers.

The development server

For development purposes we provide a simple and minimalist HTTPS server, so that you can check out the demos or develop your very own filters. To launch it, execute in the bash console:

With Python 2:

  python2 httpsServer.py

It requires Python 2.X. Then open https://localhost:4443 in your web browser.

With Node.js:

  npm install
  npm run dev

Then go to https://127.0.0.1:8000/demos/threejs/cube/index.html. Since the development certificate is self-signed, the browser will warn that the page is not secure: click Advanced, then Proceed.

Hosting optimization

You can use our hosted and up-to-date version of the library, available here:

https://appstatic.jeeliz.com/faceFilter/jeelizFaceFilter.js

It uses the neural network NN_DEFAULT.json hosted in the same path. The helpers used in these demos (all scripts in /helpers/) are also hosted on https://appstatic.jeeliz.com/faceFilter/.

It is served through a content delivery network (CDN) using gzip compression. If you host the scripts yourself, be careful to enable gzip HTTP/HTTPS compression for JSON and JS files. Indeed, the neural network JSON file, neuralNets/NN_DEFAULT.json, is quite heavy but compresses very well with gzip. You can check the gzip compression of your server here.
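
As an illustration, here is a minimal Node.js sketch of a static server with gzip enabled, assuming the third-party express and compression npm packages (not part of this repository):

// sketch only: npm install express compression
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());       // gzip-compress JS and JSON responses, including the neural network model
app.use(express.static('.')); // serve the repository root
// Remember to serve over HTTPS in production: camera access requires a secure context.
app.listen(8000);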

The neural network file, neuralNets/NN_DEFAULT.json, is loaded using an AJAX XMLHttpRequest after calling JEELIZFACEFILTER.init(). This loading happens after the user has accepted to share their camera, so we do not load this quite heavy file if the user refuses to share it or if there is no camera available. The loading can be faster if you systematically preload neuralNets/NN_DEFAULT.json using a service worker or a simple raw XMLHttpRequest just after the HTML page has loaded. The file will then already be in the browser cache when Jeeliz FaceFilter requests it.
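
A preloading sketch using a raw XMLHttpRequest (whether the later request hits the cache depends on your server's cache headers):

// Warm the browser cache as soon as the page has loaded:
window.addEventListener('load', function(){
  const xhr = new XMLHttpRequest();
  xhr.open('GET', './neuralNets/NN_DEFAULT.json');
  xhr.send();
  // No response handler needed: the goal is only to get the file into
  // the browser cache before JEELIZFACEFILTER.init() requests it.
});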

About the tech

Under the hood

This library relies on Jeeliz WebGL Deep Learning technology to detect and track the user's face using a neural network. The accuracy is adaptive: the better the hardware, the more detections are processed per second. Everything is done client-side.

Compatibility

If a compatibility error occurs, please post an issue on this repository. If the problem concerns camera access, please first retry after closing all applications which could use the camera (Skype, Messenger, other browser tabs and windows...). Please include:

Articles and tutorials

Have you written a tutorial using this library? Submit a pull request or send us the link; we would be glad to add it.

In English


In French

In Japanese


License

Apache 2.0. This application is free for both commercial and non-commercial use.


References