WARNING: This repository is no longer maintained :warning:
This repository will not be updated and will be kept available in read-only mode.
Create voice commands for VR experiences with Watson services
Read this in other languages: 한국어 (Korean), 中文 (Chinese).
In this Code Pattern we will create a Virtual Reality game based on the Watson Speech-to-Text and Watson Assistant services.
In Virtual Reality, where you truly “inhabit” the space, speech can feel like a more natural interface than other methods. Providing speech controls allows developers to create more immersive experiences. The HTC Vive is the 3rd most popular head-mounted VR device (not including Google Cardboard) and an ideal candidate for speech interaction, having sold roughly 400 thousand units in 2016.
When the reader has completed this Code Pattern, they will understand how to:
- Add IBM Watson Speech-to-Text and Assistant to a Virtual Reality environment built in Unity.
Flow
- User interacts in virtual reality and gives voice commands such as "Create a large black box".
- The HTC Vive Headset microphone picks up the voice command and the running application sends it to Watson Speech-to-Text.
- Watson Speech-to-Text converts the audio to text and returns it to the running application that powers the HTC Vive.
- The application sends the text to Watson Assistant. Watson Assistant returns the recognized intent "Create" and the entities "large", "black", and "box". The virtual reality application then displays the large black box (which falls from the sky).
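For reference, the two service calls in this flow correspond to the following Watson REST endpoints. This is a minimal curl sketch rather than the app's actual code path (the game calls the services through the Watson Unity SDK); the watsonplatform.net URLs assume the US-South region, and {apikey}, {workspace_id}, the audio file name, and the version date are placeholders to replace with your own values.

```sh
# 1) Speech-to-Text: send the captured audio, get back a transcript
curl -X POST -u "apikey:{apikey}" \
  --header "Content-Type: audio/wav" \
  --data-binary @voice-command.wav \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"

# 2) Assistant: send the transcript, get back the intent ("Create")
#    and entities ("large", "black", "box") in the JSON response
curl -X POST -u "apikey:{apikey}" \
  --header "Content-Type: application/json" \
  --data '{"input": {"text": "Create a large black box"}}' \
  "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/{workspace_id}/message?version=2018-09-20"
```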
Included components
- IBM Watson Assistant: Create a chatbot with a program that conducts a conversation via auditory or textual methods.
- IBM Watson Speech-to-Text: Converts audio and voice into written text.
Featured technologies
- Unity: A cross-platform game engine used to develop video games for PC, Mac, consoles, mobile devices and websites.
Watch the Video
Steps
1. Before You Begin
2. Create IBM Cloud services
On your local machine:
git clone https://github.com/IBM/vr-speech-sandbox-vive.git
cd vr-speech-sandbox-vive
In IBM Cloud:
- Create a Speech-To-Text service instance.
- Create an Assistant service instance.
Import the Assistant workspace.json:
- Find the Assistant service in your IBM Cloud Dashboard.
- Click on the service and then click on Launch tool.
- Go to the Skills tab.
- Click Create new.
- Click the Import skill tab.
- Click Choose JSON file, go to your cloned repo directory, and Open the workspace.json file at data/workspace.json.
- Select Everything and click Import.
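If you prefer the command line, the import can also be sketched with the Assistant v1 create-workspace REST call. This is only an illustrative alternative to the UI steps above: it assumes an IAM apikey, the US-South endpoint, an example version date, and that the exported data/workspace.json is accepted as-is by the create-workspace API.

```sh
# Alternative sketch: create the workspace directly from the exported JSON.
# The JSON response includes the workspace_id needed later for configuration.
curl -X POST -u "apikey:{apikey}" \
  --header "Content-Type: application/json" \
  --data @data/workspace.json \
  "https://gateway.watsonplatform.net/assistant/api/v1/workspaces?version=2018-09-20"
```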
To find the WORKSPACE_ID for Watson Assistant:
- Go back to the Skills tab.
- Find the card for the workspace you would like to use. Look for IBM Speech Sandbox Vive.
- Click on the three dots in the upper right-hand corner of the card and select View API Details.
- Copy the Workspace ID GUID and save it for configuration later.
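You can also confirm the ID from the command line by listing the workspaces on your Assistant instance. A hedged sketch, assuming an IAM apikey, the US-South endpoint, and an example version date:

```sh
# List workspaces; find "IBM Speech Sandbox Vive" in the response
# and copy its workspace_id value.
curl -u "apikey:{apikey}" \
  "https://gateway.watsonplatform.net/assistant/api/v1/workspaces?version=2018-09-20"
```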
3. Building and Running
Note: This has been compiled and tested using Unity 2018.2.14f1 and the Watson Unity SDK from the Unity Asset Store as of July 24, 2018, and tested with the develop branch of the GitHub unity-sdk as of commit d1ce5607ebb77 (Nov 1, 2018).
Note: If you are in any IBM Cloud region other than US-South you must use Unity 2018.2 or higher. This is because Unity 2018.2 or higher is needed for TLS 1.2, which is the only TLS version available in all regions other than US-South.
- Download the Unity SDK for Watson or perform the following:
git clone https://github.com/watson-developer-cloud/unity-sdk.git
Make sure you are on the develop branch.
- Open Unity and, in the project launcher, select the Open button.
- Navigate to where you cloned this repository and open the Creation Sandbox directory.
- If prompted to upgrade the project to a newer Unity version, do so.
- Follow these instructions to add the Watson Unity SDK downloaded in step 1 to the project.
- Follow these instructions to create your Speech To Text and Watson Assistant services and find their credentials (using IBM Cloud). You can find your workspace ID by selecting the expansion menu on your Assistant workspace and selecting View details.
- In the Unity Hierarchy view, click on _Scenes -> MainGame -> MainMenu and then the SaveCredentials object.
- In the Inspector you will see variables for Speech To Text and Watson Assistant, and either CF Authentication for the Cloud Foundry username and password or IAM Authentication if you have the IAM apikey. Since you have only one version of these credentials, fill out only one of the two for each service.
- Fill out the Speech To Text Service Url, the Assistant Service Url, the Assistant Workspace Id, and the Assistant Version Date. There are tool tips which will show help and any defaults. (A credential sanity-check sketch follows after this list.)
- Install Blender
- In the Unity editor project tab, select Assets -> Scenes -> MainGame -> MainMenu and double click to load the scene.
- Press Play
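Before pressing Play, it can help to verify that the credentials and service URLs you entered into SaveCredentials actually work. A minimal sketch, assuming IAM apikeys and the US-South endpoints; for Cloud Foundry credentials use -u "{username}:{password}" instead, and adjust the URLs to match the Service Url values you entered.

```sh
# Speech to Text: a valid apikey and URL should return the list of models
curl -u "apikey:{speech_to_text_apikey}" \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/models"

# Watson Assistant: a valid apikey, URL, and workspace ID should return
# the workspace details
curl -u "apikey:{assistant_apikey}" \
  "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/{workspace_id}?version=2018-09-20"
```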
Links
- Youtube Video
- Demo of Cardboard version on Youtube
- Viveport
- Watson Unity SDK
- Blog
- Another blog!
- Article about VR Speech Sandbox
- News article about Star Trek Bridge Crew
Learn more
- Artificial Intelligence Code Patterns: Enjoyed this Code Pattern? Check out our other AI Code Patterns.
- AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
- With Watson: Want to take your Watson app to the next level? Looking to utilize Watson Brand assets? Join the With Watson program to leverage exclusive brand, marketing, and tech resources to amplify and accelerate your Watson embedded commercial solution.
License
This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.