Michael Ludden (Program Director and Senior Production Manager of IBM Watson) said that IBM Watson is preparing for the next generation of AR and VR with its VR Speech Sandbox feature. He added that IBM decided to work on this after seeing the sudden increase in demand for AR and VR applications. He also said that IBM Watson's platform is ready for the VR content that will soon be available.
The company wants to harness Watson's speech interaction system and use it alongside the VR Speech Sandbox. The VR Speech Sandbox lets a user in virtual reality speak to their headset: users can create various objects, interact with them, and discover what objects can be created with different commands. Currently, users can create 100 objects (a number that will grow over time). The VR Speech Sandbox demonstrates the system's ability to handle modifications and variations in what users say and to act on them.
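To make the idea of handling phrasing variations concrete, here is a minimal, purely hypothetical sketch (not IBM's actual implementation, which uses Watson's speech and intent services): several spoken phrasings are mapped to a single "create object" intent, the kind of flexibility the Sandbox demonstrates. The pattern list and object set are illustrative stand-ins.

```python
import re

# Hypothetical illustration: accept several phrasings of a "create" command,
# e.g. "make a dragon", "spawn the tree", "create sword".
CREATE_PATTERNS = [
    r"(?:create|make|spawn|give me)\s+(?:a|an|the)?\s*(?P<obj>\w+)",
]

# Stand-in for the roughly 100 creatable objects mentioned in the article.
KNOWN_OBJECTS = {"dragon", "tree", "sword"}

def parse_command(utterance):
    """Return ('create', object_name) if the utterance matches, else None."""
    text = utterance.lower().strip()
    for pattern in CREATE_PATTERNS:
        match = re.search(pattern, text)
        if match and match.group("obj") in KNOWN_OBJECTS:
            return ("create", match.group("obj"))
    return None
```

In a real pipeline, the raw audio would first pass through a speech-to-text service, and a trained intent classifier (rather than regular expressions) would absorb far more variation; this sketch only shows the shape of the command-to-action mapping.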
According to Ludden, it is very simple to integrate the VR Speech Sandbox into an existing or new application. Because it installs like a standard Unity package, developers barely need to touch scripts; he estimated that developers can have a working voice interaction system in an existing or new application within an hour.
Currently, the VR Speech Sandbox offers a Unity SDK, but IBM is working on supporting Unreal Engine as well as other game engines used to build VR and AR content. According to recent research, there are more than 7 million AR and VR developers in the world, a number expected to keep growing.
Introducing IBM Speech Sandbox:
Video Source: IBM Cloud Experience Lab