IVO Gesture Recognition Engine

IVO is a gesture recognition engine for positioning, scaling, and animating 3D virtual objects
in Augmented Reality and Virtual Reality environments. IVO uses the Kinect 2 sensor to
recognize different hand gestures for virtual object interaction.

IVO is currently available as an engine for custom projects and licensing. An SDK will be available soon.

Contact Us


IVO Gesture Recognition Features:

NATURAL HAND GESTURES

IVO uses the following simplified hand gestures for virtual object interaction (a rough sketch of how such gestures might map onto object actions follows the list):
• Select and scale a 3D virtual object
• Hold and rotate a 3D virtual object
• Select and view digital overlay information for a 3D virtual object
• Place 3D virtual objects into a predefined animation sequence
• Move and scale an animation sequence while its 3D virtual objects animate
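
Zugara has not published the internals of this gesture-to-action mapping, so the following Python sketch is only an illustration of how recognized gestures could be dispatched onto object actions. The Gesture labels, the SceneObject fields, and the scaling/rotation rules are assumptions for the sake of the example, not IVO's actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Gesture(Enum):
    """Hypothetical gesture labels, loosely following the list above."""
    SELECT = auto()    # open hand closing to a grab
    HOLD = auto()      # closed hand held steady
    POINT = auto()     # index finger extended
    RELEASE = auto()   # hand opens again


@dataclass
class SceneObject:
    """Minimal stand-in for a 3D virtual object in the scene."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0
    rotation_y: float = 0.0   # degrees around the vertical axis
    selected: bool = False


def apply_gesture(obj: SceneObject, gesture: Gesture, hand_delta: tuple) -> None:
    """Map a recognized gesture plus the hand's movement onto an object action."""
    dx, dy, dz = hand_delta
    if gesture is Gesture.SELECT:
        obj.selected = True
        # Pushing the hand toward or away from the sensor scales the object.
        obj.scale = max(0.1, obj.scale + dz)
    elif gesture is Gesture.HOLD and obj.selected:
        # Lateral hand movement while holding rotates the object.
        obj.rotation_y = (obj.rotation_y + dx * 90.0) % 360.0
    elif gesture is Gesture.POINT:
        # Pointing could surface the object's digital overlay (see tagging below).
        print(f"Showing overlay for {obj.name}")
    elif gesture is Gesture.RELEASE:
        obj.selected = False


# Example: select an object and push the hand 0.2 units forward to scale it up.
part = SceneObject("engine_block")
apply_gesture(part, Gesture.SELECT, (0.0, 0.0, 0.2))
print(part.scale)   # 1.2
```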

DID YOU KNOW?

Deloitte recently released a report showing how gesture-based interactions are the next evolution in user interface interaction. Read more here.


3D OBJECT COMPATIBLE

IVO was designed to work with virtually any 3D object. 3D objects can be uploaded to the system, tagged and positioned. 3D objects can also be attached to an animated scene that the user can control within the IVO environment.
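
IVO's actual asset format is not documented on this page; purely as a sketch, an uploaded 3D object could be described by a record like the one below. The field names (model_file, tags, position, scale, scene) are illustrative, not IVO's real schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VirtualObject:
    """Illustrative record for a 3D object registered with the engine."""
    model_file: str                      # e.g. an uploaded .fbx or .obj asset
    tags: List[str] = field(default_factory=list)
    position: tuple = (0.0, 0.0, 0.0)    # metres, in the scene's coordinate frame
    scale: float = 1.0
    scene: Optional[str] = None          # animated scene the object is attached to, if any


# Example: register a model, tag it, place it, and attach it to an animated scene.
turbine = VirtualObject(
    model_file="assets/turbine.fbx",
    tags=["turbine", "maintenance-demo"],
    position=(1.5, 0.0, 2.0),
    scene="assembly_line_walkthrough",
)
```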

DID YOU KNOW?

To see an example of how 3D virtual objects are added to a scene and then animated, please view the IVO demo video above or here.


ENVIRONMENTAL POSITIONING

3D virtual objects can be placed in your physical space, relative to your position, and interacted with there. This allows you to see different 3D virtual objects in front of, near, or behind you. Animated effects can be added to individual 3D virtual objects, and you can also use a scene animation to see how all of the 3D virtual objects animate around you.
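
Placing an object "in front of" or "behind" the user amounts to expressing its position in a user-centred frame. The sketch below shows the idea under the assumption that the depth camera reports the user's position and facing direction; the function name and parameters are hypothetical, not part of IVO.

```python
import math


def place_relative_to_user(user_pos, user_yaw_deg, distance, side="front"):
    """Return a world-space position a given distance from the user.

    user_pos     -- (x, y, z) of the user in metres (from the depth camera)
    user_yaw_deg -- direction the user is facing, degrees around the vertical axis
    distance     -- how far from the user to place the object, metres
    side         -- "front" or "behind"
    """
    yaw = math.radians(user_yaw_deg)
    forward = (math.sin(yaw), 0.0, math.cos(yaw))
    sign = 1.0 if side == "front" else -1.0
    return (
        user_pos[0] + sign * forward[0] * distance,
        user_pos[1],
        user_pos[2] + sign * forward[2] * distance,
    )


# Example: place one object two metres directly in front of the user,
# and another one metre behind them.
user = (0.0, 0.0, 0.0)
in_front = place_relative_to_user(user, 0.0, 2.0, side="front")
behind = place_relative_to_user(user, 0.0, 1.0, side="behind")
print(in_front, behind)   # (0.0, 0.0, 2.0) (0.0, 0.0, -1.0)
```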

DID YOU KNOW?

3D objects can also be tagged with information. Using the point gesture, the user can identify an object and view its tag. Text, photos, and videos can be attached as virtual tags.
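
One simple way to picture the point gesture is as a ray cast from the hand: whichever tagged object lies closest to that ray is the one whose text, photos, or videos are shown. The sketch below is a hypothetical illustration of that idea; the TaggedObject record, the angle threshold, and the closest-object heuristic are assumptions, not IVO's implementation.

```python
import math
from dataclasses import dataclass, field
from typing import List


@dataclass
class TaggedObject:
    """Illustrative tagged object: name, world position, attached media."""
    name: str
    position: tuple
    media: List[str] = field(default_factory=list)   # text / photo / video references


def identify_pointed_object(hand_pos, point_dir, objects, max_angle_deg=10.0):
    """Return the tagged object closest to the pointing ray, or None.

    point_dir is assumed to be a unit vector for the pointing direction.
    """
    best, best_angle = None, max_angle_deg
    for obj in objects:
        to_obj = [o - h for o, h in zip(obj.position, hand_pos)]
        length = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
        cos_angle = sum(d * t for d, t in zip(point_dir, to_obj)) / length
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best


# Example: the user points straight ahead; the pump sits roughly on that ray.
objects = [
    TaggedObject("pump", (0.1, 0.0, 2.0), media=["pump_specs.txt", "pump_demo.mp4"]),
    TaggedObject("valve", (2.0, 0.0, 0.5), media=["valve_photo.jpg"]),
]
hit = identify_pointed_object((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), objects)
print(hit.name if hit else "nothing identified")   # pump
```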


VIRTUAL ENVIRONMENTS

IVO was also designed to work within both AR and VR environments that utilize depth-sensing cameras. With IVO, 3D objects can be placed and tracked in relation to the user’s position and movements.

DID YOU KNOW?

Depth-sensing cameras can upgrade an AR/VR experience with hand tracking, gestural interaction, and even spatial mapping.


VIRTUAL SESSION SYNCHING

One of the planned features for IVO is the ability to synchronize virtual objects between different platforms. For example, a user in a Virtual Reality environment using IVO could collaborate and interact with a synchronized user in an Augmented Reality environment using IVO.
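
Since cross-platform synchronization is a planned feature, there is no published wire format to quote. Purely as an illustration, the state that would need to travel between an AR session and a VR session is roughly an object identifier plus its transform; the message fields below are hypothetical, and a real implementation would also need transport and conflict resolution, which are out of scope here.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ObjectSyncMessage:
    """Hypothetical per-object update exchanged between synchronized sessions."""
    session_id: str      # shared session identifier
    object_id: str       # which virtual object changed
    position: tuple      # (x, y, z) in the shared coordinate frame
    rotation_y: float    # degrees around the vertical axis
    scale: float
    timestamp: float     # sender clock, used for ordering updates

    def to_json(self) -> str:
        return json.dumps(asdict(self))


# Example: a VR user rotates an object; the update is serialized for the AR peer.
update = ObjectSyncMessage(
    session_id="demo-42",
    object_id="turbine",
    position=(1.5, 0.0, 2.0),
    rotation_y=45.0,
    scale=1.0,
    timestamp=time.time(),
)
print(update.to_json())
```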

DID YOU KNOW?

Zugara’s patent-pending ZugSTAR technology allows video feeds to be synchronized with virtual object feeds for collaboration and interaction between multiple participants. You can view more information on ZugSTAR here.


INTERACTIVE LEARNING

IVO was designed for multiple use cases, from interactive educational experiences in the classroom to visual presentations in the boardroom.

DID YOU KNOW?

IVO can also be used in various environments that utilize a 3D camera, such as a museum display, an event, or a storefront window.


HARDWARE REQUIREMENTS

The IVO Gesture Recognition Engine has the following hardware requirements:
• PC with an i7 processor and at least 1 GB of video RAM (Example PC Specs)
• Xbox One Kinect + Windows Adapter

PRICING

For the latest pricing, please Contact Us.

