FAQs + Approach

Combining Lego Primitives

Primitives from Computer Vision, AI, Spatial Computing and Digital Twins can be combined with one another. We’re building a variety of new tools and capabilities that can be combined for assessment and training.

Why Mobile and not Goggles?

We focus on Mobile hardware instead of Goggles because we want to support usability, accessibility and scale for workers on the job, wherever they are.

Why XR First?

XR-first design principles mean we aim to let users author directly in the Mixed Reality Stage, without needing any 3D design skills.

Intersection between Training x Assessment x AI x XR

The application of these tools emphasises creating evidence of competency and judgment.

Why Visual Spatial?

Health and field workers prefer trainers to show them rather than tell them. Physical movements are understood better when people can see them from all angles.

Why XR and not VR?

Frontline workers are on site and in the field, and aged care workers are in people’s homes. We need tools that enable them to visualise on site.

Why prioritise Computer Vision?

Computer Vision AI can be tied to a Trainer’s vision and judgment. Computer Vision x AI can guide a Trainee with “Trainer-like” co-pilot visual judgment.

Most Health Data is visual. Computer Vision can provide the visual context that AI needs to guide workers in the specific physical environments they are in.

Where does GenerativeAI fit into training?

In training, GenerativeAI can be used by instructional designers to accelerate the creation of photos and materials. However, we believe GenerativeAI is more useful when it is chained after Computer Vision in the process.

Why combine XR with AI?

Co-pilot XR digital twins can demonstrate physical body movements and multi-step processes in a way that is closer to a physical trainer than a photo or video is. In situations where trainees want the trainer to “demonstrate” for them, XR digital twins come closer to reproducing the movements and processes of trainers.

Photos dislocate workers from their context, but XR can keep workers in context. If we use AI to detect danger in the environment, we can mark up and highlight the actual location in the XR context. XR can overlay information on the worker’s physical environment.
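A minimal sketch of one way this could work, assuming a hypothetical detection result, a depth value and known camera intrinsics (the Detection class, the pixel_to_camera_space helper and all values are illustrative, not Conja’s implementation): the detected hazard’s pixel is back-projected to a 3D point that an XR framework could use to anchor markup at the real location.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str      # e.g. "exposed_cable"
        u: float        # pixel x of the detection centre
        v: float        # pixel y of the detection centre
        depth_m: float  # depth at that pixel, in metres (LiDAR or depth estimation)

    def pixel_to_camera_space(det: Detection, fx: float, fy: float, cx: float, cy: float):
        """Back-project a pixel plus depth into camera-space coordinates (pinhole model)."""
        x = (det.u - cx) * det.depth_m / fx
        y = (det.v - cy) * det.depth_m / fy
        z = det.depth_m
        return (x, y, z)

    # The resulting point would be handed to the XR layer to anchor a highlight
    # at the hazard's location in the worker's physical environment.
    hazard = Detection(label="exposed_cable", u=640, v=420, depth_m=1.8)
    print(hazard.label, pixel_to_camera_space(hazard, fx=1500, fy=1500, cx=960, cy=540))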

AlexNet Flow for Training

AlexNet combined Computer Vision with AI to recognise a “CAT” in an image. Now people can use GenerativeAI to type “CAT” and generate an image of a cat.

In a training context, Conja can apply Computer Vision with AI to visually recognise and label a dangerous item in the worker’s environment, then use AI to generate visual spatial guidance in the form of digital twins or XR markup.
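A minimal sketch of that chain, with hypothetical stand-ins for both models (detect_and_label and generate_guidance are illustrative stubs, not Conja’s API): a Computer Vision step labels what is in the worker’s frame, then a GenerativeAI step turns those labels into visual spatial guidance.

    from typing import List

    def detect_and_label(frame_path: str) -> List[str]:
        """Hypothetical Computer Vision step: return hazard labels found in the frame."""
        # In practice this would run an object-detection model; stubbed for the sketch.
        return ["sharps_container", "uncapped_needle"]

    def generate_guidance(labels: List[str]) -> List[str]:
        """Hypothetical GenerativeAI step: turn labels into visual spatial guidance."""
        # In practice this would prompt a generative model to produce XR markup or
        # digital twin instructions; here the guidance is simply templated.
        return [f"Highlight {label} and show its safe-handling digital twin" for label in labels]

    for step in generate_guidance(detect_and_label("worker_frame.jpg")):
        print(step)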

This can be applied further to Automated Compliance Checks.
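A minimal sketch of such a check, assuming the detected labels come from the Computer Vision step above (REQUIRED_PPE and the labels are illustrative): any required item that is not detected gets flagged for follow-up.

    REQUIRED_PPE = {"gloves", "eye_protection", "hi_vis_vest"}

    def compliance_gaps(detected_labels):
        """Return the required items that were not seen in the worker's environment."""
        return sorted(REQUIRED_PPE - set(detected_labels))

    print(compliance_gaps({"gloves", "hi_vis_vest"}))  # ['eye_protection']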

OpenUSD is the next significant technology to affect training

SCORM was critical in standardising training technologies over the past decades.

We believe OpenUSD with Omniverse will change how training and assessment are done over the next few decades. Rather than stream Omniverse into Conja, we’re building Conja with native, mobile-first principles for OpenUSD, so that when we connect to Omniverse, we bring offline, edge and remote native capabilities that multiply Conja with Omniverse.
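A minimal sketch of authoring a scene with the open-source OpenUSD Python API (the pxr modules shipped in the usd-core package); the prim paths and values are illustrative, not Conja’s schema.

    from pxr import Usd, UsdGeom, Gf

    # Author a small stage that could represent one training scene.
    stage = Usd.Stage.CreateNew("training_scene.usda")
    world = UsdGeom.Xform.Define(stage, "/World")

    # A cube standing in for a hazard marker placed in the worker's environment.
    hazard = UsdGeom.Cube.Define(stage, "/World/HazardMarker")
    hazard.GetSizeAttr().Set(0.25)
    hazard.AddTranslateOp().Set(Gf.Vec3d(1.0, 0.0, 2.0))

    stage.SetDefaultPrim(world.GetPrim())
    stage.GetRootLayer().Save()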