Klleon, a company specializing in artificial intelligence, announced a groundbreaking digital human technology at the GTC 2024 conference. The technology enables the creation of realistic three-dimensional models of virtual humans capable of interacting within virtual spaces.
These models reproduce detailed features such as facial characteristics, hair, skin tones, and body movements, and they can interact with the virtual world naturally and intelligently: speaking, expressing emotions, and making decisions.
Klleon's digital avatar technology can be applied across many fields. In education, it can be used to create three-dimensional models of important historical or scientific figures with whom students can interact directly. In entertainment, digital avatars can serve as fictional characters in video games, films, or TV series. In healthcare, three-dimensional models of body parts can help doctors analyze and treat diseases. And in marketing, three-dimensional models of products or services give customers a chance to interact with them virtually before making a purchase.
The technology is the result of more than two years of research and development in partnership with Nvidia, aimed at designing a virtual human capable of holding conversations and expressing a range of human emotions.
The technology is being built into "CreChat," a platform for interacting with virtual humans in real time, expected to launch in the first half of 2024. CreChat is an application that lets users create digital replicas of celebrities that communicate through conversations combining image and sound in a realistic manner.
Thanks to Nvidia's advanced image-processing technology, Klleon has made significant progress in this field by leveraging Audio2Face (A2F), part of Nvidia Omniverse, which converts speech audio into facial animation, giving virtual humans precise and emotionally expressive faces.
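To make the general idea of audio-driven facial animation concrete, the minimal sketch below is a hypothetical illustration, not Klleon's pipeline or the actual Audio2Face API: it maps the short-term energy of an audio signal to a single "mouth open" blendshape weight per animation frame, whereas A2F infers a full set of facial expressions from speech.

```python
import numpy as np


def audio_to_mouth_open(samples: np.ndarray, sample_rate: int, fps: int = 30) -> np.ndarray:
    """Map short-term audio energy to a 0-1 'mouth open' blendshape curve,
    one value per animation frame (illustrative stand-in, not Audio2Face)."""
    frame_len = sample_rate // fps               # audio samples per animation frame
    n_frames = len(samples) // frame_len
    weights = np.empty(n_frames)
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        weights[i] = np.sqrt(np.mean(frame ** 2))  # RMS energy of this frame
    # Normalize so the loudest frame fully opens the mouth
    peak = weights.max()
    peak = peak if peak > 0 else 1.0
    return np.clip(weights / peak, 0.0, 1.0)


if __name__ == "__main__":
    # Synthetic speech-like signal: a 150 Hz tone with slow amplitude modulation
    sr = 16_000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    speech_like = np.sin(2 * np.pi * 150 * t) * np.abs(np.sin(2 * np.pi * 3 * t))
    curve = audio_to_mouth_open(speech_like, sr)
    print(f"{len(curve)} animation frames, peak weight {curve.max():.2f}")
```

In a production system such as A2F, a neural network replaces the energy heuristic and drives dozens of facial blendshapes at once, which is what allows the expressive, emotion-aware animation described above.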