Nvidia has released an early beta of its Chat with RTX app, which lets you run a personal artificial intelligence chatbot on your own computer.
You can feed the chatbot YouTube video clips and your own documents to create summaries and get relevant answers based on your data.
With this app, Nvidia is attempting to turn the artificial intelligence capabilities of its graphics processing units into a tool that everyone can use.
The chatbot runs locally on your computer; all you need is a graphics processing unit from the GeForce RTX 30 or GeForce RTX 40 series with at least 8 gigabytes of video RAM.
The Chat with RTX platform gives users direct access to advanced artificial intelligence capabilities on their own hardware, including retrieval-augmented generation (RAG) and the TensorRT-LLM software. At the same time, the application consumes no data center resources and keeps data on the local machine, so users need not worry about the privacy of their artificial intelligence conversations.
Chatbots have become an essential part of daily life for millions of people worldwide, and they typically rely on cloud servers equipped with Nvidia graphics processing units.
Chat with RTX changes this by letting users run generative artificial intelligence features locally, using the processing power of their own graphics processing units.
Nvidia said: “Chat with RTX is more than just a chatbot, it is a personal AI companion that users can customize with their own content. Users can enhance their experience and benefit from artificial intelligence quickly and privately by using the capabilities of local computers running on Windows.”
The Chat with RTX application combines RAG, the TensorRT-LLM software, and Nvidia RTX acceleration to deliver fast, contextually relevant responses drawn from local datasets. Users can point the application at local files on their computers and turn them into a dataset for open-source large language models such as Mistral or Llama 2.
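To make the RAG idea concrete, here is a minimal sketch of the retrieval step: documents are scored against a query and the best match is handed to the language model as context. This toy uses bag-of-words cosine similarity in place of the real embedding models and TensorRT-LLM acceleration that Chat with RTX uses; all names here are illustrative, not Nvidia's API.

```python
# Toy retrieval step of a RAG pipeline (illustrative only; Chat with RTX
# uses proper embedding models and TensorRT-LLM, not bag-of-words).
from collections import Counter
import math

def vectorize(text):
    """Turn text into a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)),
                    reverse=True)
    return ranked[:k]

docs = [
    "The quarterly report covers revenue growth in the datacenter segment.",
    "Our favorite restaurant picks for the trip to Tokyo.",
    "Notes from the TensorRT-LLM optimization meeting.",
]
# The retrieved passage would then be prepended to the user's question
# as context for the local language model.
print(retrieve("restaurant picks", docs))
```

In a real pipeline, the retrieved snippets are concatenated into the model's prompt so the answer is grounded in the user's own files rather than the model's training data.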
Users can type queries in natural language, and Chat with RTX quickly finds the answer along with its relevant context, such as a restaurant recommendation or other personal information, without the user having to search through multiple files.
The application supports multiple file formats, including txt, pdf, doc, docx, and xml, making it convenient and easy to use.
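Gathering those supported formats from a folder could look like the following sketch; the helper name and folder layout are assumptions for illustration, not part of Chat with RTX itself.

```python
# Hypothetical helper that collects the file types the article lists
# (.txt, .pdf, .doc, .docx, .xml) from a user's folder for ingestion.
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".doc", ".docx", ".xml"}

def collect_documents(folder):
    """Return every file under `folder` whose extension is supported."""
    return [p for p in Path(folder).rglob("*")
            if p.is_file() and p.suffix.lower() in SUPPORTED]
```

A tool like this would run once over the chosen folder, then hand the matching files to the indexing step that builds the local dataset.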
Nvidia explained that a standout feature of Chat with RTX is its ability to pull in information from media sources, especially YouTube video clips and playlists. Users can integrate knowledge drawn from video content into the chatbot and then ask contextually relevant questions, for example searching for travel recommendations based on a favorite influencer's videos, or getting quick guidance from educational channels.
Because the application processes data locally, it delivers fast results without sending user data to external servers.
By dispensing with cloud services entirely, Chat with RTX lets users work with sensitive information without sharing it with any external party or even connecting to the internet.