NVIDIA's ChatRTX Now Powered by Google's Gemma
Good news for tech enthusiasts with NVIDIA’s RTX GPUs! ChatRTX, the company’s experimental AI chatbot, just received a major upgrade.
Why is local AI important?
Most AI chatbots rely on remote servers to process information, which can be slow and raises privacy concerns for some users. ChatRTX takes a different approach: it runs the LLM entirely on your local device, powered by your RTX GPU. This means faster response times and the comfort of knowing your data stays on your machine. The app spins up a local chatbot server, accessible from a browser, that doubles as a powerful search tool for analyzing your own data. ChatRTX also introduces voice query support via Whisper, an AI speech-recognition system.
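ChatRTX exposes that local server through a browser interface rather than a documented API, but to make the idea concrete, here is a minimal sketch of what talking to a model hosted on your own machine could look like. The endpoint URL and JSON fields are hypothetical placeholders, not ChatRTX's actual interface.

```python
# Minimal sketch of querying a locally hosted chatbot server.
# The URL and JSON schema below are illustrative placeholders,
# not ChatRTX's actual interface (ChatRTX ships a browser UI).
import requests

LOCAL_ENDPOINT = "http://127.0.0.1:8080/chat"  # hypothetical local port

def ask_local_model(prompt: str) -> str:
    # The request never leaves localhost, so the data stays on this machine.
    response = requests.post(LOCAL_ENDPOINT, json={"prompt": prompt}, timeout=60)
    response.raise_for_status()
    return response.json().get("text", "")

if __name__ == "__main__":
    print(ask_local_model("Summarize the notes in my meeting folder."))
```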
Conventional chatbots need powerful computers, and those computers sit in remote servers, not on your phone. Servers have plenty of room for data, like the chatbot’s training information, and they connect to even more data, such as customer records. Some chatbots can do part of the work on your device, but for the really hard stuff they call a server for help.
You know ChatGPT; that's the kind of idea behind this, but you're running the models locally on your own machine. You've been able to do this for a couple of years now, and the entire technology stack is pretty much open source, so you can even train your own language models.
NVIDIA’s ChatRTX Now Powered by Google’s Gemma
The latest update adds Gemma to ChatRTX‘s arsenal of LLMs. Gemma is a family of lightweight open models from Google, built from the same research and technology behind its Gemini models. It’s designed to run efficiently on capable laptops and desktops, making it a natural fit for ChatRTX.
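To get a feel for what running a model like this locally involves, here is a minimal sketch of loading a small Gemma variant on an RTX GPU with the Hugging Face transformers library. ChatRTX uses its own optimized runtime under the hood, so treat this as a generic illustration; it assumes a CUDA-capable GPU and that you have accepted Gemma's license terms on Hugging Face.

```python
# Sketch: run a small Gemma model locally with Hugging Face transformers.
# This is not how ChatRTX serves the model; it is a generic illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # small instruction-tuned Gemma variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPU memory
    device_map="auto",          # place weights on the local GPU
)

prompt = "Explain in two sentences why running an LLM locally helps privacy."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```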
NVIDIA‘s update to ChatRTX is a big step forward for local AI. With powerful models like Gemma and new features like voice search, it is becoming a more versatile tool for anyone who wants to leverage AI for their personal data. As AI technology continues to evolve, we can expect even more exciting developments in the world of local AI assistants.
By supporting a variety of AI models, the Windows app allows robust queries into personal documents and, new with this update, images. Notably, the update adds Google's Gemma, ChatGLM3 (an open bilingual model that understands both English and Chinese), and OpenAI's CLIP for searching and interacting with your local photo data.
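The photo side is worth unpacking: CLIP maps images and text into a shared embedding space, so a plain-language query can be matched against your pictures. Here is a rough sketch of that idea using the openly available CLIP weights; the file paths are placeholders, and this illustrates the technique rather than ChatRTX's actual pipeline.

```python
# Sketch of CLIP-style photo search: embed the text query and each local
# image, then rank images by similarity. Paths are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image_paths = ["photos/beach.jpg", "photos/birthday.jpg"]  # placeholder files
images = [Image.open(p) for p in image_paths]
query = "a photo taken at the beach"

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds each image's similarity to the query; higher is better.
scores = outputs.logits_per_image.squeeze(-1)
best = image_paths[int(scores.argmax())]
print(f"Best match for '{query}': {best}")
```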
ChatRTX-enabled searches across local documents and YouTube videos offer summaries and detailed answers. With these upgrades, users can also make voice queries thanks to the integrated Whisper speech-recognition model.
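The voice-query flow boils down to transcribing speech on your own machine and handing the text to the chatbot. Below is a hedged sketch using the open-source Whisper package; the audio file name is a placeholder, and the hand-off at the end refers to the hypothetical ask_local_model() helper from the earlier sketch, not a real ChatRTX API.

```python
# Sketch: transcribe a voice recording locally with Whisper, then use the
# text as a chatbot query. Requires ffmpeg; the file name is a placeholder.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")            # small model, runs on a local GPU or CPU
result = model.transcribe("voice_query.wav")  # placeholder recording
query_text = result["text"].strip()
print("Transcribed query:", query_text)
# query_text could now be passed to the local chatbot,
# e.g. ask_local_model(query_text) from the earlier sketch.
```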
To access these tools, you'll need to download a substantial 36 GB installer from NVIDIA's website, which makes these powerful AI tools available for personal use.
Level Up Your Local AI
Local AI assistants could become more personalized for individual users and better at understanding the context of your queries, which leads to more relevant responses. That's the key thing about working locally: you can feed it your collection of books, or a bunch of files in PDF or text format (or a mix of both), and have it use those as its data set. You basically get your own little summary tool for the material you keep on your machine.
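As a rough illustration of that "feed it your own files" idea, the sketch below extracts text from a PDF, chunks it, embeds the chunks, and pulls back the passages most relevant to a question; those passages would then be pasted into the local model's prompt as context. The libraries and file names here are my own stand-ins, not what ChatRTX uses internally.

```python
# Sketch of local retrieval over a personal PDF: extract, chunk, embed,
# and rank passages against a question. Library choices are stand-ins.
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util

def pdf_chunks(path: str, size: int = 800) -> list[str]:
    # Concatenate page text and cut it into fixed-size character chunks.
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = pdf_chunks("my_notes.pdf")  # placeholder document
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

question = "What does chapter 3 say about level design?"
q_vec = embedder.encode(question, convert_to_tensor=True)
scores = util.cos_sim(q_vec, chunk_vecs)[0]
top = scores.topk(k=min(3, len(chunks)))

# The top-ranked chunks become the context handed to the local model.
for idx in top.indices.tolist():
    print(chunks[idx][:200], "...")
```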
So right now, if I want to move a response over to my code editor, I click copy, bring it into the editor, and go from there, but it is, again, entirely local. It's free! It's easy to set up! And you can give it your own data.
It's this part that makes it most useful to me: I can point it at the specific topics I'm most interested in, and it will search within them. So again, you could grab the documentation for your engine of choice, convert it into a PDF file, and let ChatRTX use it as its data set, and it's all happening locally on your machine.