
View Full Version : How to install open source AI on Linux with GUI interface - GPT4All tutorial



Fli
10-28-2024, 05:59 AM
Since online AI services like:
https://pizzagpt.it/en
https://chatgptfree.ai/
https://deepai.org/chat/free-chatgpt
https://chatgbt.one/
https://you.com/

are limited in query length, and for privacy reasons (I do not want to share my queries with big tech), I wanted to run a good open source AI on my own computer.

I have found these free open source (FOSS) GUI options:

https://docs.gpt4all.io/gpt4all_desktop/quickstart.html - https://docs.gpt4all.io - GPT4All runs large language models (LLMs) privately on everyday desktops & laptops.
https://github.com/ortegaalfredo/neurochat/releases - https://github.com/ortegaalfredo/neurochat - Native GUI for several AI services plus llama.cpp local AIs.
https://github.com/lencx/ChatGPT/releases / https://github.com/lencx/ChatGPT - a ChatGPT desktop wrapper; I am unsure whether it requires an API key from OpenAI or ships some older self-hosted model
https://github.com/GaiZhenbiao/ChuanhuChatGPT/blob/main/readme/README_en.md - Web-UI for LLMs including ChatGPT/ChatGLM/LLaMA
https://github.com/lencx/Noi/releases - https://github.com/lencx/Noi - This is NOT self-hosted, so it uses external APIs, saving your computer's resources.
https://getstream.io/blog/best-local-llm-tools/ - also covers some more advanced tools, which seem harder to learn

I selected the first option, GPT4All, installing it like this:


wget https://gpt4all.io/installers/gpt4all-installer-linux.run
chmod +x gpt4all-installer-linux.run && ./gpt4all-installer-linux.run
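
Before running the installer, a quick sanity check can save trouble: the app plus a single model easily takes several gigabytes. Below is a small sketch of such a check (my own script and my own 10 GB budget, not anything from GPT4All; it assumes GNU df):

```shell
# Rough pre-flight check before installing GPT4All (my own script).
# The 10 GB figure is a guess covering the app plus one ~8B model.
need_gb=10

free_gb() {
    # free space in whole GB on the filesystem holding $1 (GNU df)
    df -BG --output=avail "$1" | tail -n 1 | tr -dc '0-9'
}

if [ -f gpt4all-installer-linux.run ]; then
    echo "installer present"
fi

avail=$(free_gb "$HOME")
if [ "$avail" -ge "$need_gb" ]; then
    echo "disk ok: ${avail} GB free"
else
    echo "warning: only ${avail} GB free; models are several GB each"
fi
```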

Once the GUI installation completed, I could find the GPT4All launcher on my desktop. The binary location is ..../GPT4All/bin/chat
Launching it and going to the Models tab, I could see the chat-based and instruction-based models. Since I am a layman who wants simplicity, I selected the chat-based model Llama 3, per the AI recommendation (https://internetlifeforum.com/showthread.php?28632-Chat-or-instruction-based-AI-model-to-choose-as-a-newbie).

After downloading the several-GB model, it took some minutes at 100% CPU to load, so other apps lagged during that time.
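
That lag is usually a RAM question: if the model barely fits, the whole system swaps. A quick sanity check before loading (my own heuristic; the ~5 GB figure for a 4-bit-quantized 8B model is an assumption, not from the GPT4All docs, and varies by quantization):

```shell
# Rough RAM check before loading a model (my own heuristic, not GPT4All's).
# Assumption: a 4-bit-quantized 8B model occupies roughly 5 GB, plus headroom.
model_gb=5

total_kb=$(grep MemTotal /proc/meminfo | tr -dc '0-9')
total_gb=$((total_kb / 1024 / 1024))

if [ "$total_gb" -ge $((model_gb + 2)) ]; then
    echo "RAM looks sufficient: ${total_gb} GB total"
else
    echo "RAM may be tight: ${total_gb} GB total for a ~${model_gb} GB model"
fi
```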

Asking the first question on the chat tab produced a pretty slow response; once it had been typed out on screen, a speed of 1.8 tokens/second was displayed. That looked way too slow, so I went to Settings > Application > Device and switched from "Application default" to my graphics card device, which roughly tripled the tokens-per-second output speed. So apparently GPT4All defaulted to the CPU even though the GPU is more suitable for AI workloads. I think the developers should auto-detect the fastest device automatically.
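
Before flipping that setting, it can help to confirm a GPU is actually visible to the system at all. A generic check I use (nothing GPT4All-specific; `vulkaninfo` comes from the vulkan-tools package and may not be installed):

```shell
# List any GPU the system knows about, falling back gracefully when
# the usual tools are missing (a generic check, not part of GPT4All).
detect_gpu() {
    if command -v lspci >/dev/null 2>&1; then
        lspci 2>/dev/null | grep -iE 'vga|3d controller' || echo "no GPU visible via lspci"
    else
        echo "lspci not installed"
    fi
}

detect_gpu

# Vulkan is the GPU backend GPT4All typically uses on Linux; if the
# vulkan-tools package is installed, this lists what Vulkan can see.
if command -v vulkaninfo >/dev/null 2>&1; then
    vulkaninfo --summary 2>/dev/null | grep -i deviceName
fi
```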

I am more or less happy with the Llama 3 8B replies compared to the ChatGPT replies from https://pizzagpt.it/en, though with Linux commands, for example, it still often outputs outdated or non-existent command parameters.
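
One habit that helps with those hallucinated parameters: check the suggested flag against the command's own help text before running it. A tiny helper sketch (`has_flag` is my own name, not a standard tool; it does a crude literal match, which is still enough to catch an invented flag):

```shell
# has_flag CMD FLAG -> exit 0 if FLAG appears in CMD's --help output.
# A crude literal match (my own helper), but enough to catch a flag
# that an AI model invented or that was removed in newer versions.
has_flag() {
    "$1" --help 2>&1 | grep -q -- "$2"
}

# Example: verify a flag before trusting an AI-suggested command line.
if has_flag ls --almost-all; then
    echo "ls supports --almost-all"
else
    echo "ls does not document --almost-all"
fi
```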