Running Your First Local LLM Model Like ChatGPT Without Coding

In the fast-paced world of artificial intelligence and machine learning, Large Language Models (LLMs), the technology behind tools like ChatGPT, have revolutionized how we interact with technology. These models are not just for the tech-savvy; even general users on Windows can set up and run them on their personal computers. This guide aims to simplify the process, enabling you to run your first local LLM model without delving into complex coding.

But first, let's go over some basic definitions.

LLM (Large Language Models)

Large Language Models, or LLMs, are like the super smart assistants on your phone or computer but way more powerful. Imagine having a teacher, a librarian, and a storyteller all rolled into one. That's what LLMs are. They can read and understand huge amounts of text from books, websites, and all sorts of places on the internet. Then, when you ask them something, they use all that knowledge to give you answers, write stories, solve problems, or even make jokes. It's like having a chat with someone who knows a bit about everything under the sun.

GPT (Generative Pre-trained Transformer)

Now, GPT is like a superstar member of the LLM family. It's a specific type of Large Language Model developed by OpenAI. If LLMs are smartphones, GPT is like a specific brand's latest model, say, the iPhone 13. GPT models, especially the latest ones like GPT-3 or GPT-4, are incredibly advanced. They're trained on an enormous dataset to understand context, answer questions, write in various styles, and even mimic human conversational patterns. GPT has been making waves for its ability to generate text that can sometimes be almost indistinguishable from something a human would write.

LLM vs GPT: Simplified Comparison

LLM: The big family of intelligent models that can work with language in various ways. It's like saying "vehicles," which includes everything from bikes to cars to planes.
GPT: A specific, high-profile member of the LLM family, known for its advanced capabilities and versatility. It's like talking about a specific, top-of-the-line car model that's known for its performance and features.

In simple terms, think of LLM as "cricket" – a sport loved and played in different forms across the country. GPT, then, would be like Virat Kohli – a star player known for his skill, versatility, and ability to perform exceptionally in any match situation. Just as Kohli stands out in the cricket world for his achievements, GPT stands out in the world of LLMs for its advanced capabilities and innovations.

Mistral AI

In this tutorial, we are going to set up Mistral AI on your local machine. Now, talking about Mistral AI, think of it as a new friend in the world of smart assistants. Just like when a new smartphone model comes out with better cameras and features, Mistral AI is an upgrade in the world of AI. It's designed to be even smarter, faster, and more helpful. The folks who make these AIs are always trying to improve them, making them understand you better, give more accurate answers, and even sound more human-like when they talk back to you. Mistral AI is just one of the latest efforts to make these virtual assistants an even bigger help in our daily lives, whether it's for work, learning new stuff, or just having fun chatting.

Requirements

Before diving into the setup process, ensure your system meets the following requirements:

  • Graphics Card: A GPU with CUDA support is recommended for fast responses (a quick way to check yours is shown below); the model can also run on a CPU, just more slowly.
  • Memory: At least 16 GB of RAM (32 GB recommended) to handle the operations smoothly.
  • Operating System: A Windows, Linux, or macOS environment.
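
If you're not sure whether your machine has a usable GPU, a quick check from a terminal is usually enough. This is a minimal sketch assuming an NVIDIA card with drivers installed; if you don't have one, you can still follow along on CPU.

```bash
# NVIDIA cards only: prints the driver/CUDA version, GPU model, and available VRAM.
# If the command isn't found, install or update your GPU drivers first.
nvidia-smi
```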

Process Overview

The process involves four main steps:

  1. Setting up WSL (Windows Subsystem for Linux)
  2. Installing OLLAMA
  3. Installing Mistral AI
  4. Running the LLM Model

1. Setting Up WSL (Windows Subsystem for Linux)

Note: This step sets up a Linux environment on Windows. If you already have a working Linux setup, feel free to skip to Step 2.

Windows Subsystem for Linux (WSL) allows you to run a Linux environment directly on Windows, without the overhead of a virtual machine. Here's how to set it up:

  1. Enable WSL

     Open PowerShell as an administrator and run the command: `Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux`

     Note: If the output shows RestartNeeded : True, restart your machine before continuing the process.

  2. Install Linux Distribution

     Install Ubuntu (or your preferred distribution) by running `wsl --install -d Ubuntu`. You'll be prompted to enter a username and password, and once that's done your Linux environment is ready to use. I already have it installed here, but you should see a similar prompt once the setup completes; the quick check below confirms everything went through.
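
To confirm the installation, you can ask WSL what it has installed. These commands are run from the same PowerShell window; `wsl --status` may not exist on older WSL builds, so treat it as optional.

```bash
# List installed distributions and which WSL version each one uses
wsl --list --verbose

# Show the default distribution and overall WSL configuration (newer builds)
wsl --status
```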

2. Installing OLLAMA

OLLAMA is a platform that simplifies the installation and running of LLM models. To install OLLAMA:

  1. Access WSL Terminal: Run `wsl -d Ubuntu` in the Command Prompt to access your Linux terminal.
  2. Install OLLAMA: Execute the command `curl https://ollama.ai/install.sh | sh`. You may need to enter the sudo password you set during the WSL setup. Once the script finishes, you can verify the install as shown below.
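
A minimal check that the `ollama` command is available and its local server is reachable. Whether the server starts automatically depends on your WSL setup (systemd is not enabled on every distribution), so the manual fallback is included as a comment.

```bash
# Confirm the client installed correctly
ollama --version

# If later commands complain that they can't connect to the OLLAMA server,
# start it manually in a separate terminal:
#   ollama serve
```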

3. Installing Mistral AI

  1. Download Mistral Model: Run `ollama run mistral` in the WSL terminal. The model will start downloading automatically; see the note below if you only want to download the model without opening a chat session.
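
If you'd rather fetch the model first and chat later, `ollama pull` downloads the weights without opening an interactive session, while `ollama run` downloads the model (if needed) and then drops you straight into a prompt. A small sketch:

```bash
# Download the Mistral weights only (a few gigabytes, so it takes a while)
ollama pull mistral

# Start an interactive chat session; type your prompt and press Enter,
# and use /bye to exit when you're done
ollama run mistral
```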

4. Running the LLM Model

Once installed, the model operates similarly to ChatGPT 3.5 and is capable of a wide array of tasks like writing emails and essays, coding, general conversation, problem solving, and more.

Note: The response speed depends on your GPU's capabilities, but Mistral AI is fairly fast even on a CPU. WSL is recommended over heavier virtual-machine setups such as VMware Workstation and VirtualBox. OLLAMA is also expected to release a dedicated Windows version soon, which will simplify the process further.
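
Beyond the interactive prompt, OLLAMA also exposes a local HTTP API (on port 11434 by default), which is handy if you want to script requests rather than type them. A minimal sketch using curl; the prompt text here is just an example:

```bash
# Ask the locally running Mistral model a question over OLLAMA's HTTP API.
# "stream": false returns the whole answer as a single JSON response.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Write a two-line reminder email about a meeting at 10 AM tomorrow.",
  "stream": false
}'
```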


  • Download new models: Visit the OLLAMA library at ollama.ai to browse available models, then use `ollama run MODEL_NAME` to download one (a combined example follows below).
  • List downloaded models: `ollama list`
  • Delete a downloaded model: `ollama rm MODEL_NAME`
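
Putting those together, a typical housekeeping session might look like the sketch below. The model name `llama2` is just an example; check the OLLAMA library for what's currently available.

```bash
# Download and try another model from the OLLAMA library (example name)
ollama run llama2

# See every model downloaded so far, along with its size on disk
ollama list

# Remove a model you no longer need to free up space
ollama rm llama2
```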

Conclusion

Running a local LLM model like ChatGPT on your Windows PC doesn't have to be a daunting task filled with complex coding. With the right tools like WSL, OLLAMA, and Mistral AI, you can seamlessly set up and explore the capabilities of large language models. Whether you're a tech enthusiast or a general user, this guide ensures that you're well-equipped to step into the world of AI and machine learning.


You can integrate it with your workflow to increase productivity: automate tasks, get AI-powered suggestions and ideas, and write emails and documents privately. You can also adjust how Mistral AI generates its responses and limit them to the scope you need, but that's a topic for another post. That's all for now.

Bhanu Namikaze

Bhanu Namikaze is an Ethical Hacker, Security Analyst, Blogger, Web Developer and a Mechanical Engineer. He enjoys writing articles, blogging, debugging errors and capture-the-flag challenges. Enjoy learning; there is nothing like absolute defeat - try and try until you succeed.
