Running AI Models Locally - Under 2 Minutes

Ansari

Category: Tools

Time to read: 2 mins

Step 1: Gear Up Your Machine

To kick things off, you’ll need a decent chunk of RAM (8 to 16 GB should do), a good CPU, and, if you’re lucky, a GPU. After all, AI models have a soft spot for GPUs.

Step 2: Get the Software

Head over to the Ollama site, download their software, and follow the straightforward installation process. Once it’s up and running, you’ll spot a nifty little “go” symbol — simple as that.
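If you’d rather double-check from the command line that the install worked, something like this should do it (assuming the `ollama` command ended up on your PATH, which the standard installer takes care of):

```shell
# Confirm the CLI is installed and print its version
ollama --version

# If the background server isn't already running (the desktop app
# usually starts it for you), launch it manually in its own terminal
ollama serve
```

If `ollama --version` prints a version number, you’re good to move on.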

Step 3: Choose Your Model

Navigate to the models section on the Ollama site and start with a small model. I began with DeepSeek 1.7b, the smallest model available. Copy the command provided, paste it into your trusty command prompt, and let the magic unfold. Be patient; it might take a few minutes depending on the model’s size.
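The command you copy will look roughly like this (the exact model tag below is just an example — use whatever the model’s page on the Ollama site shows for the one you picked):

```shell
# Download the model (if needed) and drop into an interactive chat.
# Replace the tag with the exact one from the model's page.
ollama run deepseek-r1:1.5b

# Later on, see which models you've already downloaded
ollama list
```

The first run does the download, so that’s where the few minutes of waiting happen; after that it starts almost instantly.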

Step 4: Chat Like a Pro

But wait, we’re not done yet. To chat with your AI model in style, download ChatboxAI. Install it, fire it up, and link it to the Ollama API. Select your preferred model and voila! You’re chatting with your AI buddy directly from your machine — no data leakage, just secure and snug.
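Under the hood, Chatbox is just talking to Ollama’s local HTTP API, which listens on port 11434 by default. You can poke it directly from the command line too — handy for checking the connection before wiring up Chatbox (again, swap in whichever model tag you pulled):

```shell
# Send a one-off prompt to the local Ollama server (default port 11434).
# The model tag is an example -- use the one you downloaded earlier.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

If that comes back with a JSON object containing a `response` field, the API is up and Chatbox will connect without fuss.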

Remember, keeping it neighbourly with tech means keeping it close to home. Until next time!

Love this article? 🤍 Check out what else I write about.