# Running Models Locally

## Ollama

### Ollama Setup Guide for Bwat

Quick start instructions for running local AI models with Bwat using Ollama.

### ✅ System Requirements

- Windows, macOS, or Linux system
- Bwat extension installed in VS Code

🛠️ Installation & Configuration

**1. Install Ollama**

- Download from ollama.com
- Complete the installation for your operating system

*Screenshot: Ollama download options*
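
If you prefer to confirm the install from a terminal before moving on, the commands below are a minimal check. The one-line installer is Ollama's published Linux install script; on Windows and macOS you would normally run the downloaded installer instead.

```bash
# Linux only: install Ollama via its install script
curl -fsSL https://ollama.com/install.sh | sh

# Verify the CLI is on your PATH and print the installed version
ollama --version
```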
**2. Download Your Preferred Model**

- Browse available models at ollama.com/search
- Copy the run command for your selected model:

```bash
ollama run [model-name]
```

*Screenshot: Model selection process*

- Execute the command in your terminal:

```bash
ollama run llama2
```

*Screenshot: Running Ollama model*
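
If you want to fetch a model without immediately opening an interactive chat, `ollama pull` downloads it and `ollama list` confirms what is already on disk. The `llama2` tag below is just an example; substitute the model you chose.

```bash
# Download a model without starting an interactive session
ollama pull llama2

# List locally available models with their tags and sizes
ollama list
```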
**3. Configure Bwat Integration**

- Launch VS Code
- Open the Bwat settings panel
- Select "Ollama" as your API provider
- Configure the connection:
  - Base URL: `http://localhost:11434/` (default)
  - Choose your downloaded model from the list

*Screenshot: Bwat configuration with Ollama*
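
Before pointing Bwat at the server, it can help to confirm that the default base URL is answering. The sketch below exercises Ollama's local HTTP API; adjust the model name to one you have actually pulled.

```bash
# The root endpoint replies with "Ollama is running" when the server is up
curl http://localhost:11434/

# List the models the server can serve (these should appear in Bwat's model list)
curl http://localhost:11434/api/tags

# Request one non-streaming completion to verify the model loads and responds
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Say hello in one sentence.", "stream": false}'
```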

### 📌 Important Notes

- Ollama must remain running during Bwat sessions
- Initial model downloads may require significant time
- Verify the model is fully downloaded before use
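
On Linux, or whenever the desktop app is not running, you can start the server explicitly and keep that terminal open for the length of a Bwat session. This is one way to do it, not the only one; `ollama ps` is available in recent Ollama releases.

```bash
# Start the Ollama server in the foreground (leave this terminal open)
ollama serve

# In another terminal: show which models are currently loaded into memory
ollama ps
```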

### 🚦 Troubleshooting Tips

If connection issues occur:

- Confirm the Ollama process is active
- Verify the base URL matches your setup
- Check that the model download completed successfully
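
On macOS or Linux, the checks above map directly onto shell commands. A minimal sketch, assuming the default port and an example model named `llama2`:

```bash
# 1. Is an Ollama process running?
pgrep -f ollama || echo "Ollama does not appear to be running"

# 2. Does the base URL configured in Bwat answer?
curl -sf http://localhost:11434/api/version || echo "No response at http://localhost:11434"

# 3. Is the selected model actually downloaded?
ollama list | grep -i llama2 || echo "Model not found locally; try: ollama pull llama2"
```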

For advanced configuration, consult the Ollama API Documentation.