Ollama Setup Guide for Bwat
Quick start instructions for running local AI models with Bwat using Ollama.
✅ System Requirements
- Windows, macOS, or Linux system
- Bwat extension installed in VS Code
🛠️ Installation & Configuration
- Install Ollama
Download from ollama.com and complete the installation for your operating system (a quick verification is sketched below).
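To confirm the installation before moving on, you can run two basic Ollama CLI commands (a minimal check; nothing Bwat-specific is assumed here):

```bash
# Print the installed version to confirm the CLI is on your PATH
ollama --version

# List locally downloaded models (empty until you pull one in the next step)
ollama list
```

If these commands report that they cannot connect, the Ollama server may not be running yet; `ollama serve` starts it manually.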
- Download Your Preferred Model
Browse available models at ollama.com/search
Copy the run command for your selected model:
```bash
ollama run [model-name]
```
Execute the command in your terminal:
```bash
ollama run llama2
```
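If you prefer to fetch the model without opening an interactive chat session, `ollama pull` downloads the weights and `ollama list` confirms the download finished. The sketch below reuses the llama2 model from the example above:

```bash
# Download the model weights without starting an interactive prompt
ollama pull llama2

# The model should now appear here with its full size
ollama list
```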
- Configure Bwat Integration
Launch VS Code
Open Bwat settings panel
Select "Ollama" as your API provider
Configure connection:
Base URL: http://localhost:11434/ (default)
Choose your downloaded model from the list
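If the model list in Bwat appears empty, it can help to check what the server at the configured base URL is actually exposing. This is a quick sketch against Ollama's local HTTP API, assuming the default base URL:

```bash
# List the models the local Ollama server can serve;
# these are the names Bwat should offer in its model dropdown
curl http://localhost:11434/api/tags
```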
📌 Important Notes
- Ollama must remain running during Bwat sessions
- Initial model downloads may require significant time
- Verify the model is fully downloaded before use
🚦 Troubleshooting Tips
If connection issues occur, work through these checks (example commands follow the list):
- Confirm the Ollama process is active
- Verify the base URL matches your setup
- Check that the model download completed successfully
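A minimal set of diagnostic commands covering the checks above, assuming the default base URL and the llama2 example model from earlier:

```bash
# 1. Ollama process: this errors out if the local server is not running
ollama list

# 2. Base URL: the root endpoint replies "Ollama is running" when reachable
curl http://localhost:11434/

# 3. Model download: a fully downloaded model shows its details here
ollama show llama2
```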
For advanced configuration, consult the Ollama API Documentation.
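As a pointer to what the API offers beyond the Bwat integration, the sketch below sends a single non-streaming generation request directly to the local server (the model name and prompt are only illustrative):

```bash
# Request one complete (non-streamed) response from the local llama2 model
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```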