# Running Models Locally: LM Studio

Quick-start instructions for running local AI models with Bwat using LM Studio.

## 🖥️ Configuring LM Studio with Bwat

Run AI models locally on your machine and integrate them with Bwat.

## ✅ Requirements

- Windows, macOS, or Linux system with AVX2 support
- Bwat extension installed in VS Code

## 🛠️ Installation & Configuration

### 1. Get LM Studio

- Download from [lmstudio.ai](https://lmstudio.ai)
- Complete the installation for your OS

*Screenshot: LM Studio download portal*
### 2. Launch the Application

Open LM Studio after installation. The interface contains four key sections:

- **Chat**: interactive chat interface
- **Developer**: local server controls
- **My Models**: storage for downloaded models
- **Discover**: model marketplace

*Screenshot: LM Studio main interface*
### 3. Acquire a Model

- Browse the "Discover" section
- Select and download your preferred AI model
- Allow time for the download to complete (varies by model size)

*Screenshot: model download process*
### 4. Activate the Local Server

- Navigate to the "Developer" tab
- Toggle the server to "Running"
- The server listens at the default address: http://localhost:1234

*Screenshot: server activation process*
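
Before connecting Bwat, you can confirm the server is actually reachable. LM Studio's local server exposes an OpenAI-compatible API, so a request to `/v1/models` should list what it can serve. Here is a minimal sketch in TypeScript (Node 18+ for the global `fetch`), assuming the default address above:

```typescript
// check-server.ts - verify the LM Studio local server is reachable.
// Assumes the default address http://localhost:1234; adjust if you
// changed the port in the Developer tab.
async function checkServer(): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/models");
  if (!res.ok) {
    throw new Error(`Server responded with HTTP ${res.status}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  // Each entry is a model the server can offer to clients like Bwat.
  console.log("Available models:", body.data.map((m) => m.id));
}

checkServer().catch((err) => console.error("LM Studio not reachable:", err));
```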
### 5. Connect Bwat

- Launch VS Code
- Open the Bwat settings
- Choose "LM Studio" as the API provider
- Select your downloaded model

*Screenshot: Bwat configuration steps*
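
Once the provider is set, Bwat sends its requests to the local server for you. If you want to sanity-check the connection outside the extension, a minimal chat completion against the same OpenAI-compatible endpoint looks like the sketch below; `MODEL_ID` is a hypothetical placeholder, so substitute an id returned by `/v1/models`:

```typescript
// ask.ts - minimal chat completion against the LM Studio local server,
// independent of Bwat.
const MODEL_ID = "your-downloaded-model"; // placeholder: use a real id from /v1/models

async function ask(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL_ID,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const body = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return body.choices[0].message.content;
}

ask("Reply with one short sentence.").then(console.log);
```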

## 📌 Important Reminders

- LM Studio must remain running during Bwat sessions
- Initial model downloads may require significant time
- All models are stored locally after download
- Verify the server status in the Developer tab if issues occur

## 🚦 Troubleshooting Tips

**Connection failures:**

- Confirm the LM Studio server is active
- Verify the model is properly loaded
- Check that your system meets the minimum requirements
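
If all three checks pass and Bwat still cannot connect, a small script can separate the two most common failure modes: the server is not running at all (the request is refused), or it is running but returning errors. Again a sketch, assuming the default port:

```typescript
// diagnose.ts - tell "server off" apart from "server up but erroring".
// Assumes the default address http://localhost:1234.
async function diagnose(): Promise<void> {
  try {
    const res = await fetch("http://localhost:1234/v1/models");
    if (res.ok) {
      console.log(
        "Server reachable. Check that a model is loaded and that Bwat points at this address."
      );
    } else {
      console.log(`Server is up but returned HTTP ${res.status}. Check the Developer tab.`);
    }
  } catch {
    console.log("Connection refused. Toggle the server to Running in the Developer tab.");
  }
}

diagnose();
```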

**Performance issues:**

- Try smaller models first
- Close other resource-intensive applications
- Ensure proper ventilation for thermal management