With the integration of LM Studio and iTerm2, powered by the DeepSeek LLM, developers can now streamline their workflows.
This setup improves coding efficiency while keeping complete control over your data.
Running DeepSeek LLM locally offers several benefits:
- Enhanced Privacy: Your data stays on your machine, so sensitive information is never shared with external servers and nothing needs to travel over the internet.
- Customization: You have full control over the model and can fine-tune it to better suit your specific needs and preferences.
- Offline Access: You can use the model even without an internet connection, making it more reliable in various situations.
- Cost Efficiency: Avoiding cloud service fees can be more economical, especially for extensive or long-term use.
These advantages make running DeepSeek LLM locally a powerful option for developers and users who prioritize privacy.
The following steps show how to integrate LM Studio with iTerm2.
LM Studio
Download your preferred LLM and load the model:


- Jump to the Developer screen
- Open Settings and set the Server Port to 11434
- Start the Engine

The screen now shows a running service:

Click the copy button to copy the server URL, then close the page.
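
Before moving on to iTerm2, you can check that the server is actually reachable. Here is a minimal sketch in Python (standard library only) that queries the OpenAI-compatible /v1/models endpoint LM Studio exposes; the host and port are assumptions matching the settings above:

```python
import json
import urllib.request

# Assumption: LM Studio is serving on localhost with the
# Server Port configured above.
BASE_URL = "http://localhost:11434"

# GET /v1/models lists the model(s) currently loaded in LM Studio.
with urllib.request.urlopen(f"{BASE_URL}/v1/models", timeout=5) as resp:
    models = json.load(resp)

for model in models.get("data", []):
    print(model["id"])
```

If this prints the identifier of the model you loaded, the server side is ready.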
iTerm2
Open the iTerm2 Settings
- Install the plugin
- Enable AI features
- Enter any API key (a value is required, but it is not validated locally)
- For the first test you can leave the AI Prompt unchanged
- Use the llama3:latest model
- Paste the URL copied from LM Studio and append /v1/chat/completions
The final URL is then http://localhost:11434/v1/chat/completions

Close the Settings window.
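
Before trying it inside iTerm2, you can send the same kind of request by hand to confirm the endpoint works end to end. A minimal sketch, again Python standard library only; the model name and dummy API key mirror the settings above and are assumptions, since the local server does not check them:

```python
import json
import urllib.request

URL = "http://localhost:11434/v1/chat/completions"

# The payload follows the OpenAI chat completions schema.
# Assumption: the model name matches what is loaded in LM Studio.
payload = {
    "model": "llama3:latest",
    "messages": [
        {"role": "user", "content": "Explain what a shell alias is in one sentence."}
    ],
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Any value works here: a key is required by clients,
        # but the local server does not validate it.
        "Authorization": "Bearer dummy-key",
    },
)

with urllib.request.urlopen(request, timeout=60) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```

A printed answer means iTerm2 will be talking to exactly this endpoint.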
Action
- Press Command-Y in your iTerm2 session
- Type your question into the window and press Shift-Enter to ask your LLM:

Now you can use your locally running LLM, even with your network adapter switched off 🙂
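
For longer answers, the same endpoint also supports streaming, which is closer to the token-by-token output you see interactively. A hedged sketch, assuming the setup above: it sets "stream": true and reads the server-sent-event lines as they arrive:

```python
import json
import urllib.request

URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3:latest",  # assumption: the model loaded in LM Studio
    "messages": [{"role": "user", "content": "Write a haiku about terminals."}],
    "stream": True,  # ask for incremental server-sent events
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer dummy-key"},
)

with urllib.request.urlopen(request, timeout=120) as resp:
    for raw in resp:
        line = raw.decode("utf-8").strip()
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # end-of-stream marker in the SSE protocol
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            print(delta, end="", flush=True)
print()
```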