With LM Studio and iTerm2 integrated, and the DeepSeek LLM running locally, developers can streamline their workflows. This setup improves coding efficiency while keeping you in complete control of your data.

Running DeepSeek LLM locally offers several benefits:

  1. Customization: You have full control over the model and can fine-tune it to better suit your specific needs and preferences.
  2. Offline Access: You can use the model even without an internet connection, making it more reliable in various situations.
  3. Cost Efficiency: Avoiding cloud service fees can be more economical, especially for extensive or long-term use.

These advantages make running DeepSeek LLM locally a powerful option for developers and users who prioritize privacy.

The following steps show how to integrate LM Studio with iTerm2.

LM Studio

Download your preferred LLM and load the model:

  1. Jump to the Developer screen
  2. Open Settings and set the Server Port to 11434
  3. Start the Engine

The screen now shows the running service:

Click the copy button (this copies the local server URL) and close the page.
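
If you want to verify the server before configuring iTerm2, you can query it directly. The sketch below is a minimal check using Python's standard library; it assumes the Server Port was set to 11434 as described above and uses LM Studio's OpenAI-compatible /v1/models endpoint.

    # Optional check: ask the local LM Studio server which models it exposes.
    # Assumes the Server Port was set to 11434 as described above.
    import json
    import urllib.request

    url = "http://localhost:11434/v1/models"

    with urllib.request.urlopen(url, timeout=5) as response:
        data = json.load(response)

    # Print the identifier of every model the server reports.
    for model in data.get("data", []):
        print(model.get("id"))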

iTerm2

Open the iTerm2 Settings:

  1. Install the plugin
  2. Enable AI features
  3. Enter any API key (a value is required, but it is not checked by the local server)
  4. For the first test you can leave the AI Prompt unchanged
  5. Use llama3:latest as the Model
  6. Paste the URL copied from LM Studio and append /v1/chat/completions (see the request sketch below)

    The final URL is then
    http://localhost:11434/v1/chat/completions

Close the Settings window.
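
For reference, iTerm2 talks to this URL with standard OpenAI-style chat-completion requests. The sketch below sends such a request by hand using Python's standard library, which can help with troubleshooting the connection; the model name and question are placeholders (if the server rejects the model name, adjust it to match the model shown in LM Studio).

    # Send a minimal OpenAI-style chat completion request to the local server,
    # i.e. the same endpoint iTerm2 was just configured to use.
    import json
    import urllib.request

    url = "http://localhost:11434/v1/chat/completions"
    payload = {
        "model": "llama3:latest",  # placeholder; adjust to the model loaded in LM Studio if needed
        "messages": [
            {"role": "user", "content": "Show me the command to list all files in a folder."}
        ],
    }

    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer any-key",  # a key is required, but not checked locally
        },
    )

    with urllib.request.urlopen(request, timeout=60) as response:
        answer = json.load(response)

    # Print the text of the assistant's reply.
    print(answer["choices"][0]["message"]["content"])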

Action

  1. Press Command-Y in your iTerm2 session
  2. Type your question into the window and press Shift-Enter to ask your LLM:

Now you can use your locally running LLM, even when you switch off your network adapter 🙂