Run DeepSeek LLM locally on your M series Mac with LM Studio and integrate iTerm2

With the integration of LM Studio and iTerm2, powered by the DeepSeek LLM, developers can streamline their workflows.
This setup enhances coding efficiency while keeping complete control over your data.

Running DeepSeek LLM locally offers several benefits:

  1. Customization: You have full control over the model and can fine-tune it to better suit your specific needs and preferences.
  2. Offline Access: You can use the model even without an internet connection, making it more reliable in various situations.
  3. Cost Efficiency: Avoiding cloud service fees can be more economical, especially for extensive or long-term use.

These advantages make running DeepSeek LLM locally a powerful option for developers and users who value customization, reliability, and privacy.

The following steps show how to integrate LM Studio with iTerm2.

LM Studio

Download your preferred LLM and load the model:

  1. Jump to the Developer screen
  2. Open Settings and set the Server Port to: 11434
  3. Start the Engine

The screen now shows a running service:

Click the copy button and close the page.

iTerm2

Open the Settings of iTerm2

  1. Install the plugin
  2. Enable AI features
  3. Enter any API key (an entry is required but is not validated locally)
  4. For a first test you can leave the AI prompt unchanged
  5. Use the llama3:latest model
  6. Paste the URL copied from LM Studio and append /v1/chat/completions

    The final URL is then
    http://localhost:11434/v1/chat/completions

Close the Settings window.
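Before testing inside iTerm2, you can verify the endpoint with curl. This is a minimal sketch, assuming LM Studio's server is running on port 11434; the model name "deepseek-r1" is an assumption, so use whatever identifier LM Studio shows for your loaded model:

```shell
# Build an OpenAI-compatible chat request body.
# NOTE: "deepseek-r1" is a placeholder model name; replace it with the
# identifier LM Studio displays for your loaded model.
payload='{
  "model": "deepseek-r1",
  "messages": [{"role": "user", "content": "Say hello in one word."}]
}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it to the local server (requires LM Studio's server to be running).
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "LM Studio server not reachable"
```

If the server is running, the response is a JSON object containing the model's answer.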

Action

- Press Command-Y in your iTerm2 session
- Type your question into the window and press Shift-Enter to ask your LLM:

Now you can use your locally running LLM, even when you switch off your network adapter 🙂

Automate Your Cloud Backups: rclone and Duplicati

In today’s digital age, safeguarding your data is more crucial than ever. With the increasing reliance on cloud storage, it’s essential to have a robust backup strategy in place. This blog post will guide you through automating your cloud backups (OneDrive in this example) using rclone and Duplicati on a Linux system (in my case Ubuntu 24.04.1 LTS).

Why rclone and Duplicati?

  • rclone: A versatile command-line tool (inspired by rsync) that supports various cloud storage providers, including OneDrive. It allows you to sync, copy, and mount cloud storage as if it were a local filesystem.
  • Duplicati: An open-source backup solution that offers incremental backups, encryption, and scheduling. It’s designed to work efficiently with cloud storage, making it an ideal choice for automated backups.

We’ll use rclone to mount your OneDrive folder as a local directory seamlessly. This setup allows Duplicati to perform smart incremental backups, ensuring your data is securely backed up without unnecessary duplication. In this guide, I’ll walk you through the steps to set up rclone and Duplicati, making sure your cloud storage is backed up efficiently and securely. Let’s get started!

Install rclone

On Ubuntu, install rclone from the package repository:

run apt install rclone

Alternatively, rclone provides an installation script (curl https://rclone.org/install.sh | sudo bash) that works on most Unix-like systems, including Linux and macOS. For Windows, you can download the executable from the rclone website.

Install Duplicati

The install-process of Duplicati is already explained here.

Onedrive homework

By default, rclone uses a shared client ID and key when communicating with OneDrive, unless a custom client_id is specified in the configuration. This means that all rclone users share the same default client ID for their requests, which is anything but optimal; throttling frequently occurs.

Recommended step: Create unique Client ID for Onedrive personal

Click New registration on https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and follow the steps outlined on the rclone page.
My screenshots for this step are attached (for reference; zoom in to make the text readable in your browser).

Setup rclone

This section guides you through configuring rclone to mount your OneDrive folder, so that the mount point can serve as the source for Duplicati backups.

run rclone config and answer the questions

Example output (Ubuntu 24.04)

For Use web browser to automatically authenticate rclone with remote?:
Choose “Yes” if your host supports a GUI.
In my case I had to answer this question with no and jump to a GUI-equipped host running the same rclone version to generate the needed OneDrive token with the command: rclone authorize "onedrive"

Now we can mount the OneDrive storage folder at a local mount point.
In this example I use /mnt/onedrive as the mount point (the folder /mnt/onedrive must exist before executing the mount command):

rclone mount onedrive:/ /mnt/onedrive

Let’s create an rclone service to mount the OneDrive folder at startup:

vi /etc/systemd/system/rclonemount.service
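The unit file contents were shown as a screenshot in the original post. As a hedged sketch, such a unit might look like the following; the remote name onedrive, the rclone binary path, and the --vfs-cache-mode flag are assumptions, so adjust them to your own configuration:

```ini
[Unit]
Description=Mount OneDrive via rclone
AssertPathIsDirectory=/mnt/onedrive
After=network-online.target
Wants=network-online.target

[Service]
# Type=notify works because rclone signals systemd when the mount is ready.
Type=notify
ExecStart=/usr/bin/rclone mount onedrive:/ /mnt/onedrive \
    --config=/root/.config/rclone/rclone.conf \
    --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /mnt/onedrive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Run systemctl daemon-reload after creating or editing the file so systemd picks up the changes.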

Start and test the created rclonemount.service:

systemctl start rclonemount

Enable rclonemount.service to run at startup:

systemctl enable rclonemount

With Duplicati, we can now create a new backup job using the source directory /mnt/onedrive, or any specific subfolder such as /mnt/onedrive/important_data.

OneDrive can now be backed up fully automatically with a smart backup solution 🙂

As we wrap up our journey with rclone, it’s clear that this powerful tool can significantly streamline your data management tasks. Whether you’re syncing files across multiple cloud services, automating backups, or simply exploring new ways to enhance your workflow, rclone offers a versatile and reliable solution.

Remember, the key to mastering rclone, or any tool, is practice and experimentation. Don’t hesitate to dive into the documentation, explore the various commands, and tailor rclone to fit your unique needs. The possibilities are vast, and the more you experiment, the more you’ll discover the true potential of this remarkable tool.

SSH Security Made Easy: An Introduction to ssh-audit

ssh-audit is a powerful tool designed to help you assess the security of your SSH servers (and clients!). It provides detailed information about the server’s configuration, supported algorithms, and potential vulnerabilities. In this guide, I’ll walk you through the steps to install ssh-audit and run your first security tests. Secure SSH configuration made easy.

Installation on Linux

  1. Clone the Repository: Open your terminal and clone the ssh-audit repository from GitHub:
    git clone https://github.com/jtesta/ssh-audit.git
  2. Navigate to the Directory: Change to the ssh-audit directory:
    cd ssh-audit
  3. Install Dependencies: Ensure you have Python installed on your system. If not, install it using your package manager. For example, on Ubuntu:
    sudo apt-get install python3

Installation on macOS

To install ssh-audit, run:
brew install ssh-audit
(You already have Homebrew installed, right?)

Please check the ssh-audit page for many other setup options (Docker, Windows, etc.).

Test the SSH-Server against vulnerabilities

Execute ssh-audit <hostname>
Replace <hostname> with the IP address or domain name of the SSH server you want to audit.

Example of Ubuntu’s 24.04 LTS default SSHD setup:

(If you add the -l warn switch, only the vulnerabilities are shown.)

Interpreting the Results: ssh-audit will provide a detailed report of the server’s configuration, including supported key exchange algorithms, encryption ciphers, and MAC algorithms. Look for any warnings or recommendations to improve your server’s security.

Remediation

After running ssh-audit and identifying potential vulnerabilities or weak configurations in your SSH server, it’s important to take steps to remediate these issues. Below are examples of how to apply them:

Example for Ubuntu 24.04.1 LTS:

(Note: This is just an example. It eliminates known weaknesses for the SSH daemon, but it may well be that this snippet does not fit your setup. Handle with care.)

This snippet creates a configuration file (51-ssh-harden_202412.conf) in the directory /etc/ssh/sshd_config.d/ with the specified settings to enhance the security of your SSH server.
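The original snippet was included as an image. As a hedged sketch, a hardening file of this kind might look like the following; these algorithm lists are generally rated as safe by current ssh-audit releases for OpenSSH 9.x, but verify them against your own audit output before deploying:

```
# /etc/ssh/sshd_config.d/51-ssh-harden_202412.conf
# Restrict key exchange, ciphers, and MACs to algorithms ssh-audit rates as safe.
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
HostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256
```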

(SSHD restart required)



Example for RHEL 9.4

(Note: This is just an example. It eliminates known weaknesses for the SSH daemon, but it may well be that this snippet does not fit your setup. Handle with care.)

(SSHD restart required)
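On RHEL, OpenSSH follows the system-wide crypto policy, so one hedged alternative to editing sshd_config directly is to tighten that policy with update-crypto-policies (whether FUTURE fits your environment is something to test, since it also affects other services):

```shell
# Show the active system-wide crypto policy (DEFAULT on a fresh RHEL 9).
update-crypto-policies --show

# Switch to the stricter FUTURE policy, which drops SHA-1 and small DH groups.
update-crypto-policies --set FUTURE

# sshd picks up the new policy after a restart.
systemctl restart sshd
```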

Verify the remediation

Run ssh-audit again!

Example-output after remediation:

How can I test whether my SSH client is vulnerable?

If you run ssh-audit with the -c switch, it creates an SSH service on port 2222 and audits every connection attempt:

Output after the login attempt (ssh 127.0.0.1 -p 2222):


Make your SSH communication more secure; otherwise the SSH service opens an attack surface for uninvited visitors.
Secure SSH configuration is key!

Consider additional security steps like:
Secure your SSH communication with certificates
Lab setup: Secure your SSH communication with certificates
Fail2Ban: ban hosts that cause multiple authentication errors



Do you use nftables or iptables (or both)?

Most major Linux distributions have adopted nftables as their default firewall framework, often using it under the hood for iptables commands. Here are some of the key distributions that support nftables:

  1. Debian: Starting with Debian Buster, nftables is the default backend for iptables.
  2. Ubuntu: From Ubuntu 20.10 (Groovy Gorilla) onwards, nftables is included and can be used as the default firewall framework.
  3. Fedora: Fedora has integrated nftables and uses it as the default firewall framework.
  4. Arch Linux: Arch Linux includes nftables and provides packages for easy installation and configuration.
  5. Red Hat Enterprise Linux (RHEL): RHEL 8 and later versions use nftables as the default packet filtering framework.

Let’s examine a freshly installed Ubuntu 24.04 LTS on a Raspberry Pi:

What is iptables -V telling me?

The system does not use the legacy iptables framework; instead it uses the nf_tables version of iptables, which provides a bridge to the nftables infrastructure.

To complete the picture, we check the symbolic link of iptables:

The iptables-nft ruleset appears in the rule listing of nftables.
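The checks above can be sketched in the shell. The version-banner strings "(nf_tables)" and "(legacy)" are what iptables v1.8+ actually prints; the classify_backend helper is my own illustration, not a standard tool:

```shell
# On a live system you would run:
#   iptables -V                          -> e.g. "iptables v1.8.10 (nf_tables)"
#   readlink -f "$(command -v iptables)" -> resolves the alternatives symlink

# classify_backend() maps an `iptables -V` banner to its backend name.
classify_backend() {
  case "$1" in
    *"(nf_tables)"*) echo "nf_tables" ;;
    *"(legacy)"*)    echo "legacy" ;;
    *)               echo "unknown" ;;
  esac
}

classify_backend "iptables v1.8.10 (nf_tables)"   # -> nf_tables
```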

Are iptables-nft and nftables the same, then? No, but they share the nftables infrastructure.

Here’s how they work together:

Compatibility Layer

  • iptables-nft: This is a variant of iptables that uses the nftables kernel API. When you use iptables commands, they are translated into nftables rules by this compatibility layer. This allows you to continue using familiar iptables commands while benefiting from the advanced features of nftables.
  • iptables-legacy: This is the traditional iptables that directly interacts with the kernel’s iptables API. If you use iptables-legacy, it operates independently of nftables and does not translate rules into nftables format.

Interaction

  • Rule Management: When you use iptables-nft, the rules you create are managed by nftables under the hood. This means that nftables takes precedence, and the rules are stored in the nftables ruleset.
  • Kernel API: Both iptables-nft and nftables use the same kernel API for packet filtering. This ensures that packet matching and filtering behavior is consistent, regardless of which tool you use to create the rules.
  • Coexistence: If you use both iptables-legacy and nftables, they can coexist, but it’s generally recommended to stick with one framework to avoid conflicts and ensure consistency.

Best Practices

  • Transition to nftables: If you’re starting fresh or looking to modernize your firewall management, transitioning to nftables is recommended. It offers better performance, more features, and a simpler syntax.
  • Use iptables-nft: If you prefer iptables commands, use the iptables-nft variant to take advantage of nftables’ capabilities while keeping the familiar iptables syntax.

By understanding how iptables and nftables interact, you can make informed decisions about managing your firewall rules and ensure a smooth transition to nftables.
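To give a feel for the nftables syntax, here is a hedged sketch of a minimal /etc/nftables.conf: a drop-by-default input policy that still allows SSH. Review it carefully before loading it on a remote host, since a mistake here can lock you out:

```
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept   # keep existing connections alive
        iif "lo" accept                       # allow loopback traffic
        tcp dport 22 accept                   # allow SSH
        icmp type echo-request accept         # allow IPv4 ping
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-router-advert } accept
    }
}
```

Load it with nft -f /etc/nftables.conf, or enable nftables.service to apply it at boot.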

Check out the official nftables wiki: http://wiki.nftables.org/