F# for Linux People




Originally published 2021-12-16 on my blog at carpenoctem.dev



Introduction

Everything you need to start hacking F# on Linux!






Why this page exists

People learning the F# language today are blessed with excellent books, blogs, quality official online documentation, and other resources. However, these resources tend to assume that the student is either using Windows, familiar with .NET development with C#, or using a particular IDE/Editor.

Often, something that “just works” on Windows with Visual Studio may take some creativity to get working on Linux. Sometimes (though not often) it doesn’t work at all.

The goal of this article is to fill that gap by documenting my own experience of learning F# as a Linux-centric developer who has not programmed on Windows or .NET for 15+ years. It will not cover the language itself, but rather the tooling, ecosystem and things that confused me along the way.



Installation & Initial Configuration

To install F#, you need to install the official .NET SDK from Microsoft, which includes F#. Don’t worry, it is open source under the MIT license, and it runs beautifully on Linux.

Thankfully, installing the .NET SDK is trivial: Microsoft maintains official package repositories for Ubuntu, Debian, CentOS/RHEL, Fedora, OpenSUSE, SLES, and Alpine. Arch Linux has a community-maintained package. There is also a manual installer for distros not listed here. Furthermore, note that if you installed VS Code from Microsoft’s repository, you likely already have the correct repository configured.

They even support ARM, so get your Raspberry Pis ready!

Once you have configured one of these official repositories, you’ll need to install a package named dotnet-sdk-6.0 or similar. On Ubuntu, it’s just:

sudo apt install dotnet-sdk-6.0

There is also dotnet-runtime-6.0, which allows you to run .NET applications but not build them. Useful for servers and Docker images. (There is also a way to build standalone binaries which do not even require the runtime. See the Standalone Executables section below.)

That’s it! You should have the dotnet command line tool installed on your system. You won’t need to run any other sudo commands.

The dotnet tool is your one-stop-shop for managing your .NET installation, installing packages, creating projects, and so on. It is similar to npm for node. However, dotnet handles multiple versions of the SDK and runtime seamlessly, so you do not need a separate ‘version manager’ like nvm, rvm, perlbrew, or virtualenv.
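
For example, the same tool will tell you exactly what is installed (output varies by machine, of course):

dotnet --version        # SDK version in use in the current directory
dotnet --list-sdks      # every SDK installed side by side
dotnet --list-runtimes  # every runtime installed side by side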



After Installation

After installation, put something similar to this in your .bash_profile, .zshrc, or other shell initialization file:

export DOTNET_CLI_TELEMETRY_OPTOUT=1

if [ -d "$HOME/.dotnet/tools" ]; then
    export PATH=$HOME/.dotnet/tools:$PATH
fi

The first line prevents the dotnet command line tool from sending Microsoft anonymized usage information. No, it is not cool that this is opt-out instead of opt-in, but at least it is supposedly anonymized, and not hidden or obfuscated.

The rest sets up your path to include the ~/.dotnet/tools directory, where various tools you install via dotnet tool install are located. More on this later.



What About Mono?

No doubt you’ve heard of the open source implementation of the .NET Framework started by Miguel de Icaza in 2004.

Mono still exists and is not deprecated. In fact, Mono is used by the Unity gaming engine. Xamarin, the .NET-based platform for developing iOS and Android applications, also uses Mono (although they may be switching to the official .NET soon). Mono will also likely be used indefinitely by pre-existing free software such as Tomboy.

However, you should use the official .NET SDK from Microsoft for F#. The official .NET SDK is more complete and up-to-date, especially for F# developers. Furthermore, the official SDK dominates F# developer mindshare, meaning that third party F# libraries will likely be written for the official SDK.



.NET Versions

If you just want to start hacking F#, all you need to know is:

.NET 6 is the current version of the .NET platform, and F# 6 is the current version of the F# language.

However, eventually you will want to know some of the history of .NET, because libraries and projects you find online will target various older versions, and you need to understand what’s going on. Come back to this section when you do.

Here is a brief and probably wrong history of .NET versions.

In the beginning, 2001 to be specific, there was .NET Framework (yes, ‘Framework’ is part of the name). It was proprietary and Windows-only, and remains so to this day, though some parts were open sourced.

In 2014, Microsoft released .NET Core as a separate, alternative implementation of .NET. It was cross-platform and open source under the MIT license. It proved immensely popular and revitalized interest in .NET. There were several versions of .NET Core, with 3.1 released in December 2019.

Around this time, the decision was made to consolidate .NET Core and .NET Framework. In November 2020, .NET Core was renamed .NET, and MS announced .NET Framework would no longer be developed. The first version of .NET was 5, not 4, to avoid confusion with the existing .NET Framework 4.x.

(Yes, it’s just “.NET”, with no suffix or prefix. This has made it difficult to differentiate whether one is talking about .NET in general (which may include Framework), or more specifically the recent releases from Microsoft 🤷🏼).

And that’s how you end up with “.NET 6”, the current version.

Minor Caveat 1. Although .NET Framework is no longer actively developed and version 4.8 will be its final version, it will continue to exist indefinitely because the last versions are installed by default on Windows 10 and various versions of Windows Server. You may encounter older code, or code written by Windows-only developers, targeting these versions.

Minor Caveat 2. There’s also something called .NET Standard. Unlike the others, .NET Standard is merely a specification, and not a software package you can download and install. It seems to be an earlier attempt to unify the different frameworks. Specifically, if you build a .NET library that targets .NET Standard, it will run on .NET Framework, .NET Core, and .NET. With the consolidation of the various versions, the .NET Standard specification was deprecated. However, if you find a project targeting .NET Standard, it should work on current versions unmodified.



Projects and Solutions

In .NET, a Project is basically a compilable unit of source code. An executable console application Project might be created with:

dotnet new console -lang 'F#' -o YourFirstApp

And a library might be created with:

dotnet new classlib -lang "F#" -o MyFirstLib

However, in the world of .NET, there is a higher level of organization called the Solution. Solutions contain Projects, and Projects within Solutions can reference each other. This makes it easy to share libraries between different executables. Also, in .NET, your tests should exist as a separate project.

Here’s an example of creating a Solution with a console application referencing a library:

# Create the solution
dotnet new sln -o MySolution
cd MySolution

dotnet new classlib -lang "F#" -o src/MyLib
dotnet new console -lang "F#" -o src/App

# Adding projects to a solution
dotnet sln add src/MyLib/MyLib.fsproj
dotnet sln add src/App/App.fsproj

# Reference the library from the console app
dotnet add src/App/App.fsproj reference src/MyLib/MyLib.fsproj

dotnet run --project src/App



Slow Startup Time

If you are accustomed to interpreted languages such as Python, you will notice that dotnet run seems very slow… a simple Hello World application can take over 2 seconds to launch! Don’t worry, compiled applications start up much quicker, but it is quite annoying during development.

Unfortunately, there is no way to reduce startup time significantly.

Two possible remedies are:

  • Use dotnet watch run so that the application is re-run every time you save a change to a source file (see the example below).
  • Do your experimental coding in FSI.
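
For the first option, a minimal example using the solution layout from the previous section:

cd src/App
dotnet watch run   # rebuilds and restarts the app on every file save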



Tools

The dotnet CLI tool can be used to install various tools. You can either install them globally (in ~/.dotnet/tools) or locally, within a project or solution. Global installations are more convenient during development (less typing), but local installations make more sense when you are using CI/CD. It is OK to have a tool installed both locally and globally.

Regardless, one tool you’ll definitely want to configure is the Paket package management software:

dotnet tool install paket --global

Assuming you added ~/.dotnet/tools to your $PATH as mentioned above, you should be able to run paket now in your shell.

Installing locally in a solution or project involves an extra step:

dotnet new tool-manifest
dotnet tool install paket

To run the locally installed tool, you’ll need to run it as dotnet paket. There is no need to mess with your $PATH in this case. Be sure to also add the newly generated manifest file .config/dotnet-tools.json to git.
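
A few other dotnet tool subcommands you will use regularly, whichever style you pick:

dotnet tool list --global          # tools installed in ~/.dotnet/tools
dotnet tool list                   # tools in the local manifest
dotnet tool update paket --global  # update a globally installed tool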



Package Management

.NET has a public repository of packages called NuGet. It is akin to pip for Python, npm for Node.js, CPAN for Perl, etc.

NuGet packages are installed at the Project level (as opposed to the Solution level) with a command like:

dotnet add package Giraffe 

Then you can reference any module/namespace provided by the package with open:

open Giraffe

One important note, from an open source perspective: unlike repositories for other languages, packages on NuGet may not be FOSS. For example, IronPDF is completely closed-source and proprietary, yet it is distributed via NuGet. Therefore, please check out the package’s license carefully before using a random package off of NuGet!



Paket

Paket is an alternative dependency manager for .NET, written in F#. It can use NuGet packages, as well as point directly to Github repos and URLs. See their FAQ for why you may want to use Paket over the native package management built into dotnet.

Note: All examples in this section will assume you’ve installed Paket globally (see the Tools section). If you want to use a local paket, change all calls of paket below to dotnet paket.

Paket can be configured at the Solution level or the Project level. Let’s start with a solution:

dotnet new sln -o PaketTest1
cd PaketTest1
dotnet new console -o App1 -lang 'F#'
dotnet sln add App1/App1.fsproj
paket init 

paket init creates a paket.dependencies file (which you should add to your git repo). After initialization, the first thing you must do is open paket.dependencies and fix the framework line to point to the correct version, if necessary. For example, with Paket 6.2.1, the init command creates the following:

source https://api.nuget.org/v3/index.json

storage: none
framework: net5.0, netstandard2.0, netstandard2.1 # WRONG! 

You need to change the framework line to net6.0:

source https://api.nuget.org/v3/index.json

storage: none
framework: net6.0 # CORRECTED

I have no idea why Paket does not specify the correct framework by default. It might be that Paket had not been updated for .NET 6.0 at the time of writing, though I had similar problems with .NET 5.0.

After fixing the dependencies file, you must install FSharp.Core, which contains the standard library for F#:

paket add FSharp.Core

FSharp.Core is not installed by default because Paket can be used for C# applications as well. You can install any other NuGet package with the add subcommand, shown below with an optional version number:

paket add Suave --version 2.5.6

After installing these packages, the paket.dependencies file will look like:

source https://api.nuget.org/v3/index.json

storage: none
framework: net6.0
nuget FSharp.Core
nuget Suave 2.5.6

You can edit this file directly, but make sure to run paket update so Paket picks up your changes. You will also notice a few new files:

  • paket.lock contains the dependency tree as discussed earlier.
  • Within Projects will be a new paket.references file. This is a simple text file containing a list of packages used by that project. If you change this file, you will need to run paket update to propagate the changes to your .fsproj file.

Paket directly in Projects. Paket can also be initialized in a bare Project, without a solution:

dotnet new console -o PaketTest2 -lang 'F#'
cd PaketTest2
paket init
### FIX paket.dependencies as described above
paket add FSharp.Core
paket add Suave
dotnet run 

In this case, the dependencies, lock, and reference files will all be created in the same directory.



FSI – F# Interactive

F# comes with a REPL called FSI or F# interactive, which can be launched with dotnet fsi.

$ dotnet fsi                                                                             

Microsoft (R) F# Interactive version 12.0.0.0 for F# 6.0
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> printfn "Hello, %s" "World";;

The official documentation is adequate, so I won’t go into much more detail here.

A couple things to know:

  • F# scripts should have the fsx file extension.
  • The #load "file.fsx" syntax allows you to load other fsx files.
  • The #r "..." syntax allows you to load packages.
  • ;; is used to terminate statements, or groups of statements.
  • You can use the shebang #!/usr/bin/env -S dotnet fsi and run it like any other script on your system (see the example after this list).
  • Ctrl-D quits the REPL.
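
Putting the last few points together, here is a hypothetical hello.fsx (the file name and message are just examples):

$ cat hello.fsx
#!/usr/bin/env -S dotnet fsi
printfn "Hello from a script"

$ chmod +x hello.fsx
$ ./hello.fsx
Hello from a script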


Using NuGet with FSI

NuGet packages can be loaded during an FSI session like this:

#r "nuget: Suave";;

// and then use it as usual:
open Suave;; 


Using Paket with FSI

For the same reason you may want to use Paket in regular F# code (for example, version consistency across multiple scripts), you may want to use it within FSX scripts. To be honest, this was not easy to figure out on Linux, and in fact, my trouble getting Paket working on Linux is what prompted me to write this entire article.

First, you need to get the package FSharp.DependencyManager.Paket onto your system. The easiest way to do that is in FSI:

#r "nuget: FSharp.DependencyManager.Paket";;

Now, there will be a cached copy of the package in the ~/.nuget/packages directory. We need to pass this to the --compilertool option of fsi (you will need to adjust it for your unix username and version of paket):

dotnet fsi --compilertool:"/home/YOURUSERNAME/.nuget/packages/fsharp.dependencymanager.paket/6.2.1/lib/netstandard2.0"

I recommend having an alias like below, and updating it whenever you update paket:

alias fsi='dotnet fsi --compilertool:"/home/YOURUSERNAME/.nuget/packages/fsharp.dependencymanager.paket/6.2.1/lib/netstandard2.0"'

Now, if you run FSI within a Solution or Project, you will be able to load the package according to the versions in paket.lock, assuring version consistency between multiple Projects and FSX scripts:

#r "paket: nuget Suave";; 

Warning – There is a bug that prevents multiple users on the same machine from loading NuGet and Paket packages in this way.
The cause of this bug is that the packages are stored in /tmp/nuget and /tmp/script-packages with permission 775, preventing other users (not in the same group) from creating new subdirectories. To work around this, simply remove these directories (or fix their permissions) when switching users.



Standalone Executables

F# applications can be compiled into standalone, self-contained binaries. They can be distributed just like statically compiled applications written in C, Rust, or Go. Of course, these binaries will tend to be large because they include the .NET runtime (a Hello World application comes in at around 65M). This may be a consideration if the target environment is constrained (maybe an embedded device) or if you want to create dozens of individual applications.

In order to build self-contained applications, add the lines marked below to your .fsproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <!-- FROM HERE.... -->
    <PublishSingleFile>true</PublishSingleFile>
    <SelfContained>true</SelfContained>
    <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
    <PublishReadyToRun>true</PublishReadyToRun>
    <!-- TO HERE -->
  </PropertyGroup>
</Project>

And then run:

dotnet publish

Your binary will be available in bin/Debug/net5.0/linux-x64/publish/.
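
For a binary you actually distribute, you will probably want a Release build rather than the default Debug configuration:

dotnet publish -c Release
# output lands in bin/Release/<framework>/linux-x64/publish/ instead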

For more information, see here



Templates

The dotnet CLI tool uses Templates to initialize new projects and other components. They are akin to project scaffolding systems like create-react-app for React.

To see a list of templates installed on your machine, run dotnet new --list:

Template Name                                 Short Name           Language    Tags                                               
--------------------------------------------  -------------------  ----------  ---------------------------------------------------
Console Application                           console              [C#],F#,VB  Common/Console                                     
Class library                                 classlib             [C#],F#,VB  Common/Library                                     
Gtk Application                               gtkapp               [C#]        Gtk/GUI App                                        
Gtk Dialog                                    gtkdialog            [C#]        Gtk/UI                                             
Gtk Widget                                    gtkwidget            [C#]        Gtk/UI                                             
Gtk Window                                    gtkwindow            [C#]        Gtk/UI                                             
MSTest Test Project                           mstest               [C#],F#,VB  Test/MSTest                                        
NUnit 3 Test Item                             nunit-test           [C#],F#,VB  Test/NUnit                                         
NUnit 3 Test Project                          nunit                [C#],F#,VB  Test/NUnit                                         
xUnit Test Project                            xunit                [C#],F#,VB  Test/xUnit                                         
MVC ViewImports                               viewimports          [C#]        Web/ASP.NET                                        
Razor Component                               razorcomponent       [C#]        Web/ASP.NET                                        
MVC ViewStart                                 viewstart            [C#]        Web/ASP.NET                                        
Razor Page                                    page                 [C#]        Web/ASP.NET                                        
Blazor Server App                             blazorserver         [C#]        Web/Blazor                                         
Blazor WebAssembly App                        blazorwasm           [C#]        Web/Blazor/WebAssembly                             
ASP.NET Core Empty                            web                  [C#],F#     Web/Empty                                          
ASP.NET Core Web App (Model-View-Controller)  mvc                  [C#],F#     Web/MVC                                            
ASP.NET Core Web App                          webapp               [C#]        Web/MVC/Razor Pages                                
Razor Class Library                           razorclasslib        [C#]        Web/Razor/Library                                  
... and a whole lot more

The “Short Name” is what you pass to dotnet new. The Language column shows the template’s default language (in brackets) and which other languages are available. To create an F# class library, therefore, you would need to run dotnet new classlib --lang 'F#' -o MyClassLib, because otherwise it would default to C#.

Also note that most of the templates that come by default are for C# only. That’s OK, since the core use cases (console, classlib, and testing) are covered, and the F# community has developed templates for other use cases.

In order to install, say, the Expecto testing framework template, you would run:

dotnet new -i "Expecto.Template::*"

The template list (dotnet new --list) will show you that the Short Name for this template is unsurprisingly ‘expecto’, so you would run something like the following to create your testing project:

dotnet new expecto -o AwesomeTestProj

Note that you do not need to specify --lang 'F#' here because F# is the default (and only) language for this template.

When you install templates using --install, they are placed in ~/.templateengine.

Under the hood, Templates are just specially tagged NuGet packages. To find out the underlying package for a Template, run dotnet new -u.
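
Run without arguments, it lists the installed template packages; pass a package name to uninstall one. For example, assuming the Expecto template from above:

dotnet new -u                      # list installed template packages
dotnet new -u "Expecto.Template"   # uninstall one of them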



Git

The following lines may be useful in your .gitignore:

[Dd]ebug/
[Rr]elease/
x64/
[Bb]in/
[Oo]bj/
.paket/
paket-files/




Vim

If you are a Vim/NeoVim user, you will be happy to know that F# support is surprisingly good. With the help of the F# language server plugin Ionide-Vim, you get access to contextual code completion (called ‘IntelliSense’ in Visual Studio), diagnostics, and much more.

On NeoVim, the built-in LSP client works without modification. On Vim, you will need LanguageClient-neovim.



Visual Studio Code

Visual Studio Code, unsurprisingly, has excellent support for F# through Ionide. Simply press Ctrl-Shift-X to open the extension management screen and install it.

You can also impress the kids by using FSI through a “Notebook” interface with the .NET Interactive Notebooks extension. After installation, press Ctrl-Shift-P and select “.NET Interactive: Create new blank notebook”, choose “Create as .ipynb”, then “F#”.

TODO: Figure out how to get VS Code to recognize Paket



GUI Development

Your only real choice for GUI development with F#/.NET on Linux is GTK#. (I’ve gotten feedback on Twitter that this statement may be a bit harsh, will update when I dive a bit deeper into other options).

Despite investing heavily in open source for .NET itself, Microsoft has never seriously supported GUI development on Linux. You can run old WinForms applications using Mono, and there are some efforts to run UWP applications on Linux. However, the development tools for these libraries are largely tied to Visual Studio and Windows.


Source link

I erroneously deleted my root partition. Now what?

Posted on Reddit’s r/archlinux and r/techsupport.

Is there a way I can reinstall Arch from the bootstick I still have?

I’ve been following this tutorial; the main difference between my system and the tutorial’s is that all of my partitions have “unknown filesystems”.

Here’s a log:


    error: disk `lvmid/MMwt86-jYqe-hUn1 ... zo7KQw' not found.
    Entering rescue mode...
    grub rescue> ls
    (hd0) (hd1) (hd1,gpt2) (hd1,gpt1)
    grub rescue> ls (hd0)
    (hd0): File-system is unknown.
    grub rescue> set
    cmdpath=(hd1,gpt1)/EFI/grub_uefi
    prefix=(lvmid/MMwt86-jYqe ... zo7KQw)/boot/grub
    root=lvmid/MMwt86-jYqe-hUn1-x1eI-sqCk-RoXD-NihFx4/49zeY2-wEvE-jsN6-2EqG-5TW4-CeLE-zo7KQe
    grub rescue>

And the same error I get for (hd0) applies to the other three ((hd1), (hd1,gpt2), and (hd1,gpt1)).

IMPORTANT NOTE:
(hd1,) is my bootstick. Removing it beforehand results in (hd0) (hd0,gpt2) (hd0,gpt1).


Source link

Optimize Your Webserver by Installing a Single NGINX Module

In 2012, Google released version 1.0 of their PageSpeed modules for NGINX and Apache. It has gone largely unnoticed since then. The short of PageSpeed is that if you add it to your web server, you can configure it to optimize anything passing through it using techniques such as minification, format conversion, and even injecting scripts to lazy-load images. You can read more about what it does on the official site.

It sounded great in theory, but how to properly install it with NGINX wasn’t obvious. While Google does publish scripts to help with the installation, doing it right requires a non-trivial depth of knowledge. After struggling with it for many hours, I wrote a guide for my own future reference.

I recently returned to those notes to entirely automate the process using GitHub Actions. The work is open-source and available on GitHub.



Installation

Run the following as root on a Debian-based machine:

sudo su
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8028BE1819F3E4A0
echo "deb https://nginx-pagespeed.knyz.org/dist/ /" > /etc/apt/sources.list.d/nginx-pagespeed.list
echo "Package: *" > /etc/apt/preferences.d/99nginx-pagespeed
echo "Pin: origin http://nginx-pagespeed.knyz.org/" >> /etc/apt/preferences.d/99nginx-pagespeed
echo "Pin-Priority: 900" >> /etc/apt/preferences.d/99nginx-pagespeed
apt update
apt install nginx-full # If NGINX is already installed, an `apt upgrade` works too
echo "pagespeed on;" > /etc/nginx/conf.d/pagespeed.conf
echo "pagespeed FileCachePath "/var/cache/pagespeed/";" >> /etc/nginx/conf.d/pagespeed.conf
echo "pagespeed FileCacheSizeKb 102400;" >> /etc/nginx/conf.d/pagespeed.conf
echo "pagespeed FileCacheCleanIntervalMs 3600000;" >> /etc/nginx/conf.d/pagespeed.conf
echo "pagespeed FileCacheInodeLimit 500000;" >> /etc/nginx/conf.d/pagespeed.conf
echo "pagespeed RewriteLevel CoreFilters;" >> /etc/nginx/conf.d/pagespeed.conf
systemctl reload nginx

The installation process is explained more thoroughly on the GitHub page if you’re curious.

Once that is done, you will have an active NGINX + PageSpeed installation that will receive the same updates as upstream NGINX. You can learn more about individual filters that you can enable in the documentation.
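
A quick way to confirm PageSpeed is actually active is to look for the X-Page-Speed response header it adds (substitute your own domain):

curl -sI https://example.com/ | grep -i x-page-speed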

This post was originally shared on my Building Better Software Slower blog


Source link

What am I doing wrong in the RClone `crypt remote` creation process?

So, I’m following along with this video on how to set up rclone, and I was making an encrypted version of my Google Drive remote (google_drive).
However, whenever I do ./rclone copy Test.txt google_drive_crypt, it creates a new directory inside the rclone folder called “google_drive” and puts the encoded folder in there. However, when the person in the video does it, it works perfectly fine, putting the file on Google Drive.
What am I doing wrong?

Here’s a (small) command log:

[calin@CalinArchLinuxPC rclone-v1.57.0-linux-386]$ ./rclone config
Current remotes:

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> google_drive
Storage> drive

# ...

name> google_drive_crypt
remote> google_drive

Thanks!
Cheers!


Source link

Perl module tests on Linux 32bit on Github Action

I’m creating SPVM, a Perl module.

I want to run the tests of SPVM on 32-bit Linux. I searched for a way to do this and looked at the GitHub Actions used by Perl itself.



Linux 32bit Github Action

I customized it. The resulting GitHub Actions workflow is linux-32bit.yml:

name: linux-32bit

on:
  push:
    branches:
      - '*'
    tags-ignore:
      - '*'
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: i386/ubuntu:latest
    steps:
      - name: install the Perl header, core modules, building tools
        run: |
          apt update
          apt install -y libperl-dev build-essential
      - uses: actions/checkout@v1
      - name: perl Makefile.PL
        run: perl Makefile.PL
      - name: make
        run: make
      - name: make disttest
        run: make disttest



Short Descriptions



the Docker Container Image

Use the docker container image “i386/ubuntu:latest”

    container:
      image: i386/ubuntu:latest



apt

Install the Perl headers, core modules, and build tools.

        run: |
          apt update
          apt install -y libperl-dev build-essential
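
If you want to try the same commands locally before pushing, one option is to run them in the same image with Docker (a rough sketch, assuming a recent Docker with --platform support):

docker run --rm -it --platform linux/386 -v "$PWD":/work -w /work i386/ubuntu:latest \
  bash -c "apt update && apt install -y libperl-dev build-essential && perl Makefile.PL && make && make disttest"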

Source link

Bring Your Home Network Anywhere For Free – Home VPN with WireGuard on Raspberry Pi + Pi-hole (Ubuntu Server 20.04 LTS)

In the previous blog post, I talked about setting up Ubuntu Server 20.04 LTS and Pi-hole DNS on Raspberry Pi. You can go through the process step by step following Block Ads, Tracking, and Telemetry With Pi-hole on Raspberry Pi (Ubuntu Server 20.04 LTS).

With Pi-hole set up on our home network, we get a much better internet browsing experience without ads and better control of the available resources. Also, maybe you have network-attached storage (NAS) on your network and want to access it from anywhere, or you just want a safe browsing experience when connected to public Wi-Fi.

The setup above is limited to your home network, and after a couple of days of browsing, you will think: why can’t I bring this network setup wherever I go!? Well, YOU CAN. The logical solution is to have a way to connect to our home network from anywhere and browse through it, even when you connect from the other side of the world.



Virtual Private Network

A Virtual Private Network (VPN) allows us to connect our devices to another network over the internet in a secure manner. We can then browse the internet using another computer’s (the server’s) internet connection.

I am sure you have come across internet ads for paid services like ExpressVPN, NordVPN, Surfshark, etc. They are awesome without a doubt: you can fake your device’s IP location and use geographically limited services like Netflix. But they won’t get you into your home network, and you have to pay for them. All VPNs use VPN protocols to create and secure your connection, so why shouldn’t you do the same for your own needs?



WireGuard or OpenVPN

The two most popular VPN protocols used today are WireGuard and OpenVPN. There is no specific reason why I chose one over the other, but it is said that WireGuard is much faster than OpenVPN, consumes around 15% less data, handles network changes better, and appears to be just as secure (I don’t know who said it).



WireGuard (or OpenVPN) on Raspberry Pi

We could go through the manual installation instructions for WireGuard, but there is a great tool, PiVPN, which allows us to install the desired VPN very easily.

Log in to your Raspberry Pi directly or via Secure Shell (SSH), and run:

curl -L https://install.pivpn.io | bash

The process will use sudo and install the necessary dependencies. Just wait for it to do its job. After installing the necessary packages, you will be prompted with graphical options:
We previously talked about setting up a static IP address on Ubuntu Server 20.04. PiVPN won’t configure static IP for us because we are not using Raspbian OS for our Raspberry Pi.
Just accept default options, and be sure to select the WireGuard option when prompted.
You can change the default WireGuard port if necessary, but keep in mind that you will need it later, so make sure you remember it (I will use the default, port 51820).
If you have a Pi-hole installation, PiVPN will detect it and ask if you want to use it as a DNS.
In the next steps, you will be prompted to use Public IP or DNS. Choose your public IP address.

If your ISP provides you with a dynamic IP address, there is a solution in the next post. For now, continue with this article.

Continue with the process and accept unattended upgrades to the server.
Just follow the process and accept the reboot of the Raspberry Pi after the installation, so everything is set up.

If you use Pi-hole as a DHCP server, you won’t have an internet connection while Raspberry Pi is rebooting.



Port Forwarding

To be able to connect to your Raspberry Pi VPN server, we need to set up port forwarding on your router: forward the WireGuard port you chose (UDP 51820 by default) to your Raspberry Pi’s static IP address. I have a Technicolor CGA2121, but you can find this on every router, under settings (or advanced settings, usually under the Application & Gaming option).



Adding VPN Client

To add a new VPN client user, use the integrated PiVPN command:

pivpn add

Choose your client name and hit ENTER.

You may see a warning asking you to run ‘systemctl daemon-reload’ to reload units; just do it.

Now your client is ready to connect. You can find installation files here for different operating systems.

For Android and iOS devices, there is a WireGuard application on PlayStore/AppStore, so download it. To quickly set up WireGuard VPN, from your Raspberry Pi run:

pivpn -qr

You will have a QR code on the screen which you can read from your mobile phone to set it up.

Now when you leave your home network, you are always a flip of the switch away from it.
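
Once a client has connected at least once, you can check the tunnel from the Raspberry Pi:

pivpn -c       # list clients and their connection status
sudo wg show   # low-level WireGuard interface status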



Pi-hole DNS Troubleshooting

If you installed PiVPN before Pi-hole, edit the PiVPN configuration with:

$ sudo nano /etc/pivpn/wireguard/setupVars.conf
  • Remove the pivpnDNS1=[...] and pivpnDNS2=[...] lines
  • Add this line pivpnDNS1=192.168.0.50 (your Pi-hole IP might be different) to point clients to the Pi-hole IP
  • Save the file with Ctrl+X, Y and exit
  • Run pihole -a -i local to tell Pi-hole to listen on all interfaces



Dynamic IP Address

If you are lucky enough to have a static IP address (or don’t mind paying for one), you can skip this part. Otherwise, here is how to set up dynamic DNS for a dynamic IP address at home:

https://amelspahic.com/set-up-dynamic-dns-for-dynamic-ip-addresses-at-home



Final Words

I hope this tutorial will help you set up your VPN communication and bring even more privacy, security, and comfort while browsing the internet.


Source link

My first impressions with pyenv

pyenv provides an easy way to install almost any version of Python from a large list of distributions. I have simply been using the version of Python from the OS package manager for a while, but recently I bumped my home system to Ubuntu 21.10 impish, which only ships Python 3.9+, while the libraries I needed were only compatible with up to 3.8.

I needed to install an older version of Python on Ubuntu.

I’ve been wanting to check out pyenv for a while now, but without a burning need to do so.



installing

Based on the README it looked like I needed to install using Homebrew, so this is what I did, but I later realized that there is a pyenv-installer repo that may have saved me this step.

https://waylonwalker.com/til/installing-homebrew-linux/



List out install candidates

You can list all of the available versions to install with pyenv install --list. The docs do recommend updating pyenv if you suspect that it is missing one. At the time of writing this comes out to 532 different versions!

pyenv install --list



Let’s install the latest 3.8 patch

Installing a version is as easy as pyenv install 3.8.12. This will install it, but not make it active anywhere.

pyenv install 3.8.12



let’s use python 3.8.12 while in this directory

Running pyenv local sets the version of Python that we wish to use in this directory (and any directory underneath it) when going through the pyenv command.

pyenv local 3.8.12



.python-version file

This creates a .python-version file in the directory I ran it in, which simply contains the version number.

3.8.12



using with pipx

I immediately ran into the same issue I was having before when trying to run pipx: pipx was running my system Python. I had to install pipx into the Python 3.8 environment to get it to use that version.

pyenv exec pip install pipx
pyenv exec pipx run kedro new



python is still the system python

When I open a terminal and call python, it’s still the system Python that I installed and set with update-alternatives. I am not sure if this is expected or due to how I had installed the system Python previously, but it’s what happened on my system.

update-alternatives --query python

Name: python
Link: /home/walkers/.local/bin/python
Status: auto
Best: /usr/bin/python3
Value: /usr/bin/python3
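
From what I can tell from the pyenv README, this is expected unless you put pyenv’s shims on your PATH. Something like the following in ~/.bashrc (assuming the default ~/.pyenv install location) should make plain python respect the .python-version file:

export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"   # puts the shims first on PATH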



making a virtual environment

To make a virtual environment, I simply ran pyenv exec python in place of where I would normally run python and it worked for me. There is a whole package to get pyenv and venv to play nicely together, so I suspect that there is more to it, but this worked well for me and I was happy.

pyenv exec python -m venv .venv --prompt $(basename $PWD)

Now when my virtual environment is active it points to the python in that virtual environment, and is the version of python that was used to create the environment.
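
Activating it is the usual venv workflow, nothing pyenv-specific:

source .venv/bin/activate
python --version   # should now report 3.8.12
deactivate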



Links

https://github.com/pyenv/pyenv#installation

I wrote this during my first few minutes of using pyenv. It’s been working great for me since then and has been practically invisible. If you have more experience with pyenv I would really appreciate a comment on your experience below.


Source link

Linux – Find in Multiple Folders and Delete Files Within

This is a worknote.



Scenario

In a JavaScript monorepo, there was a need to:

  • delete all files in the dist folders, but not the folders themselves
  • without touching the .gitignore files in those folders



TLDR

dir=$(find ./packages -name "dist" -type d);
for i in $dir; do find $i -type f \( -iname "*" ! -iname ".gitignore" \) -exec rm {} +; done



Solution

  1. Find all dist folders in the monorepo
  2. Iterate over the folders set and collect all files within them, excluding .gitignore
  3. Delete all sets of found files.



Solution Walkthrough



Find all folders with specific name

find ./packages -name "dist" -type d

find – find files. Allows filtering.

./packages – target root folder to start the search from.

-name "dist" – filter only object with the name “dist”.

-type d – filter only object with type of directory.



Execute Bash Expression

Execute an expression in bash and put the results in a variable.

dir=$()



Find all relevant files in a folder

find $i -type f \( -iname "*" ! -iname ".gitignore" \)

$i – Variable containing the folder name from the for iteration.

-type f – filter object only of type of file.

-iname – From the docs: Like -name, but the match is case insensitive.

! – From the docs: ! expr True if expr is false. This character will also usually need protection from interpretation by the shell.

( – From the docs: ( expr ) Force precedence. Since parentheses are special to the shell, you will normally need to quote them. Many of the examples in this manual page use backslashes for this purpose: `\(...\)' instead of `(...)'.

for i in $dir; do find $i -type f \( -iname "*" ! -iname ".gitignore" \); done



Run through folders set and execute find on them

for i in $dir; do find $i; done

for – for loop

$i – iteration variable



Execute delete on found set

find $i -exec rm {} +

-exec – From the docs: Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;’ is encountered.

The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.


Source link

How to install tarball (.tar) files in linux

Does it happen to you that whenever you want to install a piece of software, you’re given either a .deb file or a .tar file? Installing .deb files is easy, just like what you’re used to on Windows, but .tar files are a pain, especially for beginners.
In this simple tutorial we’ll learn how to download and install a .tar file. I’ll use Ubuntu but it should work in most Linux distros. I’ll install the Waterfox web browser, but the process is similar for all tarball (.tar) installation files.



tldr;

  1. download the tar file
  2. extract it to some location
  3. create a desktop entry for running the application



Detailed method



Step 1: Download the .tar file and then move it to the directory where you want to install it.

After downloading the file, open a terminal in that directory and move the file to the /opt directory using the following command.
You can change the filename and target directory accordingly.

sudo mv  waterfox-G4.0.5.en-US.linux-x86_64.tar.bz2 /opt



Step 2: Extract the .tar file

First go to the directory where you moved the .tar file.

cd /opt/

To extract the .tar file in the current directory, use the following command:

sudo tar xjf waterfox-G4.0.5.en-US.linux-x86_64.tar.bz2

You can replace the .tar filename (waterfox-G4.0.5.en-US.linux-x86_64.tar.bz2) with your own.



Step 3: Create desktop entry with appropriate permissions

Make yourself the owner of the extracted directory:

sudo chown -R $USER /opt/waterfox

Create a desktop entry so that you don’t need to come back to this directory to launch the application.
Run the following command:

gedit ~/.local/share/applications/waterfox.desktop

It’ll open the gedit text editor, where you need to insert the specification of the desktop entry.
Paste the following into the editor and save.

[Desktop Entry]
Name=Waterfox
Exec=/opt/waterfox/waterfox %u
Terminal=false
Icon=/opt/waterfox/browser/chrome/icons/default/default128.png
Type=Application
Categories=Application;Network;X-Developer;

Again, change the various parameters and paths according to your setup.

A desktop entry has many parameters; only a few are required, but you should add at least these. You can read more in the Desktop Entry Standard.

Finally, you need to make your desktop entry executable using the following command:

chmod +x ~/.local/share/applications/waterfox.desktop

You can change the name of the desktop entry accordingly.
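
If you have the desktop-file-utils package installed, you can sanity-check the entry before relying on it:

desktop-file-validate ~/.local/share/applications/waterfox.desktop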

Finally, you can remove the .tar file using the following command:

sudo rm -rf waterfox*.tar.bz2

Source link

Dockerfile for Go

Each time I start a new Go project, I repeat many steps,
like setting up .gitignore, CI configs, a Dockerfile, and so on.

So I decided to have a baseline Dockerfile like this:

FROM golang:1.18beta1-bullseye as builder

WORKDIR /build

COPY go.mod .
COPY go.sum .
COPY vendor ./vendor
COPY . .

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app main.go

FROM gcr.io/distroless/base-debian11

COPY --from=builder /build/app /app

ENTRYPOINT ["/app"]

I use multi-stage build to keep my image size small.
First stage is Go official image,
second stage is Distroless.

Before Distroless, I used the official Alpine image.
There is a whole discussion on the Internet about which is the best base image for Go.
After reading some blogs, I discovered Distroless as a small and secure base image.
So I have stuck with it for a while.

Also, remember to match Distroless Debian version with Go official image Debian version.

FROM golang:1.18beta1-bullseye as builder

This is Go image I use as a build stage.

WORKDIR /build

COPY go.mod .
COPY go.sum .
COPY vendor ./vendor
COPY . .

I use /build to emphasize that I am building something in that directory.

The 4 COPY lines are familiar if you use Go enough.
First come go.mod and go.sum, because they define the Go module.
The second is vendor, because I vendor dependencies a lot. This is not strictly necessary, but it means I don’t have to re-download Go modules every time I build the Dockerfile.
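
If the repository doesn’t have a vendor directory yet, it is created with the standard Go command:

go mod vendor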

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GOAMD64=v3 go build -o ./app main.go

This is where I build the Go program.
CGO_ENABLED=0 because I don’t want to mess with C libraries.
GOOS=linux GOARCH=amd64 is easy to explain: Linux on x86-64.
GOAMD64=v3 is new since Go 1.18.
I use v3 because I read about the AMD64 microarchitecture versions in the Arch Linux RFCs. TL;DR: newer computers are already x86-64-v3.

FROM gcr.io/distroless/base-debian11

COPY --from=builder /build/app /app

ENTRYPOINT ["/app"]

Finally, I copy the app to the Distroless base image.
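
With that file saved as Dockerfile at the repository root, building and running is the usual routine (the image tag is just an example):

docker build -t myapp .
docker run --rm myapp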


Source link