Create a Nexus Repository with Windows Containers

Docker is an extremely powerful tool. You usually build your own image on top of a base image, often a Linux distribution such as Ubuntu or Debian.

In our case, we had a Nexus repository running on Windows, but we had started to move everything to AWS. The biggest challenge came when we were unable to migrate the database: regardless of the configuration we chose, the repository crashed over and over again. The only option left was to move the old VM into a new EC2 instance, which was a bad deal, since it would have been our only EC2 instance while everything else ran in containers on ECS.

After a while we landed on a different approach: we had heard about Windows Containers, a feature that is not widely known. This is the solution we came up with:

FROM mcr.microsoft.com/windows/nanoserver:ltsc2019
RUN mkdir "c:\nexus"
WORKDIR c:/nexus
RUN mkdir sonatype-work
RUN mkdir nexus-3.36.0-01
COPY nexus-3.36.0-01 nexus-3.36.0-01
COPY sonatype-work sonatype-work
WORKDIR c:/nexus/nexus-3.36.0-01/bin
CMD ["nexus.exe", "/run"]

The first line pulls a copy of Windows Nano Server, which has the smallest footprint of the Windows base images. The remaining lines set up the directory layout we wanted and copy in the Nexus distribution and its sonatype-work data directory.

The number 3.36.0-01 is the Nexus version we were running at the time; adjust it to match whichever version you use.

And that’s all that you need.
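Assuming the Dockerfile above sits next to the nexus-3.36.0-01 and sonatype-work folders, building and running the image looks roughly like this (the nexus-windows tag and container name are placeholders of our own choosing):

```shell
# Build the image from the folder containing the Dockerfile
docker build -t nexus-windows .

# Run it detached, publishing Nexus's default port 8081
docker run -d --name nexus -p 8081:8081 nexus-windows
```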

One important thing to highlight: this setup doesn’t surface any logs, and the first start takes a while. So if you hit http://localhost:8081 in the first minute, it may not respond yet; give it around three to five minutes.
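Since there are no logs to watch, one way to know when Nexus is ready is to poll the port until it answers. A rough sketch using curl:

```shell
# Poll until Nexus responds on port 8081 (the first start can take 3-5 minutes)
until curl -sf http://localhost:8081 >/dev/null; do
  echo "Nexus not ready yet, retrying in 15s..."
  sleep 15
done
echo "Nexus is up"
```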


One does not “just containerize” an app

The Docker ecosystem is filled with leaky abstractions. The utopian vision of Docker containers is a world where a developer can grab a base image for their language, copy in their code with a minimal Dockerfile, and be ready to develop and deploy instantly.

Unfortunately, this landscape is filled with per-language gotchas that make this world a far cry from reality. Here are some of the wonky things I’ve run into when working with containers:

  • When working with Python, you really can’t use the Alpine-based images: prebuilt Python wheels target glibc, so on musl-based Alpine, packages fall back to compiling from source, which often fails out of the box. For a lightweight container, you need to use a slim Debian image instead. This means you must already know that “Buster” and “Stretch” are Debian release names (because Debian is not in the tag name), and you must know which release is more recent.
  • When developing using the official Python images, you can’t get good autocomplete in your editor without extra steps because the packages aren’t in the mounted volume. You can use the VS Code Remote extension to actually edit files within the container, or you could use venv to colocate the dependencies with the code. However, using venv within the container won’t give you proper autocomplete because the Python interpreter is inside the container itself.
  • The official PHP image won’t let you install PHP packages from the Debian repos because its PHP executable is compiled from source. You must instead use the container’s docker-php-ext-install command.
  • The official PHP image does not come with PHP’s Composer package manager, yet Python, Node and Ruby images all come with their respective language package manager out of the box. You must find a way to programmatically install Composer.
  • The official Node image comes preconfigured with a default unprivileged user. This is an incredible feature that I wish all images had; however, it is an image-specific feature that you must keep in mind when building a production Dockerfile.
  • You cannot run the official Nginx image as an unprivileged user. Nginx maintains a separate image that runs as an unprivileged user; however, it is not marked with the official tag.
  • The official MySQL image (and countless others) has no build for ARM processors, meaning it can’t run on common inexpensive devices like the Raspberry Pi.
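
To make the PHP points above concrete, here is a hedged sketch of a Dockerfile that works around both quirks (the base tag is an arbitrary choice):

```dockerfile
FROM php:8.1-apache

# PHP extensions must go through the image's helper script,
# not the Debian package repos
RUN docker-php-ext-install pdo_mysql

# Composer isn't bundled; one common pattern is copying the binary
# from the official composer image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
```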

A few months ago I set out to “learn Docker,” but I’ve found that I’ve instead spent most of my time learning the quirks of individual base images. This isn’t totally surprising, since every language stack demands plenty of domain-specific knowledge: it took me two weeks to figure out how to deploy my first Rails app on an Ubuntu VM. To be fair, Docker’s infrastructure-as-code paradigm still offers enormous advantages over configuring each development and deployment environment by hand, which is what I did before adopting containerization.

However, it’s worth acknowledging the complexities and pain points of working with Docker images. One does not “just containerize” an app: it’s a process that involves a great deal of learning for each language and base image.
