How to create Distroless images for Node.js and Go



What is a Distroless image?

Distroless images are based on Debian images, but they are very different from Ubuntu-based ones. First of all, Google maintains these containers, which gives us confidence that they are well looked after and kept problem-free.

The second difference is that there are containers for specific languages. Why have language-specific containers? Why not install everything in a single container? Beyond the size issue, Google has stripped out roughly 90% of the container and kept only what is required to run the specific language. This drastically reduces the number of vulnerabilities that can show up in the container. For example, suppose someone manages to, and wants to, run a command inside a container in your cluster. Do you know what would happen? Nothing, because these containers don't ship a shell. Or suppose a vulnerability is found in a Debian package. Most likely that package doesn't even exist in the image, so the vulnerability doesn't affect us.



Which languages can you use?

As of today, these are the only available languages and image variants:

  • Base
  • Static
  • Dotnet
  • CC
  • Java
  • Node.js
  • Python 2.7
  • Python 3

You can also check them in Google's image repository at this link.



Differences between the static and base images:

The static image contains a minimal Linux system with CA certificates, tzdata, an /etc/passwd entry for a root user, and a temporary directory at /tmp.

The base image contains the glibc, libssl and openssl packages, plus everything listed above for the static image. Most applications should use base.



Advantages of Distroless

  • The image contains nothing but your application, so if someone gains access to your cluster they cannot run any kind of program inside the container.
  • The images tend to be much lighter.
  • Fewer packages means fewer chances of having vulnerabilities.



Disadvantages of Distroless

  • It is not recommended for development, since you cannot get into the container to debug your code.
  • Sometimes your application will need a system dependency; since there is no shell, you cannot run any command to install it.
  • You can only use one language per image.



Demo

Below I will explain how to build the image for a web server in both Node.js and Go. We will also optimize the images by using multi-stage builds.

To do so, clone my repository and change into the distroless directory:

git clone https://github.com/victorargento/victorargento.git && cd victorargento/distroless



Node.js

The Node.js image has three stages: building the application so it can be compiled, installing only the production dependencies, and finally copying the application into the distroless image.

# Build the application with the development dependencies.
FROM node:16-alpine3.14 AS pre-build-env
WORKDIR /app
COPY package*.json ./

RUN npm install --only=development

COPY . .
RUN npm run build

# Install only the dependencies needed to run the application.
FROM node:16-alpine3.14 AS build-env
WORKDIR /app
COPY package*.json ./

RUN npm install --only=production

# Copy the dist folder built in the PRE-BUILD-ENV image.
COPY --from=pre-build-env /app/dist ./dist
# This step can be skipped, since only the dist folder is needed to run the application,
# but your application may need some files from the project root; if that is the case, copy only the files you need.
COPY . . 

# Copy the application again, but this time into the Distroless image.
FROM gcr.io/distroless/nodejs:16
USER nonroot:nonroot
WORKDIR /app
COPY --from=build-env --chown=nonroot:nonroot /app /app
CMD ["dist/index.js"]

To build the image, run the following command:

docker build -t hello-app:node ./hello-app-node



Go

The Go image only has two stages: building the binary and copying the application into the distroless image.

# Build the binary in the image that contains Go.
FROM golang:1.17 as build-env

WORKDIR /go/src/app
COPY *.go ./

RUN go mod init
RUN go get -d -v ./...
RUN go vet -v
RUN go test -v

RUN CGO_ENABLED=0 go build -o /go/bin/app

# Copy the binary from BUILD-ENV into the Distroless image.
# In this case we use the static image, which contains the minimal dependencies;
# if our application depends on packages such as glibc, libssl or openssl, use the base image instead:
# FROM gcr.io/distroless/base
FROM gcr.io/distroless/static
USER nonroot:nonroot

COPY --from=build-env --chown=nonroot:nonroot /go/bin/app /
CMD ["/app"]

To build the image, run the following command:

docker build -t hello-app:go ./hello-app-go


Using Docker Run inside of GitHub actions

Recently I decided to take on the task of automating my site’s build and deployment process through GitHub Actions. I’m using my own static site generator Cleaver to handle that, which requires both Node + PHP to be installed in order to run the asset compilation and build process. Now, GitHub Actions supports both of those runtimes out of the box, but I had just created a perfectly good Docker image for using Cleaver, and instead wanted to use that.

Ultimately it was a mixture of wanting the fine-grained control that a single Docker image provides, and because, well, I just wanted to see how to do it!



What Didn’t Work

So, you're able to actually use Docker images in GitHub Actions, but by default you're only able to use them in one of two ways.

jobs:
    compile:
        name: Compile site assets
        runs-on: ubuntu-latest
        container:
            image: aschmelyun/cleaver:latest

This first option is as the base for an entire job. Normally a lot of GitHub Actions workflows have you start off with an Ubuntu distro as the base for the VM (there are other OSes you can choose from as well) and then add in your container image. With this option, whatever container you specify becomes the starting point for all of the rest of the job's steps.

jobs:
    compile:
        name: Compile site assets
        runs-on: ubuntu-latest
        steps:
          - name: Run the build process with Docker
            uses: docker://aschmelyun/cleaver

This second option is as an action in the steps for a job. Instead of something like uses: actions/checkout@v2, you can instead specify a Docker image from the hub to run in its place. The problem with this one, though, is that you have to build a Docker image that runs the way a GitHub action expects. That means things like avoiding WORKDIR and ENTRYPOINT instructions, as they're handled internally by the GitHub Actions worker.

What I wanted was simply to be able to use docker run ... under a single action in a job.



What Worked

I ended up finding an action available on GitHub by addnab called docker-run-action that works exactly how I wanted. You specify an image, any options, and a list of commands to run with it, and only during that step of the build process is it used.

jobs:
    compile:
        name: Compile site assets
        runs-on: ubuntu-latest
        steps:
          - name: Check out the repo
            uses: actions/checkout@v2
          - name: Run the build process with Docker
            uses: addnab/docker-run-action@v3
            with:
                image: aschmelyun/cleaver:latest
                options: -v ${{ github.workspace }}:/var/www
                run: |
                    composer install
                    npm install
                    npm run production

Let me break down what each of these lines does:

image: aschmelyun/cleaver:latest

This one is pretty obvious, it specifies the image that’s pulled and used in the docker run command. I’m using mine for Cleaver that’s on the public Docker Hub, but you can also use a privately-owned image as well.

options: -v ${{ github.workspace }}:/var/www

Here I’m creating a bind mount from the current workspace to /var/www, which is the working directory that my Docker image expects. github.workspace includes all of the code checked out from our current repo, and I’m mounting that whole directory as that’s what my build process expects. Because I’m using a bind mount, anything done to this code will then be available to GitHub Actions in any following step (like a deployment).

run: |
    composer install
    npm install
    npm run production

This is where I specify the actual commands I want to run against my container image. This action ignores the entrypoint of the container image, so even though docker run aschmelyun/cleaver:latest would normally run those three commands on its own, with this action I have to spell them out again in the YAML.

Once they complete, GitHub should now have a new dist folder in the workspace containing the compiled site assets that can then be deployed out to a production server. Once the job finishes up, that’s removed and is never committed to the repo or accessible to a separate job.



Wrapping Up

Sometimes during a CI/CD process it’s helpful to use a ready-made Docker image to run one-off commands and processes. This could be especially helpful if the software you need isn’t available on the actions platform, or requires a lengthy setup process that’s already written out in a Dockerfile.

If you have any questions about anything in this article, or if you’d like to get more smaller pieces of regular content regarding Docker and other web dev stuff, feel free to follow or reach out to me on Twitter!





CRUD API with Fastify, Postgres, Docker

Hi, I am Francesco. You can find me on Twitter here https://twitter.com/FrancescoCiull4

Creating Content in Public
All of this content was created from scratch during 2 livestreams.

Here is the link if you want to take a look at how I created this content (even this article, as it's part of the content itself!)



Part 1




Part 2


In this article, we will set up some CRUD APIs using:

  • Node.js (JavaScript Runtime Engine)
  • Fastify (fast and low-overhead web framework for Node.js)
  • Postgres (PostgreSQL, a free open-source relational database, very popular and stable)
  • Docker (platform to deploy applications using containers)

GitHub Repository: https://github.com/FrancescoXX/study-with-me-fastify-docker



NODE


Node is a back-end JavaScript runtime environment, which briefly means that it can execute JavaScript code on a computer, for example yours, or the one where Node is installed. The good thing is that, by having Docker, you DON'T actually need to install it, because we will use the Node image, and so we can also avoid version mismatches between the Node installed on my machine and yours.



FASTIFY


Fastify is a web framework focused on performance. It is inspired by Hapi and Express and it’s for sure one of the fastest web frameworks in town.



POSTGRES


Postgres (PostgreSQL) is a free open-source relational database, very popular and stable



DOCKER


Docker is a platform to build, run and share applications using the idea of containers. If you want a brief introduction, here is a short video.



Step by Step

  1. Create a folder named fastify-postgres-docker and enter into it
mkdir fastify-postgres-docker && cd fastify-postgres-docker
  2. Initialize the node application using npm
npm init -y
  3. Install the dependencies
npm install fastify fastify-postgres pg
  4. Create the app folder and enter into it
mkdir app && cd app

Inside the app folder, create a server.js file and a routes.js file.

The folder structure should now be the project root with package.json, plus an app folder containing server.js and routes.js.

Let’s write the server.js file

const fastify = require('fastify')({ logger: true });
fastify.register(require('fastify-postgres'), {
  connectionString: `postgres://${process.env.POSTGRES_USER}:${process.env.POSTGRES_PASSWORD}@${process.env.POSTGRES_SERVICE}:${process.env.POSTGRES_PORT}/${process.env.POSTGRES_DB}`,
});
fastify.register(require('./routes'));

// Run the server
const start = () => {
  fastify.listen(3000, '0.0.0.0', (err, address) => {
    if (err) {
      fastify.log.error(err);
      process.exit(1);
    }
  });
};
start();

Fastify uses the idea of plugins; you can read more about this here:

https://www.fastify.io/docs/master/Plugins/
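
To make the plugin idea a bit more concrete, here is a minimal sketch of a custom plugin (a hypothetical decorate.js file, not part of this project) that adds a helper to the Fastify instance and is registered the same way we register fastify-postgres and our routes file:

// decorate.js - a tiny example plugin (hypothetical, not required for this project)
async function greetPlugin(fastify, options) {
  // decorate() attaches a reusable helper to the fastify instance
  fastify.decorate('greet', (name) => `Hello, ${name}!`);
}

module.exports = greetPlugin;

// In server.js it would be registered like any other plugin:
// fastify.register(require('./decorate'));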

Let’s write the first part of the routes.js file

async function routes(fastify, options) {
  // Testing route
  fastify.get('/', async (request, reply) => {
    return { hello: 'world' };
  });
}

module.exports = routes;



DOCKER


Now the Docker Part!

In the main folder, create 3 files:

  • Dockerfile
  • docker-compose.yml
  • .dockerignore (it starts with a dot)

the .dockerignore file:

node_modules
.gitignore
.env

the Dockerfile:

FROM node:14

EXPOSE 3000

# Use latest version of npm
RUN npm install npm@latest -g

# set the working directory
WORKDIR /usr

# install and cache app dependencies
COPY package.json package-lock.json* ./

RUN npm install --no-optional && npm cache clean --force

# copy in our source code last, as it changes the most
COPY . .

CMD [ "node", "app/server.js"]

The docker-compose.yml file:

version: '3.8'
services:
  fastify_backend:
    container_name: fastify_backend
    image: francescoxx/fastify_backend:0.0.1
    build:
      context: .
    ports:
      - '3000:3000'
    env_file: .env
    depends_on: 
      - postgres

  postgres:
    container_name: postgres
    hostname: postgres
    image: 'postgres:13'
    ports:
      - '5432:5432'
    restart: always
    env_file: .env
    volumes:
      - fastify_volume:/var/lib/postgresql/data

volumes:
  fastify_volume: 

Replace the image “francescoxx/fastify_backend:0.0.1” with an image name of your choice!

Before running our services, we need to create a .env file to store our environment variables and populate it with all the values we need:

POSTGRES_USER=francesco
POSTGRES_PASSWORD=dbpassword
POSTGRES_DB=fastifydb
POSTGRES_SERVICE=postgres
POSTGRES_PORT=5432

You can change these values according to your needs!

Let’s start the postgres service:

docker-compose up -d postgres

We should have a Postgres DB up and running!

Let's check what is inside the DB. From another terminal, type:

docker exec -it postgres psql -U francesco fastifydb

Once we are inside the container
(you can verify this by checking the postgres=# prompt)

connect to the fastifydb database:

\c fastifydb

This means that a database named “fastifydb” has been created by Postgres using the environment variables we passed at the beginning.

Then run:

\dt

and you should get the message:

“Did not find any relations.”


This is because we have created the database using the environment variables, but we haven't created any tables or relations yet.

Type 'exit' to exit from this terminal:

exit

And you are back at your terminal.

Time to build our image!

From the folder where the docker-compose.yml file is located, run:

docker-compose build


Now it’s time to run our node application

docker-compose up -d fastify_backend

We can verify that both containers are running by using the 'docker ps -a' command.

Let's add an endpoint to init the DB. (This could be done in other, better ways!)

In the routes.js file, let's add a simple endpoint that will create the users table:

// INIT TABLE. Launch just once to create the table
  fastify.get('/initDB', (req, reply) => {
    fastify.pg.connect(onConnect);
    function onConnect(err, client, release) {
      if (err) return reply.send(err);
      client.query(
        'CREATE TABLE IF NOT EXISTS "users" ("id" SERIAL PRIMARY KEY,"name" varchar(30),"description" varchar(30),"tweets" integer);',
        function onResult(err, result) {
          release();
          reply.send(err || result);
        }
      );
    }
  });



ADDING API ENDPOINTS

Let's add 5 more endpoints:



Endpoint to GET all the Users:

  //GET ALL USERS
  fastify.route({
    method: 'GET',
    url: '/users',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query('SELECT * from users', function onResult(err, result) {
          release();
          reply.send(err || result.rows);
        });
      }
    },
  });



Endpoint to get one User

  //GET ONE USER if exists
  fastify.route({
    method: 'GET',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query(`SELECT * from users where id=${request.params.id}`, function onResult(err, result) {
          release();
          reply.send(err || result.rows[0]);
        });
      }
    },
  });



Endpoint to create one user

  //Create users
  fastify.route({
    method: 'POST',
    url: '/users',
    handler: function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        const newUser = request.body;
        client.query(
          `INSERT into users (name,description,tweets) VALUES('${newUser.name}','${newUser.description}',${newUser.tweets})`,
          function onResult(err, result) {
            release();
            reply.send(err || result);
          }
        );
      }
    },
  });


Endpoint to update one user

  //UPDATE ONE USER fields
  fastify.route({
    method: 'PUT',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      async function onConnect(err, client, release) {
        if (err) return reply.send(err);
        const oldUserReq = await client.query(`SELECT * from users where id=${request.params.id}`);
        const oldUser = oldUserReq.rows[0];
        client.query(
          `UPDATE users SET(name,description,tweets) = ('${request.body.name || oldUser.name}', '${request.body.description || oldUser.description}', ${request.body.tweets || oldUser.tweets}) WHERE id=${request.params.id}`,
          function onResult(err, result) {
            release();
            reply.send(err || result);
          }
        );
      }
    },
  });



Endpoint to Delete one user:

  //DELETE ONE USER if exists
  fastify.route({
    method: 'DELETE',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query(`DELETE FROM users WHERE id=${request.params.id}`, function onResult(err, result) {
          release();
          reply.send(err || result);
        });
      }
    },
  });

The final routes.js file should look like this:

async function routes(fastify, options) {
  // Testing route
  fastify.get('/', async (request, reply) => {
    return { hello: 'world' };
  });

  // INIT TABLE. Launch just once to create the table
  fastify.get('/initDB', (req, reply) => {
    fastify.pg.connect(onConnect);
    function onConnect(err, client, release) {
      if (err) return reply.send(err);
      client.query(
        'CREATE TABLE IF NOT EXISTS "users" ("id" SERIAL PRIMARY KEY,"name" varchar(30),"description" varchar(30),"tweets" integer);',
        function onResult(err, result) {
          release();
          reply.send(err || result);
        }
      );
    }
  });

  //GET ALL USERS
  fastify.route({
    method: 'GET',
    url: '/users',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query('SELECT * from users', function onResult(err, result) {
          release();
          reply.send(err || result.rows);
        });
      }
    },
  });

  //GET ONE USER if exists
  fastify.route({
    method: 'GET',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query(`SELECT * from users where id=${request.params.id}`, function onResult(err, result) {
          release();
          reply.send(err || result.rows[0]);
        });
      }
    },
  });

  //Create users
  fastify.route({
    method: 'POST',
    url: '/users',
    handler: function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        const newUser = request.body;
        client.query(
          `INSERT into users (name,description,tweets) VALUES('${newUser.name}','${newUser.description}',${newUser.tweets})`,
          function onResult(err, result) {
            release();
            reply.send(err || result);
          }
        );
      }
    },
  });

  //UPDATE ONE USER fields
  fastify.route({
    method: 'PUT',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      async function onConnect(err, client, release) {
        if (err) return reply.send(err);
        const oldUserReq = await client.query(`SELECT * from users where id=${request.params.id}`);
        const oldUser = oldUserReq.rows[0];
        client.query(
          `UPDATE users SET(name,description,tweets) = ('${request.body.name || oldUser.name}', '${request.body.description || oldUser.description}', ${request.body.tweets || oldUser.tweets}) WHERE id=${request.params.id}`,
          function onResult(err, result) {
            release();
            reply.send(err || result);
          }
        );
      }
    },
  });

  //DELETE ONE USER if exists
  fastify.route({
    method: 'DELETE',
    url: '/users/:id',
    handler: async function (request, reply) {
      fastify.pg.connect(onConnect);
      function onConnect(err, client, release) {
        if (err) return reply.send(err);
        client.query(`DELETE FROM users WHERE id=${request.params.id}`, function onResult(err, result) {
          release();
          reply.send(err || result);
        });
      }
    },
  });
}

module.exports = routes;

Now let’s test these APIs!



POSTMAN

Important! You need to specify localhost and not 127.0.0.1 in the first part of the URL, otherwise it doesn't work!

We will use Postman, but you can use whatever tool you want.

First of all, we need to create the users table. We will trigger it by hitting this URL with a GET request:

GET http://localhost:3000/initDB


If we get this answer, it means that our ‘users’ table has been created!

Now let’s check all the users with another GET :

GET http://localhost:3000/users

If we get the empty array answer, [], it means that we actually have the users table in our DB, but there are no users yet. This is perfectly fine!

Let's create some users. We will do this by making a POST request to the same endpoint, passing the values as JSON.

Example:


    "name":"Adrian",
    "description":"Kangaroo Fighter",
    "tweets":12000

image.png

Please notice that we don't need to add an 'id', as it is automatically incremented for each new user.
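
If you prefer to script these requests instead of clicking through Postman, a minimal sketch using Node 18's built-in fetch (the file name and values here are just examples) could look like this:

// create-user.js - quick sketch using Node 18+'s built-in fetch (example values)
const BASE_URL = 'http://localhost:3000';

async function createUser() {
  // POST a new user, the same call we just made from Postman
  const res = await fetch(`${BASE_URL}/users`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Adrian', description: 'Kangaroo Fighter', tweets: 12000 }),
  });
  console.log('create status:', res.status);

  // List all users to confirm the insert worked
  const users = await (await fetch(`${BASE_URL}/users`)).json();
  console.log(users);
}

createUser();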

Let's add another one, and then another one.

Now let’s check again all the users:


And we see that this time we have 3 users!

We can get a single user by adding the id of the user at the end of the previous URL path. For example:

GET http://localhost:3000/users/2

To get the user with the id = 2


To delete a user, you can make a DELETE request to the same endpoint you use to get one user:

DELETE http://localhost:3000/users/2


Finally, to update a user, you make a PUT request, passing the new values inside a JSON body, like this:

{
    "name":"Adrian2",
    "description":"SuperKANGAROO"
}

You also need to pass the id of the user you want to update in the URL, like this:

PUT http://localhost:3000/users/3

To check if the user has been really updated, you can make another GET Request:

As you can see, the name and the description of the user have changed, but not the tweets.



Conclusion

If you have tried to follow this article, I would like to know if you have encountered any problem. Thanks!

GitHub Repository:
https://github.com/FrancescoXX/study-with-me-fastify-docker





How To Build A Serverless, Internal Developer Platform

Many teams still deploy and manage apps on their own infrastructure. It can be their own private data centre or public cloud IaaS offering. I’ve worked with teams that deploy to their own infrastructure using a custom-built developer platform to deploy, manage, and monitor the status of services. Usually, the interface is nothing fancy, but it does the job well and is adapted to the team/company’s process.



What Is An Internal Developer Platform?

According to internaldeveloperplatform.org, an Internal Developer Platform (IDP) is a layer on top of the tech and tooling an engineering team has in place already. It helps Ops (or DevOps) teams structure their setup and enable developer self-service.

This platform can be a web console or CLI that integrates with the existing tools the team uses.



Why Use An Internal Developer Platform (IDP)?

IDPs have a huge impact on the velocity and productivity of a team. If done right, they increase deployment/delivery frequency, improve visibility and transparency across teams, and lead to better ways of working.



How To Build An Internal Developer Platform on Kubernetes using Knative, Tekton, GitHub, Cloud Native Buildpacks and Next.js

An IDP is built on top of the tech and tooling an engineering team has in place already. For this post, I’ll focus specifically on some of the tools I work with. They are:

  1. Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications.

  2. Knative: a Kubernetes-based platform to deploy and manage modern serverless workloads.

  3. Tekton: a Cloud Native CI/CD system, allowing developers to build, test, and deploy across cloud providers and on-premise systems.

  4. GitHub: a development platform to build, ship, and maintain software.

  5. Cloud Native Buildpacks: transforms your application source code into container images that can run on any cloud, without you writing Dockerfiles.

  6. Next.js: A React framework with a very good development experience.

The platform will run on Kubernetes and support serverless applications through the use of Knative. Developers can access the platform using a web console that’s written in Next.js.

Here’s a bit on how the workflow will be:

I put all this knowledge in my book; How to build a serverless app platform on Kubernetes. It’s a hands-on book that will teach you how to build a serverless developer platform using the technologies and tools I mentioned previously.

You will learn:

  • What Knative is, how to install and use it for your serverless workloads on Kubernetes.
  • How to create CI/CD pipelines with Tekton.
  • You will learn how to use Buildah to build container images in your pipeline. Afterwards, you will move to using Cloud Native Buildpacks to build images.
  • You will integrate with GitHub by building a GitHub App which will trigger your CI/CD pipeline when it’s time to deploy a new app, or update an existing app.
  • You will build the platform’s web UI using Next.js. Some knowledge of JavaScript is required for this part. No Next.js experience required because every line of code will be explained, so that non-React developers can follow along.

I’m giving a 50% discount to any DEV community member who buys the book with the discount code devcommunity. The discount code is valid for a maximum of 50 purchases, so hurry and get your copy now!

Follow these steps to purchase with your discount code:

  1. Go to the book’s website – bit.ly/3q3UKij
  2. Enter 20 (the min. purchase price) in the price field and click the Buy this button.
  3. Enter devcommunity in the discount code field.
  4. Enter your card and personal details to complete your purchase.

If you encounter any errors or have any feedback, feel free to comment here or DM me on Twitter





Dockerize Angular App



Agenda

Dockerize an Angular app, built with the Angular CLI, using Docker. In this blog we will walk through an Angular 7 app and dockerize it on top of a Node base image.

Here, we specifically focus on:

  1. Create an Angular app using the CLI and test it locally
  2. Create an image for the dev environment with code hot-reloading



Project Setup

Install the Angular CLI globally:

npm install -g @angular/cli@7.3.10

Generate a new app aka “angular-microservice” using CLI:

ng new angular-microservice 
cd angular-microservice

(Optional) To generate the app in the present directory use:

ng new angular-microservice --directory ./



Docker Setup

Add a Dockerfile to the project root:

# base image
FROM node:12.2.0

# set working directory
WORKDIR /app
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.10

# add app
COPY . /app

# start app
CMD ng serve --host 0.0.0.0

Building Docker image

docker build -t angular-microservice:dev .

Run Docker Image

Run the container after the build is done:

docker run -v $PWD:/app -v /app/node_modules -p 4201:4200 --rm angular-microservice:dev

Use the -d flag to run the container in the background (here also giving it a name):

docker run -d -v $PWD:/app -v /app/node_modules -p 4201:4200 --name foo --rm angular-microservice:dev

Please react if you found this blog helpful and informational.



Constant work to onboarding new members into engineering team

Developing the “onboarding” process for a new person in an engineering team takes a lot of dedication, and keeping this process fluid takes even more work (with as little friction as possible).

This issue is challenging for any team working full time; it is even worse for open source projects, where contributors usually work in their spare time. We should make this process as fluid as possible so that people don't get discouraged by the complexity of getting everything up and running for testing before they make their first pull request.



Configure development environment

Every developer has a different way of setting up a development environment, even if it is in a popular technology (programming language) with lots of documentation, text editor extensions (emacs, vim, vscode, …), etc.

We developers are used to “our way” of doing things; it is common for us to put up resistance when someone presents a way that is different from how we do it.

The vast majority of applications depend on external resources such as databases, APIs, tokens, etc. If we force the developer (user) to read all the project documentation before having their first contact with the project, it is very likely that we will lose their engagement and create frustrations such as:

  • I just wanted to test
  • I have to read all this to see it working
  • What a complicated project
  • I have to install X, Y and Z services/software on my machine
  • I don’t know the programming language used in the project, which plugins should I install in my editor?
  • … and much more.

There are some tools to assist project maintainers (open source or private) to generate an onboarding process with as little friction as possible.

Configuring editor (with all the necessary plugins and parameters), all the services the project needs to run, environment variables configured, database running with initial data load, data viewer configured (software to manage data from the database), etc.

To the point where the developer “clicks a button” and magically has the development environment ready to test the software.

In the last few months we at prestd have been working on improving our documentation (it is still far from good) and removing as much friction as possible from the process of getting a new development environment up. Here are some of the issues we have worked through to get to what we have today:

See prestd‘s development guide page here.

It is not nice to have the engineering team working in a bad environment; we need to think more about our team and make the team experience fluid.
people > technology



Focus on the developer (user)

prestd has existed as open source since 2016. I particularly like the project very much and believe it is a great solution to accelerate the development of a RESTful API for an existing database, and especially the development of a new API (a project starting from scratch).

But for many years we focused on developing the software and didn't look at the documentation with the dedication we should have, causing the contributor base (existing and new) to shrink. People passing through an open source project rarely stay for many years, so we always need to have the smoothest onboarding process possible.

Given this problem I started to look at the documentation with more dedication and every decision in prestd from now on will be thinking about the developer (user) experience, answering the following questions:

  • Will this improve the developer experience using the project?
  • Will this make the project easier to use?
  • Will this make it easier to maintain the development of the project?

When all 3 questions are “yes”, we will proceed with the implementation, regardless of what it is: feature, improvement, fix, etc.



Perl module tests on Linux 32bit on Github Action

I'm creating SPVM, a Perl module.

I want to run SPVM's tests on Linux 32-bit, so I searched for a way to do it and looked at the GitHub Actions used by Perl itself.



Linux 32bit Github Action

I customized it. The resulting GitHub Actions workflow file is linux-32bit.yml:

name: linux-32bit

on:
  push:
    branches:
      - '*'
    tags-ignore:
      - '*'
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: i386/ubuntu:latest
    steps:
      - name: install the Perl header, core modules, building tools
        run: |
          apt update
          apt install -y libperl-dev build-essential
      - uses: actions/checkout@v1
      - name: perl Makefile.PL
        run: perl Makefile.PL
      - name: make
        run: make
      - name: make disttest
        run: make disttest



Short Descriptions



the Docker Container Image

Use the docker container image “i386/ubuntu:latest”

    container:
      image: i386/ubuntu:latest



apt

Install the Perl header, core modules, building tools.

        run: |
          apt update
          apt install -y libperl-dev build-essential


What Does a DevOps Engineer Do?

Hiring a DevOps Engineer for the first time? Knowing what to look for in a talented engineer can be a challenge. In this article, I discuss what you can expect from a DevOps Engineer in today’s marketplace. I share some of my own experiences hiring DevOps Engineers in today’s competitive labor market. Finally, I talk about cheaper alternatives to hiring a full-time DevOps Engineer.



When Do You Need a DevOps Engineer?

In my past articles, I’ve discussed DevOps release pipelines, stacks, and stages in-depth. A release pipeline is a software-driven process that development teams use to promote application changes from development into production. The pipeline creates multiple stacks – full versions of your application – across multiple stages of deployment.

A development team usually starts a pipeline automatically via a push to a source code control system, such as Git. The team then pushes the change set gradually through each stage (dev->test->staging->prod), testing and validating their changes along the way.

What I haven’t discussed (directly, at least) is how complicated this process is. A DevOps release pipeline is itself a piece of software. It requires code to run – and that code needs to be tested, debugged, and maintained.

Many teams and small development shops get started without a dedicated DevOps engineer. Yours may be one of them! In these situations, a few team members generally own pieces of the pipeline and keep it running. Pipelines at this point are usually a mix of automated promotion and old-school manual deployment.

However, as your application and requests from your customers grow, you may realize the lack of a dedicated DevOps engineer is slowing your team down. Some of the signs include:

  • Your team’s velocity slows under the weight of its current (mostly manual) deployment processes.
  • You have a somewhat automated deployment process but maintaining it is consuming more and more of the team’s time.
  • You realize after a high-profile failure that your release procedures need professional help.
  • You know you should improve your deployment process but your team is so crushed with feature work that no one has time to spend on it.

If you’re facing down one or more of these issues, it may be time to hire a part-time or full-time DevOps Engineer.



Responsibilities of a DevOps Engineer

A DevOps Engineer’s role will likely look slightly different at every company. However, the following broad-based responsibilities tend to be common and consistent.



Automate the Full Release Pipeline

A good release pipeline eliminates unnecessary manual steps and reduces the time required to deploy changes to your application. Building and maintaining this pipeline is the DevOps Engineer’s primary job.

DevOps Engineers usually craft release pipelines using a Continuous Integration/Continuous Delivery (CI/CD) tool. Tools such as Jenkins, Atlassian, GitLab, and Azure DevOps integrate with source code control tools (usually Git) and handle triggering automated actions in response to repository check-ins. If your team already uses such a tool and is committed to it, you'll want to find someone proficient in your specific CI/CD toolset.

Many CI/CD toolsets offer a set of predefined actions to assist with the CI/CD process. However, other actions will be specific to your team’s application. A DevOps engineer uses one or more scripting languages to automate complicated deployment tasks your team may have been executing manually. Python, JavaScript, shell scripting, and PowerShell (on Windows) are some of the more popular scripting languages that DevOps Engineers use.
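
As a small, purely illustrative example of the kind of task that ends up scripted, here is a sketch of a Node.js script that polls a health-check endpoint after a deployment and fails the pipeline step if the new version never comes up (the URL, retry count, and delay are made-up values):

// healthcheck.js - fail the CI step if the new deployment never becomes healthy
// (hypothetical endpoint and limits; adjust for your own service)
const URL = process.env.HEALTHCHECK_URL || 'https://example.com/health';
const MAX_ATTEMPTS = 10;

async function waitForHealthy() {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      const res = await fetch(URL); // built-in fetch (Node 18+)
      if (res.ok) {
        console.log(`Healthy after ${attempt} attempt(s)`);
        return;
      }
    } catch (err) {
      // service not reachable yet; fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5s between attempts
  }
  console.error('Deployment never became healthy');
  process.exit(1); // a non-zero exit code fails the pipeline step
}

waitForHealthy();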

For cloud-deployed software, a DevOps Engineer is also responsible for setting up the entire stack on which the application runs using Infrastructure as Code. A DevOps Engineer should be able to design and implement a stack deployment that can be deployed multiple times to any stage of your release pipeline.

Some engineers implement Infrastructure as Code using a scripting language such as Python. However, it’s more common to use a templating language, such as CloudFormation on AWS or Azure Resource Manager (ARM) Templates on Azure.



Setting Best Practices for Software Development

As part of setting up the build and release pipeline, your DevOps guru will also define best practices for coding and validation of changes. In other words, they’re the point person for your team’s change management approval process.

For example, a DevOps Engineer may work with their team to devise the best way to manage the overall work process. For most teams, this usually means adopting an Agile approach to software development such as Scrum or Kanban. It could also mean defining a code review process and teaching the team how to conduct good reviews.



Monitor Builds and Deployments

The DevOps Engineer is responsible for ensuring the continued health of the team’s CI/CD pipeline. This includes:

  • Monitoring build progress and logs from your team’s CI/CD tool
  • Moving quickly to resolve broken builds and keep changes flowing through the pipeline
  • Observing dashboard metrics as new instances of the application come online
  • Staying alert for errors as your deployment shifts more users over to the new version of your application

Monitoring should occur in all stages of the pipeline. As Atlassian points out, pre-production monitoring means you can stomp out critical errors before they ever reach customers.

Depending on the size of your organization, the DevOps Engineer may supervise all of this themselves. They may also work in conjunction with a Sustained Engineering or Support team that’s ultimately responsible for maintaining application health. In either case, your DevOps Engineer should take the lead in defining what the team needs to monitor.



Be the Git Guru

Ahhh, Git. The free source code control system is a marvelous invention. You can’t be a developer nowadays and not know at least the basics of Git. And yet even seasoned developers will sometimes find themselves mired in Merge Conflict Hell.

A team’s DevOps Engineer should know Git inside and out. They should understand, for example, the difference between a merge and a rebase – and which one to use when. They are the person primarily responsible for defining the team’s branching and merging strategy – and maintaining quality internal documentation for other team members.



What to Look for in a DevOps Engineer

As an engineering manager, I've hired multiple DevOps engineers. During the interview process, my loops focus on validating a combination of technical and soft skills:

DevOps knowledge

Does the candidate have the basics of CI/CD down pat? What successes have they accumulated in developing successful pipelines? What setbacks have they encountered – and how have they overcome them?

Cloud platform and DevOps tools

In what DevOps tools is your candidate most experienced? Do they know the tools your team is already using?

A DevOps Engineer will also need to make numerous decisions on whether to buy or build certain parts of the DevOps process. For example, does your team roll its own artifact storage features? Or does it leverage a tool like Artifactory? DevOps Engineers need to remain up to speed on the tools marketplace so they can make these critical buy vs. build decisions.

Leadership

A DevOps Engineer needs to do more than build a pipeline. They need to convince a (sometimes reluctant) team of engineers and stakeholders to change the way they develop software. Does your candidate have experience talking a tough crowd into adopting new processes?

As a manager, I like to use STAR (Situation-Task-Action-Result) questions to determine a candidate’s experience with being a technical leader. So I might ask something like, “Tell me about a time when you received pushback from your team on a process change. What was it and how did you resolve it?”

Growth mindset

The DevOps and cloud spaces are changing constantly. So it’s important that a DevOps Engineer not get overly set in their ways.

I also like to use STAR questions to gauge a candidate’s willingness to grow. For example, what’s the last thing that they learned just because it looked interesting? Did they end up using it on the job? If so, what was the result?

Alternatively, I may ask when was the last time they received critical feedback from their manager. What was it? And how did they use that feedback to improve their job performance?



Alternatives to Hiring a Full-Time DevOps Engineer

You’ve determined that you need more DevOps savvy in your org. But that doesn’t mean you need to start off with a full-time position out of the gate. Maybe you can’t afford a full-time position at the moment. Or perhaps you’d just like to test the waters before diving in with both feet.

Fortunately, there are a couple of alternatives to hiring someone full-time.



Hire a Part-Time DevOps Engineer

You may not need (nor even desire) a full-time team member. It may be enough to hire someone on a part-time basis to construct and maintain your build and release pipeline.

In this scenario, you’d want to find a DevOps Engineer who’s good at building self-service solutions. Your team should be able to kick off builds, perform releases, and monitor rollouts without having a full-time DevOps Engineer on call to oversee a successful outcome.



Migrate to TinyStacks

Another option? Forego the engineer! You can potentially save both time and money by adopting a DevOps tool that essentially provides you “DevOps as a service”.

TinyStacks is one such tool. Built by a team with deep experience building out the Amazon Web Services console, TinyStacks provides an automated approach to DevOps. Using a simple UI Web interface, your team can migrate its application into a full release pipeline – complete with AWS cloud architecture – within the week.

Read a little more on what TinyStacks can do for you and contact us below to start a discussion!

Article by Jay Allen



Building a Prisma Schema

Welcome back to the series Playing with Prisma!

In this article we’re going to take a look at how to build out a Prisma schema. To do this, rather than just regurgitating the docs (which are fantastic by the way, kudos Prisma), we’re going to come up with a little project and build out a schema to fit our needs!

While we will cover a lot of the cool options available to us when setting up a schema, I do recommend reading the docs to see everything Prisma has to offer.



The Project

The schema we’ll be scaffolding out will be for a bookstore’s website with a checkout system.

We’ll want to keep track of books, authors, subjects, etc… for the searching functionality of the website. Also, we’ll need a way to keep track of people and check-in/out times.

Let’s assume our database is a Postgres database and we are starting fresh.

All the things we cover will apply to the other available database providers as well, unless stated otherwise.

To get an idea of what we're doing: by the end, our database will have Author, Book, Person, and BookLog (book_log) tables, with each book belonging to an author and the book log relating people to the books they check in and out.

Let’s get to it!



Setting up Prisma

To start off, let’s go ahead and create a super simple project to hold our Prisma client we will end up generating.

Wherever you’d like, go ahead and create a project folder. Initialize npm inside of this project and install the prisma package so we can put it to use!

mkdir bookstore-project
cd bookstore-project
npm init
npm i --save prisma

Now let’s initialize prisma, which will scaffold out the initial files we’ll need to get going. We’ll also take a shortcut and let prisma know we’ll be connecting to a postgres database.

prisma init --datasource-provider=postgresql

Once that does its thing, you should be left with a basic project containing a prisma folder (with the schema.prisma file inside) and a .env file.

We’re ready to start configuring and putting our schema together! Go ahead and pop open that schema.prisma file and we’ll get started!



(Optional) Local Postgres Setup With Docker


In order to actually generate and build our client, prisma needs to know of a server to connect to. Below is how we can set one up locally in Docker. We won’t go into too much detail here, just how to get it going.



Installing Docker

You can download and install docker here



Add docker-compose file

In your project’s root, create a file called docker-compose.yml. Paste the following into the file:

version: '3.1'

services:

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - 5432:5432

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080



Update .env file

DATABASE_URL="postgresql://postgres:example@localhost:5432/bookstore"



Spin up the database and admin panel

Now that those are configured, run the following command to bring up the postgres server and an admin panel (adminer):

docker-compose up -d

Note: We added the -d flag at the end to run this in detached mode, freeing up our terminal for more commands



Test It Out

Once that finishes pulling the docker images and setting up the servers, head over to localhost:8080 to make sure the admin panel comes up.

You can log in with the credentials:

  • username: postgres
  • password: example
  • database: postgres




Prisma Schema

The prisma schema is the main configuration file for prisma. It’s where prisma learns how to connect to our database, how to generate the prisma client (or any other assets via custom generators), and how to map our data from the database to our application.

A Prisma Schema is built up of three major pieces (or blocks):

  • Datasources
  • Generators
  • Models

Each piece plays a crucial role in the configuration and generation of our Prisma Client (or other generated assets depending on the generator we configure).

A block is composed of a block type, a name, and the fields and options for that block.




Datasource


The first thing we’ll want to configure is our datasource block.

This is where we tell Prisma how to connect to our database and what kind of database we are connecting to. Its configuration is fairly straightforward and doesn’t have a whole lot going on in it so we won’t have to go too deep to understand what it’s doing.

Each Prisma schema must have exactly one datasource block configured. No more and no less, as multiple datasources are not supported.

To define a datasource block, we can create a schema block with the type datasource, some name (typically db by convention), and its options.

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

Database | Provider String
Postgres | postgresql
MySQL    | mysql
SQLite   | sqlite
MSSQL    | sqlserver
MongoDB  | mongodb

As you may have guessed, here we are telling Prisma we want to use a postgres database. We are also telling it to look in process.env for a variable called DATABASE_URL to get the connection string.

The env() function allows us to access environment variables via dotenv-expand. It can only be used in two places: the datasource url field, and the generator binaryTargets field.

We could have also passed a string to the url option instead of using the env() function.

There are other options available to the datasource block described here. But for now we’ll keep it to what we need.



Generator


The next piece we’ll add is a generator block.

A generator allows you to configure what is generated when you run the command prisma generate. You can configure multiple generators per schema, however by default Prisma sets up a single generator and specifies prisma-client-js as the provider, which builds the Prisma Client.

generator client {
  provider = "prisma-client-js"
}

Note the generator is named client here, but that name could be anything

There are a few different options available to configure things like where to output the generated assets, but for now we’ll stick to the default settings.

Feel free to check out the rest of the config options here.

Our file in total should now look like this:

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

This is all the config we need to define our data connection and configure our generated assets. Now we will move on to the good stuff, modeling out our data!



Models

The model blocks are where we actually tell Prisma what our data looks like and how it should be handled in the Prisma Client.

On a model you can define fields, table/field name mappings, attributes describing the data, and relations to relate models to each other.

A field is made up of a field name, a data type, and any attributes to describe that field of data.

There are tons of different options for our models and fields, and we’ll have to make use of a lot of those to get our bookstore schema going.

Refer to the docs for a full list of types and attributes



Person model

Let’s start off by building out the base of our Person model, which will hold the people who can check in and out books.



@id, @default
model Person {
  id        Int       @id @default(autoincrement())
}

Here we are using two “attributes” that the Prisma schema language provides to describe our id field. First we are letting Prisma know that the field is an @id, which signifies this field is the unique identifier for data in this table. Each model needs to have a unique identifier.

We are also using the @default attribute to specify that the default value for that field should be a number that increments for each row with the autoincrement() function.

We’re going to need more than that to describe our Person though. Let’s add some more fields:



@unique, @updatedAt
model Person {
  id        Int       @id @default(autoincrement())
  firstName String
  lastName  String
  email     String    @unique
  age       Int
  updatedAt DateTime  @updatedAt
}

That’s a bit more like it! Now we’ve got a pretty good model describing our Person.

We've made use of the @unique attribute here to let Prisma know the email field should be unique in that table. No two persons should have the same email!

We also created a column with the @updatedAt attribute, which will cause that column to automatically update with a current timestamp whenever the row of data updates. Fields using this attribute MUST be of type DateTime.

For now that’s all we’ll need for our Person model. Let’s move on to the Book model.



Book model


Just to get things started, let's set up some of the basic fields we know we'll need for our Books:

model Book {
  title         String
  productId     String
  publishedDate DateTime
  description   String
}

These fields are all super simple, but we don't have a unique identifier yet!
Let's create a compound identifier with the book's title and productId fields that will be used as the Primary Key for this table.

Also, let's limit the description field to 150 chars by reaching into Postgres's native types.



@db native types, @@id
model Book {
  title         String
  productId     String
  publishedDate DateTime
  description   String    @db.VarChar(150)

  @@id([title, productId], name: "titleProduct")
}

Prisma allows us to use the @db attribute to specify some of the native types available to whichever database provider we are using.

The compound ID we created specifies that this table’s rows should have unique combinations of title and productId. We’ve also passed it an optional name parameter to name the compound ID. Otherwise it would be generated as just title_productId.

The last thing I’d like to add to our book is a Subject. To do this we’ll set up an enum, which is a feature available only to Postgres, MySQL, and MongoDB.



enum

An enum describes a set of possible values. For a full description of how to use enums, check out prisma’s docs

enum Subject {
  GENERAL
  HORROR
  MYSTERY
  ROMANCE
  EDUCATIONAL
}

Here we set up an enum of Subjects. To use this, we can just create a field on our model and give it the type of our enum.

model Book {
  title         String
  productId     String
  publishedDate DateTime
  description   String    @db.VarChar(150)
  subject       Subject   @default(GENERAL)

  @@id([title, productId], name: "titleProduct")
}

The subject field of our book model will now hold a value that is in the enum Subject. When creating a record in this table, if no value is provided for subject, it will default to GENERAL because we specified it in the @default attribute.

Great! Now that we have a book, we should probably set up an Author model and relate it to the Book model.



Author model


The Author model will hold our author’s details and also relate to a Book so that we can join it to the Book table when querying for details.

First we’ll set up the basic fields our Author will need.



Optional Fields
model Author {
  id        Int     @id @default(autoincrement())
  firstName String
  lastName  String
  birthTown String?
}

You’ll notice a ? next to the String type on the birthTown field. This is a type modifier that signifies the field is optional.

We know that each Author could potentially have many books, so let’s signify this in the model.



List modifier
model Author {
  id        Int     @id @default(autoincrement())
  firstName String
  lastName  String
  birthTown String?
  Books     Book[]
}

This lets us know that our Author will have a potential list of Books that are related to it. The field name can be anything, I chose Books just to make it clear. And the type, as you’ll notice, is Book, which corresponds to our Book model. The [] signifies that it will be an array of books.

This is great but how does prisma know how to relate an Author to a Book? This schema will be invalid unless we set up a relation mapping in the Book model. So let’s go back to our Book model and make some adjustments



@relation
model Book {
  authorId      Int
  title         String
  productId     String
  publishedDate DateTime
  description   String    @db.VarChar(150)
  subject       Subject   @default(GENERAL)
  Author        Author    @relation(references: [id], fields: [authorId])

  @@id([title, productId], name: "titleProduct")
}

So what’s going on here? I’ve gone ahead and added an authorId field to the model that will be used to map to our Author model.


But the more important piece is the new Author field. This field (which could be named anything, I chose Author for clarity) is of the type Author. This type corresponds to our Author model.

On that field we have defined a relation that will be shared between Book and Author.
The references option in the relation points to the field on the Author model we want to match against. The fields option points to the field on the Book model that should match the reference field. And this field is not specified as an array, so we know a Book will have one Author.

And that’s it, we essentially have a one-to-many relationship between Author and Book!
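
Assuming we later run prisma generate (and create the tables with prisma migrate or prisma db push), a rough sketch of how the generated client could exercise this one-to-many relation from JavaScript might look like this (the names and values are purely illustrative):

// authors.js - illustrative sketch using the generated Prisma Client
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  // Create an author together with a related book via a nested write
  const author = await prisma.author.create({
    data: {
      firstName: 'Jane',
      lastName: 'Doe',
      Books: {
        create: [
          {
            title: 'Prisma by Example',
            productId: 'P-001',
            publishedDate: new Date(),
            description: 'A short, made-up example book',
          },
        ],
      },
    },
  });

  // Query the author back, including the related books
  const withBooks = await prisma.author.findUnique({
    where: { id: author.id },
    include: { Books: true },
  });
  console.log(withBooks);
}

main().finally(() => prisma.$disconnect());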

This gets us most of the way toward having our check-in/check-out system modeled. The last piece will be a model to hold our check-in/out log.



BookLog model

Our initial model will just hold some basic details about the book that is being checked out and the person checking it out. We’ll also go ahead and create a relation between the BookLog and Person model.



@map, @@map, now()
model Person {
  <...other fields...>
  log BookLog[]
}

model BookLog {
  id           Int      @map("log_id") @id @default(autoincrement())
  title        String
  productId    String
  checkInTime  DateTime
  checkOutTime DateTime @default(now())
  personId     Int
  person       Person   @relation(fields: [personId], references: [id])

  @@map("book_log")
}

There are a couple of new things going on in this model that we haven’t seen yet.

  • The @map attribute maps a field name in our model to a different column name in the database. In this case, the database table has a column named log_id, which we refer to in our model as id.
  • checkOutTime uses the now() function in its @default definition. This sets the field’s default value to the timestamp at which the record is created.
  • The @@map attribute maps our model to a database table with a different name. In this case, the database table will be book_log, but our model will be BookLog. (A short sketch after this list shows these mappings and the now() default in use.)
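
Here is that sketch: a minimal, hedged Prisma Client example (the title, product ID, and the existing Person with id 1 are all invented). In code we keep using the model-side names bookLog, id, and checkOutTime, while the database writes to book_log and log_id, and omitting checkOutTime lets the now() default fill it in:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function checkOut() {
  // In code we use the model names (bookLog, id, checkOutTime);
  // on the database side this writes into book_log / log_id.
  const entry = await prisma.bookLog.create({
    data: {
      title: 'Dracula',
      productId: 'BOOK-0001',
      checkInTime: new Date(),        // required here, since the field has no default
      // checkOutTime omitted: defaults to now() at creation time
      person: { connect: { id: 1 } }, // assumes a Person with id 1 exists
    },
  })
  console.log(entry.id, entry.checkOutTime)
}

checkOut().finally(() => prisma.$disconnect())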

With that, we now have the ability to query to see which user checked out which book! But what if we wanted to display some details about the book that aren’t available here? Let’s set up a relation to the Book model. This one will be a bit trickier though because the Book model has a compound ID instead of a single primary key!

model Book {
  <...other fields...>
  log BookLog[]
}

model BookLog {
  id           Int      @id @default(autoincrement()) @map("log_id")
  title        String
  productId    String
  checkInTime  DateTime
  checkOutTime DateTime @default(now())
  personId     Int
  person       Person   @relation(fields: [personId], references: [id])
  book         Book     @relation(fields: [title, productId], references: [title, productId])

  @@map("book_log")
}

In our relation to the Book model, we have specified that to match a book to a book log, the Book table should be joined on the title and productId fields.

As shown above, you will need to add a field on the opposite relation that defines an array of BookLog records.
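
To make the compound-key relation concrete, here is a hedged sketch (values invented, and a matching Book and Person are assumed to exist) that connects a log entry to a Book through the titleProduct compound ID and then reads the entry back with its book included:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function logWithBook() {
  // title and productId are filled in from the connected Book,
  // so we don't set those scalars directly here.
  const entry = await prisma.bookLog.create({
    data: {
      checkInTime: new Date(),
      person: { connect: { id: 1 } },
      book: { connect: { titleProduct: { title: 'Dracula', productId: 'BOOK-0001' } } },
    },
  })

  // Join the related Book row when reading the log entry back.
  const withBook = await prisma.bookLog.findUnique({
    where: { id: entry.id },
    include: { book: true },
  })
  console.log(withBook?.book.description)
}

logWithBook().finally(() => prisma.$disconnect())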

We’re pretty much all the way there with our model! The last little thing I’d like to add is a small convenience that should help speed up some queries.

Let’s add an index to the BookLog table on the id and personId fields.



@index
model BookLog {
  id           Int      @id @default(autoincrement()) @map("log_id")
  title        String
  productId    String
  checkInTime  DateTime
  checkOutTime DateTime @default(now())
  personId     Int
  person       Person   @relation(fields: [personId], references: [id])
  book         Book     @relation(fields: [title, productId], references: [title, productId])

  @@index([id, personId])
  @@map("book_log")
}

Nice, now our database will index on these fields! (Probably not necessary, but hey, for science).
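
Nothing changes in how we query; the database simply has an index it can consider. A hedged example of the kind of lookup this index targets (the id and personId values are invented):

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function logsForPerson() {
  // Filters on the columns covered by @@index([id, personId]);
  // the database may use the index to satisfy this lookup.
  const entry = await prisma.bookLog.findFirst({
    where: { id: 42, personId: 1 },
  })
  console.log(entry?.checkOutTime)
}

logsForPerson().finally(() => prisma.$disconnect())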



Wrapping Up

We should at this point have a complete schema set up and ready to handle some data! Here is what our completed file looks like:

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgres"
  url      = env("DATABASE_URL")
}

enum Subject {
  GENERAL
  HORROR
  MYSTERY
  ROMANCE
  EDUCATIONAL
}

model Author {
  id        Int     @id @default(autoincrement())
  firstName String
  lastName  String
  birthTown String?
  Books     Book[]
}

model Book {
  authorId      Int
  title         String
  productId     String
  publishedDate DateTime
  description   String    @db.VarChar(150)
  subject       Subject   @default(GENERAL)
  Author        Author    @relation(references: [id], fields: [authorId])
  log           BookLog[]

  @@id([title, productId], name: "titleProduct")
  @@unique([title, authorId])
}

model Person {
  id        Int       @id @default(autoincrement())
  firstName String
  lastName  String
  dob       DateTime  @map("date_of_birth") @db.Date
  email     String    @unique
  age       Int
  updatedAt DateTime  @updatedAt
  log       BookLog[]
}

model BookLog {
  id           Int      @id @default(autoincrement()) @map("log_id")
  title        String
  productId    String
  checkInTime  DateTime
  checkOutTime DateTime @default(now())
  personId     Int
  person       Person   @relation(fields: [personId], references: [id])
  book         Book     @relation(fields: [title, productId], references: [title, productId])

  @@index([id, personId])
  @@map("book_log")
}

If you set up Postgres locally via Docker, feel free to run prisma db push to build out your database tables on the actual database server. You can then view those tables via the admin view as described in the instructions above.

As you can see, there are a ton of different options that the Prisma Schema Language gives us when setting up our schemas. While we covered a lot in this article, there are still plenty more available. Definitely check out the docs if you’re curious about those.

Thank you for sticking around until this point, and I encourage you to take this schema and play around with some queries to see how the relations work! That’s where some of the real fun comes in!
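
If you want a starting point for that, here is a small, hedged sketch (the person id is invented) that pulls a person’s full check-out history, walking the relations from Person through BookLog to Book and on to Author with nested includes:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function checkOutHistory() {
  // Walk the relations: Person -> BookLog -> Book -> Author.
  const person = await prisma.person.findUnique({
    where: { id: 1 },
    include: {
      log: {
        include: {
          book: { include: { Author: true } },
        },
      },
    },
  })

  for (const entry of person?.log ?? []) {
    console.log(entry.book.title, 'by', entry.book.Author.lastName, 'out at', entry.checkOutTime)
  }
}

checkOutHistory().finally(() => prisma.$disconnect())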

Happy Coding!



How to version Docker Images?

Just as we use git to manage versions of our code base, we can use Docker image tags to manage versions of our applications.

Let’s imagine we have an application called “Icecream”.

We are going to store our “Icecream” application in a repository named “Fridge”. (Note: real Docker repository and image names must be lowercase; “Fridge” and “Icecream” are kept capitalized here only for readability.) First, let’s get root access by executing the command

sudo -i
  • First we need to log in to the registry that hosts our “Fridge” repository (Docker Hub by default)
docker login
  • Now pull the “Icecream” image from your “Fridge” repository
docker pull Fridge/Icecream:plain

# syntax: docker pull repo_name/image_name:version_tag
  • If you don’t have a Docker image named “Icecream” in your “Fridge” repository yet, let’s build one
docker build -t Icecream:plain .
# run this command from the directory that contains your Dockerfile
  • Now that we have “Icecream:plain”, let’s modify it into “Icecream:vanilla” and “Icecream:chocolate”.

  • For that we need to run our Icecream:plain image as a container

docker run -it Icecream:plain bash
  • Add the code for vanilla inside the container, then, from another terminal, run
docker ps
  • Copy the container ID of the running container and commit it as a new image
docker commit <container_id> Icecream:vanilla
  • Now run our “Icecream:plain” image as a container again
docker run -it Icecream:plain bash
  • Add the code for chocolate inside the container, then, from another terminal, run
docker ps
  • Copy the container ID of the running container and commit it as a new image
docker commit <container_id> Icecream:chocolate
  • Now tag the new versions with the repository name and push them to your “Fridge” repository
docker tag Icecream:vanilla Fridge/Icecream:vanilla
docker tag Icecream:chocolate Fridge/Icecream:chocolate
docker push Fridge/Icecream:vanilla
docker push Fridge/Icecream:chocolate

Explore more from us at Doge Algo 🐶

