How to build a REST API using NodeJS

πŸ‘‹ Hey everyone, I know it’s been a long since I posted a new blog πŸ˜…. πŸ‘€ So in this blog post we are doing to build a REST API that would serve as a source of motivation for developers using NodeJS and MongoDB. So let’s get started πŸ„β€β™‚οΈ



What’s an API? πŸ€”

API stands for “Application Programming Interface”: a tool that allows two applications to talk to each other 📞. Let’s understand the meaning of API with a real-life example ✨

So you have built an amazing e-store application and you want other developers to build applications on top of it. Now you have to build some sort of software that communicates between your web service and the developers’ applications, and that’s where an API comes in.



What’s a REST API? πŸ€”

Now that you know what an API is, let’s talk about “REST APIs”. REST stands for Representational State Transfer; it’s one of the most popular types of API architecture. These APIs follow the client-server model, where one program sends a request and the other responds with some data.
The requests use HTTP methods such as POST, GET, PUT, DELETE…

You will have a clearer understanding of APIs and REST APIs once we build a project 👀. So what are we waiting for, let’s dive into coding 👨‍💻.



Setting up the project πŸ› 

Let’s set up our project so that we can start coding πŸ‘¨β€πŸ’».

  1. Create a separate folder for our project
   $ mkdir dev-credits-api
  2. Navigate into the folder
   $ cd dev-credits-api
  3. Initialize the project
   $ npm init
  4. Install the required packages
   $ npm install mongoose express dotenv cors

   # or

   $ yarn add mongoose express dotenv cors
  • Express is the framework we are going to use to build our REST API
  • Mongoose is the tool that we are going to use to communicate with our MongoDB database

    5. Installing nodemon as a dev dependency

     $ npm install nodemon -D
    
     # or
    
     $ yarn add nodemon -D
    
    • Nodemon automatically restarts the server when it detects file changes in the directory. This is helpful as we won’t have to restart the server each time we make changes



Building the REST API πŸ‘¨β€πŸ’»

As we have completed the setup for our project, let’s get started building the REST API.

Create a new file named index.js

Here is the boilerplate code for a basic express app

index.js

const express = require('express');

const app = express();

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

Let’s break it down and understand each part:

  • We are requiring the express package into our file so that we can use it
  • We are assigning a value to the variable port, the port where our server will run. You might be wondering why there is a process.env.PORT 🤔. It’s because during deployment on services such as Heroku the port number might vary, it may not be 3000, so we are saying: if there is a PORT environment variable then use that, else use 3000
  • The last piece of code tells the server which port to listen on, in our case the port variable

Let’s add a new script named start to the package.json file which uses nodemon to automatically restart the server on file changes detected. So after the changes our scripts in package.json would look something like this:

"scripts": {
   "start": "nodemon index.js"
}

Let’s start our server by running the npm start command. The server will be running at http://localhost:3000. You would be prompted with an error something like this:

This is happening because we haven’t defined the / (aka the root route)



HTTP methods explained

Let’s take a break from coding and understand what each HTTP method does and what its success and error statuses are, so that debugging becomes easy 😎



GET

What it does: Request data from a specified resource

Successful response: 200 OK

Error response: 404 Not Found



POST

What it does: Send data to the server to create a new resource

Successful response: 201 Created

Error response: 404 Not Found, or 409 Conflict – if the resource already exists



PUT

What it does: Send data to the server to update a pre-existing resource

Successful response: 200 OK

Error response: 204 No Content, 404 Not Found, or 405 Method Not Allowed



DELETE

What it does: Deletes a resource from the server

Successful response: 200 OK

Error response: 404 Not Found or 405 Method Not Allowed

Check out http.cat to understand what each HTTP status code means via funny cat images 😹
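The conventions above can be condensed into a tiny helper. This is purely an illustrative sketch of the table above: statusFor is a hypothetical function name, not part of Express or Node.

```javascript
// Illustrative mapping of the CRUD outcomes described above to status codes.
// statusFor is a made-up helper for this explanation, not a real API.
function statusFor(method, resourceFound) {
  if (!resourceFound) return 404; // Not Found: the common error for every method
  if (method === 'POST') return 201; // Created: a new resource was made
  return 200; // OK: success for GET, PUT, DELETE
}

console.log(statusFor('GET', true)); // 200
console.log(statusFor('POST', true)); // 201
console.log(statusFor('PUT', false)); // 404
```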



Adding routes πŸ›£

Routes are different URL paths of an express app that are associated with different HTTP methods, such as GET, POST, DELETE, PUT.

Let’s get started by creating / which sends “Hello, World!”

Add the below piece of code above the line where we declared the port variable

index.js

app.get('/', function (req, res) {
  res.send('Hello, World!');
});

Let’s break down this piece of code:

  • The get method specifies the HTTP method for that route. You could use other HTTP methods like post, delete
    • There is a special routing method all which is used for the routes which handle all kinds of HTTP methods
  • There is a callback function that is called when the server receives a request on that endpoint with the specified HTTP method

πŸ₯³ Horray! “Hello, World” is now visible in the / route



Setting up MongoDB

Let’s set up the MongoDB database now 😎.

Head over to MongoDB, sign up/sign in, and create a new project

You could invite your co-workers into the project if you wanted to.

After the creation of the project, click on Build a Database

You would be shown a screen something like this:

Let’s go ahead and choose the free plan πŸ‘€

You would be shown some more options about the cloud provider and the location

Let’s choose the nearest region and move forward.

You would be asked to create a user. This is required as you would need the username and password to generate a connection URL which is then used to connect MongoDB with your NodeJS app.

The creation of the cluster would take 1 – 3 minutes. So let’s grab a cup of coffee until then β˜•. Ahh… it’s been successfully created so let’s get back to coding πŸ‘¨β€πŸ’»

Click on Connect

Click on Connect your application

Copy the connection URL

Create a .env file and replace <password> with the password of the user which you created previously

MONGODB_URL="mongodb+srv://kira272921:<password>@dev-credits-api.t5tkf.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"

Let’s head back to the good old index.js file



Connecting Express app to MongoDB

Let’s start by requiring mongoose and dotenv

const mongoose = require('mongoose');
const dotenv = require('dotenv');

Let’s configure dotenv as well

dotenv.config();

Let’s finally add the piece of code which connects our express application to MongoDB

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.log(err);
  });

The index.js file should now look something like this

index.js

const express = require('express');
const mongoose = require('mongoose');
const dotenv = require('dotenv');

dotenv.config();

const app = express();

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.log(err);
  });

app.get('/', function (req, res) {
  res.send('Hello, World!');
});

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

πŸ₯³ We successfully connected our express app to the MongoDB database.



Creating Schema and Model πŸ“

A Schema defines the structure of the documents in our database. It tells which fields are required and what the data type of each field is.

A model provides a programming interface for interacting with the database (read, insert, update, etc).

Let’s create a new folder named model and inside it let’s create a model.js where we will define our schema

model/model.js

const mongoose = require('mongoose');

const devCredits = new mongoose.Schema({
  credits: {
    type: Number,
    required: true,
  },
  id: {
    type: Number,
    required: true,
  },
});

module.exports = mongoose.model('devCredits', devCredits);

Let’s break it down and understand

  • We imported the mongoose package into the model/model.js file
  • We created a new schema named devCredits. The structure has the credits and id fields. credits is the number of dev credits the person has and id is the Discord ID of the user (this API was initially created for a Discord bot, Dev credits bot, so the schema of the database is kinda based on Discord 🤷‍♂️)
  • We have finally created a model named “devCredits”



Adding more features 😎

Let’s add more routes to our REST API: a route where we can get the total dev credits of a user via their Discord ID, and another route to give dev credits to other users.



Giving dev credits to other devs

Let’s import our model which we have just created into the index.js file.

const devCredits = require('./model/model.js');

Let’s add a new POST route in the index.js file

app.post('/post', function (req, res) {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        {
          $inc: {
            credits: req.body.credits,
          },
        },
        { new: true },
        (err, devCredit) => {
          if (err) {
            res.send(err);
          } else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) {
          res.send(err);
        }
        res.json(credits);
      });
    }
  });
});

Let’s understand what exactly is going on:

  • We have created a new POST route (/post)
  • We validate the data which we receive from the client using our model
  • In the next piece of code we check whether the user (user ID) already exists in the database or not
    • If they exist, we increment their credits value
    • Else we create a new document with the user ID and add the credits



How to test the API?

We have successfully added a new feature to our API 🥳. But wait, how are we going to test it out 🤔

πŸ‘€ We are going to use a VSCode extension called Thunder Client, which is used for API testing. So let’s quickly download it and test our new feature in our API πŸ₯³.

After the completion of the download, you are going to see a thunder icon in your sidebar πŸ‘€

Click the thunder icon and you are going to see a section something like this

Click on New Request. You would be prompted with a screen something like this

Let’s test out our /post route now 🥳. Change the URL in the input box from https://www.thunderclient.com/welcome to http://localhost:3000/post

Change the HTTP method from GET to POST

Navigate to the Body tab, this is the section where we are going to write the body of the request.

I have added my discord ID and gave 100 dev credits to it, cuz why not

Let’s click Send and hope that it works 🤞

πŸ₯πŸ₯πŸ₯πŸ₯πŸ₯ and we got an error

This happened because we haven’t added any middleware, so let’s add them quickly

index.js

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

NOTE: We installed cors as a separate package, so don’t forget to require it as well
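Why did the middleware fix the error? express.json() is what turns the raw request body into the req.body object our route reads. Here is a rough, simplified sketch of that idea; parseJsonBody is a made-up name, and the real middleware also checks the Content-Type header, size limits, and reads the body as a stream:

```javascript
// Simplified sketch of what a JSON body-parsing middleware does.
// parseJsonBody is hypothetical; use express.json() in real apps.
function parseJsonBody(rawBody) {
  try {
    return JSON.parse(rawBody); // becomes req.body on success
  } catch (err) {
    return undefined; // the real middleware would respond with 400 Bad Request
  }
}

console.log(parseJsonBody('{"id": 272921, "credits": 100}'));
// { id: 272921, credits: 100 }
```

Without this step, req.body stays undefined, which is exactly why our first request failed.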

Let’s try again and hope it works this time 🤞

πŸŽ‰ TADA! We have successfully created our first feature in the API which interacts with the MongoDB database



Getting the total dev credits of a user

Let’s import our model which we have just created into the index.js file.

const devCredits = require('./model/model.js');

Let’s add a new route in the index.js file

app.get('/get/:id', function (req, res) {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) {
      res.json(err);
    }
    res.json(data);
  });
});

Let’s break this down

  • We have created a new route with the GET method
  • We search the database for the ID given in the parameters

Let’s test it out again using Thunder Client πŸ‘€.

πŸŽ‰TADA! It’s works



Cleaning up the codebase

Let’s clean up the codebase a bit πŸ˜….

Let’s create a new folder called routes and inside it let’s create a new file router.js which contains the routes

routes/router.js

const router = require('express').Router();
const devCredits = require('../model/model.js');

router.get('/get/:id', function (req, res) {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) {
      res.json(err);
    }
    res.json(data);
  });
});

router.post('/post', function (req, res) {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        {
          $inc: {
            credits: req.body.credits,
          },
        },
        { new: true },
        (err, devCredit) => {
          if (err) {
            res.send(err);
          } else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) {
          res.send(err);
        }
        res.json(credits);
      });
    }
  });
});

module.exports = router;

Then we import the routes/router.js file into the index.js file and use it

index.js

const express = require('express');
const mongoose = require('mongoose');
const dotenv = require('dotenv');
const cors = require('cors');

dotenv.config();

const router = require('./routes/router.js');

const app = express();

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.log(err);
  });

app.get('/', function (req, res) {
  res.send('Hello, World!');
});

app.use(router);

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

Let’s test it out so that we are sure we didn’t mess anything up while cleaning up 😆

πŸ₯³ Horray! There isn’t any error and the code still works as it was before

πŸ˜… Doesn’t routes/router.js seem kinda filled up with the logic and make it kinda messy?

Let’s create a new folder named controllers. In this folder, we will store the logic related to each route.

Let’s get started by creating two new files in the controllers folder, getCredits.js and postCredits.js, which contain the logic related to the /get route and /post route respectively

controllers/getCredits.js

const devCredits = require('../model/model.js');

const getCredits = (req, res) => {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) {
      res.json(err);
    }
    res.json(data);
  });
};

module.exports = getCredits;

controllers/postCredits.js

const devCredits = require('../model/model.js');

const postCredits = (req, res) => {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        {
          $inc: {
            credits: req.body.credits,
          },
        },
        { new: true },
        (err, devCredit) => {
          if (err) {
            res.send(err);
          } else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) {
          res.send(err);
        }
        res.json(credits);
      });
    }
  });
};

module.exports = postCredits;

routes/router.js

const router = require('express').Router();

const devCredits = require('../model/model.js');
const getCredits = require('../controllers/getCredits.js');
const postCredits = require('../controllers/postCredits.js');

router.get('/get/:id', getCredits);

router.post('/post', postCredits);

module.exports = router;

Phew, that was a lot of work 😹



Adding rate limit

You don’t want some random guy to just spam your entire database 😆. So let’s add a rate limit to our API which restricts the client to only a few requests every x minutes

Let’s install express-rate-limit package

$ npm install express-rate-limit

# or

$ yarn add express-rate-limit

Let’s create a middleware folder that contains all the middlewares of our API. Create a file named rateLimiter.js under the middleware folder

middleware/rateLimiter.js

const rateLimit = require('express-rate-limit');

const rateLimiter = rateLimit({
  windowMs: 1 * 60 * 1000, // 1 minute
  max: 10,
  message: 'Bonk 🔨',
});

module.exports = rateLimiter;

Let’s understand what this piece of code is doing:

  • We are importing the express-rate-limit package
  • The windowMs specifies the duration of the window
  • The max specifies the maximum number of requests the client can make in that duration
  • The message is the message shown to the client when they exceed the max limit
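To see how windowMs and max work together, here is a small dependency-free sketch of the same fixed-window idea that express-rate-limit implements for us. The names and shape are mine, not the library’s, and the time is passed in explicitly just to keep the example deterministic:

```javascript
// Fixed-window rate limiter sketch: allow `max` requests per `windowMs` per key.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // false -> the client would get 'Bonk 🔨'
  };
}

const allow = createLimiter({ windowMs: 60000, max: 2 });
console.log(allow('client-a', 0));     // true  (1st request in the window)
console.log(allow('client-a', 1000));  // true  (2nd request)
console.log(allow('client-a', 2000));  // false (limit exceeded)
console.log(allow('client-a', 61000)); // true  (a new window has started)
```

The real library also handles response headers and per-route configuration, so we still use the package rather than this sketch.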

So let’s import it into the index.js file and test it out

index.js

const rateLimiter = require('./middleware/rateLimiter.js');

app.use(rateLimiter);

😹 I got bonked by myself



Deploying our API on Heroku

πŸ‘€ We have successfully built an API but how would other developers use it if it isn’t deployed?

Let’s deploy it on Heroku πŸš€.

Get started by initializing a git repository in the directory. Create a new GitHub repository and push your changes into that repository πŸ‘€

Let’s create a new file named Procfile, which is just a file that tells Heroku which command needs to be run. Add the below content to the Procfile

web: node index.js

NOTE: nodemon doesn’t work in production. It only works during development, so we have to use the good old node index.js

Create an account on Heroku, click on Create new app, and give your API some cool name

Head over to the settings tab and click Reveal Config Vars

These are the environment variables

Add a new config var with the key as MONGODB_URL and the value as your MongoDB connection URL

Head back to the deploy tab and connect the GitHub repository which you have created just before to your Heroku application

Click the Deploy branch button. TADA πŸš€ You have successfully created a REST API and deployed it as well πŸ˜€

The entire source code for this tutorial is available on my GitHub: https://github.com/Kira272921/dev-credits-api

Check out the API which we built today:

https://devcredits-api.herokuapp.com/

That’s it for this blog folks 🀞. Meet y’all in the next blog post



How to Connect Your Local Project’s Codebase to a GitHub Repository Fast!

GitHub is one of the most powerful tools for developers, whether you are working on your project solo or working amongst members of a team. Git and GitHub add a version control layer to your code so anyone can see the change history, the edits, and also see various branches of the codebase.

In this episode of the Tech Stack Playbook, we are going to review the process of uploading a local codebase repository from a computer to GitHub from the command line.

This episode is packed with content, so here’s a glance at what you’ll learn about below, and a series of sections further down in this blog post highlighting the important topics we discussed:

Time stamps:
00:00 GitHub 101
02:15 Set up your code project locally
03:20 Create an empty repository in GitHub
04:47 Initialize your GitHub connection locally
10:28 Review the pushed changes in GitHub
10:53 Set up GitHub Desktop to manage our repository
11:33 Push new changes via GitHub Desktop to GitHub
12:57 Wrap-up and reflection on what we set up with GitHub




πŸ‘¨β€πŸ’» GitHub 101


GitHub is one of the most powerful tools for developers, whether you are working on your project solo or working amongst members of a team. Git and GitHub add a version control layer to your code so anyone can see the change history, the edits, and also see various branches of the codebase.

I like to think of GitHub as the code-version of Google Docs. You can switch back to a previous version of your document, make edits and push those in real time, and also collaborate with others on the same version of the document.

Another major benefit to GitHub is branching, allowing you to have different states of your codebase for different reasons. A common practice for codebases involves 3 core branches: dev, stage, and prod. The dev branch is what you build from to test, debug, and add new features. The stage branch is for new additions that are ready for review ahead of going to prod – the reason being, you need to thoroughly test an addition to make sure it is ready for users, so you don’t mess with the client-facing build. The prod, or production, version of your codebase is what is running live for your clients or customers or users. This (hopefully) is free of bugs and errors because of the previous two steps to push code to this stage.

However, if you are working on your project solo, you might only need 2 core branches: main, a version for you to build/test your app, and prod, a version in production that is always live.
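The branch layouts described above take only a few commands to create. A quick sketch (the branch names follow this article's dev/stage/prod convention, nothing here is required by git itself; `git init -b` needs git 2.28 or newer):

```shell
# Create a throwaway repo whose default branch is prod, then add stage and dev.
cd "$(mktemp -d)"
git init -b prod -q
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit" -q
git branch stage   # review/QA branch ahead of prod
git branch dev     # day-to-day development branch
git branch --list  # lists dev, prod (current), and stage
```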

In today’s tutorial, we are going to review the process of uploading a local codebase repository from a computer to GitHub from the command line. In each of these below steps, I denote which ones are things you do (local) – on your computer, or (web) – on the GitHub website.



πŸ‘¨β€πŸ’» Step 1: Set up your code project folder (local)

For this example, I have created a ReactJS Soundcloud Clone application with the create-react-app framework and implemented the AWS Amplify framework with Cognito identity and access management, DynamoDB NoSQL database storage, S3 object storage for media items, and AppSync to help us manage a GraphQL API. The app allows users to create an account that then allows them to upload songs to the cloud through the app and then play those media files through the built-in player. Stay tuned for a full tutorial on this build coming soon ☺️

If you do have a local codebase on your computer that you want to push to GitHub, feel free to jump right into Step 2 below.

If you do not have a local codebase on your computer to push to GitHub, you can spin up a practice repo with either a React.js or NEXT.js template below to get started:

For React, run:

npx create-react-app techstackplaybookpracticerepo

For Next, run:

npx create-next-app --example with-tailwindcss techstackplaybookpracticerepo

Once you have a folder for your app created with one of these frameworks, move onto Step 2 below.



πŸ‘¨β€πŸ’» Step 2: Create an empty repository in GitHub (web)

When you go to https://github.com, at the top right, when you click on your profile avatar, there is a drop-down of menu items.

Click on the drop-down item that says β€œYour Repositories” which will bring you to a page that lists out all of the repositories in your GitHub account. There will be a green button that says β€œNew” – make sure to click that to pull up the create repository flow.

There will be a number of options to select, but here’s a quick guide:

  • Repository template: (keep default option)
  • Repository name: TechStackPlaybookPracticeRepo
  • Description: (optional)
  • Public/Private: Public
  • Initialize this repository with: (keep these options unchecked)

When you are ready, click β€œCreate repository” to finalize the setup of an empty repository in GitHub.

When the empty repository page loads, the link will look something like this: https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo

You will notice on this page, there is a URL that will be to the right of the HTTPS button. It will look like this: https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo.git. You will want to copy this URL down as we will need it in Step 3 later on.



πŸ‘¨β€πŸ’» Step 3: Initialize your GitHub connection (local)

From the root of your project folder (the outermost folder that wraps everything, for me this is called soundcloud which contains my /amplify folder, /public folder, /src folder, etc.), make sure that your terminal window is set at this level.

You will initialize an empty git repository with a branch called main with the following:

git init -b main

This will create a hidden folder called .git which will actually save and store all of our version control changes. It’s almost like a cookie that connects our local repository to the GitHub version.

Next, we add our locally created files to this .git folder with the following:

git add .

We then want to commit these files we’ve added onto main to our specific repository that we are initializing for GitHub with:

git commit -m "First Commit to GitHub"

This will probably add a lot of files listed out. Make sure that .gitignore is included in this list of added files and includes node_modules so that you don’t upload a gazillion node_modules files to GitHub ☺️

We will now use the URL that we copied down in Step 2 to tell git where to send our files:

  • make sure to change YourGitHubHandle to your actual account:
  • make sure to change TechStackPlaybookPracticeRepo to the name of your actual repo you created on GitHub
git remote add origin https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo.git

What this is effectively doing is telling git that the files from our local repository are going to be pushed to the origin, this empty GitHub repository on the web.

We can verify the new remote with this:

git remote -v

You will then see 2 lines printed in the terminal, one that ends with (fetch) and one that ends with (push). These are the URLs git will use to fetch from and push to this GitHub repository in the cloud.

Now that we’ve initialized the connection, we will push our code locally to the origin main which we’ve set as the destination in GitHub:

git push -u origin main

This will enumerate all the objects we want to push, compress them, and push them to the GitHub repository, setting our local main branch to track the main branch on origin.



πŸ‘¨β€πŸ’» Step 4: Review the pushed changes in GitHub (web)

On our GitHub repository page (https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo), what was once empty, upon refreshing the page, should now show our codebase that we had locally on our computer now on this web page.

What we have done is create a synced pair between our local repository and our GitHub repository (origin). However, this only covers our most recent changes. What if we want to make ongoing pushes to our GitHub repository as regular backups? We will review this with a tool called GitHub Desktop in the next step below.



πŸ‘¨β€πŸ’» Step 5: Set up GitHub Desktop to manage our repository (local)


GitHub Desktop, a Microsoft-created GitHub manager, is a GUI (graphical user interface) client/platform that creates an easy and efficient way to manage our GitHub repository right from our computer without needing to worry about typing the right command line scripts and sequences in the terminal.

While it is very important to understand what is happening behind the scenes at the terminal level, for us to move fast, we need tools and ways to expedite and automate our work flow processes. When you are typing in the terminal, spelling errors and human error can cause us to make mistakes, errors, or lose precious time. GitHub Desktop helps developers move faster with their repositories and has been an amazing tool in my workflow.

As a side note, there are other GUIs for Git and SCM (source control management) tooling, such as GitKraken, which is optimized for Azure DevOps, as well as GitLab.

We will need to create a new repository in our GitHub Desktop client because while the repository is synced with github.com, our GitHub Desktop client wouldn’t have been updated to track this repository yet until we allow it.

In the β€œAdd” drop-down on the button to the right of the text field in the GitHub Desktop client, you will select the drop-down option: Add Local Repository

When we have the option to β€œChoose” a folder, we will want to select the outermost folder container for our project. For you, this might look like: /user/Documents/GitHub/TechStackPlaybookPracticeRepo

Once the outermost folder is selected, we will click Add Repository

This will now connect to our hidden .git folder, and anytime we make changes and save them in our code editor, GitHub Desktop will show those changes reflected in the GUI.



πŸ‘¨β€πŸ’» Step 6: Push new changes via GitHub Desktop to GitHub (local)

In GitHub Desktop, we should see 1 or more file changes reflected in the list of β€œchanged files” on the left half of the app. In this video, I updated the README.md file, so that is why it has a check-mark next to README.md and the app says 1 changed file at the top.

In the bottom right, we will give our commit a name, which can be anything you wish. I said: Updated Readme for YouTube!. You can also write a description if you want, but it is optional.

At the top, you will see I have the current branch set to main, as I only have 1 branch created for this video.

When everything looks good, you will click the blue button at the bottom left that says “Commit to main”

The bottom right button should now say Push origin, and once you select this, it will send those updated changes committed to our local remote branch to the main GitHub branch on the web.



πŸ‘¨β€πŸ’» Step 7: Review the pushed changes in GitHub (web)

On our GitHub repository page (https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo), upon refreshing the page, you should see your changes reflected in the online version of the codebase, matching your changes locally as well.

In this example, the README.md file reflects the change and in the file/folder list, you will see that all the folders/files have the commit message First Commit to GitHub except for one, which is that README.md file. It has a message that reads the same message we put into GitHub Desktop: Updated Readme for YouTube!

Check out the full recording below:


Let me know if you found this post helpful! And if you haven’t yet, make sure to check out these free resources below:

Let’s digitize the world together! πŸš€

— Brian





DevTips Daily Update 21/01/22

So this week’s DevTips daily tutorials have been a bit truncated – I was ill at the start of the year with COVID so I’m just catching up with things and starting to feel better!

I did publish a few videos, however, carrying on with our end-to-end project: looking at PM2 logs, securing the ports of our Digital Ocean droplet, creating specific Express routing files, and finally doing a review of our progress on the project by looking at our User Stories!

Here’s a link to each individual tutorial:

Viewing PM2 Logs

Securing a MongoDB database with Digital Ocean

Creating Express Routing Files

User Story Review

Next week we’re going to be carrying on with the project again, starting to make progress on the front end side of things, giving the user something to generate the short URLs.

Thanks for watching πŸ‘



JS Coding Question #11: Are Two Object Equal [πŸ’₯3 SolutionsπŸ’₯]

…And the series continues, after a couple of React Interview Questions and Coding articles. πŸ‘

Now, this interview question can be tricky, as you need to know how to recurse in order to solve this problem/challenge. Even seasoned engineers often stumble over this question, so better not to underestimate it; be ready. A video format is available below if you don’t feel like reading, and here is a Codepen if you want to edit/play around with the code.



Interview Question #11:

Write a function or program that checks if two objects are equal.

There can be many solutions to a problem/challenge: some efficient, some less so; some elegant, some less so. If you have any other than the 3 solutions I have, please share so others may benefit. Below are my 3 solutions.



Solution #1:

JSON.stringify

const sortString = (str) => str.split("").sort().join("")

function isEqual(obj1, obj2) {
  const a = JSON.stringify(obj1);
  const b = JSON.stringify(obj2);

  // sort so the check tolerates object properties that are not in the same order
  return sortString(a) === sortString(b);
}

This approach looks very dirty/hacky, but it may still be useful when comparing smaller objects. It is easy and fast to write and does not need any libraries, which would add overhead to the final JS bundle. Since it is fast to write, it can also be useful for a quick sanity check to verify whether two objects are equal. Be aware, though, that comparing sorted characters can also report two different objects as equal, so treat it strictly as a rough check.



Solution #2:

Using a lib

function lodashEqual(obj1, obj2) {
  return _.isEqual(obj1, obj2);
}

Most codebases already have a JS utility library. What I like about libraries is that they solve common problems effectively and are well-tested to cover edge cases. I love open-source libs and am a firm believer in not reinventing the wheel for problems that have already been solved.



Solution #3:

Custom Approach

function deepEqual(obj1, obj2) {
  // ensure that arguments are objects
  return obj1 && obj2 && typeof obj1 === "object" && typeof obj2 === "object"
    ? // return false right away if the objects' property counts are not equal
      Object.keys(obj1).length === Object.keys(obj2).length &&
        // use reduce, setting the initial value (equal so far) to true
        Object.keys(obj1).reduce((prev, curr) => {
          // then recurse as deep as possible, continuing while values are objects
          return prev && deepEqual(obj1[curr], obj2[curr]);
        }, true)
    : // just do a normal compare if not an object
      obj1 === obj2;
}

This custom approach requires recursion to be able to compare nested, deep objects. Many interviewers will want to see you draft your own solution, as they want to see how you communicate, think, and code at the same time.
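To make the trade-offs concrete, here is a self-contained sanity check you can run in Node (looseEqual stands in for the plain JSON.stringify idea behind Solution #1, and deepEqual mirrors the recursive idea of Solution #3, using every instead of reduce):

```javascript
// Plain stringify comparison is sensitive to key order,
// while the recursive comparison is not.
const looseEqual = (a, b) => JSON.stringify(a) === JSON.stringify(b);

function deepEqual(obj1, obj2) {
  return obj1 && obj2 && typeof obj1 === "object" && typeof obj2 === "object"
    ? Object.keys(obj1).length === Object.keys(obj2).length &&
        Object.keys(obj1).every((key) => deepEqual(obj1[key], obj2[key]))
    : obj1 === obj2;
}

console.log(looseEqual({ a: 1, b: 2 }, { b: 2, a: 1 })); // false (key order)
console.log(deepEqual({ a: 1, b: 2 }, { b: 2, a: 1 }));  // true
console.log(deepEqual({ a: { b: 1 } }, { a: { b: 2 } })); // false
```

Note how the plain stringify comparison fails as soon as property order differs, which is exactly why Solution #1 resorts to sorting.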



Final Thoughts:

If I am interviewing someone and they can tell me about the solution of using a library AND pseudo-code a custom solution that recurses through the objects, I would be satisfied with those answers. How about you? What are your thoughts?




Task Management App Tutorial with Python

In this tutorial, we’ll first build a simple to-do app in Django using Python. Then we’ll explore some ways we can visualize the resulting data in our database client. Want to build an analytics dashboard or twitter scraper instead? Check out these previous tutorials:



Prerequisites

To fully understand this tutorial, you will need the following:



Python setup



Installing Python

First, let’s install Python using this link. Then, we can select our OS and download it. This tutorial was built on Windows. We can run the command python --version to check whether Python is successfully installed on our system. If it is, we get a response as shown below:

Screenshot of a successful Python install.



Creating a virtual environment

Let’s create an isolated virtual environment for this project. Virtual environments help us avoid installing dependencies globally on our system. This helps to make sure that the dependencies of other projects won’t be included within the dependencies of our current project, even though they’re all on the same local machine. To do that, run the command python -m venv env (here we name the environment β€œenv”).

Screenshot of a virtual environment.

Next, activate the virtual environment by navigating to the base directory of our new virtual environment (in our example, it has the name “env”). Then, navigate into the Scripts folder and run the executable file “activate.”

Screenshot of a command line.



Django Setup

You can install the Django web framework with pip (pip install django) and then follow these steps.



Create a project

Now, let’s create the project. We can do this by running the command django-admin startproject followed by the name of the project (in this case, taskmanagement). The full command should look something like this:

django-admin startproject taskmanagement

Here, we can name the project however we like, but be sure to be descriptive.



Create an app

Let’s navigate to the taskmanagement folder and create an application. To do that we use the command python manage.py startapp then the name of the app, which in our case we’ve named task_management_app.

Screenshot of Python code.

The app is created successfully. We can now open the project in our favorite IDE.



Migrations

In Django, we use migrations to propagate changes we make to our models (adding a field, deleting a model, etc.) into our database schema. First, we need to make migrations for our database. Run the command python manage.py makemigrations, which creates new migrations based on the changes made to the models in a project. The output is shown below.

Screenshot of Python code

Now let’s run our app to check that everything is installed and our project is properly set up. To do this, change directory into our project and run the command python manage.py runserver:

Screenshot of server spinup.

We have started our development server at http://127.0.0.1:8000/! We can now open our browser and access our app using this URL. If everything works fine, the default Django page shown below is displayed.

Default project landing page.

Now we want to make some configurations inside the settings.py file. The primary thing we will do is add the app’s name to the list of installed apps, as seen directly below, and then add the MySQL database to Django.

Adding app to the list of installed apps.



Editing Settings.py to add MySQL to Django

In your settings.py file, edit the code below and replace it with the details required to connect to your SQL server:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'database_name',
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'USER': 'username',
        'PASSWORD': 'password',
    }
}



Todo list app data model

Next, we want to create a model. This model determines how tasks are stored in the database. A task will have seven properties: a title, a description, a startTime, an endTime, a completed status, and created_on and updated_on columns to mark when the task was first created and last updated.

Note: our startTime and endTime use the DateTime field. These hold the date and time at which the task was started and completed, respectively. We have set the created_on field’s auto_now_add property to true, which automatically fills the field with the date and time at which the task was created.

Screenshot of defining the fields.
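For reference, a minimal sketch of what the model in the screenshot might look like (this is a reconstruction, not the exact code; the field names follow the ones used later in the admin and serializer):

```python
# task_management_app/models.py (sketch, reconstructed from the screenshot)
from django.db import models

class Task(models.Model):
    title = models.CharField(max_length=100)
    description = models.TextField()
    startTime = models.DateTimeField()
    endTime = models.DateTimeField()
    completed = models.BooleanField(default=False)
    created_on = models.DateTimeField(auto_now_add=True)
    updated_on = models.DateTimeField(auto_now=True)
```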

Now let’s go ahead and make migrations. Any time changes are made to our models, we need to make migrations. Once that is completed, we will find a new file inside the migrations folder as shown below.

Screenshot of Python code.

We didn’t write this code manually. This is the migration code that Django generated for us; Django turns it into SQL when the migration is applied. Next, we migrate with python manage.py migrate. Migrate is responsible for applying and reapplying migrations.

Running the migrations.

Now we register the Task model on the admin panel, so we can test that the CRUD operations are successful, as follows:

from django.contrib import admin
from .models import Task

class TaskAdmin(admin.ModelAdmin):
    list_display = ("title", "description", "startTime", "endTime", "completed", "created_on", "updated_on")

admin.site.register(Task, TaskAdmin)

Screenshot of IDE



Task management API

To build the API we will install and use the Django REST Framework.



Creating a superuser and adding the rest_framework

Let’s create a superuser with the following command: python manage.py createsuperuser

Screenshot of creating a superuser

Next, we’ll need to add the Django rest_framework to the list of installed apps.

Screenshot of IDE



Creating the serializers

JSON stands for JavaScript Object Notation. Serializers convert model instances to JSON so that the frontend can work with the received data.

from rest_framework import serializers
from .models import Task


class TaskSerializers(serializers.ModelSerializer):
    class Meta:
        model = Task
        fields = ("id", "title", "description", "startTime", "endTime", "completed", "created_on", "updated_on")



Creating the view

The view is simply a Python function that takes an HTTP request and returns a response. This response may be the HTML contents, for example. The view itself houses the business logic that is necessary to return a response to a client that makes an HTTP call to it.

from django.db.models.query import QuerySet
from django.shortcuts import render
from rest_framework import viewsets
from .serializers import TaskSerializers
from .models import Task

# Create your views here.

class TaskView(viewsets.ModelViewSet):
    serializer_class = TaskSerializers
    queryset = Task.objects.all()

Screenshot of IDE



The URL

The URL is used to map path expressions to Python functions (your views). It first receives HTTP requests and routes them to the right matching function in views.py to handle the request. Now let’s go to the URL that is in the project folder and add the following code:

from django.contrib import admin
from django.urls import path
from django.urls.conf import include
from task_management_app import views
from rest_framework import routers

router = routers.DefaultRouter()
router.register(r'tasks', views.TaskView, 'task')

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include(router.urls)),
]

Screenshot of IDE.



Accessing our endpoints

To access our endpoint run your app and visit http://127.0.0.1:8000/api/tasks/ and add a task.

Screenshot of application.

Boom! We have it on our task manager. We can get a particular task by appending the Id of the task to the URL. We can also update or delete a particular task.

This section has taught us how to build a task management app that performs CRUD operations. Now, we will use a SQL client to visualize the data on our database.



Visualize and query task data

Next, we select the SQL type we are most familiar with. In the course of this article, we will be connecting to a MySQL relational database.

Connecting to the DB.



Visualizing tables

I made a sample data .csv for this tutorial. First, I created a Google sheet, exported it to a .csv, and then imported it. This data table contains columns: id, title, description, completed, created on, startTime, endTime, and updated on. This is our exported data set in Table view:

Screenshot of Arctype.



Creating a dashboard to group together activities

Now, we are going to display our list of activities. First, we’ll write a SQL query to display our data set. Click on the Queries tab in the sidebar and select the Create query button.

Screenshot of Arctype

We can start by renaming our query in the editor’s heading and saving it:

SELECT COUNT(*) AS tables_priv FROM `excel_for_arctype_data_sheet_1_1`;

The result we get should look like the screenshot below:

Screenshot of Arctype

Next, we can now click on the Dashboard tab to create a new dashboard. Then, we rename our dashboard with our preferred name.

Next, click on “Add tab” and select Chart:

Adding a chart.

Then, click on the select chart data and select activities_count:

Screenshot of Arctype.

Change the title of the dashboard component and select Score Card as the chart type. Then we can drag the tables_priv column in to be displayed.

Screenshot of Arctype.

Next, we create a table component to display the activities that were most productive.

Create a new query called activities_complete. We can then carry out the following commands in it:

SELECT 
    SUM(completed) AS list_of_completed_activities 
FROM 
    `excel_for_arctype_data_sheet_1_1` 
WHERE 
    title = 'Atitle';

Screenshot of a query in Arctype.

Then, click on the add button from the activity dashboard section we already created and select table:

Screenshot of Arctype

Lastly, we click on the select chart data button and select the Title query we created earlier.

Screenshot of Arctype



Creating a pie chart

Here, we are going to create chart components with Arctype to display our datasets in pie charts.

We start the process by using our previous query command. Then, simply select the chart type and change it to Pie Chart and drag the tables_priv to the column category.



Mixing different types of charts in one dashboard

Here, we are going to create chart components with Arctype to display our datasets in bar charts. We repeat the same process as we did with pie charts.

We change the chart type to Bar Chart and drag tables_priv to the column category. Then, we configure the chart to match whatever settings we want.



Conclusion

This article was divided into two sections. First, we learned how to use a Python framework (Django) to create a task management system. Second, we explored the data model of our task management app using a SQL client: we used Arctype to visualize our tables and columns using queries and dashboards, and learned how to showcase our dataset in charts such as pie and bar charts.



ES6: How to Clone an Object in javascript ?

Hey folksπŸ‘‹ hope you’re doing well.
So you must be thinking: β€œClone an object”??? What’s the big deal?
Well, I also thought that way, until I ran into an issue that took me 2 days to debug, only to discover I had done something terrible when cloning an object.

So let’s see how we can clone objects in JavaScript

// we have a user object
const user = {
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com"
}

Now, if we want to copy this user object? Simple!

const copiedUser = user;

Easy, right?… Well, that’s the worst way of copying a user. It shows a misconception about what the statement const copiedUser = user; actually does.

In JavaScript objects are passed and assigned by reference (more accurately the value of a reference), so user and copiedUser are both references to the same object.

// [Object1]<--------- user

const copiedUser = user;

// [Object1]<--------- user
//         ^ 
//         |
//         ----------- copiedUser

As you can see after the assignment, both references are pointing to the same object.

const user = {
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com"
}

const copiedUser = user;
copiedUser.name = "XYZ"
console.log(copiedUser) // { name: "XYZ", email: "st.deepak15@gmail.com" }
console.log(user) // { name: "XYZ", email: "st.deepak15@gmail.com" }

Modifying either of them will change both of them πŸ™

So how can we create a copy, if we need to modify one and not the other?

1. Spread Operator

const spreadUser = {...user}
spreadUser.name = "XYZ"
console.log(spreadUser) // { name: "XYZ", email: "st.deepak15@gmail.com" }
console.log(user) // { name: "Deepak Negi", email: "st.deepak15@gmail.com" }

2. Object.assign()

const assignUser = Object.assign({}, user);
assignUser.name = "XYZ"
console.log(assignUser) // { name: "XYZ", email: "st.deepak15@gmail.com" }
console.log(user) // { name: "Deepak Negi", email: "st.deepak15@gmail.com" }

Yaassss we finally got it!

If you think that’s it… no, there is much more to know. Now let’s add some more data to the user object and see what happens.

const user = {
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "New Delhi",
    state: "Delhi",
    zipcode: "000000",
    country: "India"
  }
}

const spreadUser = {...user}
spreadUser.address.city = "Pune"
spreadUser.address.state = "Mumbai"

console.log(spreadUser)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "Pune",
    state: "Mumbai",
    zipcode: "000000",
    country: "India"
  }
}

console.log(user)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "Pune",
    state: "Mumbai",
    zipcode: "000000",
    country: "India"
  }
}

You see the problem: our actual user object has changed as well, and the same happens with the Object.assign() method.

But why?
Because of shallow copying: the object spread operator, as well as Object.assign(), does not clone the values of nested objects, but copies the reference to the nested object. That’s called shallow copying.

Then what should we do? Deep copy?
Yes. A deep copy/deep clone copies the object including nested properties. One way to do it is to serialize the object to JSON and parse it back into a JS object.

const user = {
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "New Delhi",
    state: "Delhi",
    zipcode: "000000",
    country: "India"
  }
}

const deepCopiedUser = JSON.parse(JSON.stringify(user))

deepCopiedUser.address.city = "Pune"
deepCopiedUser.address.state = "Mumbai"

console.log(deepCopiedUser)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "Pune",
    state: "Mumbai",
    zipcode: "000000",
    country: "India"
  }
}

console.log(user)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "New Delhi",
    state: "Delhi",
    zipcode: "000000",
    country: "India"
  }
}


So now our original user object doesn’t change when we modify the deepCopiedUser.

Beware of using JSON.parse(JSON.stringify(user)) if you’re dealing with dates, functions, RegExps, Maps, Sets, or other complex types within your object. The JSON method can’t handle these types.
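A minimal Node sketch shows what gets lost in the round-trip:

```javascript
// Dates become strings and functions disappear after a JSON round-trip.
const obj = {
  created: new Date("2022-01-01"),
  greet: function () { return "hi"; },
};

const copy = JSON.parse(JSON.stringify(obj));

console.log(typeof obj.created);  // object (a Date)
console.log(typeof copy.created); // string (an ISO timestamp)
console.log(typeof copy.greet);   // undefined (the function was dropped)
```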

So for such cases the lodash cloneDeep method is probably the best way to go.

import cloneDeep from 'lodash/cloneDeep'
// or
const cloneDeep = require('lodash/cloneDeep')

const user = {
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "New Delhi",
    state: "Delhi",
    zipcode: "000000",
    country: "India"
  }
}

const deepCloneUser = cloneDeep(user)
deepCloneUser.address.city = "Pune"
deepCloneUser.address.state = "Mumbai"

console.log(deepCloneUser)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "Pune",
    state: "Mumbai",
    zipcode: "000000",
    country: "India"
  }
}

console.log(user)
// console output
{
  name: "Deepak Negi",
  email: "st.deepak15@gmail.com",
  address: {
    line1: "ABC, Tower X",
    city: "New Delhi",
    state: "Delhi",
    zipcode: "000000",
    country: "India"
  }
}

Finally!!

Let me know in the comments what you think is the best way to deep clone an object.



8 Machine Learning Questions on K-means to Destroy Your Interview

Read the full article here: https://analyticsarora.com/8-unique-machine-learning-interview-questions-on-k-means/

In preparing for your next Machine Learning interview, one of the topics you certainly need to be familiar with is K-means. This algorithm is incredibly useful for clustering data points into groups that have not been explicitly labeled! Machine Learning is well within its stride in taking over the world, so it is important to have a solid grasp of concepts such as this one.

Check out my in-depth tutorial on implementing K-means from scratch in Python if you are not familiar with the algorithm!

This series of articles is meant to equip you with the knowledge you need to ace your ML interview and secure a top-tier job in the field.

Article Overview

  • What is K-means clustering?
  • How does K-means clustering work?
  • Why is K-means clustering important?
  • K-means ML Interview Questions and Answers
  • Wrap Up
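For a flavour of the from-scratch implementation the linked tutorial covers, here is a minimal 1-D k-means sketch (my own illustration, not code from the article):

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep a centroid in place if its cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups around 1.0 and 10.0:
print(kmeans_1d([1.0, 1.1, 0.9, 9.9, 10.0, 10.1], k=2))
```

Real implementations add smarter initialization (k-means++) and a convergence check instead of a fixed iteration count.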


Hiding and Revealing things with JavaScript pageYOffset

Quite a while ago I was cloning a website and I stumbled on something I was not familiar with. It took me a while to comprehend what exactly I was looking at. The website had a navigation bar that would hide itself when you scrolled down the page and show up again when you scrolled up. Weird!

My first instinct told me that I could fix this using CSS.

.nav-bar {
    display: none;
}

Hiding it with the display property was my first guess, but I quickly realized that it completely hides the nav-bar (without ever returning). I thought a little harder and came to the conclusion that it had something to do with JavaScript, because JavaScript can trigger a function that executes IF a condition is met. The condition was: IF I scroll down, the nav-bar should be hidden, ELSE keep showing the nav-bar. To deepen this explanation, an example: Jimmy wants a chocolate but his mother will not give him one. The chocolates are located in the kitchen cabinet, so the only way Jimmy can get one is to sneak into the kitchen without his mom knowing and take it from there. IF mom is not there, then he can sneak into the kitchen quietly. But IF she does come into the kitchen, then he should hide quickly behind the kitchen counter.

Firstly, let us add an event listener. The addEventListener method tells the browser to run our function every time a specific event fires. In this case, we want JavaScript to listen for the scroll event. I named my function scrollDown because the conditions I list only apply when I scroll down.

window.addEventListener("scroll", scrollDown);
/* 'e' parameter stands for event */
function scrollDown(e) {
    let navigation = document.getElementById("nav-bar");
    if (window.pageYOffset > 500) {
        navigation.style.display = "none";
    }
    else {
        navigation.style.display = "block";
    }
}

Start by declaring a navigation variable that grabs the element by its ID from your HTML, so that JS knows what you are referring to. Secondly, we refer to the Y-axis, because we are scrolling vertically; JavaScript calls this pageYOffset. So, if the pageYOffset is greater than 500px, hide the navigation; if the condition is false, show it again. The code works, but only halfway. We need to work on the other half, the part where we scroll up, because right now the nav-bar does not appear again when we scroll back up.

window.addEventListener("scroll", scrollUp);

function scrollUp(e) {
    let navigation = document.getElementById("nav-bar");
    if (window.pageYOffset < 500) {
        navigation.style.display = "block";
    }
    else {
        navigation.style.display = "none";
    }
}

Now that the code works, go and have fun with it. Maybe you can change the words of a heading as you scroll down, or change the color of the nav-bar as you scroll.
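As a refinement, the hide/show decision can be factored into a pure function that also accounts for the scroll direction; the names and the 500px threshold below are my own choices, not from the original snippet:

```javascript
// Decide whether the nav bar should be visible, given the previous and
// current scroll offsets. Scrolling down past the threshold hides it;
// scrolling up (or staying near the top) shows it.
function navVisible(prevY, currY, threshold = 500) {
  const scrollingDown = currY > prevY;
  return !(scrollingDown && currY > threshold);
}

// Wiring it up in the browser would look like:
// let lastY = window.pageYOffset;
// window.addEventListener("scroll", () => {
//   const nav = document.getElementById("nav-bar");
//   nav.style.display = navVisible(lastY, window.pageYOffset) ? "block" : "none";
//   lastY = window.pageYOffset;
// });

console.log(navVisible(0, 600));   // false, scrolled down past 500
console.log(navVisible(600, 400)); // true, scrolling back up
console.log(navVisible(0, 100));   // true, near the top
```

Keeping the decision logic separate from the DOM code makes it easy to unit-test without a browser.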

Thanks for reading!



HTML 5 – The Brainstorm!

Hello tribe! Today we are going to talk about the HTML tags that go inside the head of the document! Bring out the umbrella because it’s raining brains! Hallelujah! 🎼

As we have mentioned so far, the structure of an HTML document has two main HTML tags: β€œhead” and β€œbody”. The head tag contains the metadata tags (information about the document) and establishes connections or relationships with other documents, while the body tag allows us to display the content.



“Connection” tags:

  • title: the title of the web page
  • meta: where you set data such as the language used on the page.
  • script: to load JS
  • style: to load CSS on the current page
  • link: in my opinion the most important, because it allows us to load CSS, improve SEO, establish relationships with other pages or alternative versions of our website, etc… It is the Swiss Army knife of tags!



Examples

<head>
        <title>My cute page</title>
        <meta charset="utf-8">
        <link rel="alternate" href="document-en.html" hreflang="en" /> <!-- an alternate version of the page -->
</head>

πŸ‘ Script is best placed in the body and Style with a link.



Meta tag:

Perhaps the tag with the greatest number of possibilities in the header of an HTML document is the meta tag. In it, through its attributes, we can attach a large amount of metadata to the document.

Through its name attribute, meta commonly takes values such as:

  • description
  • keywords
  • author
  • language
  • generator
  • theme-color
  • viewport
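For example, a few typical meta tags using those values (the content values here are just illustrative):

```html
<meta name="description" content="A short summary of the page for search engines">
<meta name="author" content="Jane Doe">
<meta name="theme-color" content="#317EFB">
<meta name="viewport" content="width=device-width, initial-scale=1">
```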

If you want me to go deeper into tags, let me know in the comments! Well tribe, that’s all for now. See you in the next post, and remember to always be the best version of you!



How to host a Django project on Heroku (for free)

This step-by-step beginner tutorial will teach you how to host your local Django project on Heroku for free. I haven’t found many easy-to-follow tutorials on this topic, so I decided to make my own after hosting many projects with the steps below. Enjoy!



Steps

  • Create and activate a virtualenv, install dependencies.
python -m venv <env_name> # Windows
python3 -m venv <env_name> # Other

<env_name>\Scripts\activate # Windows
source <env_name>/bin/activate # Other
  • Initialize a git repository (git init)
  • Add a .gitignore file

Suggested gitignore for Django

  • Add the following to settings.py:
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
  • Create a runtime.txt file specifying your Python version:
python-3.10.1
  • Install gunicorn:
pip install gunicorn
  • Create Procfile and add the following:
web: gunicorn myproject.wsgi
  • Install django-on-heroku:
pip install django-on-heroku
  • Add the following to settings.py:
# Configure Django App for Heroku.
import django_on_heroku
django_on_heroku.settings(locals())
  • Add requirements.txt by running:
pip freeze > requirements.txt
  • Commit your changes:
git add .
git commit -m "Init commit"
  • Login to heroku from the command line:
heroku login
  • Create a new app:
heroku create app_name
  • Add your heroku app to ALLOWED_HOSTS:
ALLOWED_HOSTS = ['your_app_name.herokuapp.com', ...]
  • Commit your changes:
git add .
git commit -m "Configure ALLOWED_HOSTS"
  • Push your changes to heroku branch:
git push heroku master # Or branch name
  • Migrate your database:
heroku run python manage.py migrate
  • Make sure you have DEBUG = False in your settings.py file.



Extras

  • Open your app online:
heroku open
  • Create admin account:
heroku run python manage.py createsuperuser
  • If you have static files, run:
heroku run python manage.py collectstatic
