How to build a REST API using NodeJS

👋 Hey everyone, I know it’s been a long time since I posted a new blog 😅. 👀 In this blog post we are going to build a REST API that would serve as a source of motivation for developers, using NodeJS and MongoDB. So let’s get started 🏄‍♂️

What’s an API? 🤔

API stands for “Application Programming Interface” which is a tool that allows two applications to talk to each other 📞. Let’s understand the meaning of API by some real-life examples ✨

So you have built an amazing e-store application and you want other developers to build applications on it. Now you have to build some sort of software that communicates between your web service and the developers’ applications, and that’s where an API comes in.

What’s a REST API? 🤔

Now that you know what an API is, let’s talk about “REST APIs”. REST stands for Representational State Transfer, and it’s one of the most popular types of API architecture. These APIs follow the client-server model: one program sends a request and the other responds with some data.
The requests are HTTP methods such as POST, GET, PUT, DELETE…

You’ll get a clearer understanding of APIs and REST APIs as we build a project 👀. So what are we waiting for, let’s dive into coding 👨‍💻.

Setting up the project 🛠

Let’s set up our project so that we can start coding 👨‍💻.

  1. Creating a separate folder for our project
   $ mkdir dev-credits-api
  2. Navigating into the folder
   $ cd dev-credits-api
  3. Initializing the project
   $ npm init
  4. Installing the required packages
   $ npm install mongoose express dotenv cors

   # or

   $ yarn add mongoose express dotenv cors
  • Express is the framework which we are going to use to build our REST API
  • Mongoose is the tool that we are going to use to communicate with our MongoDB database
  • Dotenv loads environment variables from a .env file into process.env
  • Cors is an Express middleware for enabling Cross-Origin Resource Sharing

    4.1. Installing nodemon as a dev dependency

     $ npm install nodemon -D
     # or
     $ yarn add nodemon -D
    • Nodemon is used for automatically restarting the server when file changes are detected in the directory. This is helpful as we won’t have to restart the server each time we make changes

Building the REST API 👨‍💻

As we have completed the setup for our project, let’s get started building the REST API.

Create a new file named index.js

Here is the boilerplate code for a basic express app


const express = require('express');

const app = express();

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

Let’s break it down and understand each part:

  • We are requiring the express package into our file so that we can use it
  • We are assigning a value to the variable port, the port where our server will run. You might be wondering why there is a process.env.PORT 🤔. It’s because during deployment on services such as Heroku the port number might vary, it may not be 3000, so we tell the app to use the PORT environment variable if it exists, else use 3000
  • The last piece of code tells the server which port to listen on, in our case the port variable
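To see that fallback in isolation, here’s a tiny sketch (the resolvePort helper is hypothetical, not part of the tutorial’s code):

```javascript
// Hypothetical helper illustrating `process.env.PORT || 3000`:
// use the platform-assigned PORT when present, else fall back to 3000
function resolvePort(env) {
  return env.PORT || 3000;
}

console.log(resolvePort({}));               // 3000 (local development)
console.log(resolvePort({ PORT: '8080' })); // '8080' (e.g. set by Heroku)
```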

Let’s add a new script named start to the package.json file which uses nodemon to automatically restart the server when file changes are detected. After the changes, the scripts in package.json would look something like this:

   "scripts": {
     "start": "nodemon index.js"
   }

Let’s start our server by running the npm start command. The server would be running at http://localhost:3000. You would be prompted with an error something like this:

This is happening because we haven’t defined the / (aka the root route)

HTTP methods explained

Let’s take a break from coding and understand what each HTTP method does and what its success and error statuses are, so that it would be easy for debugging 😎


GET

What it does: Request data from a specified resource

Successful response: 200 OK

Error response: 404 not found


POST

What it does: Send data to the server to create a new resource

Successful response: 201 Created

Error response: 404 not found or 409 conflict – if the resource already exists


PUT

What it does: Send data to the server to update a pre-existing resource

Successful response: 200 OK

Error response: 204 no content, 404 not found or 405 method not allowed


DELETE

What it does: Deletes a resource from the server

Successful response: 200 OK

Error response: 404 not found or 405 method not allowed
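For quick reference while debugging, the notes above can be condensed into a tiny sketch (my own summary, not from the original post):

```javascript
// Typical success status code per HTTP method (summary of the notes above)
const successStatus = {
  GET: 200,    // OK
  POST: 201,   // Created
  PUT: 200,    // OK
  DELETE: 200, // OK
};

function expectedSuccess(method) {
  return successStatus[method.toUpperCase()];
}

console.log(expectedSuccess('post')); // 201
```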

Check out for understanding what each HTTP status code means via funny cat images 😹

Adding routes 🛣

Routes are different URL paths of an express app that are associated with different HTTP methods, such as GET, POST, DELETE, PUT.

Let’s get started by creating / which sends “Hello, World!”

Add the below piece of code above the line where we declared the port variable


app.get('/', function (req, res) {
  res.send('Hello, World!');
});

Let’s break down this piece of code:

  • The get method specifies the HTTP method for that route. You could use other HTTP methods like post, delete
    • There is a special routing method all which is used for the routes which handle all kinds of HTTP methods
  • There is a callback method that is called when the server receives a request from that endpoint with that specified HTTP method

🥳 Hooray! “Hello, World” is now visible on the / route

Setting up MongoDB

Let’s set up the MongoDB database now 😎.

Head over to MongoDB, sign up/sign in, and create a new project

You could invite your co-workers into the project if you wanted to.

After the creation of the project, click on Build a Database

You would be shown with a screen something like this:

Let’s go ahead and choose the free plan 👀

You would be shown some more options about the cloud provider and the location

Let’s choose the nearest region and move forward.

You would be asked to create a user. This is required as you would need the username and password to generate a connection URL which is then used to connect MongoDB with your NodeJS app.

The creation of the cluster would take 1 – 3 minutes. So let’s grab a cup of coffee until then ☕. Ahh… it’s been successfully created so let’s get back to coding 👨‍💻

Click on Connect

Click on Connect your application

Copy the connection URL

Create a .env file with a MONGODB_URL variable set to the connection URL, replacing <password> with the password of the user which you have created previously

MONGODB_URL=<your connection URL>


Let’s head back to the good old index.js file

Connecting Express app to MongoDB

Let’s start by requiring mongoose and dotenv

const mongoose = require('mongoose');
const dotenv = require('dotenv');

Let’s configure dotenv as well

dotenv.config();
Let’s finally add the piece of code which connects our express application to MongoDB

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.error(err);
  });

The index.js file should look something like this now


const express = require('express');
const mongoose = require('mongoose');
const dotenv = require('dotenv');

dotenv.config();

const app = express();

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.error(err);
  });

app.get('/', function (req, res) {
  res.send('Hello, World!');
});

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

🥳 We successfully connected our express app to the MongoDB database.

Creating Schema and Model 📝

A schema defines the structure of the documents in our database. It tells us which fields are required and what the data type of each field is.

A model provides a programming interface for interacting with the database (read, insert, update, etc).

Let’s create a new folder named model and inside it let’s create a model.js where we will define our schema


const mongoose = require('mongoose');

const devCredits = new mongoose.Schema({
  credits: {
    type: Number,
    required: true,
  },
  id: {
    type: Number,
    required: true,
  },
});

module.exports = mongoose.model('devCredits', devCredits);

Let’s break it down and understand

  • We imported the mongoose package into the model/model.js file
  • We created a new schema named devCredits. The structure has the credits and id. Credits are the number of dev credits the person has and the id is the discord id of the user (This API was initially created for a discord bot Dev credits bot so the schema of the database is kinda based on discord 🤷‍♂️)
  • We have finally created a model named “devCredits”

Adding more features 😎

Let’s add more routes to our REST API. Let’s add routes where we can get the total dev credits of a user via their discord ID and give dev credits to other users using another route.

Giving dev credits to other devs

Let’s import our model which we have just created into the index.js file.

const devCredits = require('./model/model.js');

Let’s add a new POST route in the index.js file

app.post('/post', function (req, res) {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        { $inc: { credits: req.body.credits } },
        { new: true },
        (err, devCredit) => {
          if (err) res.send(err);
          else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) res.send(err);
        else res.json(credits);
      });
    }
  });
});

Let’s understand what exactly is going on:

  • We have created a new POST route (/post)
  • We validate the data which we receive from the client using our model
  • In the next piece of code we are checking if the user (user id) already exists in the database or not
    • If exists then we are going to increment the credits value
    • Else we are going to create a new document with the user id and add the credits
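The increment-or-create flow described above can be sketched in plain JavaScript, with an in-memory array standing in for MongoDB (illustration only; giveCredits is a hypothetical helper, not part of the API):

```javascript
// Illustration only: the /post route's "increment if the user exists,
// else create a new document" logic, with an array instead of MongoDB
function giveCredits(db, id, credits) {
  const existing = db.find((doc) => doc.id === id);
  if (existing) {
    existing.credits += credits; // like findOneAndUpdate with $inc
    return existing;
  }
  const created = { id, credits }; // like credit.save()
  db.push(created);
  return created;
}

const db = [];
giveCredits(db, 123, 100); // first call creates the document
giveCredits(db, 123, 50);  // second call increments it
console.log(db); // [ { id: 123, credits: 150 } ]
```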

How to test the API?

We have successfully added a new feature to our API 🥳. But wait, how are we going to test it out 🤔

👀 We are going to use a VSCode extension called Thunder Client, which is used for API testing. So let’s quickly download it and test our new feature in our API 🥳.

After the completion of the download, you are going to see a thunder icon in your sidebar 👀

Click the thunder icon and you are going to see a section something like this

Click on New Request. You would be prompted to screen something like this

Let’s test out our /post route now 🥳. Change the URL in the input box to http://localhost:3000/post

Change the HTTP method from GET to POST

Navigate to the Body tab, this is the section where we are going to write the body of the request.

I have added my discord ID and gave 100 dev credits to it, cuz why not

Let’s click Send and hope that it works 🤞

🥁🥁🥁🥁🥁 and we got an error

This happened because we haven’t added any middleware, so let’s add them quickly


app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

NOTE: We had installed cors as a separate package, so don’t forget to import it as well

Let’s try it again and hope that it works now 🤞

🎉 TADA! We have successfully created our first feature in the API which interacts with the MongoDB database

Getting the total dev credits of a user

Let’s import our model which we have just created into the index.js file.

const devCredits = require('./model/model.js');

Let’s add a new route in the index.js file

app.get('/get/:id', function (req, res) {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) res.send(err);
    res.json(data);
  });
});

Let’s break this down

  • We have created a new route with the GET method
  • We search the database for the ID given in the parameters
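Here’s what that query does, mimicked in plain JavaScript (illustration only; findCredits is hypothetical, and _id/__v are the internal Mongoose fields the projection hides):

```javascript
// Illustration: filter by id and strip internal fields, mirroring
// devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 })
function findCredits(db, id) {
  return db
    .filter((doc) => doc.id === id)
    .map(({ _id, __v, ...rest }) => rest);
}

const docs = [
  { _id: 'a1', __v: 0, id: 123, credits: 150 },
  { _id: 'b2', __v: 0, id: 456, credits: 20 },
];
console.log(findCredits(docs, 123)); // [ { id: 123, credits: 150 } ]
```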

Let’s test it out again using Thunder Client 👀.

🎉 TADA! It works

Cleaning up the codebase

Let’s clean up the codebase a bit 😅.

Let’s create a new folder called routes and inside it let’s create a new file router.js which contains the routes


const router = require('express').Router();
const devCredits = require('../model/model.js');

router.get('/get/:id', function (req, res) {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) res.send(err);
    res.json(data);
  });
});

router.post('/post', function (req, res) {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        { $inc: { credits: req.body.credits } },
        { new: true },
        (err, devCredit) => {
          if (err) res.send(err);
          else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) res.send(err);
        else res.json(credits);
      });
    }
  });
});

module.exports = router;

Let’s import the routes/router.js file into the index.js file and use it


const express = require('express');
const mongoose = require('mongoose');
const dotenv = require('dotenv');
const cors = require('cors');

dotenv.config();

const router = require('./routes/router.js');

const app = express();

app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

app.use('/', router);

mongoose
  .connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log('Connected to MongoDB');
  })
  .catch((err) => {
    console.error(err);
  });

app.get('/', function (req, res) {
  res.send('Hello, World!');
});

const port = process.env.PORT || 3000;

app.listen(port, async () => {
  console.log(`Server is running at port ${port}`);
});

Let’s test it out to make sure that our code still works and that we didn’t mess anything up while cleaning up 😆

🥳 Hooray! There isn’t any error and the code still works as it did before

😅 Doesn’t routes/router.js seem kinda filled up with logic, making it messy?

Let’s create a new folder named controllers. In this folder, we will store the logic related to each route.

Let’s get started by creating a new file in the controllers folder named getCredits.js and postCredits.js which contains the logic related to the /get route and /post route respectively


const devCredits = require('../model/model.js');

const getCredits = (req, res) => {
  devCredits.find({ id: req.params.id }, { _id: 0, __v: 0 }, (err, data) => {
    if (err) res.send(err);
    res.json(data);
  });
};

module.exports = getCredits;


const devCredits = require('../model/model.js');

const postCredits = (req, res) => {
  const credit = new devCredits({
    id: req.body.id,
    credits: req.body.credits,
  });

  devCredits.countDocuments({ id: req.body.id }, function (err, count) {
    if (count > 0) {
      devCredits.findOneAndUpdate(
        { id: req.body.id },
        { $inc: { credits: req.body.credits } },
        { new: true },
        (err, devCredit) => {
          if (err) res.send(err);
          else res.json(devCredit);
        }
      );
    } else {
      credit.save((err, credits) => {
        if (err) res.send(err);
        else res.json(credits);
      });
    }
  });
};

module.exports = postCredits;


const router = require('express').Router();

const devCredits = require('../model/model.js');
const getCredits = require('../controllers/getCredits.js');
const postCredits = require('../controllers/postCredits.js');

router.get('/get/:id', getCredits);
router.post('/post', postCredits);

module.exports = router;

Phew, that was a lot of work 😹

Adding rate limit

You don’t want some random guy to just spam your entire database 😆. So let’s add a rate limit to our API which restricts the client to performing only a few requests every x minutes

Let’s install express-rate-limit package

$ npm install express-rate-limit

# or

$ yarn add express-rate-limit

Let’s create a middleware folder that contains all the middlewares of our API. Create a file named rateLimiter.js under the middleware folder


const rateLimit = require('express-rate-limit');

const rateLimiter = rateLimit({
  windowMs: 1 * 60 * 1000, // 1 minute
  max: 10,
  message: 'Bonk 🔨',
});

module.exports = rateLimiter;

Let’s understand what this piece of code is doing:

  • We are importing the express-rate-limit package
  • The windowMs specifies the duration
  • The max specifies the max amount of requests the client can make in the duration specified
  • The message is the message which is shown to the client when they exceed the max limit
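Under the hood this is a fixed-window counter. Here’s a toy sketch of the idea (my own illustration, not express-rate-limit’s actual implementation):

```javascript
// Toy fixed-window rate limiter: allows `max` requests per `windowMs`
function createLimiter(windowMs, max) {
  let windowStart = 0;
  let count = 0;
  return function allow(now) {
    if (now - windowStart >= windowMs) {
      windowStart = now; // start a fresh window
      count = 0;
    }
    count += 1;
    return count <= max; // false means "Bonk 🔨"
  };
}

const allow = createLimiter(60 * 1000, 2); // 2 requests per minute
console.log(allow(0));     // true
console.log(allow(10));    // true
console.log(allow(20));    // false – over the limit
console.log(allow(60000)); // true – new window
```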

So let’s import it into the index.js file and test it out


const rateLimiter = require('./middleware/rateLimiter.js');

app.use(rateLimiter);


😹 I got bonked by myself

Deploying our API on Heroku

👀 We have successfully built an API but how would other developers use it if it isn’t deployed?

Let’s deploy it on Heroku 🚀.

Get started by initializing a git repository in the directory. Create a new GitHub repository and push your changes into that repository 👀

Let’s create a new file named Procfile, which is just a file that tells Heroku which command needs to be run. Add the below content to the Procfile

web: node index.js

NOTE: nodemon doesn’t work in the production stage. It only works in the development stage, so we have to use the good old node index.js

Create an account on Heroku and click on Create new app, give some cool name to your API

Head over to the settings tab and click Reveal Config Vars

These are the environment variables

Add a new config var with the key as MONGODB_URL and the value as your MongoDB connection URL

Head back to the deploy tab and connect the GitHub repository which you have created just before to your Heroku application

Click the Deploy branch button. TADA 🚀 You have successfully created a REST API and deployed it as well 😀

The entire source code for this tutorial will be available on my GitHub

Check out the API which we built today:

That’s it for this blog folks 🤞. Meet y’all in the next blog post


Bhagavad Gita API

Introduction

This is a free, anonymous and highly available API for the Bhagavad Gita.


Many existing APIs of the Gita require you to either sign up or get some token to perform requests.
We feel like this is not something one wants when reading the Gita.
Holy books and texts should be generally available with no strings attached, therefore we created this completely free and anonymous API to let anyone get the Words of God on the internet, be it for an Android app or for your next web project.


The API has multiple translations and commentaries in two languages: English and Hindi.
Chapter meanings and summaries are also available (Hindi only).

API Routes and Endpoints

By making a GET request to the API homepage (/) you get a response with all the routes available.

$ curl

{
  "/text/:ch/:verse": "Get text by chapter and verse",
  "/text/translations/:ch/:verse": "Get text translations by chapter and verse",
  "/text/transliterations/:ch/:verse": "Get text transliterations by chapter and verse",
  "/text/commentaries/:ch/:verse": "Get text commentaries by chapter and verse",
  "/chapters/:ch": "Get chapters",
  "/chapters/:ch/transliterations": "Get chapters transliterations",
  "/chapters/:ch/translations": "Get chapters translations",
  "/chapter/meaning/:ch": "Get chapter meaning",
  "/chapter/summaries/:ch": "Get chapter summaries"
}


Let’s try to get the texts from Chapter 1, verse 1.

$ curl

{
  "data": [
    {
      "id": 1,
      "bg_id": "BG1.1",
      "chapter": "1",
      "verse": "1",
      "shloka": "धृतराष्ट्र उवाच …"
    }
  ]
}

Now, what does that mean? Let’s try to hit the translations endpoint!

$ curl

  "data": [
      "id": 1,
      "bg_id": "BG1.1",
      "lang": "hi",
      "name": "Swami Tejomayananda",
      "author": "Swami Tejomayananda",
      "translation": "।।1.1।।धृतराष्ट्र ने कहा -- हे संजय ! धर्मभूमि कुरुक्षेत्र में एकत्र हुए युद्ध के इच्छुक (युयुत्सव:) मेरे और पाण्डु के पुत्रों ने क्या किया?"
      "id": 2,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Swami Sivananda",
      "author": "Swami Sivananda",
      "translation": "1.1 Dhritarashtra said  What did my people and the sons of Pandu do when they had assembledntogether eager for battle on the holy plain of Kurukshetra, O Sanjaya."
      "id": 3,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Shri Purohit Swami",
      "author": "Shri Purohit Swami",
      "translation": "1.1 The King Dhritarashtra asked: "O Sanjaya! What happened on the sacred battlefield of Kurukshetra, when my people gathered against the Pandavas?""
      "id": 4,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Dr.S.Sankaranarayan",
      "author": "Dr.S.Sankaranarayan",
      "translation": "1.1. Dhrtarastra said  O Sanjaya ! What did my men and the sons of Pandu do in the Kuruksetra, the field of  righteousness, where the entire warring class has assembled ?nornO Sanjaya !  What did the selfish intentions and the intentions born of wisdom do in the human body which is the field-of-duties,  the repository of the senseorgans and in which all the murderous ones (passions and asceticism etc.) are confronting [each other]."
      "id": 5,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Swami Adidevananda",
      "author": "Swami Adidevananda",
      "translation": "1.1 Dhrtarastra said  On the holy field of Kuruksetra, gathered together eager for battle, what did my people and the Pandavas do, O Sanjaya?"
      "id": 6,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Swami Gambirananda",
      "author": "Swami Gambirananda",
      "translation": "1.1. Dhrtarastra said  O Sanjaya, what did my sons (and others) and Pandu's sons (and others) actually do when, eager for battle, they assembled on the sacred field, the Kuruksetra (Field of the Kurus)?"
      "id": 7,
      "bg_id": "BG1.1",
      "lang": "hi",
      "name": "Swami Ramsukhdas",
      "author": "Swami Ramsukhdas",
      "translation": "।।1.1।। धृतराष्ट्र बोले (टिप्पणी प0 1.2) - हे संजय! (टिप्पणी प0 1.3) धर्मभूमि कुरुक्षेत्र में युद्ध की इच्छा से इकट्ठे हुए मेरेे और पाण्डु के पुत्रों ने भी क्या किया?"
      "id": 8,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Sri Ramanuja",
      "author": "Sri Ramanuja",
      "translation": "1.1 - 1.19 Dhrtarastra said - Sanjaya said  Duryodhana, after viewing the forces of Pandavas protected by Bhima, and his own forces protected by Bhisma conveyed his views thus to Drona, his teacher, about the adeacy of Bhima's forces for conering the Kaurava forces and the inadeacy of his own forces for victory against the Pandava forces. He was grief-stricken within.nnObserving his (Duryodhana's) despondecny, Bhisma, in order to cheer him, roared like a lion, and then blowing his conch, made his side sound their conchs and kettle-drums, which made an uproar as a sign of victory. Then, having heard that great tumult, Arjuna and Sri Krsna the Lord of all lords, who was acting as the charioteer of Arjuna, sitting in their great chariot which was powerful enough to coner the three worlds; blew their divine conchs Srimad Pancajanya and Devadatta. Then, both Yudhisthira and Bhima blew their respective conchs separately. That tumult rent asunder the hearts of your sons, led by Duryodhana. The sons of Dhrtarastra then thought, 'Our cause is almost lost now itself.' So said Sanjaya to Dhrtarastra who was longing for their victory.nnSanjaya said to Dhrtarastra:  Then, seeing the Kauravas, who were ready for battle, Arjuna, who had Hanuman, noted for his exploit of burning Lanka, as the emblem on his flag on his chariot, directed his charioteer Sri Krsna, the Supreme Lord-who is overcome by parental love for those who take shelter in Him who is the treasure-house of knowledge, power, lordship, energy, potency and splendour, whose sportive delight brings about the origin, sustentation and dissolution of the entire cosmos at His will, who is the Lord of the senses, who controls in all ways the senses inner and outer of all, superior and inferior - by saying, 'Station my chariot in an appropriate place in order that I may see exactly my enemies who are eager for battle.'"
      "id": 9,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Sri Abhinav Gupta",
      "author": "Sri Abhinav Gupta",
      "translation": "1.1  Dharmaksetre etc. Here some [authors] offer a different explanation as1 :-Kuruksetra : the man's body is the ksetra i.e., the facilitator, of the kurus, i.e., the sense-organs. 2 The same is the field of all wordly duties, since it is the cuse of their birth; which is also the field of the righteous act that has been described as :nn'This is the highest righteous act viz., to realise the Self by means of the Yogas';nnnand which is the protector4 [of the embodied Self] by achieving emancipation [by means of this], through the destruction of all duties. It is the location where there is the confrontation among all ksatras, the murderous ones-because the root ksad means 'to kill' - viz, passion and asceticism, wrath and forbearance, and others that stand in the mutual relationship of the slayer and the slain. Those that exist in it are the mamakas,-i.e., the intentions that are worthy of man of ignorance and are the products of ignorance-and those that are born of Pandu: i.e., the intentions, of which the soul is the very knowledge itself5 and which are worthy of persons of pure knowledge. What did they do? In other words, which were vanished by what? Mamaka : a man of ignorance as he utters [always] 'mine'6. Pandu : the pure one.7"
      "id": 10,
      "bg_id": "BG1.1",
      "lang": "en",
      "name": "Sri Shankaracharya",
      "author": "Sri Shankaracharya",
      "translation": "1.1 Sri Sankaracharya did not comment on this sloka. The commentary starts from 2.10."
      "id": 11,
      "bg_id": "BG1.1",
      "lang": "hi",
      "name": "Sri Shankaracharya",
      "author": "Sri Shankaracharya",
      "translation": "।।1.1।।Sri Sankaracharya did not comment on this sloka."

Technical information

The REST API has a very strong caching policy to allow the lowest latency possible all over the world.

2022-01-21T16:00:15.772 app[348d4703] cdg [info] [GIN] 2022/01/21 - 16:00:15 | 200 | 150.994µs | REMOTE_IP | GET "/text/translations/1/1"

As you can see 150.994µs is the time needed to query the cache database for the translation.

Tech Stack

The API is written in Golang using the GIN framework and Gorm.
The database is SQLite.
The project runs on Cloudflare and

Support

The Bhagavad Gita API doesn’t earn a single penny from ads; you will never find them on our pages and domains.
However running these websites isn’t free.
Running servers costs money in terms of bandwidth and CPU usage to handle the traffic.
If you can, please donate to help the project keep going.

If you wish to support us you can click the following button:

Buy Me A Coffee


Fiverr API: Scrape Fiverr in seconds

Fiverr API v0.0.8 – Scrapes Fiverr profiles

Fiverr API (newer version) – This Fiverr scraping API is capable of getting all the info from a gig on Fiverr.


Installation

Use the package manager pip to install fiverr-api.

pip install fiverr-api

Essential Links

GitHub Repo, Issue A Bug and ask for help in Webmatrices Forum!


from fiverr_api import Scrape

# the gig you wanna scrape
gig_url = ""

# the profile you wanna scrape
profile_url = ""

# initialize the fiverr scraper
scraper = Scrape()

# returns the scraped gig's data in dictionary or json format
gig_data = scraper.gig_scrape(gig_url)

# returns the scraped profile's data in dictionary or json format
profile_data = scraper.profile_scrape(profile_url)

# print the data or do whatever you want to do with it
print(gig_data)
print(profile_data)


    "user_name": "deesmithvo",
    "title": "Voiceover",
    "categories_breadcrumbs": [
        "Music & Audio",
        "Voice Over"
    "rating": "5",
    "ratings_count": "292",
    "images": [
    "description": "... HIGHEST QUALITY voice over recordings on FIVERR! Please read full description before submitting an order.ELITE talent and SUPERIOR customer service.With over 13 years of experience in vocal recording and engineering, your words are in good hands, Let me tell your story and add a little magic to your next project.As a dynamic African American male voice ...",
        "Gender": [
        "Language": [
        "Purpose": [
            "Video Narration",
        "Accent": [
            "English - American"
        "Age Range": [
        "Tone": [
    "seller_bio": "Unique and dynamic voice overs that bring life to any project!",
    "profile_photo": ",q_auto,f_auto/attachments/profile/photo/092096782fcd79252a1c8bce84951a81-1616396583081/ae9bdd13-6917-4a93-80bb-d3623ab533bc.png",
        "From": "United States",
        "Member since": "Jan 2021",
        "Avg. response time": "1 hour",
        "Last delivery": "1 day"
    "user_discription": "A refreshing African American millennial male voice over artist ready to help you tell your story and bring your project to life.n",
        "Number of words": 
            "price": "$50",
            "discription": "",
            "features": [
                "HQ Audio File (WAV format))"
    "gig_tags": [
        "male voice over",
        "audio recording",
        "voice acting",
        "voice talent"
    "delivery_days": "3 Days Delivery",
    "revisions": "1 Revision"


      "name": "bishwasbh",
      "photo": ",q_auto,f_auto/attachments/profile/photo/602e60b8e98a3d98ebf47508e874051e-1619826399837/c9ce5be4-1cd8-4a3d-abba-323b3096c69a.jpg",
      "level": "",
      "bio": "Fiverr software developer for python, django, automation, webscraping",
      "from": "Nepal",
      "member_since": "Jul 2018",
      "response_time": "6 hours",
      "recent_delivery": "1 month",
      "description": "Hello, I am the best fiverr Django Python developer, Web Scrapping and Automation Expert. I have huge expertise in frontend and backend (Django) development. We have completed 174+ projects with different clients at various marketplaces since 2016, we have proficiency in the feild of Python Django Web Development and, Web Scrapping and Automation feild. We have 2+ years of hands on experience in Django Web Development and our team has 3.7+ experience in Django.",
      "languages": [
      "skill_set": [
          "Python programming",
          "Python django",
          "Web application",
          "Web development",
          "Python programmer",
      "gigs": [
              "I will install, setup or update flarum on cpanel, cloud server",
              "I will deploy django in cpanel server",
              "I will develop bots, web scraping, automation, and custom scripts",
              "I will deploy django in heroku or pythonanywhere",
              "I will python programming articles and seo",
              "I will develop django forum, tool web, portal, django blog and cms",
              "I will develop tool website and web apps",
          "seller_communication_level": "5",
          "recommended_to_friend": "5",
          "service_as_described": "5"
      "reviews": [
              "buyer_name": "wesleyboy245",
              "given_rating": "5",
              "country_name": "United States"
              "buyer_name": "wesleyboy245",
              "given_rating": "5",
              "country_name": "United States"
              "buyer_name": "zoleoab",
              "given_rating": "5",
              "country_name": "Sweden"
              "buyer_name": "nftprotocol",
              "given_rating": "5",
              "country_name": "United States"
              "buyer_name": "zoleoab",
              "given_rating": "5",
              "country_name": "Sweden"


Please follow these precautions while using the Fiverr API:

  • Try not to scrape the same URL frequently without any break
  • Try to scrape multiple URLs with some time interval/break between them


Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.




Building GitHub Apps with Golang

If you’re using GitHub as your version control system of choice then GitHub Apps can be incredibly useful for many tasks including building CI/CD, managing repositories, querying statistical data and much more. In this article we will walk through the process of building such an app in Go including setting up the GitHub integration, authenticating with GitHub, listening to webhooks, querying GitHub API and more.

TL;DR: All the code used in this article is available at

Choosing Integration Type

Before we jump into building the app, we first need to decide which type of integration we want to use. GitHub provides 3 options – Personal Access Tokens, GitHub Apps and OAuth Apps. Each of these 3 have their pros and cons, so here are some basic things to consider:

  • Personal Access Token is the simplest form of authentication and is suitable if you only need to authenticate with GitHub as yourself. If you need to act on behalf of other users, then this won’t be good enough
  • GitHub Apps are the preferred way of developing GitHub integrations. They can be installed by individual users as well as whole organizations. They can listen to events from GitHub via webhooks as well as access the API when needed. They’re quite powerful, but even if you request all the permissions available, you won’t be able to use them to perform all the actions that a user can.
  • OAuth Apps use OAuth2 to authenticate with GitHub on behalf of a user. This means that they can perform any action that the user can. This might seem like the best option, but the permissions don’t provide the same granularity as GitHub Apps, and it’s also more difficult to set up because of OAuth.

If you’re not sure what to choose, then you can also take a look at the diagram in the docs, which might help you decide. In this article we will use a GitHub App, as it’s a very versatile integration and the best option for most use cases.

Setting Up

Before we start writing any code, we need to create and configure the GitHub App integration:

  1. As a prerequisite, we need a tunnel which we will use to deliver GitHub webhooks from the internet to our locally running application. You will need to install the localtunnel tool with npm install -g localtunnel and start forwarding to your localhost using lt --port 8080.

  2. Next we need to go to the GitHub App settings page to configure the integration. Fill in the fields as follows:

    • Homepage URL: Your localtunnel URL
    • Webhook URL: https://<LOCALTUNNEL_URL>/api/v1/github/payload
    • Webhook secret: any secret you want (and save it)
    • Repository Permissions: Contents, Metadata (Read-only)
    • Subscribe to events: Push, Release
  3. After creating the app, you will be presented with the settings page of the integration. Take note of App ID, generate a private key and download it.

  4. Next you will also need to install the app to use it with your GitHub account. Go to Install App tab and install it into your account.

  5. We also need the installation ID, which we can find by going to the Advanced tab and clicking on the latest delivery in the list. Take note of the installation ID from the request payload; it should be located under "installation": { "id": <...> }.

If you’ve got lost somewhere along the way, refer to the guide in the GitHub docs, which shows where you can find each of the values.

With that done, we have the integration configured and all the important values saved. Before we start receiving events and making API requests we need to get the Go server up and running, so let’s start coding!

Building the App

To build the Go application, we will use the template I prepared. This application is ready to be used as a GitHub App, and all that’s missing are a couple of variables which we saved during setup in the previous section. The repository contains a convenience script which you can use to populate all the values:

git clone && cd go-github-app

The following sections will walk you through the code, but if you’re impatient, the app is good to go. You can use make build to build a binary of the application or make container to create a containerized version of it.

The first part of the code we need to tackle is authentication. It’s done using the ghinstallation package as follows:

func InitGitHubClient() {
    tr := http.DefaultTransport
    itr, err := ghinstallation.NewKeyFromFile(tr, 12345, 123456789, "/config/github-app.pem")
    if err != nil {
        log.Fatalf("failed to create GitHub App transport: %v", err)
    }

    config.Config.GitHubClient = github.NewClient(&http.Client{Transport: itr})
}

This function, which is invoked from main.go during Gin server start-up, takes App ID, Installation ID and private key to create a GitHub client which is then stored in global config in config.Config.GitHubClient. We will use this client to talk to the GitHub API later.

Along with the GitHub client, we also need to set up server routes so that we can receive payloads:

func main() {
    // ...
    v1 := r.Group("/api/v1")
    {
        v1.POST("/github/payload", webhooks.ConsumeEvent)
        v1.GET("/github/pullrequests/:owner/:repo", apis.GetPullRequests)
        v1.GET("/github/pullrequests/:owner/:repo/:page", apis.GetPullRequestsPaginated)
    }
    r.Run(fmt.Sprintf(":%v", config.Config.ServerPort))
}

First of these is the payload path at http://.../api/v1/github/payload which we used during GitHub integration setup. This path is associated with webhooks.ConsumeEvent function which will receive all the events from GitHub.

For security reasons, the first thing the webhooks.ConsumeEvent function does is verify request signature to make sure that GitHub is really the service that generated the event:

func VerifySignature(payload []byte, signature string) bool {
    key := hmac.New(sha256.New, []byte(config.Config.GitHubWebhookSecret))
    key.Write(payload)
    computedSignature := "sha256=" + hex.EncodeToString(key.Sum(nil))
    log.Printf("computed signature: %s", computedSignature)

    // Compare in constant time to avoid timing attacks
    return hmac.Equal([]byte(computedSignature), []byte(signature))
}

func ConsumeEvent(c *gin.Context) {
    payload, _ := ioutil.ReadAll(c.Request.Body)

    if !VerifySignature(payload, c.GetHeader("X-Hub-Signature-256")) {
        log.Println("signatures don't match")
        c.AbortWithStatus(http.StatusUnauthorized)
        return
    }
    // ...
}

It performs the verification by computing an HMAC digest of the payload using the webhook secret as a key, which is then compared with the value in the X-Hub-Signature-256 header of the request. If the signatures match, then we can proceed to consuming the individual events:

func ConsumeEvent(c *gin.Context) {
    // ...
    event := c.GetHeader("X-GitHub-Event")

    for _, e := range Events {
        if string(e) == event {
            log.Printf("consuming event: %s", e)
            var p EventPayload
            json.Unmarshal(payload, &p)
            if err := Consumers[string(e)](p); err != nil {
                log.Printf("couldn't consume event %s, error: %+v", string(e), err)
                // We're responding to GitHub API, we really just want to say "OK" or "not OK"
                c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"reason": err.Error()})
                return
            }
            log.Printf("consumed event: %s", e)
            c.Status(http.StatusNoContent)
            return
        }
    }
    log.Printf("Unsupported event: %s", event)
    c.AbortWithStatusJSON(http.StatusNotImplemented, gin.H{"reason": "Unsupported event: " + event})
}

In the above snippet we extract the event type from X-GitHub-Event header and iterate through a list of events that our app supports. In this case those are:

type Event string

const (
    Install     Event = "installation"
    Ping        Event = "ping"
    Push        Event = "push"
    PullRequest Event = "pull_request"
)

var Events = []Event{Install, Ping, Push, PullRequest}

If the event name matches one of the options, we proceed with loading the JSON payload into an EventPayload struct, which is defined in cmd/app/webhook/models.go. It’s just a generated struct with unnecessary fields stripped.
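For illustration, a hypothetical minimal subset of such a struct for the push payload could look like this; the field names follow GitHub's webhook JSON, but the template's generated struct contains many more fields:

```go
package main

import "encoding/json"

// EventPayload is a hypothetical minimal subset of GitHub's push
// webhook payload; field names mirror the webhook JSON keys.
type EventPayload struct {
	Ref        string `json:"ref"`
	Repository struct {
		FullName string `json:"full_name"`
	} `json:"repository"`
	Pusher struct {
		Name string `json:"name"`
	} `json:"pusher"`
	Commits []struct {
		ID string `json:"id"`
	} `json:"commits"`
}

// parsePayload unmarshals a raw webhook body into the struct.
func parsePayload(raw []byte) (EventPayload, error) {
	var p EventPayload
	err := json.Unmarshal(raw, &p)
	return p, err
}
```

Unknown JSON keys are simply ignored by encoding/json, which is what makes stripping unused fields safe here.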

That payload is then sent to the function that handles the respective event type, which is one of the following:

var Consumers = map[string]func(EventPayload) error{
    string(Install):     consumeInstallEvent,
    string(Ping):        consumePingEvent,
    string(Push):        consumePushEvent,
    string(PullRequest): consumePullRequestEvent,
}

For example, for the push event one can do something like this:

func consumePushEvent(payload EventPayload) error {
    // Process event ...
    // Insert data into database ...
    // (the payload fields below follow GitHub's push webhook JSON)
    log.Printf("Received push from %s, by user %s, on branch %s",
        payload.Repository.FullName, payload.Pusher.Name, payload.Ref)

    // Enumerating commits
    var commits []string
    for _, commit := range payload.Commits {
        commits = append(commits, commit.ID)
    }
    log.Printf("Pushed commits: %v", commits)

    return nil
}

In this case that means checking the receiving repository and branch and enumerating the commits contained in this single push. This is the place where you could, for example, insert the data into a database or send a notification about the event.

Now we have the code ready, but how do we test it? To do so, we will use the tunnel which you already should have running, assuming you followed the steps in previous sections.

Additionally, we also need to spin up the server. You can do that by running make container to build the containerized application, followed by make run, which will start the container listening on port 8080.

Now you can simply push to one of your repositories and you should see a similar output in the server logs:

[GIN] 2022/01/02 - 14:44:10 | 204 |     696.813µs | | POST     "/api/v1/github/payload"
2022/01/02 14:44:10 Received push from MartinHeinz/some-repo, by user MartinHeinz, on branch refs/heads/master
2022/01/02 14:44:10 Pushed commits: [9024da76ec611e60a8dc833eaa6bca7b005bb029]
2022/01/02 14:44:10 consumed event: push

To avoid having to push dummy changes to repositories all the time, you can redeliver payloads from Advanced tab in your GitHub App configuration. On this tab you will find a list of previous requests, just choose one and hit the Redeliver button.
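Redelivery covers most cases, but you can also hand-craft a test request yourself if you know how the signature is computed: it is plain HMAC-SHA256 over the raw request body, hex-encoded and prefixed with sha256=. A standalone sketch of that scheme, independent of the app's code (function names here are illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// Sign computes a GitHub-style webhook signature for a payload:
// HMAC-SHA256 of the body keyed with the webhook secret,
// hex-encoded and prefixed with "sha256=".
func Sign(secret, payload []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	return "sha256=" + hex.EncodeToString(mac.Sum(nil))
}

// Verify checks a received signature against the expected one
// in constant time, as hmac.Equal does internally.
func Verify(secret, payload []byte, signature string) bool {
	expected := Sign(secret, payload)
	return hmac.Equal([]byte(expected), []byte(signature))
}
```

With this you can sign an arbitrary JSON body, put the result into the X-Hub-Signature-256 header, and POST it to the payload endpoint with curl.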

Making API Calls

GitHub apps are centered around webhooks to which you can subscribe and listen to, but you can also use any of the GitHub REST/GraphQL API endpoints assuming you requested the necessary permissions. Using API rather than push events is useful – for example – when creating files, analyzing bulk data or querying data which cannot be received from webhooks.

To demonstrate how to do so, we will retrieve the pull requests of a specified repository:

func GetPullRequests(c *gin.Context) {
    owner := c.Param("owner")
    repo := c.Param("repo")
    if pullRequests, _, err := config.Config.GitHubClient.PullRequests.List(
        c, owner, repo, &github.PullRequestListOptions{
            State: "open",
        }); err == nil {
        var pullRequestTitles []string
        for _, pr := range pullRequests {
            pullRequestTitles = append(pullRequestTitles, *pr.Title)
        }
        c.JSON(http.StatusOK, gin.H{
            "pull_requests": pullRequestTitles,
        })
    } else {
        c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"reason": err.Error()})
    }
}
This function takes 2 arguments – owner and repo – which get passed to the PullRequests.List(...) function of the GitHub client instance. Along with that, we also provide a PullRequestListOptions struct to specify that we’re only interested in pull requests whose state is open. We then iterate over the returned PRs and accumulate their titles, which we return in the response.

The above function resides on .../api/v1/github/pullrequests/:owner/:repo path as specified in main.go so we can query it like so:

curl http://localhost:8080/api/v1/github/pullrequests/octocat/hello-world | jq .

It might not be ideal to query the API as shown above in situations where we expect a lot of data to be returned. In those cases we can use paging to avoid hitting rate limits. A function called GetPullRequestsPaginated, which performs the same task as GetPullRequests with the addition of a page argument for specifying the page size, can be found in cmd/app/apis/github.go.
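The interesting part of the paginated variant is translating the :page path parameter into go-github's ListOptions.Page. The repository's actual implementation may differ; a small illustrative sketch of just that parameter handling (the function name is made up for this example):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePageParam converts the ":page" path parameter into a 1-based
// page number suitable for go-github's ListOptions.Page field.
// An empty parameter defaults to the first page.
func parsePageParam(raw string) (int, error) {
	if raw == "" {
		return 1, nil
	}
	page, err := strconv.Atoi(raw)
	if err != nil || page < 1 {
		return 0, fmt.Errorf("invalid page parameter: %q", raw)
	}
	return page, nil
}
```

The resulting number would then be placed into PullRequestListOptions.ListOptions together with a fixed PerPage size before calling PullRequests.List.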

Writing Tests

So far we’ve been testing the app with localtunnel, which is nice for quick ad-hoc tests against live API, but it doesn’t replace proper unit tests. To write unit tests for this app, we need to mock-out the API to avoid being dependent on the external service. To do so, we can use go-github-mock:

func TestGithubGetPullRequests(t *testing.T) {
    expectedTitles := []string{"PR number one", "PR number three"}
    closedPullRequestTitle := "PR number two"
    mockedHTTPClient := mock.NewMockedHTTPClient(
        mock.WithRequestMatch(
            mock.GetReposPullsByOwnerByRepo,
            []github.PullRequest{
                {State: github.String("open"), Title: &expectedTitles[0]},
                {State: github.String("closed"), Title: &closedPullRequestTitle},
                {State: github.String("open"), Title: &expectedTitles[1]},
            },
        ),
    )
    client := github.NewClient(mockedHTTPClient)
    config.Config.GitHubClient = client

    res := httptest.NewRecorder()
    ctx, _ := gin.CreateTestContext(res)
    ctx.Params = []gin.Param{
        {Key: "owner", Value: "octocat"},
        {Key: "repo", Value: "hello-world"},
    }

    GetPullRequests(ctx)

    body, _ := ioutil.ReadAll(res.Body)

    assert.Equal(t, 200, res.Code)
    assert.Contains(t, string(body), expectedTitles[0])
    assert.NotContains(t, string(body), closedPullRequestTitle)
    assert.Contains(t, string(body), expectedTitles[1])
}
This test starts by defining a mock client which will be used in place of the normal GitHub client. We give it a list of pull requests which will be returned when PullRequests.List is called. We then create a test context with the arguments that we want to pass to the function under test, and we invoke the function. Finally, we read the response body and assert that only PRs in the open state were returned.

For more tests, see the full source code which includes examples of tests for pagination as well as handling of errors coming from GitHub API.

When it comes to testing our webhook methods, we don’t need a mock client, because we’re dealing with plain API requests. Examples of such tests, including a generic API testing setup, can be found in cmd/app/webhooks/github_test.go.


In this article I tried to give you a quick tour of GitHub Apps, as well as the GitHub repository containing the sample Go GitHub project. In both cases I didn’t cover everything; the Go client package has much more to offer, and to see all the actions you can perform with it, I recommend skimming through the docs index as well as looking at the source code itself, where GitHub API links are listed alongside each function, such as the PullRequests.List function shown earlier.

As for the repository, there are a couple more things you might want to take a look at, including the Makefile targets, CI/CD and additional tests. If you have any feedback or suggestions, feel free to create an issue, or just star the repository if it was helpful to you. 🙂


Imgur API Image Uploader using JavaScript (+ HTML)

Source :-

See Example :-

Video Documentation :-

Codepen Demo :-

Imgur is great for hosting images for free.

There are other platforms like FileStack, Cloudinary, and UploadCare; but among them all, Imgur is the best for uploading images because it’s free for non-commercial usage.

And, there is a simple way to set up the Imgur API to upload images directly from the local disk.

Here’s how to do it:

Imgur API Image Uploader

Let’s break it into simple baby steps:

Step #1 – Get the Imgur API

First of all, you will have to register your application with the Imgur API. Go to the API page and register an application. It should look like the below screenshot:

Imgur Api

Fill in the following details in the respective fields:

  • Application name: whatever you would like to name it
  • Authorization type: OAuth 2 authorization with a callback URL
  • Authorization callback URL: –
  • Application website: your website address (it’s optional)
  • Email: your email address
  • Description: however you’d like to describe your app

As soon as you submit, you will be presented with the Client ID and Client Secret, save both somewhere.


It should look much like the screenshot above.

Step #2 – Create the Uploader

Well, most of the work is done by now.

You just have to create an HTML file, copy the below code and save.

And yes, don’t forget to replace YOUR_CLIENT_ID with the real Client ID that you saved in Step #1.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Imgur API Image Uploader</title>
</head>
<body>
    <img src="" id="img" height="200px">
    <br />
    <input type="file" id="file">
    <br />
    <p id="url"></p>

    <script>
        const file = document.getElementById("file")
        const img = document.getElementById("img")
        const url = document.getElementById("url")
        file.addEventListener("change", ev => {
            const formdata = new FormData()
            formdata.append("image", ev.target.files[0])
            fetch("https://api.imgur.com/3/image", {
                method: "post",
                headers: {
                    Authorization: "Client-ID YOUR_CLIENT_ID"
                },
                body: formdata
            }).then(data => data.json()).then(data => {
                img.src = data.data.link
                url.innerText = data.data.link
            })
        })
    </script>
</body>
</html>

Voila! Your Imgur API Image Uploader is ready.

Try opening the HTML file in your browser and test it out by uploading any image, it should return you the URL of the uploaded image.

That’s it.

And yes, you can either run the HTML file in the browser directly from your local disk, or upload it to Netlify or GitHub Pages.

If you’ve got any related query, feel free to let me know in the comments.


How to Manage Dates and Times in PHP Using Carbon

Date and time manipulation is one of the most frequently experienced challenges of developing web apps in PHP. And one of its most prevalent issues is identifying time disparities and making them readable, such as “one hour ago”.

However, handling dates and times — and issues such as this — is greatly simplified by using Carbon; it’s a library which reduces lengthy hours of coding and debugging to only a few lines of code. This is because Carbon, created by Brian Nesbit, extends PHP’s own DateTime class and makes it much simpler to use.

If you’ve not heard of it before, it is self-described as:

A basic PHP API extension for DateTime

In this tutorial, you will learn Carbon’s core features and capabilities, giving you the ability to far more easily manipulate date and time in PHP.


To follow this tutorial you need the following components:

  • PHP 7.4 or higher.
  • Composer globally installed.


To install Carbon, first create a new project directory called carbon, change into the directory, and install the package, by executing the following commands:

mkdir carbon
cd carbon
composer require nesbot/carbon


Carbon is already included if you’re using Laravel. If you are, have a look at the suggested Laravel settings and best practices. If you’re using Symfony, have a look at the Symfony configuration and best-practices guidelines.


Format dates using Carbon

With Carbon installed, in your editor or IDE, create a new PHP file named index.php in the root of your project directory. Then, add the code below to index.php to include Composer’s autoloader file, vendor/autoload.php, and import Carbon’s core class.

require 'vendor/autoload.php';
use Carbon\Carbon;

Print dates

Now that Carbon’s installed, let’s start working through some examples, starting with the most essential: printing out some dates. To do that, we’ll use Carbon::today() to retrieve today’s date, which you can see in the example below.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

echo Carbon::today() . "\n";

Add that to index.php and then run it.

2021-10-25 00:00:00

The output, which you can see an example of above, returns the current date, with the time being blank. However, if you update index.php to use Carbon::now() instead, which you can see in the example below, you can retrieve the time along with the date.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$now = Carbon::now();
echo "$now\n";

After updating index.php and running it, you should see output similar to the example below, in your terminal.

2021-01-25 22:49:56

In contrast to Carbon::now(), which returns the current date and time, and Carbon::today(), which only returns the current date, Carbon::yesterday() and Carbon::tomorrow() generate Carbon instances for yesterday and tomorrow, respectively, as in the examples below.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$yes = Carbon::yesterday();
echo "Yesterday: $yes\n";

$tomorrow = Carbon::tomorrow();
echo "Tomorrow: $tomorrow\n";

The functions today(), yesterday(), now(), and tomorrow() are examples of common static instantiation helpers.

Create dates with precision

Carbon also allows us to generate dates and times based on a set of parameters. For example, to create a new Carbon instance for a specific date use the Carbon::createFromDate() method, passing in the year, month, day, and timezone, as in the following example.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$year = 2020;
$month = 8;
$day = 21;
$timezone = 'Europe/Berlin';
Carbon::createFromDate($year, $month, $day, $timezone);

You can also specify the time by calling Carbon::create(), passing in the year, month, day, hour, minute, second, and timezone, as in the following example.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$year = 2021;
$month = 4;
$day = 21;
$timezone = 'Europe/Berlin';
$hour = 11;
$minute = 11;
$second = 11;
Carbon::create($year, $month, $day, $hour, $minute, $second, $timezone);

If any one or more of $year, $month, $day, $hour, $minute, or $second is set to null, its now() equivalent value will be used. If $hour is not null, however, then the default values for $minute and $second will be 0.

Update index.php in your editor or IDE to match the code below and run it.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$date1 = Carbon::create(2021, 10, 25, 12, 48, 00);
echo $date1 . "\n";

$date2 = Carbon::create(2021, 8, 25, 22, 48, 00, 'Europe/Moscow');
echo $date2 . "\n";

$date3 = Carbon::createFromDate(2018, 8, 14, 'America/Chicago');
echo $date3 . "\n";

$date4 = Carbon::createFromDate(2021, 10, 25, 'Africa/Lagos');
echo $date4 . "\n";

$date5 = Carbon::createFromTimestamp(1633703084);
echo $date5 . "\n";

The create() call for the first variable builds a Carbon instance from date and time components; for the second variable, a timezone was also supplied.

A Carbon object was constructed using date components with Carbon::createFromDate() when initializing the third and fourth variables. Doing so generates a Carbon instance based just on a date.

It’s worth pointing out that if no timezone is specified, your default timezone is used. However, if a timezone other than yours is specified, that timezone’s actual time is used. In both cases, the time portion is set to the current time.

The final variable, initialized using Carbon::createFromTimestamp, generates a date based on a timestamp.

Relative Modifiers

Another fantastic feature of Carbon is relative modifiers. These allow strings such as “next friday” or “a year ago” to be used when constructing Carbon instances relative to the current date.

The following are examples of strings that are considered relative modifiers.

  • +
  • -
  • ago
  • first
  • next
  • last
  • this
  • today
  • tomorrow

Modify the date and time

When working with dates, you’ll need to do more than just get the date and time. You’ll frequently need to modify the date or time as well, such as adding a day or a week or subtracting a month.

A good example of needing this functionality is when building an affiliate program. In this scenario you’ll want the affiliate cookie which the user receives to expire after a specified period of time, making the referral invalid.

Let’s assume a cookie has a 90-day lifespan. With Carbon’s add and subtract methods, we could compute that time quite trivially. The example below uses addDays() to determine when the cookie expires.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$name = 'Affiliate_Program';
$value = 'Referrer ID';
$path = '/';
$current = Carbon::now();

// add 90 days to the current time
$time = $current->addDays(90);
$expires = $time->timestamp;
setcookie($name, $value, $expires, $path);

Carbon provides a whole family of add() and sub() methods like this. If you’re adding a single unit, such as one day, you use addDay(), but if you’re adding several days, you use addDays(). Carbon’s add and subtract methods give you adjusted dates and times.

Looking forward and back

Carbon also provides the next() and previous() functions which return the upcoming and previous occurrences of a particular weekday, which you can see an example of in the code below.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$now = Carbon::now();
echo "$now\n";

// next() and previous() modify the instance, so work on copies
$next_monday = $now->copy()->next(Carbon::MONDAY);
echo "Next monday: $next_monday\n";

$prev_monday = $now->copy()->previous(Carbon::MONDAY);
echo "Previous monday: $prev_monday\n";

Format the date and time

Yet another fantastic option Carbon provides is the ability to format dates and times in whatever format that you desire.

As Carbon is an expanded version of PHP’s built-in date and time functions, Carbon can use PHP’s built-in date formats via the format() function. In addition, toXXXString() methods are available to display dates and times with predefined formatting.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$dt = Carbon::create(2021, 10, 25, 12, 48, 00);
echo $dt->toDateString() . "\n";          // 2021-10-25
echo $dt->toFormattedDateString() . "\n"; // Oct 25, 2021
echo $dt->toTimeString() . "\n";          // 12:48:00
echo $dt->toDateTimeString() . "\n";      // 2021-10-25 12:48:00
echo $dt->toDayDateTimeString() . "\n";   // Mon, Oct 25, 2021 12:48 PM
echo $dt->format('Y-m-d h:i:s A') . "\n"; // 2021-10-25 12:48:00 PM

Carbon provides many other typical datetime formatting methods beyond these; see the formatting section of its documentation for the full list.

Calculate relative time

The diffForHumans() function in Carbon also allows us to represent time in relative terms. Datetime differences are frequently displayed in a so-called humanized format, such as in one year or three minutes ago.

Let’s assume we’re developing an extension for a blog CMS and we want to display an article’s or a comment’s publish time as “x hours ago.”

First, the time and date the article was published, as well as other parameters, would be recorded in a database field. As a result, we extract the date from the database in the format Y-m-d H:i:s and store it in a variable. Let’s call it $time.


$time = $row['articledate']; 

If the date in our database is August 4th, 2021, as in the example below, you would use the Carbon::createFromFormat() function to produce a Carbon date, and then use diffForHumans() to find the difference.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

$row['articledate'] = '2021-08-04 16:19:49';
$time = $row['articledate'];
$dt = Carbon::createFromFormat('Y-m-d H:i:s', $time);

echo $dt->diffForHumans() . "\n";

If the date was saved as a timestamp, you can call Carbon::createFromTimestamp() instead. Carbon also ships with translations, so if your site stores a user’s language preference, you can localize the output. If you have a user who speaks French, for example, all you have to do is call Carbon::setLocale('fr') before diffForHumans(), as seen below.


require __DIR__ . "/vendor/autoload.php";

use Carbon\Carbon;

Carbon::setLocale('fr');

$dt = Carbon::createFromFormat('Y-m-d H:i:s', '2021-08-04 16:19:49');
echo $dt->diffForHumans() . "\n";

Output in this case would be, for example, ‘il y a 2 mois’.

That covers the essentials of managing dates and times in PHP using Carbon.

In this tutorial, you learned how to install Carbon and its core functionality. However, Carbon has a great deal more functionality than has been covered in this tutorial. Check out their docs if you’re keen to learn more about the available functionality.


Some Awesome APIs for your next project


Several free web APIs are available to connect to your mobile app, web app, or website to add compelling functionality.

A web API is an application programming interface that may be accessed through the internet using web-specific protocols.

Here are a few APIs you can use to create some fantastic projects:

1. The CheapShark API

CheapShark is a service that monitors the pricing of PC games on sites such as Amazon, Steam, and GamersGate and displays the best discounts to customers. Users may check for top bargains, search for the lowest price on a specific game, sign up for notifications, or browse what’s available on the site. Developers may use the CheapShark API to incorporate the site’s pricing data into their websites or apps.

CheapShark API

2. The Wit.AI

Wit.ai turns text or speech into recognizable actions that your app or website can use. It’s a natural language processing (NLP) interface that converts natural language (voice or text messages) into structured data. Developers use Wit because it streamlines creating apps and gadgets that users can speak to. Without it, developers would have to master natural language processing methods themselves, which would take too much time if you only wanted to create a simple application.

3. GrammarBot API

The GrammarBot API offers spelling and grammatical checks to your application. Submit the text, and you’ll get a JSON response with potential issues and suggested fixes.

GrammarBot API

4. Rapid API

Based only on APIs, this is a handy tool. It’s more than simply an API directory; it’s also an API marketplace. If you’ve created an API and want to charge others to use it, you can publish it on RapidAPI.

If you only want to utilize APIs, RapidAPI provides an API playground to test an API in several languages! It is pretty beneficial.

Rapid API

Thanks for reading!

I hope it motivates you to build more amazing projects, acquire confidence, and grow as a developer!

Follow me on Github and Twitter


Launched PandaDoc Tech Blog

Just 3 months ago, we launched PandaDoc for Developers. Since then, many developers have created their sandbox accounts and started exploring our API for free.

We carefully collected all the feedback and feature requests that we received and have been working to continuously improve our API capabilities.

And today, the first update is available! We are happy to announce the launch of 8 long-awaited API features to speed up and simplify your workflow:

To get updates from our team in the future, please, follow our Medium publication!


Changelog #0004 — 🖥️ Desktop app

Another week, another changelog! Check out what we shipped 👇

HTTPie for Web & Desktop

HTTPie’s mission is to make APIs simple and intuitive for all those building the tools of our time. And today we got a bit closer to that goal by becoming the first API testing platform with clients for the Web, Desktop, and Terminal.

🖥️ Desktop app released

Now we can finally say that “HTTPie for Web & Desktop” is entirely accurate: the HTTPie for Desktop app is live!

HTTPie for Desktop login

HTTPie for Desktop icon

A dedicated desktop app has been the most frequent request from our beta users. And for a good reason: without it, it’s hard to test APIs running on localhost and behind a firewall. A focused app also serves as a protection against distraction rabbit holes, which are just a bit harder to avoid when working in the browser (as one of our beta users has pointed out).

HTTPie for Desktop

If you’re a beta member, download the app today, and start working locally & without distractions.

✨ Improvements

  • Whenever you feel like making a new API request, you can now use the short & sweet alias for
  • There’s a new 📣 icon in your left-bottom navigation linking to these changelogs.
  • Layout improvements: you can’t see that annoying space at the right anymore, and the scroll works much better, among others.

🪲 Fixes

  • You couldn’t send a request with an invalid header name, even if that header was disabled. Now you can.
  • The web app was adding some unnecessary headers to outgoing requests. It’s no longer the case.
  • The panels resizing feature we shipped last week had some naughty bugs. They’re gone now.

HTTPie for Terminal

Here’s a summary of this week’s improvements to the development version of HTTPie for Terminal, which will be part of the upcoming v3.0.0 release.

🌲️ Nested JSON support (#1224)

The long-awaited nested JSON support just landed! You can now use a new syntax based on the JSON Form notation to rapidly build complex JSON requests.

http --offline --print=B \
    result[status][type]=ok ids:=1 ids:=2

{
    "ids": [
        1,
        2
    ],
    "result": {
        "status": {
            "type": "ok"
        }
    }
}
This new syntax is very expressive, and we believe it will save a lot of keystrokes. See more examples in the unstable docs.

✨ Improvements

  • Startup time is now 40% faster. (#1221)

  • Are you authenticating with a bearer token? Great news then, bearer token auth is now a built-in method. You can use -A bearer -a token to send requests with it. (#1216)
  • There are two new operators: ==@ for reading query params from a file; and :@ for reading headers from a file. (#1218)
  • If any of the response headers include Content-Type: text/event-stream, then we’ll now auto-stream the response body. (#1226)

🪲 Fixes

  • An XML declaration was auto-added to the beginning of each formatted XML response, but not anymore. Now you’ll only see it if it’s already present in the raw response. (#1183)

Happy testing, and see you next week!

Originally published on HTTPie blog.


Exploring Google Analytics Realtime Data with Python

Google Analytics can provide a lot of insight into traffic and about the users visiting your website. A lot of this data is available in a nice format in the web console, but what if you wanted to build your own diagrams and visualizations, process the data further, or just generally work with it programmatically? That’s where the Google Analytics API can help you, and in this article we will look at how you can use it to query and process realtime analytics data with Python.

Exploring The API

Before jumping into any specific Google API, it might be a good idea to first play around with a few of them. Using Google’s API explorer, you can find out which API will be most useful for you, and it will also help you determine which APIs to enable in the Google Cloud console.

We will start with Real Time Reporting API as we’re interested in realtime analytics data, whose API explorer is available here. To find other interesting APIs, check out the reporting landing page, from where you can navigate to other APIs and their explorers.

For this specific API to work, we need to provide at least 2 values – ids and metrics. The first of them is the so-called table ID, which is the ID of your analytics profile. To find it, go to your analytics dashboard, click Admin in the bottom left, then choose View Settings, where you will find the ID in the View ID field. For this API you need to provide the ID formatted as ga:<TABLE_ID>.

The other value you will need is a metric. You can choose one from the metrics columns here. For the realtime API, you will want either rt:activeUsers or rt:pageviews.

With those values set, we can click execute and explore the data. If the data looks good and you determine that this is the API you need, then it’s time to enable it and set up the project for it…

Setting Up

To be able to access the API, we first need to create a project in Google Cloud. To do that, head over to Cloud Resource Manager and click Create Project. Alternatively, you can also do it via the CLI, with gcloud projects create $PROJECT_ID. After a few seconds you will see the new project in the list.

Next, we need to enable the API for this project. You can find all the available APIs in API Library. The one we’re interested in – Google Analytics Reporting API – can be found here.

The API is now ready to be used, but we need credentials to access it. There are a couple of different types of credentials based on the type of application. Most of them are suited for applications that require user consent, such as client-side or Android/iOS apps. The one that fits our use-case (querying data and processing it locally) is a service account.

To create a service account, go to the credentials page, click Create Credentials and choose Service Account. Give it a name and make note of the service account ID (the second field) – we’ll need it in a second. Click Create and Continue (there’s no need to give the service account accesses or permissions).

Next, on the Service Account page, choose your newly created service account and go to the Keys tab. Click Add Key and Create New Key. Choose the JSON format and download it. Make sure to store it securely, as it can be used to access your project in your Google Cloud account.

With that done, we now have a project with the API enabled and a service account with credentials to access it programmatically. This service account, however, doesn’t have access to your Google Analytics view, so it cannot query your data. To fix this, you need to add the previously mentioned service account ID as a user in Google Analytics with Read & Analyse access – a guide for adding users can be found here.

Finally, we need to install Python client libraries to use the APIs. We need 2 of them, one for authentication and one for the actual Google APIs:

pip install google-auth-oauthlib
pip install google-api-python-client

Basic Queries

With all that out of the way, let’s write our first query:

import os

from googleapiclient.discovery import build
from google.oauth2 import service_account

KEY_PATH = os.getenv('SA_KEY_PATH', 'path-to-secrets.json')
TABLE_ID = os.getenv('TABLE_ID', '123456789')

credentials = service_account.Credentials.from_service_account_file(KEY_PATH)
scoped_credentials = credentials.with_scopes(
    [''])

with build('analytics', 'v3', credentials=scoped_credentials) as service:
    realtime_data =
        ids=f'ga:{TABLE_ID}', metrics='rt:pageviews',
        dimensions='rt:pagePath').execute()
    print(realtime_data)


We begin by authenticating to the API using the JSON credentials for our service account (downloaded earlier) and limiting the scope of the credentials to the read-only analytics API. After that we build a service, which is used to query the API – the build function takes the name of the API, its version and the previously created credentials object. If you want to access a different API, see this list for the available names and versions.

Finally, we can query the API – we set ids, metrics and optionally dimensions as we did with the API explorer earlier. You might be wondering where I found the methods of the service object (.data().realtime().get(...)) – they’re all documented here.

And when we run the code above, the print(...) will show us something like this (trimmed for readability):

{
    "query": {
        "ids": "ga:<TABLE_ID>",
        "dimensions": "rt:pagePath",
        "metrics": [
            "rt:pageviews"
        ]
    },
    "profileInfo": {
        "profileName": "All Web Site Data"
    },
    "totalsForAllResults": {
        "rt:pageviews": "23"
    },
    "rows": [
        ["/", "2"],
        ["/404", "1"],
        ["/blog/18", "1"],
        ["/blog/25", "3"],
        ["/blog/28", "2"],
        ["/blog/3", "3"],
        ["/blog/51", "2"]
    ]
}

That works, but considering that the result is a dictionary, you will probably want to access its individual fields:

print(realtime_data['profileInfo']['profileName'])
# All Web Site Data
print(realtime_data['query']['metrics'])
# ['rt:pageviews']
print(realtime_data['query']['dimensions'])
# rt:pagePath
print(realtime_data['totalsForAllResults']['rt:pageviews'])
# 23

The previous example shows usage of the realtime() method of the API, but there are 2 more we can make use of. The first of them is ga():

with build('analytics', 'v3', credentials=credentials) as service:
    ga_data =
        ids=f'ga:{TABLE_ID}',
        metrics='ga:sessions', dimensions='ga:country',
        start_date='yesterday', end_date='today').execute()

    print(ga_data)
    # 'totalsForAllResults': {'ga:sessions': '878'}, 'rows': [['Angola', '1'], ['Argentina', '5']], ...

This method returns historical (non-realtime) data from Google Analytics and also has more arguments that can be used for specifying time range, sampling level, segments, etc. This API also has additional required fields – start_date and end_date.

You probably also noticed that the metrics and dimensions for this method are a bit different – that’s because each API has its own set of metrics and dimensions. Those are always prefixed with the name of API – in this case ga:, instead of rt: earlier.

The third available method, .mcf(), is for Multi-Channel Funnels data, which is beyond the scope of this article. If it sounds useful to you, check out the docs.

One last thing to mention when it comes to basic queries is pagination. If you build queries that return a lot of data, you might end up exhausting your query limits and quotas or have problems processing all the data at once. To avoid this you can use pagination:

with build('analytics', 'v3', credentials=credentials) as service:
    ga_data =
        ids=f'ga:{TABLE_ID}',
        metrics='ga:sessions', dimensions='ga:country',
        start_index='1', max_results='2',
        start_date='yesterday', end_date='today').execute()

    print(f'Items per page  = {ga_data["itemsPerPage"]}')
    # Items per page  = 2
    print(f'Total results   = {ga_data["totalResults"]}')
    # Total results   = 73

    # These only have values if other result pages exist.
    if ga_data.get('previousLink'):
        print(f'Previous Link  = {ga_data["previousLink"]}')
    if ga_data.get('nextLink'):
        print(f'Next Link      = {ga_data["nextLink"]}')
        #       Next Link      =<TABLE_ID>&dimensions=ga:country&metrics=ga:sessions&start-date=yesterday&end-date=today&start-index=3&max-results=2

In the above snippet we added start_index='1' and max_results='2' to force pagination. This causes previousLink and nextLink to get populated, which can be used to request the previous and next pages, respectively. This, however, doesn’t work for realtime analytics using the realtime() method, as it lacks the needed arguments.

Metrics and Dimensions

The API itself is pretty simple. The part that is very customizable is the arguments, such as metrics and dimensions. So, let’s take a better look at all the arguments and their possible values to see how we can take full advantage of this API.

Starting with metrics – the 3 most important values to choose from are rt:activeUsers, rt:pageviews and rt:screenViews:

  • rt:activeUsers gives you the number of users currently browsing your website, as well as their attributes
  • rt:pageviews tells you which pages are being viewed by users
  • rt:screenViews – same as page views, but only relevant within an application, e.g. Android or iOS

For each metric, a set of dimensions can be used to break down the data. There are way too many of them to list here, so let’s instead look at some combinations of metrics and dimensions that you can plug into the above examples to get interesting information about the visitors of your website:

  • metrics="rt:activeUsers", dimensions="rt:userType" – Differentiate currently active users based on whether they’re new or returning.
  • metrics="rt:pageviews", dimensions="rt:pagePath" – Current page views with breakdown by path.
  • metrics="rt:pageviews", dimensions="rt:medium,rt:trafficType" – Page views with breakdown by medium (e.g. email) and traffic type (e.g. organic).
  • metrics="rt:pageviews", dimensions="rt:browser,rt:operatingSystem" – Page views with breakdown by browser and operating system.
  • metrics="rt:pageviews", dimensions="rt:country,rt:city" – Page views with breakdown by country and city.

As you can see, there’s a lot of data that can be queried, and because of the sheer amount it might be necessary to filter it. To filter the results, the filters argument can be used. The syntax is quite flexible and supports arithmetic and logical operators as well as regex queries. Let’s look at some examples:

  • rt:medium==ORGANIC – show only page visits from organic search
  • rt:pageviews>2 – show only results that have more than 2 page views
  • rt:country=~United.*,rt:country==Canada – show only visits from countries starting with “United” (UK, US) or from Canada (, acts as an OR operator; for AND use ;).

For complete documentation on filters see this page.
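
Since filter expressions are just strings joined by these separators, you can also compose them programmatically. A small sketch – combine_and and combine_or are hypothetical helpers, not part of the client library – building one filter out of the examples above, ready to pass as the filters argument of

```python
def combine_and(*exprs):
    # ';' acts as the AND separator in the Analytics filter syntax
    return ';'.join(exprs)

def combine_or(*exprs):
    # ',' acts as the OR separator in the Analytics filter syntax
    return ','.join(exprs)

# Organic visits with more than 2 page views, from the US/UK or Canada:
filters = combine_and(
    'rt:medium==ORGANIC',
    'rt:pageviews>2',
    combine_or('rt:country=~United.*', 'rt:country==Canada'))

print(filters)
# rt:medium==ORGANIC;rt:pageviews>2;rt:country=~United.*,rt:country==Canada
```

This relies on OR (,) being evaluated before AND (;) in the filter syntax, so the country alternatives group together as intended.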

Finally, to make the results a bit more readable or easier to process, you can also sort them using the sort argument. For ascending sorting you can use e.g. sort=rt:pagePath, and for descending you prepend -, e.g. sort=-rt:pageTitle.

Beyond Realtime API

If you can’t find some data, or you’re missing some features in the Realtime Analytics API, then you can try exploring other Google Analytics APIs. One of them is the Reporting API v4, which has some improvements over the older APIs.

However, it also takes a slightly different approach to building queries, so let’s look at an example to get you started:

with build('analyticsreporting', 'v4', credentials=credentials) as service:
    reports = service.reports().batchGet(body={
        "reportRequests": [{
            "viewId": f"ga:{TABLE_ID}",
            "dateRanges": [{
                "startDate": "yesterday",
                "endDate": "today"
            }],
            "dimensions": [{
                "name": "ga:browser"
            }],
            "metrics": [{
                "expression": "ga:sessions"
            }]
        }]
    }).execute()

    print(reports)

As you can see, this API doesn’t provide a large number of arguments that you can populate; instead it has a single body argument, which takes a request body with all the values that we’ve seen previously.

If you want to dive deeper into this one, then you should check out the samples in the documentation, which give a complete overview of its features.

Closing Thoughts

Even though this article shows only the usage of analytics APIs, it should give you a general idea of how to use all Google APIs with Python, as all the APIs in the client library share the same general design. Additionally, the authentication shown earlier can be applied to any API; all you need to change is the scope.

While this article used the google-api-python-client library, Google also provides lightweight libraries for individual services and APIs. At the time of writing, the specific library for analytics is still in beta and lacks documentation, but when it becomes GA (or at least more stable), you should probably consider exploring it.
