How to Connect Your Local Project’s Codebase to a GitHub Repository Fast!

GitHub is one of the most powerful tools for developers, whether you are working on a project solo or as part of a team. Git and GitHub add a version control layer to your code, so anyone can see the change history, the edits, and the various branches of the codebase.

In this episode of the Tech Stack Playbook, we are going to review the process of uploading a local codebase repository from a computer to GitHub from the command line.

This episode is packed with content, so here’s a glance at what you’ll learn about below, and a series of sections further down in this blog post highlighting the important topics we discussed:

Time stamps:
00:00 GitHub 101
02:15 Set up your code project locally
03:20 Create an empty repository in GitHub
04:47 Initialize your GitHub connection locally
10:28 Review the pushed changes in GitHub
10:53 Set up GitHub Desktop to manage our repository
11:33 Push new changes via GitHub Desktop to GitHub
12:57 Wrap-up and reflection on what we set up with GitHub

👨‍💻 GitHub 101


I like to think of GitHub as the code-version of Google Docs. You can switch back to a previous version of your document, make edits and push those in real time, and also collaborate with others on the same version of the document.

Another major benefit of GitHub is branching, which lets you keep different states of your codebase for different purposes. A common practice involves 3 core branches: dev, stage, and prod. The dev branch is what you build from: test, debug, and add new features here. The stage branch is for new additions that are ready for review ahead of going to prod; you need to thoroughly test each addition to make sure it is ready for users, so you don't mess with the client-facing build. The prod, or production, version of your codebase is what runs live for your clients, customers, or users. This (hopefully) is free of bugs and errors because of the previous two stages code passes through.

However, if you are working on your project solo, you might only need 2 core branches: main, a version for you to build/test your app, and prod, a version in production that is always live.
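As a self-contained sketch of the branching convention above (the branch names are conventions, and the throwaway repository and commit identity are made up for illustration):

```shell
# Create a throwaway repo to illustrate the dev/stage/prod convention.
# (git init -b requires Git 2.28+.)
cd "$(mktemp -d)"
git init -q -b dev
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch stage   # pre-release review happens here
git branch prod    # the live, client-facing branch
git branch --list  # dev (current), prod, stage
```

In a real project, dev would be merged into stage for review, and stage into prod for release.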

In today’s tutorial, we are going to review the process of uploading a local codebase from a computer to GitHub from the command line. For each of the steps below, I denote whether it happens (local) on your computer or (web) on the GitHub website.

👨‍💻 Step 1: Set up your code project folder (local)

For this example, I have created a ReactJS SoundCloud Clone application with the create-react-app framework and implemented the AWS Amplify framework with Cognito identity and access management, DynamoDB NoSQL database storage, S3 object storage for media items, and AppSync to help us manage a GraphQL API. The app allows users to create an account, upload songs to the cloud through the app, and then play those media files through the built-in player. Stay tuned for a full tutorial on this build coming soon ☺️

If you do have a local codebase on your computer that you want to push to GitHub, feel free to jump right into Step 2 below.

If you do not have a local codebase on your computer to push to GitHub, you can spin up a practice repo with either a React.js or Next.js template below to get started:

For React, run:

npx create-react-app techstackplaybookpracticerepo

For Next, run:

npx create-next-app --example with-tailwindcss techstackplaybookpracticerepo

Once you have a folder for your app created with one of these frameworks, move onto Step 2 below.

👨‍💻 Step 2: Create an empty repository in GitHub (web)

When you go to GitHub, click on your profile avatar at the top right to open a drop-down of menu items.

Click on the drop-down item that says “Your Repositories” which will bring you to a page that lists out all of the repositories in your GitHub account. There will be a green button that says “New” – make sure to click that to pull up the create repository flow.

There will be a number of options to select, but here’s a quick guide:

  • Repository template: (keep default option)
  • Repository name: TechStackPlaybookPracticeRepo
  • Description: (optional)
  • Public/Private: Public
  • Initialize this repository with: (keep these options unchecked)

When you are ready, click “Create repository” to finalize the setup of an empty repository in GitHub.

When the empty repository page loads, you will notice a URL to the right of the HTTPS button. Copy this URL down, as we will need it in Step 3 later on.

👨‍💻 Step 3: Initialize your GitHub connection (local)

From the root of your project folder (the outermost folder that wraps everything, for me this is called soundcloud which contains my /amplify folder, /public folder, /src folder, etc.), make sure that your terminal window is set at this level.

You will initialize an empty git repository with a branch called main with the following:

git init -b main

This will create a hidden folder called .git which will actually save and store all of our version control changes. It’s almost like a cookie that connects our local repository to the GitHub version.

Next, we stage our locally created files so Git will track them, with the following:

git add .

We then commit these staged files onto the main branch of the repository we are initializing for GitHub with:

git commit -m "First Commit to GitHub"

The output will probably list a lot of added files. Make sure that .gitignore is included in this list and that it contains node_modules, so that you don’t upload a gazillion node_modules files to GitHub ☺️
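A quick self-contained way to confirm node_modules really is ignored (throwaway repo and made-up file names, just for the check):

```shell
cd "$(mktemp -d)"
git init -q
mkdir -p node_modules/some-package
echo "console.log('hi')" > node_modules/some-package/index.js
echo "node_modules/" > .gitignore

git add .
git status --short    # only .gitignore shows up as staged
# check-ignore prints the path if (and only if) it is ignored:
git check-ignore node_modules/some-package/index.js
```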

We will now use the URL that we copied down in Step 2 to tell Git where to send our files:

git remote add origin https://github.com/YourGitHubHandle/TechStackPlaybookPracticeRepo.git

  • make sure to change YourGitHubHandle to your actual account
  • make sure to change TechStackPlaybookPracticeRepo to the name of the repo you created on GitHub

What this effectively does is tell Git that our local repository should send its files to this empty GitHub repository link online on the web, under the remote name origin.

We can verify the newly added remote with this:

git remote -v

You will then see 2 lines printed in the terminal, one ending with (fetch) and one ending with (push). These are the URLs Git will use when fetching from and pushing to this GitHub repository in the cloud.

Now that we’ve initialized the connection, we will push our local code to origin main, the destination we’ve set in GitHub:

git push -u origin main

This will enumerate all the objects to be pushed, compress them, and push them to the GitHub repository, with the local branch main set to track the corresponding branch on origin.
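Putting all of Step 3 together, the full sequence looks like this; to keep the sketch self-contained, a local bare repository stands in for the empty GitHub repository, and the commit identity is made up:

```shell
work=$(mktemp -d)
git init -q --bare "$work/origin.git"    # stand-in for the empty GitHub repo

mkdir "$work/project" && cd "$work/project"
echo "# Practice repo" > README.md

git init -q -b main                      # initialize with a 'main' branch
git add .                                # stage all local files
git -c user.name=Me -c user.email=me@example.com \
    commit -q -m "First Commit to GitHub"
git remote add origin "$work/origin.git" # on GitHub: your HTTPS repo URL
git remote -v                            # one (fetch) line, one (push) line
git push -q -u origin main               # push and set upstream tracking
```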

👨‍💻 Step 4: Review the pushed changes in GitHub (web)

On our GitHub repository page, what was once empty should now, upon refreshing the page, show the codebase that we had locally on our computer.

What we have done is create a synced pair between our local repository and our GitHub repository (origin). However, this covers just our most recent local changes. What if we want to make ongoing, regular pushes to our GitHub repository as a backup? We will review this with a tool called GitHub Desktop in the next step below.

👨‍💻 Step 5: Set up GitHub Desktop to manage our repository (local)

GitHub Desktop, GitHub’s official desktop client, is a GUI (graphical user interface) that offers an easy and efficient way to manage a GitHub repository right from your computer, without needing to worry about typing the right command-line scripts and sequences in the terminal.

While it is very important to understand what is happening behind the scenes at the terminal level, to move fast we need tools that expedite and automate our workflow. When typing in the terminal, spelling mistakes and other human error can cost us precious time. GitHub Desktop helps developers move faster with their repositories and has been an amazing tool in my workflow.

As a side note, there are other GUIs for Git and SCM (source control management) tooling, such as GitKraken, which integrates with services like Azure DevOps, as well as GitLab.

We will need to add the repository in our GitHub Desktop client, because even though the repository is synced with GitHub, the GitHub Desktop client won’t track it until we tell it to.

In the “Add” drop-down on the button to the right of the text field in the GitHub Desktop client, you will select the drop-down option: Add Local Repository

When we have the option to “Choose” a folder, we will want to select the outermost folder container for our project. For you, this might look like: /user/Documents/GitHub/TechStackPlaybookPracticeRepo

Once the outermost folder is selected, we will click Add Repository

This will now connect to our hidden .git file and anytime we make changes and save them in our code editor, GitHub Desktop will show those changes reflected in the GUI.

👨‍💻 Step 6: Push new changes via GitHub Desktop to GitHub (local)

In GitHub Desktop, we should see 1 or more file changes reflected in the list of “changed files” on the left half of the app. In this video, I updated the README file, which is why it has a check-mark next to it and the app says 1 changed file at the top.

In the bottom right, we will give our commit a name, which can be anything you wish. I said: Updated Readme for YouTube!. You can also write a description if you want, but it is optional.

At the top, you will see I have the current branch set to main, as I only have 1 branch created for this video.

When everything looks good, click the blue button at the bottom left that says “Commit to main”.

The bottom right button should now say Push origin; once you select this, it will send the committed changes from our local main branch to the main branch on GitHub on the web.

👨‍💻 Step 7: Review the pushed changes in GitHub (web)

On our GitHub repository page, upon refreshing the page, you should see your changes reflected in the online version of the codebase, matching your local changes.

In this example, the README file reflects the change: in the file/folder list, all of the folders/files have the commit message First Commit to GitHub from Local except for that one file, which reads the message we put into GitHub Desktop: Updated Readme for YouTube!

Check out the full recording below:

Let me know if you found this post helpful! And if you haven’t yet, make sure to check out these free resources below:

Let’s digitize the world together! 🚀

— Brian


GitHub as an Organizational Tool

Our code lives inside GitHub today, and over the last six months we have come to organize ourselves around it.

This makes perfect sense for us: the closer the company is to our code, the easier communication and the identification of problems and opportunities become.


Code repository
We started using GitHub the way everyone does: to store our code.

Beyond that, we forked a few gems that we had to modify to suit our needs.

Those gems live in open repositories, while our own code remains closed.

All dev support happens through issues, which are divided into improvements, bugs, or no-code.

One GitHub feature that moved us to another level was issue templates. With them, whenever anyone in the company wants to open an issue, they already know the key points they need to fill in.
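A minimal issue-template sketch (the file name, fields, and labels here are illustrative, not the company's actual template):

```markdown
<!-- .github/ISSUE_TEMPLATE/bug_report.md -->
---
name: Bug report
about: Report a problem in the product
labels: bug
---

**What happened?**

**Steps to reproduce**

**Expected behavior**
```

GitHub picks up markdown files placed in .github/ISSUE_TEMPLATE/ and offers them as choices whenever someone opens a new issue.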

The nice thing is that after a while the whole company switched to issues. When a new person joins, for example, an onboarding issue is created.

That way everyone can see what tasks each team has, increasing visibility.

We abandoned Trello and migrated all of our boards to Projects.

Issues gave visibility into which tasks existed for each team. With Projects, we know which stage of our prioritization process each issue is in.

This spares us meetings spent asking where each thing stands, and lets us use meetings to focus on how we can help each other move issues forward.

We also moved our whole CI pipeline into GitHub.

That way it is integrated into our process, triggered both when we add a specific label and on commits.

We also generate weekly release reports to tell the company everything that went into the product that week.

Anyone on the team can see, for example, which deploys worked or broke, which tests failed, and so on.

Pull Request
The heart of a dev's work. Every pull request has to be linked to one or more issues that it will close or advance.

When a pull request is marked ready for review, GitHub itself assigns it to another dev to review, not necessarily from the same squad.

That way, besides improving the quality of what is delivered, since a change has to be simple enough for anyone to understand, we guarantee that the whole team has some knowledge of every part of our product.

Often the best code decisions come from people who are further away from the problem, because they tend to question the decisions that were made.

GitHub CLI
Unfortunately, GitHub's built-in insights are quite poor. We solved this by adding the necessary labels to issues and writing a bit of code to fetch that information through the GitHub API.

This way, we can extract metrics such as team velocity, number of bugs, etc.
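The counting itself is trivial once the labeled issues are fetched; here is a sketch that parses a canned sample instead of calling the GitHub API (real usage would make an authenticated request to the repository's issues endpoint):

```shell
# A canned, heavily trimmed sample of what a labels=bug query against
# the GitHub issues API might return.
cat > issues.json <<'EOF'
[
  {"number": 1, "title": "Crash on login", "labels": [{"name": "bug"}]},
  {"number": 2, "title": "Wrong totals", "labels": [{"name": "bug"}]}
]
EOF

# Count entries carrying the "bug" label (one issue per line above).
bug_count=$(grep -c '"name": "bug"' issues.json)
echo "bugs: $bug_count"
```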

And so we can tell whether the organizational changes we make are having positive or negative effects.

Discussions
This tool has just left GitHub's beta phase.

It is essentially GitHub's internal forum, and we use it to record every discussion we have internally.

When something starts to drag on in Discord, we migrate it there.

If only so that new people can easily understand the context of why things are the way they are today.

We also record summaries of books we enjoyed, and how we solved our customers' more serious problems.

Wiki
This is where all of our documentation lives: from how to use the product and how to answer certain customer questions, to our programming best practices.

We also use our wiki for our onboarding process. We have documented in it (and I like to believe that each person who joins improves this process) what a new person is expected to know each week, up to the point where they can carry out their first activities on their own.

Who knows, maybe in the future we will even adopt the code editor GitHub has just launched?

What I can say is that with each passing day, I feel less need for any team, product, or code management tools beyond GitHub itself.


Multiple GitHub accounts on one laptop

Imagine the following situation:

  • You have only one laptop
  • You have your personal GitHub account
  • Your employer stores code on GitHub as well
  • You need to commit your personal code to your personal repositories, but also work code to your employer's repositories.
  • You can't do this from your personal account, but you do not want to create an additional one (with your corporate email).

What to do in this situation? There is a way to configure your laptop so that you commit to work repositories with work credentials, and to personal repositories with personal credentials.

This solution is based on 2 aspects:

  • correcting SSH config
  • git URL re-writing

The main advantage of this approach is that it doesn’t require any extra work once set up. You will not need to change remote URLs or remember to clone things differently; the second part (the URL rewriting) takes care of it.

First of all, let’s set up our SSH config. Assume you have 2 SSH keys: a personal one (github_personal) and a work one (github_work). You can read how to create SSH keys in the GitHub docs.
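Key generation might look like the following sketch; the key file names match the config below, while the comments and target directory are illustrative (in real use you would write into ~/.ssh):

```shell
keydir=$(mktemp -d)   # stand-in for ~/.ssh

# Two separate ed25519 key pairs, one per GitHub identity.
ssh-keygen -q -t ed25519 -N "" -f "$keydir/github_personal" -C "personal account"
ssh-keygen -q -t ed25519 -N "" -f "$keydir/github_work" -C "work account"

ls -1 "$keydir"   # github_personal(.pub), github_work(.pub)
```

Remember to upload each public key to the matching GitHub account.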


# Personal GitHub
Host github.com
  User git
  IdentityFile ~/.ssh/github_personal

# Work GitHub
Host github-work
  HostName github.com
  User git
  IdentityFile ~/.ssh/github_work

Host *
  AddKeysToAgent yes

Both of these host entries use the same user and ultimately the same domain (github.com); we will take care of telling git which one to use later. Next: the global git config.


Here we need to add our default name and email (the ones we used when creating the SSH key for our personal account):

[user]
    name = My Name
    email = my.name@personal-email.com

[includeIf "gitdir:~/path/work_dir/"]
    path = ~/path/work_dir/.gitconfig

[url "github-work:work-github-org-name/"]
    insteadOf = git@github.com:work-github-org-name/

What happens here? First, we set our default name and email. Second, we point git at a local .gitconfig file for all repositories located under ~/path/work_dir/. And last, we replace git@github.com (the default host for GitHub) with the github-work profile we set in .ssh/config.

The last part is modification of local .gitconfig for all our working repositories:

It is easy – just override the email with your corporate one:

[user]
    email = my.name@work-email.com

That is all! As long as you keep all your work repos under ~/path/work_dir/ and personal stuff elsewhere, git will use the correct SSH key when pulling, cloning, and pushing, and it will also attach the correct email address to all of your commits.

How to check? Clone a repository via SSH, cd into that folder, and execute git config --get user.email.
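The conditional-include mechanism can also be verified end to end in a sandbox; everything here (the temporary HOME, the example emails, the directory names) is made up for the demonstration, and absolute paths are used instead of ~ so the sandbox works anywhere:

```shell
sandbox=$(mktemp -d)
export HOME="$sandbox"    # isolate git from your real config
mkdir -p "$HOME/work_dir/repo" "$HOME/personal/repo"

# Global config: default identity plus the conditional include.
cat > "$HOME/.gitconfig" <<EOF
[user]
    name = My Name
    email = personal@example.com
[includeIf "gitdir:$HOME/work_dir/"]
    path = $HOME/work_dir/.gitconfig
EOF

# Work-only override.
cat > "$HOME/work_dir/.gitconfig" <<EOF
[user]
    email = work@example.com
EOF

git -C "$HOME/work_dir/repo" init -q
git -C "$HOME/personal/repo" init -q

git -C "$HOME/work_dir/repo" config --get user.email   # work@example.com
git -C "$HOME/personal/repo" config --get user.email   # personal@example.com
```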


Building GitHub Apps with Golang

If you’re using GitHub as your version control system of choice then GitHub Apps can be incredibly useful for many tasks including building CI/CD, managing repositories, querying statistical data and much more. In this article we will walk through the process of building such an app in Go including setting up the GitHub integration, authenticating with GitHub, listening to webhooks, querying GitHub API and more.

TL;DR: All the code used in this article is available in the accompanying repository.

Choosing Integration Type

Before we jump into building the app, we first need to decide which type of integration we want to use. GitHub provides 3 options – Personal Access Tokens, GitHub Apps and OAuth Apps. Each of these has its pros and cons, so here are some basic things to consider:

  • Personal Access Token is the simplest form of authentication and is suitable if you only need to authenticate with GitHub as yourself. If you need to act on behalf of other users, then this won’t be good enough
  • GitHub Apps are the preferred way of developing GitHub integrations. They can be installed by individual users as well as whole organizations. They can listen to events from GitHub via webhooks as well as access the API when needed. They’re quite powerful, but even if you request all the permissions available, you won’t be able to use them to perform all the actions that a user can.
  • OAuth Apps use OAuth2 to authenticate with GitHub on behalf of a user. This means that they can perform any action that the user can. This might seem like the best option, but the permissions don’t provide the same granularity as GitHub Apps, and OAuth Apps are also more difficult to set up because of OAuth.

If you’re not sure what to choose, take a look at the diagram in the GitHub docs, which might help you decide. In this article we will use a GitHub App, as it’s a very versatile integration and the best option for most use cases.

Setting Up

Before we start writing any code, we need to create and configure the GitHub App integration:

  1. As a prerequisite, we need a tunnel which we will use to deliver GitHub webhooks from the internet to our locally running application. Install the localtunnel tool with npm install -g localtunnel and start forwarding to your localhost using lt --port 8080.

  2. Next we need to go to https://github.com/settings/apps/new to configure the integration. Fill in the fields as follows:

    • Homepage URL: Your localtunnel URL
    • Webhook URL: https://<LOCALTUNNEL_URL>/api/v1/github/payload
    • Webhook secret: any secret you want (and save it)
    • Repository Permissions: Contents, Metadata (Read-only)
    • Subscribe to events: Push, Release
  3. After creating the app, you will be presented with the settings page of the integration. Take note of App ID, generate a private key and download it.

  4. Next you will also need to install the app to use it with your GitHub account. Go to Install App tab and install it into your account.

  5. We also need the installation ID, which we can find by going to the Advanced tab and clicking on the latest delivery in the list. Take note of the installation ID from the request payload; it is located under "installation": { "id": <...> }.

If you got lost somewhere along the way, refer to the guide in the GitHub docs, which shows where to find each of these values.

With that done, we have the integration configured and all the important values saved. Before we start receiving events and making API requests we need to get the Go server up and running, so let’s start coding!

Building the App

To build the Go application, we will use the go-github-app template I prepared. This application is ready to be used as a GitHub App; all that’s missing are the couple of values we saved during setup in the previous section. The repository contains a convenience script which you can use to populate all the values:

git clone <repository URL> && cd go-github-app

The following sections will walk you through the code, but if you’re impatient, the app is good to go: you can use make build to build a binary of the application or make container to create a containerized version of it.

The first part of the code we need to tackle is authentication. It’s done using the ghinstallation package as follows:

func InitGitHubClient() {
    tr := http.DefaultTransport
    itr, err := ghinstallation.NewKeyFromFile(tr, 12345, 123456789, "/config/github-app.pem")

    if err != nil {
        // Abort start-up if the private key can't be loaded.
        log.Fatalf("failed to create GitHub client: %v", err)
    }

    config.Config.GitHubClient = github.NewClient(&http.Client{Transport: itr})
}

This function, which is invoked from main.go during Gin server start-up, takes App ID, Installation ID and private key to create a GitHub client which is then stored in global config in config.Config.GitHubClient. We will use this client to talk to the GitHub API later.

Along with the GitHub client, we also need to set up server routes so that we can receive payloads:

func main() {
    // ...
    v1 := r.Group("/api/v1")
    {
        v1.POST("/github/payload", webhooks.ConsumeEvent)
        v1.GET("/github/pullrequests/:owner/:repo", apis.GetPullRequests)
        v1.GET("/github/pullrequests/:owner/:repo/:page", apis.GetPullRequestsPaginated)
    }
    r.Run(fmt.Sprintf(":%v", config.Config.ServerPort))
}

First of these is the payload path at http://.../api/v1/github/payload which we used during GitHub integration setup. This path is associated with webhooks.ConsumeEvent function which will receive all the events from GitHub.

For security reasons, the first thing the webhooks.ConsumeEvent function does is verify request signature to make sure that GitHub is really the service that generated the event:

func VerifySignature(payload []byte, signature string) bool {
    key := hmac.New(sha256.New, []byte(config.Config.GitHubWebhookSecret))
    key.Write(payload) // feed the raw request body into the HMAC
    computedSignature := "sha256=" + hex.EncodeToString(key.Sum(nil))
    log.Printf("computed signature: %s", computedSignature)

    return computedSignature == signature
}

func ConsumeEvent(c *gin.Context) {
    payload, _ := ioutil.ReadAll(c.Request.Body)

    if !VerifySignature(payload, c.GetHeader("X-Hub-Signature-256")) {
        log.Println("signatures don't match")
        c.AbortWithStatus(http.StatusUnauthorized)
        return
    }
    // ...
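A quick way to sanity-check such signatures is to reproduce the digest from the shell; the secret and payload below are made-up values, not ones from the app:

```shell
secret='mysecret'
payload='{"zen":"Keep it logically awesome."}'

# GitHub sends the header as "sha256=<hex HMAC of the raw body>".
signature="sha256=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')"
echo "$signature"
```

Compare the printed value against the X-Hub-Signature-256 header of a recorded delivery to confirm your secret matches.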

The handler performs the verification by computing an HMAC digest of the payload, using the webhook secret as the key, which is then compared with the value in the X-Hub-Signature-256 header of the request. If the signatures match, then we can proceed to consuming the individual events:

func ConsumeEvent(c *gin.Context) {
    // ...
    event := c.GetHeader("X-GitHub-Event")

    for _, e := range Events {
        if string(e) == event {
            log.Printf("consuming event: %s", e)
            var p EventPayload
            json.Unmarshal(payload, &p)
            if err := Consumers[string(e)](p); err != nil {
                log.Printf("couldn't consume event %s, error: %+v", string(e), err)
                // We're responding to GitHub API, we really just want to say "OK" or "not OK"
                c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"reason": err})
                return
            }
            log.Printf("consumed event: %s", e)
            c.Status(http.StatusNoContent)
            return
        }
    }
    log.Printf("Unsupported event: %s", event)
    c.AbortWithStatusJSON(http.StatusNotImplemented, gin.H{"reason": "Unsupported event: " + event})
}

In the above snippet we extract the event type from X-GitHub-Event header and iterate through a list of events that our app supports. In this case those are:

type Event string

const (
    Install     Event = "installation"
    Ping        Event = "ping"
    Push        Event = "push"
    PullRequest Event = "pull_request"
)

var Events = []Event{Install, Ping, Push, PullRequest}

If the event name matches one of the options, we proceed with loading the JSON payload into an EventPayload struct, which is defined in cmd/app/webhook/models.go. It’s just a struct generated from a sample webhook payload, with unnecessary fields stripped.

That payload is then passed to the function that handles the respective event type, which is one of the following:

var Consumers = map[string]func(EventPayload) error{
    string(Install):     consumeInstallEvent,
    string(Ping):        consumePingEvent,
    string(Push):        consumePushEvent,
    string(PullRequest): consumePullRequestEvent,
}

For example, for the push event, one could do something like this:

func consumePushEvent(payload EventPayload) error {
    // Process event ...
    // Insert data into database ...
    log.Printf("Received push from %s, by user %s, on branch %s",
        payload.Repository.FullName, payload.Pusher.Name, payload.Ref)

    // Enumerating commits
    var commits []string
    for _, commit := range payload.Commits {
        commits = append(commits, commit.ID)
    }
    log.Printf("Pushed commits: %v", commits)

    return nil
}

In this case that means checking the receiving repository and branch, and enumerating the commits contained in this single push. This is the place where you could, for example, insert the data into a database or send a notification about the event.

Now we have the code ready, but how do we test it? We will use the tunnel which you should already have running, assuming you followed the steps in the previous sections.

Additionally, we need to spin up the server. You can do that by running make container to build the containerized application, followed by make run, which will start a container listening on port 8080.

Now you can simply push to one of your repositories and you should see a similar output in the server logs:

[GIN] 2022/01/02 - 14:44:10 | 204 |     696.813µs | | POST     "/api/v1/github/payload"
2022/01/02 14:44:10 Received push from MartinHeinz/some-repo, by user MartinHeinz, on branch refs/heads/master
2022/01/02 14:44:10 Pushed commits: [9024da76ec611e60a8dc833eaa6bca7b005bb029]
2022/01/02 14:44:10 consumed event: push

To avoid having to push dummy changes to repositories all the time, you can redeliver payloads from Advanced tab in your GitHub App configuration. On this tab you will find a list of previous requests, just choose one and hit the Redeliver button.

Making API Calls

GitHub Apps are centered around webhooks which you can subscribe and listen to, but you can also use any of the GitHub REST/GraphQL API endpoints, assuming you requested the necessary permissions. Using the API rather than push events is useful, for example, when creating files, analyzing bulk data, or querying data which cannot be received from webhooks.

To demonstrate how to do so, we will retrieve the pull requests of a specified repository:

func GetPullRequests(c *gin.Context) {
    owner := c.Param("owner")
    repo := c.Param("repo")
    if pullRequests, _, err := config.Config.GitHubClient.PullRequests.List(
        c, owner, repo, &github.PullRequestListOptions{
            State: "open",
        }); err != nil {
        // Report the API error back to the caller.
        c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"reason": err})
    } else {
        var pullRequestTitles []string
        for _, pr := range pullRequests {
            pullRequestTitles = append(pullRequestTitles, *pr.Title)
        }
        c.JSON(http.StatusOK, gin.H{
            "pull_requests": pullRequestTitles,
        })
    }
}

This function takes 2 arguments, owner and repo, which get passed to the PullRequests.List(...) function of the GitHub client instance. Along with those, we also provide a PullRequestListOptions struct to specify that we’re only interested in pull requests with state set to open. We then iterate over the returned PRs and accumulate their titles, which we return in the response.

The above function resides on .../api/v1/github/pullrequests/:owner/:repo path as specified in main.go so we can query it like so:

curl http://localhost:8080/api/v1/github/pullrequests/octocat/hello-world | jq .

It might not be ideal to query the API as shown above when we expect a lot of data to be returned. In those cases we can utilize paging to avoid hitting rate limits. A function called GetPullRequestsPaginated, which performs the same task as GetPullRequests with an additional page argument for specifying page size, can be found in cmd/app/apis/github.go.

Writing Tests

So far we’ve been testing the app with localtunnel, which is nice for quick ad-hoc tests against the live API, but it doesn’t replace proper unit tests. To write unit tests for this app, we need to mock out the API to avoid depending on the external service. To do so, we can use go-github-mock:

func TestGithubGetPullRequests(t *testing.T) {
    expectedTitles := []string{"PR number one", "PR number three"}
    closedPullRequestTitle := "PR number two"
    mockedHTTPClient := mock.NewMockedHTTPClient(
        mock.WithRequestMatch(
            mock.GetReposPullsByOwnerByRepo,
            []github.PullRequest{
                {State: github.String("open"), Title: &expectedTitles[0]},
                {State: github.String("closed"), Title: &closedPullRequestTitle},
                {State: github.String("open"), Title: &expectedTitles[1]},
            },
        ),
    )
    client := github.NewClient(mockedHTTPClient)
    config.Config.GitHubClient = client

    res := httptest.NewRecorder()
    ctx, _ := gin.CreateTestContext(res)
    ctx.Params = []gin.Param{
        {Key: "owner", Value: "octocat"},
        {Key: "repo", Value: "hello-world"},
    }

    apis.GetPullRequests(ctx)

    body, _ := ioutil.ReadAll(res.Body)

    assert.Equal(t, 200, res.Code)
    assert.Contains(t, string(body), expectedTitles[0])
    assert.NotContains(t, string(body), closedPullRequestTitle)
    assert.Contains(t, string(body), expectedTitles[1])
}

This test starts by defining a mock client which is used in place of the normal GitHub client. We give it a list of pull requests to return when PullRequests.List is called. We then create a test context with the arguments that we want to pass to the function under test, and we invoke the function. Finally, we read the response body and assert that only PRs with open state were returned.

For more tests, see the full source code which includes examples of tests for pagination as well as handling of errors coming from GitHub API.

When it comes to testing our webhook methods, we don’t need a mock client, because we’re dealing with basic API requests. Examples of such tests, including a generic API testing setup, can be found in cmd/app/webhooks/github_test.go.


In this article I tried to give you a quick tour of both GitHub apps and the GitHub repository containing the sample Go GitHub project. In both cases, I didn’t cover everything: the Go client package has much more to offer, and to see all the actions you can perform with it, I recommend skimming through the docs index as well as looking at the source code itself, where the GitHub API links are listed alongside each function, such as the PullRequests.List method shown earlier.

As for the repository, there are a couple more things you might want to take a look at, including the Makefile targets, CI/CD, and additional tests. If you have any feedback or suggestions, feel free to create an issue, or just star the repository if it was helpful to you. 🙂


12 ways to get more GitHub stars for your open-source project

The steps that we took to grow from 0 to 4,500 stars on GitHub within a few months.

We launched ToolJet in June 2021, and since then we have received more than 4,500 stars for our repository. Here is a list of things that worked for us. This is not an article about how to simply get more stars for your repository; rather, it explains how to present your project well so that it is helpful to the open-source community. Some of these points have helped us get contributions from more developers; we now have contributions from more than 100 developers.

PS: The graph above was generated using an app built with ToolJet. You can use it to generate a star history chart for your own project.

1) Readme matters

The Readme is the first thing that a visitor to your repository sees. It should convey what your project does, how to install it, how to deploy it (if applicable), how to contribute, and how it works. Also, use badges that are helpful for developers; we used a badge service to add badges to our Readme.

Here is how our Readme looks:


Examples of projects with great Readme:

2) Documentation

We get more traffic to our documentation portal than to our main website. A well-documented project is always loved by the community. Open-source projects like Docusaurus make it super easy to build documentation portals that look great out of the box. Adding links back to the repository from the documentation can drive more visitors to your repository.

Here are some projects with great documentation:

3) Drive visitors from your website to GitHub


A lot of visitors checked out our repository after visiting our website first. Add banners, badges, etc. to your website so that visitors will check out your repository. To drive more visitors to your website, it helps to write blog posts about relevant topics.

4) Be active in developer communities

There are many Discord/Slack communities, forums, Reddit communities, etc., where developers usually hang out. Be active in these communities without making it look like self-promotion (which can get you banned, for obvious reasons). Try to add value by participating in relevant discussions. For example, if you are building a charting library and someone asks a question about plotting charts in React, you can pitch in to help.

Play nice. Do not try to link to your project if it does not add any value to the discussion.

5) Email campaigns

You might already have users signed up for your website. Add a link to your GitHub repository in the welcome email.

Do not spam people who haven’t signed up for your updates.

6) Trending repositories on GitHub


If you make it to the list of trending GitHub repositories, it can get your repository a lot more visibility. Whenever we made it to the trending list, we got more visitors to our repository and website. There are trending lists for specific languages too. Many Twitter bots and other tools notify developers whenever a new repository makes it to the trending list.

7) Ask for feedback from relevant communities


Communities such as Product Hunt, Hacker News, and Reddit may find your project useful. This can bring more visitors and stargazers to your repository.

Target only the relevant communities. If you think the majority of members won’t find your project interesting, it is not a relevant community. Spamming can cause more harm than good. Also, it’s just not nice.

8) Grow a community

Start a community on Discord or Slack for your users and contributors to hang out in. Communities are helpful when members are stuck on something or want to propose something new. If there is an active community, your future posts and announcements might get more reach. We created our community on Slack, since most developers have a Slack account. Do not use lesser-known platforms for building your community, as joining would take an additional step.

Play nice. Appreciate the work of others.

9) Add a public roadmap

A public roadmap helps your users and contributors understand where your project is headed. There are many tools for creating public roadmaps, but in most cases GitHub Projects will be more than enough for a simple yet effective public roadmap. We have created one using GitHub Projects.

10) Twitter

Being active on posts related to your project can create awareness, increase your number of Twitter followers, and drive more visitors to your repository. Make sure to link your repository from the project’s Twitter profile. Also, add a tweet button to your GitHub repository.

11) Respond to feedback

Open-source communities are usually very helpful and give a lot of feedback. Respond to all this feedback, as the person has taken their valuable time to help you improve your project. Positive feedback helps you stay motivated, while negative feedback helps you rethink.

Do not try to evade negative feedback; work on it if it aligns with your vision, otherwise politely explain why.

12) Add relevant labels for contributors

Adding labels such as “good first issue” and “up for grabs” can attract more contributors to your repository. There are many platforms that scan for issues tagged with such labels to help contributors discover new repositories and issues to contribute to. Make sure you respond to contributors quickly. Contributors can be experienced developers as well as developers early in their careers or students. Help first-time contributors onboard easily.

You landed on this article possibly because you have an interesting open-source project. I’d love to see your project; feel free to reach out on Twitter.

Hope this article was helpful for you. We would really appreciate it if you could take a moment to give us feedback on ToolJet.


Rare & useful Git commands summarized + solution to difficult scenarios while using Git

Git commands

  • git restore . – restores all files to the last commit / undoes all local changes that haven’t been committed.

  • git restore index.html – restores only that particular file to the last commit / undoes all local, uncommitted changes to that file.

  • git reset --hard <hash code of the commit> – removes commits and goes back to the commit with that hash code.

  • git restore --source <hash code> index.html – restores only that particular file to its state at the commit with that hash code (without removing any commits).

  • git commit --amend -m 'Your message' – helps rewrite the last commit message.

  • git revert <hash code> – helps roll back a previous commit by creating a new commit that undoes it. Doesn’t remove commits from the log like git reset does.

  • git reflog – useful to bring back deleted commits/files/changes. Use git reset <hash code of the lost commit from reflog> to bring back rolled-back changes.

  • git reset HEAD~2 – rolls back by 2 commits and unstages all the changes in those 2 removed commits.

  • git reset HEAD~2 --hard – rolls back by 2 commits and discards all the changes in them.

  • git rebase (most useful command) – reapplies commits on top of another base tip, e.g. git rebase master replays your current branch’s commits on top of the master branch.
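As a concrete illustration of git reflog rescuing a commit that git reset --hard seemingly destroyed, here is a runnable sketch in a throwaway repository (the file names, messages, and demo identity are made up):

```shell
# Create a scratch repo with two commits, "lose" one, then recover it.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one > f.txt && git add f.txt && git commit -q -m "keep me"
echo two >> f.txt && git commit -q -am "accidentally discarded"

git reset -q --hard HEAD~1        # the second commit vanishes from `git log`
lost=$(git rev-parse 'HEAD@{1}')  # ...but the reflog still remembers where HEAD was
git reset -q --hard "$lost"       # bring the rolled-back commit right back
```

Everything is local and disposable, so it is safe to experiment with.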

Moving committed changes to a new branch (scenario: you accidentally worked on master):

  • Use git checkout -b new-feature (the new branch keeps your commit)
  • Then roll back the commit on master using git reset HEAD~1 --hard (this command rolls back 1 commit)
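The two steps above can be sketched end-to-end in a throwaway repository (branch and file names are examples for the demo):

```shell
# Scenario: a commit accidentally landed on master; move it to a new branch.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the branch name for the demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"

echo "feature work" > feature.txt
git add feature.txt
git commit -q -m "oops, committed on master"

git checkout -q -b new-feature   # the new branch keeps the commit
git checkout -q master
git reset -q --hard HEAD~1       # master drops the accidental commit
```

Afterwards master is back to its base state while new-feature carries the work.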


  • Use git stash when you want to record the current state of the working directory and the index, but want to go back to a clean working directory. The command saves your local modifications away and reverts the working directory to match the HEAD commit.
  • The modifications stashed away by this command can be listed with git stash list, inspected with git stash show, and restored (potentially on top of a different commit) with git stash apply. Calling git stash without any arguments is equivalent to git stash push.

  • git stash

    • saves your changes away in a separate area of the project and moves you back to a clean copy of the last commit
    • in other words, saves the changes as a draft and returns to the code of the last commit
  • git stash push -m "Message" – adds a message for the stash in the stash list

  • git stash list – lists all the stashed drafts

    Tip: the stash list stores all the stashes, and each stashed feature/change has a unique index. The most recently added stash always appears at the top with index 0.

  • git stash apply – applies the last stashed draft to our current working directory

  • git stash apply <index number> – applies the particular indexed stash to our current working directory

  • git stash drop <index number> – drops the stash out of the stash list with the particular index

  • git stash pop – pops the last stashed draft back into the working directory/working branch; that draft is then removed from the stash list

  • git stash pop <index number> – pops the draft with that particular index back into the working directory/working branch; that draft is then removed from the stash list

  • git stash clear – clears/deletes all the stored drafts
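A minimal tour of the stash workflow described above, in a scratch repository (file names and the stash message are made up for the demo):

```shell
# Save a half-done change as a draft, inspect the list, then restore it.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt && git add app.txt && git commit -q -m "initial"

echo "half-done change" >> app.txt
git stash push -m "wip: half-done change"  # draft saved, working tree is clean again
git stash list                             # shows stash@{0} with our message
git stash pop -q                           # re-apply the draft and drop it from the list
```

After the pop, app.txt contains the in-progress line again and the stash list is empty.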

Moving committed changes to an already existing branch using cherry-pick:

  • git checkout feature-branch
  • git cherry-pick <hash code of that commit on master>
  • git checkout master
  • git reset HEAD~1 --hard (rolls back 1 commit)

Squashing commits:

  • git rebase -i <hash code of the commit above which all the commits need to be squashed>
    • -i stands for interactive
    • opens the rebase todo list in your editor (vim by default), where you can pick or squash commits and update commit messages
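Since git rebase -i normally opens an editor, here is a scripted sketch that squashes the last two of three commits by rewriting the rebase todo list non-interactively (assumes GNU sed; file names and messages are made up):

```shell
# Make three commits, then squash them into one via an interactive rebase.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3; do
  echo "$i" >> log.txt
  git add log.txt
  git commit -q -m "commit $i"
done

# Turn every "pick" after the first into "squash" in the todo list, and use
# `true` as the editor so the combined commit message is kept as prefilled.
GIT_SEQUENCE_EDITOR='sed -i -e "2,\$s/^pick/squash/"' \
  git -c core.editor=true rebase -i --root
```

The history collapses to a single commit while log.txt keeps all three lines.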


How to Commit like a Boss

What’s Committing anyways?

Committing here refers to staging and recording the changes you make on your local machine, and then pushing them to a branch where your teammates can keep themselves up to date with what you’ve done in the project.

Why learn to commit properly? Can’t you just commit right away?

No. Speaking for myself, I generally work on different projects almost daily, which means my programming environment keeps changing.
What I mean is: suppose you’re working on a project, commit some changes with a message that isn’t very descriptive, and then switch to other work.

When you return to the older project, you might need to re-familiarize yourself with what you did last time and what changes your teammates (if you have any :P) have already made before you can actually start working.

This leads to confusion and works against the DRY principle.

The right way to commit/communicate

Commenting as you work

When you’re working on a project, make sure you add comments explaining what your lines of code do. This slowly but surely increases your team’s productivity when they return to the work, as it reminds the programmer of things rather than forcing them to figure everything out again.

Writing less but saying more
Writing a message with your commit gives an idea of what the commit does and contributes. So a well-written message counts +1 towards productivity.

What not to do

Don’t commit after every change

It’s recommended that you commit once you’re sure of your work. Completing a section of work and then committing, in order, always pays off.

Don’t write weird and non-sense messages

A small yet informative commit message is all that’s necessary. It doesn’t have to be a whole novel.

git commit -m "today i changed some theme colors and added few of mine because your colors suck I fkin hated them so i got rid of them. Apart from this I also added a Palette section it's was a hell lot of work i better get a raise :P"


git commit -m "added Palette section & few theme changes"

Hope you find this informative.


Track Multiple CI/CD Builds Using Meercode

Building software products can be challenging. There is often more than one tool in play, not to mention the usernames, passwords, and authorization issues for each of them. So is there a way to monitor all these processes from a single screen? Today we’ll be talking about a tool that can solve your build problems by providing a single entry point for a 360-degree view into your build, integration, and deployment processes.

Meercode is a unique tool that allows you to monitor and manage your builds from a single dashboard. Your product might have more than one CI/CD process, and these processes can live across different services such as GitHub or Azure DevOps. No matter the provider, Meercode lets you visualize your running and completed workflows on a clean and beautiful UI.

Build Monitors

Meercode comes with build monitors. This allows organizations visibility across all the builds and their statuses. Build monitors are an essential part of any CI setup. By using build monitors, any team member can instantly know the status of the builds while doing their work.

Integrate with your favorite tool

Big projects are built across many tools. Meercode has support for a vast number of tools to help make your monitoring easier.

Across every provider, the process remains simple: sign up, integrate, and monitor builds.


GitHub

This is probably the most popular CI/CD platform. GitHub can integrate and build code and make deployments. Meercode can help you monitor these processes with a single click.

GitLab CI

GitLab CI is a popular tool to build and deploy. Meercode offers out-of-the-box support for integration with GitLab CI. Just sign up and you are ready to go.

Travis CI

Meercode can connect with Travis CI to monitor builds.

Azure DevOps

Meercode has full support for Azure DevOps builds.


Vercel

No configuration is needed for Vercel. Just sign up and start building.


Bitrise

Ever wanted to monitor multiple Bitrise workflows on a single dashboard? Meercode lets you visualize your running and completed workflows on a clean and beautiful UI.


Buddy

Meercode connects seamlessly with Buddy. Integrate with Buddy to monitor your builds from a single monitor.


Jenkins

Jenkins is one of the most popular CI/CD tools. Jenkins support is not available yet, but it is coming soon to make sure you are fully covered.

Beautiful UI

Meercode is designed to help you through each stage of CI/CD, so the UI is also built with user experience in mind. Meercode comes with a fresh and unique card design for your workflows.

Clarity at each step

No need to go through long, complicated log files to track down a simple error. Meercode is built with clarity in mind: via beautiful and comprehensive charts, you can easily identify gaps, errors, and bottlenecks!

Meercode highlights your currently running builds at the top and shows them separately. Relax as your builds transform into colourful bars.


Security

So does Meercode satisfy your security requirements? Security can make or break your product. You do not want another tool that asks for excessive permissions or requires you to submit your source code. Meercode is designed to ensure that your safety and privacy are not compromised. To achieve this, Meercode comes with the features below:

Keep your source code private.

Source code is the heart of your product. Leaving it exposed can be a disaster for any company. Meercode understands this, hence it does not ask you for your source code. This means your code and product details are kept to yourself, and you can safely manage CI/CD from Meercode.

Token-based Authorization

Meercode does not perform any action by itself. Whether it’s building, integrating, or deploying a pipeline, Meercode requires that you authorize the operation using secure token-based authorization. As a result, users can only monitor resources they can access on the service side. This also means that if a user does not have access to a repository on, say, GitHub, then the corresponding actions are not shown to that user. Hence you do not need to worry about unauthorized operations.

No excessive permissions

So does Meercode need any permissions? Yes, Meercode needs read/write permissions, but it is designed to require minimal access to your CI/CD resources, and it never accesses your source code. Read/write access is needed only to enable the one-click functionality to cancel or re-run actions; that is the only reason Meercode needs this permission.


Pricing

So is Meercode affordable for your business? Meercode comes in three pricing models, which means it is suitable not just for small businesses but also for start-ups and enterprises. The available plans and their feature sets are summarized below:

Feature | Free | Team | Pro
--- | --- | --- | ---
Private Repository Limit | 1 | 15 | Unlimited
Public Repository Limit | Unlimited | Unlimited | Unlimited
Public Sharable Links | No | Yes | Yes
Custom Dashboards | No | Yes (5) | Yes (unlimited)
Dashboard Refresh Interval | 40 secs | 10 secs | 5 secs
Custom Domain | Coming soon | Coming soon | Coming soon
Overtime Build Notifications | Coming soon | Coming soon | Coming soon
Trial | Free forever | 14-day free trial | 14-day free trial
Price | $0 | $29 per seat / month | $59 per seat / month


Gone are the days when your DevOps team needed to switch between multiple screens to monitor processes. No more sharing different URLs to see how your build processes are doing. Meercode is here to meet your needs for process monitoring and optimization.

Want complete visibility into your product? In your next project, make sure to give Meercode a try.


Some GitHub Terms You Should Know!

GitHub is one of the most used hosting platforms for version control and collaboration. You must have heard terms like “repo” and “PR” in your coding career, but what do they actually mean? Here are a few GitHub terms you must know!

Repository (Repo)

It is a directory that stores all of the files and folders you used to build the project, and it also stores the changes made to the project.


Commit

A commit is a change that you bring to your program; it can be adding, removing, or modifying code or files in your project.

Local and Remote

Your project will have two independent repos: the one that lives offline is called local, and the one hosted online on a platform like GitHub or GitLab is called remote.

Pull, Push or Fetch

To synchronize your project between local and remote, we use these three operations.

Pull – pull changes from remote to local

Push – push changes from local to remote

Fetch – only downloads new data but doesn’t integrate it into your local working copy
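The three operations above can be tried locally: a bare repository can stand in for a GitHub remote, so no account is needed for this sketch (paths and names are made up for the demo):

```shell
# Demo: a local bare repo acts as the "remote"; a clone of it is the "local".
set -e
base=$(mktemp -d)
git init -q --bare "$base/remote.git"           # the "remote" repository
git clone -q "$base/remote.git" "$base/local"   # the "local" repository
cd "$base/local"
git config user.email demo@example.com
git config user.name demo

echo hello > readme.txt
git add readme.txt
git commit -q -m "first commit"

branch=$(git symbolic-ref --short HEAD)
git push -q -u origin "$branch"   # Push: upload local commits to the remote
git fetch -q origin               # Fetch: download new data, don't touch your files
git pull -q origin "$branch"      # Pull: fetch + merge remote changes into your branch
```

After the push, the bare "remote" contains the commit exactly as GitHub would.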


Branches

Branches basically divert you from the mainline of development, so that you can fix a bug or build a new feature and then merge it back without messing up the main code.

Pull Request

It is simply a way of telling people that you want the changes you made in your branch to be included in the main code.
