SQL vs NoSQL Database – A Complete Comparison

There are two main categories of database in use in the development world today, commonly referred to as SQL and NoSQL. In this article, we will compare SQL and NoSQL databases based on their pros and cons.

SQL, or Structured Query Language, is the universally known query language for relational databases. SQL databases make it simpler to work with structured data in a database through CRUD operations. CRUD stands for create, retrieve (or read), update, and delete – the primary operations for manipulating data.
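For illustration, each CRUD operation maps onto a single SQL statement. Here is a minimal sketch from Node.js using the mysql2 driver; the users table, its columns, and the connection settings are hypothetical, not part of any particular product:

// Hypothetical table and connection; each statement is one CRUD operation.
const mysql = require('mysql2/promise');

async function crudDemo() {
  const conn = await mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });
  await conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ["Ada", "ada@example.com"]); // Create
  const [rows] = await conn.execute("SELECT name, email FROM users WHERE name = ?", ["Ada"]);      // Retrieve (read)
  console.log(rows);
  await conn.execute("UPDATE users SET email = ? WHERE name = ?", ["ada@new.example", "Ada"]);     // Update
  await conn.execute("DELETE FROM users WHERE name = ?", ["Ada"]);                                 // Delete
  await conn.end();
}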

SQL databases are commonly referred to as relational database management systems (RDBMS). Traditional RDBMSs use SQL syntax and store data in row-based table structures that connect related data objects across tables. Examples of SQL-based RDBMSs include Backendless, Microsoft Access, MySQL, Microsoft SQL Server, SQLite, Oracle Database, IBM DB2, etc.

NoSQL databases, on the other hand, are databases that do not rely on fixed, structured tables to hold data. Technically, all non-relational databases can be called NoSQL databases. Because a NoSQL database is not a relational database, it can be set up very quickly and with minimal pre-planning. Examples of NoSQL databases include MongoDB, DynamoDB, SimpleDB, CouchDB, Couchbase, OrientDB, Infinite Graph, Neo4j, FlockDB, Cassandra, HBase, etc.

As of May 2021, five of the top six database systems according to the DB-Engines ranking are relational databases, including the top four – Oracle, MySQL, Microsoft SQL Server, and PostgreSQL.

DB-Engines May 2021 Ranking

In this article, we will take a deep dive into the pros and cons of SQL and NoSQL databases.

Contents

1. SQL Pros

2. SQL Cons

3. NoSQL Pros

4. NoSQL Cons

5. Conclusion



SQL pros

Broadly speaking, SQL databases require more advance preparation and planning of the relational model, but the benefit is that your data will be consistent and clean. The relational model represents how data is stored in the database, such as how each table is structured and related to other tables.



Standardized schema

Although SQL databases with a standardized schema are typically more rigid and difficult to modify, they still have many benefits. Every data object added to the database must conform to the recognized schema of linked tables (comprising rows and columns). While some could find this restrictive, it is essential for data compliance, integrity, consistency, and security.



A large number of users

SQL is an established programming language that is very widely used. It has a large user community comprising countless experts well versed in established best practices. Developing a strong working knowledge of SQL can give application developers numerous opportunities to consult, collaborate, and sharpen their skills.



ACID compliance

SQL databases are ACID compliant (described in detail below) thanks to how relational database tables are precisely structured. This helps ensure that tables stay in sync and transactions are valid. It is the best choice when running applications with no room for error. An SQL database supports a high level of data integrity.

SQL database ACID

ACID properties:

  • Atomicity: All data and transactional changes are completely executed as a single operation. No changes are performed if that is not possible.
  • Consistency: Data must be consistent and valid at the beginning and completion of a transaction.
  • Isolation: Transactions do not interfere with one another; they behave as though they were executed consecutively.
  • Durability: Once a transaction is complete, its changes are permanently recorded and survive even a system failure.

Let’s look at an inventory management system as an example. For such a system, it is important that items be removed from inventory as soon as they are purchased, to prevent overstock or understock issues. When an order is placed, the inventory can be updated, a new shipment data object can be created, payment information can be updated, and the customer information can be updated. All of these related tables must be updated in unison for the transaction to complete, as sketched below.
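A minimal sketch of what such an all-or-nothing update could look like in Node.js with the mysql2 driver; the table and column names are hypothetical, not a prescribed schema:

const mysql = require('mysql2/promise');

async function placeOrder(conn, productId, qty, customerId) {
  await conn.beginTransaction();
  try {
    // All four changes commit together, or none of them do (atomicity).
    await conn.execute("UPDATE inventory SET stock = stock - ? WHERE product_id = ?", [qty, productId]);
    await conn.execute("INSERT INTO shipments (product_id, qty, customer_id) VALUES (?, ?, ?)", [productId, qty, customerId]);
    await conn.execute("INSERT INTO payments (customer_id, status) VALUES (?, 'pending')", [customerId]);
    await conn.execute("UPDATE customers SET last_order_at = NOW() WHERE id = ?", [customerId]);
    await conn.commit();
  } catch (err) {
    await conn.rollback(); // undo every change if any step fails
    throw err;
  }
}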

Learn more about relational database management with Backendless using our Shipping and Tracking app blueprint.



Requires little to no code

SQL is a developer-friendly language. Its syntax reads like plain English, making it easy to learn to manage and query any relational database using only simple keywords, with little to no traditional coding.

Backendless Database queries, for example, can be written using SQL. Additionally, SQL terminology is used to craft precise API calls to access and modify data. Using Database Views, you can create these queries visually, making it even easier for those without a background in writing SQL queries.



SQL cons



Hardware

SQL databases have historically required that you scale up vertically. This meant you could only expand capacity by increasing capabilities, such as CPU, SSD, and RAM, on the existing server or by purchasing a larger, costlier one.

As your data continues to grow, you’ll invariably need more hard drive space and faster, more efficient machines to run newer and more advanced technologies. As a result, hardware can quickly become obsolete.

Modern SQL databases may use a process called sharding. Sharding allows for horizontal scaling by separating, or partitioning, data among multiple data tables with identical schemas. Rather than storing 100,000 objects in one table, for example, sharding creates two tables with identical schemas that each store 50,000 objects, with no duplication between the tables.
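As a toy illustration of the idea (not any particular database's built-in sharding), a hash of a record's key can decide which of the identical tables an object lands in; the key and table names are hypothetical:

const crypto = require('crypto');

// Hash the record's key and map it onto one of N shards.
function shardFor(key, shardCount) {
  const digest = crypto.createHash('md5').update(String(key)).digest();
  return digest.readUInt32BE(0) % shardCount;
}

const table = `objects_shard_${shardFor('order-100042', 2)}`; // "objects_shard_0" or "objects_shard_1"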

Of course, utilizing a serverless hosting service such as Backendless can alleviate the scaling concern. The Backendless system is designed to manage scaling automatically for you, so that you don’t have to worry about physical server management while achieving database efficiency at scale.



Rigidity

A traditional relational model, or schema, of a SQL database has to be defined before use. Once this is done, the schema becomes inflexible, and any adjustment can be resource-intensive and difficult. Because of this, significant time should be invested in planning before putting the database into production.

With Backendless, however, developers can always modify schema even after their app is launched. New tables and columns can be added, relations established, etc., providing greater flexibility than a traditional SQL database. This makes the Backendless system well suited for early product development as you are not locked into a schema at the beginning of the development process.



Data Normalization

The goal behind the development of relational databases is to eliminate data duplication. Each table holds a distinct set of information that can be queried and connected to other tables using common values. But when SQL databases become large, the joins and lookups needed between several tables can slow things down considerably.

To put it more simply, relational databases commonly store related data in different tables. The more tables that hold data needed for a single query, the more processing power is required to complete that query without the system slowing down significantly.
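For example, answering one question about recent orders in a normalized schema can require joining several tables; the schema below is hypothetical:

// One query touching four tables; each extra JOIN adds lookup work
// that grows with the size of the tables involved.
const sql = `
  SELECT o.id, c.name, p.title, i.qty
  FROM orders o
  JOIN customers   c ON c.id = o.customer_id
  JOIN order_items i ON i.order_id = o.id
  JOIN products    p ON p.id = i.product_id
  WHERE o.placed_at > NOW() - INTERVAL 30 DAY`;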



Traditionally resource-intensive upgrade and scaling

As previously mentioned, vertically scaling up a SQL database means expanding your hardware investment, which is costly and time-consuming to do on your own. Some organizations instead try to scale out horizontally through partitioning. However, this added complexity increases the time and resources expended; it will likely involve custom coding and require highly skilled, well-paid developers.

Systems like Backendless, however, are designed to manage the scaling process for you automatically. This is often referred to as infrastructure as a service, or IaaS, and is far less expensive than managing infrastructure yourself. IaaS providers handle the difficult tasks of server maintenance and resource allocation for you so that you can focus on building a great product without worrying about what will happen when your database grows.



NoSQL pros



Query speed

Data in a NoSQL database is typically denormalized. With no fear of data duplication, all the information needed for a specific query is often stored together, so joins are not required. As a result, lookups are easier when dealing with large volumes of data, and NoSQL is very fast for simple queries.
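For instance, a denormalized order document can embed everything a query needs, so one lookup answers the whole question; every field name here is hypothetical:

// Customer, items, and shipping data live inside the order itself,
// so no joins across separate tables are required to read it.
const order = {
  _id: "order-100042",
  customer: { name: "Ada Lovelace", email: "ada@example.com" },
  items: [{ sku: "KB-01", title: "Keyboard", qty: 1, price: 49.99 }],
  shipping: { carrier: "UPS", status: "in_transit" }
};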



Continuous availability

In a NoSQL database, data is distributed across different regions and multiple servers, meaning there is no single point of failure. This makes NoSQL databases more resilient and stable, with minimal downtime and continuous availability.
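For example, a MongoDB connection string can list several replica-set members, so the client can fail over when one server is unreachable; the hostnames here are hypothetical:

// The driver will use whichever listed members are reachable.
const uri = "mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017/app?replicaSet=rs0";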



Agility

NoSQL databases give developers the flexibility to improve their productivity and creativity. Developers are not bound by rows and columns, and schemas do not have to be predefined. NoSQL databases are dynamic enough to handle all data types, including polymorphic, semi-structured, structured, and unstructured data.

SQL vs NoSQL database flexibility (image source)

Application developers can simply start building a database without needing to spend time and effort on upfront planning. This allows for quick modifications when requirements change or a new data type needs to be added. Such flexibility makes NoSQL a perfect fit for companies with varying data types and constantly changing features.



Low-cost scaling

It is cost-effective to expand capacity because a NoSQL database scales horizontally. Instead of upgrading costly hardware, you can expand cheaply by simply adding cloud instances or commodity servers. In addition, many NoSQL databases are open source, offering cheap options for many companies.



NoSQL cons



No standardized language

There is no fixed language for conducting NoSQL queries; the syntax used to query data varies across NoSQL database types. Unlike SQL, where there is only one language to learn, NoSQL has a higher learning curve. Similarly, it can be more difficult to find experienced developers with knowledge of the particular NoSQL system you have implemented, so it is more likely that you will need to train new hires, increasing onboarding time and cost.
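To illustrate, the same lookup reads quite differently across systems. A sketch comparing MongoDB's document query syntax with its SQL equivalent; the collection, table, and field names are hypothetical:

// MongoDB query language (mongo shell / Node.js driver style):
db.users.find({ age: { $gte: 21 } });

// The SQL equivalent, for comparison:
// SELECT * FROM users WHERE age >= 21;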



Inefficiency in conducting complex queries

Querying isn’t very efficient because of the many different data structures present in NoSQL databases. There is no standard interface for performing complex queries, and even conducting simple NoSQL queries might require programming skills due to the structure of your data. As a result, costlier and more technical staff might be needed to perform the queries. This is one of the major NoSQL limitations, particularly for less technical (i.e., no-code) developers.



A smaller number of users

Developers are adopting NoSQL databases more and more, and the community is growing quickly. However, it is still not as mature as the SQL community, and with fewer experts and consultants available, it can be more difficult to solve undocumented issues.



Inconsistency in data retrieval

Data is quickly available thanks to the distributed nature of the database. However, it can also be harder to ensure that the data is always consistent: queries might not return updated data or accurate information. The distributed approach makes it possible for the database to return different values on consecutive reads, depending on which server is queried.

This is a major reason why many NoSQL databases are not ACID compliant. The "C", Consistency, implies that data must be consistent and valid at the beginning and completion of a transaction. Instead, many NoSQL databases are BASE compliant (Basically Available, Soft state, Eventually consistent), where the "E" signifies eventual consistency. NoSQL prioritizes availability and speed over consistency, and inconsistency in data retrieval is one of the major drawbacks of NoSQL databases.
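Many NoSQL systems do let you tune this trade-off per operation. A minimal sketch with the official MongoDB Node.js driver, assuming a hypothetical orders collection; "majority" waits for replication at the cost of speed:

const { MongoClient } = require("mongodb");

async function run() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const orders = client.db("shop").collection("orders", {
    readConcern: { level: "majority" },  // only read data committed to a majority of nodes
    writeConcern: { w: "majority" }      // wait until a majority of nodes persist the write
  });
  await orders.insertOne({ item: "keyboard", qty: 1 });
  console.log(await orders.findOne({ item: "keyboard" }));
  await client.close();
}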



Conclusion – Considering your options

Both SQL and NoSQL databases exist to meet specific needs. Depending on an organization's goals and data environment, their respective pros and cons can be amplified.

A common misconception is that it is bad to use both technologies together. As a matter of fact, you can use both together, such that each database type plays to its strengths. Many companies use both databases within their cloud architecture; some even use both within the same application.

In the end, it is all about weighing your options and going with the preferred choice that best suits your needs.

Thanks for reading, and Happy Codeless Coding!


Source link

Build a REST API with Golang and MongoDB – Fiber Version

Representational state transfer (REST) is an architectural pattern that guides the design and development of an application programming interface (API). REST APIs have become the standard for communication between the server side of a product and its clients, improving performance, scalability, simplicity, modifiability, visibility, portability, and reliability.

This post will discuss building a user management application with Golang using the Fiber framework and MongoDB. At the end of this tutorial, we will learn how to structure a Fiber application, build a REST API and persist our data using MongoDB.

Fiber is an Express-inspired HTTP web framework written in Golang with a focus on performance and zero memory allocation. Fiber is built on top of Fasthttp, an HTTP engine written in Golang.

MongoDB is a document-based database management program used as an alternative to relational databases. MongoDB supports working with large sets of distributed data with options to store or retrieve information seamlessly.

You can find the complete source code in this repository.



Prerequisites

The following steps in this post require Golang experience. Experience with MongoDB isn’t a requirement, but it’s nice to have.

We will also be needing the following:



Let’s code



Getting Started

To get started, we need to navigate to the desired directory and run the command below in our terminal:

mkdir fiber-mongo-api && cd fiber-mongo-api

This command creates a fiber-mongo-api folder and navigates into the project directory.

Next, we need to initialize a Go module to manage project dependencies by running the command below:

go mod init fiber-mongo-api

This command will create a go.mod file for tracking project dependencies.

We proceed to install the required dependencies with:

go get -u github.com/gofiber/fiber/v2 go.mongodb.org/mongo-driver/mongo github.com/joho/godotenv github.com/go-playground/validator/v10

github.com/gofiber/fiber/v2 is a framework for building web applications.

go.mongodb.org/mongo-driver/mongo is a driver for connecting to MongoDB.

github.com/joho/godotenv is a library for managing environment variables.

github.com/go-playground/validator/v10 is a library for validating structs and fields.

After installing the required dependencies, we might get a github.com/klauspost/compress is not in your go.mod file error in go.mod. To fix this, we need to manually install the required package with:

go get github.com/klauspost/compress



Application Entry Point

With the project dependencies installed, we need to create a main.go file in the root directory and add the snippet below:

The snippet above does the following:

  • Import the required dependencies.
  • Initialize a Fiber application using the New function.
  • Use the Get function to route to the / path with a handler function that returns a JSON response of Hello from Fiber & mongoDB. fiber.Map is a shortcut for map[string]interface{}, useful for JSON returns.
  • Set the application to listen on port 6000.

Next, we can test our application by starting the development server with the command below in our terminal.

go run main.go

Testing the app



Modularization in Golang

It is essential to have a good folder structure for our project. A good project structure simplifies how we work with dependencies in our application and makes it easier for us and others to read our codebase.
To do this, we need to create configs, controllers, models, responses, and routes folders in our project directory.

Updated project folder structure

PS: The go.sum file contains all the dependency checksums, and is managed by the go tools. We don’t have to worry about it.

configs is for modularizing project configuration files

controllers is for modularizing application logic.

models is for modularizing data and database logic.

responses is for modularizing files describing the response we want our API to give. This will become clearer later on.

routes is for modularizing URL patterns and handler information.



Setting up MongoDB

With that done, we need to log in to or sign up for our MongoDB Atlas account. Click the project dropdown menu and then click the New Project button.

New Project

Enter golang-api as the project name, click Next, and then click Create Project.

enter project name
Create Project

Click on Build a Database

Select Shared as the type of database.

Shared highlighted in red

Click on Create to set up a cluster. This might take some time.

Creating a cluster

Next, we need to create a user to access the database externally by entering a Username and Password and then clicking Create User. We also need to add our IP address so we can connect safely to the database by clicking the Add My Current IP Address button. Then click Finish and Close to save the changes.

Create user
Add IP

On saving the changes, we should see a Database Deployments screen, as shown below:

Database Screen



Connecting our application to MongoDB

With the configuration done, we need to connect our application to the database we created. To do this, click on the Connect button.

Connect to database

Click on Connect your application, change the Driver to Go and the Version as shown below, then click the copy icon to copy the connection string.

connect application
Copy connection string

Setup Environment Variable
Next, we must modify the copied connection string, filling in the password of the user we created earlier and changing the database name. To do this, first create a .env file in the root directory, and in this file add the snippet below:

MONGOURI=mongodb+srv://<YOUR USERNAME HERE>:<YOUR PASSWORD HERE>@cluster0.e5akf.mongodb.net/myFirstDatabase?retryWrites=true&w=majority

Sample of a properly filled connection string below:

MONGOURI=mongodb+srv://malomz:malomzPassword@cluster0.e5akf.mongodb.net/golangDB?retryWrites=true&w=majority

Updated folder structure with .env file

Load Environment Variable
With that done, we need to create a helper function to load the environment variables using the github.com/joho/godotenv library we installed earlier. To do this, navigate to the configs folder, create an env.go file in it, and add the snippet below:

The snippet above does the following:

  • Import the required dependencies.
  • Create an EnvMongoURI function that checks if the environment variable is correctly loaded and returns the environment variable.

Connecting to MongoDB
To connect to the MongoDB database from our application, we first need to navigate to the configs folder, create a setup.go file in it, and add the snippet below:

The snippet above does the following:

  • Import the required dependencies.
  • Create a ConnectDB function that first configures the client to use the correct URI and checks for errors. Secondly, we define a timeout of 10 seconds to use when trying to connect. Thirdly, we check whether there is an error while connecting to the database and cancel the connection if the attempt exceeds 10 seconds. Finally, we ping the database to test our connection and return the client instance.
  • Create a DB variable holding the instance returned by ConnectDB. This will come in handy when creating collections.
  • Create a GetCollection function to retrieve and create collections on the database.

Next, we need to connect to the database when our application starts up. To do this, we need to modify main.go as shown below:



Setup API Route Handler and Response Type

Route Handler
With that done, we need to create a user_route.go file inside the routes folder to manage all the user-related routes in our application, as shown below:

Next, we need to attach the newly created route to the Fiber app in main.go by modifying it as shown below:

Response Type
Next, we need to create a reusable struct to describe our API’s response. To do this, navigate to the responses folder and in this folder, create a user_response.go file and add the snippet below:

The snippet above creates a UserResponse struct with Status, Message, and Data property to represent the API response type.

PS: json:"status", json:"message", and json:"data" are known as struct tags. Struct tags allow us to attach meta-information to corresponding struct properties. In other words, we use them to reformat the JSON response returned by the API.



Finally, Creating REST API’s

Next, we need a model to represent our application data. To do this, we need to navigate to the models folder, and in this folder, create a user_model.go file and add the snippet below:

The snippet above does the following:

  • Import the required dependencies.
  • Create a User struct with the required properties. We added omitempty and validate:"required" to the struct tags to ignore empty fields when encoding and to make the fields required during validation, respectively.

Create a User Endpoint
With the model set up, we can now create a function to create a user. To do this, navigate to the controllers folder, create a user_controller.go file in it, and add the snippet below:

The snippet above does the following:

  • Import the required dependencies.
  • Create userCollection and validate variables to create a collection and validate models using the github.com/go-playground/validator/v10 library we installed earlier on, respectively.
  • Create a CreateUser function that returns an error. Inside the function, we first define a timeout of 10 seconds for inserting the user document, then validate both the request body and the required fields using the validator library, returning the appropriate message and status code via the UserResponse struct we created earlier. Secondly, we create a newUser variable, insert it using the userCollection.InsertOne function, and check for errors. Finally, we return the correct response if the insert was successful.

Next, we need to update user_routes.go with the route API URL and corresponding controller as shown below:

Get a User Endpoint
To get the details of a user, we need to modify user_controller.go as shown below:

The snippet above does the following:

  • Import the required dependencies.
  • Create a GetAUser function that returns an error. Inside the function, we first define a timeout of 10 seconds for finding a user in the collection, a userId variable to get the user’s id from the URL parameter, and a user variable. We convert the userId from a string to a primitive.ObjectID type, a BSON type MongoDB uses. Secondly, we search for the user using userCollection.FindOne, passing objId as a filter, and use the Decode method to unmarshal the matching document. Finally, we return the decoded response.

Next, we need to update user_routes.go with the route API URL and corresponding controller as shown below:

PS: We also passed a userId as a parameter to the URL path. The specified parameter must match the one we specified in the controller.

Edit a User Endpoint
To edit a user, we need to modify user_controller.go as shown below:

The EditAUser function above does much the same as the CreateUser function. However, we include an update variable to collect the updated fields and update the collection using userCollection.UpdateOne. Lastly, we search for the updated user’s details and return the decoded response.

Next, we need to update user_routes.go with the route API URL and corresponding controller as shown below:

Delete a User Endpoint
To delete a user, we need to modify user_controller.go as shown below:

The DeleteAUser function follows the previous steps, deleting the matched record using userCollection.DeleteOne. We also check whether an item was successfully deleted and return the appropriate response.

Next, we need to update user_routes.go with the route API URL and corresponding controller as shown below:

Get List of Users Endpoint
To get the list of users, we need to modify user_controller.go as shown below:

The GetAllUsers function follows the previous steps, fetching the list of users using userCollection.Find. We also read the returned list efficiently, using the cursor’s Next method to loop through the returned users.

Next, we need to update user_routes.go with the route API URL and corresponding controller as shown below:

Complete user_controller.go

Complete user_route.go

With that done, we can test our application by starting the development server with the command below in our terminal.

go run main.go

terminal output

Create a user endpoint

Get a user endpoint

Edit a user endpoint

Delete a user endpoint

Get list of users endpoint

Database with users document



Conclusion

This post discussed how to structure a Fiber application, build a REST API and persist our data using MongoDB.

You may find these resources helpful:


Source link

How to choose a MongoDB shard key

In this article, I will show you the ideal pattern for a MongoDB shard key. Although there is a good page on this in the official MongoDB manual, it still does not provide a formula for choosing a shard key.

TL;DR

The formula is

{ coarselyAscending: 1, search: 1 }

I will explain the reason in the following sections.



User Scenario

To describe the formula well, I will use an example to illustrate the scenario. Suppose there is a collection of application logs, with documents in this format:


    "id": "4df16cf0-2699-410f-a07e-ca0bc3d3e153",
    "type": "app",
    "level": "high",
    "ts": 1635132899,
    "msg": "Database crash"

Each log has the same template: id is a UUID, ts is an epoch timestamp, and both type and level are finite enumerations. I will leverage the terminology from the official manual to explain some incorrect designs.



Low Cardinality Shard Key

From the example above, we would usually choose type at first sight, because we always use type to identify the logging scope. However, if we choose type as the shard key, we will encounter a hot-spot problem, meaning one shard becomes much larger than the others. For example, suppose there are 3 shards corresponding to the 3 types of logs, app, web, and admin, and most traffic comes from app. The shard holding the app logs will become very large, and due to the low cardinality of the shard key, the shards cannot be rebalanced.



Ascending Shard Key

Alright, if type cannot be the shard key, how about ts? We always search for the most recent logs, and ts values are uniformly distributed, so it should be a proper choice. Actually, no. An ascending shard key works at the very beginning; nevertheless, it soon causes a performance impact. Because ts is always ascending, new data is always inserted into the last shard, so the last shard must be rebalanced frequently. Worst of all, the query pattern also tends to hit the last shard, i.e., searches will often occur during the rebalancing period.



Random Shard Key

Based on the previous sections, we know that type, level, and ts are all poor shard key candidates. Thus, we could use id as the shard key, spreading the data evenly without frequent changes. This approach works fine while the data set is limited, but after the data set becomes huge, the overhead of rebalancing becomes very high: because the keys are random, MongoDB has to access the data randomly while rebalancing. If the keys were ascending, by contrast, MongoDB could retrieve the data chunks with sequential access.



Solution

A good MongoDB shard key should be like:

{ coarselyAscending: 1, search: 1 }

To prevent random access, we choose coarsely ascending data as the former key; this choice also avoids the problem of frequent rebalancing. We put a search pattern in the latter key to ensure that related data is located on the same shard as much as possible. In our example, I will not only choose the shard key but also redesign our search pattern. The ts field is fine for addressing a log at a specific time; however, it is a bit inefficient for a time-range query like "from 3 months ago until now". Hence, I will add one more key, month, to the document, so we can leverage the MongoDB date type and make a proper shard key. The collection becomes:


    "id": "4df16cf0-2699-410f-a07e-ca0bc3d3e153",
    "type": "app",
    "level": "high",
    "ts": 1635132899,
    "msg": "Database crash",
    "month": new Date(2021, 10) // only month

And the shard key is { month: 1, type: 1 }.
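Applying this in the mongo shell might look like the following; the logsdb database name is hypothetical:

sh.enableSharding("logsdb")
db.data.createIndex({ month: 1, type: 1 })  // the shard key needs a supporting index
sh.shardCollection("logsdb.data", { month: 1, type: 1 })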

The key point here is that we use month instead of ts to avoid frequent rebalancing. The month field is not made just for the shard key; on the contrary, we also use it in our search pattern. Instead of calculating the relationship between a timestamp and a date, we can use getMonth to find results faster. For instance:

var d = new Date();
d.setMonth(d.getMonth() - 1); // 1 month ago
db.data.find({ month: { $gte: d } });

To sum up, this article has presented the concepts behind designing a MongoDB shard key. You might not have coarsely ascending data of your own, but you can refer to these concepts and work out a proper key design for your applications.


Source link

Short Note on CRUD Operations of MongoDB…



MongoDB CRUD Operations

CRUD operations create, read, update, and delete documents in MongoDB.

Create Operations
Create or insert operations add new documents to a collection in the database. If the collection does not currently exist in the database, then insert operations will create the collection.

MongoDB provides the following methods to insert documents into a collection:

  • db.collection.insertOne()
  • db.collection.insertMany()
    Here, insert operations target a single collection.

Read Operations
Read operations retrieve documents from a collection in the database.

MongoDB provides the following methods to read documents from a collection:

  • db.collection.find()
    We can specify query filters or any criteria that identify the documents to return.

Update Operations
Update operations modify existing documents in a collection of the database.

MongoDB provides the following methods to update documents of a collection:

  • db.collection.updateOne()
  • db.collection.updateMany()
  • db.collection.replaceOne()
    Here, update operations target a single collection.
    We can also specify any criteria, or filters, that identify the documents to update.

Delete Operations
Delete operations remove documents from a collection in the database.

MongoDB provides the following methods to delete documents of a collection:

  • db.collection.deleteOne()
  • db.collection.deleteMany()
    Here, delete operations target a single collection.
    We can specify any criteria, or filters, that identify the documents to remove.
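As a quick illustration, here is a minimal mongo shell session exercising each of these operations; the collection and field names are hypothetical:

db.users.insertOne({ name: "Ada", role: "admin" })                // Create
db.users.find({ role: "admin" })                                  // Read
db.users.updateOne({ name: "Ada" }, { $set: { role: "user" } })   // Update
db.users.deleteOne({ name: "Ada" })                               // Delete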

Source link

Our MongoDB Atlas hackathon idea: news headlines and NLP



The idea: a news headlines search engine with sentiment analysis

After scrolling through possible subjects and MongoDB technologies we might use for the hackathon, we (two friends and I) came up with an idea that would leverage the text search capabilities of Atlas Search with a massive amount of news headlines data.

We want our news headlines search engine to be able to do these kinds of searches:

  • “War in Iraq” => would give all headlines related to that subject even if the title does not exactly match
  • “sentiment about war in Iraq in news headlines from date 1 to date 2” => would output a main sentiment related to that subject using NLP

More query filters and capabilities could later be added to the app, but if we can make a text box work that outputs relevant results, showing that the aforementioned features work, we will be very pleased 🙂

We have no prior formal knowledge of data science, Atlas Search, or NLP, so I guess it’s gonna be a hell of a ride ^^



Initial steps

We derived a few major steps to create our app:

  1. gather as much news headlines data as possible, as CSVs or JSONs, from various sources
  2. define common data structure of the news headlines entity(ies) we’ll use in the app
  3. I/O algorithm to format all data from various sources into one or multiple files with same format
  4. fill mongo DB with formatted data
  5. implement full text search with Atlas Search
  6. add sentiment analysis to headlines text search feature

These are the rough steps we’ve thought of so far; I guess they will be split into multiple sub-todos as we go along.

If you want to see how our project moves on, check out => https://github.com/yactouat/dev.to_mongodbatlas_hackathon_2022/projects/2

Stay Tuned !


Source link

Using MongoDB with Node.JS

First, create a new project.

cd into the project folder and run npm init. Follow the steps until you’re done.

Run: npm i mongodb. This will install the official MongoDB driver for Node.

Create an index.js, or main.js, depending on your main file when you ran npm init.

Inside it, add this:

const { MongoClient } = require("mongodb"); // MongoClient is a named export
const mongouri = 'mongodb://your_connection_string';
const client = new MongoClient(mongouri);

client.connect().then(() => console.log("Connected to MongoDB"));

Congrats, if you run node ., you should see ‘Connected to MongoDB’.

Let’s create a quick question database by using an asynchronous function. Add this above the client.connect() line and under the constants:

async function createListing(db, collection, data) {
    await client.db(db).collection(collection).insertOne(data);
}

Then, under the client.connect(...) line, put:

createListing('question', 'questions', {
    question: "What's 2+2?",
    answer: 4
});

Go ahead and run node .. If you have access to your database, you should see that listing in the database.

Let’s read a listing and compare an answer by creating another asynchronous function. Under the ‘createListing’ function, add:

async function readListing(db, collection, data) {
    const result = await client.db(db).collection(collection).findOne(data);
    // return false when no matching document exists, true otherwise
    if (result === null) return false;
    return true;
}

Then, let’s remove the lines where we created our listing and replace them with:

let guess = 4;
readListing('question', 'questions', {
    answer: guess
}).then((res) => {
    // readListing is async, so wait for its result before comparing
    if (res === false) {
        console.log("Oops, you got it wrong.");
    } else {
        console.log("Yay! You got it right!");
    }
});

And now, we will run node ., it should output: “Yay! You got it right!”

Congratulations! You’ve just created and read data from a database!

To the beginners: keep learning. You never know what you can accomplish if you keep putting your all into it. This tutorial has just shown you how to use one of the best databases out there, very easily. So go and do what all of us beginners should do: keep learning, and keep attempting new things. Good luck!


Source link

MongoDB $weeklyUpdate (November 22, 2021): Latest MongoDB Tutorials, Events, Podcasts, & Streams!



👋 Hi everyone!

Welcome to MongoDB $weeklyUpdate!

Here, you’ll find the latest developer tutorials, upcoming official MongoDB events, and get a heads up on our latest Twitch streams and podcast, curated by Adrienne Tacke.

Enjoy!



🎓 Freshest Tutorials on DevHub

Want to find the latest MongoDB tutorials and articles created for developers, by developers? Look no further than our DevHub!



Let’s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android!

Diego Freniche
We can convert an existing Ionic React Web App that saves data in MongoDB Realm using Apollo GraphQL into an iOS and Android app.



Building an Autocomplete Form Element with Atlas Search and JavaScript

Nic Raboy
In this tutorial, we’re going to see how to create a simple web application that surfaces autocomplete suggestions to the user.



Migrate from Azure CosmosDB to MongoDB Atlas Using Apache Kafka

Robert Walters
In this blog post, we will cover how to leverage Apache Kafka to move data from Azure CosmosDB Core (Native API) to MongoDB Atlas.



Create a REST API with Cloudflare Worker, MongoDB Atlas & Realm

Maxime Beugnet, Luke Edwards
In this blog post, we create a serverless REST API with a Cloudflare worker using the MongoDB Realm Web SDK and a MongoDB Atlas cluster to store the data.



📅 Official MongoDB Events & Community Events

Attend an official MongoDB event near you! Chat with MongoDB experts, learn something new, meet other developers, and win some swag!

Nov 29-Dec 2 (Las Vegas) – MongoDB at AWS re:Invent 2021



📺 MongoDB on Twitch & YouTube

We stream tech tutorials, live coding, and talk to members of our community via Twitch and YouTube. Sometimes, we even stream twice a week! Be sure to follow us on Twitch and subscribe to our YouTube channel to be notified of every stream!

Latest Stream
https://www.twitch.tv/videos/1203473721

🍿 Follow us on Twitch and subscribe to our YouTube channel so you never miss a stream!



🎙 Last Word on the MongoDB Podcast

Latest Episode

Catch up on past episodes:
Ep. 87 – Matt Asay and Joe Drumgoole Life at .Local London

Ep. 86 – NextJS with Ado Kukic

Ep. 85 – Tech Conferences with Nancy Monaghan and Dorothy McClelland

(Not listening on Spotify? We got you! We’re most likely on your favorite podcast network, including Apple Podcasts, PlayerFM, Podtail, and Listen Notes 😊)

💡 These $weeklyUpdates are always posted to the MongoDB Community Forums first! Sign up today to always get first dibs on these $weeklyUpdates and other MongoDB announcements, interact with the MongoDB community, and help others solve MongoDB related issues!




Source link

mongoose recommended plugin

The scope of this library is to let mongoose users implement, in a simple way, a content-based recommendation system with a mongoose schema. It is pretty simple, and in the future I want to introduce a collaborative-filtering method as well.

How does it work?
It calculates similarities between mongoose entities on a single text field using TF-IDF and vector distance; for more background, search for descriptions of content-based recommendation systems.

How to use this library
After installing it in your project, add the plugin to the schema of each entity for which you want similar entities:

import { RecommendedPlugin } from 'mongoose-recommended-plugin';

const mongooseSchema = {
    // YOUR SCHEMA DEFINITION
};

// before generating the model
mongooseSchema.plugin(RecommendedPlugin);

After adding the plugin to the schema, you can set two new fields in the schema types:

  • similar = indicates the text field on which to calculate similarity, such as name or description
  • minSimilarity = indicates the minimum percentage at which another entity counts as similar (e.g. 0.1 is 10%)

An example:


{
    offerCode: {
        type: String,
        odinQFilter: true
    },
    discountCode: {
        type: String,
    },
    // make sure to place similar on a String field!
    discountDescription: {
        type: String,
        odinQFilter: true,
        similar: true,
        minSimilarity: 0.1
    },
    originalPrice: {
        type: Number
    },
    discountedPrice: {
        type: Number
    },
    discountPercentage: {
        type: Number
    },
    startDate: {
        type: Date
    },
    endDate: {
        type: Date
    },
    neverExpire: {
        type: Boolean,
        default: false
    },
    offerLink: {
        type: String
    }
}


After this, the base schema gains two new methods that allow you to calculate and fetch similar entities:

  • calculateSimilars
  • getSimilar

Important
Before calling getSimilar, you have to call calculateSimilars to compute and save the similarity results in the database. We will see this now.

Now we have to call calculateSimilars to compute the results and save them into the database (the plugin will save results in a collection named BASIC_COLLECTION_NAME + 'similarresults').

To run it periodically, I suggest using a scheduler like this:

import schedule from 'node-schedule';
import Offers from '../../api/offers/model';

const log = logger.child({ section: '\x1B[0;35mScheduler:\x1B[0m' });

export const start = function () {
    log.info('Starting...');

    schedule.scheduleJob('*/10 * * * * *', calculateSimilarsResult);

    log.info('Starting...', 'DONE');
};

async function calculateSimilarsResult() {
    await Offers.calculateSimilars();
}

This is an example of how to calculate similars every 10 seconds, but you can call it whenever and however you want.

After this, we can call the second method, passing the _id of the entity for which we want similars:


await Offers.getSimilar('619d2d91eac832002d2f36de')

And that’s all!

DB format of the plugin’s saved results:


 
    "_id" : ObjectId("61a25cae646804e510d84f92"), 
    "relatedScore" : [
        
            "id" : ObjectId("619d2d91eac832002d2f36de"), 
            "score" : 0.45293266622972733
        
    ], 
    "entityId" : "619ac77c39dd6b002d1bd3bb", 
    "__v" : NumberInt(0)


For questions or to contribute, write to marco.bertelli@runelab.it.

I hope this library will be helpful. If you like the project, like and share this article!


Source link

Testing Node.js/Express app + MongoDB with jest and supertest



Introduction

It can be quite hard to find the right steps when you already have a set of technologies in your project. As the title says, my target audience is those who already know how to develop a backend application with Express + MongoDB but not how to write tests for it. If you are still with me, let’s get started.



Tech Stack

  • Node.js
    JavaScript runtime environment outside of the browser
  • Express
    Backend application framework for Node.js
  • MongoDB
    NoSQL database that stores JSON-like documents
  • Jest
    JavaScript testing framework maintained by Facebook
  • supertest
    npm package that helps test HTTP



Writing tests



Steps

  1. Prepare a mongodb in memory server for testing
  2. Write tests with jest and supertest
  3. (Optional) Set up NODE_ENV to test



Prepare a mongodb in memory server for testing

First, install the in-memory MongoDB server with the command below:
npm i -D mongodb-memory-server

I put all test files inside the __tests__ folder, but feel free to modify the path as needed.

/__tests__/config/database.js

import mongoose from "mongoose";
import { MongoMemoryServer } from "mongodb-memory-server";
import { MongoClient } from "mongodb";

let connection: MongoClient;
let mongoServer: MongoMemoryServer;

const connect = async () => {
  mongoServer = await MongoMemoryServer.create();
  connection = await MongoClient.connect(mongoServer.getUri(), {});
};

const close = async () => {
  await mongoose.connection.dropDatabase();
  await mongoose.connection.close();
  await mongoServer.stop();
};

const clear = async () => {
  const collections = mongoose.connection.collections;
  for (const key in collections) {
    await collections[key].deleteMany({});
  }
};

export default { connect, close, clear };

As with an ordinary MongoDB instance, you connect to the database before running tests and close the connection after running them. You can also nuke the data in the database using clear. I use a default export here so I can import the module as db and use the functions like db.connect() or db.clear(), but that is totally up to you and your TypeScript settings.



Write tests with jest and supertest

I assume most of you have already installed the testing dependencies, but if not, please run the command below:
npm i -D jest supertest

import request from "supertest";
import app from "../src/index";
import db from "./config/database";

const agent = request.agent(app);

beforeAll(async () => await db.connect());
afterEach(async () => await db.clear());
afterAll(async () => await db.close());

describe("tags", () => 
  describe("POST /tags", () => 
    test("successful", async () => 
      const res = await agent.post("/tags").send( name: "test-tag");
      expect(res.statusCode).toEqual(201);
      expect(res.body).toBeTruthy();
    );
  );
);

As mentioned in the previous step, you can make use of the beforeAll, afterEach, and afterAll hooks for the database connections and modifications. If you want to keep the data that you create with POST, you can remove db.clear() from the afterEach hook so you can interact with the same object in other methods like PUT or DELETE.



Set up NODE_ENV to test

For better maintenance, I set NODE_ENV=test just before running the tests.

package.json

"scripts": 
  "test": "export NODE_ENV=test && jest --forceExit --runInBand",

To avoid a port collision, my Express app does not occupy the port while testing. I also use dotenv to deal with environment variables, for those who aren’t familiar with it.

/src/index.ts

if (process.env.NODE_ENV !== "test") {
  app.listen(port, () => {
    console.log(`Express app listening at ${process.env.BASE_URI}:${port}`);
  });
}


Conclusion

In the end, it all comes down to the database setup for testing. I hope this post was right for you.

Feel free to reach out if you have any questions or suggestions to make this article better. Thank you for reading. Happy Coding!


Source link