Thirty Days of Rust: Day Five

Today I wanted to take it easy a little bit and try some WebAssembly with Rust. Over the years I’ve gotten very used to JavaScript, but now that I’m doing this challenge, I wanted to dip my toes into WebAssembly, so that’s exactly what I did. I found a book that basically told me everything I needed to know, and then I got started.



Rust Setup

So instead of a new Rust app, I needed to make a Rust lib, which I could do with:

$ cargo new day5 --lib

Then I added two things to my Cargo.toml so it looked like this:

[package]
name = "day5"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib", "rlib"]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
wasm-bindgen = "0.2.78"

Then I went ahead and installed wasm-pack with cargo:

$ cargo install wasm-pack



Rust Code

The only thing I wanted this app to do was add two numbers together. It’s probably too simple for this challenge to really mean anything, but I didn’t want to spend too much time on this because tomorrow I want to rebuild my hangman game in the browser and I figured today could be a little bit shorter.

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn add(n1: i32, n2: i32) -> i32 {
    n1 + n2
}

So that was simple, as I expected, and now I just had to pack it together with:

$ wasm-pack build
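
If everything goes well, wasm-pack puts the compiled module into a pkg folder next to src, roughly like this (the exact files may vary by wasm-pack version):

pkg/
├── day5.d.ts
├── day5.js
├── day5_bg.wasm
├── day5_bg.wasm.d.ts
└── package.json

That pkg folder is what we’ll import from on the JavaScript side.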



JS Setup

$ pnpm init wasm-app www
$ cd www
$ pnpm i

That’s all. Now I just imported the wasm module and console logged the result of the add function:

import * as wasm from "../pkg";

console.log(wasm.add(21, 19));

Now I could just run pnpm serve and open up localhost:8080 to get to my app. When I opened up the console, it showed the logged value of 40.
That’s about it from me today, but I look forward to tomorrow and sorry if this one was a little less exciting.


Source link

References to Literals in Rust?!

One day messing around with Rust, I found that the following code is valid:

fn main() {
    let x = &0;
}

That’s assigning a variable to a reference to the literal 0 – how?! Why?! This absolutely shocked me. Just try doing this in C++ and you’ll see why:

error: non-const lvalue reference to type 'int' cannot bind to a temporary of type 'int'
    int &r = 0;
         ^   ~

The literal is a temporary – you can’t have a reference to that! String literals are lvalues in C++, but that’s a weird special case. That’s why you can assign it to a pointer like const char *, but can’t get a const int * from an integer literal.



Why is this shocking?

This may not seem that shocking to some. Literals are generally temporary and don’t really live anywhere in memory – they’re essentially hard coded constants in the program. A reference points to some place in memory. How do we point to something that doesn’t live in memory? Well, we can’t, and we don’t!



Rvalue Static Promotion

This concept in Rust is called rvalue static promotion. We can look at each part to see what that means:

Rvalue: Something that can only be on the right hand side of an assignment. For example, you can’t do 1 = x because the literal 1 is an rvalue.

Static: Something that is valid for the whole lifetime of the program.

So we promote the rvalue to a static value in order to take a reference to it. Looking at the earlier program in Rust’s playground, we can see this in action. The MIR (one of the intermediate representations of Rust) is:

fn main() -> () {
    let mut _0: ();                      // return place in scope 0 at src/main.rs:1:11: 1:11
    let _1: &i32;                        // in scope 0 at src/main.rs:2:9: 2:10
    let mut _2: &i32;                    // in scope 0 at src/main.rs:2:13: 2:15
    scope 1 {
        debug x => _1;                   // in scope 1 at src/main.rs:2:9: 2:10
    }

    bb0: {
        _2 = const main::promoted[0];    // scope 0 at src/main.rs:2:13: 2:15
                                         // ...
        _1 = _2;                         // scope 0 at src/main.rs:2:13: 2:15
        return;                          // scope 0 at src/main.rs:3:2: 3:2
    }
}

promoted[0] in main: &i32 = {
    let mut _0: &i32;                    // return place in scope 0 at src/main.rs:2:13: 2:15
    let mut _1: i32;                     // in scope 0 at src/main.rs:2:14: 2:15

    bb0: {
        _1 = const 0_i32;                // scope 0 at src/main.rs:2:14: 2:15
        _0 = &_1;                        // scope 0 at src/main.rs:2:13: 2:15
        return;                          // scope 0 at src/main.rs:2:13: 2:15
    }
}
This is a little weird to look at if you’ve never seen MIR before, but the important part is the line promoted[0] in main: &i32 – that’s where we see the promoted variable! Then in the main program we assign with _2 = const main::promoted[0];. So we lift the literal out to a static lifetime in order to return a reference, pretty neat.
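
In other words, the promoted value gets the 'static lifetime, and we can even make that explicit ourselves. A minimal sketch:

fn main() {
    // Thanks to rvalue static promotion, this compiles:
    let x: &'static i32 = &0;
}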



Why did they do this?

I find this the interesting part. We can see a lot of the motivation for this in the RFC for the feature:

The necessary changes in the compiler did already get implemented as part of codegen optimizations (emitting references-to or memcopies-from values in static memory instead of embedding them in the code).

It seems like it was just an easy thing to implement, so they did it. The drawback they list is pretty interesting too:

One more feature with seemingly ad-hoc rules to complicate the language…

I found this funny. It seems like they just thought, “it’s easy enough, could be useful, why not?” and added a new feature to the Rust language. So many languages get by without this, but the Rust devs said: why not?



It’s useful!

You can see this exact thing in action in Rust’s own source code! At the time of writing, it appears here:

    dump_mir(infcx.tcx, None, "renumber", &0, body, |_, _| Ok(()));

The fourth parameter is a reference to the literal 0!

Well, I don’t know how useful you’d say it is. But, it’s an interesting thing in a common compiler that not many languages have.


Source link

How to build a blockchain from scratch in Rust

2021 was a huge year for cryptocurrencies, NFTs, and decentralized applications (dApps), and 2022 will be even bigger. Blockchain is the underlying technology behind all of them.

Blockchain technology has the potential to change nearly every aspect of our lives from the Finance industry, Travel & mobility, Infrastructures, Healthcare, Public sector, Retail, Agriculture & mining, Education, Communication, Entertainment, and more.

Every smart person that I admire in the world, and those I semi-fear, is focused on this concept of crypto for a reason. They understand that this is the driving force of the fourth industrial revolution: steam engine, electricity, then the microchip — blockchain and crypto is the fourth.
— Brock Pierce



What is a blockchain?

A blockchain is a decentralized ledger of transactions across a peer-to-peer network; you can also think of a blockchain as a decentralized, immutable database. A blockchain can be broken down fundamentally into several components, e.g. the node, transaction, block, chain, and the consensus protocol (proof of work, proof of stake, proof of history).

If you are anything like me, you learn by building. Now the reason I’m writing this article is to give you a basic overview of how blockchains work by building a blockchain with Rust.

Sounds good? Let’s get to it.



Getting Started

Let us start by creating a new Rust project:

cargo +nightly new blockchain

Change to the directory you just created:

cd blockchain

Let’s add the necessary packages we need to build a blockchain:

[dependencies]
chrono = "0.4"
serde = { version = "1.0.106", features = ["derive"] }
serde_json = "1.0"
sha2 = "0.10.0"

Next, create a folder called models; that’s where you will keep most of your blockchain logic. In that folder, create two (2) files called blockchain.rs and block.rs.

Import the following packages in both of the files and save them:

blockchain.rs

use chrono::prelude::*;
// Internal module
use super::block::Block;

block.rs

use super::blockchain::Blockchain;
use chrono::prelude::*;
use sha2::{Digest, Sha256};
use serde::{Deserialize, Serialize};

You may have noticed that we imported use super::block::Block; in our blockchain.rs file. We are just importing the struct located in our block.rs file; don’t worry, I will explain that a bit later.

After we have imported the necessary packages let’s create a type in our blockchain.rs file called Blocks:

type Blocks = Vec<Block>;

Next, let’s create a Blockchain type in blockchain.rs and an empty implementation for our Blockchain type:

// `Blockchain` is a struct that represents the blockchain.
// We also derive `Clone` because `mine()` below takes a clone of the blockchain.
#[derive(Debug, Clone)]
pub struct Blockchain {
  // The first block to be added to the chain.
  pub genesis_block: Block,
  // The storage for blocks.
  pub chain: Blocks,
  // Minimum amount of work required to validate a block.
  pub difficulty: usize,
}

impl Blockchain {}

Next, let’s create a block type in block.rs and an empty implementation for our block type:

// `Block` is a struct that represents a block in a blockchain.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Block {
   // The index in which the current block is stored.
   pub index: u64,
   // The time the current block is created.
   pub timestamp: u64,

   // The block's proof of work.
   pub proof_of_work: u64,
   // The previous block hash.
   pub previous_hash: String,
   // The current block hash.
   pub hash: String,
}

impl Block {}



Creating the genesis block:

The genesis block is the first block created in a blockchain. Let’s create a function that creates a genesis block for our blockchain and returns a new Blockchain type instance.

Add the following code in our Blockchain implementation in blockchain.rs:

impl Blockchain {
   pub fn new(difficulty: usize) -> Self {
     // First block in the chain.
     let genesis_block = Block {
        index: 0,
        timestamp: Utc::now().timestamp_millis() as u64,
        proof_of_work: u64::default(),
        previous_hash: String::default(),
        hash: String::default(),
     };
     // Create a chain starting from the genesis block.
     let mut chain = Vec::new();
     chain.push(genesis_block.clone());
     // Create a blockchain instance.
     Blockchain {
        genesis_block,
        chain,
        difficulty,
     }
   }
}

In the code above, we did the following:

  • Created our genesis_block instance.

  • Added the genesis_block we created to the chain in our Blockchain type.

  • Returned a Blockchain type instance.

In the genesis_block instance we created, notice how we set the previous_hash field to an empty string value (String::default()). That’s because there is no previous block; the genesis block is the first block in the blockchain.

Also notice we made the hash of our genesis_block an empty string (“”). That’s because we haven’t calculated the hash value for our genesis block yet.



Generating the hash of a block

A hash is generated with the help of cryptography and current information present in the block.

Let’s create a function called calculate_hash() in our Block implementation in the block.rs file we created:

// Calculate block hash.
pub fn calculate_hash(&self) -> String {
  let mut block_data = self.clone();
  block_data.hash = String::default();
  let serialized_block_data = serde_json::to_string(&block_data).unwrap();
  // Calculate and return SHA-256 hash value.
  let mut hasher = Sha256::new();
  hasher.update(serialized_block_data);
  let result = hasher.finalize();
  format!("{:x}", result)
}

In the code above, we did the following:

  • Converted the block’s data to JSON format.

  • Hashed the block’s data with the SHA256 algorithm.

  • Returned the hashing result in base16.
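
As a quick sanity check, a SHA-256 digest rendered in base16 is always 64 hex characters, which we can assert in a hypothetical unit test (placed at the bottom of block.rs, outside the impl block; the field values are arbitrary):

#[test]
fn calculate_hash_returns_64_hex_chars() {
    let block = Block {
        index: 1,
        timestamp: 1_640_000_000_000,
        proof_of_work: 0,
        previous_hash: String::default(),
        hash: String::default(),
    };
    // SHA-256 is 32 bytes; base16 encodes each byte as two characters.
    assert_eq!(block.calculate_hash().len(), 64);
}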



Creating a new block

Great! We have implemented functionality for creating our genesis block and calculating the hashes of our blocks.

Now let’s add the functionality for adding new blocks to the blockchain. In our blockchain.rs file, add this function to the Blockchain implementation:

pub fn add_block(&mut self) {
  // A new block always links to the hash of the latest block in the chain.
  let mut new_block = Block::new(
    self.chain.len() as u64,
    self.chain[self.chain.len() - 1].hash.clone(),
  );
  new_block.mine(self.clone());
  self.chain.push(new_block.clone());
  println!("New block added to chain -> {:?}", new_block);
}

Here we did the following:

  • Created an add_block function that takes a mutable reference to our Blockchain instance (&mut self).

  • Created our Block type instance.

  • Mined a block hash using the Block type’s mine function.

  • Added the new block to the chain of blocks.

Next, in our block.rs file add the following code in the Block type implementation:

// Create a new block. The hash will be calculated and set automatically.
pub fn new(index: u64, previous_hash: String) -> Self {
   // Current block to be created.
   let mut block = Block {
      index,
      timestamp: Utc::now().timestamp_millis() as u64,
      proof_of_work: u64::default(),
      previous_hash,
      hash: String::default(),
   };
   // Calculate and set the block's hash.
   block.hash = block.calculate_hash();
   block
}

Here we did the following:

  • Created a function called new() that takes in two arguments, index and previous_hash.

  • Created our Block type instance.

  • Generated a block hash for our block.

  • Returned a Block type instance.



Mining new block

We have successfully implemented functionality for creating a new block.

Let’s implement functionality for mining new blocks. The process of mining new blocks involves generating a SHA256 hash that starts with a desired number of 0s which would be the mining difficulty miners have to solve to mine a new block.

Let’s create a function in our block.rs file inside our Block type implementation:

// Mine block hash.
pub fn mine(&mut self, blockchain: Blockchain) {
  loop {
    if !self.hash.starts_with(&"0".repeat(blockchain.difficulty)) {
      self.proof_of_work += 1;
      self.hash = self.calculate_hash();
    } else {
      break;
    }
  }
}
Great job, we are done with implementing our blockchain, now let’s test it out.

Let’s create a file called mod.rs in our models folder and save the following code:

pub mod block;
pub mod blockchain;

All we are doing here is making the files we created earlier, blockchain.rs and block.rs, publicly accessible in our main.rs file.

Now let’s paste the following code in our main.rs file:

mod models;

fn main() {
   let difficulty = 1;
   let mut blockchain = models::blockchain::Blockchain::new(difficulty);
   blockchain.add_block();
}
Now, to create the blockchain and mine a new block, run cargo +nightly run in your terminal.
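
If everything is wired up correctly, you should see output roughly like this (your timestamp, proof_of_work, and hash will differ):

New block added to chain -> Block { index: 1, timestamp: 1640000000000, proof_of_work: 14, previous_hash: "", hash: "0c7d..." }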



Conclusion

In this tutorial you’ve learned how to create a simple blockchain from scratch with Rust.

I hope you’ve enjoyed reading this article, you can get the full source code of this Rust blockchain here

If you have any comments, please feel free to drop them below.


Source link

Custom commands in Bevy with extension traits

Bevy is an ECS-based game engine built in Rust. Extension traits are a pattern in rust that allows you to add methods to an existing type defined outside of your crate. You can probably guess where I’m going with this.
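
If you haven’t seen extension traits before, here’s the pattern in miniature (nothing Bevy-specific yet, just a toy trait on str):

// The extension-trait pattern: bolt a new method onto a foreign type.
trait StrExt {
    fn shout(&self) -> String;
}

impl StrExt for str {
    fn shout(&self) -> String {
        self.to_uppercase()
    }
}

fn main() {
    // As long as StrExt is in scope, `shout` is callable on any &str.
    println!("{}", "hello".shout()); // HELLO
}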

In Bevy, any system can access the Commands structure to issue commands that manipulate the World. The most common one would probably be the
Commands::spawn method, which lets you spawn an entity with the components you specify. You can pass a structure implementing the Bundle trait to this method. Luckily, tuples of none to many components implement this trait thanks to macro magic, so you can just call the method like:

commands.spawn((Component1 { x: 3.0, y: 4.0 }, Component2 { value: true }))

Or, you can easily define your own bundles and use them:

use bevy::ecs::*;

#[derive(Bundle)]
struct HumbleBundle {
    component1: Component1,
    component2: Component2,
}

// ... lines later, somewhere in a system
commands.spawn(HumbleBundle {
    component1: Component1 { x: 3.0, y: 4.0 },
    component2: Component2 { value: true },
});

But maybe you need to spawn multiple entities that refer to each other or something. Then, you need to implement a new
Command yourself.

use bevy::ecs::*;

struct ReferringComponent {
    refers_to: Entity,
}

struct ComponentFoo {
    bar: i32,
}

struct SpawnReferringPair {
    first_bar: i32,
    second_bar: i32,
}

// create two entities, the second one referring to the first one
impl Command for SpawnReferringPair {
    fn write(self: Box<Self>, world: &mut World, resources: &mut Resources) {
        let first_entity = world.spawn((ComponentFoo { bar: self.first_bar },));
        world.spawn((
            ComponentFoo { bar: self.second_bar },
            ReferringComponent { refers_to: first_entity },
        ));
    }
}

And we can use this Command with Commands, like:

commands.add_command(SpawnReferringPair { first_bar: 5, second_bar: 10 });

Now, this is completely fine and functional. But I think we can make it prettier, so we can use it like:

commands.spawn_referring_pair(5, 10);

We just have to add a method to Bevy’s already defined Commands structure with an extension trait.

// imagine that the code defining SpawnReferringPair is here as well :)

trait CommandsExt {
    fn spawn_referring_pair(&mut self, first_bar: i32, second_bar: i32) -> &mut Self;
}

impl CommandsExt for Commands {
    fn spawn_referring_pair(&mut self, first_bar: i32, second_bar: i32) -> &mut Self {
        self.add_command(SpawnReferringPair {
            first_bar, second_bar // field init shorthand
        });
        self
    }
}

And voila, we can use this method just like the spawn method:

commands
    .spawn_referring_pair(5, 10)
    .spawn((SomeOtherComponent,))
    .spawn_referring_pair(1024, 1);

In conclusion, don’t let the computer tell you what to do, make it do what you want, however arbitrary it might be. Also, Bevy is pre-1.0 and as much as they try to keep things backwards-compatible, this article might not be correct beyond 0.4, or it could be, check the source code, nerd.


Source link

CryptoPals Crypto Challenges Using Rust: Fixed XOR

This is Challenge 2 of the Cryptopals challenges, implemented in the Rust language.



Context

Given two hex-encoded strings of equal length, we have to return the XOR of them.
XOR (or, exclusive OR) is a binary operation (like AND, OR) on bits. XOR gives true/1 when the two inputs differ, otherwise false/0:

|  A  |  B  |XOR(A^B)|
|-----|-----|--------|
|  0  |  0  |    0   |
|  0  |  1  |    1   |
|  1  |  0  |    1   |
|  1  |  1  |    0   |

So, theoretically you can convert hex to binary and xor them to get output, like:

10110011 ^ 01101011 = 11011000

To solve the challenge with Rust, we can make use of the hex crate to decode the hex strings to byte vectors, zip the two vectors, then XOR them byte by byte. And finally, encode the XORed bytes back to hex:

use hex::{decode, encode};

pub fn fixed_xor(hex1: &str, hex2: &str) -> String {
    let bytes1 = decode(hex1).expect("valid hex string");
    let bytes2 = decode(hex2).expect("valid hex string");
    // Zip the two byte vectors and XOR them byte by byte.
    let xored: Vec<u8> = bytes1
        .iter()
        .zip(bytes2.iter())
        .map(|(&b1, &b2)| b1 ^ b2)
        .collect();
    // Encode the XORed bytes back to a hex string.
    encode(xored)
}
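
As a quick check, the official test vector from the challenge page can go straight into a unit test:

#[test]
fn test_fixed_xor() {
    assert_eq!(
        fixed_xor(
            "1c0111001f010100061a024b53535009181c",
            "686974207468652062756c6c277320657965",
        ),
        "746865206b696420646f6e277420706c6179"
    );
}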

And we’re done.

See the code on Github.

Find me on:
Twitter – @heyNvN

naveeen.com




Source link

What is Collaborative IoT?



The problem

After building the platform “House-Of-Iot” (HOI), which required users to have direct authentication credentials for the HOI general server, I realized there was no easy way to collaborate with others at lower risk.

HOI isn’t the only platform that lacks built-in, minimal-risk collaboration. The platform “Home Assistant” (HA) suffers from the same issue and requires direct access for collaboration.



The solution

The solution was to build a system that allows owners of an IoT server to temporarily and safely give others access, with the ability to easily revoke it. Users join “Rooms”, communicate in a clubhouse-like environment, and yield temporary control over their IoT servers.



What makes this safer than giving direct access?

Direct access means users could directly communicate with a server with no restrictions, possibly even modify settings of the server and mess up the underlying functionality.



Revoking/Giving access

Users have permission levels when they join a room. Each room has an “IoT Board”, which is the panel for concurrently controlling multiple IoT servers at once. Once a user with mod permissions spawns a connection to their IoT server, they can give permission to anyone in the room to control it.

When this user disconnects from Collaborative IoT, or anything goes wrong with their communication, the user’s spawned connection to the IoT server is destroyed, along with the access of everyone who had it.

When this user decides they don’t want a specific user to have control anymore, they can revoke access. Revoking access just removes the ability to control a specific spawned IoT server connection.


Source link

Writing and deploying Rust Lambda function to AWS: Image glitch as a service

Japanese version: https://tech.prog-8.com/entry/2021/11/16/174709

Ever since AWS released Rust runtime for AWS lambda I’ve been wanting to try it out. In this article I am going to walk you through every step required to write and deploy a lambda written in Rust to AWS.

To avoid making this article too big I assume you are familiar with basic Rust, Docker and Node. Also make sure you have Rust toolchain, Docker and Node.js installed in your environment.

To avoid building yet another boring hello-world-like handler, we will build a n a n o s e r v i c e that takes an image and returns a glitched version of it (which you can use as a profile picture etc. but that is up to you). Pretty useless in isolation but still fun and just enough for a good walkthrough.



Start a fresh Rust project

Let’s begin with a fresh Rust project.

cargo new glitch
cd glitch

Let’s build the core of our API: a glitch function. Actually, two glitch functions. I must warn you that I’m not a professional glitch artist and that there is a lot of depth to glitch art, but the two simple tricks below will suffice. One trick is to just take a byte of the image you want to glitch and replace it with some random byte. Another trick is to take an arbitrary sequence of bytes and sort it. Rust’s standard library does not come with a random number generator, so we need to install the rand crate first:

[dependencies]
rand = "0.8.4"

And here’s the byte-replacing glitch function. We put it in src/lib.rs.

use rand::{self, Rng};

pub fn glitch_replace(image: &mut [u8]) {
    let mut rng = rand::thread_rng();
    let size = image.len() - 1;
    let rand_idx: usize = rng.gen_range(0..=size);
    image[rand_idx] = rng.gen_range(0..=255);
}

Nothing extraordinary here, we just take a reference to our image as a mutable slice of bytes and replace one. Next is the sort glitch:

const CHUNK_LEN: usize = 19;

pub fn glitch_sort(image: &mut [u8]) {
    let mut rng = rand::thread_rng();
    let size = image.len() - 1;
    let split_idx: usize = rng.gen_range(0..=size - CHUNK_LEN);
    let (_left, right) = image.split_at_mut(split_idx);
    let (glitched, _rest) = right.split_at_mut(CHUNK_LEN);
    glitched.sort();
}

Again, there is nothing complicated here. Note the very convenient split_at_mut method that easily lets us select the chunk we want to sort. CHUNK_LEN is a variable in the sense that you can try different values and expect different glitch outcomes. I randomly chose 19.

Finally, for more noticeable effect we apply these two functions multiple times as steps of one big glitch job.

pub fn glitch(image: &mut [u8]) {
    glitch_replace(image);
    glitch_sort(image);
    glitch_replace(image);
    glitch_sort(image);
    glitch_replace(image);
    glitch_sort(image);
    glitch_sort(image);
}
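
Before wiring these functions into a lambda, you can try them locally. A minimal sketch, assuming a pic.jpg next to Cargo.toml (the library crate is named glitch, matching our package name):

// examples/local.rs -- run with: cargo run --example local
fn main() {
    // Read the image, glitch the bytes in place, write the result out.
    let mut bytes = std::fs::read("pic.jpg").expect("image file exists");
    glitch::glitch(&mut bytes);
    std::fs::write("glitched.jpg", bytes).expect("can write output");
}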

Next we move on to building a lambda.



Cargo.toml: Download required dependencies

These are the minimal dependencies we’ll need.

[dependencies]
lambda_http = "0.4.1"
lambda_runtime = "0.4.1"
tokio = "1.12.0"
rand = "0.8.4"
jemallocator = "0.3.2"

lambda_runtime is the runtime for our functions. This is required because Rust is not (yet) in the list of default runtimes at the time of writing. It is possible, however, to BYOR (bring your own runtime) to AWS, and that’s what we’re doing here. lambda_http is a helper library that gives us type definitions for the request and context of the lambda. tokio is an async runtime. Our handler is so simple that we don’t need async, but lambda_runtime requires it, so we have no choice but to play along. Just in case you are unfamiliar with it, think of it as a library that runs Rust futures. We won’t need to worry much about async apart from defining our functions as async. Finally, there is jemallocator. We will get to it later.



main.rs: Handler

Alright, we have a glitch function, but how do we use it in our request handler? Let us define an apply_glitch handler that takes the request, extracts the image bytes from the body, and copies the glitched version into the response.

use glitch::glitch; // the glitch function from our src/lib.rs (the package is named "glitch")
use lambda_http::{Body, IntoResponse, Request};
use lambda_runtime::{Context, Error};

async fn apply_glitch(mut req: Request, _c: Context) -> Result<impl IntoResponse, Error> {
    let payload = req.body_mut();
    match payload {
        Body::Binary(image) => {
            glitch(image);
            Ok(image.to_owned())
        }
        // Ideally you want to handle the Text and Empty cases too.
        // unimplemented!() satisfies the compiler without all cases being
        // handled meaningfully (it panics at runtime if reached).
        _ => unimplemented!(),
    }
}

Note the useful IntoResponse trait that allows us to just return things like Strings and Vec<u8>s without thinking much about response headers.



main.rs: Main

Next, we need to simply wrap our actual handler in a lambda_http::handler. This creates an actual lambda that can be run by the lambda runtime we installed. Literally two lines of code to hook everything up.

use lambda_runtime::{self, Error};
use lambda_http::handler;
use jemallocator;

#[global_allocator]
static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let func = handler(apply_glitch);
    lambda_runtime::run(func).await?;
    Ok(())
}
Don’t forget the #[tokio::main] bit. This is an attribute macro from tokio that does some magic under the hood to make our main function async. The #[global_allocator] part is also needed to make the lambda work but we will get to it later.



Deploying to AWS

There are multiple ways to deploy this to AWS. One of them is using the AWS console. I find the console confusing for many tasks, even simple ones, so I am very excited that there exists another way: CDK. It is a Node.js library that allows us to define the required AWS resources declaratively with real code. It comes with TypeScript type definitions so in a lot of cases we don’t even need to look into the documentation.



CDK project

The only downside of CDK is that it requires a couple things in our local environment: aws CLI and Node.js. Make sure the CLI is configured with your credentials. Next, install CDK:

npm install -g aws-cdk
cdk --version

CDK requires that some resources exist prior to any deployments, like the buckets that your CDK output (which is basically a CloudFormation stack) and other artifacts like Lambda functions are uploaded to. This is done with the bootstrap command.

cdk bootstrap aws://ACCOUNT-NUMBER/REGION

Now we are ready to create a new CDK project which is responsible for deploying our lambda to the cloud. Create a lambda folder (or choose whatever name you want) in the root of your Rust project and execute the following command in it:

cdk init app --language=typescript

This will generate all the files we’ll need. Open lambda/lib/lambda-stack.ts, which should look like this:

import * as cdk from "@aws-cdk/core";

export class LambdaStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here
  }
}
Let’s check that everything is OK by running cdk synth which does a dry run and shows you the CloudFormation code it would generate. Right now, there is not much we can do without installing additional constructs – the basic building blocks of AWS CDK apps, so let’s do it first.

npm install @aws-cdk/aws-lambda @aws-cdk/aws-apigatewayv2-integrations @aws-cdk/aws-apigatewayv2

Import these in your lambda/lib/lambda-stack.ts:

import * as apigw from "@aws-cdk/aws-apigatewayv2";
import * as intg from "@aws-cdk/aws-apigatewayv2-integrations";
import * as lambda from "@aws-cdk/aws-lambda";
import * as cdk from "@aws-cdk/core";

Now we can actually define a lambda function inside the constructor above:

const glitchHandler = new lambda.Function(this, "GlitchHandler", {
  code: lambda.Code.fromAsset("../artifacts"),
  handler: "unrelated",
  runtime: lambda.Runtime.PROVIDED_AL2,
});

code is where our binary lies (we will get to it soon). handler, normally, is the name of the actual function to call but it seems to be irrelevant when using custom runtimes, so just choose any string you want. Finally, runtime is PROVIDED_AL2 which simply means we bring our own runtime (which we earlier installed as a Rust dependency) that will work on Amazon Linux 2. Just a lambda is not enough, however. Lambdas are not publicly accessible from outside of the cloud by default and we need to use API Gateway to connect the function to the outside world. To do this, add the following to your CDK code:

const glitchApi = new apigw.HttpApi(this, "GlitchAPI", {
  description: "Image glitching API",
  defaultIntegration: new intg.LambdaProxyIntegration({
    handler: glitchHandler,
  }),
});

This code is pretty self-explanatory. It creates an HTTP API Gateway that will trigger our lambda, glitchHandler, which we defined above, on incoming requests. Note how CDK makes it easy to refer to other resources: by using actual references within code.



Building a binary

We’re almost ready, but we need to make sure that CDK can see and upload our lambda binary. Normally, Rust puts the build output inside the target/ folder and gives it the same name as your package:

[package]
name = "glitch"

One weird thing about AWS Rust lambdas is that the binary needs to be named bootstrap. To do this, we need to add some settings to Cargo.toml:

[package]
autobins = false

[[bin]]
name = "bootstrap"
path = "src/main.rs"

This takes care of the name. Next, we could also change the output folder to artifacts so CDK can see it, and cargo build the project directly, but let’s imagine that you want to work on this project in different environments. The bootstrap binary actually MUST be built for the x86_64-unknown-linux-gnu target. This is not possible on e.g. Windows, so let’s use Docker!

If you’ve ever used Docker with Rust, you probably know that compiling can be painfully slow. This is because there is no cargo option to build only dependencies at the time of writing.
Luckily, there is a very good project, cargo-chef, that provides a workaround. Here’s how we use it in our Dockerfile (mostly copy-paste from the project’s README):

FROM lukemathwalker/cargo-chef:latest-rust-1.53.0 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder 
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release

FROM scratch AS export
COPY --from=builder /app/target/release/bootstrap /

With this, if we run:

docker build -o artifacts .

Docker will build a x86_64-unknown-linux-gnu binary and put it inside artifacts folder. Finally, CDK has all it needs to successfully deploy our lambda! So let’s do it (you need to be inside the lambda folder):

cdk deploy

Ideally we want to know the URL of our API Gateway immediately and there is a nice way to make CDK output this info by writing a couple more lines:

new cdk.CfnOutput(this, "glitchApi", {
  value: glitchApi.url!,
});

Now if we add the --outputs-file option to the cdk command like this:

cdk deploy --outputs-file cdk-outputs.json

we will see a lambda/cdk-outputs.json file that has the URL inside:

{
  "LambdaStack": {
    "glitchApi": "https://your-gateway-api-url.amazonaws.com/"
  }
}


Glitch!

That was a lot of work but now we can finally call our glitch API. It would be mean of me to not share a working link here as well. So here’s the command that you can try right now to get a feel of the API:

curl -X POST https://ifzc7embya.execute-api.ap-northeast-1.amazonaws.com --data-binary "@pic.jpg" -o "glitches/$(date +%s).jpg"

I cannot guarantee that the service is going to be always up, though.

Generally, to use the API you need to prepare an image file you want to glitch and do this:

curl -X POST https://your-gateway-api-url.amazonaws.com --data-binary "@pic.jpg" -o glitched.jpg

You should see a glitched.jpg file that is glitched and hopefully looks aesthetically pleasing! Now that everything is working, you can play with the settings like the number and order of glitches, the size of the chunk that is sorted etc. If you know other simple ways to achieve nice-looking glitches, feel free to tell me on Twitter!



Examples

Here are some of my favorite glitches I generated while playing with the API.

glitch1

glitch2

glitch3

glitch4

Wait… what about jemallocator? Oh yes, I promised to explain this as well. So, it seems that for quite a long time, AWS lambdas needed to be built for the x86_64-unknown-linux-musl target. This was a pain because it needed a musl toolchain, which is not available by default. However, it looks like now you CAN use x86_64-unknown-linux-gnu, but with a caveat: you need to use jemallocator instead of the default allocator, which on Unix platforms is the system malloc. This is literally just one install and one more line in your code. I do not know if this limitation will disappear in the future.




Source link

SafeCloset, a Secret Safe – Why and how I made it in Rust

Like everybody, I have small secrets to store, like door codes, some passwords, where I buried the body, etc.

They must be kept away from other eyes but, more importantly, they must be available, even if I’m traveling far from my computers.

And they must be easily backed up without risk.

In the past, I’ve dealt with such secret storage with various solutions, like having files decrypted, edited with my favorite editor, then encrypted again.

They were full of weaknesses:

  • temporary clear files that could be inadvertently backed up, or staying here if I had to leave or in case of crash
  • editor weaknesses, like backup files and plugins
  • OS specificity making them inaccessible when far from my computers

The temporary clear files were the most dangerous problem. I’ve had to hunt for my clear files after a badly parameterized backup rule.
Editor plugins were a real threat too, especially as I started using an AI plugin that stores whole extracts of my texts for better auto-completion.

So I had to find or build better.

The obvious requirements for my new solution were:

  • an encrypted storage file, with a strong algorithm
  • a storage format making it possible to keep the file on non secure disks
  • a multi-platform application, that can be easily carried too
  • totally open-source, so that the program can be fixed or rewritten
  • no clear file ever created, no data sent to external API, so the application is an editor too
  • pure Rust, to avoid most nasty bugs

I added a few less obvious requirements:

  • easy fuzzy key search
  • fast opening, instant closing
  • auto-closing (dead man switch)
  • focus on ergonomics, I want to feel comfortable editing in the application

And in case I inadvertently become a secret agent:

  • plausible deniability by putting drawers (storage units) inside other ones
  • non observability of deep drawers (having several versions of the file doesn’t let you know whether there were changes and in which deep drawer)

Now, let’s see the technical choices.



Programming language: Rust

As I said, this was obvious to me. Such a program can’t really be written today in another language. Rust doesn’t prevent all bugs, but it makes it possible to avoid the nasty ones which stay hidden and compromise security.



Cryptographic algorithm and library : AES-GCM

I never considered rolling my own algorithm or using a lightly tested library.

I chose an AEAD crate from the RustCrypto group: AES-GCM in its SIV variant (the SIV variant isn’t really needed, but it doesn’t cost much).



File format

The minimal unit of secret in SafeCloset is an entry, which is made of a name and a value, for example “VISA Card code” and “9875”.

Entries are stored together in a drawer.

SafeCloset uses the metaphor of closets and drawers:
A SafeCloset file contains something which is called a closet.
A closet contains several drawers. Each drawer is separately encrypted, with its own passphrase (and nonce).

A drawer also contains a closet, which contains deeper drawers.

To ensure plausible deniability, drawers are automatically created, including deep ones, and nothing distinguishes drawers that you created and you can open from the ones which were automatically created and that you can’t open (you could if you knew their password but they aren’t displayed on creation and they only contain random bytes anyway).

Drawers are serialized using the serde crate, which is kind of standard in the Rust world and is very convenient. For the encoding format, I chose MessagePack which, like JSON, allows field addition but is much more compact. Having optional fields is very important to allow evolution of the file format while ensuring old files stay compatible with newer versions of the application.
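
As an illustration (a hypothetical Entry struct, not SafeCloset’s actual definitions), an optional field added in a later version simply deserializes to its default when reading old files:

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Entry {
    name: String,
    value: String,
    // Field added in a later version: missing in old files, so fall back to None.
    #[serde(default)]
    created: Option<i64>,
}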

Combining the chosen encryption scheme and the serialization encoding with the list of structures and fields, the complete file format is described in the community page to allow replacement of the application if needed.



Making a UI in the terminal

There are many low level libraries whose features go from the basic (and easy) task of coloring and styling the text you print in the terminal to handling events, terminal size, alternate screen, etc. I personally like Crossterm which is cross platform and well designed.

I combine it with Termimad, a crate I made to manage skins, generate texts without mixing the style and the content, handle text inputs, even with wide characters, and a lot of small TUI related problems.

Termimad allows fancy things like editing texts in small areas of your terminal and fading the view behind the menus or dialogs:

the fancy color fading

As the author of Termimad, I feel comfortable not recommending it for your own TUI. Not that it’s bad, I like it, but it’s a strange beast covering much more than what a library should do, and at a much lower level than your typical framework. If you’re not used to low-level TUI libraries and well versed in Rust, I suggest you look at other, higher-level TUI crates.
If you still hesitate, come to my chat and I’ll tell you whether Termimad might be the right choice for your application.



The result: SafeCloset

Here it is: https://dystroy.org/safecloset/

If I may say so, I’d say SafeCloset is convenient and pleasant to use.
It lets me find and read my secrets in a few keystrokes.
And it seems to be the most secure solution I ever used.
I designed it to be intuitive for other users too, and I hope I succeeded.

To better introduce it, I made a website explaining how it works, how to install it, how to use it: https://dystroy.org/safecloset/

The current version is a candidate for a release as 1.0, there’s nothing important missing right now, but I’ll let some time run to be sure.

I’d welcome your opinion!


Source link

Optimising Code – The Devil Lies In the Details

Optimising code for high performance can be exhausting; where can you squeeze that last bit of speed out of your code? How can you be faster than that one piece of code you found online?

For that, let’s take a look at the following example of our Rust crate:

pub fn new(values: &Vec<f64>) -> MathVector {
    let mut len: f64 = 0.0;
    for i in values {
        len += i * i;
    }
    MathVector { values:    values.clone(),
                 dimension: values.len(),
                 length:    len.sqrt() }
}
Among other things – like the for loop, for example – one thing is the problem. Can you spot it? The answer is: len.sqrt(). Getting the square root of a number is, performance-wise, extremely expensive – how should we get rid of it? You might guess that we could implement something like the infamous “Fast inverse square root” algorithm from Quake III. This has one problem though: it’s inaccurate. And for something like mathematical applications, this is often very unwanted. So we need to solve this in another way.
I got it: Why don’t we calculate it only when the value is actually needed? Let’s do it!
But firstly, we need to find a value for this field which cannot occur by pure chance. 0 is no candidate, as a vector may have the length 0. But what about a negative length? That sounds good. A vector cannot have a negative length because of the nature of mathematics; the length of a vector is calculated by summing the squares of all values and then taking the square root. As the value of a square root is – by definition – never negative, we can say with confidence that a negative number is ideal. Let’s take -1.
Now we need to generate a new getter for the field length, which only calculates it when actually needed. The code looks like this:

pub fn length(self: &mut Self) -> f64 {
    match self.length < 0f64 {
        true  => {
            let mut len: f64 = 0f64;
            for i in &self.values {
                len += i * i;
            }
            // Calculate, cache and return the length.
            self.length = len.sqrt();
            self.length
        }
        false => self.length,
    }
}
As you can clearly see, we check whether the length has already been set, using the logic from our reasoning above; if so, we can just return it. If it is -1 (or some other negative value), we calculate the length, store it, and return the newly calculated value.
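
One missing piece: the constructor now has to store the sentinel instead of eagerly computing the square root. A sketch, assuming the same MathVector fields as above:

pub fn new(values: &Vec<f64>) -> MathVector {
    MathVector { values:    values.clone(),
                 dimension: values.len(),
                 length:    -1f64 } // sentinel: length not calculated yet
}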
And that’s it! With just a few more lines of code, we removed unnecessary computations by calculating something only when it is needed. Feel inspired to apply that wherever you can!


Source link

How to build a blockchain in Rust

Written by Mario Zupan ✏️

When we think about P2P technology and what it’s used for nowadays, it’s impossible not to immediately think of blockchain technology. Few topics in IT have been as hyped or as controversial over the last decade as blockchain tech and cryptocurrency.

And while the broad interest in blockchain technology has varied quite a bit — which is, naturally, due to the monetary potential behind some of the more widely known and used cryptocurrencies — one thing is clear: it’s still relevant and it doesn’t seem to be going anywhere.

In a previous article, we covered how to build a very basic, working (albeit rather inefficient) peer-to-peer application in Rust. In this tutorial, we’ll demonstrate how to build a blockchain application with a basic mining scheme, consensus, and peer-to-peer networking in just 500 lines of Rust.




Why blockchain is exciting

While I’m personally not particularly interested in cryptocurrencies or financial gambling in general, I find the idea of decentralizing parts of our existing infrastructure very appealing. There are many great blockchain-based projects out there that aim to tackle societal problems such as climate change, social inequality, privacy and governmental transparency.

The potential behind technology built on the idea of a secure, fully transparent, decentralized ledger that enables actors to interact without having to establish trust first is as game-changing as it seems. It will be exciting to see which of the aforementioned ambitious ideas will get off the ground, gain traction, and succeed going forward.

In short, blockchain technology is exciting, not only for its world-changing potential, but also from a technical perspective. From cryptography over peer-to-peer networking to fancy consensus algorithms, the field has quite a few fascinating topics to dive into.



Writing a blockchain app in Rust

In this guide, we’ll build a very simple blockchain application from scratch using Rust. Our app will not be particularly efficient, secure, or robust, but it will help you understand how some of the fundamental concepts behind widely known blockchain systems can be implemented in a simple way, explaining some of the ideas behind them.

We won’t go into every detail on every concept, and the implementation will have some serious shortcomings. You wouldn’t want to use this project for anything within miles of a production use case, but the goal is to build something you can play around with, apply to your own ideas, and examine to get more familiar with both Rust and blockchain tech in general.

The focus will be on the technical part — i.e., how to implement some of the concepts and how they play together. We won’t explain what a blockchain is, nor will we touch on mining, consensus, and the like beyond what’s necessary for this tutorial. We will mostly be concerned with how to put these ideas, in a simplified version, into Rust code.

Also, we won’t build a cryptocurrency or similar system. Our design is much simpler: every node in the network can add data (strings) to the decentralized ledger (the blockchain) by mining a valid block locally and then broadcasting that block.

As long as it’s a valid block (we’ll see later on what this means), each node will add the block to its chain and our piece of data becomes part of a decentralized, tamper-proof, indestructible (unless all nodes shut down, in our case) network!

This is obviously a quite simplified and somewhat contrived design that would run into efficiency and robustness issues rather quickly when scaling up. But since we’re just doing this exercise to learn, that’s totally fine. If you make it to the end and have some motivation, you can extend it in any direction you want and maybe build the next big thing from our paltry beginnings here — you never know!



Setting up our Rust app

To follow along, all you need is a recent Rust installation.

First, create a new Rust project:

cargo new rust-blockchain-example
cd rust-blockchain-example

Next, edit the Cargo.toml file and add the dependencies you’ll need.

[dependencies]
chrono = "0.4"
sha2 = "0.9.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
libp2p = { version = "0.39", features = ["tcp-tokio", "mdns"] }
tokio = { version = "1.0", features = ["io-util", "io-std", "macros", "rt", "rt-multi-thread", "sync", "time"] }
hex = "0.4"
once_cell = "1.5"
log = "0.4"
pretty_env_logger = "0.4"

We’re using libp2p as our peer-to-peer networking layer and Tokio as our underlying runtime.

We’ll use the sha2 library for our sha256 hashing and the hex crate to transform the binary hashes into readable and transferable hex.

Besides that, there are really only utilities, such as serde for JSON, log and pretty_env_logger for logging, once_cell for static initialization, and chrono for timestamps.

With the setup out of the way, let’s start by implementing the blockchain basics first and then, later on, putting all of it into a P2P-networked context.



Blockchain basics

Let’s first define our data structures for our actual blockchain:

pub struct App {
    pub blocks: Vec<Block>,
}

#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Block {
    pub id: u64,
    pub hash: String,
    pub previous_hash: String,
    pub timestamp: i64,
    pub data: String,
    pub nonce: u64,
}
That’s it — not much behind it, really. Our App struct essentially holds our application state. We won’t persist the blockchain in this example, so it will go away once we stop the application.

This state is simply a list of Blocks. We will add new blocks to the end of this list and this will actually be our blockchain data structure.

The actual logic that makes this list of blocks a chain of blocks, where each block references the previous block’s hash, will be implemented in our application logic. It would be possible to build a data structure that already supports the validation we need out of the box, but this approach seems simpler and we definitely aim for simplicity here.

A Block in our case will consist of an id, which is an index starting at 0 counting up. Then, a sha256 hash (the calculation of which we’ll go into later), the hash of the previous block, a timestamp, the data contained in the block and a nonce, which we will also cover when we talk about mining the block.

Before we get to mining, let’s first implement some of the validation functions we need to keep our state consistent and some of the very basic consensus needed, so each client knows which blockchain is the correct one, in case there are multiple conflicting ones.

We start by implementing our App struct:

impl App {
    fn new() -> Self {
        Self { blocks: vec![] }
    }

    fn genesis(&mut self) {
        let genesis_block = Block {
            id: 0,
            timestamp: Utc::now().timestamp(),
            previous_hash: String::from("genesis"),
            data: String::from("genesis!"),
            nonce: 2836,
            hash: "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43".to_string(),
        };
        self.blocks.push(genesis_block);
    }
...
}

We initialize our application with an empty chain. Later on, we’ll implement some logic that asks other nodes on startup for their chain and, if it’s longer than ours, uses theirs. This is our simplistic consensus criteria.

The genesis method creates the first, hard-coded block in our blockchain. This is a “special” block in that it doesn’t really adhere to the same rules as the rest of the blocks. For example, it doesn’t have a valid previous_hash, since there simply was no block before it.

We need this to “bootstrap” our node — or, really, the whole network as the first node starts. The chain has to start somewhere, and this is it.



Blocks, blocks, blocks

Next, let’s add some functionality enabling us to add new blocks to the chain.

impl App {
...
    fn try_add_block(&mut self, block: Block) {
        let latest_block = self.blocks.last().expect("there is at least one block");
        if self.is_block_valid(&block, latest_block) {
            self.blocks.push(block);
        } else {
            error!("could not add block - invalid");
        }
    }
...
}

Here, we fetch the last block in the chain — our previous block — and then validate whether the block we’d like to add is actually valid. If not, we simply log an error.

In our simple application, we won’t implement any real error handling. As you’ll see later, if we run into trouble with race-conditions between nodes and have an invalid state, our node is basically broken.

I will mention some possible solutions to these problems, but we won’t implement them here; we have quite a bit of ground to cover even without having to worry about these annoying real-world issues.

Let’s look at is_block_valid next, a core piece of our logic.

const DIFFICULTY_PREFIX: &str = "00";

fn hash_to_binary_representation(hash: &[u8]) -> String {
    let mut res: String = String::default();
    for c in hash {
        res.push_str(&format!("{:b}", c));
    }
    res
}

impl App {
...
    fn is_block_valid(&self, block: &Block, previous_block: &Block) -> bool {
        if block.previous_hash != previous_block.hash {
            warn!("block with id: {} has wrong previous hash", block.id);
            return false;
        } else if !hash_to_binary_representation(
            &hex::decode(&block.hash).expect("can decode from hex"),
        )
        .starts_with(DIFFICULTY_PREFIX)
        {
            warn!("block with id: {} has invalid difficulty", block.id);
            return false;
        } else if block.id != previous_block.id + 1 {
            warn!(
                "block with id: {} is not the next block after the latest: {}",
                block.id, previous_block.id
            );
            return false;
        } else if hex::encode(calculate_hash(
            block.id,
            block.timestamp,
            &block.previous_hash,
            &block.data,
            block.nonce,
        )) != block.hash
        {
            warn!("block with id: {} has invalid hash", block.id);
            return false;
        }
        true
    }
...
}

We first define a constant DIFFICULTY_PREFIX. This is the basis for our very simplistic mining scheme. Essentially, when mining a block, the person mining has to hash the data for the block (with SHA256, in our case) and find a hash, which, in binary, starts with 00 (two zeros). This also denotes our “difficulty” on the network.

As you can imagine, the time to find a suitable hash increases quite a bit if we want three, four, five, or even 20 leading zeros. In a “real” blockchain system, this difficulty would be a network attribute, which is agreed upon between nodes based on a consensus algorithm and based on the network’s hash-power, so the network can guarantee to produce a new block in a certain amount of time.

We won’t deal with this here. For simplicity’s sake, we’ll just hardcode it to two leading zeros. This doesn’t take too long to compute on normal hardware, so we don’t need to worry about waiting too long when testing.

Next, we have a helper function, which is simply the binary representation of a given byte array in the form of a String. This is used to conveniently check whether a hash fits our DIFFICULTY_PREFIX condition. There are, obviously, much more elegant and faster ways to do this, but this is simple and works for our case.

Now to the logic of validating a Block. This is important because it ensures our blockchain adheres to its chain property and is hard to tamper with. The difficulty of changing something increases with every block, since you’d have to recalculate (i.e., re-mine) the rest of the chain to get a valid chain again. (This would be expensive enough to disincentivize you in a real blockchain system.)

There are a few rules of thumb you should follow:

  1. The previous_hash needs to actually match the hash of the last block in the chain
  2. The hash needs to start with our DIFFICULTY_PREFIX (i.e., two zeros), which indicates that it was mined correctly
  3. The id needs to be the latest ID incremented by 1
  4. The hash needs to actually be correct; hashing the data of the block needs to give us the block hash (otherwise, you might as well just create a random hash starting with 001)

If we think about this as a distributed system, you might notice that it’s possible to run into trouble here. What if two nodes mine a block at the same time based on block ID 5? They would both create block ID 6 with the previous block pointing to block ID 5.

So then we’d be sent both blocks. We would validate them and add the first one coming in, but the second one would be thrown out during validation since we already have a block with ID 6.

This is an inherent problem in a system such as this and the reason there needs to be a consensus algorithm between nodes to decide which blocks (i.e., which chain) to agree on and use.

Optimally, if the block you mined isn’t added to the agreed-upon chain, you’ll have to mine it again and hope it works better the next time. In our simple case here, this retry mechanism won’t be implemented; if such a race happens, that node is essentially out of the game.

There are more sophisticated approaches to fix this in the blockchain space, of course. For example, if we were to send our data as “transactions” to other nodes and nodes would mine blocks with a set of transactions, this would be somewhat mitigated. But then everyone would mine all the time and the fastest one wins. So, as you can see, this would generate additional, but less severe, problems we’d have to fix.

Anyway, our simple approach will work for our local test network.



Which chain to use?

Now that we can validate a block, let’s implement logic for validating a whole chain:

impl App {
...
    fn is_chain_valid(&self, chain: &[Block]) -> bool {
        for i in 0..chain.len() {
            if i == 0 {
                continue;
            }
            let first = chain.get(i - 1).expect("has to exist");
            let second = chain.get(i).expect("has to exist");
            if !self.is_block_valid(second, first) {
                return false;
            }
        }
        true
    }
...
}
...

Ignoring the genesis block, we basically just go through all the blocks and validate them. If one block fails the validation, we fail the whole chain.

There’s one more method left in App that will help us choose which chain to use:

impl App {
...
    // We always choose the longest valid chain
    fn choose_chain(&mut self, local: Vec<Block>, remote: Vec<Block>) -> Vec<Block> {
        let is_local_valid = self.is_chain_valid(&local);
        let is_remote_valid = self.is_chain_valid(&remote);

        if is_local_valid && is_remote_valid {
            if local.len() >= remote.len() {
                local
            } else {
                remote
            }
        } else if is_remote_valid && !is_local_valid {
            remote
        } else if !is_remote_valid && is_local_valid {
            local
        } else {
            panic!("local and remote chains are both invalid");
        }
    }
...
}

This happens if we ask another node for its chain to determine whether it’s “better” (according to our consensus algorithm) than our local one.

Our criterion is simply the length of the chain. In real systems, there are usually more factors to consider, such as the chain’s accumulated difficulty, among many other possibilities. For the purpose of this exercise, if a (valid) chain is longer than the other, we take that one.

We validate both our local and the remote chain and take the longer one. We’ll also be able to use this functionality during startup, when we ask other nodes for their chain. Since ours only includes a genesis block at that point, we’ll immediately get up to speed with the “agreed-on” chain.



Mining

To finish our blockchain-related logic, let’s implement our basic mining scheme.

impl Block {
    pub fn new(id: u64, previous_hash: String, data: String) -> Self {
        let now = Utc::now();
        let (nonce, hash) = mine_block(id, now.timestamp(), &previous_hash, &data);
        Self {
            id,
            hash,
            timestamp: now.timestamp(),
            previous_hash,
            data,
            nonce,
        }
    }
}

When a new block is created, we call mine_block, which will return a nonce and a hash. Then we can create the block with its timestamp, the given data, ID, previous hash, and the new hash and nonce.

We talked about all of the above fields, except for the nonce. To explain what this is, let’s look at the mine_block function:

fn mine_block(id: u64, timestamp: i64, previous_hash: &str, data: &str) -> (u64, String) {
    info!("mining block...");
    let mut nonce = 0;

    loop {
        if nonce % 100000 == 0 {
            info!("nonce: {}", nonce);
        }
        let hash = calculate_hash(id, timestamp, previous_hash, data, nonce);
        let binary_hash = hash_to_binary_representation(&hash);
        if binary_hash.starts_with(DIFFICULTY_PREFIX) {
            info!(
                "mined! nonce: {}, hash: {}, binary hash: {}",
                nonce,
                hex::encode(&hash),
                binary_hash
            );
            return (nonce, hex::encode(hash));
        }
        nonce += 1;
    }
}

After announcing that we’re about to mine a block, we set the nonce to 0.

Then, we start an endless loop, which increments the nonce in each step. Inside the loop, besides logging every 100,000th iteration to have a rough progress indicator, we calculate a hash over the data of the block using calculate_hash, which we’ll check out next.

Then, we use our hash_to_binary_representation helper and check whether the calculated hash adheres to our difficulty criteria of starting with two zeros.

If so, we log it and return the nonce at which this happened, together with the (hex-encoded) hash. Otherwise, we increment the nonce and try again.

Essentially, we’re desperately trying to find a piece of data — in this case, the nonce and a number, which, together with our block data hashed using SHA256, will give us a hash starting with two zeros.

We need to record this nonce in our block so other nodes can verify our hash, since the nonce is hashed together with the block data. For example, if it took us 52,342 iterations to calculate a fitting hash (starting with two zeros), the nonce would be 52,341 (one less, since it starts at 0).

Let’s look at the utility for actually creating the SHA256 hash as well.

fn calculate_hash(id: u64, timestamp: i64, previous_hash: &str, data: &str, nonce: u64) -> Vec<u8> {
    let data = serde_json::json!({
        "id": id,
        "previous_hash": previous_hash,
        "data": data,
        "timestamp": timestamp,
        "nonce": nonce
    });
    let mut hasher = Sha256::new();
    hasher.update(data.to_string().as_bytes());
    hasher.finalize().as_slice().to_owned()
}

This one is rather straightforward. We create a JSON representation of our block data using the current nonce and put it through sha2’s SHA256 hasher, returning a Vec<u8>.

That’s essentially all of our blockchain logic implemented. We have a blockchain data structure: a list of blocks. We have blocks, which point to the previous block. Theseare required to have an increasing ID number and a hash that adheres to our rules of mining.

If we ask to get new blocks from other nodes, we validate them and, if they’re OK, add them to the chain. If we get a full blockchain from another node, we also validate it and, if it’s longer than ours (i.e., has more blocks in it), we replace our own chain with it.

As you can imagine, since every node implements this exact logic, blocks and the agreed-on chains can propagate through the network quickly, and the network converges to the same state (within the error-handling limitations of our simple case).



Peer-to-peer basics

Next, we’ll implement the P2P-based network stack.

Start by creating a p2p.rs file, which will hold most of the peer-to-peer logic we’ll use in our application.

There, again, we define some basic data structures and constants we’ll need:

pub static KEYS: Lazy<identity::Keypair> = Lazy::new(identity::Keypair::generate_ed25519);
pub static PEER_ID: Lazy<PeerId> = Lazy::new(|| PeerId::from(KEYS.public()));
pub static CHAIN_TOPIC: Lazy<Topic> = Lazy::new(|| Topic::new("chains"));
pub static BLOCK_TOPIC: Lazy<Topic> = Lazy::new(|| Topic::new("blocks"));

#[derive(Debug, Serialize, Deserialize)]
pub struct ChainResponse {
    pub blocks: Vec<Block>,
    pub receiver: String,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct LocalChainRequest {
    pub from_peer_id: String,
}

pub enum EventType {
    LocalChainResponse(ChainResponse),
    Input(String),
    Init,
}

#[derive(NetworkBehaviour)]
pub struct AppBehaviour {
    pub floodsub: Floodsub,
    pub mdns: Mdns,
    #[behaviour(ignore)]
    pub response_sender: mpsc::UnboundedSender<ChainResponse>,
    #[behaviour(ignore)]
    pub init_sender: mpsc::UnboundedSender<bool>,
    #[behaviour(ignore)]
    pub app: App,
}

Starting from the top, we define a key pair and a derived peer ID. Those are simply libp2p’s intrinsics for identifying a client on the network.

Then, we define two so-called topics: chains and blocks. We’ll use the FloodSub protocol, a simple publish/subscribe protocol, for communication between the nodes.

This has the advantage that it’s very simple to set up and use, but the disadvantage that we need to broadcast every piece of information. So even if we just want to respond to one client’s request for our chain, that client sends the request to all nodes it’s connected to on the network, and we also send our response to all of them.

This is no problem in terms of correctness, but in terms of efficiency, it’s obviously horrendous. This could be handled by a simple point-to-point request/response model, which is something libp2p supports, but this would simply add even more complexity to this already complex example. If you’re interested, you can check out the libp2p docs.

We could also use the more efficient GossipSub instead of FloodSub. But, again, it’s not as convenient to set up and we’re really not particularly interested in performance at this point. The interface is very similar. Again, if you’re interested in playing around with this, check out the official docs.

Anyway, the topics are basically “channels” to subscribe to. We can subscribe to “chains” and use them to send our local blockchain to other nodes and to receive theirs. The same is true for “blocks”, which we’ll use to broadcast and receive new blocks.

Next up, we have the concept of a ChainResponse holding a list of blocks and a receiver. This is the struct we expect when someone sends us their local blockchain, and the one we use to send them our local chain.

The LocalChainRequest is what triggers this interaction. If we send a LocalChainRequest with the peer_id of another node in the system, it triggers that node to send us its chain back, as we’ll see later on.

To handle incoming messages, lazy initialization, and keyboard input from the client’s user, we define the EventType enum, which helps us send events across the application to keep our application state in sync with incoming and outgoing network traffic.

Finally, the core of the P2P functionality is our AppBehaviour, which implements NetworkBehaviour, libp2p’s concept for implementing a decentralized network stack.

We won’t go into the nitty-gritty here, but my comprehensive libp2p tutorial goes into more detail on this.

The AppBehaviour holds our FloodSub instance for pub/sub communication and an Mdns instance, which will enable us to automatically find other nodes on our local network (but not outside of it).

We also add our blockchain App to this behaviour, as well as channels for sending events for both initialization and request/response communication between parts of the app. We’ll see this in action later on.

Initializing the AppBehaviour is also rather straightforward:

impl AppBehaviour {
    pub async fn new(
        app: App,
        response_sender: mpsc::UnboundedSender<ChainResponse>,
        init_sender: mpsc::UnboundedSender<bool>,
    ) -> Self {
        let mut behaviour = Self {
            app,
            floodsub: Floodsub::new(*PEER_ID),
            mdns: Mdns::new(Default::default())
                .await
                .expect("can create mdns"),
            response_sender,
            init_sender,
        };
        behaviour.floodsub.subscribe(CHAIN_TOPIC.clone());
        behaviour.floodsub.subscribe(BLOCK_TOPIC.clone());

        behaviour
    }
}


Handling incoming messages

First, we implement the handlers for data coming in from other nodes.

We’ll start with the Mdns events since they’re basically boilerplate:

impl NetworkBehaviourEventProcess<MdnsEvent> for AppBehaviour {
    fn inject_event(&mut self, event: MdnsEvent) {
        match event {
            MdnsEvent::Discovered(discovered_list) => {
                for (peer, _addr) in discovered_list {
                    self.floodsub.add_node_to_partial_view(peer);
                }
            }
            MdnsEvent::Expired(expired_list) => {
                for (peer, _addr) in expired_list {
                    if !self.mdns.has_node(&peer) {
                        self.floodsub.remove_node_from_partial_view(&peer);
                    }
                }
            }
        }
    }
}

If a new node is discovered, we add it to our FloodSub list of nodes so we can communicate. Once it expires, we remove it again.

More interesting is the implementation of the NetworkBehaviour for our FloodSub communication protocol.

// incoming event handler
impl NetworkBehaviourEventProcess<FloodsubEvent> for AppBehaviour {
    fn inject_event(&mut self, event: FloodsubEvent) {
        if let FloodsubEvent::Message(msg) = event {
            if let Ok(resp) = serde_json::from_slice::<ChainResponse>(&msg.data) {
                if resp.receiver == PEER_ID.to_string() {
                    info!("Response from {}:", msg.source);
                    resp.blocks.iter().for_each(|r| info!("{:?}", r));

                    self.app.blocks = self
                        .app
                        .choose_chain(self.app.blocks.clone(), resp.blocks);
                }
            } else if let Ok(resp) = serde_json::from_slice::<LocalChainRequest>(&msg.data) {
                info!("sending local chain to {}", msg.source.to_string());
                let peer_id = resp.from_peer_id;
                if PEER_ID.to_string() == peer_id {
                    if let Err(e) = self.response_sender.send(ChainResponse {
                        blocks: self.app.blocks.clone(),
                        receiver: msg.source.to_string(),
                    }) {
                        error!("error sending response via channel, {}", e);
                    }
                }
            } else if let Ok(block) = serde_json::from_slice::<Block>(&msg.data) {
                info!("received new block from {}", msg.source.to_string());
                self.app.try_add_block(block);
            }
        }
    }
}

For incoming events, which are FloodsubEvent::Message, we check whether the payload fits any of our expected data structures.

If it’s a ChainResponse, it means we got sent a local blockchain by another node.

We check whether we’re actually the receiver of said piece of data and, if so, log the incoming blockchain and attempt to execute our consensus. If it’s valid and longer than our chain, we replace our chain with it. Otherwise, we keep our own chain.

If the incoming data is a LocalChainRequest, we check whether we’re the ones they want the chain from, checking the from_peer_id. If so, we simply send them a JSON version of our local blockchain. The actual sending part is in another part of the code, but for now, we simply send it through our event channel for responses.

Finally, if it’s a Block that’s incoming, that means someone else mined a block and wants us to add it to our local chain. We check whether the block is valid and, if it is, add it.



Putting it all together

Great! Now let’s wire this all together and add some commands for users to interact with the application.

Back in main.rs, it’s time to actually implement the main function.

We start with the setup:

#[tokio::main]
async fn main() {
    pretty_env_logger::init();

    info!("Peer Id: {}", p2p::PEER_ID.clone());
    let (response_sender, mut response_rcv) = mpsc::unbounded_channel();
    let (init_sender, mut init_rcv) = mpsc::unbounded_channel();

    let auth_keys = Keypair::<X25519Spec>::new()
        .into_authentic(&p2p::KEYS)
        .expect("can create auth keys");

    let transp = TokioTcpConfig::new()
        .upgrade(upgrade::Version::V1)
        .authenticate(NoiseConfig::xx(auth_keys).into_authenticated())
        .multiplex(mplex::MplexConfig::new())
        .boxed();

    let behaviour = p2p::AppBehaviour::new(App::new(), response_sender, init_sender.clone()).await;

    let mut swarm = SwarmBuilder::new(transp, behaviour, *p2p::PEER_ID)
        .executor(Box::new(|fut| {
            spawn(fut);
        }))
        .build();

    let mut stdin = BufReader::new(stdin()).lines();

    Swarm::listen_on(
        &mut swarm,
        "/ip4/0.0.0.0/tcp/0"
            .parse()
            .expect("can get a local socket"),
    )
    .expect("swarm can be started");

    spawn(async move {
        sleep(Duration::from_secs(1)).await;
        info!("sending init event");
        init_sender.send(true).expect("can send init event");
    });

That’s a whole lot of code, but it basically just sets up things we already talked about. We initialize logging and our two event channels for initialization and responses.

Then, we initialize our key pair, the libp2p transport, behavior, and the libp2p Swarm, which is the entity that runs our network stack.

We also initialize a buffered reader on stdin so we can read incoming commands from the user and start our Swarm.

Finally, we spawn an asynchronous coroutine, which waits a second and then sends an initialization trigger on the init channel.

This is the signal we’ll use after starting a node to wait for a bit until the node is up and connected. We then ask another node for their current blockchain to get us up to speed.

The rest of main is the interesting part — the part where we handle keyboard events from the user, incoming data, and outgoing data.

    loop {
        let evt = {
            select! {
                line = stdin.next_line() => Some(p2p::EventType::Input(line.expect("can get line").expect("can read line from stdin"))),
                response = response_rcv.recv() => {
                    Some(p2p::EventType::LocalChainResponse(response.expect("response exists")))
                },
                _init = init_rcv.recv() => {
                    Some(p2p::EventType::Init)
                },
                event = swarm.select_next_some() => {
                    info!("Unhandled Swarm Event: {:?}", event);
                    None
                },
            }
        };

        if let Some(event) = evt {
            match event {
                p2p::EventType::Init => {
                    let peers = p2p::get_list_peers(&swarm);
                    swarm.behaviour_mut().app.genesis();

                    info!("connected nodes: {}", peers.len());
                    if !peers.is_empty() {
                        let req = p2p::LocalChainRequest {
                            from_peer_id: peers
                                .iter()
                                .last()
                                .expect("at least one peer")
                                .to_string(),
                        };

                        let json = serde_json::to_string(&req).expect("can jsonify request");
                        swarm
                            .behaviour_mut()
                            .floodsub
                            .publish(p2p::CHAIN_TOPIC.clone(), json.as_bytes());
                    }
                }
                p2p::EventType::LocalChainResponse(resp) => {
                    let json = serde_json::to_string(&resp).expect("can jsonify response");
                    swarm
                        .behaviour_mut()
                        .floodsub
                        .publish(p2p::CHAIN_TOPIC.clone(), json.as_bytes());
                }
                p2p::EventType::Input(line) => match line.as_str() {
                    "ls p" => p2p::handle_print_peers(&swarm),
                    cmd if cmd.starts_with("ls c") => p2p::handle_print_chain(&swarm),
                    cmd if cmd.starts_with("create b") => p2p::handle_create_block(cmd, &mut swarm),
                    _ => error!("unknown command"),
                },
            }
        }
    }

We start an endless loop and use Tokio’s select! macro to race multiple async functions.

This means whichever one of these finishes first will get handled first and then we start anew.

The first event emitter is our buffered reader, which will give us input lines from the user. If we get one, we create an EventType::Input with the line.

Then, we listen on the response channel and the init channel, creating the corresponding events. Finally, if events come in on the swarm itself, it means they’re events that are handled neither by our Mdns behavior nor by our FloodSub behavior, and we just log them. They’re mostly noise, such as connection/disconnection events in our case, but helpful for debugging.

With the corresponding events created (or no event created), we go about handling them.

For our Init event, we call genesis() on our app, creating our genesis block. If we’re connected to nodes, we trigger a LocalChainRequest to the last one in the list.
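genesis() is defined on App earlier in the tutorial; judging from the test output later in this article (fixed hash and nonce, varying timestamp), it pushes a hardcoded genesis block roughly like this sketch:

impl App {
...
    fn genesis(&mut self) {
        // The first block is hardcoded rather than mined; every chain
        // starts from this same "genesis" block.
        let genesis_block = Block {
            id: 0,
            timestamp: Utc::now().timestamp(),
            previous_hash: String::from("genesis"),
            data: String::from("genesis!"),
            nonce: 2836,
            hash: "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43"
                .to_string(),
        };
        self.blocks.push(genesis_block);
    }
...
}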

Obviously, here it would make sense to ask multiple nodes, and maybe multiple times, and select the best (i.e., longest) chain of the responses we get. But for simplicity’s sake, we just ask one and accept whatever they send us.

Then, if we get a LocalChainResponse event, that means something was sent on the response channel. If you remember above, that happened in our FloodSub behavior when we sent back our local blockchain to a requesting node. Here, we actually send the incoming JSON to the correct FloodSub topic, so it’s broadcast to the network.

Finally, for user input, we have three commands:

  • ls p lists all peers
  • ls c prints the local blockchain
  • create b $data creates a new block with $data as its string content

Each command calls one of these helper functions:

pub fn get_list_peers(swarm: &Swarm<AppBehaviour>) -> Vec<String> {
    info!("Discovered Peers:");
    let nodes = swarm.behaviour().mdns.discovered_nodes();
    let mut unique_peers = HashSet::new();
    for peer in nodes {
        unique_peers.insert(peer);
    }
    unique_peers.iter().map(|p| p.to_string()).collect()
}

pub fn handle_print_peers(swarm: &Swarm<AppBehaviour>) {
    let peers = get_list_peers(swarm);
    peers.iter().for_each(|p| info!("{}", p));
}

pub fn handle_print_chain(swarm: &Swarm<AppBehaviour>) {
    info!("Local Blockchain:");
    let pretty_json =
        serde_json::to_string_pretty(&swarm.behaviour().app.blocks).expect("can jsonify blocks");
    info!("{}", pretty_json);
}

pub fn handle_create_block(cmd: &str, swarm: &mut Swarm<AppBehaviour>) {
    if let Some(data) = cmd.strip_prefix("create b") {
        let behaviour = swarm.behaviour_mut();
        let latest_block = behaviour
            .app
            .blocks
            .last()
            .expect("there is at least one block");
        let block = Block::new(
            latest_block.id + 1,
            latest_block.hash.clone(),
            data.to_owned(),
        );
        let json = serde_json::to_string(&block).expect("can jsonify request");
        behaviour.app.blocks.push(block);
        info!("broadcasting new block");
        behaviour
            .floodsub
            .publish(BLOCK_TOPIC.clone(), json.as_bytes());
    }
}

Listing peers and printing the blockchain are rather straightforward. Creating a block is more interesting.

In that case, we use Block::new to create (and mine) a new block. Once that happens, we JSONify it and broadcast it to the network so others may add it to their chain.

This is where we could put retry logic. For example, we could add the block to a queue and check whether, after a while, it propagated to the widely agreed-upon blockchain and, if not, fetch a fresh copy of the agreed-on chain and mine it again to get it on there (see the sketch below). As mentioned above, this design certainly won’t scale to many nodes mining their blocks all the time, but that’s OK for the purpose of this tutorial.
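To make the idea concrete, here’s a rough, hypothetical sketch of such a retry queue. None of this exists in the tutorial’s code; the names and the timeout are made up for illustration:

use std::time::{Duration, Instant};

// Hypothetical bookkeeping for data we broadcast but haven't seen confirmed.
struct PendingData {
    data: String,
    sent_at: Instant,
}

fn retry_unconfirmed(app: &mut App, pending: &mut Vec<PendingData>) {
    let timeout = Duration::from_secs(30); // arbitrary choice
    let mut still_pending = Vec::new();
    for p in pending.drain(..) {
        // Confirmed: our data made it onto the agreed-upon chain
        if app.blocks.iter().any(|b| b.data == p.data) {
            continue;
        }
        // Not confirmed yet, but still within the waiting window
        if p.sent_at.elapsed() < timeout {
            still_pending.push(p);
            continue;
        }
        // Timed out: re-mine on top of the current tip and track it again
        let latest = app.blocks.last().expect("there is at least one block");
        let block = Block::new(latest.id + 1, latest.hash.clone(), p.data.clone());
        app.blocks.push(block);
        // (we'd also re-broadcast the block here, as in handle_create_block)
        still_pending.push(PendingData {
            data: p.data,
            sent_at: Instant::now(),
        });
    }
    *pending = still_pending;
}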

Let’s start it and see if it works!



Testing our Rust blockchain

We can start the application using RUST_LOG=info cargo run. It’s best to actually start multiple instances of it in different terminal windows.

For example, we can start two nodes:

INFO  rust_blockchain_example > Peer Id: 12D3KooWJWbGzpdakrDroXuCKPRBqmDW8wYc1U3WzWEydVr2qZNv

And:

INFO  rust_blockchain_example > Peer Id: 12D3KooWSXGZJJEnh3tndGEVm6ACQ5pdaPKL34ktmCsUqkqSVTWX

Using ls p in the second app shows us the connection to the first one:

INFO  rust_blockchain_example::p2p > Discovered Peers:
 INFO  rust_blockchain_example::p2p > 12D3KooWJWbGzpdakrDroXuCKPRBqmDW8wYc1U3WzWEydVr2qZNv

Then, we can use ls c to print the genesis block:

INFO  rust_blockchain_example::p2p > Local Blockchain:
 INFO  rust_blockchain_example::p2p > [
  {
    "id": 0,
    "hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "previous_hash": "genesis",
    "timestamp": 1636664658,
    "data": "genesis!",
    "nonce": 2836
  }
]

So far, so good. Let’s create a block:

create b hello
 INFO  rust_blockchain_example      > mining block...
 INFO  rust_blockchain_example      > nonce: 0
 INFO  rust_blockchain_example      > mined! nonce: 62235, hash: 00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922, binary hash: 0010001100111101101000110110101001111110011111000101010101000101111110101010110110010011111110111000010100001011110001111000000110110111101100010111111100001011011110001111110100011111011000101111111001111110101001100010
 INFO  rust_blockchain_example::p2p > broadcasting new block

On the first node, we see this:

INFO  rust_blockchain_example::p2p > received new block from 12D3KooWSXGZJJEnh3tndGEVm6ACQ5pdaPKL34ktmCsUqkqSVTWX

And calling ls c:

INFO  rust_blockchain_example::p2p > Local Blockchain:
 INFO  rust_blockchain_example::p2p > [
  {
    "id": 0,
    "hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "previous_hash": "genesis",
    "timestamp": 1636664655,
    "data": "genesis!",
    "nonce": 2836
  },
  {
    "id": 1,
    "hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "previous_hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "timestamp": 1636664772,
    "data": " hello",
    "nonce": 62235
  }
]

The block got added!

Let’s start a third node. It should automatically get this updated chain because it’s longer than its own (only the genesis block).

INFO  rust_blockchain_example > Peer Id: 12D3KooWSDyn83pJD4eEg9dvYffceAEcbUkioQvSPY7aCi7J598q

 INFO  rust_blockchain_example > sending init event
 INFO  rust_blockchain_example::p2p > Discovered Peers:
 INFO  rust_blockchain_example      > connected nodes: 2
 INFO  rust_blockchain_example::p2p > Response from 12D3KooWSXGZJJEnh3tndGEVm6ACQ5pdaPKL34ktmCsUqkqSVTWX:
 INFO  rust_blockchain_example::p2p > Block { id: 0, hash: "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43", previous_hash: "genesis", timestamp: 1636664658, data: "genesis!", nonce: 2836 }
 INFO  rust_blockchain_example::p2p > Block { id: 1, hash: "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922", previous_hash: "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43", timestamp: 1636664772, data: " hello", nonce: 62235 }

After sending the init event, we requested the second node’s chain and got it.

Calling ls c here shows us the same chain:

INFO  rust_blockchain_example::p2p > Local Blockchain:
 INFO  rust_blockchain_example::p2p > [
  {
    "id": 0,
    "hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "previous_hash": "genesis",
    "timestamp": 1636664658,
    "data": "genesis!",
    "nonce": 2836
  },
  {
    "id": 1,
    "hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "previous_hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "timestamp": 1636664772,
    "data": " hello",
    "nonce": 62235
  }
]

Creating a block also works:

create b alsoworks
 INFO  rust_blockchain_example      > mining block...
 INFO  rust_blockchain_example      > nonce: 0
 INFO  rust_blockchain_example      > mined! nonce: 34855, hash: 0000e0bddf4e603da675b92b88e86e25692eaaa8ad20db6ecab5940bdee1fdfd, binary hash: 001110000010111101110111111001110110000011110110100110111010110111001101011100010001110100011011101001011101001101110101010101010100010101101100000110110111101110110010101011010110010100101111011110111000011111110111111101
 INFO  rust_blockchain_example::p2p > broadcasting new block

Node 1:

 INFO  rust_blockchain_example::p2p > received new block from 12D3KooWSDyn83pJD4eEg9dvYffceAEcbUkioQvSPY7aCi7J598q

ls c
 INFO  rust_blockchain_example::p2p > Local Blockchain:
 INFO  rust_blockchain_example::p2p > [
  {
    "id": 0,
    "hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "previous_hash": "genesis",
    "timestamp": 1636664658,
    "data": "genesis!",
    "nonce": 2836
  },
  {
    "id": 1,
    "hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "previous_hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "timestamp": 1636664772,
    "data": " hello",
    "nonce": 62235
  },
  {
    "id": 2,
    "hash": "0000e0bddf4e603da675b92b88e86e25692eaaa8ad20db6ecab5940bdee1fdfd",
    "previous_hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "timestamp": 1636664920,
    "data": " alsoworks",
    "nonce": 34855
  }
]

Node 2:

 INFO  rust_blockchain_example::p2p > received new block from 12D3KooWSDyn83pJD4eEg9dvYffceAEcbUkioQvSPY7aCi7J598q
ls c
 INFO  rust_blockchain_example::p2p > Local Blockchain:
 INFO  rust_blockchain_example::p2p > [
  {
    "id": 0,
    "hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "previous_hash": "genesis",
    "timestamp": 1636664655,
    "data": "genesis!",
    "nonce": 2836
  },
  {
    "id": 1,
    "hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "previous_hash": "0000f816a87f806bb0073dcf026a64fb40c946b5abee2573702828694d5b4c43",
    "timestamp": 1636664772,
    "data": " hello",
    "nonce": 62235
  },
  {
    "id": 2,
    "hash": "0000e0bddf4e603da675b92b88e86e25692eaaa8ad20db6ecab5940bdee1fdfd",
    "previous_hash": "00008cf68da9f978aa080b7aad93fb4285e3c0dbd85fc21bc7e83e623f9fa922",
    "timestamp": 1636664920,
    "data": " alsoworks",
    "nonce": 34855
  }
]

Great — it works!

You can play around with the app and try to create race conditions (e.g., by increasing the difficulty to three zeros and creating blocks on multiple nodes at once; see the snippet below). You’ll immediately notice some of the flaws of this design, but the basics work. We have a peer-to-peer blockchain application: a real decentralized ledger with basic robustness, built entirely from scratch in Rust. Awesome!
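For example, to make such races more likely, you could bump the difficulty by requiring one more leading zero (this tweaks the DIFFICULTY_PREFIX constant sketched earlier):

// Mining now takes noticeably longer, widening the window in which
// two nodes can mine competing blocks on the same parent.
const DIFFICULTY_PREFIX: &str = "000";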

You can find the full example code at GitHub.



Conclusion

In this tutorial, we built a simple, quite limited, but working blockchain application in Rust. Our blockchain app has a very basic mining scheme, consensus, and peer-to-peer networking in just 500 lines of Rust.

Most of this simplicity is thanks to the fantastic libp2p library, which does all the heavy lifting in terms of networking. Clearly, as is always the case in software engineering tutorials, for a production-grade blockchain application, there are many, many more things to consider and get right.

However, this exercise sets the stage for the topic, explaining some of the basics and showing them off in Rust, so that we can continue this journey by looking at how we would go about building a blockchain application that could actually be used in practice with a framework such as Substrate.





Source link