Download / stream large file with :hackney in Elixir

In my project, I have a quite large CSV file for seeding my database. Putting it in the project source code increases the size of the Docker image, but that file is only used once. So I decided to upload it to S3, stream/download it to the server, and then run the seeding code. In this post I will show you how I did this.

I found some posts that stream files using the HTTPoison library, but in some cases I don’t want to add more dependencies, so I wrote my own module. I also think it’s a good way to learn new things.

What you will learn

You will learn two cool functions:

  1. Stream.resource to build a new stream
  2. :hackney.stream_body to read chunks of data from a remote file

1. Build the stream

For Stream.resource, read full document here

Basically, this function receives three functions as arguments: one to set up the stream, one to produce the stream’s data, and one to handle stream completion. Here is the example from the documentation:

    fn ->!("sample") end,
    fn file ->
      case, :line) do
        data when is_binary(data) -> {[data], file}
        _ -> {:halt, file}
      end
    end,
    fn file -> File.close(file) end
  1. The first function opens the file, and its result is passed as the argument to the second function

  2. The second function reads data line by line until the end of the file

  3. The third function closes the file handle
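As a quick sanity check, here is a self-contained version of that example you can paste into iex. It writes a throwaway file to the system temp directory (an illustrative path, not from the original post) instead of relying on an existing "sample" file:

```elixir
# Create a throwaway sample file (illustrative path only).
path = Path.join(System.tmp_dir!(), "stream_resource_sample.txt")
File.write!(path, "one\ntwo\nthree\n")

lines =
    # 1. open the file
    fn ->!(path) end,
    # 2. emit data line by line until :eof
    fn file ->
      case, :line) do
        data when is_binary(data) -> {[data], file}
        _ -> {:halt, file}
      end
    end,
    # 3. close the file handle
    fn file -> File.close(file) end
  |> Enum.to_list()

File.rm!(path)
# lines == ["one\n", "two\n", "three\n"]
```

Note that `, :line)` keeps the trailing newline on each line.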

For downloading a file, we do something similar:

  1. Open the connection
  2. Stream parts of file
  3. Close connection
def stream_url(url) do
      fn -> begin_download(url) end,
      &continue_download/1,
      &finish_download/1
end

2. Open the connection

defp begin_download(url) do
    {:ok, _status, headers, client} = :hackney.get(url)
    headers = Enum.into(headers, %{})
    total_size = headers["Content-Length"] |> String.to_integer()

    {client, total_size, 0} # 0 is the current downloaded size
end

Here we:

  • Use :hackney.get to open a connection to the server
  • Extract the content length from the headers; this is useful to verify the length later
  • Return a tuple {client, total_size, current_download_size}; this data will be used to stream the content in the next function

3. Stream chunks

defp continue_download({client, total_size, size}) do
    case :hackney.stream_body(client) do
      {:ok, data} ->
        # update downloaded size
        new_size = size + byte_size(data)
        {[data], {client, total_size, new_size}}

      :done ->
        # no more data, tell the stream to halt
        # and move on to the third function
        {:halt, {client, total_size, size}}

      {:error, reason} ->
        raise reason
    end
end

Here we use :hackney.stream_body to read data from the connection, chunk by chunk.

4. Close connection

defp finish_download({client, total_size, size}) do
    :hackney.close(client)
    Logger.debug("Complete download #{size} / #{total_size} bytes")
end

Here we simply close the connection

5. Save to the file

In the steps above, we built a stream of data; now we save it to a file:

def download(url, save_path) do
    stream_url(url)
    |> Stream.into(!(save_path))
end

Remember to invoke to actually run the stream.

6. Stream by line

In our case, we don’t want to store the file on our server because we only use it once. So we stream and process the file content on the fly. We use the csv library to decode the CSV content because it supports streams, but it only accepts a stream of lines.

So here we transform the stream of chunks into a stream of lines:

def stream_url(url, :line) do
    stream_url(url)
    |> Stream.concat([:end]) # to know when the stream ends
    |> Stream.transform("", fn
      :end, prev ->
        {[prev], ""}

      chunk, prev ->
        [last_line | lines] =
          String.split(prev <> chunk, "\n")
          |> Enum.reverse()

        {Enum.reverse(lines), last_line}
    end)
end

For the details about why we split it like this, you can read this post from
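The transform can be exercised without any HTTP at all. Here is a minimal sketch, using an in-memory list of chunks in place of the download stream, where a line is deliberately split across a chunk boundary:

```elixir
chunks = ["line1\nli", "ne2\nline3"]

lines =
  chunks
  |> Stream.concat([:end])
  |> Stream.transform("", fn
    :end, prev ->
      # flush whatever is left in the accumulator
      {[prev], ""}

    chunk, prev ->
      # keep the trailing partial line in the accumulator
      [last_line | rest] =
        String.split(prev <> chunk, "\n")
        |> Enum.reverse()

      {Enum.reverse(rest), last_line}
  end)
  |> Enum.to_list()

# lines == ["line1", "line2", "line3"]
```

The partial "li" from the first chunk is glued to "ne2" from the second, so no line is ever emitted in pieces.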


Thanks for the ideas from
and that helped me to solve my problem.

Full source code here

Thanks for reading, and your feedback is warmly welcome.

Source link

Aspiration for evision (Aspiration for 2022)

evision is quite an excellent product! It is a bridge between Elixir and OpenCV.

I developed NxEvision, a bridge between Nx and evision. I hope it will be a bridge between machine learning and computer vision in Elixir.

I also have an aspiration to apply the Pelemay technology to evision; that is, I’m going to implement optimized invocation of evision.

To be continued…

Source link

The Origin of the Programming "Paradigm" by the Combination of Enum Functions and Pipeline Operators

Interviewee: José Valim, the creator of Elixir, Chief Adoption Officer at Dashbit;
Interviewer: Susumu Yamazaki, an associate professor at Univ. of Kitakyushu, an organizer of ElixirConf JP.

Japanese Translation

Dec. 17, 2021. To: José Valim

Hi, José,

Masakazu Mori (He made a presentation at ElixirConf US 2021 [1]) and I lecture the course “Programming Theory” at the University of Kitakyushu, including the mainstream programming paradigms: imperative programming, OOP, and functional programming, relating to the history of the architecture of the computer systems. Of course, one of the course’s main topics is “Why Elixir?”. In Programming Elixir [2], Dave Thomas said, “I don’t want to hide data. I want to transform it.” I identified this consideration as a new programming paradigm instead of functional programming: The data transformation paradigm.

So, I want to interview you about the history of what you consider about designing Elixir by email, including this topic. I plan to edit and publish this interview as one of the 10th-anniversary articles of the foundation of Elixir.

[1] Masakazu Mori: Live coding a membership site in 20min by Phoenix and phx_gen_auth, ElixirConf 2021. The movie is available at
[2] Dave Thomas: Programming Elixir 1.6: Functional |> Concurrent |> Pragmatic |> Fun, 2nd edition, The Pragmatic Bookshelf, 2018.

First of all, an essential feature of Elixir, which provides the basis for data transformation, is the combination of Enum functions and pipeline operators. As you know, this brings intense pleasure to programming! In fact, it is one of the reasons that people choose Elixir. However, the first version of Elixir 10 years ago was not functional but object-oriented. Then, I guess your first ideation of Elixir may not have included such a concept of data transformation. So, my first question is about your consideration process towards this feature: When did you form the idea, and what brought it?

Note: José said that Elixir was born on May 24, 2012.

Dec. 28, 2021. From: José Valim

When did you form the idea, and what brought it?

I don’t think there was one specific moment when the idea was formed. Rather, it was the exposure to several ideas and concepts that slowly unraveled Object Orientation. One of such moments was a talk by Rich Hickey, the creator of Clojure, called Simple Made Easy. The other one was while reading a book called “Concepts, Techniques, and Models of Computer Programming”, by Peter Van Roy and Seif Haridi.

In the book, they build a programming language by introducing new concepts and their benefits one by one. When it comes to Object Orientation, they argue that Object Orientation is nothing more than syntax sugar for dispatching to a known module. For example, if you have:

car = new Car()
car.drive()

That’s the same as:

car = Car.init()

If you see Object Orientation as syntax sugar, you have to answer the question: is the syntax sugar worth it? The trouble with this syntax sugar (and Object Orientation) is that it couples state and behaviour. The state (car) can only be handled by a given entity (the class Car). This is often sold as a feature but the truth is that developers spend a lot of time trying to undo or reason about this coupling. The need for inheritance, for example, is caused directly by this coupling. However, introducing inheritance brought its own issues, leading languages to introduce concepts such as multiple inheritance (mixins), open-classes (monkey patching), etc. All with their own flaws too!

The other part of the puzzle is that functional programming has shown that the complexity of software is not in its computations and algorithms. If the system has no shared state, if the system has no side-effects, it becomes much easier for both humans and compilers to reason about your code. Therefore, by encapsulating state, objects have taught us to hide the complex parts of our systems. Not only that, we often split this state into several objects, which makes understanding and visualizing how our applications work very hard!

Finally, the industry has learned something that was known in academia for decades: the properties that are positively associated with Object Orientation, such as encapsulation, abstraction, and polymorphism, are not actually specific to objects, and can be leveraged, sometimes even with more success, in other paradigms.

So, going back to our original questions: is this syntax sugar worth it? Is Object Orientation worth it? The answer to me is a clear no; the downsides far outweigh the upsides. And, without objects, all we have is state (data) and functions (transformations), as separate entities. The Elixir programming model comes as a direct consequence of this.

Elixir also provides the pipeline operator, which can also be seen as syntax sugar. It transforms this:

Enum.frequencies(String.graphemes("Elixir"))

into this:

"Elixir" |> String.graphemes() |> Enum.frequencies()

In some sense, it can be seen as the “.” in Object Oriented languages, but without the coupling of state and behaviour. The state, which in this case starts as the “Elixir” string, can be sent to any behaviour that accepts the string type.
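For instance (an illustrative extension of the snippet above, not from the interview itself), nothing stops the same string from flowing through functions of several different modules:

```elixir
# The pipe simply passes the previous result as the first argument of the
# next call, so the string moves freely between String and Enum functions.
freqs =
  "Elixir"
  |> String.upcase()
  |> String.graphemes()
  |> Enum.frequencies()

# freqs == %{"E" => 1, "I" => 2, "L" => 1, "R" => 1, "X" => 1}
```

No module "owns" the string; each step is just a plain function that accepts that type.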

Source link

Elixir in the eyes of Node.js developer

Cover photo by Kaizen Nguyễn on Unsplash

I got into Elixir some time ago, but at that time, I was more interested in statically typed languages. I didn’t rule Elixir out; instead, I put it on the back burner. One of the signals to give Elixir a try was the talk by Saša Jurić – The Soul of Erlang and Elixir. I highly recommend watching this talk. I discovered that the BEAM VM and Elixir features could offer many benefits. So I decided to try and see how all the pieces work together in an actual application. I’d like to share some critical ecosystem points that convinced me to try.

  1. Community
    One of the first things that I noticed when I started was the community libraries. Almost all the libraries share the same structure and have their API documentation generated along with type specs. So I searched for a few kinds of libraries that I often use, like a web framework, a GraphQL implementation, and database management. I can say that all of them look solid, and the documentation also contains a lot of guidelines, so I didn’t need to leave the page to get a good understanding of them.

  2. Phoenix framework
    Phoenix is a web framework that makes building web servers easy and fast. A great thing is that Phoenix has a built-in code generator. This generator is run via a mix task, and you can generate almost all the parts needed for creating an endpoint, context or database schema. Additionally, the documentation and testing described in the next point make you much more comfortable from the start.

  3. Testing and documentation
    When looking back on different projects, documentation and testing are often among the forgotten things during development. With Elixir, those things are built into the language, making a considerable change for development and maintenance. You can write the documentation and examples right next to the code and, what’s more, turn these examples into quick tests. It was a nice thing that convinced me to write more tests and documentation.
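For example, a doctest keeps the example and the test in one place. This is a minimal sketch; MyApp.Math is a hypothetical module used only for illustration:

```elixir
defmodule MyApp.Math do
  @doc """
  Adds two numbers.

  ## Examples

      iex> MyApp.Math.add(1, 2)
      3
  """
  def add(a, b), do: a + b
end

# In the test file, a single `doctest` line turns the example above
# into a real ExUnit test:
#
#   defmodule MyApp.MathTest do
#     use ExUnit.Case, async: true
#     doctest MyApp.Math
#   end
```

If the documented example ever drifts from the real behaviour, the test suite fails.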

  4. GenServer

    The GenServer allows you to abstract logic around small services. For example, all these services might have a separate state and business logic encapsulated inside. The service code is executed as a lightweight BEAM process, which is fast compared to standalone microservice solutions. Therefore, you do not need any extra HTTP layer or queue to communicate within the service.
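A minimal sketch of such a service might look like this (Counter is a hypothetical example, not from the post); the state lives inside the process, and callers talk to it with plain message passing instead of an HTTP layer:

```elixir
defmodule Counter do
  use GenServer

  # Client API
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do:, :increment)

  # Server callbacks: the state (the count) is encapsulated in the process
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, count), do: {:reply, count + 1, count + 1}
end
```

Usage: `{:ok, pid} = Counter.start_link(0)` followed by `Counter.increment(pid)` returns 1, then 2, and so on.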

  5. Type system, pattern matching and language itself

    I need to say that I’m a big fan of statically typed languages. So, when I heard about Elixir for the first time, the missing type system was a big downside for me. Also, I understand that making such a dynamic language static would be a big challenge. To fill this gap, I used Dialyxir and Typespecs. The experience is slightly different, but you get some tangibility of a type system, called success typing.

    Elixir has a functional language style that fits my personality best, but everyone can feel differently. On top of this, you have a great set of language features like with statements, function guards, the pipe operator and excellent pattern matching.
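As a sketch of what Typespecs look like (Pricing is a hypothetical module for illustration), you declare the shapes of inputs and outputs next to the function, and Dialyzer then checks callers against them:

```elixir
defmodule Pricing do
  @moduledoc "Hypothetical module illustrating Typespecs checked by Dialyzer."

  # A custom type alias used in the spec below.
  @type cents :: non_neg_integer()

  @spec total([cents()], float()) :: cents()
  def total(items, tax_rate) when is_list(items) do
    subtotal = Enum.sum(items)
    round(subtotal * (1 + tax_rate))
  end
end

#[100, 250], 0.1) => 385
```

Dialyzer’s success typing won’t catch everything a static language would, but calling `"a"], 0.1)` would be flagged as a contract violation.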

  6. BEAM virtual machine
    I think it was one of the biggest deciding factors for using Elixir more heavily. The BEAM architecture, combined with the language features described above, makes it a great combo!
    The virtual machine is responsible for running your code in small, cheap and fast processes. One of the philosophies that comes from Erlang is “let it fail”. This philosophy allows writing systems that work more consistently and reliably. I could compare this to operating systems like Linux, Windows or macOS: the system is working, some programs that we installed crash from time to time, but usually your system keeps working, and all you have to do is open the program once again. Likewise, in the BEAM VM one process might crash, but the whole system keeps working as usual.

    Overall, I was surprised by how good working with Elixir was. One of the gaps is the lack of a static type system. To fill this gap, I used Credo, Dialyxir and Typespecs to analyze the codebase statically. The language features make the code quicker to write, and easier and cleaner to maintain. For example, built-in documentation and testing might turn your codebase into an environment that is a pleasure to work with. The last piece of this whole stack is that all of this runs on the BEAM VM, which is the cherry on the cake! So I need to say that the lack of a static type system is no longer a significant disadvantage with such a combo!

    This is the first post about my Elixir journey, and I plan to share more detailed knowledge in my next posts soon.

Source link

Hacking an IoT App at the Civo Hackathon, 2021


Civo organised its very first hackathon and we got a chance to work on a great project with a skilled team. Thanks to Civo for the experience. Our project, Home Smart Home, won the Best IoT Hack prize.

See the video demo!

Table of contents

  1. Meet the team
  2. About the project
  3. How we built the project
  4. Our experience with Civo
  5. What’s next for our project
  6. Repo links

Meet the team

Our team comprises three members with varied experience and expertise, ranging from full-stack web development to programming microcontrollers like the NodeMCU ESP8266 and Raspberry Pi.
Here’s a brief introduction of each of the members:

  • Atchyut is a full-stack developer with expertise in both front-end and back-end development; he was behind the ReactJS PWA UI app we developed as a part of this hackathon.
  • Kevin is a backend developer with expertise in NodeJS, Python and Elixir, which he used to build the back-end application for our app with the Phoenix web framework.
  • Hardik is a lecturer at Dayalbagh Educational Institute, Agra, with expertise in Python, ML, AI & IoT. He built the IoT backend using C++ and a NodeMCU ESP8266 microcontroller, which queries our backend API and talks to the smart devices. He’s also programmed the microcontroller to turn non-smart devices into smart devices by simply attaching it to a switchboard.

While the three of us have strong expertise in what we do, we’re all DevOps and cloud enthusiasts, and that is the reason we came together and built this project as a part of the Civo hackathon.

About the project

Home Smart Home (H2S) allows its users to register their smart devices and control them remotely. As an MVP, users can turn their devices on and off right from their internet-enabled devices; however, we plan to incrementally update this and keep adding new features.

Our initial goal was to build a simple platform that allows a layperson to get into the world of IoT and smart devices. The innovation behind our app is that users can even turn their non-smart devices into smart devices with little to no programming knowledge, using highly available hardware and some utilities that Hardik is currently developing.

As for the app itself, we initially planned on going with a React Native mobile app, but ended up going with a ReactJS PWA since we wanted the users to not just be limited to using a smartphone to automate their homes. Now, they can pretty much use any smartphone, tablet or computer in order to automate their homes.

How we built the project

Our app consists of a front-end PWA, a back-end API layer and the actual IoT Component. Here’s a breakdown for each of these components:

Front-end PWA App – This app was built using ReactJS, Tailwind CSS, React hooks for state management and CRA’s PWA Capabilities

Back-end API Layer – Our back-end currently consists of the API layer that both the front-end app and the IoT component use in order to enable, disable, turn on, turn off, and register smart devices in our system. It is built on the Phoenix framework using the Elixir programming language and a PostgreSQL DB. This is the component that we’ve deployed on a Civo Compute instance.

IoT Component – Our IoT component contains various utilities that are built using C++ and run on top of a NodeMCU ESP8266 microcontroller. The utilities are subscribed to our back-end API on a pub-sub model: whenever there are changes in the DB, they query the API and communicate with the smart devices.

Our Experience with Civo

Our experience with the Civo platform has been great. While we were all new to DevOps, the guides on the Civo website helped us deploy our backend micro-service on Civo. It’s been a great experience, and we definitely plan on using the platform as we scale up. It is remarkable how fast the Compute instance and the Kubernetes cluster were created.

What’s next for our project

We plan to make our PWA available to as many users as possible. We’ve also started enhancing our platform to be able to have more features such as being able to control various aspects of a smart device than only being able to turn it on/off. With this, we’re also developing kits that users can use in order to turn their non-smart devices to smart-devices. We believe there’s a lot of hidden potential in what we build and that we’re onto something really good.

For now, the goal is to enable as many users as possible to use our platform for free. While the platform itself will always remain free, we plan to add a monetization model around the kits that we produce to enable non-smart devices to behave like smart devices, and that would be our fundamental business model.

However, at the place we’re currently at, there are so many directions we can go in, and we’re really excited to build on top of this.

Links to the repos

Source link

Steering your submarine with Elixir, Leex and Yecc (AoC'21, day 2)

After solving an Advent of Code challenge by treating the input as a program last year, I wanted more this year. The opportunity came today and I decided to take it. Instead of Parslet and Ruby, though, I decided to use Elixir/Erlang tooling to get the job done.

The problem

In Day 2 this year, you need to pilot your submarine. This is done with a series of commands, such as this:

forward 5
down 5
forward 8
up 3
down 8
forward 2

We have a command, followed by a number – one pair per line. There are three commands:

  • forward moves the submarine horizontally by number
  • down moves it down, increasing the depth we’re at
  • up does exactly the opposite of down

A series of commands – that’s a program! To execute it, we basically need 3 things: a lexer, a parser and an interpreter. Fortunately, Elixir gives us tooling for the first two for free, and the last one is easy. Let’s do it.



We are going to start by creating a new mix project with mix new submarine_lang. Our first step will be to create a lexer, which will tokenize the input. This is what I put in src/lexer.xrl:

Definitions.

FORWARD       = (forward)
UP            = (up)
DOWN          = (down)
WHITESPACE    = [\s\t\n]
DIGITS        = [0-9]+

Rules.

{WHITESPACE} : skip_token.
{FORWARD}    : {token, {move, forward}}.
{UP}         : {token, {move, up}}.
{DOWN}       : {token, {move, down}}.
{DIGITS}     : {token, {number, list_to_integer(TokenChars)}}.

Erlang code.

This lexer is not perfect. It could be more strict, for example not allowing two commands on the same line, but it serves its purpose for this task while remaining relatively simple. We basically have three commands, a number (DIGITS) and whitespace.

Let’s take our lexer for a test drive then, with iex -S mix. The important thing to remember is that Leex only takes Erlang strings as input, so you either have to use single-quoted strings or the to_charlist function from Kernel.

Here are some examples:

Interactive Elixir (1.12.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> :lexer.string('forward 5')
{:ok, [move: :forward, number: 5], 1}
iex(2)> :lexer.string('forward 5\ndown 1\ndown 100')
{:ok,
 [move: :forward, number: 5, move: :down, number: 1, move: :down, number: 100],
 3}
iex(3)> :lexer.string('backward 6')
{:error, {1, :lexer, {:illegal, 'b'}}, 1}

Note that since the result is a list of tuples with two elements, iex displays it as a keyword list (because that’s what a keyword list is).
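To see why, compare the two notations directly; they are the same term:

```elixir
# A keyword list is literally a list of {atom, value} 2-tuples,
# so both spellings below build identical data.
same = [{:move, :forward}, {:number, 5}] == [move: :forward, number: 5]
# same == true
```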


Since the lexing/tokenizing part is done, we are now going to move on to the parser, which will put some basic meaning into our tokens. The parser will reside in src/parser.yrl and it is really simple:

Nonterminals command command_list.
Terminals number move.
Rootsymbol command_list.

command      -> move number : {'$1', '$2'}.
command_list -> command : ['$1'].
command_list -> command command_list : ['$1' | '$2'].

We have two terminal symbols, two non-terminals to group them, and the non-terminal command_list as the root. Let’s test it:

iex(1)> {:ok, tokens, _} = :lexer.string('forward 5\ndown 1\ndown 100')
{:ok,
 [move: :forward, number: 5, move: :down, number: 1, move: :down, number: 100],
 3}
iex(2)> :parser.parse(tokens)
{:ok,
 [
   {{:move, :forward}, {:number, 5}},
   {{:move, :down}, {:number, 1}},
   {{:move, :down}, {:number, 100}}
 ]}

Ok, nice. We have a list of tuples, where each one contains two other tuples – a move command and a number. With that, we can move on to a very basic interpreter.


We have the syntactic part done, now let’s add some semantics to it. Our interpreter is going to take a list of commands and apply them one by one, along with some representation of context or state. This is exactly what Enum.reduce does, so we are going to use it.

defmodule SubmarineLang do
  def eval_file(name) do
    input =
      |> to_charlist()

    {:ok, tokens, _} = :lexer.string(input)
    {:ok, ast} = :parser.parse(tokens)

  end

  defp eval(ast) when is_list(ast), do: Enum.reduce(ast, {0, 0}, &eval/2)
  defp eval({{:move, :forward}, {:number, x}}, {h, depth}), do: {h + x, depth}
  defp eval({{:move, :down}, {:number, x}}, {h, depth}), do: {h, depth + x}
  defp eval({{:move, :up}, {:number, x}}, {h, depth}), do: {h, depth - x}
end

And this is it. When we run the interpreter, it will go through the commands one by one and adjust the context (a tuple with the horizontal position and the depth) accordingly. The result will be such a tuple after applying all the commands. All that’s left is to multiply the first element by the second.
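The reduction can be checked against the sample program from the top of the post. Here the already-parsed form is reduced with the eval clauses inlined as an anonymous function:

```elixir
# Parsed form of: forward 5, down 5, forward 8, up 3, down 8, forward 2
ast = [
  {{:move, :forward}, {:number, 5}},
  {{:move, :down}, {:number, 5}},
  {{:move, :forward}, {:number, 8}},
  {{:move, :up}, {:number, 3}},
  {{:move, :down}, {:number, 8}},
  {{:move, :forward}, {:number, 2}}
]

{h, depth} =
  Enum.reduce(ast, {0, 0}, fn
    {{:move, :forward}, {:number, x}}, {h, d} -> {h + x, d}
    {{:move, :down}, {:number, x}}, {h, d} -> {h, d + x}
    {{:move, :up}, {:number, x}}, {h, d} -> {h, d - x}
  end)

h * depth
# => 150 (horizontal position 15 * depth 10, the expected Day 2 answer)
```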

The second part

I’m not going to go into details about the second part, but there the meaning of each command changes – it now makes a different modification to the context. Therefore you need to change the interpreter, and only the interpreter.

My complete solution is available on GitHub, if you want to take a look.

Some reading

Source link

Elixir Trickery: Cheating on Structs, And Why It Pays Off

While we can’t say cheating on anyone is okay, we’re not as absolutist when it comes to cheating on Elixir at times.

Structs are there for a reason (we’ll start from a brief overview), and that’s certainly not for us to cheat on them. But we can if we have to – and we’ll sometimes even justify that and get away with it!

Today’s article will come in handy especially for those who are interested in developing libraries for Elixir and making them usable across different dependency versions, which is always a problem when writing code intended to be pluggable into different applications. Read more…

Source link

How database transactions work in Ecto and why Elixir makes it awesome?

Today we’re going to look at how Ecto, which is Elixir’s first-choice database access library, addresses the issue of handling database transactions. We’ll briefly introduce you to the very concept of a transaction, then focus on describing the Ecto way of handling them, and explain how it feels superior to what other languages’ libraries offer us in this department. We’ll give plenty of examples corresponding to a simple app you can pull from our GitHub repository, so you can have some fun testing it out! Read more…

Source link