GitHub as an Organizational Tool

Our code lives inside GitHub today, and over the last six months we have started organizing ourselves around it.

This makes perfect sense for us: the closer the company is to our code, the easier communication becomes, along with identifying problems and opportunities.

——

Code repository
We started using GitHub the way everyone does: to store our code.

Beyond that, we forked a few gems that we had to modify to meet our needs.

Those forked gems live in public repositories, while our own code remains private.

Issues
All Dev support happens through issues, which are divided into improvements, bugs, or no-code requests.

One GitHub feature that took us to another level was issue templates. With them, anyone in the company who wants to open an issue already knows which key points they need to fill in.

The nice part is that, after a while, the whole company switched to issues. When a new person joins, for example, an onboarding issue is created.

That way everyone has visibility into each team's tasks, which increases transparency.
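To illustrate (the file name and fields below are made up, following GitHub's issue-forms format), a bug template might look like this:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (hypothetical example)
name: Bug report
description: Something in the product is not working as expected
labels: ["bug"]
body:
  - type: textarea
    attributes:
      label: What happened?
      description: What you saw, and what you expected to see instead
    validations:
      required: true
  - type: input
    attributes:
      label: Where did it happen?
      description: Page, screen, or customer account affected
```

With a file like this in the repository, GitHub shows the form automatically whenever someone clicks "New issue".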

Projects
We abandoned Trello and migrated all of our boards to Projects.

Issues gave us visibility into which tasks existed for each team. In Projects, we can see which stage of our prioritization process each issue is in.

This way we avoid meetings whose only purpose is asking where things stand, and can use meetings to focus on how we can help each other move issues forward.

Actions
We also moved our entire CI pipeline into GitHub.

That way it became integrated into our process, running both when we add a specific label and on commits.

We also generate weekly release reports to tell the company everything that shipped to the product that week.

Anyone on the team can see, for example, which deploys succeeded or broke, which tests failed, and so on.
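As a sketch of those two triggers (the label name, branch, and test command here are assumptions, not our actual pipeline):

```yaml
# .github/workflows/ci.yml (hypothetical)
name: CI
on:
  push:
    branches: [main]
  pull_request:
    types: [labeled]

jobs:
  test:
    # On label events, only run when the "run-ci" label was the one added
    if: github.event_name == 'push' || github.event.label.name == 'run-ci'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bundle install && bundle exec rspec
```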

Pull Request
The heart of a Dev's work. Every pull request has to be linked to one or more issues that it will close or advance.

When it's marked as ready for review, GitHub itself assigns it to another Dev for review, not necessarily from the same squad.

Besides improving the quality of what gets delivered, since a change has to be simple enough for anyone to understand, this guarantees that the whole team has some knowledge of every part of our product.

Often the best code decisions come from the people furthest from the problem, since they tend to question the decisions that were made more.

Github CLI
Unfortunately, GitHub's built-in insights are quite poor. We solved that by adding the necessary labels to issues and writing a bit of code to pull that information through the GitHub API.

That way, we can extract metrics such as team velocity, number of bugs, etc.

And with those, we can tell whether the organizational changes we make are having positive or negative effects.
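A minimal sketch of that kind of metrics code (the repo slug, token, and function names are hypothetical; the counting itself is a pure function, shown here with sample data):

```python
from collections import Counter
import json
import urllib.request

def count_by_label(issues):
    """Count issues per label name (an issue may carry several labels)."""
    counts = Counter()
    for issue in issues:
        for label in issue.get("labels", []):
            counts[label["name"]] += 1
    return dict(counts)

def fetch_issues(repo, token):
    """Fetch closed issues via the GitHub REST API (repo/token are placeholders)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues?state=closed&per_page=100",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With sample data instead of a live API call:
sample = [
    {"labels": [{"name": "bug"}]},
    {"labels": [{"name": "bug"}, {"name": "squad-a"}]},
    {"labels": [{"name": "improvement"}]},
]
print(count_by_label(sample))  # {'bug': 2, 'squad-a': 1, 'improvement': 1}
```

From counts like these per week, velocity and bug-rate trends fall out with simple arithmetic.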

Discussions
This feature has just left GitHub's beta phase.

It's GitHub's internal forum, and we use it to record all the discussions we have internally.

When something starts to drag on in Discord, we migrate it there.

If nothing else, it makes it easier for new people to understand the context of why things today are the way they are.

We also record summaries of books we liked, and how we solved some of our customers' more serious problems.

Wiki
This is where all our documentation lives: from documentation on how to use the product and how to answer certain customer questions, to our programming best practices.

We also use our wiki for our onboarding process. We have documented in it (and I like to believe that every person who joins improves this process) what each person is expected to know week by week, until they reach the point where they can carry out their first tasks on their own.

Future
Who knows, maybe in the future we'll adopt the code editor GitHub has just launched?

What I can say is that, with each passing day, I feel less need for any team, product, or code management tools beyond GitHub itself.



12 Products in 12 Months

It’s been a while since I’ve built anything for myself. One of the reasons I consider myself a “product-focused engineer” is my passion for building. Programming has enabled me to create something from nothing. It’s turned me into a builder.

My partner and I have been talking about doing an ambitious project for 2022. We’ve decided to build 12 products in 12 months.



Why?

There are a lot of reasons I’m excited about this project. The first one was mentioned above.

I’m a builder. Creating things excites me. It makes me feel alive. I love everything from naming products, designing logos, writing bits of code, and setting up deployments. Every bit of the process is a fun way to flex my creativity. It brings me joy.

My partner is a computer science student. She’s been developing her coding skills for some time now. Although we’ve collaborated before, she’s grown a lot in the past 6 months. Now she’s ready to take on bigger projects! For her, it’ll be a great way to practice the skills she’s learning in school and in her internships. It’ll help her flex the programming muscle. It’s a unique way for us to spend more time together.

I suck at marketing. I’m not great at design. The whole product life cycle is something I’m interested in. This project will be a safe space for me to play in areas I don’t get to do as often at work. I’ll be consuming a lot of product resources and seeing where we can take our products.



Timeline

We’re making one product every month.

Some people I’ve told have criticized the project. You can’t build a business in a month. You can’t build a good product in a month. Tell that to Pieter Levels.

Either way, that isn’t why we are doing this. We aren’t setting out to build billion dollar startups here. We’re looking to build skills, be creative, and to have fun. I do hope some of our products end up being useful for ourselves (and others!) but that would be a beneficial side effect.

At the start of each month, we’re going to pick the product we want to build. We have a giant list but we’re also open to suggestions! Once we’ve decided on the product, we’ll break down our MVP and write tickets for work to be done. There’s flexibility here. We’re basically just going to use Kanban as we build throughout the month.

At the end of each month, we’ll each reflect on what we’ve learned. Where did we fall short? What did we have the most fun doing? And of course, we’ll post on Product Hunt.



Following Along

We’ve set up a public Notion page where you can follow along. This is where we’ll do our planning. You can see each product, the scope of the MVP, and our tickets as we work through them.

We also plan to stream on Twitch every Sunday at 12 PM PST! This will just be a fun way to interact with others. If you want to cheer us on, ask questions, or shit talk us, you can do it there. My guess is we won’t have any viewers but it’s just going to be another fun way to engage with the project. It should also add some aspect of accountability to the whole thing.



Our First Project

Today we decided on our first project, Poke! Poke is a personal accountability system. You set up recurring goals on the platform and get reminded about them via text message.

For example, if you want to go to the gym three days a week, you can add a reminder in Poke to message you on your gym days. Poke will keep track of when you accomplish your goals or when you’ve slipped up.
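The scheduling logic behind a reminder like that could be sketched like this (purely hypothetical; this isn't Poke's actual code):

```python
from datetime import date

def should_remind(goal_days, today=None):
    """Return True if today is one of the goal's reminder weekdays.

    goal_days: set of weekday names, e.g. {"mon", "wed", "fri"}.
    """
    today = today or date.today()
    weekday = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"][today.weekday()]
    return weekday in goal_days

gym_days = {"mon", "wed", "fri"}
print(should_remind(gym_days, date(2022, 1, 3)))  # Jan 3, 2022 is a Monday -> True
```

A periodic job would run a check like this for every goal and fire off the text message when it returns True.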

I’ve been thinking a lot about building platforms that interact with users by sending them text messages. I use Messages on my phone more than anything else. Don’t be surprised if we build other text-based things later in the year.

This project is extremely ambitious. There’s a good chance we don’t even build 12 products. Some of our ideas could barely be described as products, others could be startups in their own right. A lot of people will doubt the value of this project or our ability to succeed. In the end, I don’t care about any of that. I’m excited to be building. I’m excited to be giving myself permission to be creative just for the heck of it.

The last few years have been taxing on all of us. This project has me looking at 2022 with hope and excitement. I hope you find something to make you feel the same. I can’t wait to get started.

I can’t wait to share the journey as we make our way through 12 products!



Deep learning application in healthcare and wellness

Deep learning has been an increasingly popular research direction, transforming performance on tasks such as object recognition, image analysis, and machine translation. In the world of savvy tech, data-driven machine health monitoring is more common owing to the widespread use of low-cost sensors.

Deep learning provides useful tools for processing and analyzing data, especially in the healthcare and medical fields. This article gives you definitions, benefits, and applications of deep learning in healthcare, such as computer vision and natural language processing. In addition, it discusses some new trends of DL-based healthcare that might boom in the upcoming years.



Deep learning in healthcare: Definition & Examples



What is deep learning in healthcare contexts?

Deep learning, a subfield of machine learning, has experienced a dramatic emergence in the past few years. The increasing uses of computational power and the availability of massive databases have driven the demand for deep learning. Healthcare and medical fields have witnessed striking advances in the ability to analyze data such as images, language, and speech. The healthcare industry stands to benefit from deep learning due to data, the increasing proliferation of medical devices, and digital records.

Deep-learning models scale to large datasets and continue improving with more data. These systems can accept multiple data types as input. For example, the most common deep learning models use supervised learning, in which datasets consist of labeled data points. Thus, many healthcare software development firms apply deep learning in healthcare to handle large datasets.



Examples of deep learning in healthcare and medical aspects



Genomics

As an example of deep learning in healthcare, genomics uses deep learning techniques to help patients undergoing treatment. Clinical professionals gain insights that might affect patient treatments in the future. Genomics is a steadily growing field, and these deep learning techniques support clinical practice in giving more accurate diagnoses.



CellScope

CellScope is one of the most effective examples of deep learning in healthcare. It helps parents monitor the health status of their kids. People can use these deep learning techniques on any device, reducing parents' visits to hospitals.



Insurance fraud

Insurance fraud is another example of deep learning in healthcare, where it is used to detect fraudulent medical insurance claims. This deep learning technique is a form of predictive analytics that flags likely fraud claims. Besides, deep learning also helps the insurance industry send discounts or offers to target patients.



Medical imaging

Medical imaging is another example of deep learning in healthcare, with healthcare software development covering CT scans, ECG, MRI, etc. This technique helps define and diagnose diseases such as heart attacks, cancer, and brain tumors. Thus, deep learning in healthcare supports doctors in analyzing patients’ diseases and giving them useful advice.



Drug discovery

Deep learning in healthcare helps discover and develop drugs. Thanks to deep learning in healthcare, we gain insights from patients’ tests and disease-related symptoms. Therefore, drug discovery is one of the key examples of deep learning in healthcare.



Alzheimer’s disease

One of the most crucial challenges people face, especially the elderly, is Alzheimer’s disease. Deep learning in healthcare helps detect Alzheimer’s disease at its initial phase, making it easier for doctors to treat.



Benefits of deep learning in the medical field

To see how deep learning can benefit people in healthcare and medicine, let’s look at healthcare treatments. People apply deep learning in healthcare to assist medical professionals and lab technicians. Here are a few benefits of deep learning in healthcare that you should know:

  • Deep learning in healthcare can be a learning tool, collecting data and recording information about patients, their symptoms, and treatments. Doctors or medical professionals can use this information as a future reference for patients’ treatments.
  • Deep learning in healthcare allows you to create a model based on available data sources when you require a risk score on admission. Furthermore, healthcare software development firms apply deep learning techniques to provide accurate and timely risk scores, which boosts confidence and helps allocate resources appropriately.
  • When using deep learning in healthcare, people can reduce costs and get improved outcomes. For example, electronic health records (EHR) and digital healthcare applications make data more accessible to trained algorithms than ever.
  • Thanks to deep learning in healthcare, health staff give more accurate and faster diagnoses during patient treatments. Doctors can identify patterns through connected custom healthcare software. For instance, deep learning in healthcare can determine whether a skin lesion is cancerous as accurately as a certified dermatologist.



Top applications of deep learning in healthcare



Computer vision: One of the largest successes of deep learning in healthcare

Computer vision emphasizes images and videos and handles tasks such as object classification, segmentation, and detection. This deep learning technique is useful in determining whether a patient’s radiograph contains malignant tumors.

Medical imaging, for example, can benefit from advances in object classification and image detection. Many studies have proved the results in complicated diagnostics spanning dermatology, radiology, and pathology. Furthermore, deep learning in healthcare could support physicians by giving second opinions and providing concerning areas in images.

Remarkably, deep learning models in healthcare have achieved physician-level accuracy at a variety of diagnostic tasks. With custom healthcare software, people can identify melanomas among moles, detect diabetic retinopathy, assess cardiovascular risk, and perform spinal analysis with magnetic resonance imaging.
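The "second opinion" workflow mentioned above often reduces to thresholding a model's confidence: confident predictions pass through, uncertain ones are referred to a physician. A stdlib-only sketch (the logits, labels, and the 0.9 threshold are invented for illustration):

```python
import math

def softmax(logits):
    """Convert raw model logits into class probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def triage(logits, labels, threshold=0.9):
    """Return the predicted label, or flag the case for physician review."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "refer-to-physician"
    return labels[best]

labels = ["benign", "malignant"]
print(triage([0.2, 4.7], labels))  # high confidence -> "malignant"
print(triage([1.0, 1.3], labels))  # low confidence  -> "refer-to-physician"
```

The threshold trades off automation against safety: raising it sends more borderline cases to a human reviewer.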



Natural language processing

Natural language processing is one of the top applications of deep learning in healthcare. This application of deep learning in the medical field focuses on analyzing text and speech to infer meaning from words. When developing custom healthcare software, software engineers use deep learning algorithms to process inputs such as language, speech, and time-series data.

Significant successes of natural language processing include machine translation, image captioning, and text generation. In the healthcare industry, sequential deep learning and language models benefit electronic health records (EHR). For example, a hospital typically generates about 150,000 pieces of data. With such a huge amount of data, the applications of deep learning in healthcare can solve many problems.



Reinforcement learning

As one of the most successful applications of deep learning in healthcare, reinforcement learning is a technique for training computational agents. Learning can happen through trial and error, demonstration, or a hybrid of the two. Healthcare software development firms use reinforcement learning so that health systems can reach better outcomes by learning from expert demonstrations. Agents can learn to predict an expert’s actions through imitation or by inferring the expert’s objectives.

Another healthcare domain that can benefit from deep reinforcement learning is robotic-assisted surgery (RAS). Deep learning can promote the robustness of RAS by using computer vision models to adapt to surgical environments and learn from physical motions.



Generalized deep learning

Beyond computer vision, natural language processing, etc., generalized deep learning is adaptable to healthcare domains where data requires customized processing. More specifically, modern genomic technologies collect a wide range of measurements, such as the levels of various proteins in a patient's blood.

Thus, deep learning in healthcare can use these data to analyze such measures, helping to provide more accurate treatments and diagnoses. Moreover, deep learning can further exploit additional modalities such as medical images and wearable device data.



Future of deep learning: A ray of hope for medical and health fields

The future of deep learning in healthcare has never been more promising. Artificial Intelligence and Machine Learning bring a precious opportunity to develop custom healthcare software that meets specific needs. Moreover, deep learning in healthcare is beneficial for supporting clinical and patient care.

High-dimensional bio-medical information remains a challenging issue in shaping the healthcare industry. Different types of data are rising in the world of medical sciences, such as images, text, and sensor data. Healthcare software development firms use deep learning algorithms to solve this problem. Deep learning in healthcare turns unstructured data into more useful representations. The latest applications of deep learning in healthcare provide efficient paradigms for building end-to-end learning models for complex data.

The use of electronic health records (EHR) helps advance clinical research and supports better decisions during patient treatments. This custom healthcare software supports learning models by synthesizing and presenting the data. Deep learning in healthcare can support and even shape decision-making processes in the clinical environment.

Deep learning is a set of computational methods that allow an algorithm to learn desired outcomes from data. The many applications of deep learning in healthcare have brought opportunities for its future, for example in the further assessment and validation of medical images.

Healthcare software development services have used deep learning to train algorithms with feasible outcomes and measures. Algorithms for detecting referable diabetic retinopathy (RDR) can be only moderate on their own, so deep-learning-trained algorithms are evaluated at two operating points, selected for high specificity and high sensitivity, which leads to better results. Thus, the future of deep learning in healthcare can be a ray of hope for the medical and healthcare industry.



AI and deep learning in healthcare: New trends of deep learning-based healthcare

Artificial Intelligence and deep learning in healthcare have boosted the healthcare and medical industry with emerging trends. One key deep learning architecture here is the convolutional network, which helps with analyzing medical images, medical classification, segmentation, and other tasks. People use deep learning in healthcare areas such as retinal imaging, digital pathology, and neuroimaging. Healthcare software development services see deep learning as an emerging trend in the field of data analysis; deep learning was named one of the 10 breakthrough technologies of 2013.

Another trend of deep learning in healthcare that you might know is its application in healthcare predictions. People usually use deep learning algorithms in custom healthcare software to improve clinical predictions. Therefore, deep learning is an essential machine learning tool in imaging, neural networks, computer vision, etc.

Health informatics is also an emerging trend of deep learning in healthcare. Thanks to the applications of deep learning in healthcare, clinical professionals can make disease predictions and provide personalized services. Knowledge has been extracted from biomedical data in the healthcare industry through many deep-learning-based applications.

The healthcare field nowadays has various strategies that benefit individuals and societies across a broad spectrum. We have experienced advancements in Machine Learning and Artificial Intelligence in numerous fields, and deep learning in healthcare is no exception: it has emerged strongly in recent years. The large number of datasets from clinical management systems drives demand for healthcare services and provides an opportunity for the application of deep learning in healthcare.



Closing

Deep learning is a branch of machine learning based on data-driven learning methodologies. People use deep learning in healthcare for tasks such as speech recognition, computer vision, and natural language processing. That has led to changes in the healthcare and medical fields and helps boost the development of the healthcare industry. Healthcare software development services use deep learning algorithms to help doctors give more accurate diagnoses.



Open Source Software Product Development, Building DDTJ – Day 1

I’m on a two-week vacation from Lightrun and I have this urge to build something new. I also have a great product idea: DDT. In the past I built many projects, both commercial and open source. I never documented the complete process. This is something I’d like to change. So in this blog I’ll go over that process from concept development to product prototype. Notice that this process is nearly identical for proprietary software too.

You, the reader, have a crucial part in this: You’re my “daily”. I don’t have project and product managers who can keep me in check, so I need you…

Procrastination is the biggest point of failure in any project. It’s where projects rise or fall. In normal company settings, we have daily meetings to combat that. You know you’ll have to stand in front of the entire team tomorrow to talk about what you did. So you’ve got to “do something” so you’ll be covered in the daily meeting.

With a single person open source project, you’re all alone. There’s no product team to report to, no product roadmap and you can’t get fired. Procrastination becomes a major temptation. That’s where blogging can help by leveraging the power of the open source community. I hope you will keep me “honest”, I need you to read and follow this so stopping would be embarrassing. But I also need you to ask questions and help keep me focused on the product strategy. It’s easy to get carried away and try to create an overly complex product. If it looks like I’m straying from MVP, please call me out on that.

I plan to write 10 blog posts until we have a working first version of the project. I’ll skip working weekends because my family would murder me if I do that… I hope I’ll be able to keep the pace and document this process well. I also hope it will be entertaining.



The Process

I already went through the first major part, which is the product idea. I think a lot has been written about idea generation so I won’t bother writing about that. As I said, the idea I’m working on is DDT (or DDTJ to be exact), I’ll get to that soon enough.

This is the plan for the next 10 days. I don’t know if I’ll be able to stick to it or exceed it, but that’s my general direction. It isn’t “really” product management, but in the early stage a hacker mentality is more useful than an organized process:

  • Initial Developer Guide and Basic Design
  • Scaffold the project and implement CI
  • Connect to server with initial server unit tests
  • Implement the first version of CLI
  • Implement mocking abstraction logic
  • Create tests for mocking well known libraries
  • Performance and Integration tests

This is a flexible guideline, not a product roadmap. As I move forward, I’m leaving room for mistakes, omissions and delays.



Target Market

There’s one thing that’s missing from this list, which I already did. You need to validate the concept of the project you’re working on: product-market fit. There’s a famous “quote” attributed to Henry Ford:

“If I had asked people what they wanted, they would have said faster horses.”

But the thing is, he didn’t say that!

It’s also a bullshit concept. People wanted cars and asked for them. Ford built what people asked for: faster, cheaper cars. So did every innovator. A successful product or successful open source project starts with a need felt by real people.

When I explain this to people I often get the response that this is “closed source” thinking related to proprietary software. That’s just wrong, open source software needs a proper product development process just like any proprietary software tool. We want people to use our tools… But we want them to spend time with our tools and time is money. We need to offer a sublime product concept regardless of our source license!



Developer Guide is First

I sorted the list in mostly chronological order. I’m a big believer in very “light” design. I really can’t stand these huge documents that end up as a legacy of all our mistakes.

You can’t debug design. There are some cases where it’s very warranted, but they are usually the exception, not the rule.

I usually start by creating a simple developer guide for the final physical products. This has the following advantages:

  • It forces us to think first about the finished product. How it will look and feel
  • We maintain it since it’s the guide, it won’t go too stale. A “living” document is important
  • It lets us focus our product strategy on specific goals. E.g. this block in the guide relates to that module in the system
  • It explains the product to other people. Testing product to market fit is important and having a clear guide is crucial

I finished the first draft of the developer guide for DDTJ today. You can check it out here.



Scaffolding & CI

I’m a believer in creating mocks for all the big pieces first: deciding on the big set pieces and fleshing them out together. The logic behind this is to see the first full stack process running as soon as possible to find any conceptual problems we might have. It also helps development teams move faster when we have more than one developer. We can find our respective sandboxes.

In our specific architecture, we have three tiers and a common library among them. We can always refactor after the MVP so we shouldn’t get too hung up on decisions. A major part of the scaffolding is the choice of technologies. Mostly, this isn’t a big deal. However, we need to limit our scope and be wary of RDD (Resume Driven Design) which is a silent but deadly project killer.

Having a CI build in place with some code quality verification is just good common sense. Especially with security, static analysis, etc. This is important even when there’s just one person working on the project…



So What’s DDT or DDTJ?

I’ll discuss the other points as we move forward, but let’s talk a bit about the project. It would be great if you can follow it here.

DDT stands for Development Driven Testing. DDTJ is the implementation of the DDT idea.

The idea is simple. When we have a bug at pretty much any company, there’s a requirement to add a test that fails for the bug. This is often harder to do than the fix itself. DDT is about fixing the bug, running your server, and then generating the unit test for the case that failed.

Unit tests are normally easy to write but the mocks aren’t trivial. That’s where DDT will try to shine.
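DDTJ targets Java and Spring Boot, but the core "record at runtime, then generate a mocked test" idea can be sketched in a few lines of toy Python (all names here are made up; this is not DDTJ's actual design):

```python
import functools

def record(recorder, fn):
    """Wrap a dependency so each call's args and return value are captured."""
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        recorder.append((fn.__name__, args, result))
        return result
    return wrapper

def generate_test(recorder, unit_name):
    """Emit the skeleton of a unit test whose mocks replay the recording."""
    lines = [f"def test_{unit_name}():"]
    for name, args, result in recorder:
        lines.append(f"    mock_{name} = Mock(return_value={result!r})  # seen with args {args!r}")
    lines.append(f"    # ... call {unit_name} with the mocks and assert on its output")
    return "\n".join(lines)

# Toy "dependency" and a run of the code under test:
def fetch_user(user_id):
    return {"id": user_id, "name": "Ada"}

calls = []
fetch_user = record(calls, fetch_user)
fetch_user(42)  # the "server run" DDT would observe
print(generate_test(calls, "handle_request"))
```

The hard parts DDT has to solve are exactly what this sketch glosses over: instrumenting every dependency without source changes, and emitting mock code that compiles against real types.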

There are many other uses for the basic technology, e.g. we can detect when code that isn’t covered by tests is reached and generate unit tests for that code automatically. But that’s not part of the MVP.



The Product Management Roadmap

I said there’s no roadmap, but I planned the MVP, which is a bit of a roadmap. There’s a minimum we need to “prove” DDT is useful.

  • A CLI that generates the tests
  • Support for Java with Spring Boot applications

I’ve set these goals since they will provide something useful for a large enough community and I know Java/Spring Boot well enough. I want the architecture to be generic since the concept is translatable to most languages and frameworks. So if the MVP is successful, DDT will add support to additional platforms/languages.



Technical Challenges

To be clear, I’m not sure if DDT is technically workable. So I’ll try to prove it with a product prototype as soon as possible. I think that even if the product development fails, there’s still a lot to learn, so this will still be a valuable experience.

I need to be prepared though, so I’ve given a lot of thought to the challenges that lie ahead and organized them in this ordered list:

  1. It isn’t possible – Essentially I would need to monitor every method in a running application. Initially, I thought I would use the debugger API to walk through the app. But I’m not sure that would scale. I’m considering bytecode manipulation, but that has its own problems. The main issue is one of scale. The debugger approach will work for a small application but might fail for larger apps.

  2. Performance – it might be impractical because it has such a significant impact on performance, making the application unusable. It might consume too much RAM in real-world applications.

  3. Generating mocks might be difficult – the generation phase would be pretty difficult since we need to understand the classes involved. We need to generate mocking code that compiles for classes we’ve never “seen”.

  4. Supporting other languages/platforms might be challenging. They don’t all have the same capabilities.

I’ll address these concerns in my following posts as I explain my architectural choices.



Tomorrow

Tomorrow I plan to talk about why I made some of my architectural choices and how you should choose the right tools for building your MVP.

I also plan to talk about the scaffolding process and how I got started with the project.



The Future of Coding is 'No Code'

What comes to mind when you think about building some sort of startup or app? Probably your answer is to create a plan, deal with the UX/UI, and then start coding the software itself. The coding part is one of the most difficult parts for many folks, and there’s a solution: No-Code tools.



What is No-Code Revolution?

In the past, No-Code tools were used by developers to prototype their ideas and get an MVP (Minimum Viable Product) out quicker. But now these tools are more user-friendly and you don’t need to be a developer to use them! So, if you have an idea for a website or app, don’t worry about coding and just go for it with No-Code tools!

There are many No Code tools on the market but we’re going to focus on several of them. All of these platforms enable users without any coding experience to build a professional-grade web or mobile app. And even better? They’re all free to use!



No Coding = More Time for Creativity!

In this article, we’ve highlighted several No-Code tools you can use to create a website or an app by yourself without any coding skills. These apps are perfect not only for beginners but also for experienced non-software developers who want to be more productive in their workflow by creating something new faster than ever before. All these No-Code tools have been designed around user experience, so anyone can build a product even without knowing HTML/CSS or JavaScript at all – just imagine that!

So, don’t waste your time and start using No Code tools today to turn your idea into a reality without any headaches. And remember, the future of coding is No-Code so it’s time to jump in!



Webflow

Now that we’ve discussed the No-Code revolution and why you should pay attention to it, it’s time for the services themselves. The first one is Webflow, a great builder for your website. It’s not something like Wix or similar builders; it’s more like Figma, but with logic implemented alongside the design. It’s even better than WordPress.

Webflow also has courses about web design, creating a portfolio, and UX/UI. So even if you don’t use it as a No-Code tool, you can use it as a free learning platform.

And as I said, you don’t need to build 100% of your website using this tool; you can make the main part and then replace or add something using your own custom code.

There’s another platform called Bubble.io; however, I personally think that Webflow is better because it gives you more customization for your design, whereas Bubble.io is more oriented toward the business sphere, where the most important thing is the information you provide on your website.



Bravo

The one killer feature that Bravo has is its integration with Figma. You can make a design of your app in Figma, then just paste a link, and Bravo will understand what each element is supposed to do. That’s all: you have a fully working mobile app.

And it’s free; you don’t need to pay anything. I honestly think this is a great No-Code app because it helps people think more about the creativity and the idea of the project, so more folks will want to create on their own.

No coding is a great thing, but No-Coding + Figma = Awesome!



Airtable

This is an app that will help you and your co-workers do your work better by creating custom interfaces that give each and every teammate the relevant information they need, and a simple way to take action.

Airtable is a great No-Code tool that will save you time and give you the freedom to work on what’s important. It has pre-built templates for project management, sales pipeline reporting, research library management – basically everything! And again, it’s free if your team isn’t huge (up to 32 people).



Stackla & Storedot

We’ve talked a lot about No-Code tools, but now it’s time to talk about Stackla and Storedot. These No-Code tools are used for creating digital assets, mostly for social media.

Stackla is an AI-driven content marketing platform that enables brands to discover and curate user-generated content from around the Web and social media, then publish it across all channels in real-time. It also has a great feature – you can use your own custom code if you want something special.

Storedot is a No Code tool that helps you collect, manage and share your favorite web articles, videos, tweets or any other type of online content with anyone in just a few clicks. You don’t need an account, just enter the URL into the box on their website and it will download it in a format that you can easily send in an email or print out, for example. No Coding skills required 😉



Conclusion

So far we’ve seen how No-Code tools can help us build amazing things without having to learn complex coding languages. These tools are perfect for beginners and experienced developers alike, and they’re free to use! If you have an idea for a website or app, go ahead and try out one of these platforms – you won’t regret it!


Source link

Top 10 Logistics Startups in India for E-commerce

In a country like India, managing logistics with flair can sometimes be a very difficult task. Combine that with the insatiable appetite of the Indian population for buying things online and expecting timely deliveries, and things rapidly descend into a nightmare.

According to the latest reports, the E-commerce industry in India will touch the US$ 18 billion mark. The sector is growing at an astounding CAGR of over 55%. It stands to reason that better and more agile logistics solutions are in high demand.

While there are many startups that operate in the logistics domain, only a handful have managed to create a niche of their own. Here are 10 such firms.

1. Ecom Express: Kicking off the list is the Delhi-headquartered logistics firm Ecom Express. It was established in 2012, and has since then received several rounds of funding from the likes of Warburg Pincus and Peepul Capital. What sets Ecom Express apart is the fact that it was one of the first companies to study the growth of E-commerce in India, realise its potential and provide E-commerce-centric shipping options.

2. BlackBuck: BlackBuck is based out of Bengaluru and is a dominant player in the B2B logistics arena. It is most commonly favoured by firms which manufacture heavy goods and equipment that need hauling to various locations across India. For this, BlackBuck offers trucks of various sizes and load-bearing capacities. Its backend operations leverage modern technology seamlessly.

3. Rivigo: Rivigo was founded in 2014 in Gurugram. It has an off-beat business model which helps it stand out in a crowd. Rivigo provides shipping of goods using heavy-duty trucks. It has divided the country into several zones, each manned by its personnel. Once a truck completes one leg of its journey, the driver is allowed to rest while a relay driver completes the next. It is similar to taking pit stops, in a sense.

4. Shadowfax: Shadowfax leverages technologies like IoT and GIS to deliver goods all over the country. It was founded in 2015 and has since garnered several funding rounds. The company serves over 17,000 Pin Codes and ships to over 220 countries.

5. 4Tigo: This is a brand owned by Fortigo Network Logistics and was founded in 2015. The company wields its formidable technological prowess as a SaaS platform behind the scenes. It is known for its fleet management services, which are popular with truck owners pan-India. Hence, it is also known as ‘The Truck Network.’

6. QikPod: Founded in 2015 by noted entrepreneur Ravi Gururaj in Bengaluru, the company has a quirky value proposition. It has placed smart lockers, which are essentially collection points, at strategic locations in Bengaluru. QikPod calls these ‘Host Lockers’. Customers can then use these delivery collection solutions at their leisure.

7. Delhivery: Delhivery is, of course, one of the country’s foremost players in logistics for E-commerce businesses. With a fleet of more than 5,000 trucks and capable of covering north of 2,500 cities across India, Delhivery has grown exponentially since its founding in 2011.

8. Gojavas: This is a 360-degree logistics provider for all E-commerce requirements. It began life in 2013 in Gurugram and is currently co-owned by Snapdeal. The latter has invested significant amounts in Gojavas.

Rounding off the list are two dynamic logistics providers: ‘Letstransport’ and ‘Locus.’

With each passing month, the number of eCommerce logistics companies catering almost exclusively to shipments is rising. This is indicative of the growing strength of the E-commerce sector. It is also a measure of how bright and young entrepreneurs are creating a difference!


Source link

The Developer's Guide to Building Notification Systems: Part 3 – Routing & Preferences

Your CTO handed you a project to revamp or build your product’s notification system recently. You realized the complexity of this project around the same time as you discovered that there’s not a lot of information online on how to do it. Companies like LinkedIn, Uber, and Slack have large teams of over 25 employees working just on notifications, but smaller companies like yours don’t have that luxury. So how can you meet the same level of quality with a team of one? This is the third post in our series on how you, a developer, can build or improve the best notification system for your company. It follows earlier posts about identifying user requirements and designing with scalability and reliability in mind. In this piece, we will learn about setting up routing and preferences.

Notifications serve a range of purposes, from delivering news to providing crucial security alerts that require immediate attention. A reliable notification system both enables valuable interactions between an organization and its customers and prospects and also drives user engagement. These systems combine software engineering with the art of marketing to the right people at the right time.

Building a service capable of dynamically routing notifications and managing preferences is vital to any notification system. But if you’ve never built a system like this, it might be difficult to figure out what the requirements are and where the edge cases lie.

In this article, you’ll learn invaluable points to consider when building your own routing service. You’ll understand the requirements for multi-channel support and for choosing the right API providers. You’ll also learn how to design user preferences so that you can make the most out of each message.



Multi-channel support: a necessity

Let’s say that you have just built a web-based application. The first channel that you’ll use to connect with your users is likely email because of how ubiquitous it is. However, with the diversification of channels, and depending on your use case, email might not be the most efficient notification channel for you. Compared to other channels, emails typically have a low delivery rate, a low open rate, and a high time-to-open. It’s not uncommon for people to take a full day to even notice your email. Even if your email gets to the user, it might take a while before they open it, if at all.

To engage with your users more effectively, you’ll want to support channels across a broad range of systems not limited to any one application or device. It’s vital to understand not only which channels are most relevant for you but also for your users. If you opt to use Telegram and your users don’t have it, it won’t be a very useful channel to interact with them. Multi-channel support is also vital because while you might pick appropriate channels today, you won’t know which channels you will need to support in the future. Typically, the more appropriate channels you support, the higher the chances of intersecting with applications your users actually use now and in the future.



Choosing notification channels and providers

You’ll have to select relevant channels and appropriate providers for each channel. For example, two core providers for mobile push notifications are Apple Push Notification Service (APNs) and Firebase Cloud Messaging (FCM). APNs only supports Apple devices while Firebase supports both Android and iOS as well as Chrome web apps.

In the world of email providers, SendGrid, Mailgun, and Postmark are all popular but there are hundreds more. All email APIs differ in what they offer, both in supported functionality and API flexibility. Some providers, like Mailgun, only support transactional emails triggered by user activity. Other providers, like SendGrid and Sendinblue, offer both transactional and marketing emails. If your company opts for a provider that can handle both, you’ll still want to separate the traffic sources, by using different email addresses or domains, to aid email deliverability. If you only have one domain for sending both types of emails and the domain gets flagged as spam, your critical transactional emails will also be affected. Whichever provider you choose, you’ll still want to meticulously verify your DKIM, SPF, and DMARC checks, and domain and IP blacklisting using your own tools or a site like Mail-Tester.

Making requests and receiving responses also differs with each email API provider. Some providers, like Amazon SES, require the developer to handle sending attachments, while others, like Mailgun, provide fields in the API schema for including attachment files directly.

There are minute variances in formatting HTTPS requests. The maximum payload sizes range from 10MB with Amazon SES API and up to 50MB with Postmark. There are also differences between the rate limits for requests.

In terms of API responses, Amazon SES provides a message identifier when an email is sent successfully through the API, but, for example, SendGrid returns an empty response in that situation. The HTTP response codes also differ slightly depending on the provider. For example, AWS SES uses the response code 200 for successful email send operations, while Sendinblue uses 201, and SendGrid uses 202.

No matter which provider you end up choosing, don’t build your application solely to fit their logic and specifications. If you do so, it will be much more difficult to change providers in the future as you’ll have to overhaul your backend. It’s crucial to invest in a layer of abstraction based on your own paradigm.
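One way to build that abstraction layer is to hide each provider behind a small adapter that normalizes its responses into your own result type. A Python sketch — the adapter classes are hypothetical, and the stubbed `_http_post` methods stand in for real network calls:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class SendResult:
    """Our own paradigm: one result shape regardless of provider."""
    ok: bool
    message_id: Optional[str] = None  # SES returns an identifier, SendGrid doesn't

class EmailProvider(ABC):
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> SendResult: ...

class SendgridAdapter(EmailProvider):
    SUCCESS_CODE = 202  # SendGrid signals success with 202 and an empty body

    def send(self, to, subject, body):
        status = self._http_post(to, subject, body)
        return SendResult(ok=(status == self.SUCCESS_CODE))

    def _http_post(self, to, subject, body):
        return 202  # stubbed network call, for illustration only

class SesAdapter(EmailProvider):
    SUCCESS_CODE = 200  # Amazon SES signals success with 200 plus a MessageId

    def send(self, to, subject, body):
        status, message_id = self._http_post(to, subject, body)
        return SendResult(ok=(status == self.SUCCESS_CODE), message_id=message_id)

    def _http_post(self, to, subject, body):
        return 200, "example-message-id"  # stubbed network call

# The rest of the codebase only ever sees SendResult:
result = SendgridAdapter().send("user@example.com", "Hi", "Hello!")
```

Swapping providers later then means writing one new adapter, not overhauling the backend.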



Dynamically routing notifications between channels

How do you determine which channels to use and when? Just because you’re able to use email, SMS and mobile push doesn’t mean that you should use all of them simultaneously, since doing so carries a high risk of annoying your users. This is where you begin to formulate an algorithm to route messages between the different channels and the different providers within each channel. The algorithm needs to be robust to handle delivery failures and other errors. For example, if the user hasn’t engaged with a push notification after a day, do you resend it or use email instead?

You can begin constructing the algorithm using basic criteria. For example, if there is no phone number, eliminate SMS as an option for that user. If email is the primary channel, opting to send at 10 a.m. or 1 p.m. local time typically improves read rates. If the user is present or active in the app, consider sending an in-app push notification instead of an email. Finally, and especially important, get your user’s preferences for how and when they want to be contacted and integrate these preferences into your routing service.
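Those basic criteria can start life as a simple ordered filter. A rough Python sketch — the user-record field names (`active_in_app`, `prefers`, and so on) are assumptions for illustration, not a prescribed schema:

```python
def choose_channels(user):
    """Return candidate channels, in rough order of preference, from basic criteria."""
    channels = []
    if user.get("active_in_app"):
        channels.append("in_app")   # user is present: an in-app push beats an email
    if user.get("email"):
        channels.append("email")    # and aim for ~10 a.m. / 1 p.m. local send times
    if user.get("phone"):
        channels.append("sms")      # no phone number eliminates SMS entirely
    # Finally, intersect with the user's stated channel preferences.
    prefs = user.get("prefers")
    return [c for c in channels if prefs is None or c in prefs]

print(choose_channels({"email": "a@b.co", "phone": "+15550100", "prefers": ["sms"]}))
# ['sms']
```

A real routing engine layers retry and fallback logic on top of this ordering, but the criteria stage itself can stay this simple.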



Adding user preferences to your system

Once you’ve got your channels, providers, and routing algorithm figured out, you need to think about providing users with granular control over notification preferences instead of just a binary opt-in/opt-out switch.

Consider this: if you only allow opting in to or out of all notifications at once, your users might unsubscribe from all your communications because they find one specific notification annoying. As a result, you will lose out on valuable user engagement.

With granular control over preferences, a user identifies exactly how and when they hear from you. If a user doesn’t like email but wants SMS messages (not common, but possible!), they can adjust their preferences and keep the SMS line of communication open. Every enabled notification channel is another opportunity to engage the user in a way that’s productive for them. From the end user’s perspective, it’s empowering to control how and when they are contacted.

Note that for some channels, the user’s preferences should be ignored. For instance, two-factor authentication should go to SMS or mobile push regardless of the user’s preference for email. The possibility to override the default logic should be incorporated into your algorithm while you are designing your routing engine.

If you want to take user engagement further, allow users to opt-in/opt-out of specific channels, frequency, timing and topics. You can allow them to set up their preferences based on time of day, frequency per period, or to specify more than one email address. You can give them the option to receive transactional, digest emails, daily newsletters, or only the critical ones. You can also allow them to redirect their notifications to another address, for example if the user is out of office.
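A granular preference model can stay quite small. The sketch below (channel and topic names are illustrative) shows per-channel, per-topic opt-in together with the critical-notification override:

```python
from dataclasses import dataclass, field

@dataclass
class Preferences:
    """Per-user opt-ins by channel and by topic (illustrative defaults)."""
    channels: set = field(default_factory=lambda: {"email"})
    topics: set = field(default_factory=lambda: {"transactional", "digest"})

CRITICAL_TOPICS = {"two_factor", "security_alert"}  # always delivered

def allowed(prefs: Preferences, channel: str, topic: str) -> bool:
    if topic in CRITICAL_TOPICS:
        return True  # override: critical notifications ignore opt-outs
    return channel in prefs.channels and topic in prefs.topics

prefs = Preferences(channels={"sms"}, topics={"transactional"})
print(allowed(prefs, "email", "marketing"))   # False: opted out of both
print(allowed(prefs, "email", "two_factor"))  # True: critical override
```

Extending this to frequency caps, quiet hours, or alternate addresses is then a matter of adding fields, not redesigning the model.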

Granular preferences also extend past the dominion of developers and the user’s experience. Granularity of consent is becoming part of privacy compliance laws in Europe and in the state of California and might follow elsewhere in the future. Separately, granular preferences are an extremely advantageous analytical tool for the marketing team to improve brand strategy and personalization efforts. Is there a particular channel or topic that seems to be more popular? That information can be highly helpful to pivot in line with your users and grow your company.



Tips for future-proof maintenance

When you’re starting with notifications for a new product, there is nothing wrong with sticking to one channel and one provider. The most important principle to keep in mind is to design your notification system so that you can expand it in the future. You should leave the door open to include more providers when you need them.

Don’t assume that API paradigms are the same for each provider or notification type. For example, suppose you want to send an email and, if delivery fails, send a push notification instead. You won’t get a 400 HTTP response from the email provider in case of failure; the provider will retry your email over a couple of days. Instead, you’ll want to include webhooks or queues to notify you of the failure, and you’ll need to track the state of the message. If you make blanket assumptions about how API calls work or how errors are returned, you’ll have trouble adapting to a different paradigm in the future. Instead, you can add a layer of abstraction on top of the API.

It’s also invaluable to centralize the way you call the provider APIs. If you spread out calls to an API throughout your code base, it will be more difficult to integrate other channels or API providers in the future. Let’s say you’re starting with email and AWS SES as the provider. In two years’ time, you might decide to integrate mobile push notifications as well. What might that look like? The incurred technical debt will include scouring the code base for all instances of calls to the AWS SES API before you can integrate mobile push as an additional channel. But with centralized calls, you’ll have more consistent, cleaner, and reusable code as you grow.



How many notification channels should you have?

Typically, having three or four channels that are relevant to your product is an ideal scenario for a mature product. When you intersect channels with the preferences and availability of users, you create higher levels of complexity for your algorithm. Offering many channels for notifications might become too complex to maintain. But offering too few channels might harm your chances of interacting with users since some channels might not be viable for all users. For instance, you might decide to offer email and push notifications. But if a user didn’t download your product, your interaction with them is limited only to email.



Best technologies for routing and preferences engines

It ultimately pays to choose technologies that will be a good fit for your routing and preferences needs. There will be a great deal of asynchronous programming, as the routing service will often be waiting to receive responses for each function. You’ll want to pick a language or a framework that allows you to respond to async events at scale.

The routing service also involves considerable state tracking, as most of the routing will depend on waiting on a response for each notification before changing state. The routing service will also need to be re-activated every time it receives a response from a provider and will need to determine if the notification was sent successfully or if it has to pursue next steps. See the example below of how a notification function’s state might be tracked.

[Diagram: rough sketch of routing-and-preferences state tracking]
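In code, that state tracking can be an explicit state machine driven by provider responses. A minimal Python sketch — the event names are assumptions about what a provider’s webhooks might report:

```python
from enum import Enum, auto

class NotificationState(Enum):
    QUEUED = auto()      # accepted by our routing service
    SENT = auto()        # accepted by the provider API
    DELIVERED = auto()   # delivery confirmed via webhook
    FAILED = auto()      # bounce or error webhook received

# Each (current state, webhook event) pair maps to the next state.
TRANSITIONS = {
    (NotificationState.QUEUED, "api_accepted"): NotificationState.SENT,
    (NotificationState.SENT, "delivered"): NotificationState.DELIVERED,
    (NotificationState.SENT, "bounced"): NotificationState.FAILED,
}

def advance(state, event):
    """Re-activate on a provider response; unknown events leave state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = advance(NotificationState.QUEUED, "api_accepted")  # now SENT
state = advance(state, "bounced")                          # FAILED: try next channel
```

Reaching FAILED is the trigger for the routing service to pursue next steps, such as falling back to another channel.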

At Courier, we use AWS Lambda. Since our usage tends to come in bursts, serverless technology allows us to adjust and scale for changes in demand throughout each day as well as handle asynchronous operations efficiently.



Don’t forget: compliance in notification routing

When creating your own routing and preferences service, you will need to ensure that whichever channels you implement are fully compliant with applicable laws. For example, there are legal mandates on how users may be contacted or how they can unsubscribe from contact.

For commercial email messages, the CAN-SPAM Act of 2003 is a federal United States law that spells out distinct rules and gives recipients a way to stop all contact. Penalties can cost as much as $16,000 per email in violation. This law also outlines requirements such as not using misleading header information or subject lines, identifying ads, and telling recipients how they can opt out of all future email from you. The opt-out process itself is strictly regulated.

For SMS, the United States Telephone Consumer Protection Act (TCPA) of 1991 sets forth rules against telemarketing and SMS marketing. Under this law, businesses cannot send messages to a recipient without their consent. This consent needs to be explicit and documented. The consent is also twofold: recipients need to consent to receiving SMS marketing messages and they need to consent to receiving them on their mobile device. Recipients need to be provided a description of what they are subscribing to, how many messages they should expect, a link to the terms and conditions of the privacy policy, and instructions on how to opt-out.

In California especially, the California Consumer Privacy Act (CCPA) of 2018 provides additional rights for California residents only. These rights include the right to know which information a company has collected about them and how it’s used as well as the right to delete it or to opt-out of the sale of this information. Information that qualifies under the consumers’ right-to-know includes names, email addresses, products purchased, browsing history, geolocation information, fingerprints, and anything else that can be used to infer preferences. Should a consumer request this information, the company has to share the preceding 12 months of records, and also include sources of this information and with whom it was shared and why. In 2020, the California Privacy Rights Act (CPRA) amended the CCPA. The CPRA provides further consumer rights to limit the use and disclosure of their personal information.

Other countries have their own compliance laws for businesses reaching out to leads and customers. Canada has its Anti-Spam Legislation (CASL). The European Union has the General Data Protection Regulation (GDPR) which now also covers granularity of consent. The United Kingdom has its own regulations along with the GDPR, the Privacy and Electronic Communications Regulations (PECR) and Data Protection Act.

Compliance itself needs to be integrated at the developer level. Providers, like SendGrid, don’t know what you’re sending. It’s up to the developer to ensure that all applicable compliance laws are followed for their choice of channels.



Conclusion

Building a notification system into a product is not for everyone. The process is time-consuming, complex, and expensive. The level of notification customizability and routing options you decide to implement will ultimately dictate a preference for either maximizing user engagement or optimizing cost. A startup with a product that hasn’t yet found its product-market fit has to focus on finding early customers and getting their feedback. But established companies with a proven customer base will have concerns related to more complex routing logic, future-proofing and compliance. This would require more functionality and higher maintenance costs.

This piece taught us about the necessity of sending data for notifications to the right people, at the right frequency, at the right time and how this can be done through routing and customized preferences. Tune in for the next post in this series to learn about observability and analytics to monitor the functioning and performance of your in-house notifications system. To stay in the loop about the upcoming content, subscribe below or follow us @trycourier!




Source link

How I decided which languages to use for my tech startup

This is article numero dos (that means number 2) in the series about starting Arbington.com.

Let’s talk about how I decided which languages to use at my startup.



What it boils down to…

Simplicity. Efficiency. Community support. And.. do I know it?

It all boils down to these four things. And most startups probably say this, I get it. I ain’t that unique 😛



Simplicity

Which language is the easiest to read, write and learn?

Python.

Even if you disagree, it’s Python. Like, it’s just a fact of programming.

I need code to not become a crazy nest of curly brackets. Something I can hack away at that maintains its cleanliness (to some degree).

It’s easy for future developers to pick up and learn quickly, and easy to read through to understand the business logic.

Plus, there are a lot of Python developers, so I’ll never be worried about finding a developer (it’s the world’s most popular language, officially).

And! It has an insane ecosystem of packages that lets you install awesome tools super fast. Need to make an API request? Use requests. Need to parse HTML? Use BeautifulSoup4. This kind of “need x, use y” pattern goes on for AGES.
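As a quick taste of that ecosystem, parsing HTML takes just a few lines with BeautifulSoup4 (the sample HTML below is made up; `requests` would fetch a real page the same way):

```python
# pip install requests beautifulsoup4
import requests  # one-line HTTP/API requests: requests.get(url).json()
from bs4 import BeautifulSoup

html = "<html><head><title>Arbington</title></head><body><h1>Hi!</h1></body></html>"
soup = BeautifulSoup(html, "html.parser")
print(soup.title.string)   # Arbington
print(soup.h1.string)      # Hi!

# Against a live page it would be:
# soup = BeautifulSoup(requests.get(url).text, "html.parser")
```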



Efficiency

Python is relatively fast. As is JavaScript. Both of which I use A LOT.

Are they the fastest out there? Heck no. But they are well supported, popular, and fast enough for what I need.



Community support

I touched on this a little in the Simplicity section. But having access to packages, libraries and frameworks is very important.

Don’t reinvent the wheel.

And when you inevitably have questions, are there a sufficient amount of answers available on the web?

Python ✔️
JavaScript ✔️



Do I know it?

This is the most important part, to be honest.

Build using languages you know. Don’t learn a programming language just to build something new. That’s how you write unmaintainable code.

So, I built using what I know. But I also know other languages, so I also chose what was simple and easy for future developers to pick up after me.



Why is this important to you?

You’re going to see lots of companies showing off their tech stacks and you’ll be pulled in 100 different directions with no idea what to learn.

Pick a language, learn it, then apply for those jobs (if you’re looking for a job). You can’t be the perfect dev for every company, ever. Just do what you can, and see which jobs exist for you.



So what languages (and other things) do we use?

Remember the above because I’m going to blow your mind with one of these.

Here is what we use:

  • HTML/CSS/JavaScript (because that’s what 100% of all websites use)
  • jQuery. Not React.js. React is awesome! But it’s slow to code when your company is moving at the speed of light. Told you – mind blown yet!?
  • Tailwind CSS. It’s awesome once you learn about it and how it works. Truly, it’s powerful. We wrote like 50 lines of custom CSS, the rest is all in the class="" attribute.
  • Python. Because it’s powerful, simple, etc.
  • Django. It’s a batteries included framework that lets you get a lot done with very little code, and it’s super secure (and open source!)
  • PostgreSQL. Just needed a database, and Postgres is a world-class database and it’s also the one Django devs prefer.

Yes, we use jQuery. Why? It’s simple, we know it inside and out, it’s fast to develop with and the barrier to entry is incredibly low. And it’s a wee bit less typing than vanilla JS with cross browser compatibility. But we’ll eventually move to something else like Vue or React, I’m sure.

In the next article I’ll highlight which frameworks and libraries we use, and why.


Source link

Development Team You Need to Build an Investment App

Recently I came across a Forbes article that revealed some numbers related to investing. It turned out that investing has become prevalent, and more than 96 million people in the US are active investors. This information got me thinking about investment opportunities. Why do people invest, and how do they actually do it?

I completed my own research and discovered that people tend to invest because they are worried about their retirement and comfortable living in the near future, they strive to achieve a higher level of financial security, and most importantly, they want to increase their current wealth.

And when it comes to the ways of investing, it turned out that people prefer using digital helpers – web and mobile applications. So what does it all mean? Simply that the popularity of investing is growing and there is huge demand for new custom investment platforms in the fintech market.

This investment fever can be a perfect business opportunity. By developing an investment solution now and entering the market, a startup company can become highly profitable.

So when it comes to investment app development, where should you start? First of all, you need to find a reliable development team who can guide you through all crucial development steps and deliver a top-notch solution. Speaking of the team, you need the following specialists to complete your investment platform:

Business Analyst – who will complete all necessary research, help you select the right tech stack and app architecture, and write technical documentation and project requirements.

Designer – who will first provide you with wireframes of your future solution and its overall design concept, and then create a unique, user-friendly design.

Back-end developers – who will write high-quality code and perform all kinds of necessary integrations.

HTML/CSS coders – who will be responsible for the front-end part of your solution.

Quality assurance specialists – whose job will be to perform all kinds of quality, performance and security checks.

Scrum Master – who will be in charge of all processes and make sure that everything is developed on time and within the discussed budget.

And while everything is quite clear with the team that you will need, there still can be one question left in your head. How much custom investment app development may cost in 2021? Well, the final price depends on many factors starting with the app complexity and feature set, and ending with the selection of a platform.

The prices of web and mobile investment apps differ greatly. Taking approximate figures, a mobile app may cost you up to $45,000 per platform (iOS or Android), and a web solution can run $50,000-$75,000.


Source link