My first impressions with pyenv

pyenv provides an easy way to install almost any version of Python from a large list of distributions. I had simply been using the version of Python from the OS package manager for a while, but recently I bumped my home system to Ubuntu 21.10 (impish), which ships only Python 3.9+, while the libraries I needed were only compatible with up to 3.8.

I needed to install an older version of Python on Ubuntu

I’ve been wanting to check out pyenv for a while now, but without a burning need to do so.


Based on the README it looked like I needed to install using Homebrew, so that is what I did, but I later realized that there is a pyenv-installer repo that might have spared me that step.

List out install candidates

You can list all of the available versions to install with
pyenv install --list. The README does recommend updating pyenv if you suspect that a version is missing. At the time of writing this comes out to 532 different versions!

pyenv install --list

Let’s install the latest 3.8 patch

Installing a version is as easy as pyenv install 3.8.12. This will install it, but not make it active anywhere.

pyenv install 3.8.12

let’s use python 3.8.12 while in this directory

Running pyenv local will set the version of Python that we wish to use while in this directory and any directory beneath it when using the pyenv command.

pyenv local 3.8.12

.python-version file

This creates a .python-version file in the directory I ran it in, containing simply the version number.
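
The file itself is nothing magical; simulating what pyenv local writes (no pyenv needed for this demo, and the version string is just the one from this example) shows it holds a single line:

```shell
# Simulate what `pyenv local 3.8.12` writes:
echo "3.8.12" > .python-version
cat .python-version
# 3.8.12
```

pyenv reads this file when resolving which interpreter to run, so checking it into version control pins the Python version for everyone on the project.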


using with pipx

I immediately ran into the same issue I was having before when trying to run pipx, because pipx was running against my system Python. I had to install pipx into the Python 3.8 environment to get it to use that version.

pyenv exec pip install pipx
pyenv exec pipx run kedro new

python is still the system python

When I open a terminal and call python, it’s still the system Python that I installed and set with update-alternatives. I am not sure if this is expected or a result of how I had installed the system Python previously, but it’s what happened on my system.

update-alternatives --query python

Name: python
Link: /home/walkers/.local/bin/python
Status: auto
Best: /usr/bin/python3
Value: /usr/bin/python3

making a virtual environment

To make a virtual environment, I simply ran pyenv exec python in place of where I would normally run python, and it worked for me. There is a whole package to get pyenv and venv to play nicely together, so I suspect there is more to it, but this worked well for me and I was happy.

pyenv exec python -m venv .venv --prompt $(basename $PWD)

Now when my virtual environment is active, python points to the interpreter in that virtual environment, which is the version of Python that was used to create it.


I wrote this during my first few minutes of using pyenv. It’s been working great for me since then and has been practically invisible. If you have more experience with pyenv I would really appreciate a comment on your experience below.

Source link

Minimum Marketable Feature in Agile – What is it?

What is ‘Minimum Marketable Feature’?

Minimum Marketable Feature (MMF) is a small feature that is delivered fast and gives significant value to the user. The term MMF isn’t very widely used. However, the first agile principle and the MMF are in alignment.

The first agile principle states that the highest priority of the agile team is to satisfy the user through early and continuous delivery of valuable software.

Both the first agile principle and the MMF talk about delivering value to the customer: giving the customer value they didn’t have before, and doing so frequently. The term ‘marketable’ captures this.

Now, value can have a lot of definitions depending on where you’re looking; it could increase the company’s revenue or reduce the customer’s costs. So the MMF concept applies to both internal and external products: products used within an organization as well as ones sold outside can make use of it.


As we have mentioned above, the MMF concept isn’t widely used. However, it has been around for years: Mark Denne and Dr. Jane Cleland-Huang wrote about it in their 2004 book, Software by Numbers: Low-Risk, High-Return Development, and the concept has been in use since.


You may know of the concept of the Minimum Viable Product (MVP) and be wondering if the MVP and MMF differ in any way. Well, they are different in practice. Let’s understand the difference here.


Firstly, let’s look at the definition of the MVP as given by Eric Ries in the Lean Startup methodology. The Minimum Viable Product is a version of a new product which requires the least effort but allows maximum learning. This learning comes in the form of customer feedback, when the MVP is adopted by early customers of the product.

The MVP tries to check whether the team is building the right thing in the first place. For this sake, they try to use a minimum amount of time and money when making it. The early adopters then give valuable feedback about their experience with the MVP. This allows the team to determine if it’s worth going ahead and building the complete product.

An MVP need not necessarily be a working product. Anything which shows the user what the product would do can also be considered an MVP. For example, Dropbox made only a video to show customers what they could expect from the end product. Only after they saw that there was demand did they put effort into building it.


Now, we’ve already defined an MMF at the start of this blog. The exact textbook definition, as given in ‘Software by Numbers’, is

“The Minimum Marketable Feature is the smallest unit of functionality with intrinsic market value.”

It is a real feature, which solves the customer’s need in some way. The MMF can be marketed and sold. It’s all about releasing products that have a high value, and doing so fast. Building an initial product with the main features and later making incremental, high value changes is an example of an MMF.

Minimum Marketable Feature vs Minimum Viable Product

So, from the above two you can see that while the MVP focuses on learning, the MMF’s focus is on providing value to the user.

The MVP may or may not have an MMF, or it could have more than one MMF built into it. Your use of either concept would depend on your context and need.

The MMP – Minimum Marketable Product

So now you know about the MMF and the MVP. However, there’s yet another term, the MMP, which may lead to some confusion. Let’s understand what the Minimum Marketable Product is.

After the MVP, the MMP can be considered as the next practical step in the product building process. The MMP is the very core version of the product which has just the basic features needed by it. In other words, only the ‘must-have’ functionalities are incorporated into the MMP. The ‘good-to-haves’ aren’t added in this stage.

Like the MMF, the MMP aims to get a product to market fast, with just the necessary features. The MMF, focusing on a single feature, is then a subset of the MMP.

Example of Minimum Marketable Feature

After an initial product with some solid, core features has been released, there may be a progressive addition of new features. One very stark example is the operating systems of cell phones or computers. The smartphone would of course work right out of the box. But, over the course of using it, users would see that there are more updates added regularly. These are features that add value to the product, but aren’t required at the very start. Hence, they can be added incrementally.

Originally published here.

Source link

🤯 Did you know there are F13-F24 keys? 🤯

I have been using a computer for years, and although I must have stumbled across this at some point, it never sank in before.

There are 24 function keys defined for keyboards (F1-F24!).

That blew my mind!

What made you (re)discover this?

Recently I got a Stream Deck XL, and while setting it up I had keyboard combinations clashing across applications, which was driving me round the bend!

While I was looking through the menu for assigning key combinations I spotted that the Function keys section had F13-F24 keys!?

5 minutes of Googling later and I was gobsmacked that I didn’t know this before! Keyboards can have up to 24 function keys?

Anyway, now that I knew about these “dead” keys that nobody uses anymore, I had a way of stopping the clashes.

How many extra keys / combinations does that give that won’t clash?

When combined with Shift, Ctrl, Alt, and Windows, that gives 60 keys/combinations guaranteed not to clash with anything else on your keyboard or interfere with 99% of software (and potentially another 120 keys/combinations if I want to use multi-modifier combinations of Shift, Ctrl, Alt, and Windows).
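
One way to tally those numbers (12 extra keys, four modifiers; this is just a sketch of the counting, nothing Stream Deck specific):

```python
from itertools import combinations

extra_keys = 12                              # F13-F24
modifiers = ["Shift", "Ctrl", "Alt", "Win"]

# Bare key plus one single modifier each: 12 * (1 + 4) = 60
single = extra_keys * (1 + len(modifiers))

# Two- and three-modifier chords: (6 + 4) * 12 = 120 more
multi = extra_keys * sum(
    len(list(combinations(modifiers, r))) for r in (2, 3)
)

print(single, multi)  # 60 120
```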

Now I can program 60 keys into my stream deck for global functions and macros that will not clash with anything I currently have set up (or shortcuts already set within applications).

Anyway, I just thought it was interesting and I would share it!

Why Do You Have A Stream Deck, You Aren’t A Streamer!

As to why I got a stream deck when I don’t stream – article coming out next month on that but I can tell you the conclusion now…every developer should save up and buy one!

Anyway, a random post for tonight, did you know that there were 24 function keys and…does anybody own a keyboard that has the F13-F24 keys on?!?

Source link

Integrating Percy and Cypress into your Next.js application

In this blog post, we will see how to integrate Percy and Cypress into your Next.js application.


  1. Visual Testing
  2. End to End Testing
  3. Intro on Percy
  4. Intro on Cypress
  5. Writing your first E2E test with Cypress.
  6. Connecting Percy with Cypress.
  7. Integrating Visual Testing with CI pipeline
  8. Conclusion

Visual testing:

Visual testing makes sure that the UI looks the same for all users. At the end of a build, a visual testing tool takes screenshots so that it can check, analyse, and report how our application renders across environments such as different browsers and devices, and how screen size affects the layout of the application.

Below are some of the visual testing tools

  • Percy (BrowserStack)
  • Kobiton
  • Applitools
  • LambdaTest
  • CrossBrowserTesting (SMARTBEAR)
  • Chromatic
  • Screener by SauceLabs (Sauce Visuals)

End to End Testing:

E2E or end-to-end testing is a test strategy where we subject our application to test scenarios that closely mimic how an end user will interact with it. Below are some of the E2E testing tools:

  • WebdriverJS.
  • Protractor.
  • WebdriverIO.
  • NightwatchJS.
  • Cypress.
  • TestCafe.

Alright, now we know about the two high-level testing strategies. Let’s take a short look at the tools we are going to use.


Percy helps teams automate visual testing. It captures screenshots, compares them against the baseline, and highlights visual changes.


Cypress is a JavaScript-based end-to-end testing framework built on top of Mocha that runs in the browser. It makes the testing process more reliable and faster.

Let’s code.

Note: I will not be going in depth about writing E2E tests. Please refer to the Cypress documentation on writing your first test.

Bootstrapping your Next.js application:

We will be using the create-next-app cli to bootstrap our demo application. Go to your terminal and type the following command.

npx create-next-app cypress-next-demo --ts

cd cypress-next-demo

yarn dev

The above commands will scaffold a brand new Next.js application and spin it up on your local machine.

You can now visit localhost:3000 in your browser.


Before writing our first test, let’s clean up the boilerplate code in the index.tsx file. Paste the following into your pages/index.tsx file.

import type { NextPage } from 'next'
import Head from 'next/head'
import styles from '../styles/Home.module.css'

const Home: NextPage = () => {
  return (
    <div className={styles.container}>
      <Head>
        <title>Create Next App</title>
        <meta name="description" content="Generated by create next app" />
        <link rel="icon" href="/favicon.ico" />
      </Head>

      <main className={styles.main}>
        <h1 className={styles.title}>
          Cypress + Next.js + Percy
        </h1>

        <p className={styles.description}>
          playing around with cypress, next and percy
        </p>
      </main>
    </div>
  )
}

export default Home

We have simple h1 and p tags in our demo app. Save it and check your browser to verify the changes.


Writing your first E2E test with Cypress:

Let’s first install cypress. Head over to the terminal and run the following command.

yarn add cypress --dev

Once the installation is done, open package.json and add the following line to the scripts section:

"cypress:open": "cypress open"

and run the following command in your terminal

yarn run cypress:open

This will open up Cypress and generate example tests with the recommended folder structure:

➜  cypress git:(main) ls -ltra

total 0
drwxr-xr-x   3 karthikeyan.shanmuga  253301862   96 Nov 16 22:11 plugins
drwxr-xr-x   6 karthikeyan.shanmuga  253301862  192 Nov 16 22:11 .
drwxr-xr-x   3 karthikeyan.shanmuga  253301862   96 Nov 16 22:11 fixtures
drwxr-xr-x   4 karthikeyan.shanmuga  253301862  128 Nov 16 22:11 support
drwxr-xr-x   3 karthikeyan.shanmuga  253301862   96 Nov 16 22:12 integration
drwxr-xr-x  19 karthikeyan.shanmuga  253301862  608 Nov 17 00:22 ..

You can run the sample test in the Cypress UI to see how it is working.


Now let’s remove the example tests and create our own. Do the following:

cd integration

touch app.spec.ts

Add the following content to the app.spec.ts file

// app.spec.ts

describe('home page', () => {
  it('checking for tags', () => {
    cy.visit('/')
    cy.get('h1').should('be.visible')
    cy.get('p').should('be.visible')
  })
})

Make sure to add "baseUrl": "http://localhost:3000" to the cypress.json configuration file.
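
For reference, a minimal cypress.json with just that setting would look like:

```json
{
  "baseUrl": "http://localhost:3000"
}
```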

Code Walkthrough:

  1. describe and it come from Mocha.
  2. expect comes from Chai.
  3. Since we have configured the baseUrl to our local development URL, we can straight away visit the root of our application with cy.visit('/').
  4. In the next two lines, we check that the h1 and p tags we added to our index.tsx file are visible in the DOM.

Running your Cypress tests:

Since Cypress is testing a real Next.js application, it requires the Next.js server to be running prior to starting Cypress.

Run yarn run build and yarn run start, then run yarn run cypress:open in another terminal window to start Cypress.

Alright, before automating this with GitHub Actions, let’s connect it with Percy.

Connecting with Percy:

Install @percy/cypress and @percy/cli:

$ yarn add --dev @percy/cli @percy/cypress

In order to add Percy snapshots to your Cypress tests, you’ll first need to import the @percy/cypress package in your cypress/support/index.js file:

import '@percy/cypress';

Head over to the app.spec.ts file and add the following line.

// for visual diffing
cy.percySnapshot('Home page')

Once done, your app.spec.ts file should look something like this:

describe('home page', () => {
  it('checking for the tags', () => {
    cy.visit('/')
    cy.get('h1').should('be.visible')
    cy.get('p').should('be.visible')

    // Take a snapshot for visual diffing
    cy.percySnapshot('Home page')
  })
})

Note: Since our project uses TypeScript, please include the following types in tsconfig.json.

"types": ["cypress","@percy/cypress"]
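
In context, that means merging the entry into the existing compilerOptions of the tsconfig.json that create-next-app generated (the other default options are omitted here for brevity):

```json
{
  "compilerOptions": {
    "types": ["cypress", "@percy/cypress"]
  }
}
```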

Since we have not connected to CI yet, let’s see how we can run the test locally and send the screenshots to Percy for visual diffing. We need a PERCY_TOKEN for this.

Create an account on BrowserStack if you don’t have one and navigate to Percy.

  • Create a new project, name it percy-cypress-demo, and connect it to your GitHub repository.


  • Copy PERCY_TOKEN from the new project screen, then run:

export PERCY_TOKEN=your_token_here

npx percy exec -- cypress run

This will run the test in your local environment and send the build to Percy. Since it is the first build, it will be considered the base build and used for comparison.


Let’s automate the process, shall we?

Connecting with CI – GitHub Actions

Let’s connect it with our CI pipeline. We will be using GitHub Actions to achieve this. Create a workflow file at .github/workflows/main.yml.

From Next.js docs👇

You can install the start-server-and-test package and add it to package.json. In the scripts field, "test": "start-server-and-test start http://localhost:3000 cypress" starts the Next.js production server in conjunction with Cypress. Remember to rebuild your application after new changes.

We will be doing the same. After updating package.json as described, it should look something like this:

{
  "name": "cypress-percy-demo",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "cypress:open": "cypress open",
    "cypress:run": "cypress run",
    "percy:cypress": "percy exec -- cypress run",
    "start:server": "serve -l 3000 .",
    "test": "start-server-and-test start http://localhost:3000 percy:cypress"
  },
  "dependencies": {
    "next": "12.0.4",
    "react": "17.0.2",
    "react-dom": "17.0.2"
  },
  "devDependencies": {
    "@percy/cli": "^1.0.0-beta.70",
    "@percy/cypress": "^3.1.1",
    "@types/node": "16.11.7",
    "@types/react": "17.0.35",
    "cypress": "^9.0.0",
    "eslint": "7",
    "eslint-config-next": "12.0.4",
    "serve": "^13.0.2",
    "start-server-and-test": "^1.14.0",
    "typescript": "4.4.4"
  }
}

We will be using the yarn run test command configured above in our CI.

# .github/workflows/main.yml

name: CI
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Install
        run: yarn
      - name: Build Next.js
        run: yarn run build
      - name: Run tests
        uses: percy/exec-action@v0.3.1
        with:
          custom-command: "npm test"
        env:
          PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}


  1. Whenever we push code to the main branch or open a pull request, the test will be triggered.
  2. Install the dependencies and build the Next.js application.
  3. Run the tests.

Note: Please add the PERCY_TOKEN to your Github secrets.

Why do we need to run the test when code gets pushed to the main branch?

Percy needs base screenshots that it can use to compare with the changes sent its way. If it does not have screenshots to compare against, you will have only the single set of screenshots from your pull-request branch.

From Percy docs 👇

We encourage you to run Percy builds for every commit on the main branch, as these provide the baseline builds for the pull request and feature builds.

More info in the docs.

You can also add Percy to your pull/merge requests to be notified when visual changes are detected, and when those changes are approved and ready to merge.

Head to your settings to give Percy access to GitHub or GitLab. Once you’ve given access, connect your project on Percy to your project’s source repository. Then the next time you commit, Percy will show up in your pull/merge request checks:


Since there is no visual difference, you don’t have to approve the build in Percy. Now head over to the pages/index.tsx file, change the p tag content, and send in a pull request.

Once the test runs, you will see the screenshot appear on Percy.


Once you approve it, you will be able to merge the pull request, which will trigger another action to compare the new and old main branch screenshots. Since the new main branch screenshot is the latest one, it will be auto-approved and used as the base screenshot for further comparisons.

What have we achieved so far ?

  • Learnt about visual and e2e testing.
  • How to write your first e2e test using Cypress.
  • How to connect Percy with Cypress.
  • Automating visual test with CI pipeline.

I have attached some reference blog posts below to help you get more familiar with GitHub Actions and creating your own workflow.


That’s pretty much it. Thank you for taking the time to read this blog post. If you found it useful, add ❤️ to it, and let me know in the comments if I have missed something. Feedback on the blog is most welcome.

Link to the repository:

Let’s connect on Twitter:


  1. Cypress framework tutorial – BrowserStack
  2. Next.js docs – Testing
  3. Visual testing with Percy – DigitalOcean
  4. Creating your first GitHub Action

Source link

pip stuff you might need to know


pip is the standard package manager for Python. It allows you to install and manage additional packages that are not part of the Python standard library. The concept of a package manager might be familiar to you if you are coming from other languages. For example, JavaScript uses npm for package management.

pip3 vs pip

pip is also the CLI command that you will use to interact with pip, and it comes in several variants.

> pip install pandas
> pip2 install pandas
> pip3 install pandas

The thing to note here is that pip3 operates on a Python 3 environment only, just as pip2 does with Python 2. pip (without the 2 or 3) operates contextually: for example, if you are in a Python 3 virtual environment, pip will operate on that Python 3 environment.

But pip3 can still be ambiguous, right? What if I have both Python 3.7 and Python 3.8?

Yes, that’s correct. Let’s say I have two versions of Python installed, like Python 3.7 and 3.8. Now, if you were to type pip or pip3 in your terminal, it’s not very clear which Python interpreter gets invoked.

And this is why you’ll see many developers use python -m pip, which executes pip using whatever interpreter python resolves to. You can also provide the full path to the interpreter, like /usr/bin/python3.7 -m pip, instead of relying on an alias.
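
You can see exactly which interpreter a given python (and therefore python -m pip) refers to by asking Python itself:

```python
import sys

# `python -m pip` always runs pip under the interpreter that launched it,
# so this path tells you precisely which environment pip will modify.
print(sys.executable)          # e.g. /usr/bin/python3.8 (varies by system)
print("%d.%d" % sys.version_info[:2])
```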

Source link

3 Soft Skills To Succeed as a Developer

1. Creativity
When problems surface, a creative developer knows that solutions likely already exist. And if one doesn’t, the developer isn’t afraid to come up with a new solution.

As a software developer, solutions aren’t handed over to you to mindlessly code. Instead, you must explore possibilities, weighing different technologies and your team’s skills. Once you have gained enough experience to know what technologies exist, creatively combining them becomes easier.

As a full-time software developer, you cannot fail — if a problem exists in your code, there is a solution, and you will find it. This will be the true test of your creativity.

2. Reliability
In a team, people rely on you to get your work done, especially when you promise to complete a task. If you’re reliable, no one will need to check up on your progress, as you’ve proven you can take on responsibilities.

Leaders want software developers who don’t need any babysitting. They want direct reports who agree to do something and then follow through on their commitment. Believe it or not, many people aren’t reliable, so being a reliable developer will make you the go-to person for new tasks and opportunities.

3. Stellar Communication
All software is built by teams composed of people with different ideologies, beliefs, biases, and experiences.

The best software developers communicate complex technical concepts to non-technical folks or technical ones who are still learning. You will go far as a developer if you can communicate across roles and teach others.

Source link

Last night I dreamt that somebody merged my PR…

I created my first open source pull request (PR) on October 11. When I saw the automated tests transition one-by-one from processing to passed, I was already hooked. The notification came through a few hours later that my PR had been approved and merged, and it was official: I wanted to contribute to more projects, learn new languages, and master every new-to-me developer productivity tool under the sun.

A classmate had recently completed a ‘100 Days of Code’ challenge. I’d considered doing something similar, but hadn’t come up with a good focus. After that first PR merge, I found my focus: create 100 non-trivial pull requests in 100 days.

(Why specify ‘non-trivial’? It’s surprisingly easy to automate PRs that correct common typos or formatting errors without ever even cloning the repository. I was determined that each PR in my challenge would require me to build and use a local development environment.)

I’m almost 20 days into my challenge. I’ve created 30 pull requests in 29 repositories, and 26 of them have been approved and merged. (3 are awaiting review and 1 was rejected.) The projects I’ve contributed to range from personal portfolios and practice projects to large open-source apps and Chrome extensions used by millions of people.

Here are some things I’ve learned:

100 PRs in 100 days is a breakneck pace

It limits contributions to relatively easy tasks, like hunting down a small UI bug or adjusting CSS breakpoints to make a layout more responsive. Harder tasks like refactoring take a few days. I’m OK with my contributions being relatively minor in this challenge, because I’m going for breadth at this stage instead of depth.

Everything breaks, all the time

If a project has contributing guidelines that outline how to set up your local development environment, follow them TO THE LETTER. You never know when completing steps in an order that shouldn’t break the setup process in theory will, in practice, break the setup process. Guides about setting environment variables should be read very closely.

Contributors are flaky

I’ve seen so many issues that have been claimed and then abandoned. The issue owner will check in with the assignee and get no response, then eventually un-assign it. A few days later someone new will come along asking to be assigned, and the same process will start again. Best practice is to write a note to set expectations on timeline and blockers. If solving the issue will involve multiple commits, it might also be a good idea to create a draft pull request after the first commit so that it’s public knowledge that progress is being made.

Reviewers are flaky too

It took 8 days for my simple formatting changes in O3DE to be reviewed. I’m still waiting (15 days and counting) for an answer & assignment on a Public Labs project. I’ve learned to not be too upset with radio silence – I just move on and appreciate the experience.

Every community is different

Some projects have no contributing guidelines but expect you to know and follow a strict community code. Others have delightfully extensive guides to help onboard new contributors. Repositories sometimes have strict PR templates that must be followed to the letter, and almost all projects require commits to be signed off on (thankfully there is a setting in VSCode that makes that automatic). Becoming familiar with and following several of these divergent community rules each week is a bigger challenge than I’d anticipated.
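
The sign-off trailer can also be added from the command line with git's -s flag; here is a throwaway-repo sketch (the identity below is purely illustrative):

```shell
# Demo `git commit -s` in a temporary repo: -s appends a Signed-off-by trailer.
cd "$(mktemp -d)"
git init -q
git config user.name  "Example Dev"
git config user.email "dev@example.com"
echo "hello" > file.txt
git add file.txt
git commit -q -s -m "Add file"
git log -1 --format=%B
# Add file
#
# Signed-off-by: Example Dev <dev@example.com>
```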

Contributing is incredibly rewarding

In just a few weeks, I’ve had the opportunity to practice so many parts of what the onboarding process at a new job will entail: setting up the environment, learning the codebase, working with new people, and finding issues and tickets that are challenging enough to teach you something new, but not so challenging that you’ll delay others by taking too long to complete them. I am grateful every day that I’ve found a way to practice these skills. If I can contribute 100 PRs in 100 days, I can certainly get fully up to speed on 1 project in the same time frame.


Creating 100 pull requests in 100 days as a SWE-in-training is far tougher than I expected. I’m not certain the format is ideal, either: the time spent in any one repository feels so fleeting. I thought many times about altering the challenge, but ultimately the only change I made was allowing myself two PRs against the same repository (if the issues were significantly different or exceptionally complex for my skill level).

The thing is, a challenge is supposed to be difficult. If it’s easy, it’s not a challenge. And I’m still in the early stages, feeling just as overwhelmed as I was in the first couple weeks of learning JavaScript, R, or C. The beauty of a challenge like this is that it turns something that seems difficult and insurmountable into a regular routine.

I can’t wait to see where it takes me.

Source link

Experimenting with a CSS pure inspect element

CSS, a language that defines style and design, has many interesting functions. Among them are the attr() and counter() functions. Today we’ll use them to create a very simple element inspector in pure CSS.

Note that while JS would be the sane solution here, we’re experimenting with CSS and HTML.

What are these functions?

Both return values when used with the content property. attr() returns the value of a given HTML attribute, while counter() can be used to count elements (e.g. when you want to prepend a number before every h2).

You can read more about them at MDN.

The Idea

If attr() returns an attribute, we can use it to get the class and id from an element, which is one of the main use cases of Inspect Element.

As for counter(), we’ll use it to show the number of the list item in an unordered list.

Writing the HTML

A simple HTML snippet that works to demo our idea:

<div class="css-pure-inspect">
  <div class="hello" id="devto">
    This is a test
  </div>
  <ul>
    <li>First item</li>
    <li>Second item</li>
  </ul>
</div>

Note that I wrapped everything in a div to avoid clashing with other elements and creating a visual mess.


The first thing we are going to do is style the box that will show on hover. For this demo, I used this styling:

.css-pure-inspect *:hover::after {
  padding: 6px;
  position: absolute;
  font-family: sans-serif;
  font-size: 13px;
  background: #fff;
  border: 1px solid #ccc;
  white-space: pre;
}

  • For a real project, the * selector is generally discouraged for performance reasons.
  • white-space: pre; is a trick used together with the \a character escape to create line breaks in CSS content.
  • position: absolute is mandatory to prevent the box from messing with the flow of the document. Depending on the site, you might need a z-index.

Now that we have styled the box, we’re going to show the actual content: CSS classes and IDs, though data attributes or even alt texts could work if you added them.

.css-pure-inspect *[class]:hover::after {
  content: "Classes: " attr(class);
}

.css-pure-inspect *[id]:hover::after {
  content: "ID: " attr(id);
}

.css-pure-inspect *[class][id]:hover::after {
  content: "Classes: " attr(class) "\a ID: " attr(id);
}

  • Yes, you can concatenate plaintext and attributes in content.
  • \a is the line break trick explained above.
  • Some elements have a class, others just an id; the attribute selectors in brackets check for that. And thanks to CSS order and specificity rules, the combined rule overrides the previous ones.

And that’s it for most elements. But we haven’t used the counter() function yet. Remember the HTML ul list we created earlier?

.css-pure-inspect ul {
  counter-reset: list-inspect;
}

.css-pure-inspect li {
  counter-increment: list-inspect;
}

.css-pure-inspect li:hover::after {
  content: "List item #" counter(list-inspect);
}

  • The counter resets at every ul.
  • It increments on every li, and we show its value in the hover box.

Demo result

This could be made more robust. You could manually add every tag and attribute defined in the W3C spec, but you’d probably lose your sanity. Thanks for reading!

Source link

Fractal as an atomic design tool

Recently, I discovered a tool that helped me build a design system: fractal. Described by Rachel Andrew in her article Pattern Library First back in 2018, fractal does look a little old school, but it can be customized and does a good job without getting in your way.

alternatives to fractal

Fractal looks less shiny than Storybook, which I have used for ReactJS projects, but it can easily be used for projects without any JavaScript framework.

Fractal seemed easier, at least to me, to understand and maintain than PatternLab, which I failed to install due to a bug in the current installer (and by the time I managed to install the grunt version, I had already been told about fractal as a possible alternative).

atomic design and design systems

So what are design systems and what is atomic design?
Much has been said and written about CSS methodologies like BEM, ABEM, ITCSS, and utility-based approaches like Tailwind or Bootstrap. Follow the links for further reading, if you like.

agnostic fractal

Fractal is quite agnostic about tools, methods, and coding style, which also allows for a pragmatic approach that does not adhere to one single methodology.

The default setup allows you to build and compose components using handlebars, HTML, and CSS. Fractal can be customized to use any other markup language like Twig or Nunjucks, so you could probably use it for a JAMStack setup with 11ty as well.

boilerplates to start with

Other users have created boilerplates for using ABEM CSS in fractal or ditching handlebars to use fractal with twig templates instead.

To use CSS on a component level, you can add a tool chain of your choice (or just the first copy-and-paste-able example you find on Google), like SASS or PostCSS, together with a build process based on Webpack, Gulp, or plain Node.js.

In my first example, I used a gulp setup with SASS for a quick proof of concept. In a future JAMStack project, I would go for PostCSS to use native CSS 3 / CSSnext features and try to avoid unnecessary tool dependencies.

But still, after changing one’s mind about tools or language choices, any existing code could be refactored easily while keeping the same folder structure.

advantages and suggestions

Apart from its agnostic and pragmatic approach, fractal has some other advantages.

preview theme customization

Fractal’s user interface can be themed / customized, so we do not have to stick to the original UI. We can set colors, logo, and fonts to match our customers’ corporate design before a presentation.

component composition

Components can include other components, so we can build a design system bottom-up, starting with colors, icons, buttons, etc., to be used in forms, paragraphs, sliders, and navigation, which can then be composed into larger blocks and pages.


Components can have variants, either by configuration (in a config file) or by using file names accordingly, like in this example:

  my-component.config.yml (or .json)
  my-component.hbs (default variant)
  my-component.css (classes used by my component)
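For reference, a variant declared in the config file might look like the following sketch (the component title and context values here are made up, following fractal's documented variants format):

```yaml
# my-component.config.yml (hypothetical values)
title: My Component
context:
  heading: "Default heading"
variants:
  - name: with-arrow
    context:
      arrow: true
```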

This can get confusing quickly, but you can (mis)use the default variant to display an overview page.

<!-- my-component.hbs -->

<h2>Component with Arrow</h2>
{{> @my-component--with-arrow }}

<h2>Component with Arrow but without Borders</h2>
{{> @my-component--with-arrow-without-borders }}

Some aspects to consider before choosing fractal:

invalid markup breaks the preview

Some invalid markup can break the whole preview. One single mistyped character inside a handlebars include will show an error message instead of the preview.

component names must be unique

This might be an advantage or a disadvantage, according to your own point of view: while components can be nested and composed, there is no hierarchy.

Instead, all components exist on the same level and share the same namespace, so their technical names have to be unique.

you must do it by yourself

Apart from its agnostic and pragmatic approach being an advantage for me, it might be a disadvantage to you.

Fractal is just a tool, and quite a simple one, at least when you have experience with other tools and frameworks. It is up to you to complete the setup by making further choices and implementations.


Despite fractal not being the latest fad (or maybe even because of that), I have found it to be a practical development and preview tool that does not get in your way.

Source link

Database says NOPE


I joined Virtual Coffee last week, and they have this awesome Zoom meeting that members occasionally spin up for pairing and coworking. A great dev named Travis Martin was adapting an existing project that had an app bundled with a Postgres v9 DB in a docker context, and he was trying to redeploy it in a different context with a newer version of Postgres. By the time I joined the Zoom meeting, the app was having trouble authenticating to Postgres.

I’ve worked with a few different databases before, and I’d contributed to the TAU project in the past, which uses Django and Postgres. As I tried to make suggestions, I referred to a few of the bootstrapping scripts I had encountered on that project, and they helped to some degree in making sure all the pieces were in place on the database server (pasted below):

  • check if the user exists: SELECT COUNT(*) AS count FROM pg_catalog.pg_user WHERE usename = 'db_user';
  • check if the database exists: SELECT COUNT(*) AS count FROM pg_database WHERE datname = 'db_name';
  • create the database if needed: CREATE DATABASE db_name;
  • create the user if needed: CREATE USER db_user WITH ENCRYPTED PASSWORD 'db_pw';
  • assign privileges: GRANT ALL PRIVILEGES ON DATABASE db_name TO db_user; -- use with care
  • update the user's password if needed: ALTER USER db_user WITH ENCRYPTED PASSWORD 'db_pw';
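If you want those checks and creations as one idempotent script, a sketch like the following can work (placeholder names throughout; note that CREATE DATABASE cannot run inside a DO block, so psql's \gexec is used for that part):

```sql
-- create the role only if it is missing
DO $$
BEGIN
  IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'db_user') THEN
    CREATE ROLE db_user LOGIN ENCRYPTED PASSWORD 'db_pw';
  END IF;
END
$$;

-- create the database only if it is missing (\gexec is a psql feature)
SELECT 'CREATE DATABASE db_name OWNER db_user'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'db_name')
\gexec
```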

However, after using statements like these to make sure the DB server was set up correctly, we were still getting the same error message. Travis verified all sorts of things, like whether the app had access to the environment variables he wanted. We got a big clue when he attempted to authenticate to Postgres via the psql command with the app’s credentials and didn’t get an opportunity to enter a password. The trick turned out to be that he was logged into the OS under the same username, configured earlier in the deployment process. As we read further in the Postgres docs, we found that the Postgres configuration file pg_hba.conf had the authentication method set to “ident”, which relies on a separate “ident” service. To get things working, Travis set the authentication method to a different option, more appropriate for clients authenticating with usernames and encrypted passwords.
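For anyone hitting the same wall, the change lives in pg_hba.conf. The exact method Travis picked isn't recorded here, but a password-based entry looks roughly like this (md5 shown; scram-sha-256 is the modern equivalent):

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
# previously the METHOD column here was "ident"
host    all       all   127.0.0.1/32  md5
```

After editing pg_hba.conf, the Postgres server needs a reload for the change to take effect.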

This was a pretty specific use case, but maybe it’ll help somebody!

Source link