Use pg_isready to health check the postgres container

Sample docker-compose file:

services:
  app:
    build:
      context: .
      args:
        - NODE_ENV=development
    command: npm run start
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - .:/opt/node_app/app
      - ./apps/backend/src:/opt/apps/backend/src
      - ./apps/backend/src/package.json:/opt/apps/backend/src/package.json
      - ./apps/backend/src/package-lock.json:/opt/apps/backend/src/package-lock.json
      - notused:/opt/node_app/app/node_modules
    environment:
      - NODE_ENV=development
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      disable: true

  db:
    image: arm64v8/postgres:14.4-alpine
    restart: always
    environment:
      - POSTGRES_USER=test
      - POSTGRES_DB=test_dev
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "sh -c 'pg_isready -U test -d test_dev'"]
      interval: 10s
      timeout: 3s
      retries: 10

volumes:
  notused:



Ever seen a chart with too many data points?

I’ve seen that a few times when someone shares their visualized data on the internet, mainly on Twitter. This month I was tasked with visualizing data points as a line chart. It looks pretty easy and nice when there are only a few data points on it. The problem began to surface when I attempted to plot a significant number of data points: the experience felt a bit sluggish. Not sure if that’s the right term, but that’s how it felt.

I shared this experience with my colleague, and he suggested I look at this library: simplify.js. I took his suggestion and tried it out. Once I put the library in place, the chart experience started to feel better, even with more than one chart displaying real-time data on the page.
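simplify.js combines a radial-distance pass with Douglas-Peucker. As a rough sketch of the core idea (my own minimal version, not the library's API), the radial-distance pass alone already collapses dense runs of points:

```javascript
// Sketch of a radial-distance simplification pass: drop any point that lies
// within `tolerance` of the previously kept point.
function simplifyRadialDistance(points, tolerance) {
  if (points.length <= 2) return points.slice();
  const sqTolerance = tolerance * tolerance;
  let prev = points[0];
  const kept = [prev];
  for (let i = 1; i < points.length; i++) {
    const p = points[i];
    const dx = p.x - prev.x;
    const dy = p.y - prev.y;
    // Keep the point only if it moved far enough from the last kept point.
    if (dx * dx + dy * dy > sqTolerance) {
      kept.push(p);
      prev = p;
    }
  }
  // Always keep the last point so the line ends where the data ends.
  if (kept[kept.length - 1] !== points[points.length - 1]) {
    kept.push(points[points.length - 1]);
  }
  return kept;
}

// Usage: 1,000 closely spaced points collapse to far fewer.
const dense = Array.from({ length: 1000 }, (_, i) => ({ x: i * 0.01, y: Math.sin(i * 0.01) }));
const sparse = simplifyRadialDistance(dense, 0.5);
console.log(dense.length, '->', sparse.length);
```

Fewer points per series means less work for the charting library on every render, which is where the sluggishness usually comes from.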

Use Sqribe to back up and restore Microsoft SQL Server databases on macOS

The struggle was real when I tried to back up a SQL Server database hosted on a cloud server to my local environment on macOS. The database is for development purposes, so I need to keep it in sync with my local DB. If you google it, the steps to get the job done are slightly complicated because I’m not using Windows.

After going down the rabbit hole, I finally found this site: after I optimized my search terms on Google. Sqribe is the tool I wanted for the job because it’s a command-line tool, so I like it very much.

Using the void operator to force a function to return undefined

One more thing I learned from the JavaScript language is the void operator. I can use this operator to force a function to return undefined. A great use case I learned: when you have a function but you don’t want its users to be affected by a side effect of a changing return value (e.g. from undefined to true).

button.onclick = () => void doSomethingCool();

They call this pattern Non-leaking Arrow Functions.
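To see what void buys us, here is a small sketch (with a stand-in doSomethingCool that happens to return a value):

```javascript
// Hypothetical function whose return value changed from undefined to true.
const doSomethingCool = () => true;

// An arrow function body implicitly returns its expression,
// so a handler can accidentally leak that return value to its caller.
const leaky = () => doSomethingCool();       // returns true
const sealed = () => void doSomethingCool(); // always returns undefined

console.log(leaky());
console.log(sealed());
```

With void in place, doSomethingCool can change its return value later without affecting anything that calls the handler.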

jest-haste-map: duplicate manual mock found: next/config

jest will complain if we have more than one manual mock of the same module, even when they are placed in different locations:

NODE_ENV=test ./node_modules/.bin/jest --verbose src/__tests__/client/helpers/newrelic-enabled/loggingUtilNewRelicEnabled.test.js

jest-haste-map: duplicate manual mock found: next/config
  The following files share their name; please delete one of them:
    * <rootDir>/src/__tests__/client/helpers/newrelic-disabled/__mocks__/next/config.js
    * <rootDir>/src/__tests__/client/helpers/newrelic-enabled/__mocks__/next/config.js

 PASS  src/__tests__/client/helpers/newrelic-enabled/loggingUtilNewRelicEnabled.test.js
  newrelicUtil: ENABLE_NEWRELIC is set to true
      ✓ if the required env for New Relic Browser agent is set then return NREUM string (3ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.288s
Ran all test suites matching /src\/__tests__\/client\/helpers\/newrelic-enabled\/loggingUtilNewRelicEnabled.test.js/i.

How the networks look before and after the Docker daemon host joins the swarm

When we initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host: ingress and docker_gwbridge.

Before the docker host joins the swarm:

$  docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ab785d29bc96        bridge              bridge              local
5d40ff921cc4        host                host                local
49e6728b2131        none                null                local

After the docker host joined the swarm:

$  docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ab785d29bc96        bridge              bridge              local
72fd8affc18c        docker_gwbridge     bridge              local
5d40ff921cc4        host                host                local
xb8m6sh7gx22        ingress             overlay             swarm
49e6728b2131        none                null                local

Get a better understanding of the WORKDIR instruction


The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile

The WORKDIR instruction can be used multiple times

Both of the following instructions will effectively set the working directory to /a/b.
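The example itself is missing here; based on the Dockerfile reference, the two equivalent forms are presumably:

```dockerfile
# Relative WORKDIR resolves against the previously set one:
WORKDIR /a
WORKDIR b

# ...which is equivalent to the single absolute form:
WORKDIR /a/b
```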


What if we don’t specify the WORKDIR?

If we don’t specify the working directory there are two scenarios:

  • The WORKDIR will be set to /
  • Or, the WORKDIR will be inherited from the base image

Use the WORKDIR set in the base image

If we don’t specify the working directory but the base image has the instruction, then the working directory set in the base image will be used.

FROM golang:1.7.3 AS builder
RUN pwd
docker build -t test-workdir .
Sending build context to Docker daemon  10.24kB
Step 1/2 : FROM golang:1.7.3 AS builder
 ---> ef15416724f6
Step 2/2 : RUN pwd
 ---> Running in 9f8a0dfc57e0
Removing intermediate container 9f8a0dfc57e0
 ---> 4ceafd2ef636
Successfully built 4ceafd2ef636
Successfully tagged test-workdir:latest

Set the WORKDIR to /

If we don’t specify the working directory and the base image doesn’t have the instruction either, then the working directory will be set to /.

FROM node:alpine3.10 AS builder
RUN pwd
docker build -t test-workdir .
Sending build context to Docker daemon  10.24kB
Step 1/2 : FROM node:alpine3.10 AS builder
alpine3.10: Pulling from library/node
89d9c30c1d48: Pull complete
7708a7b88cf9: Pull complete
1c96b50334bf: Pull complete
a0dc5889fe68: Pull complete
Digest: sha256:ebabd7c287a2852a78aaab721a6326471b9e0347c506c18fb97f7fd11ae5e41a
Status: Downloaded newer image for node:alpine3.10
 ---> 5f8b3338a759
Step 2/2 : RUN pwd
 ---> Running in 82631ae71897
Removing intermediate container 82631ae71897
 ---> 921767519d1a
Successfully built 921767519d1a
Successfully tagged test-workdir:latest

A non-zero exit code will fail the build

This afternoon I was updating the Dockerfile to add the npm audit command and tried to build the image afterward. The interesting behaviour I just learned is that Docker will fail the build when the npm audit command returns a non-zero exit code:

found 34 high severity vulnerabilities in 2544 scanned packages
  run `npm audit fix` to fix 34 of them.
The command '/bin/sh -c npm audit' returned a non-zero code: 1

Here is the Dockerfile code snippet:

FROM dev as test
COPY . .
RUN npm audit && npm audit fix

In my case, I could simply ignore the npm audit exit code by making that part of the RUN command always succeed:

FROM dev as test
COPY . .
RUN (npm audit || true) && npm audit fix

Note that ignoring the exit code makes a lot of sense in this case, while in other scripts we may still want to return the exit code as-is.
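The difference between failing on the exit code and masking it is easy to check in a plain shell, independent of Docker (npm_audit below is a stand-in function that always fails):

```shell
# `&&` propagates the failure: the equivalent RUN would fail the build.
status=0
sh -c 'npm_audit() { return 1; }; npm_audit && echo fixed' || status=$?
echo "exit=$status"   # exit=1, "fixed" never runs

# `|| true` masks the failure, so the next command still runs and RUN exits 0.
sh -c 'npm_audit() { return 1; }; (npm_audit || true) && echo fixed'
echo "exit=$?"        # prints "fixed", then exit=0
```

This is also why the order matters: putting a bare `exit 0` in the middle of the chain would end the shell before the follow-up command gets a chance to run.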