In this course I learn how to create a Progressive Web Application from scratch. I use client-side HTML, CSS and vanilla JavaScript, together with a Node.js server running the Express framework and the Express-Handlebars templating engine. With these, I build a complete application that can also be used offline. The data is retrieved from a self-selected API and displayed inside the interface.
Visit: PhotoPaint.app
The advantage of using NPM scripts is that a large part of your work can be automated. There are several ways to do this: Grunt, Gulp, PostCSS and many other NPM packages can contribute. With NPM scripts you can convert SCSS to regular CSS so the server can read it and send it to the client, bundle JavaScript files together so that you can work in modules, and of course minify all files.
Build
Inside my project I make use of three different build scripts. The build scripts bundle my code, minify it and store it inside a `dist` folder. The `dist` folder is used for the client: all my client-side code ends up in here, like my CSS, JavaScript, Service Worker, manifest and all the assets. With the `prebuild` command I remove the currently existing `dist` folder. The `build` command performs the three tasks and stores all the needed files inside the `dist` folder.
"prebuild": "rimraf ./dist",
"build": "npm-run-all build:static:css build:static:js build:assets",
"build:static:css": "node scripts/build_css.js",
"build:static:js": "webpack --config webpack.config.js",
"build:assets": "node scripts/build_assets.js",
Watch
To watch all the files in development mode, without having to run the `build` command after every change, the `watch` command helps you. With this script, the `chokidar` package watches all the files on your localhost and rebuilds them as soon as a change is made. The single `watch` command covers every change inside the JavaScript, CSS or assets folder.
"watch": "run-p watch:*",
"watch:js": "chokidar 'public/js/*.js' --command 'npm run build:static:js'",
"watch:css": "chokidar 'public/css/*.css' && 'public/css/pages/*.css' --command 'npm run build:static:css'",
"watch:assets": "chokidar 'public/**/*.*' --command 'npm run build:assets'"
Dev
To open the application in development mode, you run the `dev` command. This uses the nodemon package, so whenever a server file changes, the server restarts automatically and all changes are immediately visible. When the application is deployed, for example to Heroku, the host runs the `start` script to start the server when the project is visited.
"start": "node app.js",
"dev": "nodemon app.js",
For this project, the goal was to convert our WAFS application into a Progressive Web App. For this it is important that the application can also be used offline. A service worker and a manifest are essential here.
Manifest
A `manifest.json` is a file that passes information to the browser about your Progressive Web Application and how it should behave when installed on a desktop or mobile device. A manifest file must at least include a name, an icon and a start URL.
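A minimal sketch of such a manifest (the icon paths, colors and sizes below are assumptions, not necessarily the exact values used in this project):
{
  "name": "PhotoPaint",
  "short_name": "PhotoPaint",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a1a1a",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}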
Service Worker
A Service Worker ensures that you as a developer can manage / manipulate network traffic, cache files, add push notifications, and so on.
In my service worker I use an install, an activate and a fetch function. These three functions ensure that the entire application can be used offline, if the user has visited it before. Because my API returns variable results, it will not work completely offline.
The install function ensures that the service worker is installed in the browser; the static files I list in my `cacheFiles` variable are put in the cache memory of the browser. This is a one-time operation, performed when the browser has not yet registered this service worker.
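A sketch of this install step (the cache name and the exact entries in cacheFiles are assumptions):
// service-worker.js, install step (sketch; cache name and file paths are assumptions)
const staticCacheName = 'static-cache'
const cacheFiles = ['/offline', '/css/index.css', '/js/script.js', '/manifest.json']

self.addEventListener('install', (event) => {
  // Pre-cache the static files once, when the service worker is first installed
  event.waitUntil(
    caches.open(staticCacheName).then((cache) => cache.addAll(cacheFiles))
  )
})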
The activate function checks whether the files already in the cache memory of the browser still match the files I want to cache. Outdated and duplicate entries are filtered out, so that every file is stored in the cache only once.
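Continuing the sketch above, the activate step could look roughly like this; it deletes caches whose names no longer match the ones in use:
self.addEventListener('activate', (event) => {
  // Delete caches that are no longer used by this version of the service worker
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(
        keys
          .filter((key) => key !== staticCacheName && key !== 'html-runtime-cache')
          .map((key) => caches.delete(key))
      )
    )
  )
})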
The fetch function provides multiple functionalities: all visited pages are stored in the cache memory of the browser, under the name `html-runtime-cache`. All information of the pages already visited is stored there.
When the user visits the web application offline, they receive an offline page as a response. When the user has previously visited the application, the previously visited pages are loaded from the cache. If the user navigates to a new page that cannot yet be found in the cache memory, the offline page is served instead.
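Put together, a sketch of such a fetch handler (the '/offline' route for the offline page is an assumption):
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      // Try the network first and store the page in the runtime cache
      fetch(event.request)
        .then((response) => {
          const copy = response.clone()
          caches.open('html-runtime-cache').then((cache) => cache.put(event.request, copy))
          return response
        })
        // Offline: serve the cached page, or the offline page when it was never visited
        .catch(() =>
          caches.match(event.request).then((cached) => cached || caches.match('/offline'))
        )
    )
  }
})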
Optimizing with picture source-sets
To optimize all the images in my web application, I used the HTML `picture` element and added the `loading="lazy"` attribute. The advantage is that an image suited to the current resolution is loaded. For example, it is of little use to load a full-HD image on mobile if the viewport is only 400px wide.
On the homepage it had little effect, with a minimal gain of about 50ms. On the detail page, however, it had a big effect: it took more than a full second off the load and saved 5MB in retrieved resources.
<picture>
<source media="(min-width: 760px)" srcset="{{this.src.regular}}" />
<source media="(min-width: 460px)" srcset="{{this.src.small}}" />
<img src="{{this.src}}" alt="{{this.alt}}" loading="lazy" />
</picture>
Optimizing homepage images with a picture source-set
Optimizing the detail page image with a picture source-set
Optimizing page with gzip
The NPM package compression compresses all files rendered by the server. For example, my CSS and JS bundles are compressed before being sent to the client. It gained small improvements on the homepage, and again a blazingly fast render on the detail page.
const compression = require('compression')
app.use(compression())
Lighthouse audit optimization results
In the end I worked on getting my Lighthouse score as high as possible. By running different audits and acting on the feedback, the score improved little by little to the result below.
Folder structure
During the project I've spent a lot of time on creating a good folder structure. Personally, I like it when the code is clean and separated into many modules, so the code files aren't that big. All the server files are located in the `src` directory, and the client-side files inside the `public` folder.
src/
+-- renders/
| +-- pages.js
+-- routes/
| +-- router.js
+-- utils/
| +-- fetch.js
| +-- filter.js
+-- views/
| +-- components/
| | +-- editor/
| | +-- error/
| | +-- home/
| | +-- profile/
| +-- layouts/
| | +-- main.hbs
| +-- pages.hbs
public/
+-- css/
| +-- pages/
| | +-- .css
| +-- index.css
+-- js/
| +-- pages/
| | +-- index.js
| +-- utils/
| | +-- components/
| | +-- filters/
| | +-- storage/
| +-- script.js
+-- icons/
+-- service-worker.js
+-- manifest.json
scripts/
+-- build_assets.js
+-- build_css.js
- Chokidar-CLI
- ESLint
- Prettier
- Gulp
- Gulp Autoprefixer
- Gulp Clean CSS
- Gulp concat
- Gulp uglify
- Nodemon
- npm-run-all
- rimraf
- Webpack
- Webpack-CLI
git clone https://github.com/joordy/progressive-web-apps-2021.git
npm install
To make use of this application, you will need an API key from Unsplash. Check out the .env.example file to see where you have to put the API key.
npm run dev
npm run build
For this project I made use of the Unsplash API for Developers. The API gives access to the world's largest open collection of high-quality photos, completely free. Using different queries, like search and popular, the user can retrieve a lot of information about each image. To get this data into my application I've used the following endpoints:
https://api.unsplash.com/photos/?client_id=${API_KEY}&per_page=33&order_by=popular
https://api.unsplash.com/search/photos/?client_id=${API_KEY}&query=${SEARCH_QUERY}&per_page=33&order_by=popular
https://api.unsplash.com/photos/1gLdTsX3_70?client_id=${API_KEY}
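A sketch of how such a request could be made on the server, assuming node-fetch (v2) and the API_KEY from the .env file; the function name is only illustrative:
// src/utils/fetch.js (illustrative sketch, not necessarily the actual implementation)
const fetch = require('node-fetch') // assumption: node-fetch v2; Node 18+ has a global fetch

const getPopularPhotos = async () => {
  // API_KEY is expected in the environment, loaded from the .env file
  const url = `https://api.unsplash.com/photos/?client_id=${process.env.API_KEY}&per_page=33&order_by=popular`
  const response = await fetch(url)
  return response.json() // resolves to an array of image objects like the one described below
}

module.exports = { getPopularPhotos }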
After requesting the API you will receive an object with a lot of information about the photo. Below is described what all the information means.
image = {
alt_description: , // Alternative description
blur_hash: , // BlurHash placeholder string
categories: , // Image categories
color: , // Dominant color of the image
created_at: , // Created timestamp
current_user_collections: [], // The current user collection
description: , // Image description
downloads: , // Total downloads
exif: , // Camera Settings
height: , // Image height in PX
id: , // Image ID
liked_by_user: , // Liked by user
likes: , // Total likes
links: , // Links to download information
location: , // Location of image
meta: , // Meta information
promoted_at: , // Promoted timestamp
related_collections: , // Related collections with this image
sponsorship: , // Sponsored image
tags: , // Image tags
updated_at: , // Updated timestamp
urls: {}, // All image URLS, thumbs, small, regular, full, raw
user: , // Information about user
views: , // Total image views
width: , // Image width in PX
}
- npm: express-handlebars. (2021, February 16). Npm. Retrieved March 8, 2021, from https://www.npmjs.com/package/express-handlebars
- Harika. (n.d.). How to change the location of views in express handlebars. koderplace.com. Retrieved March 9, 2021, from https://koderplace.com/code-samples/255/how-to-change-the-location-of-views-in-express-handlebars
- Service Workers: an Introduction | Web Fundamentals. (n.d.). Google Developers. Retrieved March 15, 2021, from https://developers.google.com/web/fundamentals/primers/service-workers/
- Bauer, D. (2020, August 19). Why npm Scripts? CSS-Tricks. Retrieved March 22, 2021, from https://css-tricks.com/why-npm-scripts/
- divio. (n.d.). How to force HTTPS with Express.js - Developer Handbook Documentation. docs.divio.com. Retrieved March 23, 2021, from https://docs.divio.com/en/latest/how-to/node-express-force-https/
This repository is licensed under the MIT license. Developed by Jordy Fronik © 2021.