
Migrate from Webpack to Vite #879

Closed

Conversation

hyperupcall (Contributor) commented Apr 3, 2023


New Edits

  • I have interactively rebased the previous commits and split them up (with extra comments in the Git descriptions) to hopefully make things easier to follow
  • As noted below, performance improves even more than the benchmarks indicate, thanks to upgrading from Vite 4.2 to 4.3!
  • From my testing, there are no remaining issues with these changes, so this PR has been undrafted 🚀

Description

This migrates the build system from Webpack to Vite. The web ecosystem as a whole is moving away from Webpack to newer, faster, and more modern solutions like Vite.

Vite is different from Webpack. In general, Webpack is very lenient about what it accepts. For example, when ECMAScript modules and CommonJS modules are mixed in a strange or incorrect way, Webpack still tries to find some way to resolve the imports, construct a module graph, and output the bundle.

On the other hand, Vite fails fast if something is wrong with the imports. Internally, Vite uses the esbuild bundler for the development server and the Rollup bundler when bundling for production. Since there are two bundlers, imports and requires must be compatible with both.

Many of the changes in this PR make the imports more precise so that both Rollup and esbuild are happy. There are still some issues, as mentioned further down.

The Webpack npm scripts are replaced with Vite-specific ones: `npm run vite:dev` and `npm run vite:build`.
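For reference, the scripts section would look roughly like this. This is a sketch: the script names come from the description above, and the `--config` flag comes from the build logs later in this thread; the real `package.json` may differ.

```json
{
	"scripts": {
		"vite:dev": "vite --config ./src/vite.config.js",
		"vite:build": "vite --config ./src/vite.config.js build"
	}
}
```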

Fixes #871

Some statistics about bundle time and size are shown below for comparison:

Webpack vs Vite Stats

Note that the Vite numbers are now significantly better, since this PR was updated to Vite 4.3: Vite 4.3 performs 40-76% better in benchmarks than version 4.2!

Docker-Compose

Webpack:

  • command: docker-compose up
  • time: 1 minute 19 seconds

Vite

  • command: docker-compose up
  • time: 48 seconds

Bundling

Webpack:

  • command: npm run webpack:build
  • time: 58 seconds
  • bundle size: 5.79 MiB (bundle.js)

Vite:

  • command: npm run vite:build
  • time: 17 seconds
  • bundle size: ~4.6 MiB (bundle.js)

Current Issues

  • As mentioned in cde54ed, Vite had a strange issue running on port 9229 (it ran on port 9230 instead). As a workaround, port 9230 was also exposed in the docker-compose file. Since running the server through docker-compose seems to be the most-documented method, this wasn't a big deal, but it should still be fixed eventually. (This has since been fixed in eb0f709 by modifying the docker-compose.yml file.)

For Further Investigation

  • As mentioned in d0375a3, plotly.js takes up about 3.5 MiB of bundle space. In the future, this could be addressed.
  • In the future, routes within RouteComponent.tsx could be lazy-loaded (React.lazy) for improved performance. This isn't related to the Vite migration, but the thought came up.

Type of change

(Check the ones that apply by placing an "x" instead of the space in the [ ] so it becomes [x])

  • Note merging this changes the node modules
  • Note merging this changes the database configuration.
  • This change requires a documentation update

Checklist

(Note what you have done by placing an "x" instead of the space in the [ ] so it becomes [x]. It is hoped you do all of them.)

  • I have followed the OED pull request ideas
  • I have removed text in ( ) from the issue request

client.js (outdated)

```js
database: process.env.OED_DB_TEST_DATABASE,
host: process.env.POSTGRES_HOST,
port: process.env.POSTGRES_PORT,
user: 'postgres',
```

Check failure (Code scanning / CodeQL): Hard-coded credentials (Critical). The hard-coded value "postgres" is used as user name.
huss (Member) commented Apr 22, 2023

Thanks to @hyperupcall for taking this on and helping OED move forward. I'm sorry for not working on this sooner. The project has been focused on v1.0 release. This change is non-trivial and still seems to have an issue associated with it. Given the v1.0 timeline and the need to verify this change does not cause issues at production sites, I am putting this on hold until after v1.0. It should then receive the attention it deserves and I hope we can move forward with it. If anyone wants to look at the issues noted in the PR then that would be great.

hyperupcall (Contributor, Author) replied:

@huss No worries, it would be good to get v1.0 out first. I've been meaning to rebase over main, clean up the commits, and fix the last remaining issues, but I haven't gotten around to it yet.

Vite is stricter about the imports it accepts. The Moment imports must be updated to only default-import Moment (not namespace-import it).
Defining the Redux store in the same file as the app creates problems with Vite: circular imports do not work properly with HMR (hot module replacement). Upstream tracks this issue at vitejs/vite#3033; the workaround is to define and export the Redux store from a separate file, which is exactly what this commit does, circumventing the circular import issue.
Vite didn't play nicely with these dynamic requires, so convert them to static imports. Also change them to ESM imports for consistency.
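The shape of that change, sketched with illustrative module names (the locale files below are stand-ins, not the actual modules touched by the commit):

```javascript
// Before (a dynamic require; the bundlers can't see the module edge):
//   const messages = require('./locales/' + locale + '.js');
//
// After: import every candidate statically and select from a map, so both
// esbuild and Rollup know exactly which modules are in the graph:
//   import en from './locales/en';
//   import fr from './locales/fr';
// Inlined stand-ins keep this sketch runnable:
const en = { greeting: 'Hello' };
const fr = { greeting: 'Bonjour' };

const locales = { en, fr };
const locale = 'fr';
const messages = locales[locale];
console.log(messages.greeting); // Bonjour
```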
Properly format files as per the `.editorconfig` file, making the formatting consistent across the codebase.
- Update various `devDependencies`
- Add missing `ini` package used in
  `src/server/models/obvius/processConfigFile.js`
- Use `NotificationSystem` as type (previous value was no longer a type)
`react-notification-system` has a bug in which it specifies React 16 as a `peerDependency`. Since we use React 17, this causes recent versions of `npm` to error on install. Although this can be partially worked around with `--force` or `--legacy-peer-deps`, it is only a workaround. Until upstream fixes this, we can fix it with the `overrides` field. Since React versions are largely backwards-compatible, we simply set the project's current version of React (`$react`) and `react-dom` as the peer dependency.

Similarly, `@formatjs/intl` is not compatible with `typescript@4`. So,
update TypeScript so the `peerDependency` is satisfied.

With these fixes, passing `--legacy-peer-deps` is no longer required.
Yay!
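A sketch of the relevant `package.json` fragment, assuming npm's `overrides` field and its `$package` reference syntax (which resolves to the version the project itself depends on); the exact shape in the PR may differ:

```json
{
	"overrides": {
		"react-notification-system": {
			"react": "$react",
			"react-dom": "$react-dom"
		}
	}
}
```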
When using Babel, `core-js` is a common dependency used for automatically polyfilling modern ECMAScript features. However, it is a transitive dependency, so it shouldn't have been specified manually as it was. We want to uninstall it, but other parts of the codebase use its `escapeHtml` polyfill. So, replace those uses with the `escape-html` package, then remove `core-js`.
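For illustration, here is a stand-in with the same behavior as the `escape-html` package's single exported function (the real code would simply `import escapeHtml from 'escape-html'`):

```javascript
// Stand-in for `escape-html`: escape the five HTML-significant characters.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;') // must run first so entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<b>"hi"</b>')); // &lt;b&gt;&quot;hi&quot;&lt;/b&gt;
```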
The previous `TimeInterval.js` was tricky because it was used on the
server _and_ client. With Vite, the imported `TimeInterval` variables
seemed to always be undefined. It appeared to be a bug with
`TimeInterval.js` being a file with CommonJS exports.

To remedy this, convert the file so it uses ECMAScript modules (thankfully, the
technology is now stable). As a result, imports to this file must be updated as well.
It appears that with Webpack, this improper import was silently fixed. Rollup ostensibly handles it fine, but when running the final bundle, React error 130[1] appears. Fix the import so the error goes away.

[1]: https://legacy.reactjs.org/docs/error-decoder.html/?args%5B%5D=object&args%5B%5D=&invariant=130
The previous version appeared to work fine, but the Lodash documentation prefers the default import. Making this change also removes TypeScript diagnostic 80003[1].

[1]: https://github.com/microsoft/TypeScript/blob/d210074c8844e21662e40e7db27c45d796be31c4/src/compiler/diagnosticMessages.json#L6713
```sh
# host (outside the container)
extra_args="--host"
fi
```

hyperupcall (Contributor, Author):

This wasn't needed before because the Webpack development server automatically listens on the public network interface (not just loopback). We have to tell Vite to do this ourselves.

```js
build: {
	outDir: './server/public',
	commonjsOptions: {
		// exclude: [/TimeInterval.js$/],
```
hyperupcall (Contributor, Author):

This // exclude: [/TimeInterval.js$/], comment will be removed in a later commit.

```diff
@@ -2,8 +2,6 @@
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

-import { localeData } from 'moment';
```
hyperupcall (Contributor, Author):

This import isn't used: the `localeData` variable declared on line 10 shadows it. Remove it, as it could be a source of confusion.

```diff
@@ -3,7 +3,7 @@
 * file, You can obtain one at http://mozilla.org/MPL/2.0/.
 */
 const database = require('./database');
-const { TimeInterval } = require('../../common/TimeInterval');
+const { TimeInterval } = import('../../common/TimeInterval.mjs');
```
hyperupcall (Contributor, Author):

To elaborate a bit more on the extended commit message: because `TimeInterval.mjs` uses ES modules, we can only import from it using ESM-style imports (not CommonJS `require`).
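One caveat about the pattern in the excerpt above: in a CommonJS file, `import()` returns a Promise for the module namespace, so the named export has to be awaited before it can be destructured. A runnable sketch of that shape, with a built-in module standing in for `../../common/TimeInterval.mjs`:

```javascript
// import() returns a Promise, so a CommonJS file must await (or .then) it
// before destructuring the named exports. node:path stands in for the real
// TimeInterval.mjs here so the sketch is self-contained.
async function loadJoin() {
  const { join } = await import('node:path');
  return join('a', 'b');
}

loadJoin().then((joined) => console.log(joined)); // 'a/b' on POSIX systems
```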

```yaml
# Don't bring this up without the DB
- "3000:3000"
- "9229:9229" # Development port; should be commented out for production
- "9230:9230"
```
hyperupcall (Contributor, Author):

This is one of the issues that I'll mention in my update comment: in Docker, Vite does not listen on port 9229 because it is "already in use", so as a fallback it listens on the next port up, 9230. As a workaround, expose port 9230 so things work.

hyperupcall (Contributor, Author):

This issue has been fixed in eb0f709

As mentioned in the comments of the code, the default chunking strategy for Rollup seems poor. This adds a bit of code to split up the bundle, which should decrease loading times, especially when using HTTP/2 and HTTP/3.

BEFORE:

```txt
> open-energy-dashboard@0.8.0 client:build
> vite --config ./src/vite.config.js build

vite v4.2.2 building for production...
✓ 1357 modules transformed.
server/public/index.html                     0.89 kB
server/public/assets/index-77128f32.css    160.91 kB │ gzip:    24.60 kB
server/public/assets/index-a8f247e1.js   4,886.14 kB │ gzip: 1,494.16 kB
```

AFTER:

```txt
vite v4.2.2 building for production...
✓ 1357 modules transformed.
server/public/index.html                           1.49 kB
server/public/assets/index-99e4836c.css            6.24 kB │ gzip:     1.61 kB
server/public/assets/bootstrap-2823c1df.css      154.67 kB │ gzip:    23.17 kB
server/public/assets/react-select-1d923cf1.js     56.04 kB │ gzip:    19.32 kB
server/public/assets/moment-fbc5633a.js           59.89 kB │ gzip:    19.37 kB
server/public/assets/reactstrap-07f50228.js       72.08 kB │ gzip:    20.20 kB
server/public/assets/lodash-42c17880.js           72.53 kB │ gzip:    26.73 kB
server/public/assets/react-dom-657fb334.js       119.15 kB │ gzip:    38.50 kB
server/public/assets/index-1836615c.js           319.62 kB │ gzip:    67.34 kB
server/public/assets/vendor-144eeccd.js          446.03 kB │ gzip:   141.38 kB
server/public/assets/plotly.js-e0e78342.js     3,711.26 kB │ gzip: 1,150.87 kB
```
```js
// For js, only make the largest libraries their own chunks (>=50kB)
if (fileName.includes('.js')) {
	if (/^(plotly.js|react-dom|reactstrap|lodash|moment|react-select)$/u.test(moduleName)) {
		return moduleName
```
hyperupcall (Contributor, Author):

This improved chunking strategy shows that plotly.js alone takes up 3.7 MB of the bundle (about 1.15 MB gzipped over the wire), as listed in the extended commit description. I checked fd64495, and that commit didn't seem to be the cause. Perhaps the issue lies within react-plotly.js and how it uses the dependency. 🤔
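The chunking idea in the excerpt above can be sketched as a standalone function. The library list mirrors the excerpt; the `node_modules` path handling is an assumption about the surrounding code, not a quote from the PR:

```javascript
// One chunk per large library (>= ~50 kB each); everything else lands in a
// shared 'vendor' chunk.
const bigLibraries = ['plotly.js', 'react-dom', 'reactstrap', 'lodash', 'moment', 'react-select'];

// Given a module path under node_modules, return the chunk it belongs in.
function chunkFor(modulePath) {
  const match = /node_modules\/([^/]+)/.exec(modulePath);
  if (match && bigLibraries.includes(match[1])) {
    return match[1]; // the library gets its own chunk
  }
  return 'vendor';
}

console.log(chunkFor('node_modules/lodash/map.js'));     // lodash
console.log(chunkFor('node_modules/left-pad/index.js')); // vendor
```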

huss (Member) commented Oct 22, 2023

I know @hyperupcall plans more work, but I wanted to report what happened when I tried to run the latest version:

  • The first time I did docker compose up I got these messages:

```txt
oed-web-1 | NPM install...
oed-web-1 |
oed-dev-server-1 |
oed-dev-server-1 | > open-energy-dashboard@1.0.0 client:dev
oed-dev-server-1 | > vite --config ./src/vite.config.js --host
oed-dev-server-1 |
oed-dev-server-1 | sh: 1: vite: not found
oed-dev-server-1 exited with code 127
oed-web-1 | npm WARN deprecated mini-create-react-context@0.4.1: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
```

but on the second install it was all fine.

  • If I connect to localhost:9229 it seems to partly work (:3000 and :9230 do not) but I get errors in the console similar to:

```txt
oed-dev-server-1 | 5:09:15 PM [vite] http proxy error at /api/version:
oed-dev-server-1 | Error: connect ECONNREFUSED 127.0.0.1:3000
oed-dev-server-1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
```

It also seems it cannot get data from the DB.

huss (Member) commented Oct 27, 2023

Again, thanks to @hyperupcall for taking on this significant task. I've been holding off the package upgrade and the merging of non-essential PRs so this one could move forward without conflicts. I don't want to be pushy, but wanted to ask if you have any idea when there will be further work (or possibly the finishing of this PR)? FYI, I expect more students to be putting in PRs later this semester, and those will be harder to put off.

hyperupcall (Contributor, Author) replied:

@huss You're right, I want to get this done ASAP, honestly before the end of this week. It looks like only a few things remain. Don't hesitate to ping me if there seems to be no progress. I have time right now to finish investigating the remaining errors.

hyperupcall (Contributor, Author) commented Oct 30, 2023

The first time I did docker compose up I got these messages:
...
but on the second install it was all fine.

That's so odd, because 6476534 was supposed to fix that.

If I connect to localhost:9229 it seems to partly work (:3000 and :9230 do not)
...
It also seems it cannot get data from the DB.

It looks like the database fails to initialize. Because the web service has a `depends_on: ['database']`, the database failing to initialize means the web container isn't started, which is why Vite is getting proxy errors.

About port 9230: it is no longer used. It was originally a temporary workaround for a configuration issue that has since been fixed, so 9229 is used just like before. Now, port 8085 is used.

I'll take a look at port 3000. It's supposed to work the same way as before, where it only works after you run `npm run client:build`; it's not supposed to be used when starting the development server.

hyperupcall (Contributor, Author) commented Oct 30, 2023

FWIW, I tried switching to the development branch, and I get the same "attempting to create database" error:

oed-web-1       | Attempting to create database...
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] ERROR:  cannot change return type of existing function
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] DETAIL:  Row type defined by OUT parameters is different.
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] HINT:  Use DROP FUNCTION meter_line_readings_unit(integer[],integer,timestamp without time zone,timestamp without time zone,reading_line_accuracy,integer,integer) first.
oed-database-1  | 2023-10-30 21:04:22.973 UTC [33] STATEMENT:  CREATE OR REPLACE FUNCTION date_trunc_up(interval_precision TEXT, ts TIMESTAMP) RETURNS TIMESTAMP LANGUAGE SQL IMMUTABLE AS $$ SELECT CASE WHEN ts = date_trunc(interval_precision, ts) THEN ts ELSE date_trunc(interval_precision, ts + ('1 ' || interval_precision)::INTERVAL) END $$; CREATE OR REPLACE FUNCTION shrink_tsrange_to_real_readings(tsrange_to_shrink TSRANGE, meter_ids INTEGER[]) RETURNS TSRANGE AS $$ DECLARE readings_max_tsrange TSRANGE; BEGIN SELECT tsrange(min(start_timestamp), max(end_timestamp)) INTO readings_max_tsrange FROM (readings r INNER JOIN unnest(meter_ids) meters(id) ON r.meter_id = meters.id); RETURN tsrange_to_shrink * readings_max_tsrange; END; $$ LANGUAGE 'plpgsql'; CREATE OR REPLACE FUNCTION shrink_tsrange_to_meters_by_day(tsrange_to_shrink TSRANGE, meter_ids INTEGER[]) RETURNS TSRANGE AS $$ DECLARE readings_max_tsrange TSRANGE; BEGIN SELECT tsrange(min(lower(time_interval)), max(upper(time_interval))) INTO readings_max_tsrange FROM daily_readings_unit dr INNER JOIN unnest(meter_ids) meters(id) ON dr.meter_id = meters.id; RETURN tsrange(date_trunc_up('day', lower(tsrange_to_shrink)), date_trunc('day', upper(tsrange_to_shrink))) * readings_max_tsrange; END; $$ LANGUAGE 'plpgsql'; CREATE MATERIALIZED VIEW IF NOT EXISTS daily_readings_unit AS SELECT r.meter_id AS meter_id, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (sum( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (sum( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM 
least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) END AS reading_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (max(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (max(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as max_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (min(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (min(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - 
greatest(r.start_timestamp, gen.interval_start) ) ))) END as min_rate, tsrange(gen.interval_start, gen.interval_start + '1 day'::INTERVAL, '()') AS time_interval FROM ((readings r INNER JOIN meters m ON r.meter_id = m.id) INNER JOIN units u ON m.unit_id = u.id) CROSS JOIN LATERAL generate_series( date_trunc('day', r.start_timestamp), date_trunc_up('day', r.end_timestamp) - '1 day'::INTERVAL, '1 day'::INTERVAL ) gen(interval_start) GROUP BY r.meter_id, gen.interval_start, u.unit_represent ORDER BY gen.interval_start, r.meter_id; CREATE MATERIALIZED VIEW IF NOT EXISTS hourly_readings_unit AS SELECT r.meter_id AS meter_id, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (sum( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (sum( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / sum( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) )) END AS reading_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (max(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 
'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (max(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as max_rate, CASE WHEN u.unit_represent = 'quantity'::unit_represent_type THEN (min(( (r.reading * 3600 / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)))) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN (min(( (r.reading * 3600 / u.sec_in_rate) * extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 hour'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ) / ( extract(EPOCH FROM least(r.end_timestamp, gen.interval_start + '1 day'::INTERVAL) - greatest(r.start_timestamp, gen.interval_start) ) ))) END as min_rate, tsrange(gen.interval_start, gen.interval_start + '1 hour'::INTERVAL, '()') AS time_interval FROM ((readings r INNER JOIN meters m ON r.meter_id = m.id) INNER JOIN units u ON m.unit_id = u.id) CROSS JOIN LATERAL generate_series( date_trunc('hour', r.start_timestamp), date_trunc_up('hour', r.end_timestamp) - '1 hour'::INTERVAL, '1 hour'::INTERVAL ) gen(interval_start) GROUP BY r.meter_id, gen.interval_start, u.unit_represent ORDER BY gen.interval_start, r.meter_id; CREATE EXTENSION IF NOT EXISTS btree_gist; CREATE INDEX if not exists idx_daily_readings_unit ON daily_readings_unit USING GIST(time_interval, meter_id); CREATE OR REPLACE FUNCTION meter_line_readings_unit ( meter_ids 
INTEGER[], graphic_unit_id INTEGER, start_stamp TIMESTAMP, end_stamp TIMESTAMP, point_accuracy reading_line_accuracy, max_raw_points INTEGER, max_hour_points INTEGER ) RETURNS TABLE(meter_id INTEGER, reading_rate FLOAT, min_rate FLOAT, max_rate FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$ DECLARE requested_range TSRANGE; requested_interval INTERVAL; requested_interval_seconds INTEGER; unit_column INTEGER; frequency INTERVAL; frequency_seconds INTEGER; current_meter_index INTEGER := 1; current_meter_id INTEGER; current_point_accuracy reading_line_accuracy; BEGIN SELECT unit_index INTO unit_column FROM units WHERE id = graphic_unit_id; WHILE current_meter_index <= cardinality(meter_ids) LOOP current_point_accuracy := point_accuracy; current_meter_id := meter_ids[current_meter_index]; requested_range := shrink_tsrange_to_real_readings(tsrange(start_stamp, end_stamp, '[]'), array_append(ARRAY[]::INTEGER[], current_meter_id)); IF (current_point_accuracy = 'auto'::reading_line_accuracy) THEN IF (upper(requested_range) = 'infinity') THEN current_point_accuracy := 'daily'::reading_line_accuracy; ELSE requested_interval := upper(requested_range) - lower(requested_range); requested_interval_seconds := (SELECT * FROM EXTRACT(EPOCH FROM requested_interval)); SELECT reading_frequency INTO frequency FROM meters WHERE id = current_meter_id; frequency_seconds := (SELECT * FROM EXTRACT(EPOCH FROM frequency)); IF ((requested_interval_seconds / frequency_seconds <= max_raw_points) OR (frequency_seconds >= 86400)) THEN current_point_accuracy := 'raw'::reading_line_accuracy; ELSIF ((requested_interval_seconds / 3600 <= max_hour_points) AND (frequency_seconds <= 3600)) THEN current_point_accuracy := 'hourly'::reading_line_accuracy; ELSE current_point_accuracy := 'daily'::reading_line_accuracy; END IF; END IF; END IF; IF (current_point_accuracy = 'raw'::reading_line_accuracy) THEN RETURN QUERY SELECT r.meter_id as meter_id, CASE WHEN u.unit_represent = 
'quantity'::unit_represent_type THEN ((r.reading / (extract(EPOCH FROM (r.end_timestamp - r.start_timestamp)) / 3600)) * c.slope + c.intercept) WHEN (u.unit_represent = 'flow'::unit_represent_type OR u.unit_represent = 'raw'::unit_represent_type) THEN ((r.reading * 3600 / u.sec_in_rate) * c.slope + c.intercept) END AS reading_rate, cast('NaN' AS DOUBLE PRECISION) AS min_rate, cast('NaN' AS DOUBLE PRECISION) as max_rate, r.start_timestamp, r.end_timestamp FROM (((readings r INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column) WHERE lower(requested_range) <= r.start_timestamp AND r.end_timestamp <= upper(requested_range) AND r.meter_id = current_meter_id ORDER BY r.start_timestamp ASC; ELSIF (current_point_accuracy = 'hourly'::reading_line_accuracy) THEN RETURN QUERY SELECT hourly.meter_id AS meter_id, hourly.reading_rate * c.slope + c.intercept as reading_rate, hourly.min_rate * c.slope + c.intercept AS min_rate, hourly.max_rate * c.slope + c.intercept AS max_rate, lower(hourly.time_interval) AS start_timestamp, upper(hourly.time_interval) AS end_timestamp FROM (((hourly_readings_unit hourly INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column) WHERE requested_range @> time_interval AND hourly.meter_id = current_meter_id ORDER BY start_timestamp ASC; ELSE RETURN QUERY SELECT daily.meter_id AS meter_id, daily.reading_rate * c.slope + c.intercept as reading_rate, daily.min_rate * c.slope + c.intercept AS min_rate, daily.max_rate * c.slope + c.intercept AS max_rate, lower(daily.time_interval) AS start_timestamp, upper(daily.time_interval) AS end_timestamp FROM (((daily_readings_unit daily INNER JOIN meters m ON m.id = current_meter_id) INNER JOIN units u ON m.unit_id = u.id) INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = 
    unit_column)
    WHERE requested_range @> time_interval AND daily.meter_id = current_meter_id
    ORDER BY start_timestamp ASC;
    END IF;
    current_meter_index := current_meter_index + 1;
    END LOOP;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION group_line_readings_unit (
    group_ids INTEGER[],
    graphic_unit_id INTEGER,
    start_stamp TIMESTAMP,
    end_stamp TIMESTAMP,
    point_accuracy reading_line_accuracy,
    max_hour_points INTEGER
)
RETURNS TABLE(group_id INTEGER, reading_rate FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
    meter_ids INTEGER[];
    requested_range TSRANGE;
    requested_interval INTERVAL;
    requested_interval_seconds INTEGER;
    meters_min_frequency INTERVAL;
BEGIN
    SELECT array_agg(DISTINCT gdm.meter_id) INTO meter_ids
    FROM groups_deep_meters gdm
    INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id;
    IF (point_accuracy = 'auto'::reading_line_accuracy OR point_accuracy = 'raw'::reading_line_accuracy) THEN
        requested_range := shrink_tsrange_to_real_readings(tsrange(start_stamp, end_stamp, '[]'), meter_ids);
        IF (upper(requested_range) = 'infinity') THEN
            point_accuracy := 'daily'::reading_line_accuracy;
        ELSE
            requested_interval := upper(requested_range) - lower(requested_range);
            requested_interval_seconds := (SELECT * FROM EXTRACT(EPOCH FROM requested_interval));
            IF (requested_interval_seconds / 3600 <= max_hour_points) THEN
                point_accuracy := 'hourly'::reading_line_accuracy;
            ELSE
                point_accuracy := 'daily'::reading_line_accuracy;
            END IF;
            IF (point_accuracy = 'hourly'::reading_line_accuracy) THEN
                SELECT min(reading_frequency) INTO meters_min_frequency
                FROM (meters m INNER JOIN unnest(meter_ids) meters(id) ON m.id = meters.id);
                IF (EXTRACT(EPOCH FROM meters_min_frequency) > 3600) THEN
                    point_accuracy = 'daily'::reading_line_accuracy;
                END IF;
            END IF;
        END IF;
    END IF;
    RETURN QUERY
    SELECT gdm.group_id AS group_id,
        SUM(readings.reading_rate) AS reading_rate,
        readings.start_timestamp,
        readings.end_timestamp
    FROM meter_line_readings_unit(meter_ids, graphic_unit_id, start_stamp, end_stamp, point_accuracy, -1, -1) readings
    INNER JOIN groups_deep_meters gdm ON readings.meter_id = gdm.meter_id
    INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id
    GROUP BY gdm.group_id, readings.start_timestamp, readings.end_timestamp
    ORDER BY readings.start_timestamp ASC;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION meter_bar_readings_unit (
    meter_ids INTEGER[],
    graphic_unit_id INTEGER,
    bar_width_days INTEGER,
    start_stamp TIMESTAMP,
    end_stamp TIMESTAMP
)
RETURNS TABLE(meter_id INTEGER, reading FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
    bar_width INTERVAL;
    real_tsrange TSRANGE;
    real_start_stamp TIMESTAMP;
    real_end_stamp TIMESTAMP;
    unit_column INTEGER;
    num_bars INTEGER;
BEGIN
    bar_width := INTERVAL '1 day' * bar_width_days;
    real_tsrange := shrink_tsrange_to_meters_by_day(tsrange(start_stamp, end_stamp), meter_ids);
    real_start_stamp := lower(real_tsrange);
    real_end_stamp := upper(real_tsrange);
    num_bars := floor(extract(EPOCH FROM real_end_stamp - real_start_stamp) / extract(EPOCH FROM bar_width));
    real_start_stamp := real_end_stamp - (num_bars * bar_width);
    real_end_stamp := real_end_stamp - bar_width;
    SELECT unit_index INTO unit_column FROM units WHERE id = graphic_unit_id;
    RETURN QUERY
    SELECT dr.meter_id AS meter_id,
        SUM(dr.reading_rate * 24) * c.slope + c.intercept AS reading,
        bars.interval_start AS start_timestamp,
        bars.interval_start + bar_width AS end_timestamp
    FROM (((((daily_readings_unit dr
        INNER JOIN generate_series(real_start_stamp, real_end_stamp, bar_width) bars(interval_start)
            ON tsrange(bars.interval_start, bars.interval_start + bar_width, '[]') @> dr.time_interval)
        INNER JOIN unnest(meter_ids) meters(id) ON dr.meter_id = meters.id)
        INNER JOIN meters m ON m.id = meters.id)
        INNER JOIN units u ON m.unit_id = u.id AND u.unit_represent != 'raw'::unit_represent_type)
        INNER JOIN cik c on c.row_index = u.unit_index AND c.column_index = unit_column)
    GROUP BY dr.meter_id, bars.interval_start, c.slope, c.intercept;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION group_bar_readings_unit (
    group_ids INTEGER[],
    graphic_unit_id INTEGER,
    bar_width_days INTEGER,
    start_stamp TIMESTAMP,
    end_stamp TIMESTAMP
)
RETURNS TABLE(group_id INTEGER, reading FLOAT, start_timestamp TIMESTAMP, end_timestamp TIMESTAMP) AS $$
DECLARE
    bar_width INTERVAL;
    real_tsrange TSRANGE;
    real_start_stamp TIMESTAMP;
    real_end_stamp TIMESTAMP;
    meter_ids INTEGER[];
BEGIN
    SELECT array_agg(DISTINCT gdm.meter_id) INTO meter_ids
    FROM groups_deep_meters gdm
    INNER JOIN unnest(group_ids) gids(id) ON gdm.group_id = gids.id;
    RETURN QUERY
    SELECT gdm.group_id AS group_id,
        SUM(readings.reading) AS reading,
        readings.start_timestamp,
        readings.end_timestamp
    FROM meter_bar_readings_unit(meter_ids, graphic_unit_id, bar_width_days, start_stamp, end_stamp) readings
    INNER JOIN groups_deep_meters gdm ON readings.meter_id = gdm.meter_id
    INNER JOIN unnest(group_ids) gids(id) on gdm.group_id = gids.id
    GROUP BY gdm.group_id, readings.start_timestamp, readings.end_timestamp;
END;
$$ LANGUAGE 'plpgsql';
oed-web-1       | 
oed-web-1       | -----start of npm run createdb output-----
oed-web-1       | 
oed-web-1       | 
oed-web-1       | > open-energy-dashboard@1.0.0 createdb
oed-web-1       | > node ./src/server/services/createDB.js
oed-web-1       | 
oed-web-1       | 
oed-web-1       | -----end of npm run createdb output-----
oed-web-1       | 
oed-web-1       | 
oed-web-1       | FAILURE: creation of database failed so stopping install. Use --continue_on_db_error if you want install to continue
oed-web-1 exited with code 3

Maybe it is possible that changes to the database like df70d75 or 0e3ace8 affected things? It could be that my database is corrupted, so I will have to investigate that.

@hyperupcall
Contributor Author

hyperupcall commented Oct 30, 2023

It appears that the nodemon (start:dev) server already uses 9229 for the debugger.
The Vite dev server isn't consistently using the same port; sometimes the port it's configured to use is "in use", so it gets bumped up to the next one (9230 instead of 9229).

Oops, I missed that - I'll integrate those changes when I get off shift. Pushed those changes. Since nodemon defaults to 9229, we now change our default dev server port to 8085.
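For reference, the port change described above would live in the Vite config. This is a hypothetical sketch, not the actual file contents in this PR:

```javascript
// vite.config.js — hypothetical sketch of the dev-server port settings
// discussed above; the real config in this PR may differ.
import { defineConfig } from 'vite';

export default defineConfig({
	server: {
		port: 8085,       // keep clear of nodemon's default inspector port (9229)
		strictPort: true  // fail fast instead of silently bumping to the next free port
	}
});
```

Setting `strictPort` makes the "port in use" situation an explicit error rather than a silent port bump, which is easier to debug.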

@ChrisMart21 by any chance was the server not restarted when you pulled in the latest changes / switched branches? That might have been why HMR failed

@hyperupcall
Contributor Author

hyperupcall commented Nov 2, 2023

@huss I've been thinking, since this PR contains many changes that aren't just removing Webpack / adding Vite, it might be better to hold off on this PR for now and make separate PRs for those things (like fixes that indirectly help Vite, and formatting inconsistencies). That way, people can submit PRs now without having to worry that things will change under their feet. It might also be possible to run Vite side-by-side with Webpack to smooth the transition, although I haven't tried that before. What do you think?

@huss
Member

huss commented Nov 2, 2023

@hyperupcall Do I understand correctly that you are proposing to create a PR for the other (non-Vite) changes and then see about a Vite PR? I think it is okay to separate out the work, esp. if it helps you move it along. I would try to review any PR in a timely fashion so it can clear. Thanks for thinking about this and let me know if I have the wrong idea.

@hyperupcall
Contributor Author

hyperupcall commented Nov 2, 2023

@huss Yes! I don't want this to be blocking other things, especially if it's other students doing their first Open Source contribution. I've also been busy lately, so I think this strategy would reduce uncertainty on both ends. Most of the work is already done, I can just cherry-pick the bigger things and turn those into new PRs. Do you think this PR should be closed now, or after I submit most of the new PRs?

@huss
Member

huss commented Dec 4, 2023

Thanks to @hyperupcall for continued work. I tried to install this and got:

oed-web-1 | For help, see: https://nodejs.org/en/docs/inspector
oed-web-1 | /usr/src/app/src/server/routes/unitReadings.js:12
oed-web-1 | import * as moment from 'moment'
oed-web-1 | ^^^^^^
oed-web-1 |
oed-web-1 | SyntaxError: Cannot use import statement outside a module
oed-web-1 | at Object.compileFunction (node:vm:352:18)
oed-web-1 | at wrapSafe (node:internal/modules/cjs/loader:1031:15)
oed-web-1 | at Module._compile (node:internal/modules/cjs/loader:1065:27)
oed-web-1 | at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
oed-web-1 | at Module.load (node:internal/modules/cjs/loader:981:32)
oed-web-1 | at Function.Module._load (node:internal/modules/cjs/loader:822:12)
oed-web-1 | at Module.require (node:internal/modules/cjs/loader:1005:19)
oed-web-1 | at require (node:internal/modules/cjs/helpers:102:18)
oed-web-1 | at Object.&lt;anonymous&gt; (/usr/src/app/src/server/app.js:23:33)
oed-web-1 | at Module._compile (node:internal/modules/cjs/loader:1101:14)
oed-web-1 | [nodemon] app crashed - waiting for file changes before starting...

I tried changing the import to be similar to other moment ones:

import moment from 'moment';

but it gave the same error. Any ideas?

Also, the npm ci build failed on GitHub. Not sure exactly why but it seems the package files are inconsistent. I wonder if this is a first time issue due to the changes. Something to figure out at some point.

@hyperupcall
Contributor Author

hyperupcall commented Dec 5, 2023

@huss When running Node.js, the import... syntax is not enabled by default (the client-side code has no such problem because Webpack already understands it). To enable it and prevent the error, one must add "type": "module" to the package.json. But it's not that simple: adding that key/value pair would break other parts of the server that rely on CommonJS. I haven't looked at that file in particular, but it sounds like it should be using CommonJS (const moment = require('moment')).

For now though, I'm only using this PR so people can better see the progress on the migration. Before I merged from development, there were maybe +16,000 changed lines, and now there are only +3,600 changed lines. I am hoping I can soon reduce it to something even closer to zero. The next steps would be to figure out if Babel should be removed/fixed, and to make it so the Moment/Lodash imports work with both Vite and Webpack (I think it would be good to have Vite and Webpack side-by-side for a gradual transition).

In any case, I'll ping you when this is ready for review again. I wouldn't want you to be spending time reviewing this when it's not quite ready :)

@huss
Member

huss commented Dec 5, 2023

@hyperupcall Thanks for the note. I know I let this sit for a long time in the past, so I want to move on it whenever I see an update. I think it is correct that I wait until you let us know that this is ready. Thanks for everything.

@huss huss modified the milestones: 1.1 release, 1.x Mar 22, 2024
@hyperupcall
Contributor Author

Superseded by #1262

Successfully merging this pull request may close these issues.

Improve Build System