WP1.2 Coordination Meeting January 18, 2022

Meeting Report WP1.2 ‘Modelica library for MPC’

1. MEETING SUBJECT, DATE

Subject: WP1.2

Date: 18-01-2022

Location: Teams

Minutes taken by: Lieve Helsen (KU Leuven)

2. PARTICIPANTS

3. AGENDA

  1. BOPTEST

a. Dashboard (Kyle)

b. Controller characterization (Javier)

c. Trials (different controllers and different test cases)

d. Autoregressive approach using high mean error and high standard deviation for Berkeley to be used in the emulator with TMY data (Laura - extended to next meeting)

  2. New developments

a. New KPIs needed: peak power (Dave) - extended to next meeting

b. BOPTEST - extended to next meeting

c. Emulators update?

  3. Outreach

a. Initiatives for joint papers - extended to next meeting

b. Where have we seen BOPTEST popping up? - extended to next meeting

4. REPORT

  1. BOPTEST
  • a. Dashboard

Context: how do we motivate users to share their results such that benchmarking can be done? The NREL dashboard facilitates this; it is still work in progress. Automatic transfer of test case results to the dashboard is available. Kyle explains the current status:

  • Interface has been developed (in-house design), actual design by subcontractor.
  • Weblink for experimenting: http://boptest.kbenne.net/dashboard
  • Copy of workbook: https://colab.research.google.com/drive/1T1gvEU-QmCMCOS5f2Dlv4uI9YBHpOL75#scrollTo=z_ACm1rcn2pt
  • Refinement of language is still needed.
  • A user account can be made.
  • BOPTEST explanation.
  • Latest test results: results are automatically stored if BOPTEST is used on the server, not if run locally. As a user you can choose what you share publicly (all results, a selection, or none).
  • A test run is linked to BOPTEST through an API key (assigned when logged in through a user account, private to the user). The extra argument in test case selection is the API key; test results are loaded in BOPTEST.
  • Documentation is available for each test case, also as a PDF download; the markdowns are not there yet.
  • Filtering is possible in the test results: top level by building type and scenario parameters (based on the selected building type), narrowing down to slices of KPIs.
  • The notion of tags (user-defined): these could be controller method, etc. Selection based on tags is possible; there is not yet a hierarchy of how controller methods are categorized, and tags are empty for now. We still need an API to transfer these tags (the dashboard is able to receive them, but we cannot yet introduce them). A nomenclature of tags was suggested. Or should we enforce it? Then the filtering could be more guided. Yes, enforcing is a plus.
  • Double-clicking on a test result gives additional metadata, together with the KPI ranges.
  • User interface (UI): should additional information be given through the dashboard, or through another BOPTEST API? Maybe the latter is not intuitive to some users and the former is more user-friendly? But there might be use cases (advanced users) where the latter is useful, e.g. when making loops with different parameters. Both options have their value.
  • A free-form tags API exists already (non-graphical API) and might be extended to more structured tags. Providing data from BOPTEST to the API still needs development.
  • Can we add information to the dashboard after the results are obtained? Or set up reusable controllers in a specific account?
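As an illustration of the submission flow described above, the sketch below shows how a user-side script might select a hosted test case while passing a private API key so that the results end up under that user's dashboard account. The endpoint paths, payload fields, and test case name are assumptions for illustration only, not the documented BOPTEST or dashboard API.

```python
# Illustrative sketch only: endpoint paths, payload fields, and the test case name
# are assumptions, not the documented BOPTEST/dashboard API.
import requests

BASE_URL = "http://boptest.kbenne.net"   # experimental server linked above
API_KEY = "your-private-api-key"         # issued via the dashboard user account

# Select a test case, passing the API key as the extra argument so that results
# can be linked to the user's dashboard account (hypothetical path and payload).
resp = requests.post(f"{BASE_URL}/testcases/bestest_air/select",
                     json={"api_key": API_KEY})
test_id = resp.json().get("testid")

# ... run the control loop against the test case (advance the simulation,
# write control inputs, read measurements) ...

# Read the KPIs for this test run (hypothetical path).
kpis = requests.get(f"{BASE_URL}/kpi/{test_id}").json()
print(kpis)
```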

A new plan for further developments by the subcontractor is foreseen in October. The dashboard code is intended to be open source and the repository will be made public: https://github.com/NREL/boptest-dashboard. GitHub issues (bug reports, change requests) can be collected by October. It is anticipated that the BOPTEST work keeps being funded; an active community can help!

How and when do we collect data about the controller? The dashboard needs reliable data, so you need to be in a scenario and that scenario needs to run to the end; only then is there a submission to the dashboard. There seems to be a bug somewhere, potentially when more than one scenario has been started. Giving the user feedback (tailored error messages) through the API about what is happening still needs to be developed. A test ID is used to reference the test case, but this is not the right place to enter the metadata. The correct API to gather extra data about the test is ‘PUT scenario’ (where you select the scenario), but the data are pushed only when the full test has run. Should there rather be a new API to push data to the dashboard? The user needs to indicate with a separate command that the test case is reported; not every case needs to be reported on the dashboard. We all agree: a separate API.
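For reference, here is a minimal sketch of the agreed flow against a locally running BOPTEST instance. The PUT /scenario call reflects the existing BOPTEST API mentioned above; the final submission call is purely a placeholder for the separate reporting API agreed on in this meeting, and its path and payload are assumptions.

```python
# Minimal sketch of the agreed reporting flow. PUT /scenario is the existing
# BOPTEST API referred to above; the final "submit" call is a placeholder for the
# separate dashboard-submission API still to be developed (path/payload assumed).
import requests

URL = "http://127.0.0.1:5000"   # locally running BOPTEST test case

# Select a predefined scenario (available values depend on the test case).
# Results only become reliable for the dashboard once the full scenario
# has been run to its end.
requests.put(f"{URL}/scenario",
             data={"electricity_price": "dynamic",
                   "time_period": "peak_heat_day"})

# ... control loop: repeatedly advance the simulation with control inputs ...

# Hypothetical separate command by which the user explicitly chooses to report
# this test run (not every run needs to be reported on the dashboard).
requests.post(f"{URL}/submit",
              json={"api_key": "your-private-api-key",
                    "tags": ["MPC"]})
```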

  • b. Controller characterization

Starting from Javier’s first draft with simplified branches, which tags should be used to characterize controllers? Structured tags are presented in the schemes below for MPC and for RL; we want to keep it at this high level. The user remains free to give a more detailed description at a lower level, or to choose ‘other’ and provide a free-format text description (a sketch of such structured metadata is given after the list of needed developments below).

Developments needed:

i. Extra API for structured meta-data

ii. Filtering of controllers (building on the existing filtering)
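As a concrete but hypothetical illustration of item i, structured controller metadata could look like the following. The field names and values are placeholders chosen for the example, not the agreed tag scheme.

```python
# Hypothetical example of structured controller metadata following the high-level
# characterization proposed by Javier; field names and values are placeholders,
# not the agreed tag scheme.
controller_metadata = {
    "controller_family": "MPC",        # e.g. "MPC", "RL", "other"
    "model_type": "grey-box",          # lower-level detail is left to the user
    "optimization": "quadratic programming",
    "other": "",                       # free-format text when no predefined tag fits
}
```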

We should agree on terminology, since changes are not easy once the developments have been made. The lower the level we go to, the more debate we will have.

Action Javier: check with Jan Drgona whether he has important remarks on this proposal.

  • c. Trials (different controllers and different test cases): no new trials, warm call to start these!

  • d. Autoregressive approach using high mean error and high standard deviation for Berkeley to be used in the emulator with TMY data (Laura - extended to next meeting)

  2. New developments
  • a. Emulators update? Status:

i. Multi-zone office hybrid (simple): runs for 1 year (but takes a whole day) (Iago & Javier).

ii. Single-zone commercial air-based: ready for review by February 2022 (Dave)

iii. Five-zone commercial air-based: waiting for Filip’s review (Dave)

iv. Multi-zone commercial air-based: iteration in progress (Yeonjin), next review will be done by Konstantin

  3. Outreach
  • a. Another workshop or hackathon that connects to the dashboard is needed; we have to reach out to the AI community! Through Jan Drgona? Climate Change AI community (https://www.climatechange.ai/).
  4. Further issues
  • a. Go beyond the virtual path: real buildings with MPC should also be looked at, and insights should be shared.

  • b. Next meeting: March 8. On the agenda:

i. Finalization of controller characterization (Javier)

ii. How to reach out to AI Community (BOPTEST & dashboard)? (Jan)

iii. New developments on the software side: new KPIs, BOPTEST ... (Dave)

iv. Autoregressive approach using high mean error and high standard deviation for Berkeley to be used in the emulator with TMY data (Laura)
