From b31afac5f110f198e60a6bdc965a8292dcc05a03 Mon Sep 17 00:00:00 2001
From: Konrad Kleine <193408+kwk@users.noreply.github.com>
Date: Tue, 9 Oct 2018 16:27:02 +0200
Subject: [PATCH] Update core.yaml

# About

This description was generated using this script:

```sh
#!/bin/bash
set -e
GHORG=${GHORG:-fabric8-services}
GHREPO=${GHREPO:-fabric8-wit}
cat <
```

```
... count(case fields->>'system.state' when 'closed' then '1' else null end) as Closed
FROM "work_items"
  left join iterations on fields @> concat('{"system.iteration": "', iterations.id, '"}')::jsonb
WHERE (iterations.space_id = $1 and work_items.deleted_at IS NULL)
GROUP BY IterationId
```

This query is so slow (~55 to 60 ms) and appears so often that the log of slow queries is cluttered with it. This change addresses that by limiting the work items that are queried to the current space. The counting is also simplified by using PostgreSQL's `FILTER` expression (see https://www.postgresql.org/docs/9.6/static/sql-expressions.html).

We switched from a `LEFT JOIN` to an `INNER JOIN` because work items can be discarded from the search result if they don't have an iteration assigned (thanks to @jarifibrahim for measuring this here: https://github.com/fabric8-services/fabric8-wit/pull/2294#issuecomment-423544436).

No functionality should have changed, which is why all the tests should continue to work as expected.

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/7fd5f0bfc27256b27a393d49f5f7c291770b5667
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-21T18:40:20+02:00

Remove the duration field kind (fabric8-services/fabric8-wit#2289)

We don't have any use for the duration field kind. No space template uses this field type so far. The duration causes us far more problems than it does good; especially in making the type system rock-solid, the duration has always caused problems.
This is because we tried to store a duration as the `int64` it really is (see https://godoc.org/time#Duration). The problem with this is that the underlying fields are stored in a JSONB structure, and JSON numbers only support `float64`. When we assigned an `int64` to a `float64`, the duration was rounded to the nearest representable `float64`, which is not accurate.

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/fce0b6a058771dcd6907224ba20e974973f4c360
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-21T19:12:51+02:00

Double the amount of allowed time for a pod to be ready (fabric8-services/fabric8-wit#2295)

from 60 seconds to 120 seconds.

See fabric8-services/fabric8-wit#2291

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/07f931d323d4fe8a4cf6b03a60cceb1ee608393c
**Author:** Ibrahim Jarif (jarifibrahim@gmail.com)
**Date:** 2018-09-25T18:38:25+05:30

Improve DeleteSpace error messages (fabric8-services/fabric8-wit#2297)

Minor improvements to the errors returned from the Delete action on the space controller.

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/cd7a01bc85da4d639239e143771bdab76a64c0b0
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-25T15:38:13+02:00

10x the amount of allowed time for a pod to be ready (fabric8-services/fabric8-wit#2298)

from 120 seconds to 1200 seconds.

See fabric8-services/fabric8-wit#2291

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/2a954824f5c699c6b4ce4bb21ebdc104025bbbb7
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-26T09:38:38+02:00

Use existing number sequences instead of looking them up again (fabric8-services/fabric8-wit#2299)

In order to avoid a sequential table scan on the `work_items` DB table, we take the already calculated values for the new `number_sequences` table from the old `work_item_number_sequences` table.
Before, this was the query plan for the `INSERT` into the new `number_sequences` table:

```
EXPLAIN SELECT space_id, 'work_items' "table_name", MAX(number) FROM work_items WHERE number IS NOT NULL GROUP BY 1,2;
+--------------------------------------------------------------------------------+
| QUERY PLAN                                                                     |
|--------------------------------------------------------------------------------|
| GroupAggregate  (cost=37097.49..38835.71 rows=37629 width=52)                  |
|   Group Key: space_id, 'work_items'::text                                      |
|   ->  Sort  (cost=37097.49..37437.97 rows=136193 width=52)                     |
|         Sort Key: space_id                                                     |
|         ->  Seq Scan on work_items  (cost=0.00..20824.93 rows=136193 width=52) |
|               Filter: (number IS NOT NULL)                                     |
+--------------------------------------------------------------------------------+
```

and now it is:

```
EXPLAIN SELECT space_id, 'work_items' "table_name", current_val FROM work_item_number_sequences GROUP BY 1,2;
+--------------------------------------------------------------------------------------------------------------------------------+
| QUERY PLAN                                                                                                                     |
|--------------------------------------------------------------------------------------------------------------------------------|
| Group  (cost=0.29..3541.66 rows=43872 width=52)                                                                                |
|   Group Key: space_id, 'work_items'::text                                                                                      |
|   ->  Index Scan using work_item_number_sequences_pkey on work_item_number_sequences  (cost=0.29..3322.30 rows=43872 width=52) |
+--------------------------------------------------------------------------------------------------------------------------------+
```

Thanks go out to @jarifibrahim for bringing this sequential table scan to my attention.
See https://github.com/fabric8-services/fabric8-wit/issues/2291

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/cdef2aeb89120d9de594cbf257e17e8271a80c1f
**Author:** Elliott Baron (ebaron@redhat.com)
**Date:** 2018-09-26T14:43:12-04:00

Add deployments API to compute quota necessary for scale-up (fabric8-services/fabric8-wit#2286)

This PR adds a new API serviced by the deployments controller at `/deployments/spaces/{spaceID}/applications/{appName}/deployments/{deployName}/podlimits`. Calling this API on a particular deployment determines the CPU and memory resources required in order to add a new pod to the deployment. This allows the front-end to decide whether to stop the user from attempting to scale up a deployment if they do not have sufficient quota to do so successfully.

Example usage:

Request:
```
GET https://openshift.io/api/deployments/spaces/$SPACE/applications/$APP/deployments/run/podlimits
```

Response:
```
{"data":{"limits":{"cpucores":1,"memory":262144000}}}
```

This work was initially done by @chrislessard; I have added tests and some modifications.

Fixes: openshiftio/openshift.io#3388

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/8ea84f1a182c1238ec28bfd7b8fa00b5cd0e76f5
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-28T10:29:51+02:00

Remove work item link category concept (fabric8-services/fabric8-wit#2301)

For over two years we haven't used the link category concept, and it adds no value except a theoretical one that can be realized differently by directly modifying a link type; hence I remove this concept.
This involves removal of:

- the category relationship in the link type's design (package: `design`)
- the category endpoints (packages: `design` and `controller`)
- the repository for categories (package: `workitem/link`)
- the foreign key constraint on the `link_category_id` column of the `work_item_link_types` table

**NOTE:** the `work_item_link_categories` table and the `link_category_id` column on the `work_item_link_types` table will **NOT** be removed in this change. That is the subject of a follow-up change. The reason is that we want old and new pods to run against the same database. (Thank you to @xcoulon for reminding me of that.)

Also, a lot of golden files were updated because the sequential part of the UUIDs caused a shift in the UUID numbering.

This will fix https://github.com/openshiftio/openshift.io/issues/4299 the hard way :)

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/e483be8f1947aa05e52cd738ac9b8fd35c0e919f
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-09-28T12:18:06+02:00

Acquire exclusive lock on spaces, areas, iterations and work_item_number_sequences tables to avoid deadlock during migration (fabric8-services/fabric8-wit#2303)

In order to avoid the following deadlock situation, migration 106 now acquires an exclusive lock on the `spaces`, `iterations`, `areas` and `work_item_number_sequences` tables.

Legend:

- relation `36029` = `spaces`
- relation `36042` = `iterations`

```
Process 875 waits for AccessExclusiveLock on relation 36029 of database 13322; blocked by process 5634.
Process 5634 waits for AccessShareLock on relation 36042 of database 13322; blocked by process 875.
```

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/b80e543668a36c223ef95cf866e85458323d4cd6
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-10-01T12:39:11+02:00

Have convert.EqualValue and convert.CascadeEqual (fabric8-services/fabric8-wit#2285)

## About

We need a way to compare whether the object stored in the database is the same as the one we have loaded from a space template. In the template we don't care about when an object was created. That's why we need a method that is agnostic to the `gormsupport.Lifecycle` and `Version` fields.

In this change I've extended the `convert.Equaler` interface with a function called `EqualValue` that has the same signature as `Equal`. When implementing `EqualValue`, one should focus only on the values that make up the object to compare. For example, if an object has a `Version` and a `gormsupport.Lifecycle` member, those are good candidates to be ignored inside `EqualValue`.

If an object has nested members that themselves implement the `convert.Equaler` interface, we want to call either `Equal` or `EqualValue` on them, depending on what the outer, containing object is compared with. The `convert.CascadeEqual` function takes care of that.

## Minor additional edits

Some implementations of `Equal` tested the wrong object because they didn't isolate the data properly using subtests. I've fixed that. Also, some implementations of `Equal` didn't test for `Version` or `Lifecycle` differences.
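The split between `Equal` and `EqualValue` can be sketched as follows. This is a minimal, hypothetical example: the real interface lives in the project's `convert` package, and the `Iteration` type here is only a stand-in.

```go
package main

import "fmt"

// Equaler mirrors the shape of the convert.Equaler interface described above.
type Equaler interface {
	Equal(u Equaler) bool      // compares everything, including Version
	EqualValue(u Equaler) bool // ignores Version/Lifecycle-style metadata
}

// Iteration is a hypothetical stand-in for a DB-backed object that carries
// a Version field alongside its actual values.
type Iteration struct {
	Name    string
	Version int
}

func (i Iteration) Equal(u Equaler) bool {
	other, ok := u.(Iteration)
	return ok && i.Name == other.Name && i.Version == other.Version
}

func (i Iteration) EqualValue(u Equaler) bool {
	other, ok := u.(Iteration)
	return ok && i.Name == other.Name // Version deliberately ignored
}

func main() {
	fromDB := Iteration{Name: "Sprint 1", Version: 3}
	fromTemplate := Iteration{Name: "Sprint 1", Version: 0}
	fmt.Println(fromDB.Equal(fromTemplate))      // false: versions differ
	fmt.Println(fromDB.EqualValue(fromTemplate)) // true: values match
}
```

A cascading helper in the spirit of `convert.CascadeEqual` would then pick one of the two methods for nested `Equaler` members, depending on which comparison the outer object is performing.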
----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/54f03e1dbc9ed28e73a2f56570db7869b58616ea
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-10-01T13:19:30+02:00

empty commit to trigger build (fabric8-services/fabric8-wit#2308)

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/8e81220909349e2fbb2b2411b85735b5608296fb
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-10-01T14:47:15+02:00

Revert "Introduce number column for area and iteration and allow searching it (fabric8-services/fabric8-wit#2287)" (fabric8-services/fabric8-wit#2307)

This reverts commit a917dfe49bfeaf5715f9822138b1599e94af00b9 (aka fabric8-services/fabric8-wit#2287). We're experiencing trouble migrating the DB for this change (see https://gitlab.cee.redhat.com/dtsd/housekeeping/issues/2349). Currently the prod-preview database is not being updated and stalls at the migration from 105 to 106 (the number for iteration and area).

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/af5e62d7dcf2366ac4f387407f8b41bafe495e7e
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-10-01T17:10:20+02:00

Add number_sequences table and nullable 'number' column on areas and iterations tables (fabric8-services/fabric8-wit#2309)

This adds back the `number_sequences` table and the `number` columns on the `areas` and `iterations` tables (known from fabric8-services/fabric8-wit#2287 but then reverted), but it does so in three individual steps. This change is backwards compatible with the old code because the `number` column is nullable. Except for this structural change, no data is changed.
----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/c824993728eff6a632a241cbb82360dd2d103d37
**Author:** Elliott Baron (ebaron@redhat.com)
**Date:** 2018-10-01T16:43:57-04:00

Add API to show resource usage for the current space (fabric8-services/fabric8-wit#2306)

This PR adds a deployments-related API under `/deployments/environments/spaces/:spaceID`. This endpoint returns resource usage for each of the user's environments, divided into usage by the specified space and usage by all other spaces combined. Currently it only returns CPU and memory usage, not objects such as pods or secrets. The older `/deployments/spaces/:spaceID/environments` endpoint has been left in place for backwards compatibility until the front-end is switched over completely.

Example usage:

Request:
```
GET https://openshift.io/api/deployments/environments/spaces/$SPACE_ID
```

Response:
```
{
  "data": [
    {
      "attributes": {
        "name": "run",
        "other_usage": {
          "cpucores": { "quota": 2, "used": 0.488 },
          "memory": { "quota": 1073741824, "used": 262144000 },
          "persistent_volume_claims": { "quota": 1, "used": 0 },
          "replication_controllers": { "quota": 20, "used": 6 },
          "secrets": { "quota": 20, "used": 9 },
          "services": { "quota": 5, "used": 3 }
        },
        "space_usage": {
          "cpucores": 1.488,
          "memory": 799014912
        }
      },
      "id": "run",
      "type": "environment"
    }
  ]
}
```

This work was started by @chrislessard; I have put the finishing touches on it and opened the PR on his behalf.

Fixes: openshiftio/openshift.io#3129

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/e75e0d152ee904a87e2bcb09b32f8777ba3ba0ab
**Author:** Baiju Muthukadan (baiju.m.mail@gmail.com)
**Date:** 2018-10-03T17:25:48+05:30

Query language support for child iteration & area (fabric8-services/fabric8-wit#2182)

Query language support to fetch work items that belong to an iteration and its child iterations. Similarly, areas can be used. Example:

```
{"iteration": "name", "child": true}
```

The `child` flag defaults to true.
----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/8795c9f883ba2a4f9092041193701320abf4b761
**Author:** Ibrahim Jarif (jarifibrahim@gmail.com)
**Date:** 2018-10-08T19:16:45+05:30

Add more logs for Delete action on space controller (fabric8-services/fabric8-wit#2310)

This PR adds more logs to the delete action on the space controller.

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/a661437b58e2a4f2e16b65872b16b4e8da944b0f
**Author:** Ibrahim Jarif (jarifibrahim@gmail.com)
**Date:** 2018-10-09T12:48:37+05:30

Enum.ConvertFromModel should use Basetype.ConvertFromModel method (fabric8-services/fabric8-wit#2224)

The `ConvertFromModel` method on the enum type now uses the `ConvertFromModel` method of the base type instead of `ConvertToModel`.

----

**Commit:** https://github.com/fabric8-services/fabric8-wit/commit/1371e8243fb9f68775a50a8eca738193db706164
**Author:** Konrad Kleine (193408+kwk@users.noreply.github.com)
**Date:** 2018-10-09T16:24:11+02:00

drop constraint before modifying the path (fabric8-services/fabric8-wit#2312)

Based on [this discussion](https://chat.openshift.io/developers/pl/6m4cojtiotdoueqot8uww5ybpe), we first remove the unique name index on the `iterations` and `areas` tables before we modify the data. This should fix the migration issue that was visible here: https://github.com/openshiftio/saas-openshiftio/pull/1082

----
---
 dsaas-services/core.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dsaas-services/core.yaml b/dsaas-services/core.yaml
index fce7ab1c..01d1a573 100644
--- a/dsaas-services/core.yaml
+++ b/dsaas-services/core.yaml
@@ -1,5 +1,5 @@
 services:
-- hash: 164762f67a3a7634fa4ee1e8bb55c458281803c7
+- hash: 1371e8243fb9f68775a50a8eca738193db706164
   name: fabric8-wit
   path: /openshift/core.app.yaml
   url: https://github.com/fabric8-services/fabric8-wit/