
Error 'Unable to retrieve stack details' on stack details #1457

Closed
ardehaha opened this issue Dec 3, 2017 · 3 comments
Comments


ardehaha commented Dec 3, 2017

Description

The current version (1.15.3) appears to have an issue handling stacks whose names include 'docker'.

Steps to reproduce the issue:

  1. Create a stack named 'docker-compose-ui' or 'docker_compose_ui';
  2. On the 'Stack List' page, click the stack to display its details;
  3. The error 'Unable to retrieve stack details' pops up.


Technical details:

  • Portainer version: 1.15.3
  • Target Docker version (the host/cluster you manage): 17.09.0-ce
  • Platform (windows/linux): linux
  • Command used to start Portainer (docker run -p 9000:9000 portainer/portainer): the docker-compose.yml below

version: '3'
services:
  web:
    image: "portainer/portainer:1.15.3"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "9000:9000"

  • Target Swarm version (if applicable):
  • Browser: Chrome 62.0.3202.94
deviantony (Member) commented

I was able to reproduce this, will investigate.


SWW13 commented Mar 12, 2018

I still have this issue with 1.16.3 and 1.16.4.
Tested with stack names docker-registry, docker_registry and dockerregistry.

Let me know if you need more details.

@deviantony deviantony reopened this Mar 12, 2018
deviantony (Member) commented

That is a regression introduced via eb43579#diff-070ff54db6055aeba28f539a86e5d2a6

Will be fixed ASAP.

@deviantony deviantony modified the milestones: 1.15.4, 1.16.x Mar 12, 2018
@deviantony deviantony modified the milestones: 1.16.x, 1.16.5 Apr 1, 2018
xAt0mZ pushed a commit that referenced this issue Aug 25, 2022
* Initial kaas poc

* add linode and digital-ocean feature flags

* move cloud api keys to CloudApiKeys subsettings

* change hardcoded /tmp usage to os.TempDir() (#1415)

After investigation, it seems that creating a temporary file is
unfortunately required due to our usage of k8s.io/client-go/tools/clientcmd.

I moved the temporary file creation to a single location in
CreateKubeClientFromKubeConfig and switched to os.TempDir() to fix
the Linux-specific /tmp usage.
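
A minimal sketch of that pattern, assuming a hypothetical helper writeTempKubeconfig (the real change lives inside CreateKubeClientFromKubeConfig):

```go
package kube

import "os"

// writeTempKubeconfig is a hypothetical helper showing the pattern:
// clientcmd needs a real file on disk, so one is still created, but
// os.CreateTemp with an empty dir argument places it under os.TempDir(),
// which resolves correctly on Linux, macOS, and Windows instead of
// hardcoding /tmp.
func writeTempKubeconfig(data []byte) (string, error) {
	f, err := os.CreateTemp("", "portainer-kubeconfig-*")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return "", err
	}
	return f.Name(), nil
}
```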

* change /api/kaas to /api/cloud (#1416)

This renames the actual API endpoint and API docs from /api/kaas to
/api/cloud, but we may also want to change the term in the rest of the
code. It seems strange to use both terms interchangeably.

* change linode region names (#1418)

https://portainer.atlassian.net/browse/EE-2898

This makes it possible to see where in the US you're deploying your cluster.

* fix(kaas): blacklist linode's nanode 1GB EE-2899 (#1422)

* fix(kaas): blacklist linode's nanode 1GB

The default node size selected when deploying a cluster with Linode was
"Nanode 1GB". This node type cannot actually be used for a Kubernetes
cluster and instead returned an error message. This change simply skips
adding it to the list of node sizes.

Per Matt's feedback, I added a comment explaining why we skip this node size (see the sketch below).
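
A hedged sketch of the skip; filterNodeSizes and the plain string slice are illustrative, not Linode's actual API types:

```go
// filterNodeSizes drops node sizes that cannot host a Kubernetes cluster.
func filterNodeSizes(sizes []string) []string {
	valid := make([]string, 0, len(sizes))
	for _, s := range sizes {
		// Linode's "Nanode 1GB" cannot run a cluster; selecting it only
		// returns an error, so we never add it to the list of node sizes.
		if s == "Nanode 1GB" {
			continue
		}
		valid = append(valid, s)
	}
	return valid
}
```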

* feat(kaas): Put Linode and Digital Ocean behind feature flag EE-2900 (#1426)

* fix(kaas): record user activity EE-2883 (#1425)

Add user activity logging and ensure that API keys are redacted

* feat(kaas): Add support to create other KaaS without needing Civo EE-2893 (#1433)

Add support for configuring other cloud providers without Civo

* feat(database): add persistent kaas task queue EE-2877 (#1428)

Persist the KaaS creation tasks so Portainer will be able to continue where it left off if restarted

* feat(kaas): display-cloud-api-tokens-after-entry EE-2892 (#1430)

send a correct settings put request
display placeholder text for saved api keys
create react-query hooks to handle requests

* feat(kaas): cache cluster info EE-2920 (#1435)

Loading the KaaS environment creation screen took 5-10 seconds because it
fetches the same info from the KaaS provider each time: information like
what regions the provider has, what their node sizes are, etc. This
information doesn't change often, so I've added a simple cache
mechanism to the backend.

It uses a map of CloudProvider to Info. Each time a request is made, the
provider is looked up in the map. If the info does not exist yet, it is
fetched live as before. If it does exist, the cached info is sent back
instantly and a request is made to update the cache.
Currently the cache is updated every 2 hours. A sketch of the mechanism follows.
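
A minimal sketch of that cache; the Info fields, the infoCache name, and the fetch hook are assumptions, and only the serve-then-refresh behaviour comes from the description above:

```go
package cloud

import "sync"

// Info holds provider details such as regions and node sizes.
type Info struct {
	Regions   []string
	NodeSizes []string
}

type infoCache struct {
	mu    sync.RWMutex
	infos map[string]Info                     // keyed by cloud provider name
	fetch func(provider string) (Info, error) // live API call
}

func (c *infoCache) GetInfo(provider string) (Info, error) {
	c.mu.RLock()
	info, ok := c.infos[provider]
	c.mu.RUnlock()
	if !ok {
		// Cache miss: fetch live, exactly as before the change.
		return c.refresh(provider)
	}
	// Cache hit: answer instantly, then refresh in the background.
	go c.refresh(provider)
	return info, nil
}

func (c *infoCache) refresh(provider string) (Info, error) {
	info, err := c.fetch(provider)
	if err != nil {
		return Info{}, err
	}
	c.mu.Lock()
	c.infos[provider] = info
	c.mu.Unlock()
	return info, nil
}
```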

* feat(kaas): update-kaas-wordings-and-logo EE-2887 (#1436)

* stack icons and style based on theme
* update scaling

* fix(KaaS): fix fetch info twice EE-2929 (#1439)

* fix fetch info twice

* feat(kaas): add-form-validation-in-the-create-cluster-views EE-2881 (#1440)

* add better name validation

* prevent duplicate env names

* fix(kaas): send-the-correct-network-id EE-2931 (#1443)

* update selected network

* update selected network with region change

* feat(kaas): add concurrency mechanism EE-2922 (#1447)

Refactor cluster creation logic and improve error handling and flow

* add polling in environments (#1452)

* ignore cloudapikeys in update settings (#1454)

* feat(provisioning): add new provisioning states and struct EE-2884 (#1437)

This adds status codes 3 and 4 and provides a StatusMessage type that
gives more detail about the current status.

* feat(kaas): updating-wizard EE-2888 (#1456)

* fix incorrect regex

* reuse create wizard form

* reset form values onSubmit

* fix wizard tile overflow for small devices

* add api keys to wizard

* useSettings hook

* show loading when loading for the first time

* update placeholder text

* disable autocomplete on api inputs

* pass correct apikey when creating a cluster

* feat(cloud): add retry limit and improve reliability EE-2879 (#1457)

Add a retry mechanism for failed calls (a sketch is below). Switch to logrus and improve log messages. Fix race conditions.
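
A hedged sketch of such a bounded retry loop; the helper name, attempt count, and delay are assumptions, while the logrus usage matches the switch described above:

```go
package cloud

import (
	"time"

	log "github.com/sirupsen/logrus"
)

// withRetries retries fn up to attempts times, sleeping delay between tries.
// It returns nil on the first success, or the last error once the retry
// limit is exhausted.
func withRetries(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		log.Warnf("cloud API call failed (attempt %d/%d): %v", i+1, attempts, err)
		time.Sleep(delay)
	}
	return err
}
```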

* feat(KaaS): filter non provisioned environments on home page EE-2889 (#1471)

* create kaas env form (#1472)

* feat(cloud): allow the user to choose kubernetes version EE-2956 (#1467)

Add the ability to override the Kubernetes version. Also add Cypress tags and other minor tweaks.

* feat(kaas): add-analytics-to-the-front-end EE-2949 (#1463)

* add analytics
* Update the analytics event timing for POs

* feat(endpoint): allow deleting an endpoint with a blank URL without error EE-2963 (#1475)

Allow deleting an endpoint with a blank URL without error

* use latest in DO and Civo which both support that

* feat(kaas): improve error handling while provisioning the cluster EE-2884 (#1465)

This builds upon earlier work on making the cluster provisioning async. It updates endpoint.Status to include two new values:

3 - indicating an ongoing cluster provision
4 - indicating a failed cluster provision (or any environment which has failed to the point of needing manual intervention)

A new StatusMessage is set to a value describing the current progress while provisioning, or an error message should the provisioning fail. StatusMessage is a struct with Summary and Detail strings.

Additionally, a new CloudProvider value is attached to Endpoints, with a Name and URL describing the cloud provider that provisioned the endpoint. A sketch of these types is below.
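
A sketch of those types, inferred from the description above rather than copied from the actual source:

```go
package portainer

// New endpoint.Status values added by this change.
const (
	EndpointStatusProvisioning = 3 // an ongoing cluster provision
	EndpointStatusError        = 4 // a failed provision needing manual intervention
)

// StatusMessage describes provisioning progress, or the failure reason.
type StatusMessage struct {
	Summary string
	Detail  string
}

// CloudProvider describes the cloud provider that provisioned the endpoint.
type CloudProvider struct {
	Name string
	URL  string
}
```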

* civo doesn't accept "latest" as the version string and produces a confusing error

* fix log messages

* styling changes (#1476)

* refactor cache GetInfo (#1477)

This hopefully makes the flow clearer. If the cached info cannot be
looked up, it is fetched live instead. Additionally, I've added a
detailed comment explaining why we update the cache each time we
serve it. I also fixed a race condition in Linode and Digital Ocean that
had been fixed before, but only for Civo.

* update tooltip area

* don't leave a blank URL with cog. add a message for endpoints that take a while to update

* skip 1gb nodes for digital ocean

They are not actually valid. They return an error which is an issue
since they happen to be the default value.

* remove review comments that have been fixed

* add more provider info for frontend EE-2959

* table details

* change order of widget

* icon styling

* add button

* update styling

* remove comments

* filter out 1gb amd and intel nodes

* renamed cloudprovision to cloudprovisioning

* improve civo instance display names

* filter out g3 civo nodes and print description prefix

* add data-cy back in

* add service.Update() for updating the cache.

* change manual info cache update to 12 hours

* backend fixes for review comments

* mod tidy

* fix recursive call typo

* remove feature-flags on the backend

* remove feature flag for Linode and DO in the FE

* improve digital ocean node names (#1484)

* go mod tidy

* refactor(kaas): improve-form-logic EE-2550 (#1488)

* finish refactor with inner form

* integrate api keys with create cluster

* add stale time, fix invalidation

* fix build issue

* fix broken go tests

* fix types

* prettier

* feat(kaas): kaas agent version environment variable (#1493)

This allows using the environment variable KAAS_AGENT_VERSION=2.10 or
similar to override the version of the Portainer agent which will be
used when deploying new clusters. This should make testing new releases
much easier.

I would've preferred to read this environment variable in cmd/portainer/main.go,
store it inside the CloudSetupService, and then pass it to the
DeployPortainerAgent function, but unfortunately DeployPortainerAgent is
part of an interface from CE, so it seems better to look up the
variable in DeployPortainerAgent when we actually use it than to
change that interface. A sketch of the lookup is below.

For automatic version bumping we just need to change
`const defaultKaasVersion = "2.12"` to the current released agent version.
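
A sketch of that lookup; the agentVersion helper is illustrative, while the constant name comes from the commit message:

```go
package cloud

import "os"

// Bump this constant to the current released agent version for
// automatic version bumping.
const defaultKaasVersion = "2.12"

// agentVersion honours the KAAS_AGENT_VERSION override, looked up at the
// point of use rather than threaded through the CE interface.
func agentVersion() string {
	if v, ok := os.LookupEnv("KAAS_AGENT_VERSION"); ok && v != "" {
		return v
	}
	return defaultKaasVersion
}
```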

* respect analytics settings

* fix errors, small refactor still required

* update datastore test json again

* prettier wants this fixed

* change default agent version to current patch version

* update prettierignore

Co-authored-by: Dakota Walsh <101994734+dakota-portainer@users.noreply.github.com>
Co-authored-by: Prabhat Khera <91852476+prabhat-org@users.noreply.github.com>
Co-authored-by: testA113 <83188384+testA113@users.noreply.github.com>
Co-authored-by: testA113 <alex.harris@portainer.io>
Co-authored-by: Dakota Walsh <dakota.walsh@portainer.io>
Co-authored-by: Prabhat Khera <prabhat.khera@portainer.io>