Allow binding to a specified IP #53

Closed
pzduniak opened this issue Jan 25, 2015 · 32 comments
Labels: area/networking, kind/bug, priority/0
Milestone: Release 1.0

@pzduniak

Hey, so far using Rancher has been a great experience, but it's missing a vital feature. In Docker, you can bind to a specific IP by passing -p 10.0.0.1:8080:8080. Entering 10.0.0.1:8080 in the host port field in Rancher's UI does not work. Could this be added?
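
For reference, a minimal example of the Docker CLI form being requested here (the image name is just illustrative):

    docker run -d -p 10.0.0.1:8080:8080 nginx

This publishes container port 8080 only on the host address 10.0.0.1, rather than on all interfaces (0.0.0.0).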

@ibuildthecloud
Contributor

Yes, of course. Anything you can do with "docker run" we want to support. This one has just been overlooked.

@deniseschannon deniseschannon added the kind/enhancement Issues that improve or augment existing functionality label Mar 16, 2015
@deniseschannon deniseschannon changed the title Allow binding to a specified IP [Container] Allow binding to a specified IP Mar 19, 2015
@vincent99 vincent99 changed the title [Container] Allow binding to a specified IP Allow binding to a specified IP Apr 23, 2015
@deniseschannon

@cjellick Is this issue fixed with the native docker feature?

@vincent99
Contributor

@deniseschannon No, native Docker might pick up different IPs, but we still need that exposed in the API and as options in the UI.

@pospisil

Is there any progress on this issue? We use multiple IPs on our hosts, so the ability to set this in the UI would be very helpful. Right now, when I clone a Docker container, the clone doesn't publish any ports.

@deniseschannon deniseschannon added kind/bug Issues that are defects reported by users or that we know have reached a real release and removed kind/enhancement Issues that improve or augment existing functionality labels Jul 7, 2015
@will-chan will-chan added this to the Release 1.0 milestone Jul 7, 2015
@will-chan
Contributor

We are working on a fix but it might not make it into this week's release.

@cjellick

Another issue to look at while we're looking at this:
#1673

@tfiduccia

Version - master 3/22
Not completed - needs backend support.

@tfiduccia

Version - master 3/24
Verified fixed

@danipolo

danipolo commented Aug 4, 2016

Is this still solved? We have multiple public IPs and need to route them to specific containers. How can we do this?

@pwFoo

pwFoo commented Aug 6, 2016

I was able to use -p &lt;IP&gt;:&lt;host port&gt;:&lt;container port&gt;, so it should work fine.

@Dids

Dids commented Oct 20, 2016

Using 0.0.0.0:port:port doesn't work in docker-compose.yml, but works when done from the web UI. Why?
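
(For context, a minimal sketch of the compose form in question; the service name, image, and compose version are placeholders:)

    version: '2'
    services:
      web:
        image: nginx
        ports:
          - "0.0.0.0:8080:8080"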

@farfeduc

farfeduc commented May 5, 2017

I tried to do this on the latest version of Rancher earlier today and couldn't get the IP field to work in the UI. Every time I set something like "192.168.0.2" in the Host IP field of the port binding section, it bound my container to 0.0.0.0 :/

For now I run my service in a standalone container. It would be great if Rancher could use the correct IP.

@drewpalmetto

I have the same issue. I set the IP in the Host IP field, and the entire stack reverts back to the bound IP of the host (which is actually a secondary IP assigned to said host).

@bascht

bascht commented Oct 17, 2017

@drewpalmetto Did you get this fixed? It looks like this could be a regression. I'm running into the same issue with:

    ports:
      - '127.0.0.1:9000:9000/tcp'

resulting in the port 9000 being bound on the public interface.
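
(One way to confirm which address the port actually bound to on the host, assuming shell access; &lt;container&gt; is a placeholder for the container name:)

    ss -tlnp | grep 9000
    docker port <container> 9000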

@farfeduc

farfeduc commented Oct 17, 2017 via email

@drewpalmetto

@bascht No, we never found a workable solution. We ended up switching to Docker Enterprise, and it works fine.

@bascht

bascht commented Oct 18, 2017

Alright, it's a bit odd. If I try the same config for the Rancher load balancer service:

[screenshot: screenshot_20171018_151142]

it works:

[screenshot: screenshot_20171018_151129]
