
Julie removes Placement Constraints #501

Open · Fobhep opened this issue May 19, 2022 · 3 comments

Comments

Fobhep (Contributor) commented May 19, 2022

Describe the bug
The broker has a default placement constraint for new topics.
JulieOps respects it when deploying a new "blank" topic, but removes it when run again.
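
(For context: a broker-provided default and a topic-level override are distinguishable through the Kafka AdminClient, which reports a source for every config entry. A minimal sketch; the broker address and topic name are taken from the repro below, and the class name is made up for illustration:)

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ConfigSourceCheck {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1-participant-0.kafka:9093");
    try (Admin admin = Admin.create(props)) {
      ConfigResource topic =
          new ConfigResource(ConfigResource.Type.TOPIC, "context.src.name.topic2");
      Map<ConfigResource, Config> configs =
          admin.describeConfigs(Collections.singleton(topic)).all().get();
      // source() tells a topic-level override (DYNAMIC_TOPIC_CONFIG) apart from
      // broker-provided values (STATIC_BROKER_CONFIG, DEFAULT_CONFIG, ...).
      configs.get(topic).entries().forEach(e ->
          System.out.printf("%s = %s (source: %s)%n", e.name(), e.value(), e.source()));
    }
  }
}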

To Reproduce
Deploy this descriptor:

context: "context"
source: "src"
projects:
  - name: "name"
    topics:
      - name: "topic2"

and check the config with kafka-topics:

kafka-topics --bootstrap-server broker1-participant-0.kafka:9093 --command-config julie.properties --describe --topic context.src.name.topic2

Topic: context.src.name.topic2	TopicId: urMxYNH_SSOGbV7sr1LVog	PartitionCount: 1	ReplicationFactor: 3	Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000,confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}},{"count":1,"constraints":{"rack":"rack-3"}}],"observers":[]}
	Topic: context.src.name.topic2	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3	Offline: 

Rerun Julie; the log indicates that the config is going to be deleted:

{
  "Operation" : "com.purbon.kafka.topology.actions.topics.UpdateTopicConfigAction",
  "Topic" : "context.src.name.topic2",
  "Action" : "update",
  "Changes" : {
    "DeletedConfigs" : {
      "confluent.placement.constraints" : "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-3\"}}],\"observers\":[]}"
    }
  }
}

Checking with kafka-topics again confirms that the config was deleted:

Topic: context.src.name.topic2	TopicId: urMxYNH_SSOGbV7sr1LVog	PartitionCount: 1	ReplicationFactor: 3	Configs: compression.type=snappy,min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000
	Topic: context.src.name.topic2	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3	Offline: 
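
(For reference, a deletion like the one logged above corresponds, at the AdminClient level, to an incrementalAlterConfigs call with a DELETE op. A minimal sketch; that Julie issues something equivalent is my assumption, and the class name is made up:)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DeletePlacementConstraint {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1-participant-0.kafka:9093");
    try (Admin admin = Admin.create(props)) {
      ConfigResource topic =
          new ConfigResource(ConfigResource.Type.TOPIC, "context.src.name.topic2");
      // OpType.DELETE removes the topic-level value for this config key.
      AlterConfigOp deleteOp = new AlterConfigOp(
          new ConfigEntry("confluent.placement.constraints", ""),
          AlterConfigOp.OpType.DELETE);
      admin.incrementalAlterConfigs(
          Collections.singletonMap(topic, Collections.singletonList(deleteOp)))
          .all().get();
    }
  }
}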

Expected behavior
Julie should never delete a config that is set by default on the broker side!

Runtime (please complete the following information):

julie-ops --version
4.2.5
Fobhep added the bug label May 19, 2022
purbon (Collaborator) commented Jul 31, 2022

Thanks a lot for your report @Fobhep, as always very much appreciated. My current way of thinking is to introduce something like https://github.com/kafka-ops/julie/blob/master/src/main/java/com/purbon/kafka/topology/Constants.java#L6, but for configs.
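
(For illustration only: a hypothetical sketch of what such a list of broker-managed configs and a deletion filter could look like; none of these names exist in JulieOps:)

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class BrokerManagedConfigs {
  // Hypothetical constant, analogous to the list referenced in Constants.java:
  // configs that the broker (or an external tool) may inject and that JulieOps
  // should not schedule for deletion just because a descriptor omits them.
  public static final Set<String> IGNORED_FOR_DELETION =
      Set.of("confluent.placement.constraints");

  // Drops broker-managed keys from the set of configs flagged for deletion.
  public static Map<String, String> filterDeletedConfigs(Map<String, String> deleted) {
    return deleted.entrySet().stream()
        .filter(e -> !IGNORED_FOR_DELETION.contains(e.getKey()))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
  }
}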

In your case, this config was introduced automatically by either the cluster or an external tool. Which case is yours?

Thanks a lot for your continuous help on the project.

purbon (Collaborator) commented Aug 3, 2022

A question: why not manage placement constraints with JulieOps?

---
context: "o"
projects:
  - name: "f"
    consumers:
      - principal: "User:NewApp2"
    topics:
      - name: "t"
        config:
          confluent.placement.constraints:  "{\"version\":1,\"replicas\":[{\"count\":1,\"constraints\":{\"rack\":\"rack-1\"}},{\"count\":1,\"constraints\":{\"rack\":\"rack-2\"}}],\"observers\":[]}"
$ docker exec kafka kafka-topics --bootstrap-server kafka:29092 \
                  --describe --topic o.f.t
Topic: o.f.t	TopicId: dJImanTbSd2sbUjLVDMoVA	PartitionCount: 1	ReplicationFactor: 2	Configs: confluent.placement.constraints={"version":1,"replicas":[{"count":1,"constraints":{"rack":"rack-1"}},{"count":1,"constraints":{"rack":"rack-2"}}],"observers":[]}
	Topic: o.f.t	Partition: 0	Leader: 1	Replicas: 1,2	Isr: 1,2	Offline:

Do you see an operational limitation with this approach? I have tested it: when the config is declared in the descriptor, it does not get deleted.

What do you think?

Removing the bug label for now until we're clear about the reasons and causes behind the issue.

purbon added the under-investigation label and removed the bug label Aug 3, 2022
purbon (Collaborator) commented Aug 3, 2022

Related to #241.
