@plastikman

… affinity

This commit introduces two related scheduling enhancements:

  1. Add topologySpreadConstraints field to BaseSpec (aggregated cluster) and CommonSpec (disaggregated cluster) for more flexible pod distribution control.

  2. Change affinity behavior to fully respect user-provided configuration:

    • If the user provides affinity, use it as-is without injecting defaults
    • Only apply the default soft podAntiAffinity when the user has not specified any affinity

This allows users to configure scenarios like hard zone spreading with soft node spreading using topologySpreadConstraints, which was not possible before.
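As an illustration of the API surface this enables, the new field might look roughly like the sketch below of `BaseSpec` in api/doris/v1/types.go (the surrounding fields and comments are assumptions, not the exact diff; `CommonSpec` in the disaggregated API gains the same field):

```go
// Sketch of the aggregated-cluster API change; illustrative only.
package v1

import corev1 "k8s.io/api/core/v1"

type BaseSpec struct {
	// ... existing fields such as Replicas, Image, etc. ...

	// Affinity, when set by the user, is now passed through unchanged; the
	// operator only injects its default soft podAntiAffinity when this is nil.
	Affinity *corev1.Affinity `json:"affinity,omitempty"`

	// TopologySpreadConstraints is copied verbatim into the generated pod
	// template, enabling e.g. hard zone spreading combined with soft node
	// spreading.
	TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}
```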

Files modified:

  • api/doris/v1/types.go: Add TopologySpreadConstraints to BaseSpec
  • api/disaggregated/v1/types.go: Add TopologySpreadConstraints to CommonSpec
  • pkg/common/utils/resource/pod.go: Pass through TopologySpreadConstraints, simplify constructAffinity to respect user config
  • pkg/controller/sub_controller/disaggregated_subcontroller.go: Simplify ConstructDefaultAffinity to respect user config

What problem does this PR solve?

Issue Number: close #xxx

Related PR: #xxx

Problem Summary:

The operator does not support topologySpreadConstraints, which I rely on for my infrastructure. Please add support for them.

Release note

None

Check List (For Author)

  • Test
    • Regression test
    • Unit Test
    • Manual test (add detailed scripts or steps below)
    • No need to test or manual test. Explain why:
      • This is a refactor/code format and no logic has been changed.
      • Previous test can cover this change.
      • No code files have been changed.
      • Other reason

This change was tested by applying the patch, building a new operator image, and deploying it in my AKS cluster.

  • Behavior changed:

    • No.
    • Yes.

    In my infrastructure I use topology spread constraints to keep pods scheduled on different nodes and AZs, but the operator does not support this. This PR adds that ability; if no constraints are set, nothing changes.

     topologySpreadConstraints:
     - maxSkew: 1
       topologyKey: kubernetes.io/hostname
       whenUnsatisfiable: ScheduleAnyway
       labelSelector:
         matchLabels:
           app.doris.disaggregated.type: ms
    
     - maxSkew: 2
       topologyKey: topology.kubernetes.io/zone
       whenUnsatisfiable: DoNotSchedule
       labelSelector:
         matchLabels:
           app.doris.disaggregated.type: ms
    

This PR also keeps the pod affinity if it is set in the configuration. I had affinity set to "RequiredDuringSchedulingIgnoredDuringExecution", but the operator was changing it to "PreferredDuringSchedulingIgnoredDuringExecution".
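For concreteness, the intended affinity handling can be sketched roughly as follows (modeled on `constructAffinity` in pkg/common/utils/resource/pod.go; the exact signature, weight, and selector labels here are assumptions):

```go
package resource

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// constructAffinity returns the user-provided affinity untouched when it is
// set, and only builds the default soft podAntiAffinity when it is nil.
func constructAffinity(userAffinity *corev1.Affinity, selectorLabels map[string]string) *corev1.Affinity {
	if userAffinity != nil {
		// Respect user configuration as-is, including
		// requiredDuringSchedulingIgnoredDuringExecution rules.
		return userAffinity
	}
	// Default: prefer spreading replicas across nodes, but still allow
	// co-scheduling when no other node is available.
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
				Weight: 100,
				PodAffinityTerm: corev1.PodAffinityTerm{
					TopologyKey:   "kubernetes.io/hostname",
					LabelSelector: &metav1.LabelSelector{MatchLabels: selectorLabels},
				},
			}},
		},
	}
}
```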

  • Does this need documentation?
    • No.
    • Yes.

Check List (For Reviewer who merges this PR)

  • Confirm the release note
  • Confirm test cases
  • Confirm document
  • Add branch pick label
