
Use new instance of Decompressor on channel initialization #3583

Merged: 9 commits into opensearch-project:main on Oct 25, 2023

Conversation

@cwperks (Member) commented Oct 22, 2023

Description

Resolves an issue with decompression that can lead to concurrent gzipped requests failing. This removes the `@Sharable` annotation from the `Netty4ConditionalDecompressor` and creates a new instance of the decompressor on channel initialization.

`Netty4ConditionalDecompressor` is an `HttpContentDecompressor`, which is a subclass of `HttpContentDecoder`, a stateful handler. Netty docs on the `@Sharable` annotation: https://netty.io/4.0/api/io/netty/channel/ChannelHandler.Sharable.html
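
For illustration, a minimal sketch of why the per-channel instance matters (class and pipeline handler names here are made up, not the plugin's actual code): a Netty `ChannelInitializer` runs once per accepted connection, so constructing the decompressor there gives each channel its own decoding state instead of one shared instance.

```java
// Minimal sketch only; ExampleHttpChannelInitializer and the handler names are illustrative.
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.HttpContentDecompressor;
import io.netty.handler.codec.http.HttpServerCodec;

public class ExampleHttpChannelInitializer extends ChannelInitializer<Channel> {
    @Override
    protected void initChannel(Channel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast("codec", new HttpServerCodec());
        // A fresh, non-@Sharable decompressor per channel: HttpContentDecompressor
        // extends HttpContentDecoder, which keeps per-connection decoding state,
        // so reusing a single instance across channels can corrupt concurrent
        // gzipped requests.
        pipeline.addLast("decompressor", new HttpContentDecompressor());
    }
}
```

In the plugin, the equivalent change is to instantiate the decompressor during channel initialization in the HTTP transport rather than holding one annotated `@Sharable` instance.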

  • Category (Enhancement, New feature, Bug fix, Test fix, Refactoring, Maintenance, Documentation)

Bug fix

Issues Resolved

- opensearch-project/OpenSearch#10802

Testing

Tested by running OpenSearch with Fluent Bit and `Merge_Log` enabled. See the files below, which can reproduce the issue from the linked error.

I opened this PR as a draft pending an integration test to validate the behavior.

`docker-compose.yml`

```
version: '3'
services:
  opensearch: # This is also the hostname of the container within the Docker network (i.e. https://opensearch-node1/)
    image: opensearchproject/opensearch:latest # Specifying the latest available image - modify if you want a specific version
    container_name: opensearch
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch # Name the node that will run in this container
      - discovery.type=single-node
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
      # - /Users/craigperkins/Projects/OpenSearch/security/build/distributions/opensearch-security-2.11.0.0-SNAPSHOT.jar:/usr/share/opensearch/plugins/opensearch-security/opensearch-security-2.11.0.0.jar
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer
    networks:
      - opensearch-net # All of the containers will join the same Docker bridge network
  fluent-bit:
    image: fluent/fluent-bit
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    depends_on:
      - opensearch
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:

networks:
  opensearch-net:
```

`fluent-bit.conf`

```
[INPUT]
  Name dummy
  Dummy {"top": {".dotted": "value"}}

[OUTPUT]
  Name es
  Host opensearch
  Port 9200
  HTTP_User admin
  HTTP_Passwd admin
  Replace_Dots On
  Suppress_Type_Name On
  Compress gzip
  tls On
  tls.verify Off
  net.keepalive Off

[FILTER]
  Name kubernetes
  Match kube.*
  Buffer_Size 256KB
  Merge_Log On
  Keep_Log On
```

Check List

- [ ] New functionality includes testing
- [ ] New functionality has been documented
- [x] Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check [here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).

Signed-off-by: Craig Perkins <craig5008@gmail.com>

codecov bot commented Oct 22, 2023

Codecov Report

Merging #3583 (78242e8) into main (4f89b4a) will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@             Coverage Diff              @@
##               main    #3583      +/-   ##
============================================
+ Coverage     64.90%   64.92%   +0.01%     
- Complexity     3637     3640       +3     
============================================
  Files           284      284              
  Lines         20618    20616       -2     
  Branches       3390     3390              
============================================
+ Hits          13383    13384       +1     
+ Misses         5555     5551       -4     
- Partials       1680     1681       +1     
| Files | Coverage Δ |
| --- | --- |
| ...curity/http/SecurityNonSslHttpServerTransport.java | 100.00% <100.00%> (ø) |
| .../ssl/http/netty/Netty4ConditionalDecompressor.java | 100.00% <ø> (ø) |
| ...ttp/netty/SecuritySSLNettyHttpServerTransport.java | 85.36% <100.00%> (-0.35%) ⬇️ |

... and 1 file with indirect coverage changes

Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
willyborankin previously approved these changes Oct 23, 2023

@peternied (Member) left a comment

Thanks @cwperks - how do we know that the Header Verifier is safe to keep as is?
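
For context, a general Netty illustration rather than the plugin's actual `Netty4HttpRequestHeaderVerifier`: a handler is only safe to annotate `@Sharable` if it keeps no mutable per-connection state on the handler itself, which is exactly the guarantee the stateful `HttpContentDecoder` cannot make.

```java
// Generic illustration, not the plugin's verifier: safe to share because the
// handler holds no per-connection fields.
import io.netty.channel.ChannelHandler.Sharable;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;

@Sharable
public class StatelessHeaderCheck extends SimpleChannelInboundHandler<HttpRequest> {
    public StatelessHeaderCheck() {
        super(false); // do not auto-release; the message is forwarded untouched
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpRequest msg) {
        if (!msg.headers().contains(HttpHeaderNames.AUTHORIZATION)) {
            // A real verifier could record an early 401 response here instead of
            // letting the request proceed to decompression and the RestHandler.
        }
        ctx.fireChannelRead(msg);
    }
}
```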

Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
@peternied added the backport 2.11 (Backport to 2.11 branch) label on Oct 25, 2023

@stephen-crawford (Collaborator) left a comment

Looks good to me

Signed-off-by: Peter Nied <petern@amazon.com>
…ssor

Signed-off-by: Peter Nied <petern@amazon.com>
Signed-off-by: Peter Nied <petern@amazon.com>
@cwperks added the backport 2.x (backport to 2.x branch) label on Oct 25, 2023

@DarshitChanpura (Member)

Thank you @cwperks & @peternied for fixing this!

@cwperks merged commit 499db78 into opensearch-project:main on Oct 25, 2023
60 checks passed
opensearch-trigger-bot bot pushed a commit that referenced this pull request Oct 25, 2023
(cherry picked from commit 499db78)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
opensearch-trigger-bot bot pushed a commit that referenced this pull request Oct 25, 2023
(cherry picked from commit 499db78)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
cwperks pushed a commit that referenced this pull request Oct 25, 2023
…tion (#3598)

Backport 499db78 from #3583.

---------

Signed-off-by: Craig Perkins <craig5008@gmail.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Peter Nied <petern@amazon.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <dchanp@amazon.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <petern@amazon.com>
Co-authored-by: Darshit Chanpura <dchanp@amazon.com>
RyanL1997 pushed a commit that referenced this pull request Oct 26, 2023
…ation (#3599)

Backport 499db78 from #3583.

---------

Signed-off-by: Craig Perkins <craig5008@gmail.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Peter Nied <petern@amazon.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <dchanp@amazon.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <petern@amazon.com>
Co-authored-by: Darshit Chanpura <dchanp@amazon.com>
@wbeckler

Is backporting this to 2.11 a violation of semver, as outlined in OpenSearch's patch version policy, which says "Small improvements and features will be backported to a new minor version.... Security fixes will be backported to a new patch version"?

Maybe this should only go into 2.12, so as to not violate the expectation that 2.11.1 would only contain security fixes.

@cwperks (Member, Author) commented Oct 27, 2023

@wbeckler 2.11.1 is being created for this fix: opensearch-project/opensearch-build#4161 (comment)

@bbarani (Member) commented Oct 27, 2023

As per the official SemVer guidelines, "Patch version Z (x.y.Z | x > 0) MUST be incremented if only backward compatible bug fixes are introduced. A bug fix is defined as an internal change that fixes incorrect behavior."

@dblock (Member) commented Oct 30, 2023

@bbarani that assumes that we make x.y.z.N for security fixes, doesn’t it?

peternied added a commit to peternied/security that referenced this pull request Nov 21, 2023
…ation (opensearch-project#3599)

Backport 499db78 from opensearch-project#3583.

---------

Signed-off-by: Craig Perkins <craig5008@gmail.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Peter Nied <petern@amazon.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <dchanp@amazon.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <petern@amazon.com>
Co-authored-by: Darshit Chanpura <dchanp@amazon.com>
willyborankin pushed a commit that referenced this pull request Nov 27, 2023
…requests (#3418) (#3675)

### Description

Includes:
- Backport f7c47af of #3418
- Backport 2dab119 of #3717
- Backport f27dee2 of #3583

---

Previously, unauthorized requests were fully processed and only rejected
once they reached the RestHandler. This allocates more memory and
resources for requests that are already known to be unauthorized. Using
the headerVerifier and decompressor customization from [1], an early
authorization check is performed while only the headers are available,
an 'early response' is saved for transmission, and decompression of the
request body is skipped to speed up closing out the connection.

```mermaid
graph TD

    oA["Receive Request Headers<br>(Orginal)"] --> oB[Decompress Request]
    oB --> oC[RestHandler]
       oC --> osrf[Intercept Request]
    subgraph sp[Security Plugin]
       osrf --> oD[Check Authorization]
       oD --> oE{Authorized?}
       oE -->|Yes| oF[Process and Respond]
       oE -->|No| oG[Reject Request]
   end
   oF --> oH[Forward to Request Handler]



    H["Receive Request Headers<br>(Updated)"] --> I[HeaderVerifier]
    subgraph nsp[Security Plugin]
       I --> J{Authorized?}
       J -->|Yes| K[Decompress Request]
       J -->|No| N[Save Early Response]
    end
    K --> L[RestHandler]
    N --> L
    L --> M[Intercept Request]
    subgraph n2sp[Security Plugin]
       M --> n2D["Check Authorization<br>(Cached)"]
       n2D --> nE{Authorized?}
       nE -->|Yes| nF[Process and Respond]
       nE -->|No| nG[Reject Request]
   end
   nF --> nH[Forward to Request Handler]

class oA,oB old;
class H,I,K,N,n2D new;
classDef old fill:#f9d0c4,stroke:#f28b82;
classDef new fill:#cfe8fc,stroke:#68a9ef;

```
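
To make the "skip decompression when already rejected" step concrete, here is a hedged sketch (the class name and the `BooleanSupplier` wiring are assumptions for illustration, not the actual `Netty4ConditionalDecompressor`) of how a decompressor can fall back to pass-through based on a per-channel decision made earlier in the pipeline:

```java
// Hedged sketch only; not the plugin's real implementation.
import java.util.function.BooleanSupplier;

import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.HttpContentDecompressor;

public class ConditionalDecompressorSketch extends HttpContentDecompressor {
    private final BooleanSupplier shouldDecompress;

    public ConditionalDecompressorSketch(BooleanSupplier shouldDecompress) {
        this.shouldDecompress = shouldDecompress;
    }

    @Override
    protected EmbeddedChannel newContentDecoder(String contentEncoding) throws Exception {
        if (!shouldDecompress.getAsBoolean()) {
            // Treat the body as "identity": HttpContentDecompressor returns null for
            // encodings it does not recognize, and HttpContentDecoder then passes the
            // content through untouched, so an already-rejected request is never inflated.
            return super.newContentDecoder("identity");
        }
        return super.newContentDecoder(contentEncoding);
    }
}
```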

### Issues Resolved
- Related #3559

### Check List
- [X] New functionality includes testing
- [ ] ~New functionality has been documented~
- [X] Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made
under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and
signing off your commits, please check
[here](https://github.com/opensearch-project/OpenSearch/blob/main/CONTRIBUTING.md#developer-certificate-of-origin).

---------

Signed-off-by: Peter Nied <petern@amazon.com>
Signed-off-by: Craig Perkins <cwperx@amazon.com>
Signed-off-by: Craig Perkins <craig5008@gmail.com>
Signed-off-by: Peter Nied <peternied@hotmail.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Signed-off-by: Darshit Chanpura <dchanp@amazon.com>
Co-authored-by: Craig Perkins <cwperx@amazon.com>
Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Darshit Chanpura <dchanp@amazon.com>
Labels
backport 2.x, backport 2.11

8 participants