A Red Hat OpenShift Data Foundation 4.10.0 enhancement, security and bug fix update has been released for Red Hat Enterprise Linux 8.

RHSA-2022:1372-01: Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update
=====================================================================
Red Hat Security Advisory

Synopsis: Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update
Advisory ID: RHSA-2022:1372-01
Product: RHODF
Advisory URL: https://access.redhat.com/errata/RHSA-2022:1372
Issue date: 2022-04-13
CVE Names: CVE-2021-29923 CVE-2021-34558 CVE-2021-36221
           CVE-2021-43565 CVE-2021-44716 CVE-2021-44717
=====================================================================

1. Summary:

Updated images that include numerous enhancements, security, and bug fixes
are now available for Red Hat OpenShift Data Foundation 4.10.0 on Red Hat
Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact
of Important. A Common Vulnerability Scoring System (CVSS) base score,
which gives a detailed severity rating, is available for each vulnerability
from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Data Foundation is software-defined storage integrated
with and optimized for the Red Hat OpenShift Container Platform. Red Hat
OpenShift Data Foundation provides highly scalable, production-grade
persistent storage for stateful applications running in the Red Hat
OpenShift Container Platform. In addition to persistent storage, Red Hat
OpenShift Data Foundation provisions a multicloud data management service
with an S3-compatible API.
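
As a minimal, hedged sketch of what a client of that S3-compatible API
looks like, the Go program below lists a bucket with the AWS SDK for Go.
The endpoint URL, bucket name, and static credentials are placeholders,
not values taken from this advisory; in practice the access keys come from
the Secret generated for an ObjectBucketClaim:

  package main

  import (
      "fmt"
      "log"

      "github.com/aws/aws-sdk-go/aws"
      "github.com/aws/aws-sdk-go/aws/credentials"
      "github.com/aws/aws-sdk-go/aws/session"
      "github.com/aws/aws-sdk-go/service/s3"
  )

  func main() {
      // Placeholder endpoint and keys: substitute the cluster's S3 route
      // and the credentials from the ObjectBucketClaim Secret.
      sess := session.Must(session.NewSession(&aws.Config{
          Endpoint:         aws.String("https://s3-openshift-storage.apps.example.com"),
          Region:           aws.String("us-east-1"), // required by the SDK, ignored by most S3-compatible stores
          Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
          S3ForcePathStyle: aws.Bool(true), // path-style addressing for non-AWS endpoints
      }))

      out, err := s3.New(sess).ListObjectsV2(&s3.ListObjectsV2Input{
          Bucket: aws.String("my-bucket"), // placeholder bucket name
      })
      if err != nil {
          log.Fatal(err)
      }
      for _, obj := range out.Contents {
          fmt.Println(aws.StringValue(obj.Key))
      }
  }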

Security Fix(es):

* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* golang: net/http: limit growth of header canonicalization cache
(CVE-2021-44716)
* golang: net/http/httputil: panic due to racy read of persistConn after
handler panic (CVE-2021-36221)
* golang: net: incorrect parsing of extraneous zero characters at the
beginning of an IP address octet (CVE-2021-29923; see the sketch after
this list)
* golang: crypto/tls: certificate of wrong type is causing TLS client to
panic (CVE-2021-34558)
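
To make CVE-2021-29923 concrete: before the fix (shipped in Go 1.17),
net.ParseIP accepted IPv4 octets with leading zeros and read them as
decimal, while many other parsers read the same octets as octal, so two
components could disagree about which host an address names. The minimal
Go sketch below (sample addresses are illustrative only) shows the patched
behavior, where such strings are rejected outright:

  package main

  import (
      "fmt"
      "net"
  )

  func main() {
      // "010.8.8.8" is ambiguous: legacy parsers may read the leading-zero
      // octet as octal (8) rather than decimal (10). Patched Go rejects
      // it, so ParseIP returns nil for the second string.
      for _, s := range []string{"10.8.8.8", "010.8.8.8"} {
          fmt.Printf("%-10s -> %v\n", s, net.ParseIP(s))
      }
  }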

Bug Fix(es):

These updated packages include numerous enhancements and bug fixes. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat OpenShift Data Foundation Release Notes for
information on the most significant of these changes:

  https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.10/html/4.10_release_notes/index

All Red Hat OpenShift Data Foundation users are advised to upgrade to these
updated packages, which provide numerous bug fixes and enhancements.

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

3. Solution:

For details on how to apply this update, refer to:

  https://access.redhat.com/articles/11258

4. Bugs fixed (https://bugzilla.redhat.com/):

1898988 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
1954708 - [GSS][RFE] Restrict Noobaa from creating public endpoints for Azure Private Cluster
1956418 - [GSS][RFE] Automatic space reclamation for RBD
1970123 - [GSS] [Azure] NooBaa insecure StorageAccount does not allow for TLS 1.2
1972190 - Attempt to remove pv-pool based noobaa-default-backing-store fails and makes this pool stuck in Rejected state
1974344 - critical ClusterObjectStoreState alert firing after installation of arbiter storage cluster, likely because ceph object user for cephobjectstore fails to be created, when storagecluster is reinstalled
1981341 - Changing a namespacestore's targetBucket field doesn't check whether the target bucket actually exists
1981694 - Restrict Noobaa from creating public endpoints for IBM ROKS Private cluster
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1991462 - helper pod runs with root privileges during Must-gather collection(affects ODF Managed Services)
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
1996830 - OCS external mode should allow specifying names for all Ceph auth principals
1996833 - ceph-external-cluster-details-exporter.py should have a read-only mode
1999689 - Integrate upgrade testing from ocs-ci to the acceptance job for final builds before important milestones
1999952 - Automate the creation of cephobjectstoreuser for obc metrics collector
2003532 - [Tracker for RHEL BZ #2008825] Node upgrade failed due to "expected target osImageURL" MCD error
2005801 - [KMS] Tenant config does not override backendpath if the key is specified in UPPER_CASE
2005919 - [DR] [Tracker for BZ #2008587] when Relocate action is performed and the Application is deleted completely rbd image is not getting deleted on secondary site
2021313 - [GSS] Cannot delete pool
2022424 - System capacity card shows infinity % as used capacity.
2022693 - [RFE] ODF health should reflect the health of Ceph + NooBaa
2024107 - Retrieval of cached objects with `s3 sync` after change in object size in underlying storage results in an InvalidRange error
2024545 - Overprovision Level Policy Control doesn't support custom storageclass
2026007 - Use ceph 'osd safe-to-destroy' feature in OSD purge job
2027666 - [DR] CephBlockPool resources reports wrong mirroringStatus
2027826 - OSD Removal template needs to expose option to force remove the OSD
2028559 - OBC stuck on pending post node failure recovery
2029413 - [DR] Dummy image size is same as the size of image for which it was created
2030602 - MCG not reporting standardized metric correctly for usage
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030839 - Consecutive dashes in OBC name
2031023 - "dbStorageClassName" goes missing in storage cluster yaml for mcg standalone mode
2031705 - [GSS] OBC is not visible by admin of a Project on Console
2032404 - After a node restart, the RGW pod is stuck in a CrashLoopBackOff state
2032412 - [DR] After Failback and PVC deletion the rbd images are left in trash
2032656 - Rook not recovering when deleting osd deployment with kms encryption
2032969 - No RBD mirroring daemon down alert when daemon is down
2032984 - After creating a new SC it redirects to 404 error page instead of the "StorageSystems" page
2033251 - Fix ODF 4.9 compatibility with OCP 4.10
2034003 - NooBaa endpoint pod Terminated before new one comes in Running state after editing the configmap
2034805 - upgrade not started for ODF 4.10
2034904 - OCS operator version differs in CLI commands.
2035774 - Must Gather, Ceph files do not exist on MG directory
2035995 - [GSS] odf-operator-controller-manager is in CLBO with OOM kill while upgrading OCS-4.8 to ODF-4.9
2036018 - ROOK_CSI_* overrides missing from the CSV in 4.10
2036211 - [GSS] noobaa-endpoint becomes CrashLoopBackOff when uploading metrics data to bucket
2037279 - [Azure] OSDs go into CLBO state while mounting an RBD PVC
2037318 - Helper Pod doesn't come up for MCG only must-gather
2037497 - Consecutive dashes in OBC name
2038884 - noobaa-operator is stuck in a CrashLoopBackOff (r.OBC is nil, invalid memory address or nil pointer dereference)
2039240 - [KMS] Deployment of ODF cluster fails when cluster wide encryption is enabled using service account for KMS auth
2040682 - [GSS] Complete multipart upload operation fails with error ' Cannot read property 'sort' of undefined'
2041507 - Missing add modal for action "add capacity" in UI.
2042866 - must gather does not collect the yaml or describe output of the subscription
2043017 - "CSI Addons" operator is not hidden in OperatorHub and Installed Operators page
2043028 - the CSI-Addons sidecar is not automatically deployed, requires enabling in Rook ConfigMap
2043406 - ReclaimSpaceJob status showing "reclaimedSpace" value as "0"
2043513 - [Tracker for Ceph BZ 2044836] mon is in CLBO after upgrading to 4.10-113
2044447 - ODF 4.9 deployment fails when deployed using the ODF managed service deployer (ocs-osd-deployer)
2044823 - Update CSI sidecars to the latest release for 4.10
2045084 - [SNO] controller-manager state is CreateContainerError
2046186 - A TODO text block in the API browser
2046254 - Topolvm-controller is failing to pull image
2046677 - Reclaimspacecronjob is not created after adding the annotation reclaimspace.csiaddons.openshift.io/schedule in PVC
2046766 - [IBM Z]: csi-rbdplugin pods failed to come up due to ImagePullBackOff from the "csiaddons" registry
2046887 - use KMS_PROVIDER name for IBM key protect service as "ibmkeyprotect"
2047162 - ReclaimSpaceJob failing, fstrim is executed on a non-existing mountpoint/directory
2047201 - Add HPCS secret name to Ceph and NooBaa CR
2047562 - CSI Sidecar containers do not start
2047565 - PVC snapshot creation is not successful
2047625 - Dockerfile changes for topolvm
2047632 - mcg-operator failed to install on 4.10.0-126
2047642 - Replace alpine/openssl image in the downstream build
2048107 - vgmanager cannot list block devices on the node
2048370 - CSI-Addons controller makes node reclaimspace request even when the PVC is not mounted to any pod.
2048458 - python exporter script 'ceph-external-cluster-details-exporter.py' error cap mon does not match on ODF 4.10
2049029 - MCG admission control webhooks don't work
2049075 - openshift-storage namespace is stuck in terminating state during uninstall due to remaining csi-addons resources
2049081 - ReclaimSpaceJob is failing for RBD RWX PVC
2049424 - ODF Provider/Consumer mode - backport for missing content
2049509 - ocs operator stuck on CrashLoopBackOff while installing with KMS
2049718 - provider/consumer Mode: rook-ceph-csi-config configmap needs to be updated with the relevant subvolumegroup information
2049727 - [DR] Mirror Peer stuck in ExchangingSecret State
2049771 - We can see 2 ODF Multicluster Orchestrator operators in operator hub page
2049790 - Add error handling for GetCurrentStorageClusterRef
2050056 - [GSS][KMS] Tenant configmap does not override vault namespace
2050142 - [DR] MCO operator is setting s3region as empty inside s3storeprofiles
2050402 - Ramen doesn't generate correct VRG spec in sync mode
2050483 - [DR]post creating MirrorPeer, the ramen config map had invalid values
2051249 - [GSS]noobaa-db-pg-0 Pod stuck CrashLoopBackOff state
2051406 - Need commit hash in package json and logs
2051599 - Use AAD while unwrapping the KEY from HPCS/Key Protect KMS
2051913 - [KMS] Skip SC creation for vault SA based kms encryption
2052027 - cephfs: rados omap leak after deletesnapshot
2052438 - [KMS] Storagecluster is in progressing state due to failed RGW deployment when using cluster wide encryption with kubernetes auth method
2052937 - [KMS] Auto-detection of KV version fails when using Vault namespaces
2052996 - ODF deployment fails using RHCS in external mode due to cephobjectstoreuser
2053156 - Avoid worldwide permission mode setting at time of nodestage of CephFS share
2053517 - [DR] Applications are not getting DR protected
2054147 - Provider/Consumer: Provider API server crashloopbackoff
2054755 - Update storagecluster API in the odf-operator
2061251 - [GSS]Object Upload failed with Unhandled exception when not using parameter "UseChunkEncoding = false" in s3 client in ODF 4.9

5. References:

  https://access.redhat.com/security/cve/CVE-2021-29923
  https://access.redhat.com/security/cve/CVE-2021-34558
  https://access.redhat.com/security/cve/CVE-2021-36221
  https://access.redhat.com/security/cve/CVE-2021-43565
  https://access.redhat.com/security/cve/CVE-2021-44716
  https://access.redhat.com/security/cve/CVE-2021-44717
  https://access.redhat.com/security/updates/classification/#important

6. Contact:

The Red Hat security contact is secalert@redhat.com. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.