[RHSA-2023:3623-01] Moderate: Red Hat Ceph Storage 6.1 security and bug fix update
=====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Red Hat Ceph Storage 6.1 security and bug fix update
Advisory ID: RHSA-2023:3623-01
Product: Red Hat Ceph Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2023:3623
Issue date: 2023-06-15
CVE Names: CVE-2021-4231 CVE-2022-31129
=====================================================================
1. Summary:
New packages for Red Hat Ceph Storage 6.1 are now available on Red Hat
Enterprise Linux.
Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Ceph Storage 6.1 Tools - noarch, ppc64le, s390x, x86_64
3. Description:
Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.
These new packages include numerous enhancements and bug fixes. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Ceph Storage Release Notes for information on the
most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6.1/html/release_notes/index
Security Fix(es):
* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
* angular: XSS vulnerability (CVE-2021-4231)
For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.
All users of Red Hat Ceph Storage are advised to upgrade to these updated
packages, which provide numerous enhancements and bug fixes.
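Before upgrading, administrators may want to confirm which Ceph versions the
cluster daemons are currently running. A minimal sketch (shell, run from a
node with an admin keyring; output varies by cluster):

    # Show a per-daemon-type breakdown of the Ceph versions currently running
    ceph versions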
4. Solution:
For details on how to apply this update, see Upgrade a Red Hat Ceph Storage
cluster using cephadm in the Red Hat Ceph Storage Upgrade Guide
(https://access.redhat.com/documentation/en-us/red_hat_ceph_storage).
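The upgrade itself is orchestrated by cephadm. A minimal sketch, assuming a
cephadm-managed cluster with access to the Red Hat container registry; the
image tag below is illustrative, not taken from this advisory, so consult the
Upgrade Guide for the exact image reference and release-specific
prerequisites:

    # Confirm the cluster is healthy before starting
    ceph -s
    # Start a cephadm-orchestrated rolling upgrade to the target container image
    ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-6-rhel9:latest
    # Monitor progress until the upgrade completes
    ceph orch upgrade status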
5. Bugs fixed (https://bugzilla.redhat.com/):
1467648 - [RFE] support x-amz-replication-status for multisite
1600995 - rgw_user_max_buckets is not applied to non-rgw users
1783271 - [RFE] support for key rotation
1794550 - [Graceful stop/restart/shutdown] multiple ceph admin sockets
1929760 - [RFE] [Ceph-Dashboard] [Ceph-mgr] Dashboard to display per OSD slow op counter and type of slow op
1932764 - [RFE] Bootstrap console logs are through STDERR stream
1937618 - [CEE][RGW] Bucket policies disappear in the archive zone when an object is inserted in a master zone bucket
1975689 - Listing of snapshots is not always successful on NFS exports
1991808 - [rgw-multisite][LC]: LC rules applied from the master do not run on the slave.
2004175 - [RGW][Notification][kafka][MS]: arn not populated with zonegroup in event record
2016288 - [RFE] Defining a zone-group when deploying RGW service with cephadm
2016949 - [RADOS]: OSD add command has no return error/alert message to convey OSD not added with wrong hostname
2024444 - [rbd-mirror] Enabling mirroring on image in a namespace falsely fails saying cannot enable mirroring in current pool mirroring mode
2025815 - [RFE] RBD Mirror Geo-replication metrics
2028058 - [RFE][Ceph Dashboard] Add alert panel in the front dashboard
2029714 - ceph --version command reports incorrect ceph version in 5.x post upgrade from 4.2 when compared with ceph version output
2036063 - [GSS][Cephadm][Add the deletion of the cluster logs in the cephadm rm-cluster]
2053347 - [RFE] [RGW-MultiSite] [Notification] bucket notification types for replication events (S3 notifications extension, upstream)
2053471 - dashboard: add support for Ceph Authx (client auth mgmt)
2064260 - [GSS][RFE] Support for AWS PublicAccessBlock
2064265 - [GSS][RFE] Feature to disable the ability to set lifecycle policies
2067709 - [RFE] Add metric relative to osd blocklist
2076709 - per host ceph-exporter daemon
2080926 - [cephadm][ingress]: AssertionError seen upon restarting haproxy and keepalived using service name
2082666 - [cee/sd][RGW] Bucket notification: http endpoints with one trailing slash in the push-endpoint URL failed to create topic
2092506 - [cephadm] orch upgrade status help message is not apt
2094052 - CVE-2021-4231 angular: XSS vulnerability
2097027 - [cee/sd][ceph-dashboard] pool health on primary site shows error for one-way rbd_mirror configuration
2097187 - Unable to redeploy the active mgr instance via "ceph orch daemon redeploy " command
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2105950 - [RHOS17][RFE] RGW does not support get object with temp_url using SHA256 digest (required for FIPS)
2106421 - [rbd-mirror]: mirror image status : non-primary : description : syncing_percent showing invalid value (3072000)
2108228 - sosreport logs from ODF cluster mangled
2108489 - [CephFS metadata information missing during a Ceph upgrade]
2109224 - [RFE] deploy custom RGW realm/zone using orchestrator and a specification file
2110290 - Multiple "CephPGImbalance" alerts on Dashboard
2111282 - Misleading information displayed using osd_mclock_max_capacity_iops_[hdd, ssd] command.
2111364 - [rbd_support] recover from RADOS instance blocklisting
2111680 - cephadm --config initial-ceph.conf no longer supports comma delimited networks for routed traffic
2111751 - [ceph-dashboard] In expand cluster create osd default selected as recommended not working
2112309 - [cee/sd][cephadm] Getting the warning "Unable to parse .yml successfully" while bootstrapping
2114835 - prometheus reports an error during evaluation of CephPoolGrowthWarning alert rule
2120624 - don't leave an incomplete primary snapshot if the peer who is handling snapshot creation dies
2124441 - [cephadm] osd spec crush_device_class and host identifier "location"
2127345 - [RGW MultiSite]: during upgrade 2 rgw (out of 6) had Segmentation fault
2127926 - [RGW][MS]: bucket sync markers fails with ERROR: sync.read_sync_status() returned error=0
2129861 - [cee/sd][ceph-dashboard] Unable to access dashboard when enabling the "url_prefix" in RHCS 5.2 dashboard configuration
2132554 - [RHCS 5.3][Multisite sync policies: disabling per-bucket replication doesn't work if the zones replicate]
2133341 - [RFE] [RBD Mirror] Support force promote an image for RBD mirroring through dashboard
2133549 - [CEE] dashboard binds to host.containers.internal with podman-4.1.1-2.module+el8.6.0+15917+093ca6f8.x86_64
2133802 - [RGW] RFE: Enable the Ceph Mgr RGW module
2136031 - cephfs-top -d not working as expected
2136304 - [cee][rgw] Upgrade to 4.3z1 with vault results in (AccessDenied) failures when accessing buckets.
2136336 - [cee/sd][Cephadm] ceph mgr is filling up the log with "Detected new or changed devices" messages for all OSD nodes every 30 min unnecessarily
2137596 - [RGW] Suspending bucket versioning in primary/secondary zone also suspends bucket versioning in the archive zone
2138793 - make cephfs-top display scroll-able like top(1) and fix the blank screen for great number of clients
2138794 - [RGW][The 'select object content' API is not working as intended for CSV files]
2138933 - [RGW]: Slow object expiration observed with LC
2139694 - RGW cloud Transition. Found Errors during transition when using MCG Azure Namespacestore with a pre-created bucket
2139769 - [ceph-dashboard] rbd mirror sync progress shows empty
2140074 - [cee/sd][cephfs][dashboard]While evicting one client via ceph dashboard, it evicts all other client mounts of the ceph filesystem
2140784 - [CEE] cephfs mds crash /builddir/build/BUILD/ceph-16.2.8/src/mds/Server.cc: In function 'CDentry* Server::prepare_stray_dentry(MDRequestRef&, CInode*)' thread 7feb58dcd700 time 2022-11-06T13:26:27.233738+0000
2141110 - [RFE] Improve handling of BlueFS ENOSPC
2142167 - [RHCS 6.x] OSD crashes due to suicide timeout in rgw gc object class code, need assistance for core analysis
2142431 - [RFE] Enabling additional metrics in node-exporter container
2143285 - RFE: OSDs need ability to bind to a service IP instead of the pod IP to support RBD mirroring in OCP clusters
2145104 - [ceph-dashboard] unable to create snapshot of an image using dashboard
2146544 - [RFE] Provide support for labeled perf counters in Ceph Exporter
2146546 - [RFE] Refactor RBD mirror metrics to use new labeled performance counter
2147346 - [RFE] New metric to provide rbd mirror image status and snapshot replication information
2147348 - [RFE] Add additional fields about image status in rbd mirror commands
2149259 - [RGW][Notification][Kafka]: wrong event timestamp seen as 0.000000 for multipart upload events in event record
2149415 - [cephfs][nfs] "ceph nfs cluster info" reports a nonexistent cluster
2149533 - [RFE - Stretch Cluster] Provide way for Cephadm orch to deploy new Monitor daemons with "crush_location" attribute
2151189 - [cephadm] DriveGroup can't handle multiple crush_device_classes
2152963 - ceph cluster upgrade failure/handling report with offline hosts needs to be improved
2153196 - snap-schedule add command is failing when subvolume argument is provided
2153452 - [6.0][sse-s3][bucket-encryption]: Multipart object uploads are not encrypted, even though bucket encryption is set on a bucket
2153533 - [RGW][Notification][kafka]: object size 0 seen in event record upon lifecycle expiration event
2153673 - snapshot schedule stopped on one image and mirroring stopped on secondary images while upgrading from 16.2.10-82 to 16.2.10-84
2153726 - [RFE] On the Dashboard -> Cluster -> Monitoring page, the source URL of Prometheus is in the format http://hostname:9095, which doesn't work when clicked.
2158689 - cephfs-top: new options to sort and limit
2159294 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
2159307 - mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
2160598 - [GSS] MDSs are read only, after commit error on cache.dir(0x1)
2161479 - MDS: scan_stray_dir doesn't walk through all stray inode fragment
2161483 - mds: md_log_replay thread (replay thread) can remain blocked
2163473 - [Workload-DFG] small object recovery, backfill too slow and low client throughput!
2164327 - [Ceph-Dashboard] Hosts page flickers on auto refresh
2168541 - mon: prevent allocating snapids allocated for CephFS
2172791 - mds: make num_fwd and num_retry __u32
2175307 - [RFE] Catch MDS damage to the dentry's first snapid
2180110 - cephadm: reduce spam to cephadm.log
2180567 - rebase ceph to 17.2.6
2181055 - [rbd-mirror] RPO not met when adding latency between clusters
2182022 - [RGW multisite][Archive zone][Duplicate objects in the archive zone]
2182035 - [RHCS 6.0][Cephadm][Permission denied errors upgrading to RHCS 6]
2182564 - mds: force replay sessionmap version
2182613 - client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
2184268 - [RGW][Notification][Kafka]: persistent notifications not seen after kafka is up for events happened when kafka is down
2185588 - [CEE/sd][Ceph-volume] wrong block_db_size computed when adding OSD
2185772 - [Ceph-Dashboard] Fix issues in the rhcs 6.1 branding
2186095 - [Ceph Dashboard]: Upgrade the grafana version to latest
2186126 - [RFE] Recovery Throughput Metrics to Dashboard Landing page
2186472 - [RGW Multisite]: If cloud transition happens on primary of multisite , secondary has no metadata of the object
2186557 - Metric names produced by the Ceph exporter differ from the names produced by the Prometheus manager module
2186738 - [CEE/sd][ceph-monitoring][node-exporter] node-exporter on a fresh installation is crashing due to `panic: "node_rapl_package-0-die-0_joules_total" is not a valid metric name`
2186760 - Getting 411, missing content length error for PutObject operations for clients accessing via aws-sdk in RHCS5 cluster
2186774 - [RHCS 5.3z1][Cannot run `bucket stats` command on deleted buckets in the AZ]
2187265 - [Dashboard] Landing page has a hyperlink for Manager page even though it does not exist
2187394 - [RGW CloudTransition] tier configuration incorrectly parses keys starting with digit
2187617 - [6.1][rgw-ms] Writing on a bucket with num_shards 0 causes sync issues and rgws to segfault on the replication site.
2187659 - ceph fs snap-schedule listing is failing
2188266 - In OSP17.1 with Ceph Storage 6.0 object_storage tests fail with Unauthorized
2188460 - MDS Behind on trimming (145961/128) max_segments: 128, num_segments: 145961
2189308 - [RGW][Notification][Kafka]: bucket owner not in event record and received object size 0 for s3:ObjectSynced:Create event
2190412 - [cee/sd][cephadm][testfix] Zapping OSDs on Hosts deployed with Ceph RHCS 4.2z4 or before does not work after upgrade to RHCS 5.3z2 testfix
2196421 - update nfs-ganesha to V5.1 in RHCS 6.1
2196920 - Bring in ceph-mgr module framework dependencies for BZ 2111364
2203098 - [Dashboard] Red Hat Logo on the welcome page is too large
2203160 - [rbd_support] recover from "double blocklisting" (being blocklisted while recovering from blocklisting)
2203747 - Running cephadm-distribute-ssh-key.yml will require ansible.posix collection package downstream
2204479 - Ceph Common: "rgw-orphan-list" and "ceph-diff-sorted" missing from package
2207702 - RGW server crashes when using S3 PutBucketReplication API
2207718 - [RGW][notification][kafka]: segfault observed when bucket is configured with incorrect kafka broker
2209109 - [Ceph Dashboard]: fix pool_objects_repaired and daemon_health_metrics format
2209300 - [Dashboard] Refresh and information button misaligned on the Overall performance page
2209375 - [RHCS Tracker] After add capacity the rebalance does not complete, and we see 2 PGs in active+clean+scrubbing and 1 active+clean+scrubbing+deep
2209970 - [ceph-dashboard] snapshot create button got disabled in ceph dashboard
2210698 - [Dashboard] User with read-only permission cannot access the Dashboard landing page
6. Package List:
Red Hat Ceph Storage 6.1 Tools:
Source:
ansible-collection-ansible-posix-1.2.0-1.3.el9ost.src.rpm
ceph-17.2.6-70.el9cp.src.rpm
cephadm-ansible-2.15.0-1.el9cp.src.rpm
noarch:
ansible-collection-ansible-posix-1.2.0-1.3.el9ost.noarch.rpm
ceph-mib-17.2.6-70.el9cp.noarch.rpm
ceph-resource-agents-17.2.6-70.el9cp.noarch.rpm
cephadm-17.2.6-70.el9cp.noarch.rpm
cephadm-ansible-2.15.0-1.el9cp.noarch.rpm
cephfs-top-17.2.6-70.el9cp.noarch.rpm
ppc64le:
ceph-base-17.2.6-70.el9cp.ppc64le.rpm
ceph-base-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-common-17.2.6-70.el9cp.ppc64le.rpm
ceph-common-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-debugsource-17.2.6-70.el9cp.ppc64le.rpm
ceph-exporter-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-fuse-17.2.6-70.el9cp.ppc64le.rpm
ceph-fuse-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-immutable-object-cache-17.2.6-70.el9cp.ppc64le.rpm
ceph-immutable-object-cache-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-mds-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-mgr-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-mon-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-osd-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-radosgw-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
ceph-selinux-17.2.6-70.el9cp.ppc64le.rpm
ceph-test-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
cephfs-mirror-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
libcephfs-devel-17.2.6-70.el9cp.ppc64le.rpm
libcephfs2-17.2.6-70.el9cp.ppc64le.rpm
libcephfs2-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
libcephsqlite-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
librados-devel-17.2.6-70.el9cp.ppc64le.rpm
librados-devel-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
librados2-17.2.6-70.el9cp.ppc64le.rpm
librados2-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
libradospp-devel-17.2.6-70.el9cp.ppc64le.rpm
libradosstriper1-17.2.6-70.el9cp.ppc64le.rpm
libradosstriper1-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
librbd-devel-17.2.6-70.el9cp.ppc64le.rpm
librbd1-17.2.6-70.el9cp.ppc64le.rpm
librbd1-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
librgw-devel-17.2.6-70.el9cp.ppc64le.rpm
librgw2-17.2.6-70.el9cp.ppc64le.rpm
librgw2-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
python3-ceph-argparse-17.2.6-70.el9cp.ppc64le.rpm
python3-ceph-common-17.2.6-70.el9cp.ppc64le.rpm
python3-cephfs-17.2.6-70.el9cp.ppc64le.rpm
python3-cephfs-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
python3-rados-17.2.6-70.el9cp.ppc64le.rpm
python3-rados-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
python3-rbd-17.2.6-70.el9cp.ppc64le.rpm
python3-rbd-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
python3-rgw-17.2.6-70.el9cp.ppc64le.rpm
python3-rgw-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
rbd-fuse-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
rbd-mirror-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
rbd-nbd-17.2.6-70.el9cp.ppc64le.rpm
rbd-nbd-debuginfo-17.2.6-70.el9cp.ppc64le.rpm
s390x:
ceph-base-17.2.6-70.el9cp.s390x.rpm
ceph-base-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-common-17.2.6-70.el9cp.s390x.rpm
ceph-common-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-debugsource-17.2.6-70.el9cp.s390x.rpm
ceph-exporter-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-fuse-17.2.6-70.el9cp.s390x.rpm
ceph-fuse-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-immutable-object-cache-17.2.6-70.el9cp.s390x.rpm
ceph-immutable-object-cache-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-mds-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-mgr-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-mon-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-osd-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-radosgw-debuginfo-17.2.6-70.el9cp.s390x.rpm
ceph-selinux-17.2.6-70.el9cp.s390x.rpm
ceph-test-debuginfo-17.2.6-70.el9cp.s390x.rpm
cephfs-mirror-debuginfo-17.2.6-70.el9cp.s390x.rpm
libcephfs-devel-17.2.6-70.el9cp.s390x.rpm
libcephfs2-17.2.6-70.el9cp.s390x.rpm
libcephfs2-debuginfo-17.2.6-70.el9cp.s390x.rpm
libcephsqlite-debuginfo-17.2.6-70.el9cp.s390x.rpm
librados-devel-17.2.6-70.el9cp.s390x.rpm
librados-devel-debuginfo-17.2.6-70.el9cp.s390x.rpm
librados2-17.2.6-70.el9cp.s390x.rpm
librados2-debuginfo-17.2.6-70.el9cp.s390x.rpm
libradospp-devel-17.2.6-70.el9cp.s390x.rpm
libradosstriper1-17.2.6-70.el9cp.s390x.rpm
libradosstriper1-debuginfo-17.2.6-70.el9cp.s390x.rpm
librbd-devel-17.2.6-70.el9cp.s390x.rpm
librbd1-17.2.6-70.el9cp.s390x.rpm
librbd1-debuginfo-17.2.6-70.el9cp.s390x.rpm
librgw-devel-17.2.6-70.el9cp.s390x.rpm
librgw2-17.2.6-70.el9cp.s390x.rpm
librgw2-debuginfo-17.2.6-70.el9cp.s390x.rpm
python3-ceph-argparse-17.2.6-70.el9cp.s390x.rpm
python3-ceph-common-17.2.6-70.el9cp.s390x.rpm
python3-cephfs-17.2.6-70.el9cp.s390x.rpm
python3-cephfs-debuginfo-17.2.6-70.el9cp.s390x.rpm
python3-rados-17.2.6-70.el9cp.s390x.rpm
python3-rados-debuginfo-17.2.6-70.el9cp.s390x.rpm
python3-rbd-17.2.6-70.el9cp.s390x.rpm
python3-rbd-debuginfo-17.2.6-70.el9cp.s390x.rpm
python3-rgw-17.2.6-70.el9cp.s390x.rpm
python3-rgw-debuginfo-17.2.6-70.el9cp.s390x.rpm
rbd-fuse-debuginfo-17.2.6-70.el9cp.s390x.rpm
rbd-mirror-debuginfo-17.2.6-70.el9cp.s390x.rpm
rbd-nbd-17.2.6-70.el9cp.s390x.rpm
rbd-nbd-debuginfo-17.2.6-70.el9cp.s390x.rpm
x86_64:
ceph-base-17.2.6-70.el9cp.x86_64.rpm
ceph-base-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-common-17.2.6-70.el9cp.x86_64.rpm
ceph-common-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-debugsource-17.2.6-70.el9cp.x86_64.rpm
ceph-exporter-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-fuse-17.2.6-70.el9cp.x86_64.rpm
ceph-fuse-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-immutable-object-cache-17.2.6-70.el9cp.x86_64.rpm
ceph-immutable-object-cache-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-mds-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-mgr-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-mon-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-osd-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-radosgw-debuginfo-17.2.6-70.el9cp.x86_64.rpm
ceph-selinux-17.2.6-70.el9cp.x86_64.rpm
ceph-test-debuginfo-17.2.6-70.el9cp.x86_64.rpm
cephfs-mirror-debuginfo-17.2.6-70.el9cp.x86_64.rpm
libcephfs-devel-17.2.6-70.el9cp.x86_64.rpm
libcephfs2-17.2.6-70.el9cp.x86_64.rpm
libcephfs2-debuginfo-17.2.6-70.el9cp.x86_64.rpm
libcephsqlite-debuginfo-17.2.6-70.el9cp.x86_64.rpm
librados-devel-17.2.6-70.el9cp.x86_64.rpm
librados-devel-debuginfo-17.2.6-70.el9cp.x86_64.rpm
librados2-17.2.6-70.el9cp.x86_64.rpm
librados2-debuginfo-17.2.6-70.el9cp.x86_64.rpm
libradospp-devel-17.2.6-70.el9cp.x86_64.rpm
libradosstriper1-17.2.6-70.el9cp.x86_64.rpm
libradosstriper1-debuginfo-17.2.6-70.el9cp.x86_64.rpm
librbd-devel-17.2.6-70.el9cp.x86_64.rpm
librbd1-17.2.6-70.el9cp.x86_64.rpm
librbd1-debuginfo-17.2.6-70.el9cp.x86_64.rpm
librgw-devel-17.2.6-70.el9cp.x86_64.rpm
librgw2-17.2.6-70.el9cp.x86_64.rpm
librgw2-debuginfo-17.2.6-70.el9cp.x86_64.rpm
python3-ceph-argparse-17.2.6-70.el9cp.x86_64.rpm
python3-ceph-common-17.2.6-70.el9cp.x86_64.rpm
python3-cephfs-17.2.6-70.el9cp.x86_64.rpm
python3-cephfs-debuginfo-17.2.6-70.el9cp.x86_64.rpm
python3-rados-17.2.6-70.el9cp.x86_64.rpm
python3-rados-debuginfo-17.2.6-70.el9cp.x86_64.rpm
python3-rbd-17.2.6-70.el9cp.x86_64.rpm
python3-rbd-debuginfo-17.2.6-70.el9cp.x86_64.rpm
python3-rgw-17.2.6-70.el9cp.x86_64.rpm
python3-rgw-debuginfo-17.2.6-70.el9cp.x86_64.rpm
rbd-fuse-debuginfo-17.2.6-70.el9cp.x86_64.rpm
rbd-mirror-debuginfo-17.2.6-70.el9cp.x86_64.rpm
rbd-nbd-17.2.6-70.el9cp.x86_64.rpm
rbd-nbd-debuginfo-17.2.6-70.el9cp.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/
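As a minimal sketch of verifying a downloaded package, assuming the Red Hat
release key has already been imported from the page above (the package
filename is taken from the x86_64 list in section 6):

    # Check the GPG signature and digests of an RPM before installing it
    rpm -K ceph-common-17.2.6-70.el9cp.x86_64.rpm
    # A correctly signed package reports: digests signatures OK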
7. References:
https://access.redhat.com/security/cve/CVE-2021-4231
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6.1/html/release_notes/index
8. Contact:
The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc.