File _constraints of Package ceph (Project devel:gcc:next:testing)
<?xml version="1.0"?>
<constraints>
  <sandbox>kvm</sandbox>
  <!--
    2022-03-31 - Tim Serong <tserong@suse.com>

    Builds of ceph 16.2.7 on IBS showed the following resource usage (in MB):

      ceph      aarch64  max disk: 41568  max mem: 13698  (on ibs-centriq-6:3  disk: 65536   mem: 18432)
      ceph      x86_64   max disk: 41621  max mem:  9852  (on sheep74:2        disk: 51200   mem: 12500)
      ceph      ppc64le  max disk: 42005  max mem:  8754  (on ibs-power9-10:1  disk: 61440   mem: 20480)
      ceph      s390x    max disk: 40698  max mem:  8875  (on s390zl36:1       disk: 51200   mem: 10240)
      ceph-test x86_64   max disk: 51760  max mem: 16835  (on sheep94:2        disk: 112640  mem: 16384)

    Based on the above (and to hopefully provide a little wiggle room for the
    future while at the same time not being too demanding of workers) I've set
    the disk constraints to 50GB for ceph and 60GB for ceph-test. Memory
    requirements remain at 8GB and 10GB respectively as they were previously -
    despite the memory usage shown above, AFAIK we haven't run out of memory
    during builds, and this keeps the pool of possible workers noticeably
    larger than it would be if we required 16GB.

    Note to future hackers: please add comments here to describe any further
    changes made. Thank you!
  -->
  <hardware>
    <disk>
      <size unit="G">50</size>
    </disk>
    <physicalmemory>
      <size unit="G">8</size>
    </physicalmemory>
  </hardware>
  <overwrite>
    <conditions>
      <package>ceph-test</package>
    </conditions>
    <hardware>
      <disk>
        <size unit="G">60</size>
      </disk>
      <physicalmemory>
        <size unit="G">10</size>
      </physicalmemory>
    </hardware>
  </overwrite>
</constraints>
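The overwrite block is what lets one _constraints file cover both build flavors: the top-level hardware section applies to every build of the package, and any overwrite whose conditions match (here, the multibuild flavor/package name ceph-test) replaces the corresponding values. As a hypothetical sketch for the "future hackers" addressed in the comment, assuming the standard OBS _constraints schema (whose conditions element also accepts an arch tag), an architecture-specific bump might look like the fragment below; the aarch64 condition and the 16 GB figure are illustrative assumptions, not measured requirements:

<!-- Hypothetical example only: raise the memory floor for ceph-test
     builds scheduled on aarch64 workers, leaving other architectures
     on the 10 GB overwrite above. -->
<overwrite>
  <conditions>
    <package>ceph-test</package>
    <arch>aarch64</arch>
  </conditions>
  <hardware>
    <physicalmemory>
      <size unit="G">16</size>
    </physicalmemory>
  </hardware>
</overwrite>

Before committing a change like this, osc's checkconstraints subcommand can be used to see how many workers would still be able to take the build, since tightening constraints shrinks the eligible worker pool.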