Overview
Request 1119617 accepted
- Drop support for obsolete openmpi[123]
- Prepare support for openmpi5
- Created by NMorey
- In state accepted
- Package maintainer: badshah400
Request History
NMorey created request
- Drop support for obsolete openmpi[123]
- Prepare support for openmpi5
badshah400 accepted request
Many thanks for the sr. I wonder if we should support some kind of upgrade path for folks with FOO-openmpi3 installed in their system. I think with this change, users will be left with their FOO-openmpi[1-3] orphaned in — but not removed from — their TW machines. Any thoughts?
@badshah400 The HPC module and its matrix of build flavors is already such a complex thing to maintain that manually handling upgrade paths isn't really part of the plan.
IMHO, people either use a specific openmpi flavour because they want that one and not another, in which case it does not make sense to migrate them, or they don't care which one and should use a package built against the "default" MPI.
BTW, do you actually need to build against specific MPI flavors? Or would it work with the default openmpi?
All right, thanks for the explanation.
Do we recommend a default openmpi on openSUSE? I thought there was not a preferred version. In any case, I guess it is safest to build most libraries for all supported versions.
There is always a default version defined. It can be pulled in with:

    BuildRequires:  openmpi-macros-devel

This resolves to the default version, whatever Leap/SLES/TW/ALP you're building for. You then get some macros to use in your spec without having to know which version it is:

- %openmpi_requires: expands to something like "Requires: openmpiXX-libs"
- %openmpi_devel_requires: expands to something like "Requires: openmpiXX-devel"
- %setup_openmpi: to be called in your %build/%install; it sets all the right vars (PATH, LD_LIBRARY_PATH, etc.) to use the right openMPI.
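To make this concrete, here is a minimal sketch of a spec using those macros. The package name FOO, the subpackage layout, and the build commands are placeholder assumptions; only the openmpi-macros-devel BuildRequires and the three macros come from the comment above.

    Name:           FOO
    # Pull in the macros for the distribution's default openMPI
    BuildRequires:  openmpi-macros-devel
    # Runtime dependency on the default openMPI's runtime libs,
    # i.e. something like: Requires: openmpiXX-libs
    %openmpi_requires

    %package devel
    Summary:        Development files for FOO (hypothetical subpackage)
    # Something like: Requires: openmpiXX-devel
    %openmpi_devel_requires

    %build
    # Set PATH, LD_LIBRARY_PATH, etc. so the build uses the default openMPI
    %setup_openmpi
    %configure
    %make_build

    %install
    %setup_openmpi
    %make_install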
The reason so many packages have multibuild for all flavours is that there are specific HPC use cases where one version of openMPI (or of another MPI flavour), paired with a specific GCC version and maybe other libs as well, outperforms all other configs. For these, we do have to build all core MPI libraries in many flavours.
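For reference, the usual OBS multibuild setup dispatches on the flavour name inside the spec. This is a generic sketch of that pattern, not taken from this request; the flavour names openmpi4 and mpich are examples.

    # _multibuild in the package lists the flavours to build; OBS then
    # substitutes @BUILD_FLAVOR@ with the flavour being built.
    %global flavor @BUILD_FLAVOR@%{nil}

    %if "%{flavor}" == "openmpi4"
    %global mpi_flavor openmpi4
    %endif
    %if "%{flavor}" == "mpich"
    %global mpi_flavor mpich
    %endif

    BuildRequires:  %{mpi_flavor}-devel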
Regular libs that are not intended to be used on multi-million $ clusters should not bother and should just use the default openMPI. It makes maintenance so much easier :)
See https://build.opensuse.org/package/view_file/science:HPC/libcircle/libcircle.spec?expand=1 for an example
Looks great, I think I should do this for most of the openmpi multi-flavour libraries I maintain. Thanks a lot for all the information.
Sounds good to me. Less work when we switch the default to openmpi5 ;) Let me know if you need some help. And please submit to Factory when you have these changes (or with the one I SRed), so I can finally drop those pesky old OpenMPI versions.
@badshah400: review reminder