Creating non-SLES bootstrap repositories in SUSE Manager 3

You likely already know this, but let me state it anyhow: SUSE Manager is an awesome tool for managing your SUSE and Red Hat server environment. At least it is since the update to version 3 – I wouldn’t argue if version 2 users came to a different conclusion.

SUSE has put a great deal of effort into making admins’ lives easier when you’re dealing with SLES and RHEL servers – but what about, say, openSUSE hosts? Well, those aren’t officially supported, for obvious reasons, but you’ll still get a lot out of SuMa3 even for such unsupported distros.

One area that lags behind the most is the creation of “bootstrap repositories”. That’s what you need to provide so that a bare-metal (or “empty VM” or “blank instance”) install not only gets the base OS installed from media, but can also access all those small and large packages needed to get the new instance “on board” for, e.g., Salt management via SuMa3. For SLES and RHEL, there’s a program called “mgr-create-bootstrap-repository”: it knows which RPMs are needed for each of the supported distros and will fetch the latest and greatest versions from its stash (in other words, from its mirror of the product repositories). Its package list is actually configurable (it’s in /usr/share/susemanager/mgr_bootstrap_data.py), but it is tied to the SuMa3 database in a way that will only fetch from actual product repositories. (The reason is that it needs to know which “channels” to check for which product – and openSUSE, for example, is not one of SUSE’s products, so your openSUSE channels are not identifiable as such in the database.)
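
To make that coupling to product data a bit more concrete, here is a rough, illustrative sketch of what an entry in mgr_bootstrap_data.py looks like conceptually – the field names, product ID, and package names below are assumptions for the sake of the example, and the exact layout differs between SuMa versions:

```python
# Illustrative sketch only – not the actual contents of
# /usr/share/susemanager/mgr_bootstrap_data.py. Field names, product IDs and
# package names are assumptions made for this example.
DATA = {
    'SLE-12-SP3-x86_64': {
        # A product ID ties the entry to a SUSE *product* in the SuMa3
        # database; this is exactly the link an openSUSE channel cannot provide.
        'PDID': 1234,
        # RPMs that the bootstrap repo tool pulls from the mirrored
        # product channels into the bootstrap repository.
        'PKGLIST': [
            'salt',
            'salt-minion',
            'python-PyYAML',
            'python-msgpack-python',
        ],
    },
}
```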

The beauty of “mgr-create-bootstrap-repository” is that you can run it over and over again, for example via “cron”, to re-create the bootstrap repository whenever updated packages arrive in your channels. Of course you could write your own mechanism to retrieve the required RPMs and put them into a repository, but wouldn’t it be nice if you could make “mgr-create-bootstrap-repository” do that for you? 🙂
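
As a sketch of what such automation could look like (purely illustrative: the wrapper script and the assumption that the tool can be run non-interactively for your target distro are mine, not from SUSE – check `mgr-create-bootstrap-repository --help` for the options your version needs):

```python
#!/usr/bin/env python
# Illustrative sketch: periodically re-run the bootstrap repo creation so the
# repository picks up updated packages from the mirrored channels.
# Any options needed to select a distribution non-interactively depend on your
# SUSE Manager version.
import subprocess
import sys


def rebuild_bootstrap_repo(extra_args=None):
    """Run the SuMa tool and return its exit code."""
    cmd = ["mgr-create-bootstrap-repository"] + (extra_args or [])
    rc = subprocess.call(cmd)
    if rc != 0:
        sys.stderr.write("bootstrap repo rebuild failed (exit %d)\n" % rc)
    return rc


if __name__ == "__main__":
    # e.g. called from a nightly cron job, passing through any needed options
    sys.exit(rebuild_bootstrap_repo(sys.argv[1:]))
```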


Ceph 12.2.2: Minor update, major trouble

Recently, Ceph “Luminous” V12.2.2 was released, a bug fix release for Ceph’s latest stable series. It contains some urgently awaited fixes, e.g. for “Bluestore” memory leaks, and admins around the world started upgrading immediately.

Just before Christmas, I had to handle a “situation” with such an upgraded Ceph cluster. It had been working for months, coming from pre-Luminous times, was upgraded to V12.2.1 a few weeks ago, and had now been brought to V12.2.2 in preparation for introducing “Bluestore” OSDs. Admittedly, the cluster wasn’t in perfect shape, but “HEALTH_OK” was reported before and right after the upgrade to V12.2.2.

Things started to go wrong when the first OSDs were taken “out” in preparation for “Bluestore” OSDs, step 2 of the official docs. The cluster reported “too many PGs per OSD” and showed slow requests that didn’t seem to go away. What’s worse, the cluster started to show signs of blocked requests, like unresponsive clients and hanging CephFS access. After some time, these were confirmed by “ceph -s”, where slow requests turned into blocked requests after 4064 seconds, taking the cluster to HEALTH_ERR. Additionally, the PG rearrangement started by taking out the first OSDs came to a halt and left the cluster with persistently high numbers of misplaced and degraded PGs. Overall, the cluster became unusable.
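
For context, the step that triggered the trouble is conceptually simple; here is a rough sketch of it (illustrative only – the OSD ID and polling interval are made-up values, and for a real migration you should follow the official Bluestore migration docs rather than this snippet):

```python
#!/usr/bin/env python
# Illustrative sketch of "taking an OSD out" and watching the cluster react,
# roughly what step 2 of the Bluestore migration docs asks for. The OSD id
# and polling interval are arbitrary example values.
import subprocess
import time

OSD_ID = 12          # example OSD to drain - pick your own
POLL_SECONDS = 60    # how often to check cluster health

# Mark the OSD "out" so its placement groups get remapped to other OSDs.
subprocess.check_call(["ceph", "osd", "out", str(OSD_ID)])

# Poll the cluster status until it settles; in the incident described above,
# this is where "too many PGs per OSD" and slow/blocked requests showed up.
while True:
    status = subprocess.check_output(["ceph", "-s"]).decode()
    print(status)
    if "HEALTH_OK" in status:
        break
    time.sleep(POLL_SECONDS)
```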
