Introduction

Hi everyone,
Life on overcloud nodes used to be simple, and everybody loved that single 'root' partition on the (currently less than 2 TB) boot disk. This gave us overcloud nodes partitioned like this:
[root@msccld2-l-rh-cmp-12 ~]# df -h -t xfs
Filesystem Size Used Avail Use% Mounted on
/dev/sda2  1.1T 4.6G 1.1T   1% /
The problem with this approach is that anything filling up any subdirectory on the boot disk will eventually cause services to fail. This is a problem almost 30 years old.
For that reason, most security policies (think SCAP) insist that /var, /tmp and /home be separate logical volumes, and that every disk use LVM2 so additional logical volumes can be created.
To solve this problem, whole-disk image support is coming to Ironic. It landed in 5.6.0 (see [1]) but missed the OSP10 release. With whole-disk image support in Ironic, we could easily change overcloud-full.qcow2 into a whole-disk image with LVM and separate volumes. This work is a tremendous advance, thanks to Yolanda Robla; I hope it gets backported to stable/Newton (OSP10, our first LTS release).
I wanted to solve this issue for OSP10 (and maybe for earlier versions too), so I started working on a tool to 'encapsulate' the existing overcloud partition into LVM2 during deployment. This is now working reliably and I wanted to present the result here so it can be re-used for other purposes.

Resulting configuration

The resulting config is fully configurable and automated: it carves an arbitrary number of logical volumes out of the boot disk of your freshly deployed overcloud node.
Here's an example for a compute node with a 64 GB boot disk and an 8 TB secondary disk:
[root@krynn-cmpt-1 ~]# df -t xfs
Filesystem                        1K-blocks    Used Available Use% Mounted on
/dev/mapper/rootdg-lv_root         16766976 3157044  13609932  19% /
/dev/mapper/rootdg-lv_tmp           2086912   33052   2053860   2% /tmp
/dev/mapper/rootdg-lv_var          33538048  428144  33109904   2% /var
/dev/mapper/rootdg-lv_home          2086912   33056   2053856   2% /home

[root@krynn-cmpt-1 ~]# pvs
PV         VG     Fmt  Attr PSize  PFree
/dev/sda2  rootdg lvm2 a--  63.99g 11.99g

[root@krynn-cmpt-1 ~]# vgs
VG     #PV #LV #SN Attr   VSize  VFree
rootdg   1   4   0 wz--n- 63.99g 11.99g

Implementation

The tool (mostly a big fat shell script) comes into action at the end of firstboot and uses a temporary disk to create the LVM2 structures and volumes. It then sets the root to this newly created LV and reboots the system.
When the system comes back up, it wipes clean the partition the system was originally installed on, then mirrors the LVs and the VG back onto that single partition. Once finished, everything is back where it was before, except that the temporary disk has been wiped clean too.
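For readers curious about what this "mirror back" phase could look like in LVM2 terms, here is a minimal dry-run sketch (it only prints the commands it would run). The device names, VG name and exact command sequence are assumptions for illustration, not a copy of the script's code:

```shell
#!/bin/bash
# Illustrative sketch of the second phase (the "mirror back" step).
# /dev/sda2 (original root partition), /dev/sdc1 (temporary partition)
# and rootdg are assumed names; the real script derives its devices
# from its configuration.
set -euo pipefail

orig_part=/dev/sda2   # partition the system was originally installed on
temp_part=/dev/sdc1   # temporary partition currently holding the VG
boot_dg=rootdg        # Volume Group name, as configured in the script

# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run wipefs -a "${orig_part}"              # wipe the old root filesystem
run pvcreate -ff -y "${orig_part}"        # turn the partition into an LVM2 PV
run vgextend "${boot_dg}" "${orig_part}"  # add it to the Volume Group
run pvmove "${temp_part}" "${orig_part}"  # mirror all extents back to it
run vgreduce "${boot_dg}" "${temp_part}"  # drop the temporary PV from the VG
run pvremove -y "${temp_part}"            # erase its LVM2 label
run wipefs -a "${temp_part}"              # leave the temporary disk clean
```

pvmove does the heavy lifting in this scheme: it mirrors each extent onto the new PV and switches over transparently, which is what allows the logical volumes to stay mounted while their data moves back to the boot disk.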
Logs of all actions are kept on the nodes themselves:
[root@krynn-cmpt-1 ~]# ls -lrt /var/log/ospd/*root*log
-rw-r--r--. 1 root root 15835 Mar 20 16:53 /var/log/ospd/firstboot-encapsulate_rootvol.log
-rw-r--r--. 1 root root  2645 Mar 20 17:02 /var/log/ospd/firstboot-lvmroot-relocate.log
The first log details the initial part of the encapsulation: creating the VG and the LVs, setting up GRUB, injecting the boot run-once service, etc.
The second log details the execution of the run-once service that mirrors the volumes back to the original partition carved by TripleO during deployment.
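As an aside, "injecting the boot run-once service" could be done along these lines. The unit and script names below are hypothetical, and the sketch writes the unit to a temporary path rather than /etc/systemd/system so it stays side-effect free:

```shell
#!/bin/bash
# Hypothetical sketch of a run-once systemd unit: it runs the relocation
# script once at boot, then disables itself. All names are illustrative.
set -eu

unit="${TMPDIR:-/tmp}/lvmroot-relocate.service"
cat > "${unit}" <<'EOF'
[Unit]
Description=Run-once LVM root relocation
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/lvmroot-relocate.sh
# Make it run-once: disable the unit after a successful run
ExecStartPost=/usr/bin/systemctl disable lvmroot-relocate.service

[Install]
WantedBy=multi-user.target
EOF
# The real tool would install this under /etc/systemd/system and run
# 'systemctl enable' on it before rebooting.
echo "wrote ${unit}"
```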
The code/tool itself lives here:
https://github.com/ElCoyote27/OSP_All/blob/master/osp10/all-firstboot-config/scripts/encapsulate_rootvol.sh
It is called by the global multi-FirstBoot template here:
https://github.com/ElCoyote27/OSP_All/blob/master/osp10/all-firstboot-config.yaml
which is in turn referenced from the main environment file:
https://github.com/ElCoyote27/OSP_All/blob/master/osp10/krynn-environment.yaml#L3

Configuration

The tool lets you choose the name of the Volume Group, how many volumes are needed, what size they shall be, etc. The only way to change these settings is to edit your copy of the script, modifying the lines marked 'EDITABLE' at the top. E.g.:
boot_dg=rootdg                                   # EDITABLE
boot_lv=lv_root                                  # EDITABLE
# ${temp_disk} is the target disk. This disk will be wiped clean, be careful.
temp_disk=/dev/sdc                               # EDITABLE
temp_part="${temp_disk}1"
# Size the volume
declare -A boot_vols
boot_vols["${boot_lv}"]="16g"                    # EDITABLE
boot_vols["lv_var"]="32g"                        # EDITABLE
boot_vols["lv_home"]="2g"                        # EDITABLE
boot_vols["lv_tmp"]="2g"                         # EDITABLE
declare -A vol_mounts
vol_mounts["${boot_lv}"]="/"
vol_mounts["lv_var"]="/var"                      # EDITABLE
vol_mounts["lv_home"]="/home"                    # EDITABLE
vol_mounts["lv_tmp"]="/tmp"                      # EDITABLE
All of the fields marked 'EDITABLE' can be changed. A new LV can be added by inserting matching entries in both boot_vols and vol_mounts.
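To see how entries like these could drive the actual volume creation, here is a small dry-run sketch. The loop only prints the commands it would run; the variable names mirror the script's, but the loop body is an assumption about how they are consumed, not the script's actual code:

```shell
#!/bin/bash
# Dry-run illustration: derive lvcreate/mkfs commands from the EDITABLE
# configuration arrays. Adding an LV means adding one entry to each array.
set -u

boot_dg=rootdg
declare -A boot_vols=( [lv_root]="16g" [lv_var]="32g" [lv_home]="2g" [lv_tmp]="2g" )
declare -A vol_mounts=( [lv_root]="/" [lv_var]="/var" [lv_home]="/home" [lv_tmp]="/tmp" )

for lv in "${!boot_vols[@]}"; do
    echo "lvcreate -y -n ${lv} -L ${boot_vols[${lv}]} ${boot_dg}"
    echo "mkfs.xfs /dev/${boot_dg}/${lv}   # mounted at ${vol_mounts[${lv}]}"
done
```

With this layout, adding e.g. a hypothetical lv_log volume would only require two new lines: one in boot_vols for its size and one in vol_mounts for its mount point.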

Warnings, Caveats and Limitations

Please be aware of the warnings documented here:
See https://github.com/ElCoyote27/OSP_All/blob/master/osp10/all-firstboot-config.yaml#L107