Recovering deleted /boot directory

“A man falling from the 70th floor might think he can fly when passing the 20th floor” – that’s basically the feeling when the /boot folder is accidentally deleted and the PC keeps working as if everything is okay. The harsh truth hits after the next reboot.

Things become tougher when the disk is encrypted, so simply reinstalling the kernel package to recover this folder isn’t a valid solution on its own. Here is the list of steps to eventually recover a PC with Fedora 29 installed:

  • Boot with a Fedora live-CD
  • Install grub-related packages:

$ yum install '*grub*' boom-boot

  • Create a directory for mounting the encrypted partition

$ mkdir /mnt/root

  • List partitions
$ fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x30c3a181

Device         Boot   Start        End   Sectors  Size Id Type
/dev/nvme0n1p1 *       2048    2099199   2097152    1G 83 Linux
/dev/nvme0n1p2      2099200 1000214527 998115328  476G 83 Linux
  •  Open the encrypted partition
$ cryptsetup luksOpen /dev/nvme0n1p2 crypted_root
Enter passphrase for /dev/nvme0n1p2:
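
Depending on the live environment, the volume group may need to be activated before its logical volumes appear (an extra step that isn’t always required; ‘fedora’ is the volume group name visible in the listing below):

$ vgchange -ay fedora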
  •  List all logical volumes to find the root volume on the encrypted disk
$ lvscan
ACTIVE            '/dev/fedora/pool00' [<444.46 GiB] inherit
ACTIVE            '/dev/fedora/root' [100.00 GiB] inherit
ACTIVE            '/dev/fedora/home' [<344.46 GiB] inherit
ACTIVE            '/dev/fedora/swap' [<15.47 GiB] inherit
ACTIVE            '/dev/fedora/docker-pool' [6.12 GiB] inherit
  •  Mount root and boot directories
mount /dev/fedora/root /mnt/root
mount /dev/nvme0n1p1 /mnt/root/boot
  •  Bind-mount the system directories and chroot into the mounted root
sudo mount -o bind /dev /mnt/root/dev
sudo mount -o bind /proc /mnt/root/proc
sudo mount -o bind /sys /mnt/root/sys
sudo mount -o bind /run /mnt/root/run
sudo chroot /mnt/root
  •  Use the grub2 tools to recreate the deleted /boot content, including a grub.cfg with menu entries matching the existing files
sudo grub2-install --no-floppy --recheck /dev/nvme0n1
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
  •  This isn’t sufficient on its own, since the vmlinuz and initrd files might still be missing. To make sure they reappear, reinstall the kernel, which re-creates them in the /boot folder (a sketch follows)
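
A minimal sketch of the kernel reinstall, run inside the chroot (assuming Fedora 29, where the kernel-core package owns vmlinuz and its reinstall triggers the initramfs rebuild):

dnf reinstall kernel-core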

I’ve encountered an issue with boom-boot-grub2, and skipped it by disabling it in /etc/defaults/boom_42

  •  Unmount and close the encrypted partition
umount /mnt/root/boot
umount /mnt/root
cryptsetup luksClose crypted_root
  •  Remove the installation media and reboot – the boot menu should appear

foreman_kubevirt plugin

Foreman can be extended by installing plugins.

Recently I’ve been working on adding support for Kubevirt as a compute resource for Foreman, meaning Foreman users/admins can create and manage their hosts as Kubevirt virtual machines.

The Foreman-Kubevirt plugin is still under development, but there is no reason not to share information about it and gather feedback on how to shape it and what to include. The following screenshots show the current state of the plugin and its supported dialogs:

  • Adding Compute Resource:

compute_resource.png

  • Showing list of virtual machines:

list_vms.png

  • Showing a single offline virtual machine:

show_offline_vm.png

  • Showing a single online virtual machine:

show_online_vm.png

There are a few gaps that still need to be handled, e.g. SSL communication with Kubevirt and host actions.

In order to test the plugin in their environment, users should use a self-built fog-kubevirt release on top of this fog-kubevirt patch.

Please report any issues to https://github.com/masayag/foreman_kubevirt/issues

fog-kubevirt

fog-kubevirt is a Ruby client for Kubevirt. With Kubevirt, an administrator can manage virtual machines on a Kubernetes cluster. fog-kubevirt uses kubeclient, a Ruby client for Kubernetes.

Instructions for creating a running kubevirt instance can be found here.

In order to start working with fog-kubevirt, you’ll have to install it, either by installing the latest release from rubygems.org:

$ gem install fog-kubevirt

Or by picking a release from the releases page of the project.

In order to add fog-kubevirt to your ruby project, add the following line to the project’s Gemfile:

gem 'fog-kubevirt'

followed by running:

$ bundle

Let’s start using fog-kubevirt:

Declare the library:

require 'fog/kubevirt'

Instantiate the kubevirt provider:

provider = Fog::Compute.new(:provider           => 'kubevirt',
                            :kubevirt_hostname  => hostname,
                            :kubevirt_port      => port,
                            :kubevirt_token     => token,
                            :kubevirt_namespace => 'default')

The required attributes are obtained as follows:

  • kubevirt_hostname – the hostname of the kubevirt server
  • kubevirt_port – the port of the kubevirt api server (8443/443 when relying on OpenShift, otherwise 6443)
  • kubevirt_token – the token is obtained by extracting it from the created my-account-token-xyz secret (see the lookup sketch after this list):
    • kubectl get secret my-account-token-vgtjm --template='{{index .data "token"}}' | base64 --decode
  • kubevirt_namespace – the cluster namespace to use
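
If the exact secret name isn’t known in advance, it can be located first (a small lookup sketch, assuming a service account named ‘my-account’ was already created):

$ kubectl get secrets | grep my-account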

Another option is using the .fog file under the home directory. The .fog file contains the credentials and additional properties to be used by fog instead of specifying them within code. For kubevirt’s needs, add the following properties to .fog with their actual values:

  :kubevirt_token:
  :kubevirt_hostname: node01
  :kubevirt_port: 8443
  :kubevirt_group: kubevirt.io
  :kubevirt_version: v1alpha2
  :kubevirt_namespace: default
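
With these keys in place, the provider can be instantiated without inlining the credentials (a minimal sketch, assuming fog’s standard ~/.fog credential lookup covers the kubevirt keys):

provider = Fog::Compute.new(:provider => 'kubevirt')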

Once kubevirt is instantiated, it can be used for listing virtual machines, virtual machine instances, templates and nodes, and for performing management actions:

  # get all virtual machines
  vms = provider.vms

Creating a virtual machine is done by selecting a template and cloning it into a virtual machine.
A template can be created on kubevirt by picking a template definition and running:

kubectl create -f working-template.yml

More templates can be found here.
The virtual machine is then created by cloning the template:

  # selecting a template named 'working'
  template = provider.template('working')

  # opts should contain keys to be replaced within the template
  opts = {name: 'vm-demo-1', memory: 1024, cpu_cores: 1}
  template.clone(opts)

Getting a specific virtual machine:

  # get vm by its name
  vm = provider.vms.get('vm-demo-1')

The returned object looks like this (output truncated):

"ovm-vm-demo-1", :"kubevirt.io/os"=>"fedora28"},
    owner_reference=nil,
    annotations=nil,
    cpu_cores=1,
    memory="1Gi",
    disks=[{:disk=>{:bus=>"virtio"}, :name=>"disk0", :volumeName=>"root"}, {:disk=>{:bus=>"virtio"}, :name=>"cloudinitdisk", :volumeName=>"cloudinitvolume"}],
    volumes=[{:name=>"root", :persistentVolumeClaim=>{:claimName=>"rhel75-pvc-15"}}, {:cloudInitNoCloud=>{:userData=>"#cloud-config\npassword: 'redhat'\nchpasswd: { expire: False }"}, :name=>"cloudinitvolume"}]

With the vm entity the user can start and stop the virtual machine:

  # start the virtual machine
  vm.start

The result of that action is a virt-launcher pod that creates the container in which the virtual machine runs. Describe the virtual machine instance:

$ kubectl describe vmis vm-demo-1

And the output will be:

Name:           vm-demo-1
Namespace:      default
Labels:         kubevirt-ovm=ovm-vm-demo-1
                special=demo-key
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  3s            3s              1       virtualmachine-controller                       Normal          SuccessfulCreate        Created virtual machine pod virt-launcher-vm-demo-1-2s6rg

Track the virtual machine instance status by:

  # get vm by its name
  vmi = provider.vminstances.get('vm-demo-1')
  vmi.status
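
A simple wait loop can be built on top of that (an illustrative sketch only; the exact status strings depend on the fog-kubevirt version, so the comparison below is an assumption):

  # poll until the instance reports a running state
  vmi = provider.vminstances.get('vm-demo-1')
  until vmi.status.to_s.casecmp('running').zero?
    sleep 5
    vmi = provider.vminstances.get('vm-demo-1')
  end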

In order to stop the virtual machine, simply:

  # stops the virtual machine and deletes the virtual machine instance
  vm.stop

In order to remove the virtual machine, use:

  # removes the virtual machine entirely
  # 'default' is the namespace in which the virtual machine was created
  provider.delete_vm('vm-demo-1', 'default')

An attempt to get the virtual machine again will fail with 404:

  provider.vms.get('vm-demo-1')
Fog::Kubevirt::Errors::ClientError: HTTP status code 404, virtualmachines.kubevirt.io "vm-demo-1" not found

There is an option to track changes to virtual machines by receiving entity updates using notices.

  # last_known_version is the last version tracked by the client
  # all the updates later than last_known_version will be watched
  watcher = provider.watch_vms(:resource_version => last_known_version)
  watcher.each do |notice|
    # process notice data
  end

manageiq-providers-kubevirt is actively using fog-kubevirt as a client to interact with kubevirt. It also relies on the notices to dynamically update the virtual machines and templates on ManageIQ for the Kubevirt provider.

Create an image with imagefactory

Recently I’ve been working with imagefactory to create an image. Currently, the input for imagefactory is a template (in TDL format) which describes the instructions for creating the image (a minimal sketch follows the list):

  • The iso on which the image will be based
  • Extra packages to install from a given repository
  • Commands to execute after the image is created
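
A minimal TDL sketch (all values – the name, ISO location, root password, repository, package and command – are placeholders):

<template>
  <name>f29-demo</name>
  <os>
    <name>Fedora</name>
    <version>29</version>
    <arch>x86_64</arch>
    <install type='iso'>
      <iso>http://example.com/Fedora-29-x86_64.iso</iso>
    </install>
    <rootpw>changeme</rootpw>
  </os>
  <repositories>
    <repository name='extras'>
      <url>http://example.com/repo</url>
    </repository>
  </repositories>
  <packages>
    <package name='httpd'/>
  </packages>
  <commands>
    <command name='enable-httpd'>systemctl enable httpd</command>
  </commands>
</template>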

imagefactory uses the oz tool to automate the image creation process. The oz tool interacts with kvm via libvirt to instantiate the vm. The created vm has the iso file attached as a cd, and is connected to a linux bridge, through which it obtains connectivity for downloading packages or any other installation requirements. Eventually, the set of specified commands is executed within that vm. Once the commands complete, the vm is destroyed and the image is kept.

The created image will be located in the target folder as specified in /etc/imagefactory/imagefactory.conf, with the configured format (qcow2 by default).
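
Invoking the build from the command line then looks roughly like this (the template file name is a placeholder):

$ imagefactory base_image my-template.tdl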

For creating the image I used one of my hosts, which also acts as an oVirt hypervisor. Since the host already had libvirt configured and a linux bridge present (the oVirt management network ‘ovirtmgmt’), a few adjustments had to be made to /etc/oz/oz.cfg, the configuration file of the oz tool that instructs the vm’s configuration:

  • bridge_name – the bridge the vm should be connected to (changed to ‘ovirtmgmt’)
  • memory – had to increase the default, as my image installed neutron via packstack, which requires 4096 MB of RAM
  • libvirt uri – no support for auth_mode, had to disable it in /etc/libvirt/libvirtd.conf

Overall, the experience was positive and things went smoothly, except for a few issues:

  • Changes made to /etc/sysconfig/iptables within the image were reverted by imagefactory. As the log indicated (and digging through the code confirmed), imagefactory saves a backup of that file to /etc/sysconfig/iptables.ozbackup before starting the vm and then restores that backup. There are two options to update the iptables rules in such a case: override iptables.ozbackup with the desired file, or update the image after its creation has completed with any libguestfs tool that supports it (guestfish, virt-tar / virt-tar-in); see the sketch after this list
  • A proper sealing of the image could not be done as part of the image creation process. Until the RFE is implemented, virt-sysprep needs to be called explicitly to seal the image (remove hwaddr, ssh keys, dhcp leases and more)
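
A minimal sketch of both post-creation fixes (assuming the default storage path from /etc/imagefactory/imagefactory.conf; the image file name is a placeholder):

# copy the desired iptables rules from the current directory into the finished image
virt-copy-in -a /var/lib/imagefactory/storage/image.qcow2 iptables /etc/sysconfig/
# seal the image: remove ssh host keys, dhcp leases, hwaddr lines and more
virt-sysprep -a /var/lib/imagefactory/storage/image.qcow2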

I used imagefactory on the cli, but it can also be invoked as a service which supports uploading the created image to a pre-configured cloud provider (e.g. a glance image repository). Later, that image will serve as the base of an instance. The imagefactoryd service exposes a REST api for querying and creating the images.

Next on the table is automating the image creation process as a jenkins job. This task will require some setup planning, as our CI environment uses vms for executing the jobs, while in this case the job needs to start a vm of its own for the image creation. So either a physical server will be required, or the hypervisor will have to support nested virtualization.


Edit configuration file from the command-line

Recently I had to manipulate configuration values within a file from the command-line. Since the code was executed as part of an image creation process, I was limited to the tools available in that environment.

The configuration file format is KEY=VALUE, and for updating the value of a given key I used the ‘sed’ tool:

sed -i "s/^$1=.*$/$1=$2/" "$CONF_FILE"

Where ^$1=.*$ matches the entire line whose key equals the provided $1 variable,
and $1=$2 performs the replacement of that line with the given KEY=VALUE pair as provided by $1 (the key) and $2 (the value).
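
One caveat: if a value may contain ‘/’ characters (e.g. file paths), a different sed delimiter spares the escaping. A variant of the same command (assuming neither key nor value contains ‘|’):

sed -i "s|^$1=.*$|$1=$2|" "$CONF_FILE"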

Also, since the created image modified a known file, I based the solution on that:

# A function receiving key and value to set
set_packstack_configuration_value ()
{
    sed -i "s/^$1=.*$/$1=$2/" "$PACKSTACK_ANSWER_FILE"
}

# Calling the function with new values for existing keys
set_packstack_configuration_value CONFIG_NEUTRON_L2_PLUGIN ml2
set_packstack_configuration_value CONFIG_NEUTRON_ML2_TYPE_DRIVERS local,flat,vlan
set_packstack_configuration_value CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES vlan

The $PACKSTACK_ANSWER_FILE is updated with the new values for the known keys.

The image template which includes that snippet can be found in my github.

Invoke Setup Networks from the Java SDK

In order to use the latest and greatest host networking features, the ‘setup networks’ api should be used. The ‘setup networks’ api expects the complete target network configuration as a parameter.

The following example demonstrates attaching network ‘red’ to network interface ‘eth4’ and assigning an IP address to it in a single api call.

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.ovirt.engine.sdk.Api;
import org.ovirt.engine.sdk.decorators.HostNIC;
import org.ovirt.engine.sdk.decorators.HostNICs;
import org.ovirt.engine.sdk.entities.Action;
import org.ovirt.engine.sdk.entities.BaseResource;
import org.ovirt.engine.sdk.entities.HostNics;
import org.ovirt.engine.sdk.entities.IP;
import org.ovirt.engine.sdk.entities.Network;

public class SetupNetworksExample {

    public static void main(String[] args) throws Exception {

        try (Api api = new Api("http://localhost:8080/api",
                "admin@internal",
                "1",
                null, null, null, null, null, null, true)) {

            HostNICs nicsApi = api.getHosts().get("venus-vdsb").getHostNics();
            List<HostNIC> nics = nicsApi.list();

            Map<String, HostNIC> nicsByNames = entitiesByName(nics);
            HostNIC nic = nicsByNames.get("eth4");

            // add network 'red' to 'eth4' and assign IP address
            Network net = new Network();
            net.setName("red");
            nic.setNetwork(net);
            IP ip = new IP();
            ip.setAddress("192.168.1.151");
            ip.setNetmask("255.255.255.0");

            // In case a specific gateway other than the default should be set for 'red'
            // ip.setGateway(RED_GATEWAY);
            nic.setIp(ip);
            nic.setBootProtocol("static");

            nicsApi.setupnetworks(createSetupNetworksParams(nics));
        }
    }

    public static Action createSetupNetworksParams(List<HostNIC> nics) {
        Action action = new Action();
        HostNics nicsParams = new HostNics();
        nicsParams.getHostNics().addAll(nics);
        action.setHostNics(nicsParams);
        action.setCheckConnectivity(true);
        return action;
    }

    public static <E extends BaseResource> Map<String, E> entitiesByName(List<E> entityList) {
        if (entityList != null) {
            Map<String, E> map = new HashMap<String, E>();
            for (E e : entityList) {
                map.put(e.getName(), e);
            }
            return map;
        } else {
            return Collections.emptyMap();
        }
    }
}

Network Configuration using ovirt-engine Java SDK

The ovirt-engine can be accessed via the ovirt-engine-sdk-java to perform any rest-api based action. The ovirt-engine Java SDK is Java 7 compliant and simple to use.

In the following example I’ll demonstrate creating ‘bond0’ with ‘eth4’ and ‘eth5’ as its slaves, and adding a tagged network ‘vlan200’ on top of ‘bond0’.

However, the example makes use of the 3.0-compliant API, which is simpler on the one hand, but suffers from a few disadvantages:

  • Does not support complex actions (i.e. configuring multiple networks at once)
  • Does not support advanced features:
    1. Multiple default gateways
    2. Synchronizing a network on the host with its logical network definition
    3. Default route
    4. QoS (well… the other api doesn’t support it either)

import org.ovirt.engine.sdk.Api;
import org.ovirt.engine.sdk.decorators.HostNICs;
import org.ovirt.engine.sdk.entities.Bonding;
import org.ovirt.engine.sdk.entities.HostNIC;
import org.ovirt.engine.sdk.entities.Network;
import org.ovirt.engine.sdk.entities.Option;
import org.ovirt.engine.sdk.entities.Options;
import org.ovirt.engine.sdk.entities.Slaves;

public class CreateBondExample {

    public static void main(String[] args) throws Exception {

        try (Api api = new Api("http://localhost:8080/api",
                "admin@internal",
                "1",
                null, null, null, null, null, null, true)) {

            HostNICs hostNics = api.getHosts().get("venus-vdsb").getHostNics();

            HostNIC bond = new HostNIC();
            bond.setName("bond0");

            // add slaves and bonding options
            Bonding bonding = new Bonding();
            addSlaves(bonding);
            addOptions(bonding);
            bond.setBonding(bonding);

            // add network to be configured on top of the slave
            Network net = new Network();
            net.setName("vlan200");
            bond.setNetwork(net);

            // add the bond
            hostNics.add(bond);
        }

    }

    /**
     * Adds "BONDING_OPTS='miimon=100 mode=1 primary=eth4'" in /etc/sysconfig/network-scripts/ifcfg-bond0
     */
    private static void addOptions(Bonding bonding) {
        Options options = new Options();
        options.getOptions().add(createOption("miimon", "100"));
        options.getOptions().add(createOption("mode", "1"));
        options.getOptions().add(createOption("primary", "eth4"));
        bonding.setOptions(options);
    }

    public static Option createOption(String name, String value) {
        Option option = new Option();
        option.setName(name);
        option.setValue(value);
        return option;
    }

    /**
     *
     * eth4 ---|
     *         |--- bond0
     * eth5 ---|
     *
     */
    public static void addSlaves(Bonding bonding) {
        Slaves slaves = new Slaves();
        HostNIC slave1 = new HostNIC();
        slave1.setName("eth4");
        HostNIC slave2 = new HostNIC();
        slave2.setName("eth5");
        slaves.getSlaves().add(slave1);
        slaves.getSlaves().add(slave2);
        bonding.setSlaves(slaves);
    }
}

After executing the above, the bond device will be created, along with the vlan device, which will be named ‘bond0.200’ according to the network’s vlan-id.

In order to update the configured network on the nic, the device to update is the one on which the network is configured (‘bond0.200’ in this case). The following example demonstrates setting a static IP address for the ‘vlan200’ network on top of ‘bond0’:

import org.ovirt.engine.sdk.Api;
import org.ovirt.engine.sdk.decorators.HostNIC;
import org.ovirt.engine.sdk.entities.IP;
import org.ovirt.engine.sdk.entities.Network;

public class UpdateVlanNetworkOverBondExample {

    public static void main(String[] args) throws Exception {

        try (Api api = new Api("http://localhost:8080/api",
                "admin@internal",
                "1",
                null, null, null, null, null, null, true)) {

            HostNIC bondNetwork = api.getHosts().get("venus-vdsb").getHostNics().get("bond0.200");

            // the network configured on top of the bond
            Network net = new Network();
            net.setName("vlan200");
            IP ip = new IP();
            ip.setAddress("192.168.1.151");
            ip.setNetmask("255.255.255.0");
            bondNetwork.setIp(ip);
            bondNetwork.setBootProtocol("static");
            bondNetwork.setNetwork(net);

            // update the network
            bondNetwork.update();
        }
    }
}

And the last example demonstrates attaching network ‘red’ to the ‘eth4’ interface:

import org.ovirt.engine.sdk.Api;
import org.ovirt.engine.sdk.decorators.HostNIC;
import org.ovirt.engine.sdk.entities.Action;
import org.ovirt.engine.sdk.entities.Network;

public class AddNetworkToNic {

    public static void main(String[] args) throws Exception {

        try (Api api = new Api("http://localhost:8080/api",
                "admin@internal",
                "1",
                null, null, null, null, null, null, true)) {

            HostNIC nic = api.getHosts().get("venus-vdsb").getHostNics().get("eth4");

            // add network to nic
            Network net = new Network();
            net.setName("red");
            nic.setNetwork(net);
            Action action = new Action();
            action.setNetwork(net);

            // attach the network to the nic
            nic.attach(action);
        }
    }
}

However, the above could be simplified by using the ‘SetupNetworks’ API, which was introduced before and demonstrated with the Python SDK. An example using the Java SDK will be published in the next post.

vNic Profiles – one profile to rule them all

oVirt-engine 3.3 introduced a new concept for managing vm network interfaces.
Previously, the network was assigned directly to the vnic, and any specific configuration (e.g. port mirroring) had to be defined at the vnic level, for each vm.

In oVirt-engine 3.3 a couple of features were introduced, Network QoS and Device Custom Properties, which make the previous method of configuring the same network settings over and over for each nic tedious, not to mention its maintenance burden.
Using the previous method, the admin would have to iterate over all the vms and their eligible vnics to modify the QoS values.

The vNic Profiles were designed to simplify the management of the vnic configuration: a profile is defined once per network, and each network may have as many profiles as desired. The profiles are assigned to the vm network interfaces. Using the profiles, the admin controls the way a vm uses the network.

Once the admin modifies a specific vnic profile, the change is reflected on all of the vms using it, as soon as they are restarted or the relevant vnics are unplugged and plugged back in.
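
Profiles are typically managed from the webadmin UI, but for completeness, here is a minimal sketch of creating one via the Python SDK (assuming the 3.3-era SDK collections; the profile and network names are illustrative):

net = api.networks.get('red')
api.vnicprofiles.add(params.VnicProfile(name='mirrored',
                                        network=params.Network(id=net.id),
                                        port_mirroring=True))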

vNic Profile Dialog

The vNic Profiles are also accessible via the Restful API and also supported by the ovirt-engine Python SDK (since 3.3.0.4-1) and the Java SDK (since 1.0.0.14-1).

The previous api, which relies on the network name as the parameter assigned to the vnic, is still supported, but is planned for removal in ovirt-engine-4.0.

In order to utilize the new vNic Profiles api in ovirt-engine-3.3, the user should provide the vnic profile id instead of the network name.

The following examples execute the same logic: updating a vm nic to use a specific profile.

Python SDK example:

vm1 = api.vms.get('vm1')
nic = vm1.nics.get('nic1')
nic.vnic_profile = api.vnicprofiles.get('a-vnic-profile')
nic.update()

Java SDK example:

VnicProfile profile = api.getVnicProfiles().get("a-vnic-profile");
VM vm = api.getVMs().get("vm1");
VMNIC vnic = vm.getNics().get("nic1");
vnic.setVnicProfile(profile);
vnic.update();

Unlinking a vnic from its network/profile using the vnic profile field:

vm1 = api.vms.get('vm1')
nic = vm1.nics.get('nic1')
nic.vnic_profile = params.VnicProfile()
nic.update()

Attaching a vnic to a network using the deprecated network name attribute.
This action will select an eligible vnic profile to be assigned to that vnic:

vm1 = api.vms.get('vm1')
nic = vm1.nics.get('nic1')
nic.network = params.Network(name = 'ovirtmgmt')
nic.update()

Unlink a vnic from its network by the deprecated network attribute:

vm1 = api.vms.get('vm1')
nic = vm1.nics.get('nic1')
nic.network = params.Network()
nic.update()

Adding a new vnic with a network name and port mirroring via the deprecated api:

net = params.Network(name="ovirtmgmt")
port_mirroring_nets = params.Networks(network = [net])
nic = params.NIC(name="new", network=net, 
                 port_mirroring = params.PortMirroring(port_mirroring_nets))
vm1.nics.add(nic)

Adding a new vnic with port mirroring by providing a suitable vnic profile configured for port mirroring:

nic = params.NIC(name="new", vnic_profile = params.VnicProfile(id ="...")
vm1.nics.add(nic)

Updating a vnic which currently uses a profile with port mirroring requires either clearing the port mirroring attribute of the vnic (if intending to modify the network) or providing the new vnic profile:

The preferred method:

nic.vnic_profile = api.vnicprofiles.get('no_port_mirroring')
nic.update()

The deprecated method:

nic.network = params.Network(name = 'net_with_no_port_mirroring')
nic.port_mirroring  = params.PortMirroring()
nic.update()

The same backward compatibility logic applies also to template’s vnics.

Networks synchronization with setupNetworks API

As mentioned in an earlier post, ovirt-engine supports syncing a network on the host with its logical network definition at the data-center level.
A host network might get out-of-sync if it was manually configured on the host, or if the host was moved between data-centers where a network with the same name has a different logical network definition (e.g. the network is configured as a VM network on the first DC and as a non-VM network on the other).
When I introduced the Setup Networks dialog, I mentioned how to perform the sync using the UI. The same functionality can be obtained using the SDK as well.

The example below demonstrates how to sync all of the host’s networks to match their logical network definition:

hostNicsParam = hostNics.list()
for nic in hostNicsParam:
    ''' HostNic.set_override_configuration marks the network to be synced '''
    nic.set_override_configuration(True)

# Now apply the configuration
hostNics.setupnetworks(
               params.Action(force = 0,
               check_connectivity = 1,
               host_nics = params.HostNics(host_nic = hostNicsParam)))

Behind the scenes, ovirt-engine analyzes the differences between the logical network and the actual network configuration, and if it finds any, it invokes a setupNetworks call to VDSM (the agent on the host) to apply the adequate configuration to the host.

It is not over till moti sings

To clarify the title, one needs to translate ‘moti’ (my personal name) from Hindi.
There are a few translations for it – but a specific one complies with the title’s meaning.

After writing the scripts to modify the host’s network configuration, there is a need to persist these changes. Otherwise, on the next VDSM agent restart, all of the changes will be reverted, potentially causing the host to become ‘Non-operational’ (limited functionality) or, worse, ‘Non-responsive’ (lost connectivity).

How to save the network configuration changes?
Using the webadmin portal, select the ‘Save network configuration’ checkbox on the ‘Setup Networks’ dialog.
If the setup networks action ends successfully, the client will issue another command straight away to persist the network configuration.
Alternatively, a user may use the dedicated ‘Save Network Configuration’ button on the host’s network interfaces sub-tab.

Since the persistence action is exposed via the rest api, another option is using the ovirt-engine Python SDK for that purpose:

api.hosts.get(name = 'venus-vdsb').commitnetconfig(params.Action())

Or using the Java SDK:

api.getHosts().get("your-host-name").commitnetconfig(new Action());

Once the action completes, an event will be added to the event log:
Network changes were saved on host ‘host-name’.