Tag: KVM

  • Growing Ubuntu LVM After Install

    Hello everyone. I hope you have all been well.

    I have a new blog entry on something I just noticed today.

    So I typically don’t use LVM in my Linux virtual machines, mainly because I have had some issues in the past trying to migrate VMs from one hypervisor type to another, for example, VMware to KVM or vice versa. I have found that if I use LVM, I run into device mapping issues and it takes some work to get the VMs working again after converting the disk image from vmdk to qcow2 or vice versa.

    However, since I don’t plan on doing that anymore (I’m sticking with KVM/Qemu for the time being), I have looked at using LVM again, because I like how easy it is to grow the volume if I need to in the future. While growing a disk image is fairly easy, growing a partition on /dev/vda or /dev/sda is a little cumbersome, usually requiring me to boot the VM from a tool like Parted Magic or even the Ubuntu install media, use GParted to resize the partition, and then reboot back into the VM after growing it.

    With LVM, this is much simpler: three commands and I’m done, with no reboot needed. Those commands:

    • pvdisplay
    • lvextend
    • resize2fs

    Now, one thing I have noticed after a fresh install of Ubuntu Server 22.04.2 using LVM is that the installer doesn’t use all of my hard drive. I noticed this after the install, when I ran df -h and saw that my / filesystem was at 32%. I built the VM with a 50G hard drive, yet df was only seeing 23GB. I then ran

    sudo pvdisplay

    Sure enough, the device was 46GB in size. I then ran

    sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

    This command extended my logical volume into the remaining free space. Next, I grew the file system to use the new space:

    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

    I then ran df -h again, and lo and behold, my / filesystem now shows 46GB in size and 16% used instead of 32%.
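
    As a side note, if you want to double-check how much unallocated space is left in the volume group before and after extending, vgs gives a quick one-line summary. This wasn’t part of my original steps, just a handy sanity check; ubuntu-vg is the default volume group name from the Ubuntu installer, so substitute yours if it differs:

    sudo vgs ubuntu-vg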

    I hope this helps anyone else!

  • Building ONIE with DUE

    Howdy everyone, been a while since I’ve had a post but this one is long overdue.

    I’m still working in networking, and every once in a while I need to update the ONIE software on a switch, or even create a KVM version for GNS3 so that I can test the latest versions of NOSes.

    Well, a lot has changed and improved since I last had to do this. ONIE now has a build environment using DUE, or Dedicated User Environment. Cumulus created it, and it is in the APT repos for Ubuntu and Debian. This makes building much easier, because trying to set up a build machine with the current procedure from OCP’s GitHub repo is completely broken. It still asks you to use Debian 9, but most of the servers hosting its packages have been retired since Debian 9 went EOL. I’ve tried with Debian 10, only to find packages that are no longer supported. So I found out about DUE; I had some issues with it at first, but after much searching and reading, I finally found a way to build ONIE images successfully and consistently.

    Just a slight caution: at the rate ONIE changes, this procedure may change again. I will either update this blog or create a new one when necessary.

    So, let’s get to building!

    The first thing I did was install Docker and DUE on my Ubuntu 22.04.4 server:

    sudo apt update
    sudo apt install docker.io
    sudo usermod -aG docker $USER
    logout
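
    If you’d rather not log out and back in, newgrp should also pick up the new group membership in the current shell, though I just logged out to keep things simple:

    newgrp docker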

    I then logged back in to the server so that my new group membership took effect, and installed DUE:

    sudo apt update
    sudo apt install due
    

    I then installed the ONIE DUE environment for Debian 10. From my research this one is the most stable and worked the best for me:

    due --create --from debian:10 --description "ONIE Build Debian 10" --name onie-build-debian-10 \
    --prompt ONIE-10 --tag onie --use-template onie

    This downloads and sets up the build environment for building ONIE based on Cumulus’s best practices. Once this process is complete, we get into the environment with the following command:

    due --run -i due-onie-build-debian-10:onie --dockerarg --privileged

    You are now in a Docker container running Debian 10 that has the prerequisites for building ONIE already installed. Now we need to clone the ONIE repo from GitHub and tweak a few settings to make sure the build goes smoothly.

    mkdir src
    cd src
    git clone https://github.com/opencomputeproject/onie.git

    I then updated the global git config to include my email address and name, so that when the build process grabs other repos it doesn’t choke and tell me to set them later:

     git config --global user.email "wililupy@lucaswilliams.net"
     git config --global user.name "Lucas Williams"

    I am building a KVM instance of ONIE for testing in GNS3. The first thing I need to do is build the signing keys and shim:

    cd onie/build-config/
    make signing-keys-install MACHINE=kvm_x86_64
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim

    I had to run shim-self-sign again after the shim target to create the self-signed shims, and then run shim again to install the signed shims into the correct directory so that the ONIE build would get past the missing shim files.

    Now we are ready to actually build the KVM ONIE image.

     make -j4 MACHINE=kvm_x86_64 all

    Now, I’m not sure if this is a bug or what, but I actually had to re-run the previous command about 10 times, because each run stopped before the build actually completed. I would just press the up arrow on my keyboard to re-run the previous command, and I did this until I got the following output:

    Added to ISO image: directory '/'='/home/wililupy/src/onie/build/kvm_x86_64-r0/recovery/iso-sysroot'
    Created: /home/wililupy/src/onie/build/images/onie-updater-x86_64-kvm_x86_64-r0
    === Finished making onie-x86_64-kvm_x86_64-r0 master-06121636-dirty ===
    $
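
    If, like me, you end up re-running the build over and over, a small shell loop can save the repeated up-arrow presses. This assumes the incomplete runs exit with a non-zero status, which seemed to be the case for me, but I haven’t dug into why:

    # keep re-running the build until make exits cleanly
    until make -j4 MACHINE=kvm_x86_64 all; do
        echo "Build did not finish, retrying..."
    done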

    I then ran ls ../build/images to verify that my recovery ISO file was there:

    $ ls ../build/images
    kvm_x86_64-r0.initrd       kvm_x86_64-r0.vmlinuz.unsigned
    kvm_x86_64-r0.initrd.sig   onie-recovery-x86_64-kvm_x86_64-r0.iso
    kvm_x86_64-r0.vmlinuz      onie-updater-x86_64-kvm_x86_64-r0
    kvm_x86_64-r0.vmlinuz.sig
    $

    I then logged out of the DUE environment, and my ISO was in my home directory at src/onie/build/images/onie-recovery-x86_64-kvm_x86_64-r0.iso. From there I was able to upload it to my GNS3 server, create a new ONIE template, map the ISO as the CD-ROM, and create a blank qcow2 hard disk image so the recovery ISO could install ONIE onto it for use in GNS3.

    One thing to note is that this procedure builds the KVM version of ONIE. To build for other platforms, just change the MACHINE= variable to whatever platform you are building for.
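
    Vendor platforms live under the machine/ directory in the ONIE tree, and those builds also take a MACHINEROOT variable pointing at the vendor directory. As a purely illustrative example (check the README under the machine directory for your platform before trusting these exact values):

    make -j4 MACHINEROOT=../machine/accton MACHINE=accton_as5712_54x all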

    Good luck and let me know in the comments if this worked for you.

  • Manually Migrating VM’s from one KVM host to another

    Hello everyone! Been a while since I posted a blog. This one was a doozy. I tried to find this information online and there was a ton to peruse. Luckily, I was able to piece a few sources together to finally get it working the way I needed for my environment.

    So, this is what I was doing. I needed to retire a KVM host, but it was running a couple of VMs that I couldn’t migrate using virt-manager or with virsh migrate, so I decided I would try to just move the qcow2 files and rebuild the VMs from scratch.

    That did not work at all.

    So, after researching some solutions, I finally have one that works, and I’m going to share it with you now.

    NOTE: I shared my public SSH keys between the hosts so I don’t need to type passwords when ssh’ing and scp’ing between them.
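
    If you haven’t set that up yet, something like the following takes care of it (a rough sketch; host2 stands in for your destination host and assumes your user already exists there):

    ssh-keygen -t ed25519          # skip if you already have a key pair
    ssh-copy-id host2              # copy your public key to the destination host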

    First, power off the VM on the original host if it is running:

    virsh shutdown <vm name>

    Now, I had to chown the storage file, since I don’t enable root login on any of my systems and I needed to scp the file from the old host to the new one:

    sudo chown wililupy:wililupy server.qcow2

    I could then scp it to the new server, where I keep my VM storage.

    NOTE: The storage path for my VMs is the same on both hosts. If it is different for you, you will have to modify the XML file that comes up in a later step.

    scp server.qcow2 host2:/data/vms

    I then ssh’d into the new host and chowned the file back to root:root.
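
    In other words, roughly the following on the new host (using my storage path from above):

    ssh host2
    cd /data/vms
    sudo chown root:root server.qcow2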

    Back on the first host machine, I execute:

    virsh dumpxml server > ~/server.xml

    I then changed to root using sudo and copied the NVRAM file for the VM, since I use UEFI for my VMs:

    sudo -s
    cd /var/lib/libvirt/qemu/nvram
    cp server_VARS.fd ~wililupy
    exit
    cd ~
    sudo chown wililupy:wililupy server_VARS.fd

    I then scp’d the server_VARS.fd and the server.xml files to the new host:

    scp server_VARS.fd server.xml host2:~

    I then ssh’d to the new host and performed the following:

    virsh define server.xml
    sudo chown root:root server_VARS.fd
    sudo mv server_VARS.fd /var/lib/libvirt/qemu/nvram

    I was then able to start my VMs on the new host, and everything worked perfectly:

    virsh start server
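
    To confirm the VM is defined and running on the new host, virsh list gives a quick check:

    virsh list --all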

    NOTE: My new host and old host use the same network configuration and the same VM storage paths, which made this migration much easier. If yours are different, you will have to modify the XML file to match your new host’s information; otherwise the network won’t work or the VM won’t find its storage.

    Leave me a comment if this helps or you need some pointers!

  • Converting and Resizing KVM Hard Drives

    Hello everyone! I have been rebuilding my network and servers due to a major outage with my ISP, which we are still sorting out. During the outage I had to rebuild my servers, so I lost a lot of my build machines. Luckily, I still had copies on my Mac running VMware Fusion. I don’t normally run them there, so those machines just sit powered down, but if I need to bring them up for anything, I can.

    Well, I have a build machine that was running out there before I brought it into my KVM environment, but it was out of hard drive space and underutilized. This blog post shows how I moved the hard drive to my KVM server, resized it, and got it up and running.

    The first thing I did was scp the vmdk file from my Mac to my KVM server:

    scp ~/Documents/Virtual\ Machines.localized/Precise-build.vmwarevm/Virtual\ Disk.vmdk kvm2:/data/VMS/precise-build.vmdk

    After 40 minutes, the vmdk was copied. I then converted it to qcow2:

    qemu-img convert -O qcow2 precise-build.vmdk precise-build.qcow2
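
    For an image this size the conversion runs for a while with no output, so it’s worth knowing that qemu-img convert accepts -p to show a progress bar:

    qemu-img convert -p -O qcow2 precise-build.vmdk precise-build.qcow2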

    After that finished, I was able to get info on it:

    qemu-img info precise-build.qcow2
    image: precise-build.qcow2
    file format: qcow2
    virtual size: 80G (85899345920 bytes)
    disk size: 73G
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false

    I wanted to grow it to 200GB in size:

    qemu-img resize precise-build.qcow2 +120G
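
    Equivalently, you can give qemu-img resize an absolute size instead of a delta, which I find a little harder to get wrong:

    qemu-img resize precise-build.qcow2 200G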

    I then got info on it to verify that it grew:

    qemu-img info precise-build.qcow2
    image: precise-build.qcow2
    file format: qcow2
    virtual size: 200G (214748364800 bytes)
    disk size: 75G
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false

    I was now ready to build the VM, which I did with virt-manager. I told it to use an existing disk, and set it up with more memory and processors than before so I could get better performance out of it. I then told it to boot from a CD image of Parted Magic so I could grow the file system. Luckily, this server only had two partitions, the root partition and the swap partition. However, the swap was on an extended partition at the end of the disk, so I had to delete it and the extended partition before I could use parted to extend the file system. I extended the root partition to the end of the disk minus 6GB, then created an extended partition at the end, added a swap partition back, saved the changes, and rebooted. The machine rebooted, ran fsck, and started up normally.
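
    One detail to watch if you recreate the swap partition like I did: it gets a new UUID, so if /etc/fstab references swap by UUID the entry needs to be refreshed. Roughly, from inside the VM (device names here are just an example, yours will differ):

    sudo mkswap /dev/vda5     # reinitialize the recreated swap partition
    sudo blkid /dev/vda5      # note the new UUID
    sudoedit /etc/fstab       # update the swap entry with the new UUID
    sudo swapon -a            # enable swap without another reboot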

    I was then able to delete the vmdk file from my server to reclaim the 73GB of space it was using:

    rm /data/VMS/precise-build.vmdk

    That’s it. I hope this guide helps you migrate VMs from VMware or even VirtualBox to KVM and grow their file systems.

    Let me know in the comments.

    Thanks!