Category: Ubuntu

  • Growing Ubuntu LVM After Install

    Hello everyone. I hope you have all been well.

    I have a new blog entry on something I just noticed today.

So I typically don’t use LVM in my Linux virtual machines, mainly because I have had issues in the past trying to migrate VMs from one hypervisor type to another, for example VMware to KVM or vice versa. I have found that if I use LVM, I get mapping issues and it takes some work to get the VMs working again after converting the raw disk image from vmdk to qcow2 or vice versa.

However, since I don’t plan on doing that anymore (I’m sticking with KVM/QEMU for the time being), I have looked at using LVM again, since I like how easy it is to grow the volume if I have to in the future. While growing a disk image is fairly easy, growing /dev/vda or /dev/sda is a little cumbersome, usually requiring me to boot the VM with a tool like PMagic or even the Ubuntu install media, use gparted to manipulate the size, and then reboot back into the VM after successfully growing it.

With LVM, this is much simpler: three commands and I’m done, no reboot needed. Those commands are:

    • pvdisplay
    • lvextend
    • resize2fs

Now, one thing I noticed after a fresh install of Ubuntu Server 22.04.2 using LVM is that it doesn’t use all of my hard drive. I noticed this right after the install: I ran df -h and saw that / was only 23GB (and 32% used), even though I built the VM with a 50G hard drive. I then ran

    sudo pvdisplay

    Sure enough, the device was 46GB in size. I then ran

    sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

This command extended my logical volume into the remaining free space. Next, I grew the file system to use the new space:

    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

I then ran df -h again, and lo and behold, my / filesystem now shows 46GB in size and is 16% used instead of 32%.
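As an aside, newer versions of lvextend can grow the filesystem in the same step with the -r (--resizefs) flag, which runs resize2fs for you. A hedged one-liner sketch using the same volume path as above:

    sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv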

    I hope this helps anyone else!

  • Building ONIE with DUE

    Howdy everyone, been a while since I’ve had a post but this one is long overdue.

I’m still working in networking, and every once in a while I need to update the ONIE software on a switch, or even create a KVM version for GNS3 so that I can test the latest versions of NOSes.

Well, a lot has changed and improved since I last had to do this. ONIE now has a build environment using DUE, or Dedicated User Environment. Cumulus created this, and it is in the APT repos for Ubuntu and Debian. It makes building much easier, because trying to set up a build machine with the current procedure from OCP’s GitHub repo is completely broken: it still asks for Debian 9, and most of the servers hosting its packages have been retired since Debian 9 went EOL. I tried with Debian 10, only to find packages that aren’t supported. So I found out about DUE, and after some initial issues and much searching and reading, I finally found a way to build ONIE images successfully and consistently.

Just a slight caution: at the rate ONIE changes, this procedure may change again. I will either update this blog or create a new one when necessary.

So, let’s get to building!

The first thing I did was install Docker and DUE on my Ubuntu 22.04.4 server. Docker first:

    sudo apt update
    sudo apt install docker.io
    sudo usermod -aG docker $USER
    logout

I then log back in to the server so that my new group membership takes effect, and install DUE:

    sudo apt update
    sudo apt install due
    

    I then installed the ONIE DUE environment for Debian 10. From my research this one is the most stable and worked the best for me:

    due --create --from debian:10 --description "ONIE Build Debian 10" --name onie-build-debian-10 \
    --prompt ONIE-10 --tag onie --use-template onie

This downloads and sets up the build environment for building ONIE based on Cumulus’s best practices. Once this process is complete, we get into the environment with the following command:

    due --run -i due-onie-build-debian-10:onie --dockerarg --privileged

You are now in a Docker container running Debian 10 that already has the prerequisites for building ONIE installed. Now we need to clone the ONIE repo from GitHub and adjust a few settings to make sure the build goes smoothly.

    mkdir src
    cd src
    git clone https://github.com/opencomputeproject/onie.git

I then update my global git config to include my email address and name, so that during the build, when it pulls other repos, it doesn’t choke out and die and tell me to do it later:

     git config --global user.email "wililupy@lucaswilliams.net"
     git config --global user.name "Lucas Williams"

I am building a KVM instance of ONIE for testing in GNS3. The first thing I need to do is build the signing keys and shim:

    cd onie/build-config/
    make signing-keys-install MACHINE=kvm_x86_64
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim

I had to run shim-self-sign again after the shim build to create the self-signed shims, and then run shim again to install the signed shims into the correct directory so that the ONIE build would get past the missing shim files.

    Now we are ready to actually build the KVM ONIE image.

     make -j4 MACHINE=kvm_x86_64 all

Now, I’m not sure if this is a bug or what, but I actually had to run the previous command about 10 times; each run would finish without actually completing the build, so I just pressed the Up arrow to re-run it until I got the following output:

    Added to ISO image: directory '/'='/home/wililupy/src/onie/build/kvm_x86_64-r0/recovery/iso-sysroot'
    Created: /home/wililupy/src/onie/build/images/onie-updater-x86_64-kvm_x86_64-r0
    === Finished making onie-x86_64-kvm_x86_64-r0 master-06121636-dirty ===
    $

    I then ran ls ../build/images to verify that my recovery ISO file was there:

    $ ls ../build/images
    kvm_x86_64-r0.initrd       kvm_x86_64-r0.vmlinuz.unsigned
    kvm_x86_64-r0.initrd.sig   onie-recovery-x86_64-kvm_x86_64-r0.iso
    kvm_x86_64-r0.vmlinuz      onie-updater-x86_64-kvm_x86_64-r0
    kvm_x86_64-r0.vmlinuz.sig
    $

I then logged out of the DUE environment, and the ISO was in my home directory at src/onie/build/images/onie-recovery-x86_64-kvm_x86_64-r0.iso. From there I was able to upload it to my GNS3 server, create a new ONIE template, map the ISO as the CD-ROM, and create a blank qcow2 hard disk image so the recovery installer could build the image for use in GNS3.

One thing to note is that this procedure builds the KVM version of ONIE. To build for other hardware, just change the MACHINE= variable to whatever platform you are building for.
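For example, a hedged sketch of what a vendor platform build might look like (the vendor and platform names here are placeholders, and I’m assuming the MACHINEROOT variable, which is how machine directories under machine/<vendor> are usually referenced in the ONIE tree; check the repo for the exact names):

    make -j4 MACHINEROOT=../machine/<vendor> MACHINE=<vendor_platform> all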

    Good luck and let me know in the comments if this worked for you.

  • Emptying Zimbra mailbox from the Command Line

    Hello everyone. I hope you are all doing well and staying safe!

I wanted to document this procedure for clearing out an email box in Zimbra. I recently had to update my Zimbra mail server and noticed that my admin account was strangely full: over 200,000 messages in the inbox. They turned out to be storage alerts saying that the core snap on my Ubuntu server was out of disk space. This is normal for snaps, since they are read-only SquashFS file systems that always report as full; that is how they are designed. Still, the number of alerts was quite amazing.

Since I’m not using snaps on this system, I removed the core snap and all of its revisions, and then removed snapd from the system so that the alerts would stop. I did this as follows:

    $ sudo snap list --all

    This listed all the snaps and revisions running on my mail server. I then noted the revision number and removed all the disabled snap versions of core by running the following:

    $ sudo snap remove --revision=xxx core

where xxx is the revision number of the snap. I ran this twice since snapd only keeps the previous two revisions by default. I then purged snapd from the system so that it won’t come back and reinstall the core snap:

    $ sudo apt purge snapd

After this ran, I ran df -h to verify that /dev/loop2, which is where core was mounted on my system, was no longer mounted, and it wasn’t. Since I don’t plan on using snaps on this system, I have no issues.

Next, I needed to delete the more than 200,000 alerts in the admin account. I tried to use the web UI to do this, but it was taking forever. After some Google searching and reading the Zimbra documentation, I found out about the zmmailbox command.

    Since I didn’t care about any of the email in the mailbox, I was ready to just delete the entire contents. Use the following commands to do it:

    $ ssh mailhost.example.net
    $ sudo su - zimbra
    $ zmmailbox
    mbox> adminAuthenticate -u https://mailhost.example.net:7071 admin@example.net adminpassword
    mbox> selectMailbox admin@example.net
    mbox admin@example.net> emptyFolder /Inbox
    mbox admin@example.net> emptyFolder /Trash
    mbox admin@example.net> exit
    $ exit

It took a little while after each emptyFolder command, but it cleared out the Inbox and Trash folders.
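As a side note, zmmailbox can also be driven non-interactively, which is handy for scripting this kind of cleanup. A hedged sketch of the same thing as one-liners, assuming the -z flag (authenticate as the Zimbra admin) and -m (select the mailbox) are available in your version:

    $ zmmailbox -z -m admin@example.net emptyFolder /Inbox
    $ zmmailbox -z -m admin@example.net emptyFolder /Trash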

    Let me know if this helps you.

  • Minecraft Server for Ubuntu 20.04.2

    Hello everyone. I hope you are all doing well. I am writing this blog entry because I created a Minecraft server for my kids some time ago, but I had a hardware failure in the system and never replaced it. At the time, it was no big deal since the boys decided that they were done with Minecraft. But lately, with this new version of Minecraft, they have gotten back into it, and they wanted to have a shared sandbox that they can play with their friends on.

So, I rebuilt their Minecraft server, but this time on 20.04 instead of 16.04. It was pretty straightforward and not much has changed in the way of doing this, but this is here for those of you who want to deploy your own Minecraft server.

    NOTE: This will only work for the Java version of Minecraft. If you are using the Windows 10 version or the one on Xbox or Switch, you will not be able to connect to this server.

So, the first thing you need is a clean installation of Ubuntu 20.04.2 Server. The system should have at least 4GB of RAM, 2 CPU cores, and 80GB of storage. After you install Ubuntu, do the normal first-boot practices: update, upgrade, reboot if required, etc.

    sudo apt update && sudo apt upgrade -y

    Once that is completed, you need to install a couple things on top.

One thing I like is the MCRcon tool from Tiiffi. I use it for backups and statistics on my server; it is really easy to use and super small. So I install the build-essential package as well as git. Minecraft also needs Java, so I install the OpenJDK headless runtime:

    sudo apt install git build-essential openjdk-11-jre-headless

Once that is completed, I create a minecraft user so that running the server as a service is more secure, and so I have one place to keep all the dedicated Minecraft files.

    sudo useradd -m -r -U -d /opt/minecraft -s /bin/bash minecraft

This creates the minecraft user with its home directory in /opt/minecraft. It also doesn’t set a password for the account, so we don’t have to worry about someone gaining access to the system with it. You can only access this account via sudo su - minecraft from your local admin account.

    Now, we need to switch to the minecraft user and run the following:

    sudo su - minecraft
    mkdir -p {server,tools,backups}
    git clone https://github.com/Tiiffi/mcrcon.git ~/tools/mcrcon
    cd ~/tools/mcrcon
    make
    

This creates the required directories for Minecraft and downloads and builds the MCRcon tool. You can verify that MCRcon built successfully by running:

    ~/tools/mcrcon/mcrcon -v

    You will get the following output:

    mcrcon 0.7.1 (built: Mar 26 2021 22:34:02) - https://github.com/Tiiffi/mcrcon
     Bug reports:
         tiiffi+mcrcon at gmail
         https://github.com/Tiiffi/mcrcon/issues/

    Now, we get to installing the Minecraft Server Java file.

First, we need to download the server.jar file from Minecraft. Go to the Minecraft server download page; what I did was right-click the download link, select ‘Copy Link Address’, and paste it into my terminal on the server so I could fetch it with wget.

    wget https://launcher.mojang.com/v1/objects/1b557e7b033b583cd9f66746b7a9ab1ec1673ced/server.jar -P ~/server 

Now, we need to run the Minecraft server once. It will fail on the first run because we haven’t accepted the EULA yet; this first run also creates the eula.txt and server.properties files we need to modify:

    cd ~/server
    java -Xmx1024M -Xms1024M -jar server.jar nogui

    After the program fails to start, we need to modify the eula.txt file and change the eula=false at the end of the file to eula=true. Save this file and exit.
    Next, we need to enable RCON in Minecraft. Open the server.properties file and search for the following variables, and change them accordingly:

    rcon.port=25575
    rcon.password=PassW0rd
    enable-rcon=true

Also, while you are in this file, you can make any other changes you want to the server, such as the server name, the listening port, the MOTD, etc. Choose a complex RCON password so that not just anyone can remote-control your server.
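Before wiring anything into systemd, you can sanity-check that RCON is working with the MCRcon tool built earlier. A quick sketch, assuming the server is running and using the password set above ("list" is a standard Minecraft server command that prints who is online):

    /opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p PassW0rd "list"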

Now, I like to run this as a service using systemd. To do this, create a service unit. First, exit the minecraft user shell by typing exit to get back to your local admin shell. Then run the following:

    sudo vi /etc/systemd/system/minecraft.service

    Paste the following in the file:

    [Unit]
    Description=Minecraft Server
    After=network.target
    
    [Service]
    User=minecraft
    Nice=1
    KillMode=none
    SuccessExitStatus=0 1
    ProtectHome=true
    ProtectSystem=full
    PrivateDevices=true
    NoNewPrivileges=true
    WorkingDirectory=/opt/minecraft/server
    ExecStart=/usr/bin/java -Xmx2G -Xms2G -jar server.jar nogui
    ExecStop=/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p PassW0rd stop
    
    [Install]
    WantedBy=multi-user.target

    Save the document. Next, run

    sudo systemctl daemon-reload

    This will refresh systemd with the new minecraft.service.

    Now, you can start the minecraft service:

    sudo systemctl start minecraft

    To get it to start on reboots, execute the following:

sudo systemctl enable minecraft

The last thing we have to do is create the backup job for your server. It uses the MCRcon tool and a cron job, and it cleans up old backups as well.

    Switch back to the Minecraft user and perform the following:

    sudo su - minecraft
    vi ~/tools/backup.sh

    Paste the following script into the new file you are creating:

#!/bin/bash
     function rcon {
       /opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p PassW0rd "$1"
     }
     rcon "save-off"
     rcon "save-all"
     tar -cvpzf /opt/minecraft/backups/server-$(date +%F-%H-%M).tar.gz /opt/minecraft/server
     rcon "save-on"
     # Delete older backups
     find /opt/minecraft/backups/ -type f -mtime +7 -name '*.gz' -delete

    Now, create a crontab to run the backup:

    crontab -e
    0 0 * * * /opt/minecraft/tools/backup.sh
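One step that is easy to miss: make the script executable, and optionally give it a test run, before relying on cron. A quick sketch while still logged in as the minecraft user:

    chmod +x ~/tools/backup.sh
    ~/tools/backup.sh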

Now exit the minecraft user and return to your local admin account. Lastly, because I use UFW for my firewall, I need to open the port to the world so that people can connect to the server. I do that with the following commands:

sudo ufw allow from 10.1.10.0/24 to any port 25575
    sudo ufw allow 25565/tcp
    

    This allows the Remote console to be accessed only by my internal network, and allows the Minecraft game to be accessed by the outside world.

    Now, you are ready to connect your Minecraft clients to your server and have some fun!

    Let me know if this guide worked for you or if you have any questions or comments, please leave them below.

  • Installing Ubuntu 20.04.2 on Macbook Pro Mid 2012

    Hello everyone. Been a while and I have a new blog entry so that I don’t forget how to do this if I ever have to do it again.

I got my girlfriend a new MacBook Pro M1 for Hanukkah and she gave me her old one (it’s a MacBook Pro Mid 2012, or 14,1). I was going to update it to macOS 11, but found out that it isn’t supported, so I figured I would revive it by installing Ubuntu. This proved to be harder than I expected, but if you keep reading, I’ll tell you how I finally did it. (I’m actually writing this blog from the laptop running Ubuntu.)

So, the installation was pretty straightforward. I burned Ubuntu 20.04.2 to a DVD (from https://releases.ubuntu.com), put the DVD in the drive, and booted the Mac while holding down the Option key. I selected the first EFI partition to boot from and pressed the Up arrow to start it. It booted right into Ubuntu, no problem.

    I managed to install Ubuntu, and everything went smoothly. After installation, I noticed a weird error about MOK and EFI. I found out that Mac’s EFI wants a signed OS. To fix this, all I did was:

    sudo su -
    cd /boot/efi/EFI/ubuntu
    cp grubx64.efi shimx64.efi

    This will clear the black screen and error when booting.

Next, I ran sudo apt update && sudo apt upgrade -y to make sure I had all the updates on my laptop.

With the 20.04.2 release of Ubuntu, everything works out of the box on the Mid 2012 MacBook Pro. If you run into any issues during the installation, leave a comment and I will try to help.

    Leave a comment if it helps.

  • Installing Jitsi Meet on Ubuntu 20.04

    Hello everyone! It’s been a while since I updated my blog. I hope you all are staying safe and healthy.

    I decided that I would write a blog about how I built my own video conferencing server during this whole outbreak with COVID and having to social distance and stay home.

My family is all over the country, and with travel and get-togethers not being possible, I figured I would reach out and try to video conference with my family. However, we found that we don’t all have iPhones or Androids, laptops, or even computers running the same OS. Plus we are all Zoom’d out after work, so we didn’t want to use Zoom. While taking a college class, I found out about Jitsi and decided I would try to create my own hosted video conference server. The thing I like about Jitsi is that it has its own web client, so you can host and join meetings directly from the server using any web browser on any OS. It also has Apple App Store and Google Play Store apps so you can connect that way; however, I had issues with the Google Play version of the app connecting to my server. The problem turned out to be certificates: the Android app did not trust the SSL certificates on my server. I will detail what I did to fix this issue further down.

    This blog will detail how I did it using Ubuntu 20.04 as well as securing the server down so that not just anyone can use it and host video conferences.

The first thing you need is a spare server capable of hosting the video conferencing software and the number of users you want per conference. There are many forum discussions about how to size your server; what I used for mine is a 4-core CPU, 8GB of RAM, and 80GB of storage. It has a 1Gb NIC connected to my external network pool so that it is reachable directly from the Internet. I have had over 15 people conferencing at a time and it never went above 40% CPU utilization and never maxed out the network, and the experience was perfect. You can adjust as you see fit.

First, install Ubuntu 20.04.1 on the server. I use the Live Server ISO, configure the server and SSH, and install my SSH keys. I disable SSH password authentication since I use keys only. I don’t install any snaps since I don’t need them on this server. Once the OS installation is complete, reboot the server and log in.

    Next, I update all the repos and packages to make sure my system is fully updated:

    $ sudo apt update && sudo apt upgrade -y

    Next, I setup UFW to secure the server so that it is protected from the outside:

    $ sudo ufw allow from xxx.xxx.xxx.xxx/24 to any port 22
    $ sudo ufw enable

    xxx.xxx.xxx.xxx is my internal network.

    Next, I copy my SSL certificates and SSL keys to the server. I use the default locations in /etc/ssl/ to store my keys and certificates. I put the key in private/ and the certificates in certs/.

Now, before we can install Jitsi, I need to make sure the hostname and /etc/hosts are configured for Jitsi to work correctly. I set the FQDN for my server using hostnamectl:

    $ sudo hostnamectl set-hostname meet.domain.name

You can verify that it took by running hostname at the prompt; it should return the name you just set.

Next, you have to modify the /etc/hosts file and put the FQDN of your server in place of the localhost entry.
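As a rough sketch of what that entry can look like (x.x.x.x is a placeholder for your server’s address; some setups simply map the FQDN to the 127.0.1.1 line instead):

    x.x.x.x   meet.domain.name   meet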

    Now, I create the firewall rules for Jitsi.

    $ sudo ufw allow 80/tcp
    $ sudo ufw allow 443/tcp
    $ sudo ufw allow 4443/tcp
    $ sudo ufw allow 10000/udp

Now we are ready to install Jitsi. Luckily, there is a repo we can use, but we have to have the system trust it, so first we download the Jitsi GPG key using wget:

$ wget https://download.jitsi.org/jitsi-key.gpg.key
    $ sudo apt-key add jitsi-key.gpg.key 
    $ rm jitsi-key.gpg.key

    Now we create the repo source list to download Jitsi:

$ sudo vi /etc/apt/sources.list.d/jitsi-stable.list

Press i to enter insert mode and add the following line:

deb https://download.jitsi.org stable/

Press the <esc> key to get back to the vi prompt and then type :wq to save and quit vi.

    Now, run sudo apt update to refresh the repos on your system and then you are ready to install Jitsi by running:

    $ sudo apt install jitsi-meet

You will be brought to a prompt asking for the server’s name; enter the FQDN of your server here. Next you will be asked about certificates. Select “I want to use my own certificates” and enter the paths to your certificate and key.

That’s all it takes to install Jitsi. You now have a server that people can connect to in order to join and create video conferences. However, I don’t want just anyone to be able to create conference rooms on my server, so I locked it down by modifying some of the configuration files.

The first configuration file to modify is /etc/prosody/conf.avail/meet.domain.name.cfg.lua. This file tells Jitsi whether to allow anonymous room creation or to require authentication. Open the file in vi and find this line:

    authentication = "anonymous" 

    and change it to:

    authentication = "internal_plain"

    Then, go all the way to the bottom of the file and add the following line:

    VirtualHost "guest.meet.domain.name"
         authentication = "anonymous"
         c2s_require_encryption = false

Save the file and exit. These settings make it so that only someone authenticated in Jitsi can create a room, while guests are allowed to join a room once it has been created.

Next we need to modify the /etc/jitsi/meet/meet.domain.name-config.js file. Edit it and uncomment the following line:

    // anonymousdomain: 'guest.meet.domain.name',

    You uncomment it by removing the // from the front of the line. Save the file and quit vi.

The last file to modify is /etc/jitsi/jicofo/sip-communicator.properties. Go all the way to the bottom of the file and add the following line:

    org.jitsi.jicofo.auth.URL=XMPP:meet.domain.name

    Now you are ready to add users to the system that you want to have the permissions to create rooms on the server. You will use the prosodyctl command to do this:

    $ sudo prosodyctl register <username> meet.domain.name <password> 

    You can do this for as many users as you want.

    Last, restart all the Jitsi services so that everything you changed will take effect:

    $ sudo systemctl restart prosody
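Restarting prosody covers the authentication change, but since jicofo and the videobridge also read this configuration, I bounce them as well. A hedged sketch, assuming the service names that the jitsi-meet packages install on Ubuntu 20.04:

    $ sudo systemctl restart jicofo
    $ sudo systemctl restart jitsi-videobridge2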

You can now log in to your meet server by opening a web browser to it and creating a room; you will be prompted for the Jitsi ID you just created. It will be <username>@meet.domain.name with the password you set using the prosodyctl command.

    Android Users and Jitsi

As I mentioned earlier, you can download the Jitsi app from the Apple App Store or the Google Play Store. However, there is an issue with the Android version of the Jitsi app where it only trusts Jitsi’s servers hosted on jitsi.org. To get around this, I emailed my friends and family the certificate for my Jitsi server and they installed it on their devices. Once they did this, they were able to connect to my Jitsi server using the Android app. iPhone and web users do not have this issue.

    Conclusion

    I hope you liked this blog entry on installing your own video conferencing server. If you have any questions, or just want to leave a comment, leave it below.

Thanks and Happy Holidays!

  • RetroPie on the Intel NUC Hades Canyon


Hey everyone! Been a while since I wrote a blog, and I figured this would be a good one. Because of COVID-19, with everyone social distancing and schools closed down, I decided to do a project. My boys and I love retro gaming. We have used RetroPie on Raspberry Pis in the past and loved it, but we wanted to play some more modern games from the Wii, GameCube, or PlayStation 2, and those just can’t be done on the Raspberry Pi version of RetroPie.

I read a blog about running RetroPie on an Intel NUC, which I followed, and for the most part I got it working right. Some of my lessons learned are in this how-to. But after a while we noticed that it just didn’t perform like it does on a PC or laptop with a decent graphics card. So after doing some research I found Intel’s NUC gaming line and purchased a Hades Canyon, which I feel will give us the performance we are looking for. Plus it definitely gives us the storage, since PS2 games are HUGE!

    So, without further ado, here is how to get RetroPie working on your Hades Canyon (or regular NUC):

The first thing you are going to want to do is install Linux on the NUC. Burn the Ubuntu 18.04 Server ISO to a thumb drive. I downloaded the Live Server version, but you can use the alternate installer. I recommend Server since you don’t need the full desktop experience, and I can use the space savings for more ROMs!!

Next, plug it into the front USB port. Connect a keyboard to the other USB port and connect the video and network. Luckily I have a mini switch next to my TV, so I just connected to that. The Hades Canyon and NUC have WiFi, but I don’t use it.

    Power on the NUC and it will automatically boot off of USB. You’ll get the GRUB menu and it will start the Live Installer for Ubuntu. I kept everything default, full disk, no LVM, no Encryption, DHCP on the Ethernet settings, and clicked install.

    Setting up the user, I kept this simple and as close to the regular RetroPie settings:

    • User: Retro Pie
    • Server Name: retropie
    • Username: pi
    • Password: raspberry

    I also told it to enable SSH and to download my public SSH keys from Launchpad.net and to allow remote password authentication.

After a few minutes, the install completed; I unplugged the USB thumb drive and rebooted the NUC. It restarted into Ubuntu 18.04.4.

I like to use the HWE kernel for my RetroPie box since newer kernels have new features that RetroPie might make use of. So I do the following:

    sudo apt update
    sudo apt upgrade
    sudo apt install --install-recommends linux-generic-hwe-18.04

    And then you reboot the NUC again. Once it comes back up you are ready to do RetroPie installation. This is what I did:

    SSH into the NUC

Set it so that the pi user can execute sudo without a password (makes scripting much easier):

    sudo sed -i -e '$a\pi ALL=(ALL) NOPASSWD:ALL' /etc/sudoers

    Type the pi password and you’re all set. You won’t have to type that password again if you use sudo.
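If you would rather not edit /etc/sudoers itself with sed, a drop-in file under /etc/sudoers.d/ accomplishes the same thing; a quick sketch (the file name is arbitrary):

    echo 'pi ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/010-pi-nopasswd
    sudo chmod 0440 /etc/sudoers.d/010-pi-nopasswd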

Now to add the universe repo and all of the RetroPie dependencies:

    sudo apt-add-repository universe
    sudo apt update -y
    sudo apt upgrade -y 
sudo apt install xorg openbox pulseaudio alsa-utils menu \
libglib2.0-bin python-xdg at-spi2-core dbus-x11 git \
dialog unzip xmlstarlet --no-install-recommends -y

Now we need to create an OpenBox autostart script that opens a terminal and starts EmulationStation:

    mkdir -p ~/.config/openbox
echo 'gnome-terminal --full-screen --hide-menubar -- emulationstation' >> ~/.config/openbox/autostart

    Next, create the .xsession file:

    echo 'exec openbox-session' >> ~/.xsession

    Now we need to make it so X11 starts on reboots:

    echo 'if [[ -z $DISPLAY ]] && [[ $(tty) = /dev/tty1 ]]; then' >> ~/.bash_profile
    sed -i '$ a\startx -- -nocursor >/dev/null 2>&1' ~/.bash_profile 
    sed -i '$ a\fi' ~/.bash_profile

Next, we make it so that the pi user automatically logs in; that way EmulationStation is what you see on the screen:

sudo mkdir /etc/systemd/system/getty@tty1.service.d
sudo sh -c 'echo [Service] >> /etc/systemd/system/getty@tty1.service.d/override.conf'
sudo sed -i '$ a\ExecStart=' /etc/systemd/system/getty@tty1.service.d/override.conf
sudo sed -i '$ a\ExecStart=/sbin/agetty --skip-login --noissue --autologin pi %I $TERM' /etc/systemd/system/getty@tty1.service.d/override.conf
sudo sed -i '$ a\Type=idle' /etc/systemd/system/getty@tty1.service.d/override.conf

    Now we are ready to download RetroPie from the Git Repo and run the installation scripts:

    git clone --depth=1 https://github.com/RetroPie/RetroPie-Setup.git
    sudo RetroPie-Setup/retropie_setup.sh

This will start the RetroPie installer. Accept the EULA and select Basic Installation. Select Yes to install all packages from Core and Main. The system will then start downloading and building RetroPie directly on the NUC.

    Note: This takes some time. Grab a beverage, some food. I’ll wait.

    Once building and installation is complete, you can reboot the NUC from the menu. However, I have a few more customizations that I do.

    I use an Xbox One Controller with my NUC, so I have to install the driver for it on Ubuntu. To do that, cursor down to Driver, and select the xboxdrv and install from source. It takes about 3 minutes to download and build the driver. When it completes, you are back at the Menu. Select Back from the bottom to go up a level, and do it again to get back to the main menu.

I also install the Dolphin emulator and the PlayStation 2 emulator, which are found in the experimental section. They can be tricky to set up and get working correctly. To get Dolphin to recognize the Xbox controller correctly, I actually had to change my .bash_profile to enable cursor and window mode so that I could use the mouse on the screen and point and click through the settings. Now that I have that done, I backed up the configuration from the old NUC and just copied it to the new one with ease.

I also use Dolphin for Wii games, and I have a Dolphin Bar so that I can use my Wii controllers. This was a slight bear to set up. First, you need to create a couple of udev rules:

    sudo touch /etc/udev/rules.d/10-local.rules
    sudo vi /etc/udev/rules.d/10-local.rules

    Now paste the following into your rules file:

    #GameCube Controller Adapter
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0337", TAG+="uaccess"
    
    #Wiimotes or DolphinBar
    SUBSYSTEM=="hidraw*", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0306", TAG+="uaccess"
    SUBSYSTEM=="hidraw*", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0330", TAG+="uaccess"

Now you can plug the Dolphin Bar into a USB port and connect your controller. Make sure the Dolphin Bar is in Mode 4 (emulation mode), then start the Dolphin emulator by running /opt/retropie/emulators/dolphin/bin/dolphin-emu and select Controllers. In the middle of the dialog, select “Emulate the Wii’s Bluetooth Adapter”, and for the Wii Remotes, select Real Wii Remote. You won’t be able to configure them, but that is OK. Also check Continuous Scanning and save. Restart the NUC and it will work. To verify, start Dolphin again and make sure the Wiimote is connected to the Dolphin Bar; when the game starts, the Wiimote will rumble, letting you know it’s connected.

Also, PlayStation and PlayStation 2 emulation require BIOS files to work.

    After installing all the Emulators I want, I then go back to the main menu and select Configuration / tools.

I then configure Samba so that I can access the NUC’s ROMs and upload them over the network. After it installs Samba, select Install RetroPie Samba shares.

    And now you are all done. This is where I reboot the device.

Now we are ready to set up the Xbox One controller. The first thing is to go to RetroPie Configuration in EmulationStation and select Bluetooth. This installs the required Bluetooth libraries and binaries. Next, we need to SSH back into the box and make a settings change: Xbox One controllers don’t play well with ERTM, so it needs to be disabled. Create a bluetooth.conf in /etc/modprobe.d/ and add the following line to the file:

options bluetooth disable_ertm=Y
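If you want to do that in one shot from the shell instead of an editor, something like this should do it (it just writes the line above into the file):

    echo "options bluetooth disable_ertm=Y" | sudo tee /etc/modprobe.d/bluetooth.conf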

Reboot the NUC again. Now, go back into RetroPie Configuration in EmulationStation and select Bluetooth. Put the Xbox controller into pairing mode by turning it on and holding the small button near the left bumper on top of the controller until the Xbox button flashes quickly. Then in EmulationStation, select Search for controller. After a few moments it will be listed as Xbox Wireless Controller. Select it, choose the first option for the connection, and it will connect successfully. Back on the main screen in EmulationStation, press Enter or Start on another controller and select Configure Input. Select Yes, and when it asks you to press ‘A’ on the controller, press A on the Xbox One controller and you can configure it.

These next steps are just to make the system pretty. I don’t like the default Linux boot text scrolling on my TV, so I use Herb Fargus’s boot themes with Plymouth and set the default to the RetroPie Pac-Man theme:

    sudo apt update
    sudo apt install plymouth plymouth-themes plymouth-x11 -y
git clone --depth=1 https://github.com/HerbFargus/plymouth-themes.git tempthemes
    sudo cp -r ~/tempthemes/. /usr/share/plymouth/themes/
    rm -rf tempthemes
sudo update-alternatives --install /usr/share/plymouth/themes/default.plymouth default.plymouth /usr/share/plymouth/themes/retropie-pacman/retropie-pacman.plymouth 10
    sudo update-alternatives --set default.plymouth /usr/share/plymouth/themes/retropie-pacman/retropie-pacman.plymouth
    sudo update-initramfs -u
    sudo cp /etc/default/grub /etc/default/grub.backup
    sudo sed -i -e 's/GRUB_TIMEOUT=10/GRUB_TIMEOUT=2/g' /etc/default/grub
    sudo sed -i -e 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="quiet splash"/g' /etc/default/grub
    sudo update-grub

    This piece hides the last login information before starting OpenBox:

sudo sed -i -e 's/session optional pam_lastlog.so/#session  optional pam_lastlog.so/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_motd.so motd=\/run\/motd.dynamic/#session optional pam_motd.so motd=\/run\/motd.dynamic/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_motd.so noupdate/#session optional pam_motd.so noupdate/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_mail.so standard/#session optional pam_mail.so standard/g' /etc/pam.d/login

    And to hide the terminal in Emulation Station:

    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ use-theme-colors false' ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ use-theme-transparency false' ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ default-show-menubar false' ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ foreground-color '#FFFFFF'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ background-color '#000000'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ cursor-blink-mode 'off'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ scrollbar-policy 'never'" ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ audible-bell false' ~/.bash_profile 
    cp /etc/xdg/openbox/rc.xml ~/.config/openbox/rc.xml 
    cp ~/.config/openbox/rc.xml ~/.config/openbox/rc.xmlbackup 
    sed -i '//a ' ~/.config/openbox/rc.xml 
    sed -i '//a true ' ~/.config/openbox/rc.xml 
    sed -i '//a no ' ~/.config/openbox/rc.xml 
    sed -i '//a below ' ~/.config/openbox/rc.xml 
    sed -i '//a no ' ~/.config/openbox/rc.xml 
    sed -i '//a yes ' ~/.config/openbox/rc.xml 
    sed -i '//a ' ~/.config/openbox/rc.xml

And lastly, if you want to suppress cloud-init, let’s just remove it since we don’t need it:

    sudo apt purge cloud-init -y
    sudo rm -rf /etc/cloud/
    sudo rm -rf /var/lib/cloud/

And you’re done. You may need to do some tweaks with the graphics to get good performance. I found that running in 4K tends to slow the games and audio down to the point of being unplayable, but in 1080p mode they work much better. When RetroArch starts a game and asks you to press a button to configure, press the A button and select the emulator resolution from the list. Find one that works best for you.

    Happy Retro Gaming!

  • Manually Migrating VM’s from one KVM host to another

Hello everyone! Been a while since I posted a blog. This one was a doozy. I tried to find this information online and there was a ton to peruse through. Luckily, I was able to piece a few of the guides together to finally get it working the way I needed for my environment.

So, this is what I was doing: I needed to retire a KVM host, but it was running a couple of VMs that I couldn’t migrate using virt-manager or virsh migrate, so I decided I would try to just move the qcow2 files and rebuild the VMs from scratch on the new host.

    That did not work at all.

    So, after researching some solutions, I finally have one that works, and I’m going to share it with you now.

    NOTE: I shared my public ssh keys between the hosts so I don’t need to type passwords in when ssh’ing and scp’ing between the hosts.

    First, power off the VM on the original host if it is running:

virsh shutdown <vm name>

Now, I had to chown the storage file, since I don’t enable root on any of my systems and I needed to scp the file from the old host to the new host:

    sudo chown wililupy:wililupy server.qcow2

    I could then scp it to the server where I keep my vm storage.

NOTE: The path for my VM storage is the same on both hosts; if it is different for you, you are going to have to make some modifications to the XML file that is part of the next step.

    scp server.qcow2 host2:/data/vms

    I then ssh into the new host and chown the file back to root:root.

    Back on the first host machine, I execute:

    virsh dumpxml server > ~/server.xml

I then change to root using sudo and copy the NVRAM file for the VM, since I use UEFI for my VMs:

    sudo -s
    cd /var/lib/libvirt/qemu/nvram
    cp server_VARS.fd ~wililupy
    exit
    cd ~
    sudo chown wililupy:wililupy server_VARS.fd

    I then scp’d the server_VARS.fd and the server.xml files to the new host:

    scp server_VARS.fd server.xml host2:~

I then ssh’d to the new host and performed the following:

    virsh define server.xml
    sudo chown root:root server_VARS.fd
    sudo mv server_VARS.fd /var/lib/libvirt/qemu/nvram

I was then able to start my VMs on my new host and everything worked perfectly.

    virsh start server

NOTE: My new host and old host use the same network bridge names and the same VM storage paths, which made this migration much easier. If yours are different, you will have to modify the XML file to match your new host’s information; otherwise the network will not work or the VM won’t find its storage. A sketch of what to look for follows.
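For reference, a hedged sketch of where those settings live: virsh edit opens the domain XML on the new host, and the disk path and network bridge are the usual things to check (the values below are placeholders from my layout):

    virsh edit server

Inside the XML, look for lines like:

    <source file='/data/vms/server.qcow2'/>
    <source bridge='br0'/>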

    Leave me a comment if this helps or you need some pointers!

  • Deploying Ubuntu Core 18 with MAAS 2.5

Foreword

Ubuntu Core is a snap-only, lightweight version of Ubuntu. The kernel, the root file system, and the snap daemon are all packaged and operated as snaps, unlike the traditional layout of a Linux distribution. Ubuntu Core is designed around IoT devices due to its light weight, transactional updates, and security. Deployment usually happens in the factory, where the software is “flashed” onto the device’s storage medium.

MAAS, or Metal-As-A-Service, is an application from Canonical that typically runs on a server or Top of Rack (ToR) switch and is used to manage and deploy operating systems such as Windows, Ubuntu, or Red Hat Linux onto bare metal servers. It is a way to treat bare metal servers like cloud resources, managing them as easily as cloud instances.

With Ubuntu Core 18, cloud-init, which is the provisioning and configuration piece of Ubuntu, comes built in. MAAS uses cloud-init to set up network access, initialize users, copy SSH keys to the device, and set up storage and partitions.

This document explains how to set up MAAS to deploy Ubuntu Core to devices, which devices are compatible with MAAS for this purpose, and what customizations have to be made to the Ubuntu Core image for seamless integration with MAAS and the target devices.

    MAAS Setup

Installing MAAS is fairly easy. There are a few methods of deployment: you can install it from the main repo in Ubuntu Server after the server has been deployed, install MAAS during the initial installation of Ubuntu Server, or install it via snap packages on any Linux distribution capable of running snaps. This document covers installing MAAS from the maas/next PPA, which is the beta release of MAAS 2.5, on Ubuntu 18.04.2. More information on other installation methods can be found in the MAAS documentation.

    Deploy Ubuntu Server

    1. Download the latest version of Ubuntu Server 18.04.2 by clicking here.
    2. Burn the media to a USB or DVD and install Ubuntu Server by following the Installation Guide located here.
    3. Once Ubuntu Server is running, do an update to make sure you have the latest versions of the software:
      sudo apt update && sudo apt upgrade -y
4. Once this completes, you need to add the maas/next PPA to your APT sources:
      sudo apt-add-repository -yu ppa:maas/next
    5. Now, install MAAS on the server:
      sudo apt install maas -y
    6. Once MAAS finishes installing, you need to configure an admin user account. Use the following command to do this:
      sudo maas createadmin --username=admin
Enter a password and email address for the admin user; now you are ready to log in to the web UI. However, there is one small step left, since some of the following steps require the command line.
    7. Now, we need to save the API key to login to the MAAS command line. This is how we will upload the Ubuntu Core image to MAAS so that it can be deployed on devices. Use the following command to save the key:
      sudo maas apikey --username=admin > ~/apikey
      This will save the API key to your home directory as apikey. This will be used later when we actually upload the Ubuntu Core image to the server.

    Configuring MAAS

    1. Now, we need to finish setting up MAAS. Login to the web UI by opening a browser window to:
      http://maas.server.address:5240/MAAS
    2. After you login, you are presented with a screen asking for DNS forwarders. Here you can add your own internal DNS servers, or use the public ones. You can also leave it blank, but I recommend at least entering 8.8.8.8.
You also need to have at least one image downloaded and installed. By default, MAAS downloads the x86_64 version of Ubuntu Server 18.04. You can add others if you want, but this should suffice for now.
    3. Scroll all the way to the bottom and click Continue.
    4. On the next screen, import any other SSH keys you want MAAS to be able to provision to servers/devices and then click import and then click Go to dashboard to go to the Main MAAS UI.
    5. From the main dashboard, click the Subnets tab. We need to setup DHCP on the network that will be managing the servers and devices.
    6. Click on the VLAN for the subnet you want to manage DHCP for. Click the ‘untagged’ label.
    7. From the Action button, select Provide DHCP and configure the  address pool and the gateway for the subnet and then click Provide DHCP button.
    8. MAAS is configured now to manage and deploy images that are connected to the subnet that you just configured. You can click on the Machines tab and you are ready to start enlisting, commissioning, and deploying Nodes.

    OPTIONAL: Setup MAAS as a Router using UFW

    This step is optional. If your network that is managing the subnet has a router or a policy that routes traffic to the internet or other networks, this step can be bypassed. If you are testing on a Canonical OrangeBox this can be skipped since the mini switch in the box has routing capabilities built in. However, if you are running this in a VM environment or just testing locally on a subnet you created in your home lab, this will help get your clients you deploy with MAAS to get to the internet and work properly. I used UFW since it is built in to Ubuntu and fairly easy to configure, just a lot of rules for the various ports that MAAS uses to do its “magic.”

    Below is a list of ports that MAAS uses:

Port                Use
7911/TCP            MAAS
22/TCP              SSH
53/TCP and UDP      DNS
3128/TCP            iSCSI
8000/TCP            Squid
5240/TCP and UDP    MAAS
5247/TCP and UDP    MAAS
5248/TCP            MAAS
5249/TCP            MAAS
5250/TCP            MAAS
5251/TCP            MAAS
5252/TCP            MAAS
5253/TCP            MAAS
67/UDP              DHCP
68/UDP              DHCP
69/UDP              TFTP
123/UDP             NTP
5353/UDP            Multicast DNS
5787/UDP            MAAS

    Now, for the procedure to enable NAT and forwarding in MAAS using UFW.
    NOTE: It is much easier to work with the firewall as root. You can switch to root using the following command:
    sudo -s

1. Set up the forwarding policy for UFW by modifying /etc/default/ufw and changing the following:
  DEFAULT_FORWARD_POLICY="ACCEPT"
2. Uncomment net/ipv4/ip_forward=1 in /etc/ufw/sysctl.conf to allow IPv4 forwarding. You can also uncomment the IPv6 line if you need it.
    3. Next, modify the /etc/ufw/before.rules to create the NAT table and the source network and interface by adding the following BEFORE the filter rules:
      # NAT table rules 
      *nat 
      :POSTROUTING ACCEPT [0:0] 
      
      # Forward traffic through the external interface on host 
      -A POSTROUTING -s 172.16.236.0/24 -o ens33 -j MASQUERADE 
      
  # Don't delete the 'COMMIT' line or these NAT table rules
  # won't be applied
      COMMIT 
    4. Now we are ready to create the firewall rules to allow connectivity to MAAS and the various services. Below are the commands to do this:
      ufw allow ssh 
      ufw allow bind9 
      ufw allow ntp 
      ufw allow tftp 
      ufw allow 67:68/udp 
      ufw allow 7911/tcp 
      ufw allow 3128/tcp 
      ufw allow 8000/tcp 
      ufw allow 5240/tcp 
      ufw allow 5240/udp 
      ufw allow 5247:5253/tcp 
      ufw allow 5247:5253/udp 
      ufw allow 5787/udp 
    5. Now we can start the firewall and test:
      ufw enable 
      ufw status
      You will get a list of the firewall rules. You can connect a device to your internal network managed by MAAS, and try to ping Google.com or 8.8.8.8 and verify that you get a return echo. If that is all working properly, then you have successfully setup MAAS to act as a router for clients on the managed network.

    Ubuntu Core Image Setup

    Ubuntu Core 18 can deploy from MAAS out of the box. However, there is currently a bug in console-conf where if MAAS manages the device and configures the network, console-conf will fail on the network setup and go into a loop where you cannot configure the device. You can still manage to ssh and login using your keys that are installed to the device via MAAS managed keys, but you cannot login locally on the console of the device because it does not know it is fully configured.

There are also some limitations on what you can configure for the device through MAAS. For example, you cannot change the filesystem and partition layout, since the image is basically just dd’d to the device; after installation and first boot, Ubuntu Core resizes the partition to fit the device and creates the partitions it needs to boot properly. The only parts of cloud-init that Ubuntu Core uses are the network configuration and user creation: it creates a user named ubuntu on the device and copies the SSH keys stored for the MAAS user deploying the device onto it for remote login and administration.

To get around the previously mentioned bug, before we install the image into MAAS, we have to add a file so that console-conf doesn’t run after first boot. The following procedure goes into this in detail.

    Download Ubuntu Core 18 and upload it to the MAAS Server

    1. Download the latest version of Ubuntu Core 18 from here.
    2. SCP the image to the MAAS server:
      scp ubuntu-core-18-amd64.img.xz maas.server.address:~
    3. SSH to the MAAS server:
      ssh maas.server.address
    4. Login to the MAAS CLI using the APIkey you created previously:
      maas login admin http://localhost:5240/MAAS `cat ~/apikey`
    5. Upload the image to MAAS:
      maas admin boot-resources create name=ubuntu-core/uc18 \
      title="Ubuntu Core 18" filetype=ddxz \ 
      content@=ubuntu-core-18-amd64.img.xz architecture=amd64/generic 
6. Verify that the image uploaded by going to the Images tab in the MAAS UI; at the bottom, under Custom Images, the image will be listed. (You can also verify from the CLI, as shown after this list.)
    7. You are now ready to deploy Ubuntu Core to a device managed by MAAS.
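If you prefer to check from the command line instead of the UI, the same information can be read back from the boot-resources endpoint; a hedged sketch, run while still logged in to the MAAS CLI as in step 4:

    maas admin boot-resources read | grep -i ubuntu-core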

    Implementation

So now we are ready to deploy Ubuntu Core 18 onto a device. First, make sure that your device is connected to the network, that there is no OS installed on it, and that it is set up to PXE boot. Make sure you connect the power to a managed UPS outlet, or, if the device has remote power-on/off capability, that you have the credentials or settings and that they are compatible with MAAS. For this demo I used an Intel NUC, which uses Intel AMT for power management. I configured AMT in the NUC’s BIOS and then entered those settings when I got to that step.

MAAS has three phases when it acquires new devices. First, the device needs to be powered up; MAAS automatically detects a new device on the network when it asks for a DHCP address. MAAS boots an ephemeral image, probes the device for network connectivity and power management, powers off the device if it can, and then adds it to the MAAS database. This is called “Enlistment.”

Once this step is completed, the operator can then “Commission” the device from the MAAS UI. This boots an ephemeral image on the device, probes all the hardware, gathers more detailed information, and can run tests on the device and even upgrade firmware if required. Once this step is completed, the device is ready to be provisioned, or deployed.

    To provision a device, once it is in the Ready state, you can select it from the Machine tab in MAAS, and from the Action button, you select Deploy.

    Below will be the steps to enlist, commission and deploy Ubuntu Core 18.

    Enlistment

    1. Connect the device to the network being managed by MAAS. Make sure it is fully configured to use remote power up and down.
    2. Power on the device. You can either connect to the console of the device or install headless and watch MAAS. Once the device is enlisted, it will show up in the Machine tab.
1. Doing this sometimes causes the system to not know which power type to use. You may have to configure the power type for the device for remote power-on. Intel NUCs use AMT, but some devices use IPMI. You can also select managed PDUs. If it is a VM, you can use KVM or VMware; VirtualBox is not supported.
3. When enlistment is finished, the machine will show up in the Machine tab in MAAS with a New status.
    4. If enlistment couldn’t detect the power type, you will have to manually update it. Select the node.
    5. Click on Configuration
6. From the Power type pull-down, select the power type that the device uses and enter all the applicable information. If it is entered correctly, MAAS will show the machine’s power state in the UI.

    Commissioning

    When the device is in a “New” status, it needs to be commissioned before it can be deployed. From the Machine tab screen, perform the following:

    1. Select the device from the Machine tab.
    2. From the Take action button, select Commission.
3. You can watch the hardware tests and the commissioning scripts run from the respective tabs in the Machine view in the MAAS UI.
    4. Once all the hardware tests and commission scripts have passed, the machine will be powered off and put in the Ready state.

    Deployment

    The device is now ready to have Ubuntu Core 18 installed. Follow this procedure for deploying Ubuntu Core.

    1. From the Machine tab in the Web UI, select the device you want to deploy Ubuntu Core to.
    2. From the Machine view of the device, you will see various tabs. These tabs let you customize the installation of the device.
3. Click on the Interfaces tab. This tab lets you configure the network address of the device. To customize it, click the Action icon for the interface and select Edit Physical.
    4. The Storage tab does not function with Ubuntu Core, so any selection you make will not apply to Ubuntu Core. This is due to how Ubuntu Core is installed on the device. Since it is just dd’d on to the device, it will auto partition on first boot so that Ubuntu Core will work as designed with the writable partition and the boot/EFI partitions.
    5. From the green Take action button, select Deploy
    6. From the Choose your image pull down, select Ubuntu Core.
    7. Make sure that Core 18 is the image that it will install.
    8. Select Deploy Machine
    9. Once deployment is finished, the device will remain powered on and have Core 18 in the status. You can now SSH to the device with the user ubuntu.

NOTE: Even though MAAS says the device is deployed, if you look at the console of the device, it will be sitting at the first-boot configuration screen. This is a bug in console-conf: even though the device is configured, console-conf is not aware of it. If you try to configure the device from the console, it will fail at the network setup with an error. If you select Done, you come back to the same screen; if you select Cancel, you end up on the main configuration screen and can’t move past it. To work around this, we need to let console-conf know that the device is already configured.

    1. SSH to the device.
      ssh ubuntu@device.name
    2. A file needs to be placed in /var/lib/console-conf called complete.
      sudo touch /var/lib/console-conf/complete
3. Reboot the device; when it comes back online, you will be presented with the normal login screen on the console. However, you cannot log in via the console due to how Ubuntu Core 18 handles security (SSH key logins only).

    Conclusion

    With this document, you can now deploy Ubuntu Core 18 to devices that are managed by MAAS. I hope that this has been helpful and informative. 

  • Using UFW to secure a server


    Hello Everyone! Hope you’re doing well!

So for the last few weeks, I have been dealing with a DoS attack against my network. After spending a couple of days with Comcast network engineers, we finally figured out that my mail server was being attacked. Once we disabled the NIC on the server, Internet service would come back immediately and everything appeared to work properly. Looking at the auth logs, I noticed that port 80 and port 22 were getting bombarded by the attack.

Since the Comcast gateway has a poor firewall, I looked into getting an upgraded PAN to meet my needs, but unfortunately that was going to set me back about $4k. I also looked at Untangle, but ran into configuration issues with the Comcast gateway and my static IPs.

So, until I save my pennies for the upgraded PAN, I’m using UFW to block the attacks.

UFW, or Uncomplicated Firewall, is the default firewall for Ubuntu. Since my mail server runs Ubuntu, I decided to use it, and it is fairly easy to set up and use.

The first thing I noticed while being attacked is that specific Chinese IP addresses were hitting the server on port 22 and port 80, the SSH port and the unencrypted default web server port. So those were the first ones I set up rules for. One thing to note is that when you enable UFW, it blocks all incoming traffic by default. That is generally a good thing, but when you rely on email for your job, blocking everything is not good. So I needed to find out which ports had to be open to the public so that mail would keep working, and which ones could be restricted to the internal network so that I can still work on the servers when I need to.

The first thing I did was look at what ports my server was exposing. I did this with the netstat command:

    netstat -an | more

This command outputs all the interfaces and ports the server is listening and communicating on. It also tells you who is connected to which service if they have an open session, so this command is pretty useful if you are getting into security.
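A slightly more focused variant that I find handy: ss (with the flags below) lists just the listening TCP sockets along with the owning process, which makes it quicker to map ports to services. A quick sketch:

    sudo ss -tlnp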

    To make it easy, I needed imap, pop, smtp, ssh, http, https, and ldap.

    I also needed to know what can be internal only and what needs to be exposed to the public so that my email server can still get email.

    Here is what I came up with:

Port                   Visibility
22 (SSH)               Internal
25 (SMTP)              Public
443 (HTTPS)            Public
993 (IMAPS)            Public
465 (SMTPS)            Public
587 (SMTP Submission)  Public
80 (HTTP)              Internal
110 (POP3)             Internal
143 (IMAP)             Internal
389 (LDAP)             Internal
995 (POP3S)            Public

    So, now that I have the required information, I can create the rules. They are as simple as doing the following command:

    sudo ufw allow from x.x.x.x/24 to any port 22

This rule allows only my internal IPv4 network to connect to the server on port 22. I did this for all the internal-only ports. I also added a rule for the mail server’s own external IP address with a /32, so the server can talk to itself on the internal ports. Might have been overkill, but better safe than sorry. For my public rules I did the following:

    sudo ufw allow from any to any port 443

    This will also create IPv6 rules as well.

If you accidentally create a rule and it isn’t working properly, you can remove it by first looking up its number:

    sudo ufw status numbered

    and then:

    sudo ufw delete [rule #]

    Once everything is done, enable the firewall so that the rules will be applied:

    sudo ufw enable

If you ever need to stop the firewall, you can disable it with sudo ufw disable and it will go back to being unsecured.
I still had to reboot the server after creating the rules and enabling the firewall, since existing sessions were still open, but after the reboot I haven’t had any more issues and email still works. You can look at the syslog to see all the blocks, which is somewhat fulfilling.

    If you have any questions, or if you have any comments, please leave them below!
    Thanks!