Virtualization using LXD

LXD Host Details
hostname: infrabase1
Network: 10.1.65.0/24
IP Address : 10.1.65.9
Subnet Mask: 255.255.255.0
Gateway: 10.1.65.1
DNS: 10.1.65.1, 8.8.8.8

Server OS: Ubuntu 20.04
Edition: LTS, server 
sudo user: kedar

FTP client: FileZilla
SSH client: terminal, Remmina
Text editors: gedit, Sublime Text

User PC Details
PC type: Desktop
OS: Ubuntu Desktop
IP Address: 10.1.65.160
sudo user: kedar

Introduction

LXD is a next generation Linux system container manager. It offers a user experience similar to virtual machines but uses Linux containers instead. It is image based, with pre-made images available for a wide range of Linux distributions, and is built around a very powerful yet fairly simple REST API. Since LXD 4.0, both containers and virtual machines can be managed through LXD. More information is available in the official LXD documentation.

Preparing the LXD Host

  • SSH from the local machine to the LXD host
ssh kedar@10.1.65.9
  • Ensure infrabase1 (the host server) is updated with the latest patches
sudo apt update && sudo apt upgrade -y
  • Remove unwanted software
sudo apt autoremove
  • Restart the host server
sudo init 6
  • Adding iSCSI storage
    • We shall be using network-based storage for storing the containers and images instead of the local hard disk of the infrabase1 server. For that I will use a WD NAS box to create an iSCSI network volume of 225G with the IP address 10.1.65.50. The steps to create an iSCSI target vary from NAS box to NAS box, so I will not show how to create it. I will be using
      • The portal address as: 10.1.65.50
      • The target id as: iqn.2013-03.com.wdc:aptestore1:infrabase1
    • Install open-iscsi on the infrabase1 server
sudo apt install open-iscsi
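With open-iscsi installed, you can verify that the portal actually exposes the expected target before logging in. This is a hedged check using the 10.1.65.50 portal from above:

sudo iscsiadm -m discovery -t sendtargets -p 10.1.65.50:3260
# the target iqn.2013-03.com.wdc:aptestore1:infrabase1 should appear in the output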
  • Modify the file /etc/iscsi/iscsid.conf (this needs root privileges)
from
node.startup = manual

to
node.startup = automatic

This ensures that the drive gets attached automatically when the server boots if the NAS box is available.
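If you prefer to make this change non-interactively, a one-line sed edit along these lines should work (a sketch; the -i.bak keeps a backup of the original file):

sudo sed -i.bak 's/^node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf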

  • Adding iSCSI drive to the server as a hard disk
sudo iscsiadm -m node --targetname "iqn.2013-03.com.wdc:aptestore1:infrabase1" --portal "10.1.65.50:3260" --login

Check whether the drive has been added to the server (sudo fdisk -l). You should be able to see a new hard drive. Assuming it shows up as 'sdd', we shall use it as the storage device and create a ZFS storage pool on that iSCSI hard disk (sdd). Do not partition or format it; LXD will do all the necessary steps to configure the disk as a storage pool.
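A quick way to confirm the new disk before handing it to LXD (the device name sdd and the 225G size are assumptions based on this setup):

sudo fdisk -l /dev/sdd          # should report a disk of roughly 225G
lsblk -o NAME,SIZE,TYPE         # /dev/sdd should appear as a bare disk with no partitions beneath it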

Installing LXD

  • Install LXD as a snap package (on Ubuntu 20.04 LXD is distributed as a snap)
sudo snap install lxd
  • Initialise the LXD configuration
sudo lxd init

It will ask you a bunch of questions. For the storage pool, accept the default name, choose zfs, answer yes to adding a block device and enter /dev/sdd. The iSCSI hard drive will then be configured automatically as the default storage pool for all containers. Other storage backends such as btrfs, ceph, dir, etc. are also available; use whatever suits you and refer to the storage backends documentation of LXD. For networking, create a new LXD bridge (it will be called lxdbr0) and disable access to the containers from the local LAN / outside the LXD host.
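If you prefer to script these answers rather than walking through the prompts, lxd init also accepts a preseed on standard input. The sketch below mirrors the choices described above (the pool name, bridge name and /dev/sdd device come from this guide; adjust them if your layout differs):

cat <<'EOF' | sudo lxd init --preseed
storage_pools:
- name: default
  driver: zfs
  config:
    source: /dev/sdd          # the iSCSI disk added earlier
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto        # LXD picks a private subnet and NATs it
    ipv6.address: none        # no IPv6 on the container bridge
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
EOF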

Creating Containers

  • Create a container called apache. This container will be used as a web server, and Apache will be installed in it. You can name the container whatever you like.
lxc launch ubuntu:focal apache
  • You can access the container shell by
lxc exec apache -- bash
  • You can also install Apache (or any other software) in the container without logging into it, as below
lxc exec apache -- apt install apache2
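A short, hedged way to check that the web server actually came up (the container name apache is from above; the IP address shown by lxc list will depend on your lxdbr0 subnet):

lxc exec apache -- systemctl status apache2 --no-pager   # should show active (running)
lxc list apache -c ns4                                   # note the container's IPv4 address
curl http://<container-ip>/                              # run on the host; expect the default Apache page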

Enjoy the LXD containers.

Useful commands

Container status modification

  • lxc stop apache
  • lxc start apache
  • lxc delete apache

List various resources

  • lxc list
  • lxc storage list
  • lxc network list
  • lxc profile list
  • sudo zfs list (to inspect the underlying ZFS datasets)
  • lxc remote list

Adding a network bridge

Assuming we have the default bridge lxdbr0, I will add one more bridge called lxdbr1 with a different IP subnet:

lxc network create lxdbr1

To disable IPv6, edit the network configuration:

lxc network edit lxdbr1

Remove the NAT rule for IPv6 and replace the IPv6 address with 'none'. Check the networks known to LXD with the command below:

lxc network list
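Alternatively, the same result can be achieved in a single command by passing the configuration keys at creation time (the 10.20.30.1/24 subnet below is only an example; pick one that does not clash with your LAN):

lxc network create lxdbr1 ipv4.address=10.20.30.1/24 ipv4.nat=true ipv6.address=none
lxc network show lxdbr1    # confirm the bridge settings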

Conclusion

  • LXD is a great way to understand containers and is a good starting point for users interested in container-based virtualization
  • There are several advantages to using LXD, but by design and by default the containers are not accessible from outside the host. To enable external access there are two options (see the sketch after this list).
    • Create a network bridge
    • Forward ports to the containers using iptables
  • Explore LXD and have fun!
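As a hedged illustration of the second option, an iptables DNAT rule on the host can forward a port to a container; LXD's proxy device achieves the same thing without touching iptables directly. The interface name eth0 and the container address are assumptions; substitute the values from your host and from lxc list:

# Option 1: iptables port forward from the host to the apache container
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination <container-ip>:80

# Option 2: LXD proxy device listening on the host and connecting to the container
lxc config device add apache web80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80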
