Using iSCSI block storage and ZFS on FreeBSD with Oracle Cloud Infrastructure (OCI)

Use cloud block storage on Oracle Cloud Infrastructure (OCI) with FreeBSD, just as you would on Linux and Windows compute instances, and optionally leverage ZFS for simple management, cloning, encryption, redundancy, and more.

Disclaimer: at the time of writing, I work for Oracle. I wrote this article for fun in my free time. FreeBSD isn't supported on OCI, so I found a way to get it running anyway and take advantage of the platform's capabilities, since I love the high-performance, elastic nature of OCI.

Before going through this article you need to get FreeBSD running on OCI, and it's not hard. The easiest way is to create a VMDK (VMware virtual disk image) or QCOW2 VM image on your computer, upload it to OCI Object Storage, and then use that image to create a Custom Image in OCI Compute that can be launched whenever needed. The OCI documentation is great here: the relevant section on importing custom images walks you through the process. It is written for importing Linux images, but it works just fine for FreeBSD too, running in "emulation mode."
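For a concrete starting point, here is a minimal sketch of the image preparation, assuming you have qemu-img and the OCI CLI installed on your computer (the image file and bucket names below are placeholders for your own):

$ qemu-img convert -f raw -O qcow2 freebsd-disk.img freebsd.qcow2
$ oci os object put --bucket-name my-images --file freebsd.qcow2

Once uploaded, use the console to import the object as a Custom Image under Compute.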

Attaching block storage

First, provision some block storage in the OCI web console and attach it to your FreeBSD compute instance. We will then need to run OS commands inside the instance so the operating system can see the storage as an iSCSI device.

We'll first test that the connection works without CHAP authentication. If that succeeds, we'll add CHAP credentials to improve security, and finally make everything permanent by creating a configuration file so the block storage attaches at boot.

The OCI web console provides iSCSI command documentation for Linux, and it also separates out the pertinent details we need to connect block storage in FreeBSD: the block storage IP address, port, and volume IQN. We'll need all of that information except the Linux commands. Pass these values to iscsictl, using the -A (attach) option, in the following format:

# service iscsid onestart
# iscsictl -A -p 10.10.10.10:port -t iqn.myIQNnumberHere

Note that the IQN that Oracle Cloud Infrastructure gives you does not have a :target0 suffix at the end, so you do not need to add one; simply use the IQN as provided, along with the correct IP address and port shown in the OCI console. If the above command is successful, nothing will be output to the shell. To check whether the attach worked, run iscsictl with no arguments; you should see something like this:

# iscsictl
Target name         Target portal    State
iqn.myIQNnumberHere 10.10.10.10:3260 Connected: da0

This indicates a successful connection. If instead of output showing the iSCSI device as connected you see errors like "connection lost," "waiting," "not found," or "authentication failed," the FreeBSD Handbook page entitled iSCSI Initiator and Target Configuration has some troubleshooting ideas for you.
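As an additional sanity check, you can confirm that the kernel actually created a device node for the volume (adjust da0 to match the iscsictl output):

# camcontrol devlist
# geom disk list da0

The first command lists all CAM-attached devices; the second prints the size and geometry of da0 if it exists.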

Now we need to mount the block storage into the filesystem. In our case we'll put it under /usr/jails; feel free to pick another location if you have something else in mind.

Since we want the performance, data integrity, and pooled storage capabilities of the ZFS filesystem, we need to start up the service. Below I shamelessly reproduce a portion of the excellent quick start guide to ZFS from the Handbook, because it's perfectly written.

There is a startup mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, add this line to /etc/rc.conf:

zfs_enable="YES"

Then start the service:

# service zfs start

To create a simple, non-redundant ZFS pool using a single disk device:

# zpool create block /dev/da0

To view the new pool, review the output of df:

# df
Filesystem       1K-blocks    Used     Avail Capacity  Mounted on
/dev/gpt/rootfs   45696284 1668492  40372092     4%    /
devfs                    1       1         0   100%    /dev
block            516030224      92 516030132     0%    /block

This output shows that the block pool has been created and mounted at /block. It is now accessible as a file system. Now let's create a filesystem within the pool:

# zfs create block/jails
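Since we said earlier that we wanted the storage at /usr/jails, one optional way to line things up is to point the new dataset's mountpoint at that path (a small extra step, not part of the Handbook excerpt):

# zfs set mountpoint=/usr/jails block/jails

ZFS remounts the dataset at the new location automatically.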

With block storage now attached and working, let's make the configuration permanent and also more secure. We'll start by unmounting the pool and then removing the iSCSI session, so we can reattach the storage using CHAP authentication credentials at boot time.

# zfs umount block
# iscsictl -R -p 10.10.10.10:port -t iqn.myIQNnumberHere
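For a cleaner teardown, you can also export the pool, in place of or after the zfs umount step above, so ZFS releases the underlying device before the iSCSI session goes away:

# zpool export block

An exported pool can be brought back later with zpool import block.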

Now go into the OCI web console and detach the storage, then attach it again, this time with CHAP authentication enabled. Make sure you select CHAP when adding the block storage again.

To connect using a configuration file, create /etc/iscsi.conf and give it chmod 600 permissions so that only root can read it. Then edit the file with contents like this, inserting the information from the OCI console in place of each placeholder below:

b0 {
        TargetAddress   = 10.10.10.10
        TargetName      = iqn.myIQNnumberHere
        AuthMethod      = CHAP
        chapIName       = user
        chapSecret      = mySecretSecret
}

The b0 is a nickname within the configuration file. The initiator uses it to specify which configuration to use when there is more than one. The other lines specify the parameters to use during connection. TargetAddress and TargetName are mandatory, whereas CHAP authentication is optional. In this example, placeholder CHAP username and secret are shown.

To connect to the specific target, specify the nickname:

# iscsictl -An b0

Alternately, to connect to all targets defined in the configuration file, use:

# iscsictl -Aa

To make the initiator daemon start at boot and automatically connect to all targets in /etc/iscsi.conf, as well as ensure ZFS starts at boot, add the following to /etc/rc.conf:

iscsid_enable="YES"
iscsictl_enable="YES"
iscsictl_flags="-Aa"
zfs_enable="YES"

Also make sure to add an entry for ZFS in /boot/loader.conf:

zfs_load="YES"
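After a reboot, a quick way to confirm that everything came up in the right order is to check the session and the pool (your output will vary):

# iscsictl
# zpool status block

The first should show the target in the Connected state; the second should report the pool as ONLINE.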

Now our system is fully configured and we can build out the jails to isolate our apps and services. In our case we'll isolate the database, application, and web tiers into their own virtual containers. This helps heighten the overall security of the system: if one tier becomes compromised, the other tiers and the host remain secure. We will also monitor the jails from the all-knowing host using one or more intrusion detection systems in order to detect a breach.
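As a rough sketch of where this goes next, using ezjail (which I mention again below; the jail names, interface, and addresses here are hypothetical):

# pkg install ezjail
# sysrc ezjail_enable="YES"
# ezjail-admin install
# ezjail-admin create db 'vtnet0|10.0.0.10'
# ezjail-admin create app 'vtnet0|10.0.0.11'
# ezjail-admin create web 'vtnet0|10.0.0.12'

ezjail can also be pointed at the ZFS pool we created, via the ezjail_use_zfs and ezjail_jailzfs options in ezjail.conf, so each jail becomes its own dataset.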

Final Thoughts

I ran into some minor challenges with the boot order of services in FreeBSD: my block storage mount attempt was happening before ZFS was ready, causing it to fail, and then ezjail would fail to start because it couldn't see the storage.

To resolve this issue I added an entry to /etc/rc.local pointing to a simple script I wrote with just a few lines: it first runs zfs mount -a (which mounts all ZFS datasets), waits a few moments to ensure the storage is available, and then starts ezjail. Since this all happens at the end of the boot cycle when executed from /etc/rc.local, it worked just fine.
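A minimal sketch of that script (the path, sleep duration, and names are arbitrary):

#!/bin/sh
# Mount all ZFS datasets now that the iSCSI session is up
zfs mount -a
# Give the storage a few moments to settle
sleep 5
# Start the jails
service ezjail start

/etc/rc.local then just needs a single line invoking it, for example sh /root/mount-and-jail.sh.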

Comments or questions? Leave a comment below or send me an email.



Comments


Aw:

Did you happen to get FreeBSD working in OCI with paravirtualization? I get SCSI sense errors as the kernel tries to mount the filesystem.

kel (in reply):

I tried paravirtualization but got similar errors, so I stuck with a manual iSCSI configuration instead. I also experienced some issues with ZFS and iSCSI working together on OCI and ran out of time to troubleshoot, so I tried a couple of easier configurations instead: one with block storage but UFS, and the other with ZFS and no block storage. Those worked no problem.

procrastinator (in reply):

I was able to boot 12.2 in paravirtualization mode with the following loader tunables:

hint.hpet.0.clock="0"
kern.cam.da.0.minimum_cmd_size="10"

These were with a VM.Standard.E2 shape. With VM.Standard2, only the second line was needed.
