This guide demonstrates how to set up an iSCSI (Internet Small Computer Systems Interface) server (target) and client (initiator).

Goal: Enable the server to provide shared storage over a network, which the client can access as if it were a locally attached disk.
- Centralized Storage Management: Easy management and allocation of storage resources.
- Cost Efficiency: Uses existing TCP/IP networks, reducing the need for specialized hardware.
- Flexibility: Enables scalable storage solutions.
- High Availability: Can be configured for redundancy and fault tolerance.
Both NFS (Network File System) and iSCSI (Internet Small Computer System Interface) are technologies used to share disk storage over a network, but they differ significantly in how they work and the use cases for which they are best suited.
| Feature | NFS (Network File System) | iSCSI (Internet SCSI) |
|---|---|---|
| Protocol Type | File-based protocol. | Block-based protocol. |
| Data Access | Provides access to files as a network file system. | Provides access to raw disk blocks, which can be used for any filesystem. |
| Usage | Primarily used for file sharing across networks (e.g., /home directories). | Used for sharing block-level storage (like hard drives) across a network. |
| Granularity | Operates at the file level, allowing clients to access specific files and directories. | Operates at the block level, allowing clients to access entire disks or partitions. |
| Performance | Generally slower than iSCSI due to file-level access and additional protocol overhead. | Generally faster than NFS because it allows direct access to blocks and bypasses file system overhead. |
| Flexibility | File systems on the server handle storage allocation, and the client accesses files via the NFS protocol. | The client has full control over the filesystem; it can format and partition the disk as needed. |
| Complexity | Easier to configure and manage, ideal for sharing files across machines. | More complex to set up, but offers greater control over storage and filesystem. |
| Authentication | Can use NFSv4 with Kerberos for secure access. | Uses iSCSI initiator and target authentication via IQNs and CHAP (Challenge Handshake Authentication Protocol). |
| Use Cases | Best for sharing files in environments where users and applications need to access files over a network. | Best for environments where applications need access to raw block storage (e.g., virtual machines, databases). |
- File sharing: NFS is ideal for sharing files across a network, allowing clients to mount remote directories as local filesystems.
- Cross-platform compatibility: Works well for Linux/Unix-based systems, with NFSv4 offering improved security and support for Linux, macOS, and more.
- Easy to configure: Simple to configure and maintain, making it perfect for sharing directories or file systems.
- Home directories: Sharing user directories across multiple machines.
- Shared storage for applications: File-based storage for applications that don't require block-level access (e.g., web servers).
- Network-mounted storage: Storing personal or work files over the network.
- Block-level storage: iSCSI provides clients with raw storage access, making it ideal for applications that need to format and use a filesystem (e.g., virtual machines, databases).
- High-performance storage: Offers better performance by allowing direct access to block storage without file-level protocol overhead.
- SAN (Storage Area Network): Commonly used to create a SAN for high-performance, block-level storage over a network.
- Virtualization: Shared storage in virtualized environments (e.g., VMware, Hyper-V) for VMs needing raw block-level storage.
- Database storage: Low-latency, high-performance storage for databases.
- Dedicated storage: Ideal for applications requiring dedicated storage, such as creating virtual disks in hypervisors or storage for high-performance applications.
- NFS: Provides file-level access and is better suited for general file sharing, offering easier setup and configuration.
- iSCSI: Provides block-level access, ideal for applications needing raw disk space (e.g., virtual machines, databases), offering better performance and more control but with a more complex setup.
- Choose NFS if you need to share files or directories over a network.
- Choose iSCSI if you need block-level storage for applications like databases or virtualization.
Prerequisites
- Server: a Linux-based system (Debian 12 or CentOS 7 VMs recommended) with at least one additional disk for storage.
- Client: a Linux-based system (Debian 12 or CentOS 7 VMs recommended) with the open-iscsi or iscsi-initiator-utils package installed.
- Both server and client should be connected to the same network.
- Ensure firewall rules allow iSCSI communication (TCP port 3260).
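Before going further, it can save time to confirm that the client can actually reach the target's TCP port 3260. A minimal sketch of such a check (it relies on bash's /dev/tcp feature, and any IP you pass it is your own environment's):

```shell
#!/usr/bin/env bash
# Sketch: check whether a TCP port is reachable (requires bash's /dev/tcp).
port_open() {
    # $1 = host, $2 = port; succeeds if a TCP connection opens within 2 seconds
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example usage (replace with your server's IP):
#   port_open <server-ip> 3260 && echo "iSCSI port reachable"
```

If the check fails even though the target service is running, the firewall rules on one of the two machines are the usual culprit.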
- Command (Run on server):
apt-get update
apt-get install lvm2 targetcli-fb
- apt-get update: Updates the list of available packages and their versions.
- apt-get install lvm2 targetcli-fb: Installs lvm2 (Logical Volume Manager) for managing disks and targetcli-fb (Target CLI) for configuring iSCSI targets.
- Command (Run on server):
lsblk
- lsblk: Lists all available block devices (disks) on the system.
- Command (Run on server):
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
lvcreate -n lv_iscsi-disk-01 -L 1G vg_iscsi
lvs
- pvcreate /dev/sdb: Creates a physical volume on the /dev/sdb disk.
- vgcreate vg_iscsi /dev/sdb: Creates a volume group named vg_iscsi using /dev/sdb.
- lvcreate -n lv_iscsi-disk-01 -L 1G vg_iscsi: Creates a logical volume of 1 GB size named lv_iscsi-disk-01 inside the vg_iscsi volume group.
- lvs: Displays the details of logical volumes.
- Command (Run on server):
targetcli
- targetcli: Starts the Target CLI tool to manage iSCSI targets.

Within the Target CLI shell:
cd backstores/block
create block1 /dev/mapper/vg_iscsi-lv_iscsi--disk--01
cd ../../iscsi
create iqn.2024-12.cdac.acts.hpcsa.sbm:disk1
cd iqn.2024-12.cdac.acts.hpcsa.sbm:disk1/tpg1/acls
create iqn.1993-08.org.debian:01:84104998b5d
cd ../luns
create /backstores/block/block1
exit
- cd backstores/block: Navigates to the block storage section in targetcli.
- create block1 /dev/mapper/vg_iscsi-lv_iscsi--disk--01: Creates a block storage backend called block1 using the logical volume created earlier.
- cd ../../iscsi: Moves to the iSCSI target configuration section.
- create iqn.2024-12.cdac.acts.hpcsa.sbm:disk1: Creates an iSCSI target with the name iqn.2024-12.cdac.acts.hpcsa.sbm:disk1.
- cd iqn.2024-12.cdac.acts.hpcsa.sbm:disk1/tpg1/acls: Navigates to the access control list (ACL) section of the target's first portal group (tpg1).
- create iqn.1993-08.org.debian:01:84104998b5d: Creates an ACL entry for the initiator, allowing access from the specified IQN. An IQN follows the format iqn.<yyyy-mm>.<reversed-domain>:<unique_identifier>. You can customize the identifier to reflect the server or organization's name; it can be anything, as long as it is unique.
- cd ../luns: Moves to the LUN (Logical Unit Number) section of the portal group.
- create /backstores/block/block1: Associates the block storage backend (block1) with the target as a LUN.
- exit: Exits the targetcli shell.
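Since both the target name and the ACL entry are IQNs, it can be handy to sanity-check their format before typing them into targetcli. A small sketch (the regex is a simplified approximation of the full IQN grammar from RFC 3720, not a complete validator):

```shell
#!/bin/sh
# Sketch: rough validity check for an iSCSI Qualified Name (IQN).
# Simplified pattern: iqn.<yyyy-mm>.<reversed-domain>[:<identifier>]
valid_iqn() {
    printf '%s\n' "$1" | grep -Eq '^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$'
}

valid_iqn "iqn.2024-12.cdac.acts.hpcsa.sbm:disk1" && echo "target IQN ok"
valid_iqn "iqn.1993-08.org.debian:01:84104998b5d" && echo "initiator IQN ok"
```

Both IQNs used in this guide pass the check; a typo such as a missing date field would fail it.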
- Command (Run on server):
systemctl restart rtslib-fb-targetctl
systemctl status rtslib-fb-targetctl
- systemctl restart rtslib-fb-targetctl: Restarts the iSCSI target service to apply changes.
- systemctl status rtslib-fb-targetctl: Checks the status of the iSCSI target service.
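The ACL above restricts access by initiator IQN alone. For stronger protection, CHAP (Challenge Handshake Authentication Protocol) credentials can be layered on top. A hedged sketch of the relevant settings (the username and password are placeholders, and the paths assume the target and ACL names created earlier):

```
# On the server, inside the targetcli shell:
#   cd /iscsi/iqn.2024-12.cdac.acts.hpcsa.sbm:disk1/tpg1/acls/iqn.1993-08.org.debian:01:84104998b5d
#   set auth userid=iscsiuser
#   set auth password=iscsisecret

# On the client, in /etc/iscsi/iscsid.conf (same placeholder credentials):
# node.session.auth.authmethod = CHAP
# node.session.auth.username = iscsiuser
# node.session.auth.password = iscsisecret
```

With CHAP enabled, the login step later in this guide will fail unless the client presents matching credentials.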
- Command (Run on client):
apt-get install open-iscsi
- apt-get install open-iscsi: Installs the iSCSI initiator utilities so the client machine can connect to iSCSI targets.
- Command (Run on client):
vi /etc/iscsi/initiatorname.iscsi
- vi /etc/iscsi/initiatorname.iscsi: Edits the initiator name file, which holds this client's IQN (iSCSI Qualified Name). It must match the IQN added to the target's ACL on the server.
- For example, the file might contain:
InitiatorName=iqn.2022-12.acts.student:306631cea220
This name must be unique to each client to avoid conflicts. Once you save the file with the correct initiator name, restart the iSCSI service (systemctl restart iscsid) so the change takes effect.
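Because every initiator on the network needs a distinct IQN, one way to produce a unique suffix is to derive it from random bytes. A minimal sketch (the date and naming-authority parts mirror the example above and are illustrative assumptions):

```shell
#!/bin/sh
# Sketch: generate an InitiatorName line with a random 12-hex-digit suffix.
suffix=$(head -c 6 /dev/urandom | od -An -tx1 | tr -d ' \n')
initiator_iqn="iqn.2022-12.acts.student:${suffix}"
echo "InitiatorName=${initiator_iqn}"
```

The printed line can then be placed in /etc/iscsi/initiatorname.iscsi, and the same IQN added to the target's ACL on the server.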
- Command (Run on client):
iscsiadm -m discovery -t sendtargets -p <server-ip>:3260 --login
- iscsiadm -m discovery -t sendtargets -p <server-ip>:3260 --login: Discovers the targets offered by the server at <server-ip> on port 3260 and logs into them.
- Command (Run on client):
fdisk -l
- fdisk -l: Lists all available disks and partitions, including the new iSCSI disk.
- Command (Run on client):
fdisk /dev/sdx
# Press `n` for a new partition, then `w` to write changes.
- fdisk /dev/sdx: Starts partitioning the new iSCSI disk (replace /dev/sdx with the device name reported by fdisk -l).
- Command (Run on client):
mkfs.xfs -f /dev/sdx
- mkfs.xfs -f /dev/sdx: Formats the disk with the XFS filesystem. If you created a partition in the previous step, format the partition (e.g., /dev/sdx1) rather than the whole disk.
- Command (Run on client):
mkdir /mnt/disk-1
mount /dev/sdx /mnt/disk-1
df -Th
- mkdir /mnt/disk-1: Creates a directory to mount the iSCSI disk.
- mount /dev/sdx /mnt/disk-1: Mounts the iSCSI disk to the specified directory.
- df -Th: Displays the filesystem information to confirm the disk is mounted correctly.
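To make the mount survive reboots, an entry can be added to /etc/fstab. A sketch (it assumes the same device path and mount point as above; the _netdev option tells the system to wait for the network before mounting, since the disk is not local; using a filesystem UUID from blkid instead of /dev/sdx is more robust, because iSCSI device names can change between boots):

```
# Example /etc/fstab entry for the iSCSI-backed filesystem:
/dev/sdx  /mnt/disk-1  xfs  _netdev  0  0
```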
- Command (Run on client):
iscsiadm -m session
iscsiadm -m session -P 1
iscsiadm -m node --logoutall=all
- iscsiadm -m session: Shows the current iSCSI sessions.
- iscsiadm -m session -P 1: Shows the current sessions with additional detail (print level 1).
- iscsiadm -m node --logoutall=all: Logs out from all iSCSI sessions.
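If the target should be reconnected automatically at boot rather than logged into by hand, open-iscsi supports this through its configuration. A sketch of the relevant /etc/iscsi/iscsid.conf line (set it before running discovery, or update already-discovered nodes with iscsiadm's --op update):

```
# /etc/iscsi/iscsid.conf — log in to discovered targets automatically at startup
node.startup = automatic
```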
- Make sure the firewall is configured to allow iSCSI traffic. On both the server and client machines, ensure that port 3260 is open, or disable the firewall if required:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
- Command (Run on client):
cd /mnt/disk-1
touch testfile.txt
ls -lh
- cd /mnt/disk-1: Navigates to the mounted iSCSI disk directory.
- touch testfile.txt: Creates a new file named testfile.txt in the iSCSI-mounted directory.
- ls -lh: Lists the files in the directory to confirm that testfile.txt has been created successfully.
To verify that the file persists even after unmounting and remounting the disk, follow these steps:
- Command (Run on client):
umount /mnt/disk-1
mount /dev/sdx /mnt/disk-1
ls -lh
- umount /mnt/disk-1: Unmounts the iSCSI disk.
- mount /dev/sdx /mnt/disk-1: Remounts the iSCSI disk.
- ls -lh: Lists the files in the directory to confirm that testfile.txt is still present after remounting.
With the above implementation, we have successfully configured an iSCSI server and client. We also verified disk access by creating a file on the mounted iSCSI disk and confirmed that the data persists across unmounting and remounting. The client can now access storage from the server as if it were a locally connected disk.
Key concepts
- iSCSI Target: Provides remote storage access to clients. It is configured on the server side (with tools like targetcli).
- iSCSI Initiator: The client that connects to the iSCSI target. It is configured using the open-iscsi package.
- LVM (Logical Volume Management): Used to manage storage on the server side, enabling flexibility in creating and resizing storage volumes.
- Logical Unit Number (LUN): A unique identifier used to map storage devices to iSCSI targets.

Benefits
- Efficient network-based storage sharing.
- Flexibility in expanding and managing storage resources.
- Simplifies storage provisioning in virtualized environments.
Crafted by: Suraj Kumar Choudhary | Feel free to DM for any help: csuraj982@gmail.com