Translated by qhwdw #11751

Merged 1 commit on Dec 21, 2018

[#]: collector: (lujun9972)
[#]: translator: (qhwdw)
[#]: reviewer: ()
[#]: publisher: ()
[#]: url: ()
[#]: subject: (How to Build a Netboot Server, Part 2)
[#]: via: (https://fedoramagazine.org/how-to-build-a-netboot-server-part-2/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)

How to Build a Netboot Server, Part 2
======

![](https://fedoramagazine.org/wp-content/uploads/2018/12/netboot2-816x345.jpg)

The article [How to Build a Netboot Server, Part 1][1] showed you how to create a netboot image with a “liveuser” account whose home directory lives in volatile memory. Most users probably want to preserve files and settings across reboots, though. So this second part of the netboot series shows how to reconfigure the netboot image from part one so that [Active Directory][2] user accounts can log in and their home directories can be automatically mounted from an NFS server.

Part 3 of this series will show how to make an interactive and centrally-configurable iPXE boot menu for the netboot clients.

### Set Up NFS4 Home Directories with KRB5 Authentication

Follow the directions from the previous post “[Share NFS Home Directories Securely with Kerberos][3],” then return here.

### Remove the Liveuser Account

Remove the “liveuser” account created in part one of this series:

```
$ sudo -i
# for i in passwd shadow group gshadow; do sed -i '/^liveuser:/d' /fc28/etc/$i; done
```

### Configure NTP, KRB5, and SSSD

Next, we need to duplicate the NTP, KRB5, and SSSD configuration that we set up on the server into the client image so that the same accounts will be available:

```
# MY_HOSTNAME=$(</etc/hostname)
# cp /etc/sssd/sssd.conf /fc28/etc/sssd
```
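The diff view elides the middle of the block above. As a rough sketch (not necessarily the article’s exact commands), the remaining copies would look something like this:

```
# cp /etc/chrony.conf /fc28/etc        # NTP configuration
# cp /etc/krb5.conf /fc28/etc          # Kerberos realm configuration
```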

Reconfigure sssd to provide authentication services, in addition to the identification service already configured:

```
# sed -i '/services =/s/$/, pam/' /fc28/etc/sssd/sssd.conf
```

Also, ensure none of the clients attempt to update the computer account password:

```
# sed -i '/id_provider/a \ \ ad_maximum_machine_account_password_age = 0' /fc28/etc/sssd/sssd.conf
```

Also, copy the nfsnobody definitions:

```
# for i in passwd shadow group gshadow; do grep "^nfsnobody:" /etc/$i >> /fc28/etc/$i; done
```

### Join Active Directory

Next, you’ll perform a chroot to join the client image to Active Directory. Begin by deleting any pre-existing computer account with the same name your netboot image will use:

```
# MY_USERNAME=jsmith
# MY_CLIENT_HOSTNAME=$(</fc28/etc/hostname)
# adcli delete-computer "${MY_CLIENT_HOSTNAME%%.*}" -U "$MY_USERNAME"
```

Also delete the krb5.keytab file from the netboot image if it exists:

```
# rm -f /fc28/etc/krb5.keytab
```

Perform a chroot into the netboot image:

```
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc28/$i; done
# chroot /fc28 /usr/bin/bash --login
```

Perform the join:

```
# MY_USERNAME=jsmith
# adcli join $MY_DOMAIN --login-user="$MY_USERNAME" --computer-name="${MY_HOSTNAME%%.*}" --host-fqdn="$MY_HOSTNAME" --user-principal="host/$MY_HOSTNAME@$MY_REALM" --domain-ou="$MY_OU"
```
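The elided lines above define the variables the join command references. A sketch with hypothetical values (substitute your own domain, realm, and OU):

```
# MY_HOSTNAME=$(</etc/hostname)
# MY_DOMAIN=example.edu                          # hypothetical domain
# MY_REALM=$(echo $MY_DOMAIN | tr 'a-z' 'A-Z')   # Kerberos realms are conventionally upper-case
# MY_OU="ou=netboot,dc=example,dc=edu"           # hypothetical OU for the computer account
```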

Now log out of the chroot and clear the root user’s command history:

```
# logout
# for i in run sys proc dev/shm dev/pts dev; do umount /fc28/$i; done
# > /fc28/root/.bash_history
```

### Install and Configure PAM Mount

We want our clients to automatically mount the user’s home directory when they log in. To accomplish this, we’ll use the “pam_mount” module. Install and configure pam_mount:

```
# dnf install -y --installroot=/fc28 pam_mount
END
```
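Most of the pam_mount configuration is elided in the diff above. As a sketch, assuming the Kerberos-secured NFS4 export from the earlier post, the heredoc writes something along these lines to /fc28/etc/security/pam_mount.conf.xml (the server name and options are placeholders, not the article’s exact file):

```
# cat << END > /fc28/etc/security/pam_mount.conf.xml
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<pam_mount>
  <debug enable="0" />
  <!-- mount each user's NFS4 home directory at login -->
  <volume fstype="nfs4" server="server-01.example.edu" path="/home/%(USER)" mountpoint="/home/%(USER)" options="sec=krb5" />
  <mkmountpoint enable="1" remove="true" />
</pam_mount>
END
```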

Reconfigure PAM to use pam_mount:

```
# dnf install -y patch
# chroot /fc28 authselect select custom/sssd with-pammount --force
```
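The patch applied in the elided portion of this block adds pam_mount to the PAM stacks of the custom authselect profile. The essential additions are typically just two lines, shown here as a sketch rather than the article’s exact patch:

```
auth     optional  pam_mount.so
session  optional  pam_mount.so
```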

Also ensure the NFS server’s hostname is always resolvable from the client:

```
# MY_IP=$(host -t A $MY_HOSTNAME | awk '{print $4}')
# echo "$MY_IP $MY_HOSTNAME ${MY_HOSTNAME%%.*}" >> /fc28/etc/hosts
```

Optionally, allow all users to run sudo:

```
# echo '%users ALL=(ALL) NOPASSWD: ALL' > /fc28/etc/sudoers.d/users
```

### Convert the NFS Root to an iSCSI Backing-Store

Current versions of nfs-utils may have difficulty establishing a second connection from the client back to the NFS server for home directories when an nfsroot connection is already established. The client hangs when attempting to access the home directory. So, we will work around the problem by using a different protocol (iSCSI) for sharing our netboot image.

First, chroot into the image to reconfigure its initramfs for booting from an iSCSI root:

```
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc28/$i; done
# > /fc28/root/.bash_history
```
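The middle of this block is elided in the diff. Inside the chroot, the netboot image needs the iSCSI initiator tools and an initramfs rebuilt with iSCSI support; a sketch of the general shape (the package name and dracut settings are assumptions, not the article’s exact commands):

```
# chroot /fc28 /usr/bin/bash --login
# dnf install -y iscsi-initiator-utils
# cat << END > /etc/dracut.conf.d/iscsi.conf
add_dracutmodules+=" iscsi "
omit_drivers+=" qedi "
END
# dracut --regenerate-all --force
# logout
# for i in run sys proc dev/shm dev/pts dev; do umount /fc28/$i; done
```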

The qedi driver broke iSCSI during testing, so it has been disabled here.

Next, create an fc28.img [sparse file][4]. This file serves as the iSCSI target’s backing store:

```
# FC28_SIZE=$(du -ms /fc28 | cut -f 1)
# dd if=/dev/zero of=/fc28.img bs=1MiB count=0 seek=$(($FC28_SIZE*2))
```

(If you have one available, a separate partition or disk drive can be used instead of creating a file.)

Next, format the image with a filesystem, mount it, and copy the netboot image into it:

```
# mkfs -t xfs -L NETROOT /fc28.img
# umount $TEMP_MNT
```
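The elided middle of this block creates a mount point, mounts the image, and copies the tree into it; roughly:

```
# TEMP_MNT=$(mktemp -d)
# mount -o loop /fc28.img $TEMP_MNT
# cp -a /fc28/. $TEMP_MNT/
```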

During testing with SquashFS, the client would occasionally stutter. It seems that SquashFS does not perform well when doing random I/O from a multiprocessor client. (See also [The curious case of stalled squashfs reads][5].) If you want to improve throughput performance with filesystem compression, [ZFS][6] is probably a better option.

If you need extremely high throughput from the iSCSI server (say, for hundreds of clients), it might be possible to [load balance][7] a [Ceph][8] cluster. For more information, see [Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived][9].

### Install and Configure iSCSI

Install the scsi-target-utils package, which provides the iSCSI daemon for serving our image out to our clients:

```
# dnf install -y scsi-target-utils
```

Configure the iSCSI daemon to serve the fc28.img file:

```
# MY_REVERSE_HOSTNAME=$(echo $MY_HOSTNAME | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_HOSTNAME})
END
```
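The elided heredoc defines the iSCSI target itself. A sketch of what it writes, using the variable defined above (the file name and exact directives are assumptions):

```
# cat << END > /etc/tgt/conf.d/fc28.conf
<target iqn.$MY_REVERSE_HOSTNAME:fc28>
    backing-store /fc28.img
    readonly 1
</target>
END
```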

The leading iqn. is expected by /usr/lib/dracut/modules.d/40network/net-lib.sh.

Add an exception to the firewall, then enable and start the service:

```
# firewall-cmd --add-service=iscsi-target
# systemctl start tgtd.service
```
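The elided lines presumably make the firewall change permanent and enable the unit so it survives a reboot, for example:

```
# firewall-cmd --runtime-to-permanent
# systemctl enable tgtd.service
```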

You should now be able to see the image being shared with the tgtadm command:

```
# tgtadm --mode target --op show
```

The above command should output something similar to the following:

```
Target 1: iqn.edu.example.server-01:fc28
ALL
```

We can now remove the NFS share that we created in part one of this series:

```
# rm -f /etc/exports.d/fc28.exports
# sed -i '/^\/fc28 /d' /etc/fstab
```

You can also delete the /fc28 filesystem, but you may want to keep it for performing future updates.

### Update the ESP to use the iSCSI Kernel

Update the ESP to contain the iSCSI-enabled initramfs:

```
$ rm -vf $HOME/esp/linux/*.fc28.*
$ MY_KRNL=$(ls -c /fc28/lib/modules | head -n 1)
$ cp $(find /fc28/lib/modules -maxdepth 2 -name 'vmlinuz' | grep -m 1 $MY_KRNL) $HOME/esp/linux/vmlinuz-$MY_KRNL
$ cp $(find /fc28/boot -name 'init*' | grep -m 1 $MY_KRNL) $HOME/esp/linux/initramfs-$MY_KRNL.img
```

Update the boot.cfg file to pass the new root and netroot parameters:

```
$ MY_NAME=server-01.example.edu
$ MY_EMAN=$(echo $MY_NAME | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_NAME})
$ MY_ADDR=$(host -t A $MY_NAME | awk '{print $4}')
$ sed -i "s! root=[^ ]*! root=/dev/disk/by-path/ip-$MY_ADDR:3260-iscsi-iqn.$MY_EMAN:fc28-lun-1 netroot=iscsi:$MY_ADDR::::iqn.$MY_EMAN:fc28!" $HOME/esp/linux/boot.cfg
```

Now you just need to copy the updated files from your $HOME/esp/linux directory out to the ESPs of all your client systems. You should see results similar to what is shown in the screenshot below:

![][10]
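One way to push the files out, assuming each client’s ESP is mounted at /boot/efi and using hypothetical client names:

```
$ # hypothetical client names; adjust the ESP mount point to match your systems
$ for h in client-01 client-02; do rsync -av $HOME/esp/linux/ root@$h.example.edu:/boot/efi/linux/; done
```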

### Upgrading the Image

First, make a copy of the current image:

```
# cp -a /fc28 /fc29
```

Chroot into the new copy of the image:

```
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc29/$i; done
# chroot /fc29 /usr/bin/bash --login
```

Allow updating the kernel:

```
# sed -i 's/^exclude=kernel-\*$/#exclude=kernel-*/' /etc/dnf/dnf.conf
```

Perform the upgrade:

```
# dnf distro-sync -y --releasever=29
```

Prevent the kernel from being updated:

```
# sed -i 's/^#exclude=kernel-\*$/exclude=kernel-*/' /etc/dnf/dnf.conf
```

The above command is optional, but saves you from having to copy a new kernel out to the clients if you add or update a few packages in the image at some future time.

Clean up dnf’s package cache:

```
# dnf clean all
```

Exit the chroot and clear root’s command history:

```
# logout
# for i in run sys proc dev/shm dev/pts dev; do umount /fc29/$i; done
# > /fc29/root/.bash_history
```

Create the iSCSI image:

```
# FC29_SIZE=$(du -ms /fc29 | cut -f 1)
# umount $TEMP_MNT
```
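The elided commands mirror the fc28 image creation shown earlier; as a sketch:

```
# dd if=/dev/zero of=/fc29.img bs=1MiB count=0 seek=$(($FC29_SIZE*2))
# mkfs -t xfs -L NETROOT /fc29.img     # the filesystem label is an assumption
# TEMP_MNT=$(mktemp -d)
# mount -o loop /fc29.img $TEMP_MNT
# cp -a /fc29/. $TEMP_MNT/
```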

Define a new iSCSI target that points to our new image and export it:

```
# MY_HOSTNAME=$(</etc/hostname)
# tgt-admin --update ALL
```

Add the new kernel and initramfs to the ESP:

```
$ MY_KRNL=$(ls -c /fc29/lib/modules | head -n 1)
$ cp $(find /fc29/lib/modules -maxdepth 2 -name 'vmlinuz' | grep -m 1 $MY_KRNL) $HOME/esp/linux/vmlinuz-$MY_KRNL
$ cp $(find /fc29/boot -name 'init*' | grep -m 1 $MY_KRNL) $HOME/esp/linux/initramfs-$MY_KRNL.img
```

Update the boot.cfg in the ESP:

```
$ MY_DNS1=192.0.2.91
END
```

Finally, copy the files from your $HOME/esp/linux directory out to the ESPs of all your client systems and enjoy!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/how-to-build-a-netboot-server-part-2/

Author: [Gregory Bartholomew][a]
Selected by: [lujun9972][b]
Translated by: [qhwdw](https://github.com/qhwdw)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)