
ZFS On Debian Lenny 64 bits (with 2.6.32 backport kernel) #20

Closed
davromaniak opened this issue Jun 9, 2010 · 6 comments

Comments

@davromaniak

Hi again :).

I'm creating this issue to report the status of ZFS under Debian Lenny (stable) 64-bit with the 2.6.32 backport kernel.

Compilation, installation, and modprobe (don't forget the "depmod -a"): all working well.

Here's the configure line I used (important for the following part): ./configure --with-linux=/usr/src/linux-headers-2.6.32-bpo.4-common/ --with-linux-obj=/usr/src/linux-headers-2.6.32-bpo.4-amd64/ --with-spl=/usr/local/src/spl-0.4.9/2.6.32-bpo.4-amd64/ --prefix=/usr/local/

I know "/usr/local/" is the default value for "--prefix", but it's just a habit for me.
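The configure arguments above can be derived from the kernel version string instead of being typed by hand; a minimal sketch, assuming Debian's linux-headers naming (the `-common`/`-amd64` split) and the SPL source path from this report:

```shell
# Sketch only: derive the configure arguments from the kernel version.
# On a live system you would use kver="$(uname -r)".
kver="2.6.32-bpo.4-amd64"
common="${kver%-*}-common"    # 2.6.32-bpo.4-amd64 -> 2.6.32-bpo.4-common

# Echoed as a dry run; drop the leading echo to actually run configure.
echo ./configure \
  --with-linux="/usr/src/linux-headers-${common}/" \
  --with-linux-obj="/usr/src/linux-headers-${kver}/" \
  --with-spl="/usr/local/src/spl-0.4.9/${kver}/" \
  --prefix=/usr/local/
```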

First problem : the zconfig.sh execution

splzfs:~# /usr/local/libexec/zfs/zconfig.sh 
test 1 - persistent zpool.cache: /usr/local/libexec/zfs/zconfig.sh: line 63: /usr/libexec/zfs/zfs.sh: No such file or directory
FAIL (1)

For the moment, I have a workaround that looks more like a piece of duct tape than a real fix.

I created the file /usr/local/libexec/.script-config with the following content:

splzfs:~# cat /usr/local/libexec/.script-config
KERNELSRC=/usr/src/linux-headers-2.6.32-bpo.4-common/
KERNELBUILD=/usr/src/linux-headers-2.6.32-bpo.4-amd64/
KERNELSRCVER=2.6.32-bpo.4-amd64
KERNELMOD=/lib/modules/${KERNELSRCVER}/kernel

SPLSRC=/usr/local/src/spl-0.4.9/2.6.32-bpo.4-amd64/
SPLBUILD=/lib/modules/${KERNELSRCVER}/addon/spl/
SPLSRCVER=0.4.9

TOPDIR=/usr/local
LIBDIR=${TOPDIR}/lib
MODDIR=/lib/modules/${KERNELSRCVER}/addon/zfs/
SCRIPTDIR=${TOPDIR}/libexec/zfs/
ZPOOLDIR=${SCRIPTDIR}/zpool-config
ZPIOSDIR=${SCRIPTDIR}/zpios-test
ZPIOSPROFILEDIR=${SCRIPTDIR}/zpios-profile

ZDB=${TOPDIR}/sbin/zdb
ZFS=${TOPDIR}/sbin/zfs
ZINJECT=${TOPDIR}/sbin/zinject
ZPOOL=${TOPDIR}/sbin/zpool
ZPOOL_ID=${TOPDIR}/bin/zpool_id
ZTEST=${TOPDIR}/sbin/ztest
ZPIOS=${TOPDIR}/sbin/zpios

COMMON_SH=${SCRIPTDIR}/common.sh
ZFS_SH=${SCRIPTDIR}/zfs.sh
ZPOOL_CREATE_SH=${SCRIPTDIR}/zpool-create.sh
ZPIOS_SH=${SCRIPTDIR}/zpios.sh
ZPIOS_SURVEY_SH=${SCRIPTDIR}/zpios-survey.sh

INTREE=1
MODULES=(zlib_deflate spl splat zavl znvpair zunicode zcommon zfs)

The last line is necessary for using "zfs.sh -u".
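A guess at why that MODULES line matters: "zfs.sh -u" presumably walks the list in reverse so the stack is unloaded in dependency order (zfs first, zlib_deflate last). An illustrative dry run, with echo standing in for rmmod:

```shell
# Illustrative sketch (not the real zfs.sh): unload the stacked modules
# in reverse order of the MODULES list from .script-config above.
MODULES=(zlib_deflate spl splat zavl znvpair zunicode zcommon zfs)
for ((i=${#MODULES[@]}-1; i>=0; i--)); do
    echo rmmod "${MODULES[i]}"    # the real script would run rmmod itself
done
```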

Now that I've "duct-taped" this problem, I relaunched the zconfig.sh script.

Output:

splzfs:~# /usr/local/libexec/zfs/zconfig.sh 
test 1 - persistent zpool.cache: PASS
test 2 - scan disks for pools to import: PASS
test 3 - ZVOL sanity: PASS

The first two tests passed without difficulty (thanks to the piece of duct tape).

The ZVOL sanity test takes a bit longer, maybe because the server I use is a bit old.

The zpios-sanity.sh script also works well:

splzfs:~# /usr/local/libexec/zfs/zpios-sanity.sh 
status    name        id    wr-data wr-ch   wr-bw   rd-data rd-ch   rd-bw
-------------------------------------------------------------------------------
PASS:     file-raid0   0    64m 64  20.52m  64m 64  838.66m
PASS:     file-raid10  0    64m 64  10.34m  64m 64  933.38m
PASS:     file-raidz   0    64m 64  7.86m   64m 64  748.92m
PASS:     file-raidz2  0    64m 64  6.29m   64m 64  718.95m
PASS:     lo-raid0     0    64m 64  20.10m  64m 64  961.69m
PASS:     lo-raid10    0    64m 64  5.80m   64m 64  794.47m
PASS:     lo-raidz     0    64m 64  6.26m   64m 64  641.12m
PASS:     lo-raidz2    0    64m 64  7.58m   64m 64  679.89m
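For longer runs it can be handy to scan that output mechanically; a hypothetical one-liner (not part of the ZFS scripts) that prints any non-PASS configuration names:

```shell
# Hypothetical check: list zpios-sanity.sh configurations that did not
# report PASS. Rows start with "PASS:"/"FAIL:"; the header and the
# dashed separator have no trailing colon in the first field.
zpios_out='PASS:     file-raid0   0    64m 64  20.52m  64m 64  838.66m
PASS:     file-raidz2  0    64m 64  6.29m   64m 64  718.95m'
failures=$(printf '%s\n' "$zpios_out" |
    awk '$1 ~ /:$/ && $1 != "PASS:" { print $2 }')
if [ -z "$failures" ]; then
    echo "all zpios configurations passed"
else
    echo "failed: $failures"
fi
```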

Thanks.

@davromaniak
Author

I've just tested ZFS on the 32-bit server.

It works well with "/usr/local/libexec/.script-config" modified accordingly (replace amd64 with 686).

Thanks.

@behlendorf
Contributor

Thanks for the guide davromaniak, I'll update the wiki with your Debian hints. Also, can you try simply omitting --prefix from your configure line? This should result in the rpm setting the path to /usr/ instead of /usr/local/, and then I don't think you'll need the duct tape.

@davromaniak
Author

The problem on Debian is that when I omit --prefix from my configure line, it puts everything in /usr/local (if I remember correctly).

I simply use --prefix as a personal reminder :).

My problem was with the zconfig.sh script, which has the ZFS_SH variable set to "/usr/libexec/zfs/zfs.sh", while on Debian the absolute path of zfs.sh is "/usr/local/libexec/zfs/zfs.sh".

I'm going to double-check this tomorrow morning, because I don't want to clutter this howto.

Over the next few weeks, I will try to make Debian packages (and give you the sources) for ZFS and SPL; I've been familiar with Debian/Ubuntu packaging for about 3-4 years. I think this could make installation easier.

Thanks

@davromaniak
Author

Sorry, I closed it by mistake.

@behlendorf
Contributor

I suggested dropping --prefix because when rpmbuild runs configure it will include --prefix=/usr unless you specified your own in the initial configure. So by default the rpm package will install the scripts into the right path. However, a ./configure; make; make install will use /usr/local/, as you say, and the path will be wrong.

That said, I would love to get real Debian packages; I'm not familiar with their packaging. If you could work through all of that, I'd love to take those changes into the project and support both. I appreciate the help!

@davromaniak
Author

As for dropping --prefix, I understand.

For the Debian package, I will probably have to patch the common.sh file (a patch within the package). The paths in common.sh are all "Debian compliant" except for the libexec directory: on Debian, the libexec directory is generally "/usr/lib/name_of_the_program/libexec/" (for example, "/usr/lib/zfs/libexec/"). It's a bit strange, but that's how it is.
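Such a packaging patch could be as small as a path rewrite at build time; a hypothetical sketch (the sample line and file name are illustrative, the real line in common.sh may differ):

```shell
# Hypothetical packaging step: relocate the script directory to Debian's
# /usr/lib/zfs/libexec layout. A throwaway sample file stands in for the
# real common.sh here.
sample=$(mktemp)
printf 'SCRIPTDIR=/usr/libexec/zfs\n' > "$sample"
sed -i 's|/usr/libexec/zfs|/usr/lib/zfs/libexec|g' "$sample"
cat "$sample"    # SCRIPTDIR=/usr/lib/zfs/libexec
rm -f "$sample"
```

In a real Debian package this would more likely live in debian/patches than in debian/rules, so the change is tracked as a proper patch.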

I will keep you informed on this.

Thanks.

FransUrbo pushed a commit to FransUrbo/zfs that referenced this issue Apr 29, 2013
behlendorf added a commit to behlendorf/zfs that referenced this issue Jun 24, 2016
Flag #20 was used in OpenZFS as DMU_BACKUP_FEATURE_RESUMING, so the
DMU_BACKUP_FEATURE_LARGE_DNODE flag must be shifted to #21 and
reserved in the OpenZFS implementation.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
akatrevorjay added a commit to akatrevorjay/zfs that referenced this issue Dec 16, 2017
markroper added a commit to markroper/zfs that referenced this issue Feb 12, 2020
Using zfs with Lustre, an arc_read can trigger a kernel memory allocation
that in turn leads to a memory reclaim callback and a deadlock within a
single zfs process. This change uses spl_fstrans_mark and
spl_fstrans_unmark to prevent the reclaim attempt and the deadlock
(https://zfsonlinux.topicbox.com/groups/zfs-devel/T4db2c705ec1804ba).
The stack trace observed is:

     #0 [ffffc9002b98adc8] __schedule at ffffffff81610f2e
     #1 [ffffc9002b98ae68] schedule at ffffffff81611558
     #2 [ffffc9002b98ae70] schedule_preempt_disabled at ffffffff8161184a
     #3 [ffffc9002b98ae78] __mutex_lock at ffffffff816131e8
     #4 [ffffc9002b98af18] arc_buf_destroy at ffffffffa0bf37d7 [zfs]
     #5 [ffffc9002b98af48] dbuf_destroy at ffffffffa0bfa6fe [zfs]
     #6 [ffffc9002b98af88] dbuf_evict_one at ffffffffa0bfaa96 [zfs]
     #7 [ffffc9002b98afa0] dbuf_rele_and_unlock at ffffffffa0bfa561 [zfs]
     #8 [ffffc9002b98b050] dbuf_rele_and_unlock at ffffffffa0bfa32b [zfs]
     #9 [ffffc9002b98b100] osd_object_delete at ffffffffa0b64ecc [osd_zfs]
    #10 [ffffc9002b98b118] lu_object_free at ffffffffa06d6a74 [obdclass]
    #11 [ffffc9002b98b178] lu_site_purge_objects at ffffffffa06d7fc1 [obdclass]
    #12 [ffffc9002b98b220] lu_cache_shrink_scan at ffffffffa06d81b8 [obdclass]
    #13 [ffffc9002b98b278] shrink_slab at ffffffff811ca9d8
    #14 [ffffc9002b98b338] shrink_node at ffffffff811cfd94
    #15 [ffffc9002b98b3b8] do_try_to_free_pages at ffffffff811cfe63
    #16 [ffffc9002b98b408] try_to_free_pages at ffffffff811d01c4
    #17 [ffffc9002b98b488] __alloc_pages_slowpath at ffffffff811be7f2
    #18 [ffffc9002b98b580] __alloc_pages_nodemask at ffffffff811bf3ed
    #19 [ffffc9002b98b5e0] new_slab at ffffffff81226304
    #20 [ffffc9002b98b638] ___slab_alloc at ffffffff812272ab
    #21 [ffffc9002b98b6f8] __slab_alloc at ffffffff8122740c
    #22 [ffffc9002b98b708] kmem_cache_alloc at ffffffff81227578
    #23 [ffffc9002b98b740] spl_kmem_cache_alloc at ffffffffa048a1fd [spl]
    #24 [ffffc9002b98b780] arc_buf_alloc_impl at ffffffffa0befba2 [zfs]
    #25 [ffffc9002b98b7b0] arc_read at ffffffffa0bf0924 [zfs]
    #26 [ffffc9002b98b858] dbuf_read at ffffffffa0bf9083 [zfs]
    #27 [ffffc9002b98b900] dmu_buf_hold_by_dnode at ffffffffa0c04869 [zfs]

Signed-off-by: Mark Roper <markroper@gmail.com>
allanjude pushed a commit to KlaraSystems/zfs that referenced this issue Apr 28, 2020
This issue was closed.