Mounted s3 gone after rebooting EC2 instance #412

Closed
yuanzhou opened this issue May 10, 2016 · 9 comments

@yuanzhou commented May 10, 2016

Here is my use case: I created a cluster on AWS using cfncluster and successfully mounted my S3 bucket on all the cluster nodes using s3fs.

#!/bin/bash

# Install everything under the ec2-user home directory so we can log in and submit jobs later
cd /home/ec2-user/

# Creating mountpoint
mkdir s3mnt

# So ec2-user can access when logged in via ssh
chmod 777 s3mnt

# Ensure we have all the dependencies
yum -y install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

# Compile from master via the following commands
git clone https://github.com/s3fs-fuse/s3fs-fuse.git

cd s3fs-fuse

./autogen.sh

./configure

make

make install

# Don't forget to jump out
cd ..

# No need to keep the source files
rm -rf s3fs-fuse

# Store the S3 access key ID and secret key in a credentials file
echo xxxx:yyyyy > .passwd-s3fs

# Allow ec2-user to own this file
chown ec2-user:ec2-user .passwd-s3fs

# Make sure the file has proper permissions
chmod 600 .passwd-s3fs

# Actual mounting
# Need -o allow_other, otherwise will see ???? in directory listing
s3fs my-s3 /home/ec2-user/s3mnt -o allow_other -o passwd_file=/home/ec2-user/.passwd-s3fs -o umask=0000

As suggested, I also edited /etc/fstab to add this mount entry:

[ec2-user@ip-172-31-19-81 ~]$ sudo cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/disk/by-ebs-volumeid/vol-f5824650 /shared ext4 _netdev 0 0
s3fs#my-s3 /home/ec2-user/s3mnt fuse _netdev,allow_other 0 0
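An entry like this can be sanity-checked without a full reboot; a minimal sketch using mount -a and the mountpoint utility from util-linux:

# remount everything listed in /etc/fstab, then verify the s3fs mountpoint
sudo mount -a
mountpoint /home/ec2-user/s3mnt && ls /home/ec2-user/s3mnt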

I was also prompted to enable "user_allow_other" in /etc/fuse.conf.
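For reference, enabling that option amounts to one line in that file; a minimal sketch, assuming the stock /etc/fuse.conf path:

# append user_allow_other to /etc/fuse.conf unless it is already present
sudo sh -c 'grep -q "^user_allow_other" /etc/fuse.conf || echo user_allow_other >> /etc/fuse.conf'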

But when I tried rebooting or stopping/starting the master instance via the AWS web console, the S3 bucket was not mounted. I had to mount it again manually using

s3fs my-s3 /home/ec2-user/s3mnt -o allow_other -o passwd_file=/home/ec2-user/.passwd-s3fs -o umask=0000

Is there anything that I missed?

@yuanzhou (Author) commented May 11, 2016

Just to update: I also tried adding a cron job to mount this S3 bucket on reboot, but it didn't work. Other simple cron jobs on reboot did work, though.
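For reference, such an @reboot crontab entry would look something like the sketch below; the sleep is an assumption, added to give networking time to come up before s3fs runs:

# hypothetical root crontab entry (crontab -e); the 30-second delay is an assumption
@reboot sleep 30 && /usr/local/bin/s3fs my-s3 /home/ec2-user/s3mnt -o allow_other -o passwd_file=/home/ec2-user/.passwd-s3fs -o umask=0000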

@ggtakec (Member) commented May 14, 2016

@yuanzhou
Is there any log output from s3fs in /var/log/messages (or another log file) after the failure?
(You can change the log level with the dbglevel option or the -d option.)

Previously, we had to specify the retries option in fstab when using the iam_role option.

Please try to capture a log, and if you can, please set the retries option too.

Thanks in advance for your help.
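For anyone collecting such a log, a sketch of a foreground debug run (dbglevel, -f, and curldbg are standard s3fs-fuse debugging options; bucket and paths taken from the report above):

# run s3fs in the foreground with debug and curl output to see why the mount fails
s3fs my-s3 /home/ec2-user/s3mnt -o passwd_file=/home/ec2-user/.passwd-s3fs -o dbglevel=info -f -o curldbg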

@selimnasrallah88 commented Jul 4, 2016

Hello @yuanzhou,

The S3 mount has to run after the networking services have started.

Solution: create an init script managed by chkconfig and give it a late start order on boot (S99, for example) and a kill order of 80. Copy the script below into /etc/init.d/ (as "s3", for example), then register and start it:

sudo chkconfig --add s3

sudo service s3 start

Optionally, adjust the runlevels in the chkconfig header to match your system.

#!/bin/sh
# chkconfig: 2345 99 80
# description: Mount an S3 bucket locally with s3fs after networking is up

RETVAL=0

start() {
    echo -n $"Starting..."
    /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    ls -al /s3dbbackup
}

stop() {
    echo -n $"Stopping..."
    umount /s3dbbackup
}

reload() {
    umount /s3dbbackup
    /usr/local/bin/s3fs -o allow_other -o use_cache=**** bucket /s3dbbackup
    ls -al /s3dbbackup
}

status() {
    ls -al /s3dbbackup
}

# See how we were called.
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
status)
    status
    ;;
restart)
    stop
    start
    ;;
reload)
    reload
    ;;
*)
    echo $"Usage: $0 {start|stop|restart|status|reload}"
    RETVAL=2
esac

exit $RETVAL

@yuanzhou (Author) commented Jul 5, 2016

@selimnasrallah88 thanks a lot for the suggestion. We've switched from S3 to EBS due to performance issues. AWS also made EFS available recently.

@ngbranitsky commented Jul 5, 2016

What was your use case for S3, given that EBS clearly does not support shared storage?
AWS EFS is restricted to a single VPC, doesn't support snapshots, and costs $0.30/GB/month.
If you are looking for a cross-VPC solution that does support snapshots,
consider a Virtual Private SAN (VPSA) from AWS partner Zadara, at $0.08/GB/month.
It's available in the AWS Marketplace.

Norman Branitsky


@ggtakec (Member) commented Jul 18, 2016

I'm sorry for my late reply.

@selimnasrallah88, @ngbranitsky thanks for your help.

@yuanzhou
You can try specifying the "retries" option in fstab.
You might be able to resolve this problem with that option (e.g. retries=5, retries=10, ...).
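Applied to the fstab entry from the original report, that would look something like this sketch (retries=5 is just an example value):

s3fs#my-s3 /home/ec2-user/s3mnt fuse _netdev,allow_other,retries=5 0 0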

Thanks in advance for your assistance.

@daavve commented Jul 19, 2017

I have the same problem with the fstab on an Arch Linux server. I can mount successfully by running s3fs from my user account, but I cannot get the mount working via fstab.

my-bucket /mnt/s3fs fuse.s3fs _netdev,allow_other,endpoint=us-west-2,use_cache=/tmp,storage_class=reduced_redundancy 0 0

When I try to mount, I get the following:

# mount -a -v
/                        : ignored
/mnt/s3fs                : successfully mounted
# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    254:0    0  10G  0 disk 
└─vda1 254:1    0  10G  0 part /
#ls -al /mnt/s3fs/
total 8
drwxr-xr-x 2 root root 4096 Jul 18 21:22 .
drwxr-xr-x 3 root root 4096 Jul 18 22:59 ..

I have lots of files inside my bucket, so ls should have shown something. The dmesg output looks normal, and I cannot find any log messages.

-Thanks,

-Dave
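One way to see why a mount comes up empty is to run s3fs in the foreground with debugging enabled, as suggested earlier in this thread; a sketch reusing the options from the fstab entry above:

# foreground run with debug and curl output; Ctrl-C to stop
sudo s3fs my-bucket /mnt/s3fs -o endpoint=us-west-2 -o use_cache=/tmp -o dbglevel=info -f -o curldbg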

@stormm2138 commented Feb 24, 2018

I updated the init script provided by @selimnasrallah88 --

#!/bin/sh
# fuse_s3_mount          Mount / unmount an s3 bucket using fuse
#
# chkconfig: 2345 85 15
# description: Use Fuse to mount an s3 bucket locally -- https://github.com/s3fs-fuse/s3fs-fuse
#
### BEGIN INIT INFO
# Provides: fuse_s3_mount
# Required-Start: $local_fs $remote_fs $network $named $fuse
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: start and stop s3 Fuse Mount
# Description: Use Fuse to mount an s3 bucket locally -- https://github.com/s3fs-fuse/s3fs-fuse
### END INIT INFO


# Source function library.
. /etc/init.d/functions

MOUNT_PATH=""
RETVAL=0

if [ -z "${MOUNT_PATH}" ]; then
    echo -n "MOUNT_PATH must be set"
    failure
    exit 1
fi

start() {
    echo "Starting Fuse s3 mount service to mount ${MOUNT_PATH} to s3"
    if mountpoint ${MOUNT_PATH} >> /dev/null; then
       echo -n "${MOUNT_PATH} is already mounted, skipping start"
       warning
       echo
    else
       mount ${MOUNT_PATH}
       mountpoint ${MOUNT_PATH} >> /dev/null
       RETVAL=$?
       echo -n "Mounting fuse s3 mount... "
       if [ ${RETVAL} -ne 0 ]; then
          failure
          echo -e "\n You can debug the Fuse mounting using:\n"
          echo -e "s3fs BUCKET_NAME ${MOUNT_PATH} -o dbglevel=info -f -o curldbg"
       else
          success
          echo -e "\nMount contents: $(ls -l ${MOUNT_PATH}/)"
       fi
    fi
}

stop() {
    echo "Stopping Fuse s3 mount service."
    if ! mountpoint ${MOUNT_PATH} >> /dev/null; then
       echo -n "${MOUNT_PATH} was not mounted, skipping stop"
       warning
    else
       umount ${MOUNT_PATH}
       RETVAL=$?
       echo -n "Unmounting fuse s3 mount..."
       if [ ${RETVAL} -ne 0 ]; then
          failure
       else
          success
       fi
    fi
    echo
}

status(){
    mountpoint ${MOUNT_PATH}
    echo "Mount contents: $(ls -l ${MOUNT_PATH}/)"
}

case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
status)
    status
    ;;
restart)
    stop
    start
    ;;
*)
    echo $"Usage: $0 {start|stop|restart|status}"
    RETVAL=2
esac

exit $RETVAL
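For completeness, installing this script mirrors the steps given earlier; a sketch assuming it is saved as fuse_s3_mount (set MOUNT_PATH inside the script first, and keep the matching entry in /etc/fstab, since the script mounts via mount ${MOUNT_PATH}):

sudo cp fuse_s3_mount /etc/init.d/fuse_s3_mount
sudo chmod +x /etc/init.d/fuse_s3_mount
sudo chkconfig --add fuse_s3_mount
sudo service fuse_s3_mount start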

@ggtakec (Member) commented Mar 30, 2019

We kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.

@ggtakec ggtakec closed this Mar 30, 2019
