This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Pipe to self: "Error: Lock file is already being hold" #1835

Closed
alanshaw opened this issue Jan 22, 2019 · 12 comments · Fixed by #1860
Assignees
Labels
exp/expert · help wanted · kind/bug · P3 Low: Not priority right now

Comments

@alanshaw
Member

jsipfs cid base32 fails to start because repo is locked by jsipfs add -q

$ echo "hello world" | jsipfs add -q | DEBUG=* jsipfs cid base32
  cli daemon is off +0ms
  jsipfs EXPERIMENTAL pubsub is enabled +0ms
  jsipfs booting +2ms
  repo opening at: /Users/alan/.jsipfs +0ms
  repo init check +1ms
  repo:version comparing version: 7 and 7 +0ms
  repo init null { config: true, spec: true, version: undefined } +3ms
  repo:lock locking /Users/alan/.jsipfs/repo.lock +0ms
  repo:lock Error: Lock file is already being hold
  repo:lock     at options.fs.stat (/Users/alan/Code/protocol-labs/js-ipfs/node_modules/proper-lockfile/lib/lockfile.js:54:47)
  repo:lock     at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/graceful-fs/polyfills.js:285:20
  repo:lock     at FSReqWrap.oncomplete (fs.js:155:5) +3ms
events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: write EPIPE
    at WriteWrap.afterWrite [as oncomplete] (net.js:792:14)
Emitted 'error' event at:
    at onwriteError (_stream_writable.js:431:12)
    at onwrite (_stream_writable.js:456:5)
    at _destroy (internal/streams/destroy.js:40:7)
    at Socket.dummyDestroy [as _destroy] (internal/process/stdio.js:11:34)
    at Socket.destroy (internal/streams/destroy.js:32:8)
    at WriteWrap.afterWrite [as oncomplete] (net.js:794:10)

I think go-ipfs has the concept of command requirements, perhaps we need something similar i.e. jsipfs cid does not require a repo so shouldn't need to obtain a lock.

@alanshaw added the kind/bug, exp/expert, help wanted, status/ready and P3 labels Jan 22, 2019
alanshaw added a commit that referenced this issue Jan 31, 2019
The switch to using `yargs-promise` for `ipfs init` and `ipfs daemon` commands caused an unhandled promise rejection and in some cases would cause an error to not be printed to the console.

This PR greatly simplifies the code in `src/cli/bin.js`, to always use `yargs-promise`. Command handlers are now passed an async `getIpfs` function instead of an `ipfs` instance. It means that we don't have to differentiate between commands that use an IPFS instance in `src/cli/bin.js`, giving the handler the power to call `getIpfs` or not to obtain an IPFS instance as and when needed. This removes a whole bunch of complexity from `src/cli/bin.js` at the cost of adding a single line to every command handler that needs to use an IPFS instance.

This enables operations like `echo "hello" | jsipfs add -q | jsipfs cid base32` to work without `jsipfs cid base32` failing because it's trying to acquire a repo lock when it doesn't use IPFS at all.

fixes #1835
refs #1858
refs libp2p/js-libp2p#311

License: MIT
Signed-off-by: Alan Shaw <alan.shaw@protocol.ai>
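The `getIpfs` pattern described above can be sketched as follows. This is an illustrative outline, not the actual js-ipfs code: all handler names and bodies are hypothetical, and the lock flag just stands in for acquiring the repo lock.

```javascript
// Sketch of the getIpfs pattern from the PR description: the CLI passes
// every command handler a lazy async getIpfs function, and the repo lock
// is only taken if a handler actually calls it. All names are hypothetical.
let lockTaken = false

async function getIpfs () {
  lockTaken = true // stands in for acquiring the repo lock and booting IPFS
  return { add: async data => 'Qm...' }
}

// `jsipfs cid base32` style command: pure transformation, no repo needed
async function cidBase32Handler ({ cid }) {
  return cid.toLowerCase() // stand-in for the real base32 re-encoding
}

// `jsipfs add` style command: needs a node, so it calls getIpfs
async function addHandler ({ getIpfs, data }) {
  const ipfs = await getIpfs() // lock acquired here, and only here
  return ipfs.add(data)
}
```

Because the `cid` handler never calls `getIpfs`, piping `jsipfs add` into it no longer makes two processes race for the same repo lock.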
@alanshaw alanshaw mentioned this issue Jan 31, 2019
@ghost ghost assigned alanshaw Jan 31, 2019
@ghost added the status/in-progress label and removed the status/ready label Jan 31, 2019
alanshaw added a commit that referenced this issue Feb 5, 2019
@ghost removed the status/in-progress label Feb 6, 2019
alanshaw added a commit that referenced this issue Feb 6, 2019
@Schwartz10

Schwartz10 commented Feb 28, 2019

@alanshaw unfortunately I'm still getting this issue, with this Node program:

const IPFS = require('ipfs');

const init = () => {
  const ipfs = new IPFS();

  ipfs.on('ready', () => console.log('ready'));
  ipfs.on('error', err => console.log(err));
};

init();

The error is being caught in the error event listener:

{ Error: Lock file is already being hold
    at options.fs.stat (<repo-path>/node_modules/proper-lockfile/lib/lockfile.js:54:47)
    at <repo-path>/node_modules/graceful-fs/polyfills.js:285:20
    at FSReqCallback.oncomplete (fs.js:160:5) code: 'ELOCKED', file: '/Users/<me>/.jsipfs' }
node: v11.9.0
ipfs: ^0.34.4

Running this program several times in a row (one after another, not simultaneously) will cause the error. If I stop, wait a bit, and run it again, it sometimes works. It seems like it could be related to #229. How can I help?

@KrishnaPG
Contributor

Same problem here. During development the app must be restarted frequently as the code changes, which frequently leads to the "Lock file is already being hold" error.

It's fine that the previous instance holds the lock, but how can a new instance override or release it? Is there anything we can do in code to resolve this situation and force the node to start?

This may turn out to be a problem in production too, if this file-system lock cannot be released programmatically.

@ohager

ohager commented Sep 25, 2019

Same problem here. During development the app must be restarted frequently as the code changes, which frequently leads to the "Lock file is already being hold" error.

It's fine that the previous instance holds the lock, but how can a new instance override or release it? Is there anything we can do in code to resolve this situation and force the node to start?

This may turn out to be a problem in production too, if this file-system lock cannot be released programmatically.

You could remove the repo.lock folder within the repo directory. This is not elegant, but works for me (so far).

@ptoner

ptoner commented Oct 13, 2019

I also frequently get this when running unit tests that create an instance and then fail for whatever reason. When I run the tests again, this almost always happens. I also handle it by manually deleting the repo.lock folder.

@alanshaw
Member Author

alanshaw commented Oct 14, 2019

Only one IPFS node can access an IPFS repo at a time.

Use the repo option to specify a directory if you need to start up multiple nodes. Please see this example for further instructions on how to run multiple nodes simultaneously on the same computer.

In tests you should use a temporary directory that you clean up after the node has shut down. You could also use https://github.com/ipfs/js-ipfsd-ctl to make this job easier.

@ptoner

ptoner commented Oct 14, 2019

Only one is running at a time. It starts, it crashes, and then it doesn't start again. I've resolved it by deleting the repo.lock file in code, but it's pretty confusing.

@alanshaw
Member Author

If you retry after 10s proper-lockfile should consider the lock stale and succeed. I'd still recommend using a temporary repo if you're running tests where things may fail at any time.

I think a PR to proper-lockfile adding an option to remove the lock on uncaught exception / unhandled rejection might be an option, but you should open an issue first at https://github.com/moxystudio/node-proper-lockfile/issues to see if it would be welcome.
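The stale-lock behaviour mentioned above suggests a simple retry loop. This is a sketch under the assumptions stated in this thread (the lock goes stale after ~10s and the failure surfaces as an `ELOCKED` error); `openRepo` is a hypothetical function standing in for whatever opens your node, and the timings are illustrative.

```javascript
// Retry an ELOCKED open, waiting long enough for the lock to go stale.
async function withRetry (openRepo, { retries = 3, delayMs = 10000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await openRepo()
    } catch (err) {
      if (err.code !== 'ELOCKED' || attempt >= retries) throw err
      // lock still held: wait for it to be released or to go stale
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
}
```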

@ptoner

ptoner commented Oct 15, 2019

It happens most often with tests, but it also sometimes happens when the app I'm testing crashes. I'll look into the temp repo for the tests.

The 10s thing is helpful info and makes sense because I'm usually able to restart it if I wait a bit.

@ungarson

I came here because of this problem in my JS. The problem was that I called IPFS.create() twice (in two different places), and I guess they somehow conflicted. I solved it by initializing in one place. Hope it helps people like me :)

@bn185068

bn185068 commented Jul 24, 2021

@ungarson thanks for your post. This solved the problem for me. I built an API, and every time a specific route was called I was calling `await IPFS.create()` in the route's logic, which I guess was reinitializing the repo and hitting the LockExistsError. I solved it by initializing the repo once in my app.js file and setting it globally for reuse. Something like this:

const IPFS = require('ipfs-core')

async function initGlobalIPFS () {
  global.IPFS = await IPFS.create()
}

initGlobalIPFS()

This works; however, I'm not clear on why it works. Does anyone have reference documentation I can read through to better understand what happens when calling the .create() method and how IPFS repos work? I'd love to understand it more, but I'm having difficulty finding docs or reference materials that explain it. Thanks!
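A common refinement of the globals approach above is to memoise the creation promise, so concurrent callers share one node instead of racing to take the repo lock. This is a generic sketch, not js-ipfs API: `createNode` stands in for `IPFS.create`, and the wrapper is the point.

```javascript
// Lazy singleton: only the first caller triggers creation; everyone else
// reuses the same pending or resolved promise, so create() runs once.
let nodePromise = null

function getNode (createNode) {
  if (!nodePromise) {
    nodePromise = createNode() // first caller kicks off creation
  }
  return nodePromise           // later callers share the same promise
}
```

Route handlers then call `getNode(...)` instead of `IPFS.create()` directly, which avoids a second create() taking the lock while the first is still booting.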

@achingbrain
Member

@bn185068 please can you ask on https://discuss.ipfs.io? You're more likely to get a good answer there; this repo is for reporting bugs/issues with js-ipfs, not for how-do-I type questions.

@bn185068

@achingbrain apologies. Since I was reporting that I also experienced this "LockExistsError" while using this package, I assumed it was appropriate to share how I solved it on the thread where others are hitting the same error. I didn't realize I couldn't ask a question about the code I'm using in an issue attached to that code's repo. I'll ask my question about the js-ipfs code on the IPFS discussion forum. Thanks for the reference link.
