Uncaught Error: no protocol with name: libp2p-webrtc-star #1029

Closed
pgte opened this Issue Sep 25, 2017 · 10 comments


pgte commented Sep 25, 2017

  • Version: 0.26.0
  • Platform: Darwin
  • Subsystem:

Type: Critical

Severity: High

Description:

When starting an IPFS node, the following error is thrown:

protocols-table.js:17 Uncaught Error: no protocol with name: libp2p-webrtc-star
    at Protocols (protocols-table.js:17)
    at stringToStringTuples (codec.js:45)
    at stringToBuffer (codec.js:170)
    at Object.fromString (codec.js:178)
    at new Multiaddr (index.js:39)
    at Multiaddr (index.js:27)
    at config.Addresses.Swarm.forEach (pre-start.js:27)
    at Array.forEach (<anonymous>)
    at waterfall (pre-start.js:26)
    at nextTask (waterfall.js:16)
    at next (waterfall.js:23)
    at onlyOnce.js:12
    at peerId.createFromPrivKey (pre-start.js:21)
    at waterfall (index.js:205)
    at once.js:12
    at next (waterfall.js:21)
    at onlyOnce.js:12
    at privKey.public.hash (index.js:198)
    at Multihashing.Multihashing.digest (index.js:33)
    at index.js:15
    at run (setImmediate.js:40)
    at runIfPresent (setImmediate.js:69)
    at onGlobalMessage (setImmediate.js:109)

Steps to reproduce the error:

Start an IPFS node with the following options:

    ipfs = new IPFS({
      EXPERIMENTAL: {
        pubsub: true
      }
    })

ya7ya commented Sep 25, 2017

Hey @pgte, are you using IPFS in the browser? This is probably due to the recent update to multiaddr. Try this code instead:

// browser start
ipfs = new IPFS({
  config: {
    Addresses: {
      Swarm: [
        "/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star"
      ],
      API: '',
      Gateway: ''
    }
  },
  EXPERIMENTAL: {
    pubsub: true
  }
})
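
For illustration, here is a minimal sketch of why the old address format fails. It calls the multiaddr package directly and assumes a release from after the webrtc-star rename:

    // Minimal sketch: parse both address formats with the multiaddr package.
    const multiaddr = require('multiaddr')

    // New format (transport name last) parses fine.
    console.log(multiaddr('/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star').toString())

    // Old format: 'libp2p-webrtc-star' is no longer in the protocol table,
    // so constructing the multiaddr throws.
    try {
      multiaddr('/libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss')
    } catch (err) {
      console.error(err.message) // "no protocol with name: libp2p-webrtc-star"
    }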

pgte commented Sep 25, 2017

@ya7ya Yes, running on Chrome.
I tried that and got the same error.

BTW, I tried debugging it, and the addresses that config.Addresses.Swarm returns are ["/libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss"].

ya7ya commented Sep 25, 2017

@pgte That's the config from older versions, so there is probably a cached repo. Try clearing your cache and let me know if that fixes the issue.

pgte commented Sep 25, 2017

Clearing the cache fixed this issue.

Wasn't the cached config supposed to have been overridden by my explicit config?

ya7ya commented Sep 25, 2017

It would have, if we explicitly specified a repo when creating the node, like so:

ipfs = new IPFS({
  repo: 'ipfs-repo-' + String(Math.random()),
  config: {
    Addresses: {
      Swarm: [
        "/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star"
      ],
      API: '',
      Gateway: ''
    }
  },
  EXPERIMENTAL: {
    pubsub: true
  }
})

pgte commented Sep 25, 2017

I understand that creating a random repo busts the cache, but my question was more:

What's the point of specifying the config if it's not going to override the cached config?

ya7ya commented Sep 25, 2017

@pgte Oh, my bad, I misunderstood. Good question; I don't know the exact answer, so I'll look at the start and pre-start processes to figure it out.

If I had to make a guess, though, I'd say it's bad practice to run a new config on an older repo because that might break things. Again, I'm not certain about this.

Edit: check the next comment for the correct answer.

ya7ya commented Sep 25, 2017

@pgte So I took a look at src/boot.js, and you can see where the config is extended. The thing is, it adds the new config to the old one, so you end up with both addresses: Swarm: [ "/libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss", "/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star"].

The older address is what throws the error, since its protocol name is no longer recognized.

Here is the snippet of that part:

    // Need to set config
    if (setConfig) {
      if (!hasRepo) {
        console.log('WARNING, trying to set config on uninitialized repo, maybe forgot to set "init: true"')
      } else {
        tasks.push((cb) => {
          waterfall([
            (cb) => self.config.get(cb),
            (config, cb) => {
              // this extends the config. so both (old & new) addresses are there.
              extend(config, options.config)

              self.config.replace(config, cb)
            }
          ], cb)
        })
      }
    }
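
To make the effect concrete, here is a rough, self-contained illustration of the merge behaviour described above; mergeConfig is a hypothetical stand-in for the extend call, not the actual implementation:

    // Hypothetical stand-in for the extend used in boot.js: the stored repo
    // config keeps its old Swarm entry, and the supplied entry is added on top.
    function mergeConfig (stored, supplied) {
      return {
        Addresses: {
          Swarm: (stored.Addresses.Swarm || []).concat(supplied.Addresses.Swarm || [])
        }
      }
    }

    const storedConfig = {
      Addresses: { Swarm: ['/libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss'] }
    }
    const suppliedConfig = {
      Addresses: { Swarm: ['/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star'] }
    }

    console.log(mergeConfig(storedConfig, suppliedConfig).Addresses.Swarm)
    // Both the stale and the new address survive, so pre-start still tries to
    // parse the old one and throws "no protocol with name: libp2p-webrtc-star".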

pgte commented Sep 25, 2017

@ya7ya that explains it. Thanks!

Asone commented May 28, 2018

Hi,
I'm having a hard time myself with this exception.

I currently use IPFS through a customized implementation of ipfs-webpack-plugin. The plugin uses v0.26 without any problem; however, updating the plugin to a newer version systematically produces this exception.

I've been trying 0.28 and 0.27 (and a bunch of minor versions of those) with no result. The configuration passed to the instance is the following:

 {
    start: false,
    config: {
      Addresses: {
        Swarm: [
          "/ip4/0.0.0.0/tcp/4002",
          "/ip4/127.0.0.1/tcp/4003/ws",
          "/libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss",
          "/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star"
        ],
        API: '/ip4/127.0.0.1/tcp/5002',
        Gateway: '/ip4/127.0.0.1/tcp/9090',
      },
      Discovery: {
        MDNS: {
          Enabled: true,
          Interval: 10
        },
        webRTCStar: {
          Enabled: true
        }
      },
      Bootstrap: [
        "/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
        "/ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z",
        "/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
        "/ip4/162.243.248.213/tcp/4001/ipfs/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm",
        "/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
        "/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
        "/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
        "/ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3",
        "/ip4/104.236.151.122/tcp/4001/ipfs/QmSoLju6m7xTh3DuokvT3886QRYqxAzb1kShaanJgW36yx"
      ],
      Identity: {
        PeerID: "QmRSNiq3rEkPpokCePZ3oko9eV9Fszn1nP9z8YihjGfPme",
        PrivKey: "CAASqAkwggSkAgEAAoIBAQDHQREDvLgU2KzY8NQBfdMrplLNrx9hlTkt5R83vQcQmNkPKpYTt4RsxtmP/XKxfS03aOrX9q8j4og73H5bHFLvA9305nvrUwJ1nrlFbcPFM1QzBK6u/rwN+X5+8F3E64y8Vl3jjQceJ4oKPrcEWDH8fkQZ/RDDPb0FePYZexiLTx80Qzp4a2iV9mfYun6ZW7rxhNYIo01pdB20IuYjLV05NwXDedf9dSOnFiwFY3kxr0FmQ9/nQUMJkkFZ1gYi5ooUh5LoOhxkchJ6Y8lmlGa6QGidyoxzLSSnI408xHSGaVqyvKjBBKp/TbJPi0RjO/TO2xJ90msyf/hi/J9UOpr5AgMBAAECggEBAKNLCrOiXMYQ0I61xzk1sfMKyr9v7mrdjU+0f0IBsyGB8hlA0F92PZub1z7u+ajFqmHHpPa6Xsws4XMVf6QRcVIaPDNxFEtF6zUTkEh67T7WkwGAq9wUPW/CcU18lYxFckADE8zhjdzDkJhWz0xLLyP7Irqdr7giB5/Ngvpc7D91dL2fUBbhhA6CX9fixcSHeLH3V3QHblWKvmw4VEr3H4TUEFfOkTT/1OVJQCfMuCjPSlc/zLSODjoME+ZDbCoHwc6ra5XYa40LCkclcQmdqqYuxDptVAmsI6WHUWONzXgHxyM3I+CJqP0er6ILvXm87lOKXkyJpn4PvrVJwg7yr+UCgYEA7T6d5N4DsmC6FB3TVzjoThT66ZYyIWnJW7R/3y0T9kvqoHcSAgyCJdLQYzZneP7p86W05dq83+2luI4cNMBhaoAnQmxYZo2ZueMu2sJnaP7RAIagwPKNN/ujHcMTM4AEEdwzzjKQCchiq3nVxo5Uq2pdmOzila0NWP9WJIyJ8G8CgYEA1wGYUst3w+9lWtN4j2D1sgP2Mh26m0G6tNkigynYqYJnXQKgY6ieVKxEbILesm1B5IL9akInpdfkLJ77W+eSBt+IO2QmTVQPopQ5bRIMTT3cmAyhrsGRAQnX//cmCnbpJZwBnY/iCKlrzWtxyC5M9YK+fVxVtiZoqXEFznX4jxcCgYBWT9ib4lXP+LbaCLvR2MdTWPisMNOOKnFyZqm65SiFC7uRo6AulKRo5FiiL7HXaE5vMRMuKLVcdpY7HaCPZIpMd9FQriA/Nzb9VPS/68g5f7NEELa9W8Ea4/bFJip/KwzP/p/uXaDfnkKfhhTLRw7wyiLBNzV8JNhdT4/kfijVCwKBgEnv0Xr/X1Mw2xDt0fK0bClodVxsnsRPSS5x0Q178XbxUixI//DlhnUlvG34Xy7KpbM4XH8S+uFsKZoyncvQCYZ1jjqmSQmkk6/b+xeH8lUJpfdfuKYJCJ1rzizGx/0nQSvexytw1FEYOestPLaTPYHcETe47fyynqFOLan/JZfHAoGBALQdJYHLOMkHrZBO7sEcxBfSYxFJFTWWj3oVPCAGgga94iSg7a9cSGiRyQVmZxUvQKquUHjGs0lY8osiSz9lPr3xchCQeP1M5PwaSesBKjBgKQGvo/uyvNugpag5/omW0fivI9yJByUpQyBabCiLbHBd9ea2FYtOYkizO+wlzV47"
      },
      EXPERIMENTAL: {
            pubsub: true
      }
    }
  }

I tried switching flags on and off (e.g. webRTCStar) and modifying parameters (swarm addresses list, bootstrap addresses), and I still get stuck with the same problem.

I don't know if it's really a regression or just a configuration mistake on my part, as my knowledge here might not be deep enough. Anyway, some help would be warmly appreciated, even if it's not critical since I'll stick to v0.26 in the meantime.

Thanks!
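
Given the explanation earlier in the thread, a likely culprit is the legacy /libp2p-webrtc-star/... entry in the Swarm list. Below is a minimal sketch of the adjusted Addresses block, assuming that entry is the problem; a previously initialised repo may also need its stored config cleared or replaced:

    // Sketch: keep only address formats the current multiaddr understands.
    const config = {
      Addresses: {
        Swarm: [
          "/ip4/0.0.0.0/tcp/4002",
          "/ip4/127.0.0.1/tcp/4003/ws",
          "/dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star"
        ],
        API: '/ip4/127.0.0.1/tcp/5002',
        Gateway: '/ip4/127.0.0.1/tcp/9090'
      }
    }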
