
assertion failed ... code fragment does not contain the given arm address #4422

Closed
dmeehan1968 opened this issue Jul 17, 2024 · 18 comments

@dmeehan1968

I have been using 0.0.462 for a while without problems. Yesterday I decided to upgrade to 0.0.535 and started getting problems with sst dev and sst deploy.

Part of that was that when running dev or deploy I would intermittently (possibly on alternate runs) see a 'Log' message (even without verbose mode) mentioning the error in the title of this issue.

I downgraded to 0.0.462 and the issue persisted. I assume that it's coming from a dependency that gets installed when sst install runs after the upgrade/downgrade.

It appears to affect all sst commands that touch the cloud, so refresh is also affected.

When this log message appears, the command then halts indefinitely (no further output however long I wait).

Ctrl-C to abort and then running again often succeeds, although sst deploy seems to fail every time, whereas sst dev will succeed (on alternate invocations).

time=2024-07-17T08:27:03.003+01:00 level=INFO msg=publishing type=*project.StackEvent
time=2024-07-17T08:27:03.003+01:00 level=INFO msg=publishing type=*project.StackEvent
time=2024-07-17T08:27:03.033+01:00 level=INFO msg=publishing type=*project.StackEvent
time=2024-07-17T08:27:03.034+01:00 level=INFO msg=publishing type=*project.StackEvent
time=2024-07-17T08:27:03.284+01:00 level=INFO msg=publishing type=*project.StackEvent                                                                                                                      
time=2024-07-17T08:27:03.285+01:00 level=INFO msg=publishing type=*project.StackEvent
|  Log         assertion failed [arm_interval().contains(address)]: code fragment does not contain the given arm address
|  Log         (CodeFragmentMetadata.cpp:48 instruction_extents_for_arm_address)
⠏  Deploying   [3 skipped]       
@dmeehan1968
Author

May relate to pulumi/pulumi-aws#4190

@dmeehan1968
Author

I found a workaround, which is to fix the @pulumi/aws version via the sst.config.ts file:

      home: "aws",
      providers: {
        aws: {
          version: "6.41.0"
        }
      },
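For context, here is roughly where that pin sits in a complete sst.config.ts (the app name is a placeholder; the rest follows the standard Ion config shape):

    /// <reference path="./.sst/platform/config.d.ts" />

    export default $config({
      app(input) {
        return {
          name: "my-app", // placeholder
          home: "aws",
          providers: {
            aws: {
              version: "6.41.0", // pin to the last version that worked here
            },
          },
        };
      },
      async run() {
        // resources go here
      },
    });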

It seems that pulumi/pulumi-aws#4190 is indeed relevant (sst by default installs 'latest' which is currently 6.45.0).

@flostadler

flostadler commented Jul 17, 2024

Hey @dmeehan1968, pulumi-aws maintainer here. Is my assumption correct that you're running an x86_64 version of sst and therefore also an x86_64 version of pulumi-aws on an Apple Silicon Mac?

Could you try running the commands with the following env variable: GODEBUG=asyncpreemptoff=1? That did work for me locally.
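For example, prefixing a single run:

    GODEBUG=asyncpreemptoff=1 sst deploy

asyncpreemptoff=1 turns off Go's signal-based goroutine preemption, which appears to be what trips this Rosetta assertion when an x86_64 Go binary runs under translation.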

@dmeehan1968
Author

@flostadler

  1. file "$(which sst)" gives Mach-O 64-bit executable arm64
  2. Adding that GODEBUG setting seems to fix it (when { aws: { version: "latest" } } is used in sst.config.ts)
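For reference, the downloaded provider binaries end up in Pulumi's plugin cache, so their architecture can be checked the same way (the version directory below is just an example of what may be installed):

    file ~/.pulumi/plugins/resource-aws-v6.45.0/pulumi-resource-aws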

@flostadler

Oh that's surprising. I don't know enough about SST internals, but it seems like it is somehow downloading an x86_64 version of the aws provider.

@thdxr
Contributor

thdxr commented Jul 18, 2024

sst doesn't actually handle downloading the provider - it uses the Pulumi Automation API, so that's delegated to Pulumi

@jayair
Contributor

jayair commented Jul 19, 2024

I'll close this for now

@jayair jayair closed this as completed Jul 19, 2024
@dmeehan1968
Author

@jayair not sure why you closed this. It currently only works if I pin the aws provider to 6.41.0.

@jayair
Contributor

jayair commented Jul 19, 2024

I thought it was fixed upstream. It's not?

@dmeehan1968
Author

@jayair only that the GODEBUG setting also acts as a workaround. I've not tried it today; the latest version was 6.45.0 when I tested it, and it was not working without either the GODEBUG option or pinning the version.

It looks like for some reason we are getting the Intel binary. I've also been seeing some extremely long deploy times (10 minutes for something that's usually <1 minute), which might be down to Rosetta overhead, but might be unrelated. I'll try some more tests as soon as I can.

@jayair jayair reopened this Jul 19, 2024
@dmeehan1968
Author

@jayair Yes, this still fails.

sst v0.0.535

SST ❍ ion 0.0.535  ready!

➜  App:        replicated
   Stage:      dmeehan
   Console:    https://console.sst.dev/local/replicated/dmeehan

~  Deploying

|  Log         assertion failed [arm_interval().contains(address)]: code fragment does not contain the given arm address
|  Log         (CodeFragmentMetadata.cpp:48 instruction_extents_for_arm_address)
⠙  Deploying   [1 skipped]                        

That final 'Deploying' will sit there for several minutes/indefinitely and the deploy won't complete (at least not in the normal way).

Two workarounds as mentioned above:

  1. Pin the aws provider version
  2. GODEBUG env var to alter Pulumi runtime behaviour

npm list from within .sst/platform with { providers: { aws: true } } gives:

├── @pulumi/aws@6.45.0
├── @pulumi/cloudflare@5.24.1
├── @pulumi/docker@4.5.3
├── @pulumi/pulumi@3.112.0
├── @pulumi/random@4.15.0
├── @pulumi/tls@5.0.1

sst v0.1.4

NB: in sst v0.1.4, doing npm list in the .sst/platform directory gives this:

├── @pulumi/aws@6.45.0 invalid: "latest" from the root project

sst dev output is slightly different but it hangs on that final Deploying and doesn't complete.

SST ❍ ion 0.1.4  ready!

➜  App:        replicated
   Stage:      dmeehan
   Console:    https://console.sst.dev/local/replicated/dmeehan

~  Deploying

|  Log         assertion failed [arm_interval().contains(address)]: code fragment does not contain the given arm address
|  Log         (CodeFragmentMetadata.cpp:48 instruction_extents_for_arm_address)
⠧  Creating    NextAuthSecretLinkRef sst:sst:LinkRef                                                                                                                                                       
⠧  Creating    GitHubIdLinkRef sst:sst:LinkRef                                                                                                                                                             
⠧  Creating    GitHubSecretLinkRef sst:sst:LinkRef                                                                                                                                                         
⠧  Creating    PanelSettingsLinkRef sst:sst:LinkRef                                                                                                                                                        
⠧  Creating    DashboardLinkRef sst:sst:LinkRef                                                                                                                                                            
⠧  Deploying   [6 skipped]   

The workarounds are still valid.

@dmeehan1968
Author

@jayair In addition to the above, I can see from macOS Activity Monitor that the Pulumi instances do seem to be using Rosetta (so will be Intel binaries). Note the references to rosetta in the open-files list for the processes.

[Screenshots 2024-07-20 at 08 45 45 and 08 45 59: Activity Monitor open-files lists for the pulumi processes, showing rosetta entries]

This is also the case when using either of the workarounds, so the issue existed prior to the 6.41.0 Pulumi release, and as noted in pulumi/pulumi-aws#4190 this is actually a Go issue at source.
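(For a terminal equivalent of that Activity Monitor check, one option is to look for Rosetta entries in a process's open-file list - the PID here is a placeholder:)

    lsof -p <pulumi_pid> | grep -i rosetta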

For that reason, I would recommend using the GODEBUG workaround rather than pinning the version, as hopefully the underlying issue will be resolved in time. As this isn't a direct SST issue, you could probably close this issue, but it might be worth pinning it (or highlighting it some other way) so others don't spend time trying to figure out a workaround.

It should be noted that I'm seeing stuck node processes after these failed runs, even with the workaround on either of the Pulumi versions mentioned. In the testing I did for my comments today I ended up with multiple Pulumi and node processes that had to be force quit (going by their accumulated run time, they had been there for a few days, since this problem started). It's possible that this was IDE related, as closing the IDE did kill them off. I mentioned before that some of the deploy steps seemed to be taking a lot longer, and I suspect this is a side effect of Rosetta and stuck processes interfering with the normal deploy process.

With either workaround it appears that the Pulumi processes exit correctly, but I'm still left with 3 node processes, one of which is nowhere near idle (with no sst dev running). Subsequent dev or deploy commands seem to reuse these instances, so it's not a big issue.
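If anyone needs to clean these up by hand, something along these lines works (the match patterns are just what showed up on my machine, so adjust to taste):

    pgrep -fl 'pulumi|node'          # list candidate processes with their command lines
    pkill -9 -f pulumi-resource-aws  # force quit a stuck provider process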

[Screenshot 2024-07-20 at 08 22 59: Activity Monitor showing the leftover node processes]

@notsoluckycharm

notsoluckycharm commented Aug 10, 2024

Although not super helpful, I also discovered my sst was running as Intel. I tried to reinstall it via brew, and brew was installing the x86 version of sst every time. I did try to force the arm64 install, but it refused to cooperate. I didn't feel like figuring this out, so I just switched to the README install method of running the bash command. All fixed: sst is arm64 and so is Pulumi now.

If you find yourself hitting this, it may be brew. Perhaps you'll carry the torch further as to why it's installing the Intel version of sst, but you can also just not use it.
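For reference, the non-brew route is the shell installer from the README - at the time of writing it looks like the following, but double-check the README for the current command:

    curl -fsSL https://sst.dev/install | bash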

@dmeehan1968
Author

@notsoluckycharm Although I initially used Homebrew to install SST, I've since adopted installing sst@ion via npm, so it's not global but local to the project. This still exhibits the issue, however. I have a couple of SST projects on the go which are on different SST versions, so installing globally doesn't work for me (I assume that's what you were doing with the README instructions?).

@thdxr
Contributor

thdxr commented Sep 23, 2024

closing for now, feel free to message if this is still an issue

@emlegweak

This is still an issue - I'm on an Apple M3 using the Pulumi Automation API (with TypeScript) for deployments, and the only successful workaround is pinning the aws provider version to 6.41.0.

definitely not ideal in the long run
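For anyone else hitting this from the Automation API, here is a minimal sketch of applying the same two workarounds programmatically. The stack/project names and the empty program are placeholders, and I'm assuming envVars set on the workspace propagates to the CLI and provider processes it spawns:

    import { LocalWorkspace } from "@pulumi/pulumi/automation";

    async function main() {
      const stack = await LocalWorkspace.createOrSelectStack(
        {
          stackName: "dev",          // placeholder
          projectName: "my-project", // placeholder
          program: async () => {
            // resources go here
          },
        },
        {
          // Workaround 2: disable Go's async preemption in the spawned processes
          envVars: { GODEBUG: "asyncpreemptoff=1" },
        },
      );

      // Workaround 1: pre-install the pinned provider version
      // (the @pulumi/aws dependency in package.json still needs to match)
      await stack.workspace.installPlugin("aws", "v6.41.0");

      await stack.up({ onOutput: console.log });
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });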

@jayair
Contributor

jayair commented Oct 4, 2024

@emlegweak which version of SST?

@angelgarrido

For us, sometimes the Apple Silicon version of Pulumi was not being picked up on install (even with sst 3.1.49). However, since the last OS upgrade to macOS Sequoia, the devs who were experiencing the problem are no longer seeing it; it seems there have been some fixes to Rosetta.

@thdxr thdxr transferred this issue from sst/ion Oct 21, 2024