
Large file sizes #136

Open
liamcurry opened this issue Dec 14, 2014 · 67 comments

Comments

@liamcurry

Here are a few simple examples and their generated file sizes:

| Filename | Source | Generated | Minified | Minified + Gzipped (Level 9) |
| --- | --- | --- | --- | --- |
| simple.go | 30 B | 61 KB | 46 KB | 12 KB |
| console.go | 86 B | 71 KB | 53 KB | 15 KB |
| websocket.go | 296 B | 1.4 MB | 1.4 MB | 69 KB |
| websocket_fork.go | 296 B | 78 KB | 59 KB | 13 KB |
| websocket_2015-02-03.go | 399 B | 542 KB | 390 KB | 96 KB |

(The huge jump in filesize in websocket.go is because of github.com/dominikh/go-js-dom)

There are a lot of implementation details in the generated files that shouldn't be there. For example, I would assume that:

console.Log("hello")

would compile to:

console.log("hello")

but instead it compiles to:

console.Log(new ($sliceType($emptyInterface))([new $String("hello")]));
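The wrapping comes from the variadic signature of the binding, console.Log(objs ...interface{}): every argument must be boxed into an interface value and collected into a slice, which is the $sliceType($emptyInterface) seen in the output. A plain-Go sketch of what the call site has to produce (box is a hypothetical stand-in for the binding):

```go
package main

import "fmt"

// box mirrors a signature like console.Log(objs ...interface{}).
// The caller cannot pass "hello" through directly; it must wrap each
// argument in an interface value and build a slice of them, which is
// what the generated JavaScript does explicitly.
func box(objs ...interface{}) []interface{} {
	return objs
}

func main() {
	args := box("hello")
	fmt.Printf("%d boxed value(s): %v\n", len(args), args)
}
```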

I'd love to start using gopherjs in production, but this is a show stopper. Any ideas on how we can fix this?

edit: added gzipped sizes
edit: added websocket_fork.go to results table
edit: added websocket_2015-02-03.go to results table

@dmitshur
Member

Use gzip compression. I'm routinely making 1.5-2 MB GopherJS output .js files as little as 200-250 KB by using it. It helps a lot to make the generated output more manageable.

P.S. On an unrelated topic, what's the reason you're using a fork of the websocket bindings instead of the original? Just curious.

@liamcurry
Author

I added the gzipped results to the table above. You're right, that helped considerably. 1.4mb to 69kb! I didn't realize gzip was so effective.

IMO 69kb is still kind of a lot considering the source is 296 bytes, but that's much better than it was.

@dominikh
Member

For what it is worth, while gzip will help with the cost of downloading the file, the browser will still have to parse and process 1.4 MB of JavaScript.

@dmitshur
Member

IMO 69kb is still kind of a lot considering the source is 296 bytes, but that's much better than it was.

The source was less than 300 bytes, but it indirectly imported the dom package, which is quite heavy. That adds a fixed cost: if the source were 600 bytes instead of 300 but with no other heavy dependencies, the output would be ~300 bytes larger, not ~1.4 MB larger.

Maybe we should not import the entire dom package in websocket bindings, I think it's only used for EventTarget and some event types, hmm. /cc @nightexcessive

For what it is worth, while gzip will help with the cost of downloading the file, the browser will still have to parse and process 1.4 MB of JavaScript.

That is true.

@liamcurry
Author

@shurcooL I was using a fork of websocket because I made some updates. Here's the pull request. There's a new file in the table above called websocket_fork.go that uses this fork.

@dominikh Do you think we should split up github.com/dominikh/go-js-dom into sub-packages? It'd be nice to be able to import only the event-specific code.

@dominikh
Member

@liamcurry I have not looked into splitting up the packages yet, primarily because I am expecting circular dependencies. I'll see what can be done.

@dominikh
Member

Events have various methods that return dom.Element values, so there's a direct dependency between the DOM part and the event part. I don't think anything can be done about that, without making the API of the event-specific code worse.

@neelance
Member

GopherJS has some constant overhead, because it includes a chunk of code in the output that is required by most programs. The same applies to the normal Go compiler; e.g. your simple.go example compiles to a 617 KB binary on my machine. But as I said, this overhead is constant: doubling the source size will not double the output size. And yes, please use gzip compression, because it is very effective on generated code. ;-) Is this still preventing you from using GopherJS in production? What would be your requirement for using it?

@rusco
Member

rusco commented Dec 20, 2014

Question: Does DCE also get applied to the standard Go packages?
For example, the "time" package is a real heavyweight in terms of JS size, even when only a tiny function is imported. There might be intra-package dependencies, though.

@neelance
Member

Yes, DCE is applied to the standard packages, else the output would be much bigger. The time package has a large output because when you use time.Time, all of its exported methods and their dependencies need to be included, too. That's because exported methods are reachable via (reflect.Value).Method(), so you can't know at compile time which of them will be used.

I could add a flag that would potentially break (reflect.Value).Method() for reduced output size. Would this be desired?

@dominikh
Member

I could add a flag that would potentially break (reflect.Value).Method() for reduced output size. Would this be desired?

I generally advise against switches that create a two-class society of code; code that works when the switch is turned on, and code that doesn't. You'd have to be aware of the implementation of your dependencies and their dependencies to know if the switch could be safely used, and you'd have to check again after every update to one of these dependencies.

Ultimately, it'd make for a GopherJS-centric ecosystem, which is similar to what happened when people tried writing event-loop-based code in Ruby: libraries that used blocking calls and weren't suitable, and libraries designed specifically for event loops. I'd really hate to see a similar divide for GopherJS.

As a counter proposal, would it be possible to detect if (reflect.Value).Method() was used at all, and make the decision based on that? That would avoid the aforementioned issue, but have the negative side-effect that a tiny change in code (making use of (reflect.Value).Method()) could have a huge effect on the file size, but at least code wouldn't break.

@liamcurry
Author

Forgive my ignorance, but could you give an example of a use of (reflect.Value).Method() that couldn't be detected by the compiler? Maybe we can figure out a way around that. I have to agree with @dominikh against adding special flags; we wouldn't want to fragment the Go ecosystem.

The thing holding me back at the moment is the overhead required to interact with native Javascript APIs. @neelance you're certainly right about Go binaries having some overhead, so I'd expect the Javascript equivalent to have overhead too. However, I think the overhead for interacting with native APIs (like the DOM) should be minimal.

I'm not totally aware of GopherJS's innards, so this could be a silly idea. In a perfect world, native Javascript libraries could be implemented as just interfaces. There would be a way to register libraries as "native" (perhaps as a build tag?), which would tell GopherJS to treat these libraries differently. The ASTs for these libraries would be rewritten (lowercased) at compile time instead of embedded. This would significantly reduce the amount of overhead.

Here's a partial example for implementing the DOM:

//gopherjs:native
package dom

// http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-1950641247
type Node interface {}

// http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-745549614
type Element interface {
    Node
}

// http://www.w3.org/TR/DOM-Level-3-Core/core.html#i-Document
type Document interface {
    GetElementById(elementId string) Element
}

So then any calls to GetElementById would be rewritten at compile time to getElementById.
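The proposed rewrite is essentially lowercasing the first letter of each exported method name at compile time. A minimal sketch of that mapping (jsName is a hypothetical helper, not part of GopherJS):

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// jsName converts an exported Go method name to the JavaScript-style
// name the proposal would emit, e.g. GetElementById -> getElementById.
func jsName(goName string) string {
	r, size := utf8.DecodeRuneInString(goName)
	return string(unicode.ToLower(r)) + goName[size:]
}

func main() {
	fmt.Println(jsName("GetElementById")) // getElementById
}
```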

@dominikh
Member

Forgive my ignorance, but could you give an example of (reflect.Value).Method() that couldn't be detected by the compiler

The issue that Richard is referring to is the mere use of it. You can't determine at compile time what argument will be passed to Method(), for example because it could be user input. That's why you cannot eliminate any public methods of a type that is used, as all public methods are accessible at runtime via that mechanism.

@dominikh
Member

http://play.golang.org/p/JabJabl5Ly would be an example. Run it with GopherJS and enter either Foo or Bar in the prompt.
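The playground content isn't reproduced here, but the example is presumably along these lines: MethodByName receives a name known only at run time, so neither method can be eliminated (the name is hard-coded below where the GopherJS version would use a prompt):

```go
package main

import (
	"fmt"
	"reflect"
)

type T struct{}

func (T) Foo() string { return "Foo called" }
func (T) Bar() string { return "Bar called" }

// callByName looks up a method by a string that, in general, is only
// known at run time, so the compiler cannot prove either method unused.
func callByName(v interface{}, name string) string {
	m := reflect.ValueOf(v).MethodByName(name)
	return m.Call(nil)[0].String()
}

func main() {
	name := "Bar" // in the playground example this would come from a prompt
	fmt.Println(callByName(T{}, name))
}
```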

@dominikh
Member

@liamcurry Also, if you looked at the implementation of my js/dom package, as well as the generated output, you'd see that it's not as trivial as just calling JavaScript functions. You need to do certain wrapping and conversions and, more importantly, all of the type information needs to be available at runtime, too, which is where most of the file size actually comes from.

@neelance
Member

@liamcurry You probably know that you can do

js.Global.Get("document").Call("getElementById")

which will translate to simply

$global.document.getElementById()

But in that case you will stay in the JavaScript world, e.g. the returned object is a js.Object. Wrappers are meant for adding static typing, etc., which will have the overhead of bridging the JavaScript and the Go world. This applies especially if Go features like a variable number of arguments are used (console.Log(objs ...interface{})). Your suggestion with the interfaces is a good one, but it is hard to implement that without breaking some assumptions that you have with Go. For example what happens if someone defines

type ElementProvider interface {
    GetElementById(elementId string) dom.Element
}

and then assigns the Document to it? The compiler would not be able to special case the calls to ElementProvider.GetElementById, because it's not flagged as native. This problem already exists today with js.Object, but at least with js.Object it is quite obvious that it is special (at least that's what I hope).

@neelance
Member

@dominikh I just checked and it seems like only net/rpc and text/template are using Method() or MethodByName(), so checking for that could work. NumMethod() is used more often, but this could of course still work with DCE. Am I missing something else?
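A check like this could run over each package's syntax tree. A rough sketch (an approximation only: it flags any selector named Method or MethodByName without verifying the receiver is a reflect.Value):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// usesDynamicMethods reports whether the source contains a selector
// named Method or MethodByName, the calls that defeat method-level DCE.
func usesDynamicMethods(src string) bool {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		return false
	}
	found := false
	ast.Inspect(f, func(n ast.Node) bool {
		if sel, ok := n.(*ast.SelectorExpr); ok {
			if sel.Sel.Name == "Method" || sel.Sel.Name == "MethodByName" {
				found = true
			}
		}
		return true
	})
	return found
}

func main() {
	src := `package p
import "reflect"
func f(v reflect.Value) { v.MethodByName("Foo").Call(nil) }`
	fmt.Println(usesDynamicMethods(src))
}
```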

@dominikh
Member

SGTM. I can't think of any other way to dynamically get to methods right now, and luckily there's no reflect.MakeInterface.

@liamcurry
Author

@dominikh thanks for the example, that makes sense. @neelance just to echo @dominikh's question: would it be possible to detect use of Method()/MethodByName() and only import the full packages in those cases?

Your suggestion with the interfaces is a good one, but it is hard to implement that without breaking some assumptions that you have with Go.

Good point. What if we had an interface in github.com/gopherjs/gopherjs/js to mark interfaces as native? Something like this:

type NativeObject interface {
    IsNative()
}

So your previous example would be:

type ElementProvider interface {
    js.NativeObject
    GetElementById(elementId string) dom.Element
}

Would that work?

@neelance
Member

My problem is not with marking the interfaces, but with code breaking quite unexpectedly if you don't mark it, e.g. because you don't know that you have to. It simply breaks the Go spec. The current solution confines the problem to js.Object only.

@dmitshur
Member

js.NativeObject

In my opinion, I would prefer to have less magic, rather than more. There's already quite a bit to keep in mind, see https://github.com/gopherjs/gopherjs/wiki/JavaScript-Gotchas.

The benefit should be very strong for it to outweigh the extra mental overhead.

@neelance
Member

neelance commented Feb 2, 2015

@liamcurry Output size has gotten smaller, especially for honnef.co/go/js/dom. Is that better?

@liamcurry
Author

@neelance I added websocket_2015-02-03.go compiled file sizes to the table above. Definitely an improvement! The generated output went from 1.4 MB to 542 KB, roughly a 61% reduction (about 2.6x smaller), to be precise.

It's interesting that the gzipped file size got slightly larger even though the uncompressed file is so much smaller. I wonder why that is?

@liamcurry
Author

FYI, using Google's Closure Compiler with advanced optimizations I was able to get the minified file size down to 218 KB (another ~44% reduction), but the code no longer works.

@neelance
Member

neelance commented Feb 4, 2015

There are also other changes that might have increased the gzipped output size a bit. In my testing of the versions before and after be37568, the gzipped size went down.

If working code is not a requirement, then I can reduce the size to zero. ;-)

dmitshur added a commit to shurcooL/play that referenced this issue Feb 16, 2015
@dmitshur
Member

Here are additional samples I've gathered for my upcoming talk. I'm also including the file size of Go compiled binary output for reference.

| Filename | Source | Go | GopherJS | Minified | Min+Gzip |
| --- | --- | --- | --- | --- | --- |
| simple.go | 67 B | 624 KB | 67 KB | 50 KB | 12 KB |
| fmt_simple.go | "fmt" + 85 B | 1920 KB | 567 KB | 392 KB | 89 KB |
| peg_solitaire_solver.go | "fmt" + 2696 B | 1929 KB | 570 KB | 395 KB | 89 KB |
| markdownfmt.go | ~11,000 LoC | 3701 KB | 1681 KB | 1135 KB | 238 KB |

@neelance
Member

@shurcooL Thanks for collecting those stats.

I'm quite happy with the output size right now. Of course there are still some optimizations possible, but there are more pressing issues.

@theclapp

Re: my previous comment: Those were (obviously) my own download times. I have pretty fast Internet here. Others are welcome to post their own times.

Also: In earlier comments, some folks posted their results using the Google Closure Compiler, which does do DCE, as I understand it. Their results were, in my opinion, fairly modest: 5.7 mb -> 4.7 mb, about a 17% reduction. Possibly the actual GopherJS compiler, with a little more knowledge on its side, could do better, but it's something to keep in mind. If @shurcooL (et al) wants to implement it, I'm not gonna tell 'em not to, but I don't know if it's worth it in the long run.

Something no one has mentioned is that all of this might well be moot, once the Wasm branch is released. Again, just something to keep in mind, when thinking about priorities.

@shelby3
Copy link

shelby3 commented May 23, 2018

@theclapp thank you for your level-headed comments.

My veering off-topic thought is if we can drive popularity (and whether GopherJS is popular is both related but also orthogonal to whether Go is popular), then the resources for everything we need to do will follow. I realize again that is more of a vaporware/speculative thought, but that is where my current focus is at the moment. I realize many/most of you are coming from the focus of deploying what is available now and prioritizing resources that are available now and that is of course rational.

My investigation is from the perspective of whether to (at least as a first shortcut) target Go as the output of a transpiler for typeclasses and other improvements, or whether we need to bypass it and target perhaps LLVM, or roll our own. I'm reasonably sure that we don't want to limit our design efforts (the discussions with @keean, @sighoya et al) by trying to qualify as Go 2 or Go 3, because there's too much inertia already in design decisions that would hamper what we could design. Yet it is enticing to leverage all the work on goroutines and GC by Go, so I hope we can target it with a transpiler. The extant ecosystem libraries are also enticing. Hence my interest in this thread.

Again to reiterate my concurrence, I also don’t know what the prioritization of #186 should be.

I agree it is more compelling when someone who is using GopherJS reports an example issue that hinges on DCE. Were the reported 5-8 second delays in the iterative developer workflow when combined with Webpack not impacted by DCE?

@theclapp

Were the reported 5-8 second delays in the iterative developer workflow when combined with Webpack not impacted by DCE?

I see DCE discussed, but not implemented or tested; perhaps I missed it. If DCE were implemented, I imagine it'd have some impact, though I'm not sure how much. The Google Closure Compiler discussed earlier in this issue removed about 17% of the JS in their test. @neelance mentioned in the issue you linked that GopherJS itself could do a better job than any external post-processor. So let's posit a 2x improvement, or 34%. That takes us to a 3-5 second delay (assuming said delay scales linearly with code size). Which, admittedly, is not nothing, and every little bit helps. So it's food for thought.

As I understand it, GopherJS already does what DCE it can, given that reflection exists. Issue #186 posits a package-main-level flag that would make reflection moot vis-a-vis DCE (sort of; see the issue for more).


Let's review: This issue is about large file sizes. We already have some DCE, and #186 presents a way to make it even more effective. jsgo.io gives us a way to separate the compiler output into package-level files, but that would make DCE problematic — or it would mean that only a given project could use those particular compiled package files, which defeats at least one of the primary goals of jsgo.io in the first place, to wit, reuse of compiled files between unrelated projects.

As a more-or-less direct fallout of large file sizes, it can take Webpack a while to process and reload GopherJS code. It seems possible that jsgo.io's output could be leveraged so that webpack only reloads the package files that've changed. From my point of view, both hot-reloading and jsgo.io are PFM, so possibly someone could add yet more magic to get that to work. (Or, you know, possibly not.) If someone wants to try their hand at that, that should be a separate issue.

Do we have any other ideas about reducing the file size, with the goals of 1) reducing network transit time, 2) reducing total load on the browser (compiling uncompressed code, etc), and 3) making it easier for packaging tools (e.g. webpack) to process compiled output more granularly?

Here is one totally wild harebrained thought which is quite possibly terrible if not literally insane, and which I've considered for exactly as long as it took me to write about it: Go has the idea of init functions. Could GopherJS either add to that, or enhance it, in such a way that reloading jsgo.io's per-package compiled files is possible? Perhaps a special function (gopherjs_init and/or gopherjs_reload) that's run on reload? This might have to work hand-in-glove with a GopherJS-aware webpack module that knows to run said function(s).

Another thought is to enhance the GopherJS compiled output so it's just, you know, smaller. Reading it, it has a lot of repetition. Possibly the yield keyword (discussed in another issue, I forget where) could make that easier. On the other hand, speaking for myself, compile-time minification removes a lot of what I'd want to see removed, so this idea (aside from yield) is probably a non-starter. Actually, I found the issues (#15 & #320), and @neelance takes a stance similar to mine, "If you find some major performance gain, then we can talk about it" (aka "show me the benchmarks"), so again somebody'd have to do the work and prove it was useful.

(And again, I'm confident that GopherJS-wasm will come out eventually and make at least some of this discussion moot.)

But anyway ... any other ideas?

@theclapp

theclapp commented May 23, 2018

[Deleted as irrelevant to both the issue and GopherJS at large. ~~ LC]

@shelby3

shelby3 commented May 23, 2018

Something no one has mentioned is that all of this might well be moot, once the Wasm branch is released.

How so? We’re still sending executable code over the wire.

Also JavaScript has proper yield, generators, and GC which AFAIK Wasm doesn’t have yet. Yet GopherJS doesn’t employ yield as I think it should.

Also I had suggested that AFAIK in theory, DCE could even be beneficial for runtime memory paging, so Go should have it also. Although I’ve never confirmed this. That was just a thought off the top of my head.

How are we going to debug the Wasm? What’s the interoperability FFI between language ecosystems?

I was aware of @neelance’s initiative for compiling to WASM.

Seems to me that Wasm is still a few years away from being practical. And it’s an experimental design which ultimately may be discovered to be the incorrect design and could potentially die on the vine or require significant breaking changes. So I would be cautious about throwing all your eggs into that basket too fast. My current belief is that design-by-public-committee is a flawed methodology analogous to the flaws in democracy, so I am always skeptical of it.

If the WASM compiler is going to implement the entire runtime (e.g. GC, exceptions, generators) then it may also be possible to achieve multithreading with shared memory on JavaScript engines.

Also, I still have some doubts about Wasm, because sacrifices in performance and features were made in order to make it secure. I wonder whether security wouldn't be better handled at the language layer. The advantage of Wasm, in theory, is that it allows experimentation with many different programming languages on the client, which seems to make it a winner. But if it turns out that one language dominates the rest, then the security model will need to adapt so as to provide the best performance to that language, without duplicating the cost of security at both the low level and the high level (e.g. if the high-level language already protects against buffer overruns, the low level doesn't also need to check for out-of-bounds memory accesses). I think there's a reasonable chance that the latter outcome ends up being the reality. From my 3 years of discussions on programming language design, there really aren't that many choices for doing a general-purpose programming language correctly w.r.t. certain features such as checking for overruns of arrays. But to be honest, I haven't dug in deep enough yet on WebAssembly, and I may very well be incorrect on some of the details.

If DCE were implemented, I imagine it'd have some impact, though I'm not sure how much.

My thought w.r.t. "impact" is whether it was determined that the file sizes (e.g. due to parsing slowness?) were the likely cause of the delays.

jsgo.io gives us a way to separate the compiler output into package-level files, but that would make DCE problematic — or it would mean that only a given project could use those particular compiled package files, which defeats at least one of the primary goals of jsgo.io in the first place, to wit, reuse of compiled files between unrelated projects.

I wouldn’t characterize it as problematic; rather, DCE can be added to jsgo.io, but at the cost of what you recapitulated. I had already suggested that the goal of having multiple applications reuse the same library files on each user’s client is not very realistic (at this time, in most scenarios) because GopherJS is not popular enough to make that happen. The user will likely have only one or zero applications that were compiled with GopherJS. And DCE would presumably speed up some developer workflows. If GopherJS ever becomes ubiquitous, DCE could be an optional compiler setting, so it can be turned off as the market changes, and always turned on during development; so the benefit of DCE will persist. The benefit of reusing libraries can be offset even if GopherJS is popular: three copies of a file at 1/3 of its non-DCE size merely break even, and that doesn’t even factor in all the files which do not get duplicated between applications. You would basically need all applications to use the same library files and DCE to be ineffective, which isn’t likely. IOW, it is dubious that the stated goal of jsgo.io will ever be advantageous if aggressive DCE is viable.

It seems possible that jsgo.io's output could be leveraged so that webpack only reloads the package files that've changed.

Yeah, I would like to know whether that would solve the developer workflow delays without needing to resort to aggressive DCE based on a heuristic. The splitting into files seems to be a significant benefit of jsgo.io even with DCE added; both combined would in theory reduce the delays even further. But reading thread #524, there seems to be an issue with initialization and separately loaded modules. I don’t know what will be required to overcome that.

Could GopherJS either add to that, or enhance it, in such a way that reloading jsgo.io's per-package compiled files is possible?

I’m not understanding what you are proposing? Do you mean hot reloading while the program is running? IMO, I think this is far outside the realm of what GopherJS should do. IMO, Go would need to have that feature.

Another thought is to enhance the GopherJS compiled output so it's just, you know, smaller. Reading it, it has a lot of repetition. Possibly the yield keyword (discussed in another issue, I forget where) could make that easier.

Yeah see I had recently commented on that in a very old thread which I linked above. And there’s other benefits (and tradeoffs) to switching to yield (e.g. debugging being more comprehensible and JavaScript engines may be able to better optimize). Where was this already discussed? I didn’t see any mention of yield in the thread I linked above.

@theclapp

Something no one has mentioned is that all of this might well be moot, once the Wasm branch is released.

How so? We’re still sending executable code over the wire.

Yeah, fair enough. I guess I was just thinking that this particular issue with GopherJS would be moot. In that it'd then be an issue with Go-wasm.

[... yield ...]

@neelance has stated he'd like to see proof that it'd help, which I think is fair. Time is scarce. We're all free to try it out and submit a PR.

DCE could even be beneficial for runtime memory paging

Could be.

How are we going to debug the Wasm? What’s the interoperability FFI between language ecosystems?

Dunno. We'll see when it comes out, I think. :)

Could GopherJS either add to that, or enhance it, in such a way that reloading jsgo.io's per-package compiled files is possible?

I’m not understanding what you are proposing? Do you mean hot reloading while the program is running?

Yes, sort of. In the context of a web application. I don't know exactly how web applications achieve hot reloading, so I can't go into detail on what it would look like, exactly, in a Go application. I suspect that the web app would have to be GopherJS-aware, and the Go code would probably need helper code to facilitate the reloading. I suspect that figuring out how to do that would be really hard, and quite possibly beyond the scope of GopherJS.

IMO, I think this is far outside the realm of what GopherJS should do.

Yeah.

But then I thought per-package GopherJS files would never work, and then @dave went and implemented it. So I dunno. I just know, it won't be me. (Alas; I bet that'd be cool to know how to do. :)

@dave
Contributor

dave commented May 24, 2018

BTW (off topic I know) I’m currently working on a project that will load new GopherJS packages dynamically at run-time... not sure if it would work in the general case but it seems to work fine for my specific use... this is only new packages mind - reloading a changed version of an existing package isn’t what I’m trying to do.

@shelby3

shelby3 commented May 24, 2018

Just a reminder to consider @neelance’s comments in #524 about dynamically loaded modules:

But reading that thread #524, there seems to be an issue with initialization and separately loaded modules. I don’t know what will be required to overcome that.

Maybe that topic needs to be discussed in #524?

@theclapp

I wonder if the "shared library" mode (-buildmode=shared ) would help out with partial reloading? E.g. manually designate modules, and when they change, recompile them, discard the old ones, and load the new ones. I assume regular Go can load shared libraries; what I don't know is if it can unload such a shared library. If not, maybe the programmer could manually arrange for all goroutines in the old version to stop (so it wouldn't be completely automatic, but it wouldn't be impossible, either), and reload the new one under a different name, so during development, at least, you could simulate reloading "the same" module, at the expense of some (duhn duhn duhn) dead code.

Thinking about that leads me to a different, but related, idea. GopherJS is just JavaScript (duh). I bet that right now you could safely load multiple GopherJS programs on the same web page, via multiple <script> tags. We don't do this because the overhead is pretty high. But now with jsgo.io, perhaps that overhead could be shared. E.g. you could run two different main functions and they'd share the code for fmt. Probably it'd cost some extra memory, but maybe not a prohibitive amount, and presumably less than loading the same code twice using the current monolithic model (but "show me the benchmarks!").

Presumably you'd only do this during development, and then combine everything for deployment. But maybe that's a way to get "modules": don't use modules per se, use whole programs that work together. I imagine they could even pass Go objects back and forth with little overhead (they're just JS objects under the hood, after all), via the Cloak and Uncloak functions outlined in #704. Cloak it, .Call the other "program's" public JS API, and Uncloak it there.

Cloak and Uncloak are very simple and look like this:

// Cloak encapsulates a Go value within a JavaScript object. None of the fields
// or methods of the value will be exposed; it is therefore not intended that this
// *Object be used by Javascript code. Instead this function exists as a convenience
// mechanism for carrying state from Go to JavaScript and back again.
func Cloak(i interface{}) *js.Object {
   return js.InternalObject(i)
}

// Uncloak is the inverse of Cloak.
func Uncloak(o *js.Object) interface{} {
   return interface{}(unsafe.Pointer(o.Unsafe()))
}

That's kind of an exciting idea. I wish I had time to experiment with it right now. :)

Hey @dave, do you think that'd work?

@dave
Contributor

dave commented May 25, 2018

So when I was developing the jsgo playground, I played around with a higher performance "run" button. Instead of completely destroying the iframe with the code running, it just replaced the script tag containing the edited source code. It worked faster, but it broke most projects... They'd re-initialise and duplicate UI elements etc. I guess they also would have hanging goroutines and all sorts of bad stuff.

@dave
Contributor

dave commented May 25, 2018

I bet that right now you could safely load multiple GopherJS programs on the same web page, via multiple <script> tags.

Yeah I don't see any reason you couldn't have multiple main packages in the same page sharing common dependencies... Not sure if that would solve any pressing problem though...

@theclapp

Not sure if that would solve any pressing problem though

Well, you know, pressing ... Maybe nothing. My thought was just that it might be a way to have GopherJS "modules" without having to change the language or even the compiler at all.

And also hoping that maybe with jsgo.io splitting dependencies up, tools like webpack could run a bit faster, e.g. maybe they could mostly ignore packages from the Go standard library. (Which is a different issue than the one I @'ed you about, just to be clear.)

[... problems with this idea in the jsgo playground ...]

Yeah, the user Go code would probably have to be written with an eye towards re-initialization. I wouldn't assume you could just do it for everybody automatically. I assume webpack users have startup/shutdown/restart code in their regular Javascript, too, but I haven't checked. Full page reloads haven't annoyed me enough yet to try any of the more sophisticated solutions.

@theclapp

On Aug/18/2016, @shurcooL said:

I wanted to share https://blog.golang.org/go1.7-binary-size here.

Surprisingly many of the principles that Go can use/uses to improve normal binary size can apply to GopherJS (or any other Go compiler) too. I've talked with David Crawshaw (@crawshaw), who worked on the Go binary size improvements for 1.7, at GopherCon 2016 and he gave me some good ideas that can apply to GopherJS (many starting points are mentioned in that blog post).

Here are some of those starting points:

- Use the SSA backend (if we're not already).
  - This seems kind of iffy for GopherJS. Go did it for performance reasons, and the smaller generated code was a side benefit. I don't know if SSA would help GopherJS.
- Method pruning (basically more DCE):
  - Discard any unexported methods that do not match an interface. This could work in jsgo.io.
  - Similarly, the linker can discard exported methods that are only accessible through reflection, if the corresponding reflection features ((reflect.Value).Call()) are not used anywhere in the program. This is similar to our optional aggressive dead code elimination (#186) but automatic and safer. As others have mentioned, that means changing your code to use (reflect.Value).Call() where it didn't before will result in a (probably unexpected and unwelcome) increase in code size, disproportionate to the change. jsgo.io could not use this method.
- A more compact format for run-time type information used by the reflect package:
  - I haven't looked into how GopherJS does run-time type information; compacting it might be possible and helpful, or it might be neither.

@benma
Contributor

benma commented Feb 18, 2020

If I could use a flag to make any use of Method() and MethodByName() fail compilation in return for significantly smaller binary sizes, I would enable the flag.

My current cross compilation results in a JS file of over 4MB, which is supposed to be used by lots of downstream users as a dependency. This is a bit too much.

@nevkontakte
Member

For anyone still interested in this issue, I think #1183 is a promising solution. I don't think it's too difficult to implement either and I tried to spell out the key points. I (unfortunately) won't get to it at least until #1013 is done and we are caught up with the upstream Go versions. So I invite the community to give it a try.
