Remove core DLLs from the repository #38
I believe they were put there to make everything work out-of-the-box, and that the fetch script exists just to update them. @meleu can you confirm this?
If that's really important, then please consider bundling … Running …
I'll leave @meleu, @Jamiras and @GameDragon2k to decide what to do about this. My development machine runs Linux. I just don't think removing them from the repo and adding instructions to the README telling developers to run a script before starting to work with it is much different in practice.
The difference is that the fetch script will always download up-to-date versions of the cores, while vendored cores would need to be updated regularly in the repository, which involves maintenance work and pollutes the history, and means that in order to get a full (deep) copy of the repository, one will end up downloading every set of cores in history. Someone who reads the …

Finally, cores are soft dependencies; they are not needed to compile RALibretro or even to launch it, so there is no need to download them as part of automated builds and other situations where they are not needed.
I think their being in the …
I agree that having binaries in the repository is unnecessary bloat, and one of the few things git doesn't do very well. Right now the bin directory is 63MB, fbalpha being more than half of that. Because git is a distributed repository, that 63MB has to be downloaded every time someone clones the repository. If we decide we want a new build of fbalpha, that's an automatic 35MB more. The previous one is still in the history, and therefore has to be available locally in case you want to switch to an older commit.

In order to nuke these binaries from the repository, we have to rewrite the history without them. That's fairly trivial to do, but it will break any local checkouts you have of the affected branch or anything that was branched off the affected branch. As these files are in the …

In my opinion, it would be beneficial to remove these files from the repository. It is my recommendation that anyone working with the repository clone a fresh copy after the purge. As such, if anyone has any non-committed code they want to keep, consider pushing it now.

Let me know if you have any concerns. I won't do anything until at least next weekend.
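For readers unfamiliar with the mechanics, a history rewrite like the one described can be sketched as follows. This is a generic illustration run in a throwaway repository (only the `bin` path matches this repo's layout; everything else is made up), not the exact procedure that will be used here; `git filter-repo` or the BFG are faster alternatives on a large repository.

```shell
#!/bin/sh
# Illustrative only: a throwaway repo standing in for RALibretro.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you

mkdir bin
printf 'fake dll bytes' > bin/core.dll     # stand-in for a vendored core
echo readme > README.md
git add . && git commit -qm 'add core and readme'

# Rewrite every ref, dropping bin/ from each commit's index.
export FILTER_BRANCH_SQUELCH_WARNING=1     # skip the 10s deprecation pause
git filter-branch --force --index-filter \
    'git rm -r --cached --ignore-unmatch bin' \
    --prune-empty --tag-name-filter cat -- --all

# Expire the backup refs/reflogs so the old blobs can really be collected.
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --prune=now --quiet

if git rev-list --all --objects | grep -q 'bin/core.dll'; then
    echo 'purge failed'
else
    echo 'bin/ purged from history'
fi
```

Note that, exactly as said above, the rewritten commits get new hashes, which is why existing checkouts and branches based on the old history break.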
@Jamiras ok, could you announce this in the appropriate forums so everyone has time to comply? I don't think this will affect anyone besides us, but it doesn't hurt to be sure.
People should be able to rebase their change sets either way in order to move them to the "clean" history.
What would the appropriate forum be? It seems like the target audience would get the notification about this issue being updated just from watching this repository. I'm not even sure what the appropriate Discord channel would be. It's not really #rasuite or #raemus. So #devs?
@leiradel, I've found our conversation about this stuff. We decided to stick with a fixed core version and update once in a while due to the "not very controlled" way development happens on those repos. At that time we agreed that storing a specific version here would be good to avoid the introduction of bugs. The guys here have a point, though: the DLLs are bloating the repo. I think we just need to test the current version of each core and ensure they are working fine. If it's all working properly, we can go ahead with the "rewrite the history" approach. 👍
Have you encountered issues due to the use of nightly builds before, or is this hypothetical?
@meleu is right, the libretro cores don't follow a strict release cycle. There were situations where a core update would lead to lost save games, because the code that deals with save states had changed in incompatible ways and was released without warning. Although it's rare to have that kind of issue, I'd rather still have a known "good" set of cores. If the repository is not a good place for them, we should find another place. We can't assume nightly core builds are always OK, so we should stick to the "good" set.
Can you please provide a reference for a case where core updates led to lost save games? Was it resolved afterwards, between stable releases? Did save states get overwritten automatically on read?

I don't consider save states to be a critical system that warrants holding back improvements and fixes to a core, and I don't think that auditing every one of them to guarantee that no breaking change was pushed to a deployment version is reasonable, but I would recommend documenting the deployment process somewhere and packaging stable releases, not nightly builds. The fetch script operates on nightly builds because stable builds do not include individual core packages. If a breaking change is found in a new stable release, then you can consider the core bundled with the previous release to be "good".
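For reference, fetching a single core from the nightly buildbot might look like the sketch below. The URL layout is assumed from the public libretro buildbot site and the core name is only an example; this is not the project's actual fetch script.

```shell
# Build the download URL for one core from the libretro nightly buildbot.
# The path layout (nightly/windows/x86/latest) is an assumption based on
# the public buildbot site; adjust for your platform.
core="fbalpha_libretro.dll"
base="https://buildbot.libretro.com/nightly/windows/x86/latest"
url="$base/$core.zip"
echo "$url"

# Actual download/extract step (commented out to keep the sketch offline):
# curl -fLO "$url" && unzip -o "$core.zip" -d bin/
```

As noted in the thread, the `latest` directory only ever holds the newest nightly, which is exactly why pinning a "good" set requires hosting copies elsewhere.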
I'm afraid the references are lost in IRC/Discord chat histories. The issues weren't fixed because they weren't bugs: the changes were made in the upstream repos and caused compatibility issues with the save states when they were incorporated into the cores. The main problem is that the cores are not kept up-to-date with the upstream repos; updates are made from time to time and incorporate lots of changes in one shot.
There are no stable releases of libretro cores, only nightly builds. Our "good" core set would be what we consider the stable releases for RALibretro.
Save states are a very important feature, and I sure don't want angry users opening issues because their save states don't work anymore.
From what I understand/assume based on the details provided, save states were not "lost", they were simply rendered incompatible with newer versions of a certain core. In that case, rolling back is not a major issue, but users will eventually have to migrate. I don't know that we can take responsibility for core maintainers following best practices, and we can't hold back updates forever. So in practice I don't see much difference between deploying fresh core builds at every release and holding them back for some arbitrary amount of time.

In order for a list of "stable" cores to be useful and reliable, we would need acceptance tests covering every important feature exhaustively. Otherwise, any update will be subject to bias, validation mistakes, omissions, and negligence, which is not much better than trusting the core maintainers. In any case, if you decide to go this route, I would document such a list in the Wiki, along with the documentation for acceptance tests.
End users won't care what the real issue was; they will be unable to load their save games.
I wouldn't count on end-users locating the last known "good" core and updating RALibretro by hand.
And we can't make our end-users' lives worse because they don't follow best practices.
We don't need to go overboard with this. When a cool feature is released or an important bug is fixed, we can just wait a couple of weeks after the core is available and see if there are roadblocks. Of course, if someone has enough time to make such tests, I'm all in for that.
It's not so much a question of trust. I'm positive they do their best to maintain their ports of the upstream emulators, but when you only have time to update the core once a year, the original emulator may have already changed the save state format and dropped the compatibility code that loads the previous format, so the core just can't load older save states after one single update. Again, this has happened before and, in the current state of affairs, can happen again. I don't think exposing RetroAchievements end-users to the holes we have in our release process as a whole is the best approach, hence my stance about having a known "good" set of cores and updating them with care.
It's a huge difference! We can easily roll back to a previous version of the affected core (in the case of a legitimate bug) or direct affected users to do this themselves. This is not the same as losing or corrupting files, which would be a major issue. In any case, the situation you describe seems preferable to sticking with an older version that is potentially buggier in unpredictable ways, as well as less accurate/compatible; not to mention that …
We can provide links to the RetroArch buildbot, which includes several older "stable" versions as backup. In addition, deploying a hotfix version to correct any critical issue is possible. I think we should expect some level of intelligence from end users anyway.
It sounds like the situation you brought up was a one-time issue involving a single core, and as you said, it was never rolled back. So in this situation, we would be making users' lives worse either way. Another important point that should be brought up is that by keeping RALibretro in sync with the libretro repositories, we are ensuring compatibility with RetroArch. Mixed compatibility between supported systems has been an issue for a while, and I was under the impression that solving that problem was a goal of RALibretro.
As a user, I would not trust some arbitrary tests performed at random times by another random user without any standards any more than a core maintainer's own testing. After all, there's no guarantee that a feature I care about would be tested properly. I don't feel that there's any benefit to holding back updates without serious guarantees about stability. In other words, if stability is truly critical, then exhaustive acceptance tests (and/or audits for the source code of bundled cores) are critical to ensuring that stability.
Outside of keeping the old version forever, or forking the repositories in the style of …

If you have any other example of core updates compromising stability, then you might convince me to hold off on updates; I am simply not convinced that the risk of a major issue popping up is worth the trouble. Either way, if we decide to ship validated versions only, I think we can agree that they should be kept outside of the repository. I will let others voice their thoughts on these issues, since our conclusions are opposite.
Did we come to a verdict on this? @ScottFromDerby seems to want to keep them where they are to keep the "barrier to entry" low. @meleu suggested a new git repo for the binaries (not really an improvement). #37 downloads them from the nightly builds.

It seems like the ideal solution would be to put a package of approved cores on a host somewhere. Then the user could download/compile the executable separately from the cores. This is similar to how RAProject64 manages its plugins. I'd even go so far as to just document the behavior rather than have a script do it.
The shell script in master right now downloads nightly builds. #37 is a port. I agree with hosting "stable" cores and other dependencies elsewhere. Another repository is of course still poor practice, but much better than keeping them here.
I want to solve this issue soon (as in removing the core binaries from the commit history). And I would like to know the drawbacks of creating a dedicated repo for us to put the cores in (Windows 32-bit builds only). My plan is to update the cores every time we release a new version of RALibretro. IMO creating another repo is a good approach because we could have a safe way to get older builds if something breaks. The libretro nightly website doesn't keep older builds of the cores for download, and users would need to build them from source to get them. Opinions?
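A lighter-weight alternative to a whole binaries repo would be to pin the "good" set with a checksum manifest and keep only the manifest in git, while the DLLs themselves live on any host. A rough sketch, run against a stand-in file (the file name and directory are illustrative, not this project's actual layout):

```shell
#!/bin/sh
set -e
# Work in a scratch directory with a stand-in core DLL.
tmp=$(mktemp -d) && cd "$tmp"
mkdir bin
printf 'stand-in core bytes' > bin/fbalpha_libretro.dll

# Once per release: record the hashes of the vetted core set.
sha256sum bin/*.dll > cores.sha256

# After every fetch: verify what was downloaded against the pinned set.
sha256sum -c cores.sha256
```

The manifest is a few hundred bytes per release, so committing it does not bloat the history the way the binaries do, and it still gives a verifiable record of which core builds each RALibretro release was tested with.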
I only know git superficially, so pardon my ignorance, but how does that improve what we have now if the two repositories will walk hand in hand? Can't we have a …
The problem this is trying to address is that git is a distributed system and keeps history for everything. So that 40MB FBA core that's in the repository now is downloaded by everyone who clones the repository, even if it's been replaced a dozen times already; which would of course also mean downloading the newest version and the 10 intermediate versions. As I mentioned in my last comment: using a separate git repository does nothing to address the problem.

The cores are in the official release package. But we still need some way to indicate when we move to an updated core so people can use it until the next release. And we might want to know which versions of the cores were supported in each released version, which could be handled just by archiving the release packages.
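For completeness, a shallow clone sidesteps the historical-blob cost for one-off builds, though it doesn't fix the repository itself. A small demonstration against a local throwaway repo (in practice the URL would point at this repository):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway source repo with three commits standing in for a long history.
git init -q src && cd src
git config user.email you@example.com
git config user.name you
for i in 1 2 3; do
    echo "$i" > file.txt
    git add file.txt
    git commit -qm "commit $i"
done
cd ..

# --depth 1 fetches only the newest commit; older blobs never hit the wire.
# (A file:// URL is needed for --depth to apply to a local clone.)
git clone -q --depth 1 "file://$tmp/src" shallow

git -C shallow rev-list --count HEAD    # prints 1
```

This only helps people who never need to check out old commits, which is why it's a workaround rather than an answer to the vendored-binaries question.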
Yeah, we already do and will continue doing this.
I agree that … That said, if all that matters is having that set of cores hosted somewhere that can be updated occasionally, then the RA server seems best.
This is tangentially related, but assuming at least some of the cores are licensed under the GPL, currently we are not following the terms for distribution: …

Other licenses may have similar terms, so please check the terms for each binary being distributed when/if a change is made to hosting (including the non-core DLLs in the …).

libretro also seems to be violating those terms, so maybe someone can convince them to bundle an offer with their automated builds as per b) and we can follow c).
With #103, cores can now be downloaded/updated from the nightly buildbot just like RetroArch. The DLLs have been removed from the current …

I'm undecided whether or not the history should be rewritten to completely eliminate them from the repository, so I'm going to leave this open.
#37 provides a script to fetch cores on Windows, so the core DLLs in the `develop` branch are bloating the size of the repository for no reason. The best would be to provide a better way to fetch dependencies and get rid of all of them, ignoring the entire `bin` directory.
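If the DLLs are removed, ignoring the directory could be as simple as the following `.gitignore` entry (assuming the fetch script keeps writing cores into `bin/`):

```gitignore
# Cores are fetched by script, not tracked in git.
bin/
```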