-
This issue seems to be related: #280.
-
We have something similar. We created our CMake build system before the NASA CMake build system was made public, and we still use ours. We support many different projects, using different cFS versions and components, so we need many separate project repositories while still being able to push and pull updates from the common repositories. We also need to accommodate mixing public and proprietary code (apps, OSALs, PSPs, configurations, even forked cFE repos). Our build system puts no constraints on where OSALs, PSPs, application code, tables, platform header files, or mission header files reside.

We have a repository at https://github.com/windhoverlabs/baseliner that creates our initial cFS project repository. There is an input file where you specify the repository URLs and the commit/branch/tag you want to pull in for cFE, OSALs, PSPs, applications, and tools, and whether you want them as git submodules or subtrees. After you run the script, you push the newly created project to its new CM project. We used that to create our public https://github.com/windhoverlabs/airliner repository, as well as repositories for other projects. Some of the builds in that public repository require proprietary OSALs, PSPs, and applications. Those are in private repositories, so those builds fail if one does not have access to the private repos; the public builds still build, though.

All components are relocatable, but our typical project structure is:

- <main_private_repo>/public/ <- This is the Airliner repo at https://github.com/windhoverlabs/airliner
- <main_private_repo>/private/apps <- Proprietary or non-public applications
- <main_private_repo>/... <- Non-cFS items such as drivers, OS, IP, etc.

My recommendation is to not be afraid to change the build system, and to be creative with where you put files so the layout fits your CM policies and workflow. Most of the code is independent of the build system. Version 6.7 did add some auto-generated build-system code, but those files are relatively easy to recreate.

Lastly, I have also occasionally created totally different build structures to accommodate vendor-specific build systems, i.e. Wind River Workbench, Green Hills MULTI, Xilinx SDK, and Xilinx Vitis. Green Hills works out well because build configuration is inheritable, which makes it easier to maintain, particularly for ARINC 653 builds. Workbench allows nested projects, but configuration doesn't inherit, so it's difficult to maintain. Xilinx SDK requires a lot of work. Vitis was a total showstopper.
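As a rough illustration of the mixed public/private idea, a mission-level CMake fragment can add the private pieces only when they have actually been checked out, so public-only clones still configure and build. This is a generic sketch, not our actual build system; the paths, the app name, and the MISSION_APP_SEARCH_PATH / TGT1_APPLIST variable names are assumptions that would need to be checked against whichever cFE/cFS CMake version you use.

```cmake
# Hypothetical fragment of a mission-level targets.cmake.
# Paths, the app name, and the MISSION_APP_SEARCH_PATH / TGT1_APPLIST
# variables are assumptions; substitute whatever your cFE/cFS version uses.
# CMAKE_SOURCE_DIR is assumed to be <main_private_repo>.

# Public components live in the vendored public tree and are always present.
list(APPEND MISSION_APP_SEARCH_PATH "${CMAKE_SOURCE_DIR}/public/apps")

# Private components come in as an optional git submodule. Add them only when
# the submodule has actually been checked out, so public-only clones still build.
set(PRIVATE_APPS_DIR "${CMAKE_SOURCE_DIR}/private/apps")
if(EXISTS "${PRIVATE_APPS_DIR}/proprietary_app/CMakeLists.txt")
    list(APPEND MISSION_APP_SEARCH_PATH "${PRIVATE_APPS_DIR}")
    list(APPEND TGT1_APPLIST proprietary_app)
else()
    message(STATUS "Private apps not checked out; building the public configuration only")
endif()
```

With subtrees the private code is committed into the project repository itself, so the existence check mainly matters for the submodule case.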
-
Hello everyone,
We would like to know what a typical cFS-based project structure should look like.
It looks like the cFS directory structure assumes that user apps are created in the CFS_ROOT/apps directory, which means they have to live inside a fork of the main cFS repository.
One alternative for us could be a vendor directory-based approach with the following structure:
Project root:
On a previous project we did achieve the vendor-based structure, but it required serious surgery on the standard cFS CMake files, which is something we would like to avoid in the future.
The reasons why the out-of-source approach seems more attractive to us:
Is there a way we could achieve this out-of-cFS-source-tree configuration with the current cFS CMake structure? Also, does this approach seem reasonable?
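To make it concrete, this is roughly the kind of thin top-level wrapper we would like to be able to write. The vendor/cfs and mission/apps names are just placeholders, MISSION_APP_SEARCH_PATH is our guess at the relevant variable, and we are not sure the stock cfe/cmake scripts honor these overrides without modification.

```cmake
# Hypothetical top-level CMakeLists.txt at the project root, with cFS vendored
# under vendor/cfs and our own apps kept in mission/apps. The directory names
# and the MISSION_APP_SEARCH_PATH / MISSION_SOURCE_DIR overrides are assumptions.
cmake_minimum_required(VERSION 3.5)
project(MY_CFS_MISSION NONE)

# Keep our applications outside the vendored cFS tree.
list(APPEND MISSION_APP_SEARCH_PATH "${CMAKE_CURRENT_LIST_DIR}/mission/apps")

# Point the build at the unmodified cFS bundle and hand over to the stock
# mission build logic.
set(MISSION_SOURCE_DIR "${CMAKE_CURRENT_LIST_DIR}/vendor/cfs")
include("${MISSION_SOURCE_DIR}/cfe/cmake/mission_build.cmake")
```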
Thank you for your attention.