The Beef framework, and the underlying code generation, has been primarily created to support the industrialisation of API development: a means to have software developers focus directly on the accelerated delivery of business value, with consistently higher quality outcomes at an overall lower cost.
The key industrialisation goals are:
- Value – focus on business value, not on boilerplate
- Acceleration – improve velocity; reduce costs and time to market
- Simplicity – increase effective usage and minimise learning
- Standardised – increase knowledgeable resource pool
- Consistency – improve overall quality and maintainability
- Flexibility – enable innovation and evolution easily over time
- Confidence – reduce delivery timeframes and risk
As a result of the Beef Architecture, supporting Framework and included Code Generation capabilities, enterprise-grade APIs can be developed in a matter of hours, not days, in a standardised and consistent manner.
The APIs created will have the following capabilities out-of-the-box with limited developer effort, so the developer can focus on the key business value:
- Rich Entity (DTO) functionality including `INotifyPropertyChanged`, `IEditableObject`, `IEquatable`, `ICloneable`, `ICopyFrom`, `ICleanUp`, `IUniqueKey`, etc. (see the sketch after this list).
- Rich Reference data capabilities, including caching, optimised serialisation, and enriched API endpoints.
- Rich Validation capability to simplify and ensure data integrity and consistency.
- CRUD (Create, Read, Update and Delete) for Database (Stored procedures and Entity Framework), Cosmos DB and OData in a standardised manner.
- An approach and tooling to automate and manage database set up, configuration, and deployment.
- Paging (skip and top) and resulting total count, flowing from the API through to the underlying data source in a consistent and seamless manner.
- ETag (concurrency) and `If-Match`/`If-None-Match` handling.
- JSON response field filtering (include/exclude) to minimise resulting payload size (e.g. `$fields=firstname,lastname`).
- HTTP Patch support, where required, in a simplified and consistent manner.
- An end-to-end intra-domain integration testing approach enables effective tests to be built easily and quickly.
- gRPC server (and client) integration.
- Event publishing and subscribing to enable an event-driven architecture.
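To make the entity bullet above more concrete, the following is a minimal, hand-written sketch of the change-notification behaviour a generated entity provides; the `Person` type and its single property are hypothetical, and the generated classes implement considerably more than is shown here.

```csharp
using System.ComponentModel;

// Hand-written, simplified equivalent of a generated entity's change notification; the
// hypothetical Person type is for illustration only, and generated Beef entities layer
// further interfaces (IEditableObject, ICleanUp, IUniqueKey, etc.) on top of this.
public class Person : INotifyPropertyChanged
{
    private string? _firstName;

    public event PropertyChangedEventHandler? PropertyChanged;

    public string? FirstName
    {
        get => _firstName;
        set
        {
            if (_firstName == value)
                return;

            _firstName = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(FirstName)));
        }
    }
}
```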
To implement these included capabilities from scratch would literally take months, if not years, to build and test; with Beef they are available for developers to use immediately, and to contribute back to if so inclined. The capabilities and implementations have been influenced by Microsoft's best practices for cloud applications.
To get started, a .NET Core template capability is provided to enable you to get a solution up and running in minutes.
Beef has been developed to encourage the standardisation and industrialisation of the tiering and layering within the microservices (APIs) of an Application Architecture.
The conceptual architecture is as follows; with Beef being targeted specifically at implementation of the API tier.
The key concepts are as follows:
- Channel-agnostic - the APIs are based around the key entities and the operations that can be performed on them:
- APIs represent the key trust boundary; as such, they make no assumptions on the consumer. The APIs will always validate the request data, and house the application’s functional business and orchestration rules.
- APIs should not be developed to service a specific user interface interaction; as the APIs are agnostic to the consumer. The consumer has the responsibility of coordinating across API calls.
- Domain-based – the APIs are based around, and encapsulate, the capabilities for a functional domain:
- Outcome of a Domain-Driven Design; divides capabilities into different Bounded Contexts.
- Encourages micro vs monolithic services.
- Microservices – an architectural pattern for creating domain-based APIs:
- Is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs.
- These services are small, highly decoupled and focus on doing a small task, facilitating a modular approach to system-building.
- Implementation independence:
- Loose coupling – should have its own persistence repository; data is duplicated (synchronised), not shared; eventual consistency; no distributed transactions.
- Polyglot persistence / programming – use the best persistence repository to support the storage requirements; use a mix of programming languages (fit-for-purpose). Note: Beef provides a C# / .NET implementation approach as one option.
- Eventual consistency - for the most part, eventual consistency is good enough; real-time distributed transactional integrity is rarely required (although generally desired). An asynchronous messaging system, such as Queues or a Service Bus, can be leveraged to orchestrate cross domain data (eventual) consistency.
“Micro” doesn’t imply number of lines of code; but a bounded concept / business capability within your Domain. - http://herdingcode.com
The architecture supports a domain-based channel-agnostic microservices approach. The API service endpoints represent a light-weight facade for the Business (domain logic) tier, that is ultimately responsible for the fulfillment of the request.
The following represents the prescribed tiering and layering of the architecture:
Given this architecture, the .NET Solution you create using Beef should adhere to the prescribed solution structure.
Each of the key layers / components above is further detailed below (`Xxx` denotes the entity name); a conceptual sketch follows the list:
- Entity (DTO) - `Xxx`
- Service agent - `XxxAgent` and `XxxServiceAgent`
- Service interface - `XxxController`
- Domain logic - `XxxManager`
- Service orchestration - `XxxDataSvc`
- Data access - `XxxData`
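The following purely conceptual sketch shows how a request for a hypothetical `Person` entity (from the earlier sketch) flows across these layers; the interface names mirror the naming convention above, but the signatures are illustrative assumptions rather than the generated Beef contracts.

```csharp
using System;
using System.Threading.Tasks;

// Conceptual flow only; the interfaces and signatures below are illustrative assumptions,
// not the actual generated Beef contracts.
public class PersonController                  // Service interface: thin HTTP facade.
{
    private readonly IPersonManager _manager;

    public PersonController(IPersonManager manager) => _manager = manager;

    public Task<Person?> GetAsync(Guid id) => _manager.GetAsync(id);
}

public interface IPersonManager                // Domain logic: validation and business rules.
{
    Task<Person?> GetAsync(Guid id);
}

public interface IPersonDataSvc                // Service orchestration: caching, eventing, transactions.
{
    Task<Person?> GetAsync(Guid id);
}

public interface IPersonData                   // Data access: database, Cosmos DB, OData, etc.
{
    Task<Person?> GetAsync(Guid id);
}
```

Each layer only depends on the one immediately below it, which is what allows the code generation to produce the repetitive plumbing while the developer focuses on the domain logic.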
To support the goals of an Event-driven Architecture, Beef enables the key capabilities: the publishing (Producer) and subscribing (Consumer) of events (messages) to and from an event-stream (or equivalent).
- Producer / Publisher - the publishing of events is integrated into the API processing pipeline; this is enabled within either the Data (where leveraging the transactional outbox pattern) or Service orchestration layers to ensure consistency of approach. Beef is largely agnostic to the underlying event/messaging infrastructure (event-stream), which must be implemented by the developer (unless already provided; see Azure EventHubs or ServiceBus).
- Consumer / Subscriber - an event subscriber is then implemented to listen to events from the underlying event/messaging infrastructure (event-stream) and perform the related action. The event subscriber is encouraged to re-use the underlying logic by hosting the Beef capabilities; the Domain logic layer can be re-leveraged to perform the underlying business logic on receipt of an event (within the context of the subscribing domain).
The Beef support for an event-driven architecture is enabled by the `Beef.Events`, `Beef.Events.EventHubs` and `Beef.Events.ServiceBus` assemblies.
Additionally, Beef has capabilities to support the Transactional Outbox Pattern where there is a requirement for events to be sent reliably (with no message loss); i.e. to guarantee at-least-once sent semantics within the context of the underlying data update (currently only supported for Database repository).
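As a rough, hypothetical illustration of the producer side and the transactional outbox pattern described above (this is not the Beef.Events API; `IEventOutbox`, `IPersonData` and the other names are assumed purely for the sketch), the essential idea is that the event is persisted within the same transaction as the data change and is then relayed to the event-stream by a separate dispatcher:

```csharp
using System.Threading.Tasks;
using System.Transactions;

// Hypothetical illustration of the transactional outbox idea; this is not the Beef.Events API.
public interface IPersonData                        // Data access (e.g. ADO.NET database).
{
    Task<Person> UpdateAsync(Person person);
}

public interface IEventOutbox                       // Assumed outbox abstraction (e.g. a database table).
{
    Task EnqueueAsync(string subject, object payload);
}

public class PersonDataSvc                          // Service orchestration layer.
{
    private readonly IPersonData _data;
    private readonly IEventOutbox _outbox;

    public PersonDataSvc(IPersonData data, IEventOutbox outbox)
    {
        _data = data;
        _outbox = outbox;
    }

    public async Task<Person> UpdateAsync(Person person)
    {
        // The data update and the event enqueue commit within the same transaction, so the
        // event can be sent at-least-once by a separate relay/dispatcher with no message loss.
        using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

        var updated = await _data.UpdateAsync(person);
        await _outbox.EnqueueAsync("Person.Updated", updated);

        scope.Complete();
        return updated;
    }
}
```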
A comprehensive framework has been created to support the defined architecture, to encapsulate and standardise capabilities, to achieve the desired code-generation outcomes and improve the overall developer experience.
A standardised approach ensures consistency of implementation:
- Reduction in development effort.
- Higher quality of output; reduced defects.
- Greater confidence in adherence to architectural vision; minimised deviation.
- Code generation and the like enable the solution to evolve more quickly and effectively over time.
A key accelerator for Beef is achieved using a flexible code generation approach.
An extensive framework of capabilities has also been developed to support this entity-based development. Specifically around entities and their collections, entity mapping, reference data, validation, standardised exceptions, standardised messaging, basic caching, logging, flat-file reader/writer, RESTful API support, ADO.NET database access, Entity Framework (EF) data access, OData access, Azure Service Bus, long running (execution and triggers) processes, etc.
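For example, the validation capability referenced above is declared fluently per entity. The following is indicative only, reusing the hypothetical `Person` entity from the earlier sketches; the exact namespaces, base types and rule methods in the Beef validation framework may differ from what is sketched here.

```csharp
using Beef.Validation; // Assumed namespace; treat the API shape below as indicative only.

// Declarative, per-property rules keep data-integrity checks out of the service interface layer.
public class PersonValidator : Validator<Person>
{
    public PersonValidator()
    {
        Property(x => x.FirstName).Mandatory().String(100);   // Required, with an assumed max length of 100.
    }
}
```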
The key capabilities for Beef are enabled by the following runtime assemblies:
Assembly | Description | NuGet | Changes |
---|---|---|---|
`Beef.Abstractions` | Core foundational framework. | | Log |
`Beef.Core` | Core foundational framework. | | Log |
`Beef.AspNetCore.WebApi` | ASP.NET Core Web API framework. | | Log |
`Beef.Data.Database` | ADO.NET database framework. | | Log |
`Beef.Data.Database.Cdc` | ADO.NET database Change Data Capture (CDC) framework. | | Log |
`Beef.Data.EntityFrameworkCore` | Entity Framework (EF) Core framework. | | Log |
`Beef.Data.Cosmos` | Cosmos DB execution framework. | | Log |
`Beef.Data.OData` | OData execution framework. | | Log |
`Beef.Events` | Supporting Event-driven framework. | | Log |
`Beef.Events.EventHubs` | Supporting Event-driven framework using Azure Event Hubs. | | Log |
`Beef.Events.ServiceBus` | Supporting Event-driven framework using Azure Service Bus. | | Log |
The tooling / supporting capabilities for Beef are enabled by the following assemblies:
Assembly | Description | NuGet | Changes |
---|---|---|---|
`Beef.CodeGen.Core` | Code generation console tool. | | Log |
`Beef.Database.Core` | Database and data management console tool. | | Log |
`Beef.Test.NUnit` | Unit and intra-domain integration testing framework. | | Log |
`Beef.Template.Solution` | Solution and projects template. | | Log |
The following samples are provided to guide usage:
Sample | Description |
---|---|
`My.Hr` | An end-to-end sample solution walkthrough demonstrating the usage of Beef within the context of a fictitious Human Resources solution. The main intent is to show how Beef can be used against a relational database (SQL Server) leveraging both direct ADO.NET (with stored procedures) and Entity Framework (EF) where applicable. |
`Cdr.Banking` | An end-to-end sample solution demonstrating Beef being used to solve a real-world scenario: an implementation of the CDR Banking APIs leveraging a Cosmos DB data source. |
`Xyz.Legacy` | An end-to-end sample solution demonstrating Beef being used to facilitate the introduction of Change Data Capture (CDC) entity event publishing on a legacy SQL Server database. |
`Demo` | An end-to-end sample solution demonstrating the tiering & layering, code-generation, database management and automated intra-domain integration testing. This is primarily used to further test the key end-to-end capabilities enabled by Beef. |
The following are references to additional documentation (these are all accessible via links within this and other documentation):
- Solution structure
- Entity (DTO)
- Service agent
- Service interface
- Domain logic
- Service orchestration
- Data access
- Code generation
- Entity-driven (.NET C#) - CodeGeneration - YAML/JSON or XML
- Database-driven (database) - CodeGeneration - YAML/JSON or XML
- Query - YAML/JSON or XML
- QueryJoin - YAML/JSON or XML
- QueryJoinOn - YAML/JSON or XML
- QueryWhere - YAML/JSON or XML
- QueryOrder - YAML/JSON or XML
- Table - YAML/JSON or XML
- StoredProcedure - YAML/JSON or XML
- Parameter - YAML/JSON or XML
- Where - YAML/JSON or XML
- OrderBy - YAML/JSON or XML
- Execute - YAML/JSON or XML
- Cdc - YAML/JSON or XML
- CdcJoin - YAML/JSON or XML
- CdcJoinOn - YAML/JSON or XML
- Versioning - article, implementation - Beef has no specific support or opinion with respect to versioning approach and/or implementation.
- Domain-driven design - Wikipedia, Fowler, Microsoft authored articles: article, article, article and article - Beef encourages the DDD approach, which is why Entity naming and convention are foundational within Beef.
Beef is open source under the MIT license and is free for commercial use.
To start using Beef you do not need to clone or fork the repo; you just need to create a solution with the underlying projects using the prescribed solution structure, including referencing the appropriate NuGet packages. To accelerate this a .NET Core template capability is provided to enable you to get up and running in minutes.
See the following for example end-to-end solution/project creation; each demonstrates the same API functionality leveraging a different data source:
Otherwise, follow along with the following sample tutorials that will provide a more in-depth walkthrough solving a defined functional problem:
- `My.Hr` - microservice against a SQL Database using both stored procedures and Entity Framework.
- `Cdr.Banking` - microservice against an Azure Cosmos DB data source.
- `Xyz.Legacy` - CDC implementation against a legacy database publishing messages to Azure Service Bus.
One of the easiest ways to contribute is to participate in discussions on GitHub issues. You can also contribute by submitting pull requests (PR) with code changes.
The most general guideline is that we use all the VS default settings in terms of code formatting; if in doubt, follow the coding convention of the existing code base.
- Use four spaces of indentation (no tabs).
- Use `_camelCase` for private fields.
- Avoid `this.` unless absolutely necessary.
- Always specify member visibility, even if it's the default (i.e. `private string _foo;` not `string _foo;`).
- Open-braces (`{`) go on a new line (an `if` with a single-line statement does not need braces).
- Use any language features available to you (expression-bodied members, throw expressions, tuples, etc.) as long as they make for readable, manageable code.
- All methods and properties must include XML documentation comments. Private methods and properties only need to specify the summary as a minimum.
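For example, a trivial, hypothetical class following the conventions above:

```csharp
using System.Collections.Generic;

/// <summary>
/// A trivial, hypothetical example of the conventions above: four spaces of indentation, explicit
/// member visibility, <c>_camelCase</c> private fields, open-braces on a new line, and XML comments.
/// </summary>
public class WidgetNameCache
{
    private readonly Dictionary<int, string> _names = new();

    /// <summary>
    /// Gets the widget name for the specified <paramref name="id"/>; otherwise, <c>null</c>.
    /// </summary>
    public string? Get(int id) => _names.TryGetValue(id, out var name) ? name : null;

    /// <summary>
    /// Sets the widget <paramref name="name"/> for the specified <paramref name="id"/>.
    /// </summary>
    public void Set(int id, string name)
    {
        _names[id] = name;
    }
}
```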
For further guidance see ASP.NET Core Engineering guidelines.
We use `NUnit` for all unit testing.
- Tests need to be provided for every bug/feature that is completed.
- Tests only need to be present for issues that need to be verified by QA (for example, not tasks).
- If there is a scenario that is far too hard to test there does not need to be a test for it.
- "Too hard" is determined by the team as a whole.
We understand there is more work to be performed in generating a higher level of code coverage; this technical debt is on the backlog.
To help ensure that only the highest quality code makes its way into the project, please submit all your code changes to GitHub as PRs. This includes runtime code changes, unit test updates, and updates to the end-to-end demo.
For example, sending a PR for just an update to a unit test might seem like a waste of time but the unit tests are just as important as the product code and as such, reviewing changes to them is also just as important. This also helps create visibility for your changes so that others can observe what is going on.
The advantages are numerous: improving code quality, more visibility on changes and their potential impact, avoiding duplication of effort, and creating general awareness of progress being made in various areas.