
feat: Builder API for Sections and Entities #253

Merged: zachdaniel merged 26 commits into ash-project:main from leonqadirie:feat-builder-api on Mar 2, 2026.

Conversation

@leonqadirie (Contributor) commented Jan 18, 2026:

Contributor checklist

Leave anything that you believe does not apply unchecked.

  • I accept the AI Policy, or AI was not used in the creation of this PR.
  • Bug fixes include regression tests
  • Chores
  • Documentation changes
  • Features include unit/acceptance tests
  • Refactoring
  • Update dependencies

Partially addresses #51 by providing a builder API for Sections and Entities.
This started as an experiment to understand Spark better; I thought a builder API might be an interesting way to dive into how the DSL works, and things got a bit out of hand.

This is more of a discussion PR. I'd be happy to hear whether it seems valuable and, if so, whether the API matches expectations and what feature set would be required for incorporation. Otherwise, it can also just be closed.

Caution

Disclaimer: I used AI to discuss the codebase, and some parts were contributed by it (vetted by me, but I'm not an Elixir pro and not confident I understood all the intricacies).


Example usage

```elixir
defmodule MyApp.Notifications.Notification do
  defstruct [:name, :type, :target, :metadata, :__identifier__, :__spark_metadata__]
end

defmodule MyApp.Notifications.Dsl.Builder do
  alias Spark.Builder.{Entity, Field, Section}

  def notification_entity do
    Entity.new(:notification, MyApp.Notifications.Notification,
      describe: "Defines a notification delivery",
      args: [:name, :type],
      schema: [
        Field.new(:name, :atom, required: true, doc: "Notification name"),
        Field.new(:type, {:one_of, [:email, :slack]},
          required: true,
          doc: "Notification type"
        ),
        Field.new(:target, :string, doc: "Delivery target"),
        Field.new(:metadata, :keyword_list,
          keys: [
            priority: [type: :integer, default: 0, doc: "Priority level"]
          ],
          doc: "Optional metadata"
        )
      ],
      identifier: :name
    )
    |> Entity.build!()
  end

  def notifications_section do
    Section.new(:notifications,
      describe: "Notification configuration",
      entities: [notification_entity()]
    )
    |> Section.build!()
  end
end

defmodule MyApp.Notifications.Dsl do
  alias MyApp.Notifications.Dsl.Builder

  use Spark.Dsl.Extension, sections: [Builder.notifications_section()]
  use Spark.Dsl, default_extensions: [extensions: __MODULE__]
end
```

Using the DSL

```elixir
defmodule MyApp.Config do
  use MyApp.Notifications.Dsl

  notifications do
    notification :ops, :email do
      target "ops@example.com"
      metadata priority: 1
    end
  end
end
```
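Once a module uses the DSL, the configured structs can be read back with Spark's introspection helpers. A minimal sketch, assuming the `MyApp.Config` module above and Spark's public `Spark.Dsl.Extension.get_entities/2` (the exact output shape depends on the entity definition in this PR):

```elixir
# Hypothetical introspection of the DSL state built above.
entities = Spark.Dsl.Extension.get_entities(MyApp.Config, [:notifications])

# Each element should be a %MyApp.Notifications.Notification{} struct, e.g.
# the :ops notification declared above (type: :email, target: "ops@example.com").
Enum.map(entities, & &1.name)
```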

@zachdaniel (Contributor) commented:
This is pretty cool! One major piece of feedback: IMO for the field/value pairs, those should just be options to a function, i.e.

```elixir
Field.new(:type) |> Field.type(Type.one_of([:string, :integer, :boolean])) |> Field.required(),

# should just be:

Field.new(:type, type: .., required?: ...)
```

I'm also not sure about enumerating all of the types as functions like that, i.e. Type.one_of. It has some benefits for autocomplete, but would also require maintaining corresponding functions for all types.

I don't think I see a test for using this in an actual DSL (the tests test the built data) but it might be good to have those and maybe an example guide on how to use this to define an extension.
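The trade-off being discussed can be seen by putting the two styles side by side. A sketch using the field from the example above (the Type helper module existed in the original PR revision and was later removed):

```elixir
# Helper-function style: good autocomplete, but every Spark.Options type
# needs a matching Type.* function to be written and maintained.
Field.new(:type) |> Field.type(Type.one_of([:email, :slack])) |> Field.required()

# Plain type terms: the raw Spark.Options tuples, nothing extra to maintain.
Field.new(:type, {:one_of, [:email, :slack]}, required: true)
```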

leonqadirie force-pushed the feat-builder-api branch 2 times, most recently from 9cd5d0b to 06798b4 on January 20, 2026, 18:09.
@leonqadirie (Contributor, Author) commented Jan 20, 2026:

Cool!

  • Adopted the more data-focused API style and removed alternatives for Fields. NOT for Sections or Entities, though. Shall these also be refactored accordingly?
  • Removed the Type helper module
  • Added dsl_integration_test.exs
  • Added build-extensions-with-builders.md
  • Removed function acceptance from Entity.nested_entity/2, Section.entity/1, and Section.nested_section/1, as I'm unsure of actual use cases, and adding them later if needed is better than wanting to remove them once released. Lazy evaluation seems to happen via recursive_as, if I understand correctly. Are they important to have?

I also stumbled over the {:rename_to, atom} term at lib/spark/options/options.ex:486, part of the @type option_schema. It currently seems to be dead code; shall it be removed or handled in any way?


Finally, I suspect the failing mix spark.cheat_sheets --check CI job is due to stale cached expectations of which modules should exist, but that's not to be handled in this PR?

leonqadirie marked this pull request as ready for review on January 20, 2026, 18:33.
@zachdaniel (Contributor) commented:
> Adopted the more data-focused API style and removed alternatives for Fields. NOT for Sections or Entities, though. Shall these also be refactored accordingly?

I think that we should consider refactoring sections/entities to be the same, where it's all based on options lists. We can always add more functions, i.e. Entity.schema/2, later. So we can start with fewer functions and then come back and add the builders based on how that feels.

For Field.new(:name, type: <type>), type should be a positional argument IMO, because all fields always have at minimum a name and a type.

@leonqadirie (Contributor, Author) commented:
  • type is now positional
  • turned the rest into an option list
  • retained the build function as a separate step to make it easier to extend later and hook validation into it
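If I follow the final shape correctly, a field definition then reads roughly like this (a sketch pieced together from the discussion; the option names come from the example in this PR, not from released documentation):

```elixir
# name and type are positional; everything else is an option list.
Field.new(:metadata, :keyword_list,
  doc: "Optional metadata",
  keys: [priority: [type: :integer, default: 0]]
)
```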

A review comment on this diff context:

```elixir
  %{field | keys: normalized}
end

defp apply_opt(field, :keys, value) when is_function(value) do
```

Contributor:
I think we should instead have a Spark.Options schema for entities and sections that way we don't have to redefine a bunch of the validation logic.

@leonqadirie (Contributor, Author) replied Mar 1, 2026:
Not sure I fully follow what you have in mind, but here is my attempt to reduce my own validation logic.

@zachdaniel (Contributor) commented:
That looks just right! Let's just get the build passing and get it merged 😎

@leonqadirie (Contributor, Author) commented Mar 2, 2026:

> That looks just right! Let's just get the build passing and get it merged 😎

Locally, mix spark.cheat_sheets --check runs cleanly. I hope to be able to look into it properly soon.

At least I got it reproduced by simulating the CI behaviour. The CI workflow (ash-ci.yml) runs mix local.hex --force and mix local.rebar --force between the two cheat_sheets steps. This invalidates the Elixir compiler's manifest (compile.elixir), triggering a full recompile in the second step. However, stale .beam files from the first step remain in _build/test/lib/spark/ebin/. The BEAM VM auto-loads these stale modules when they're referenced during compilation, and when the compiler then redefines them from source, it emits "redefining module" warnings, which warnings_as_errors: true turns into build failures.
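The failure sequence can be sketched as a local reproduction. This is a hypothetical command order inferred from the workflow description in this thread (a CI-configuration sketch, not a verified script):

```shell
# Step 1: first cheat_sheets run compiles the project, leaving .beam files
# in _build/test/lib/spark/ebin/.
mix spark.cheat_sheets --dry-run --yes

# Between the steps, the workflow reinstalls hex/rebar, which invalidates
# the Elixir compiler manifest (compile.elixir).
mix local.hex --force
mix local.rebar --force

# Step 2: full recompile; the stale .beam files get auto-loaded when
# referenced, then redefined from source, producing "redefining module"
# warnings that warnings_as_errors: true promotes to build failures.
mix spark.cheat_sheets --check
```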
@leonqadirie (Contributor, Author) commented Mar 2, 2026:

While this fixes it, maybe we should instead touch the CI workflow (ash-ci.yml) itself. Do we really need the --dry-run --yes step if --check also runs? Having both steps is what runs mix local.hex --force and mix local.rebar --force between the two cheat_sheets invocations, which invalidates the compiler manifest and triggers the full recompile over stale .beam files described above.

@zachdaniel (Contributor) commented:
Good callout, I've merged that improvement into ash-ci.yml

@zachdaniel (Contributor) commented:
Your change here is also fine to leave in since we define modules in test anyway 😄

zachdaniel merged commit 488ba55 into ash-project:main on Mar 2, 2026.
23 checks passed
@zachdaniel (Contributor) commented:
🚀 Thank you for your contribution! 🚀
