doc: kernel: fix improper Sphinx C domain usage
Fixed usage of wrong C domain roles (e.g. `:c:struct:` used instead of
`:c:type:`), which Breathe tolerates but which can cause trouble when using
other documentation systems.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
kartben authored and jhedberg committed Jun 7, 2024
1 parent 45085ba commit 593dfe1
Showing 8 changed files with 33 additions and 32 deletions.
8 changes: 4 additions & 4 deletions doc/kernel/data_structures/dlist.rst
@@ -11,9 +11,9 @@ the head, tail or any internal node). To do this, the list stores two
pointers per node, and thus has somewhat higher runtime code and
memory space needs.

-A :c:struct:`sys_dlist_t` struct may be instantiated by the user in any
+A :c:type:`sys_dlist_t` struct may be instantiated by the user in any
accessible memory. It must be initialized with :c:func:`sys_dlist_init`
-or :c:macro:`SYS_DLIST_STATIC_INIT` before use. The :c:struct:`sys_dnode_t` struct
+or :c:macro:`SYS_DLIST_STATIC_INIT` before use. The :c:type:`sys_dnode_t` struct
is expected to be provided by the user for any nodes added to the
list (typically embedded within the struct to be tracked, as described
above). It must be initialized in zeroed/bss memory or with
@@ -50,8 +50,8 @@ implementation that has zero overhead vs. the normal list processing).
Double-linked List Internals
----------------------------

-Internally, the dlist implementation is minimal: the :c:struct:`sys_dlist_t`
-struct contains "head" and "tail" pointer fields, the :c:struct:`sys_dnode_t`
+Internally, the dlist implementation is minimal: the :c:type:`sys_dlist_t`
+struct contains "head" and "tail" pointer fields, the :c:type:`sys_dnode_t`
contains "prev" and "next" pointers, and no other data is stored. But
in practice the two structs are internally identical, and the list
struct is inserted as a node into the list itself. This allows for a
2 changes: 1 addition & 1 deletion doc/kernel/data_structures/ring_buffers.rst
@@ -198,7 +198,7 @@ Implementation
Defining a Ring Buffer
======================

-A ring buffer is defined using a variable of type :c:type:`ring_buf`.
+A ring buffer is defined using a variable of type :c:struct:`ring_buf`.
It must then be initialized by calling :c:func:`ring_buf_init` or
:c:func:`ring_buf_item_init`.
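
As an illustrative sketch (the buffer name and size below are arbitrary, and
the header path assumes a recent Zephyr tree), a byte-mode ring buffer can be
defined and initialized like this:

.. code-block:: c

   #include <zephyr/sys/ring_buffer.h>

   /* Backing storage plus the ring buffer bookkeeping structure. */
   static uint8_t my_rb_storage[64];
   static struct ring_buf my_rb;

   void my_rb_setup(void)
   {
       /* Attach the storage to the ring buffer before first use. */
       ring_buf_init(&my_rb, sizeof(my_rb_storage), my_rb_storage);
   }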

16 changes: 8 additions & 8 deletions doc/kernel/data_structures/slist.rst
@@ -3,7 +3,7 @@
Single-linked List
==================

-Zephyr provides a :c:struct:`sys_slist_t` type for storing simple
+Zephyr provides a :c:type:`sys_slist_t` type for storing simple
singly-linked list data (i.e. data where each list element stores a
pointer to the next element, but not the previous one). This supports
constant-time access to the first (head) and last (tail) elements of
@@ -12,7 +12,7 @@ constant time removal of the head. Removal of subsequent nodes
requires access to the "previous" pointer and thus can only be
performed in linear time by searching the list.

-The :c:struct:`sys_slist_t` struct may be instantiated by the user in any
+The :c:type:`sys_slist_t` struct may be instantiated by the user in any
accessible memory. It should be initialized with either
:c:func:`sys_slist_init` or by static assignment from SYS_SLIST_STATIC_INIT
before use. Its interior fields are opaque and should not be accessed
@@ -21,15 +21,15 @@ by user code.
The end nodes of a list may be retrieved with
:c:func:`sys_slist_peek_head` and :c:func:`sys_slist_peek_tail`, which will
return NULL if the list is empty, otherwise a pointer to a
-:c:struct:`sys_snode_t` struct.
+:c:type:`sys_snode_t` struct.

-The :c:struct:`sys_snode_t` struct represents the data to be inserted. In
+The :c:type:`sys_snode_t` struct represents the data to be inserted. In
general, it is expected to be allocated/controlled by the user,
usually embedded within a struct which is to be added to the list.
The container struct pointer may be retrieved from a list node using
:c:macro:`SYS_SLIST_CONTAINER`, passing it the struct name of the
containing struct and the field name of the node. Internally, the
-:c:struct:`sys_snode_t` struct contains only a next pointer, which may be
+:c:type:`sys_snode_t` struct contains only a next pointer, which may be
accessed with :c:func:`sys_slist_peek_next`.
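
As a hedged sketch (the container struct, field names, and output below are
invented for illustration), a typical pattern is to embed the node in a user
structure, append it, and iterate over the containers:

.. code-block:: c

   #include <zephyr/sys/slist.h>
   #include <zephyr/sys/printk.h>

   struct my_item {
       sys_snode_t node;   /* list linkage embedded in the user struct */
       int value;
   };

   static sys_slist_t my_list = SYS_SLIST_STATIC_INIT(&my_list);

   void add_item(struct my_item *item)
   {
       sys_slist_append(&my_list, &item->node);
   }

   void print_items(void)
   {
       struct my_item *it;

       SYS_SLIST_FOR_EACH_CONTAINER(&my_list, it, node) {
           printk("value: %d\n", it->value);
       }
   }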

Lists may be modified by adding a single node at the head or tail with
@@ -66,8 +66,8 @@ Single-linked List Internals
----------------------------

The slist code is designed to be minimal and conventional.
-Internally, a :c:struct:`sys_slist_t` struct is nothing more than a pair of
-"head" and "tail" pointer fields. And a :c:struct:`sys_snode_t` stores only a
+Internally, a :c:type:`sys_slist_t` struct is nothing more than a pair of
+"head" and "tail" pointer fields. And a :c:type:`sys_snode_t` stores only a
single "next" pointer.
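
Conceptually (a simplified illustration, not the verbatim kernel definitions),
that boils down to:

.. code-block:: c

   /* Simplified view of the internals described above. */
   struct example_snode {
       struct example_snode *next;   /* single "next" pointer */
   };

   struct example_slist {
       struct example_snode *head;   /* first node, or NULL when empty */
       struct example_snode *tail;   /* last node, or NULL when empty */
   };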

.. figure:: slist.png
@@ -101,7 +101,7 @@ Only one such variant, sflist, exists in Zephyr at the moment.
Flagged List
------------

-The :c:struct:`sys_sflist_t` is implemented using the described genlist
+The :c:type:`sys_sflist_t` is implemented using the described genlist
template API. With the exception of symbol naming ("sflist" instead
of "slist") and the additional API described next, it operates in all
ways identically to the slist API.
4 changes: 2 additions & 2 deletions doc/kernel/memory_management/shared_multi_heap.rst
@@ -18,7 +18,7 @@ This framework is commonly used as follow:
the pool with :c:func:`shared_multi_heap_add()`, possibly gathering the
needed information for the regions from the DT.

-2. Each memory region encoded in a :c:type:`shared_multi_heap_region`
+2. Each memory region encoded in a :c:struct:`shared_multi_heap_region`
structure. This structure is also carrying an opaque and user-defined
integer value that is used to define the region capabilities (for example:
cacheability, cpu affinity, etc...)
@@ -76,7 +76,7 @@ Adding new attributes
*********************

The API does not enforce any attributes, but at least it defines the two most
-common ones: :c:enum:`SMH_REG_ATTR_CACHEABLE` and :c:enum:`SMH_REG_ATTR_NON_CACHEABLE`
+common ones: :c:enumerator:`SMH_REG_ATTR_CACHEABLE` and :c:enumerator:`SMH_REG_ATTR_NON_CACHEABLE`.
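
As a rough sketch only (the header path and exact signatures are assumptions
here; the generated API reference below is authoritative), allocating from a
cacheable region could look like:

.. code-block:: c

   #include <zephyr/multi_heap/shared_multi_heap.h>

   void use_cacheable_region(void)
   {
       /* Ask for 256 bytes from any region registered as cacheable. */
       void *buf = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 256);

       if (buf != NULL) {
           /* ... use the buffer ... */
           shared_multi_heap_free(buf);
       }
   }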

.. doxygengroup:: shared_multi_heap
:project: Zephyr
9 changes: 5 additions & 4 deletions doc/kernel/services/polling.rst
@@ -78,14 +78,15 @@ Poll events can be initialized using either the runtime initializers
:c:macro:`K_POLL_EVENT_INITIALIZER()` or :c:func:`k_poll_event_init`, or
the static initializer :c:macro:`K_POLL_EVENT_STATIC_INITIALIZER()`. An object
that matches the **type** specified must be passed to the initializers. The
-**mode** *must* be set to :c:macro:`K_POLL_MODE_NOTIFY_ONLY`. The state *must*
-be set to :c:macro:`K_POLL_STATE_NOT_READY` (the initializers take care of
-this). The user **tag** is optional and completely opaque to the API: it is
+**mode** *must* be set to :c:enumerator:`K_POLL_MODE_NOTIFY_ONLY`. The state
+*must* be set to :c:macro:`K_POLL_STATE_NOT_READY` (the initializers take care
+of this). The user **tag** is optional and completely opaque to the API: it is
there to help a user to group similar events together. Being optional, it is
passed to the static initializer, but not the runtime ones for performance
reasons. If using runtime initializers, the user must set it separately in the
:c:struct:`k_poll_event` data structure. If an event in the array is to be
-ignored, most likely temporarily, its type can be set to K_POLL_TYPE_IGNORE.
+ignored, most likely temporarily, its type can be set to
+:c:macro:`K_POLL_TYPE_IGNORE`.
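
For instance (a hedged sketch; the semaphore and array names are invented), a
runtime-initialized poll event tracking semaphore availability could be set up
as follows:

.. code-block:: c

   #include <zephyr/kernel.h>

   K_SEM_DEFINE(my_sem, 0, 1);

   static struct k_poll_event my_events[1];

   void setup_poll_events(void)
   {
       /* The type must match the polled object; the mode must be NOTIFY_ONLY. */
       k_poll_event_init(&my_events[0],
                         K_POLL_TYPE_SEM_AVAILABLE,
                         K_POLL_MODE_NOTIFY_ONLY,
                         &my_sem);

       /* The runtime initializer does not set the optional user tag. */
       my_events[0].tag = 0;
   }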

.. code-block:: c
12 changes: 6 additions & 6 deletions doc/kernel/services/threads/workqueue.rst
@@ -71,7 +71,7 @@ itself. The work item also maintains information about its status.
A work item must be initialized before it can be used. This records the work
item's handler function and marks it as not pending.

-A work item may be **queued** (:c:macro:`K_WORK_QUEUED`) by submitting it to a
+A work item may be **queued** (:c:enumerator:`K_WORK_QUEUED`) by submitting it to a
workqueue by an ISR or a thread. Submitting a work item appends the work item
to the workqueue's queue. Once the workqueue's thread has processed all of
the preceding work items in its queue the thread will remove the next work
@@ -80,11 +80,11 @@ the scheduling priority of the workqueue's thread, and the work required by
other items in the queue, a queued work item may be processed quickly or it
may remain in the queue for an extended period of time.

-A delayable work item may be **scheduled** (:c:macro:`K_WORK_DELAYED`) to a
+A delayable work item may be **scheduled** (:c:enumerator:`K_WORK_DELAYED`) to a
workqueue; see `Delayable Work`_.

-A work item will be **running** (:c:macro:`K_WORK_RUNNING`) when it is running
-on a work queue, and may also be **canceling** (:c:macro:`K_WORK_CANCELING`)
+A work item will be **running** (:c:enumerator:`K_WORK_RUNNING`) when it is running
+on a work queue, and may also be **canceling** (:c:enumerator:`K_WORK_CANCELING`)
if it started running before a thread has requested that it be cancelled.
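
To make these states concrete, here is a minimal sketch (handler and item
names invented) of defining a work item and submitting it to the system
workqueue:

.. code-block:: c

   #include <zephyr/kernel.h>

   static void my_work_handler(struct k_work *work)
   {
       /* Runs later, in the context of the workqueue thread. */
   }

   K_WORK_DEFINE(my_work, my_work_handler);

   void request_work(void)
   {
       /* The item becomes queued (K_WORK_QUEUED) until the thread processes it. */
       k_work_submit(&my_work);
   }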

A work item can be in multiple states; for example it can be:
@@ -248,7 +248,7 @@ The following code defines and initializes a workqueue:
In addition the queue identity and certain behavior related to thread
rescheduling can be controlled by the optional final parameter; see
-:c:struct:`k_work_queue_start()` for details.
+:c:func:`k_work_queue_start()` for details.
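
For example, a dedicated workqueue might be defined and started as sketched
below (stack size and priority are placeholder values; passing NULL as the
final argument keeps the default configuration):

.. code-block:: c

   #include <zephyr/kernel.h>

   #define MY_STACK_SIZE 1024
   #define MY_PRIORITY   5

   K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);

   static struct k_work_q my_work_q;

   void start_my_work_q(void)
   {
       k_work_queue_init(&my_work_q);

       k_work_queue_start(&my_work_q, my_stack_area,
                          K_THREAD_STACK_SIZEOF(my_stack_area),
                          MY_PRIORITY, NULL);
   }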

The following API can be used to interact with a workqueue:

@@ -416,7 +416,7 @@ be a flag indicating that work needs to be done, or a shared object that is
filled by an ISR or thread and read by the work handler.

For simple flags :ref:`atomic_v2` may be sufficient. In other cases spin
-locks (:c:struct:`k_spinlock_t`) or thread-aware locks (:c:struct:`k_sem`,
+locks (:c:struct:`k_spinlock`) or thread-aware locks (:c:struct:`k_sem`,
:c:struct:`k_mutex` , ...) may be used to ensure data races don't occur.
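
A minimal sketch (names invented) of guarding a shared counter between the
submitter and the work handler with a spin lock:

.. code-block:: c

   #include <zephyr/kernel.h>

   static struct k_spinlock my_lock;
   static uint32_t shared_count;

   void submitter_side(void)
   {
       k_spinlock_key_t key = k_spin_lock(&my_lock);

       shared_count++;                 /* touch shared data only under the lock */
       k_spin_unlock(&my_lock, key);
   }

The work handler would take the same lock around its own accesses to
``shared_count``.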

If the selected lock mechanism can :ref:`api_term_sleep` then allowing the
10 changes: 5 additions & 5 deletions doc/kernel/services/timing/clocks.rst
@@ -98,7 +98,7 @@ For example:
* The kernel :c:struct:`k_work_delayable` API provides a timeout parameter
indicating when a work queue item will be added to the system queue.

-All these values are specified using a :c:struct:`k_timeout_t` value. This is
+All these values are specified using a :c:type:`k_timeout_t` value. This is
an opaque struct type that must be initialized using one of a family
of kernel timeout macros. The most common, :c:macro:`K_MSEC`, defines
a time in milliseconds after the current time.
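
For example, a :c:macro:`K_MSEC` timeout can be passed straight to a blocking
API such as :c:func:`k_sem_take` (the semaphore here is just for
illustration):

.. code-block:: c

   #include <zephyr/kernel.h>

   K_SEM_DEFINE(ready_sem, 0, 1);

   void wait_briefly(void)
   {
       /* Block for at most 100 ms waiting for the semaphore. */
       if (k_sem_take(&ready_sem, K_MSEC(100)) != 0) {
           /* Timed out without the semaphore becoming available. */
       }
   }
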
@@ -123,7 +123,7 @@ described above: :c:macro:`K_NSEC()`, :c:macro:`K_USEC`, :c:macro:`K_TICKS` and
:c:macro:`K_CYC()` specify timeout values that will expire after specified
numbers of nanoseconds, microseconds, ticks and cycles, respectively.

-Precision of :c:struct:`k_timeout_t` values is configurable, with the default
+Precision of :c:type:`k_timeout_t` values is configurable, with the default
being 32 bits. Large uptime counts in non-tick units will experience
complicated rollover semantics, so it is expected that
timing-sensitive applications with long uptimes will be configured to
@@ -141,16 +141,16 @@ Timing Internals
Timeout Queue
-------------

-All Zephyr :c:struct:`k_timeout_t` events specified using the API above are
+All Zephyr :c:type:`k_timeout_t` events specified using the API above are
managed in a single, global queue of events. Each event is stored in
a double-linked list, with an attendant delta count in ticks from the
previous event. The action to take on an event is specified as a
callback function pointer provided by the subsystem requesting the
event, along with a :c:struct:`_timeout` tracking struct that is
expected to be embedded within subsystem-defined data structures (for
-example: a :c:struct:`wait_q` struct, or a :c:struct:`k_tid_t` thread struct).
+example: a :c:struct:`wait_q` struct, or a :c:type:`k_tid_t` thread struct).

-Note that all variant units passed via a :c:struct:`k_timeout_t` are converted
+Note that all variant units passed via a :c:type:`k_timeout_t` are converted
to ticks once on insertion into the list. There no
multiple-conversion steps internal to the kernel, so precision is
guaranteed at the tick level no matter how many events exist or how
4 changes: 2 additions & 2 deletions doc/kernel/services/timing/timers.rst
@@ -22,11 +22,11 @@ is referenced by its memory address.
A timer has the following key properties:

* A **duration** specifying the time interval before the timer
-expires for the first time. This is a ``k_timeout_t`` value that
+expires for the first time. This is a :c:type:`k_timeout_t` value that
may be initialized via different units.

* A **period** specifying the time interval between all timer
-expirations after the first one, also a ``k_timeout_t``. It must be
+expirations after the first one, also a :c:type:`k_timeout_t`. It must be
non-negative. A period of ``K_NO_WAIT`` (i.e. zero) or
``K_FOREVER`` means that the timer is a one-shot timer that stops
after a single expiration. (For example then, if a timer is started
