
feature/tdmamodel #22

Merged
merged 27 commits into from Jul 17, 2015

Conversation

sgalgano
Member

TDMA radio model providing a generic TDMA scheme that supports TDMA schedule distribution and updates in realtime using events.

Initial functionality includes: schedule event processing,
emanesh event components for authoring and publishing schedules, tx
and rx processing, priority and destination queue management, upstream
SINR POR curve logic and slot structure performance tables.
Over-the-air transmissions carry message components instead of the
more traditional emane notion of packets. Depending on whether
aggregation and fragmentation are enabled, the radio model will
transmit one or more message components per transmission. A message
component, depending on slot and message data size, can be one or more
entire downstream packets, a portion (fragment) of one or more
downstream packets, or some combination thereof. A single over-the-air
transmission may contain a mixture of unicast and broadcast message
components, where unicast components can be for different destinations.
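
The packing behavior described above can be sketched in Python (the model itself is C++; all names here are illustrative, not the real API):

```python
def build_slot_payload(queue, slot_bytes, fragment=True, aggregate=True):
    """Fill one TDMA slot with message components drawn from a packet queue.

    `queue` holds (destination, size) pairs, oldest first. Each component
    is either a whole packet or, when fragmentation is enabled, the
    leading fragment of a packet too large for the space remaining.
    """
    components = []
    remaining = slot_bytes
    while queue and remaining > 0:
        dest, size = queue[0]
        if size <= remaining:
            # the whole packet fits in the remaining slot space
            components.append((dest, size, 'whole'))
            remaining -= size
            queue.pop(0)
        elif fragment:
            # send a leading fragment now, keep the tail queued
            components.append((dest, remaining, 'fragment'))
            queue[0] = (dest, size - remaining)
            remaining = 0
        else:
            break
        if not aggregate:
            break  # aggregation disabled: one component per transmission
    return components
```

For example, with a 250-byte slot and queued packets of 100 and 300 bytes, the transmission carries the whole first packet plus a 150-byte fragment of the second, and the 150-byte tail stays queued.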

Added QueueManager support to dequeue packets from other queues
(highest priority first) if there are no message components
available for the specified queue. This behavior can be disabled using
the queue.strictdequeueenable config parameter.
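
A minimal sketch of that fallback, assuming a higher queue id means higher priority (an assumption for illustration only):

```python
def dequeue_with_fallback(queues, qid, strict=False):
    """Dequeue from queue `qid`; when it is empty and strict mode is off,
    fall back to the highest-priority non-empty queue.

    `queues` maps queue id -> list of packets (oldest first); `strict`
    stands in for the queue.strictdequeueenable parameter.
    """
    if queues.get(qid):
        return queues[qid].pop(0)
    if strict:
        return None
    # search the other queues, highest priority first
    for other in sorted(queues, reverse=True):
        if other != qid and queues[other]:
            return queues[other].pop(0)
    return None
```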

Added QueueManager support for aggregation and fragmentation of
outbound transmissions. Support includes the ability to disable
aggregation (queue.aggregationenable), disable fragmentation
(queue.fragmentationenable) and, when enabled, to set an aggregation
message (slot) size threshold (queue.aggregationslotthreshold). Two
config parameters, fragmentcheckthreshold and fragmenttimeoutthreshold,
are used to control abandoning reassembly on lost fragments.

Added ReceiveManager support for aggregated message components and
fragment reassembly. The ReceiveManager handles de-aggregation and
reassembly of inbound messages even when the model is configured not
to aggregate or fragment its own transmissions.

Added QueueStatusTable and QueueFragmentHistogram to monitor performance.

Added one of each per queue: UnicastByteAcceptTable,
BroadcastByteAcceptTable, UnicastByteDropTable and
BroadcastByteDropTable to monitor processed and dropped messages (in
bytes).

Added flow control support for use with a transport capable of
supporting flow control, such as the virtual transport.

Modified the QueueManager API to return the number of packets dropped
(if any) during an enqueue due to overflow. Dropped packet information
is necessary for increasing available tokens.

Added an additional flow control outbound drop code for reporting
bytes dropped due to flow control. Drops due to flow control are an
indication of an issue with token synchronization and should not
occur.
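
The enqueue contract described above can be sketched minimally (illustrative names; the real QueueManager is C++):

```python
class FlowControlQueue:
    """Depth-limited queue whose enqueue() reports overflow drops.

    The return value lets the caller restore that many flow-control
    tokens, keeping the token count synchronized with queue occupancy.
    """
    def __init__(self, depth):
        self.depth = depth
        self.packets = []

    def enqueue(self, pkt):
        self.packets.append(pkt)
        dropped = 0
        while len(self.packets) > self.depth:
            self.packets.pop(0)  # discard oldest on overflow (sketch policy)
            dropped += 1
        return dropped
```

A caller holding tokens would do `tokens += queue.enqueue(pkt)` so each overflow drop frees a token.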

Trailing white space cleanup.
The Scheduler API now provides processSchedulerPacket(),
processSchedulerControl() and getPacketQueueInfo(). The SchedulerUser
processSchedulerPacket() method was modified to remove the control
messages parameter from the call, to address potential ambiguity due
to the aggregation and fragmentation functions. A scheduler module
wishing to send control messages to the physical layer should use the
non-packet control message path, processSchedulerControl().

Queue functionality and an accessor method were added to track the
number of packets and bytes in a queue. Partial packets remaining in
the queue due to fragmentation are counted as packets in the queue, but
only the bytes remaining are counted towards bytes in the queue.
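
That accounting rule can be shown in a few lines (illustrative sketch):

```python
def queue_totals(entries):
    """Count packets and bytes in a queue.

    `entries` holds (original_size, bytes_remaining) pairs. A partially
    transmitted packet still counts as one packet, but only its
    remaining bytes count toward the byte total.
    """
    packets = len(entries)
    byte_total = sum(remaining for _original, remaining in entries)
    return packets, byte_total
```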

Queue initialization was modified to add an indication as to queue
type: data or control. This information is used when creating message
components in order to route control messages to a receive-side
scheduler module.

Trailing white space cleanup.

processTxOpportunity() now returns a bool. A return of true indicates
that a dynamically allocated functor scheduled for execution by a
timer should be deleted following execution.
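
The ownership rule implied by that return value can be sketched like this (Python stands in for the C++ timer machinery; deletion becomes dropping the reference):

```python
def fire_tx_opportunity(functor):
    """Run a scheduled tx-opportunity functor and apply its return contract.

    A True return means the functor is one-shot and should be deleted
    (released) after execution; False means the caller keeps it for
    rescheduling and reuse.
    """
    if functor():
        return None   # one-shot: release the functor
    return functor    # reusable: hand it back for rescheduling
```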

Received over-the-air messages are delivered to the scheduler module
using processPacketMetaInfo() for data messages and
processSchedulerPacket() for control messages. The
processSchedulerPacket() method takes both the UpstreamPacket and
PacketMetaInfo as parameters.

Added datarate (u64DataRatebps_) to PacketMetaInfo struct used to
convey information about a received over-the-air message.

Trailing white space cleanup.

Added statistics to track acceptance and rejection of full and
updated schedules. All statistics are CLEARABLE:

   scheduler.scheduleAcceptFull
   scheduler.scheduleAcceptUpdate
   scheduler.scheduleRejectFrameIndexRange
   scheduler.scheduleRejectSlotIndexRange
   scheduler.scheduleRejectUpdateBeforeFull

Updated the schedule event tool's usage information. The --file option
was removed and replaced with a mandatory schedule xml file argument.

Fixed logic recording the drop of a packet when fragmentation was
disabled. Previous logic only recorded drops when at least one
message component was returned from the designated queue.

Modified dequeue logic to disable the in-queue search for additional
message components when aggregation is disabled.

Modified the queue drop policy used when fragmentation is disabled.
Packets too large for the slot are discarded while searching for one
that fits. Once a packet is found, aggregation is abandoned if there
is more room in the slot but the next packet in the queue is too
large. Previous logic would have continued to drop packets while
searching for one that fit.
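
A sketch of that policy (illustrative; packet sizes only, oldest first):

```python
def select_components_no_frag(queue, slot_bytes):
    """Drop policy with fragmentation disabled.

    Discard packets too large for the slot while searching for one that
    fits. Once one component fits, abandon aggregation as soon as the
    next packet is too large for the remaining space, instead of
    continuing to drop.
    """
    sent, dropped = [], []
    remaining = slot_bytes
    while queue:
        size = queue[0]
        if size <= remaining:
            sent.append(queue.pop(0))
            remaining -= size
        elif not sent:
            dropped.append(queue.pop(0))  # still searching: discard
        else:
            break  # something already fit: keep this packet queued
    return sent, dropped
```
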

Added a NeighborMetricTable and NeighborStatusTable. Two configuration
items were added to support configuring neighbor delete time
(neighbormetricdeletetime) and neighbor table update interval
(neighbormetricupdateinterval).

A naming cleanup pass was applied to configuration, statistic and
statistic table names.

Modified frequency of interest (FOI) reporting logic. A frequency set
is now cached which stores all the frequencies referenced in an
initial full schedule and any following updates. The frequency set
cache is cleared when a full schedule is received. This cache is used
to report FOI to the physical layer.

Previous implementation only reported frequencies contained in the
most recent schedule event, which may have been an update only
containing a subset of the actual frequencies in use.
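
The cache behavior can be sketched as (illustrative names):

```python
class FrequencySetCache:
    """Accumulate frequencies across a full schedule and its updates.

    A full schedule clears the cache before its frequencies are added;
    updates only add. The whole accumulated set is what gets reported
    to the physical layer as FOI.
    """
    def __init__(self):
        self.frequencies = set()

    def process_schedule(self, frequencies, full_schedule):
        if full_schedule:
            self.frequencies.clear()
        self.frequencies |= set(frequencies)
        return set(self.frequencies)  # FOI to report
```
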

Fixed a bug when dequeuing from the destination-based packet queues.
The previous implementation attempted to remove the queue entry, by
sequence number, from the destination queue map, which is a map of
NEMId to PacketQueue entries, instead of from the PacketQueue for the
target NEMId. That remove would fail, leaving an entry referencing a
packet that had since been deallocated.

Fixed a race condition that allowed a tx opportunity timer to be
canceled after it had fired but before its corresponding functor
executed. This situation would result in two rescheduled tx
opportunity timers consuming the opportunity queue, causing receivers
to get messages for future slots.

Each schedule is now given a unique index which is cached by the tx
opportunity timer. A timer will return without processing or
rescheduling if the cached schedule index does not match the current
schedule index.
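
The stale-timer guard can be sketched as a closure capturing the schedule index current at creation time (illustrative stand-in for the C++ functor):

```python
def make_tx_opportunity_timer(state, schedule_index):
    """Build a tx-opportunity timer functor bound to a schedule index.

    When it fires, the functor compares its cached index against the
    model's current one and returns without processing or rescheduling
    if a newer schedule has since arrived.
    """
    def fire():
        if schedule_index != state['schedule_index']:
            return False  # stale timer: do nothing
        state['slots_processed'] += 1  # stand-in for real slot processing
        return True
    return fire
```
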

The packet discarded on queue overflow is now the oldest packet where
no portion of the packet has been transmitted due to fragmentation. If
all packets in the queue have had a portion transmitted, then the
oldest packet is discarded regardless of fragmentation state.
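
That selection rule in sketch form (illustrative; entries are oldest first):

```python
def overflow_discard_index(entries):
    """Pick the index of the entry to discard on queue overflow.

    `entries` holds (packet, partially_sent) pairs, oldest first. Prefer
    the oldest packet with no portion yet transmitted; if every packet
    has been partially transmitted, discard the oldest regardless.
    """
    for index, (_pkt, partially_sent) in enumerate(entries):
        if not partially_sent:
            return index
    return 0  # all partially transmitted: discard the oldest
```
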

Message identification now includes the sequence number: source,
priority and sequence number are required to uniquely identify a
message.

Fixed POR handling: POR is specified as a percentage, but the
calculation was using it as a ratio.
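
The fix amounts to a unit conversion (sketch):

```python
def por_probability(curve_value_percent):
    """Convert a POR curve value given as a percentage into a [0, 1]
    probability. The bug fixed above used the raw percentage as if it
    were already a ratio."""
    return curve_value_percent / 100.0
```
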

Schedule processing now rejects a full or update schedule containing
an error. This resets a model instance to its initial state of no
schedule.
@eschreiber-alink
Member

Three trials of the aggfrag.disabled, aggfrag.enabled and slot.characterization tests were completed this morning with all observed completion results matching expected values. The current feature commit has my approval for merge.

@sgalgano sgalgano merged commit a56c24b into develop Jul 17, 2015
@sgalgano sgalgano deleted the feature/tdmamodel branch July 17, 2015 16:30