Commits on Aug 18, 2021

  1. net: mscc: ocelot: enforce FDB isolation when VLAN-unaware

    Currently ocelot uses a pvid of 0 for standalone ports and ports under a
    VLAN-unaware bridge, and the pvid of the bridge for ports under a
    VLAN-aware bridge. Standalone ports do not perform learning, but packets
    received on them are still subject to FDB lookups. So if the MAC DA of
    a packet received on a standalone port has also been learned on a
    VLAN-unaware bridge port, ocelot will attempt to forward to that port,
    which it cannot do, so it drops the packet.
    
    So there is a desire to avoid that, and isolate the FDBs of different
    bridges from one another, and from standalone ports.
    
    The ocelot switch library has two distinct entry points: the felix DSA
    driver and the ocelot switchdev driver.
    
    We need to code up a minimal bridge_num allocation in the ocelot
    switchdev driver too, this is copied from DSA with the exception that
    ocelot does not care about DSA trees, cross-chip bridging etc. So it
    only looks at its own ports that are already in the same bridge.
    
    The ocelot switchdev driver uses the bridge_num it has allocated itself,
    while the felix driver uses the bridge_num allocated by DSA. They are
    both stored inside ocelot_port->bridge_num by the common function
    ocelot_port_bridge_join() which receives the bridge_num passed by value.
    
    Once we have a bridge_num, we can only use it to enforce isolation
    between VLAN-unaware bridges. As far as I can see, ocelot does not have
    anything like a FID that further makes VLAN 100 from a port be different
    to VLAN 100 from another port with regard to FDB lookup. So we simply
    deny multiple VLAN-aware bridges.
    
    For VLAN-unaware bridges, we crop the 4000-4095 VLAN region and we
    allocate a VLAN for each bridge_num. This will be used as the pvid of
    each port that is under that VLAN-unaware bridge, for as long as that
    bridge is VLAN-unaware.
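    As a sketch of the scheme above (the constant names and the exact base
    VID are illustrative, not the driver's actual values):

```python
# Hypothetical sketch: map a bridge_num to a pvid reserved in the
# cropped 4000-4095 VLAN region; VID 0 stays for standalone ports.

OCELOT_STANDALONE_PVID = 0   # VID 0: standalone ports only
OCELOT_RSV_VLAN_BASE = 4000  # start of the reserved region (assumed)

def ocelot_vlan_unaware_pvid(bridge_num: int) -> int:
    """Return the per-bridge pvid used while the bridge is VLAN-unaware."""
    vid = OCELOT_RSV_VLAN_BASE + bridge_num
    if not OCELOT_RSV_VLAN_BASE <= vid <= 4095:
        raise ValueError(f"bridge_num {bridge_num} outside reserved region")
    return vid
```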
    
    VID 0 remains only for standalone ports. It is okay if all standalone
    ports use the same VID 0, since they perform no address learning, the
    FDB will contain no entry in VLAN 0, so the packets will always be
    flooded to the only possible destination, the CPU port.
    
    The CPU port module doesn't need to be a member of the VLANs to receive
    packets, but if we use the DSA tag_8021q protocol, those packets are
    part of the data plane as far as ocelot is concerned, so there the CPU
    port does need VLAN membership. Just ensure that the DSA tag_8021q CPU
    port is a member of all reserved VLANs when it is created, and is
    removed from them when it is deleted.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  2. net: mscc: ocelot: use helpers for port VLAN membership

    This is a mostly cosmetic patch that creates some helpers for accessing
    the VLAN table. These helpers are also a bit more careful in that they
    do not modify the ocelot->vlan_mask unless the hardware operation
    succeeded.
    
    Not all callers check the return value (the init code doesn't), but anyway.
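    A minimal sketch of that helper pattern, with hypothetical names and a
    stubbed hardware write (the real code programs switch registers):

```python
# Sketch: the cached vlan_mask is only updated when the hardware
# operation reports success, so cache and hardware never diverge.

class Ocelot:
    def __init__(self):
        self.vlan_mask = {}          # vid -> port bitmask (cached copy)

    def hw_write_vlan(self, vid, mask):
        # stand-in for the real register write; returns an errno-style code
        return 0

    def vlan_member_set(self, vid, mask):
        err = self.hw_write_vlan(vid, mask)
        if err:
            return err               # cache left untouched on failure
        self.vlan_mask[vid] = mask
        return 0

    def vlan_member_add(self, port, vid):
        return self.vlan_member_set(vid, self.vlan_mask.get(vid, 0) | (1 << port))

    def vlan_member_del(self, port, vid):
        return self.vlan_member_set(vid, self.vlan_mask.get(vid, 0) & ~(1 << port))
```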
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  3. net: mscc: ocelot: transmit the VLAN filtering restrictions via extack

    We need to transmit more restrictions in future patches, convert this
    one to netlink extack.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  4. net: mscc: ocelot: transmit the "native VLAN" error via extack

    We need to reject some more configurations in future patches, convert
    the existing one to netlink extack.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  5. net: dsa: sja1105: enforce FDB isolation

    For sja1105, enforcing FDB isolation simply means turning on
    Independent VLAN Learning unconditionally, and remapping VLAN-unaware
    FDB and MDB entries towards the private VLAN allocated by tag_8021q for
    each bridge.
    
    Standalone ports each have their own standalone tag_8021q VLAN. No
    learning happens in that VLAN due to:
    - learning being disabled on standalone user ports
    - learning being disabled on the CPU port (we use
      assisted_learning_on_cpu_port which only installs bridge FDBs)
    
    VLAN-aware ports learn FDB entries with the bridge VLANs.
    
    VLAN-unaware bridge ports learn with the tag_8021q VLAN for bridging.
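    The remapping idea can be sketched as follows (hypothetical helper name;
    the actual logic lives in the sja1105 FDB code):

```python
# Sketch: pick the VID an FDB/MDB entry is programmed with. Bridge VLANs
# pass through for VLAN-aware ports; VLAN-unaware entries are remapped to
# the private tag_8021q VLAN of the port's bridge.

def sja1105_commit_vid(vid, port_is_vlan_aware, bridge_vid):
    if port_is_vlan_aware:
        return vid          # bridge VLANs are used as-is
    return bridge_vid       # VLAN-unaware: use the tag_8021q bridge VLAN
```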
    
    Since sja1105 is the first driver to use the dsa_bridge_num_find()
    helper, we need to export it.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  6. net: dsa: request drivers to perform FDB isolation

    For DSA, encouraging drivers to perform FDB isolation simply means
    tracking which bridge each FDB and MDB entry belongs to. It then
    becomes the driver responsibility to use something that makes the FDB
    entry from one bridge not match the FDB lookup of ports from other
    bridges.
    
    The top-level functions where the bridge is determined are:
    - dsa_port_fdb_{add,del}
    - dsa_port_host_fdb_{add,del}
    - dsa_port_mdb_{add,del}
    - dsa_port_host_mdb_{add,del}
    
    aka the pre-crosschip-notifier functions.
    
    One might obviously ask: why do you pass the bridge_dev all the way to
    drivers, can't they just look at dsa_to_port(ds, port)->bridge_dev?!
    
    Well, no.
    
    While that might work for user ports, it does not work for CPU and DSA
    ports. Those service multiple bridges, of course.
    
    When dsa_port_host_fdb_add(dp) is called, the driver is notified on
    dp->cpu_dp, so it loses the information about the original dp and
    cannot access dp->bridge_dev.
    
    But notice that at least we don't explicitly pass the bridge_num to it.
    Drivers can call dsa_bridge_num_find(bridge_dev), sure, but it is
    optional and if they have a better tracking scheme, they should be free
    to use it.
    
    DSA must perform refcounting on the CPU and DSA ports by also taking
    into account the bridge number. So if two bridges request the same local
    address, DSA must notify the driver twice, once for each bridge.
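    The refcounting rule above can be sketched like this (a hypothetical
    structure, not the DSA core code):

```python
# Sketch: host address refcounting on a CPU/DSA port keyed by
# (mac, vid, bridge_num), so the same address requested by two different
# bridges notifies the driver once per bridge.

from collections import defaultdict

class HostAddrDB:
    def __init__(self, notify):
        self.refs = defaultdict(int)
        self.notify = notify          # driver callback, fired on 0 <-> 1

    def add(self, mac, vid, bridge_num):
        key = (mac, vid, bridge_num)
        self.refs[key] += 1
        if self.refs[key] == 1:
            self.notify(('add', mac, vid, bridge_num))

    def delete(self, mac, vid, bridge_num):
        key = (mac, vid, bridge_num)
        self.refs[key] -= 1
        if self.refs[key] == 0:
            del self.refs[key]
            self.notify(('del', mac, vid, bridge_num))
```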
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  7. net: dsa: pass extack to .port_bridge_join driver methods

    As FDB isolation cannot be enforced between VLAN-aware bridges in the
    absence of hardware assistance like extra FID bits, it seems plausible
    that many DSA switches cannot do it. Therefore, they need to reject
    configurations with multiple VLAN-aware bridges from the two code paths
    that can transition towards that state:
    
    - joining a VLAN-aware bridge
    - toggling VLAN awareness on an existing bridge
    
    The .port_vlan_filtering method already propagates the netlink extack to
    the driver, let's propagate it from .port_bridge_join too, to make sure
    that the driver can use the same function for both.
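    A sketch of such a shared restriction check (hypothetical driver state;
    extack modeled here as a plain dict):

```python
# Sketch: allow at most one VLAN-aware bridge, reporting the reason via
# an extack-style message. The same check can be called from both the
# .port_bridge_join and .port_vlan_filtering paths.

EOPNOTSUPP = 95

class Switch:
    def __init__(self):
        self.vlan_aware_bridge = None   # the single VLAN-aware bridge, if any

    def check_vlan_aware_bridge(self, bridge, extack):
        if self.vlan_aware_bridge not in (None, bridge):
            extack['msg'] = 'Only one VLAN-aware bridge is supported'
            return -EOPNOTSUPP
        self.vlan_aware_bridge = bridge
        return 0
```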
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  8. net: dsa: tag_8021q: rename dsa_8021q_bridge_tx_fwd_offload_vid

    The dsa_8021q_bridge_tx_fwd_offload_vid is no longer used just for
    bridge TX forwarding offload, it is the private VLAN reserved for
    VLAN-unaware bridging in a way that is compatible with FDB isolation.
    
    So just rename it to dsa_tag_8021q_bridge_vid.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  9. net: dsa: tag_8021q: merge RX and TX VLANs

    In the old Shared VLAN Learning mode of operation that tag_8021q
    previously used for forwarding, we needed to have distinct concepts for
    an RX and a TX VLAN.
    
    An RX VLAN could be installed on all ports that were members of a given
    bridge, so that autonomous forwarding could still work, while a TX VLAN
    was dedicated for precise packet steering, so it just contained the CPU
    port and one egress port.
    
    Now that tag_8021q uses Independent VLAN Learning and imprecise RX/TX
    all over, those lines have been blurred and we no longer have the need
    to do precise TX towards a port that is in a bridge. As for standalone
    ports, it is fine to use the same VLAN ID for both RX and TX.
    
    This patch changes the tag_8021q format by shifting the VLAN range it
    reserves, and halving it. Previously, our DIR bits were encoding the
    VLAN direction (RX/TX) and were set to either 1 or 2. This meant that
    tag_8021q reserved 2K VLANs, or 50% of the available range.
    
    Change the DIR bits to a hardcoded value of 3 now, which makes tag_8021q
    reserve only 1K VLANs, and a different range now (the last 1K). This is
    done so that we leave the old format in place in case we need to return
    to it.
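    The arithmetic can be checked with a small sketch (the DIR field
    occupies bits 11:10 of the VID in tag_8021q; the other fields are
    omitted here):

```python
# DIR in bits 11:10 of the 12-bit VID. Old format: DIR in {1, 2}
# (RX/TX), reserving 2K VIDs. New format: DIR hardcoded to 3,
# reserving only the last 1K VIDs (3072-4095).

DIR_SHIFT = 10
DIR_MASK = 0x3 << DIR_SHIFT

def vid_is_dsa_8021q(vid: int) -> bool:
    return (vid & DIR_MASK) == (3 << DIR_SHIFT)

old = [v for v in range(4096) if (v & DIR_MASK) >> DIR_SHIFT in (1, 2)]
new = [v for v in range(4096) if vid_is_dsa_8021q(v)]
```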
    
    In terms of code, the vid_is_dsa_8021q_rxvlan and vid_is_dsa_8021q_txvlan
    functions go away. Any vid_is_dsa_8021q VLAN is both a TX and an RX VLAN;
    they are no longer distinct. For example, felix, which did different
    things for different VLAN types, now needs to handle both the RX and the
    TX logic for the same VLAN.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  10. net: dsa: felix: delete workarounds present due to SVL tag_8021q bridging

    The felix driver, which also has a tagging protocol implementation based
    on tag_8021q, does not care about adding the RX VLAN that is pvid on one
    port on the other ports that are in the same bridge with it. It simply
    doesn't need that, because in its implementation, the RX VLAN that is
    pvid of a port is only used to install a TCAM rule that pushes that VLAN
    ID towards the CPU port.
    
    Now that tag_8021q no longer performs Shared VLAN Learning based
    forwarding, the RX VLANs are actually segregated into two types:
    standalone VLANs and VLAN-unaware bridging VLANs. Since you actually
    have to call dsa_tag_8021q_bridge_join() to get a bridging VLAN from
    tag_8021q, and felix does not do that because it doesn't need it, it
    means that it only gets standalone port VLANs from tag_8021q. Which is
    perfect because this means it can drop its workarounds that avoid the
    VLANs it does not need.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  11. net: dsa: tag_8021q: add support for imprecise RX based on the VBID

    Similar to dsa_find_designated_bridge_port_by_vid() which performs
    imprecise RX for VLAN-aware bridges, let's introduce a helper in
    tag_8021q for performing imprecise RX based on the VLAN that it has
    allocated for a VLAN-unaware bridge. Make the sja1105 driver use this.
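    A sketch of the lookup (hypothetical data model where each port carries
    the bridge_num of the bridge it belongs to):

```python
# Sketch: imprecise RX by VBID. Pick any user port whose bridge_num
# matches the VBID decoded from the tag_8021q VLAN; the bridge will
# re-learn/forward from there, analogous to what
# dsa_find_designated_bridge_port_by_vid() does for VLAN-aware bridges.

def find_port_by_vbid(ports, vbid):
    """ports: iterable of (name, bridge_num); returns first match or None."""
    for name, bridge_num in ports:
        if bridge_num == vbid:
            return name
    return None
```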
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  12. net: dsa: tag_8021q: replace the SVL bridging with VLAN-unaware IVL bridging

    For VLAN-unaware bridging, tag_8021q uses something perhaps a bit too
    tied to the sja1105 switch: each port uses the same pvid which is also
    used for standalone operation (a unique one from which the source port
    and device ID can be retrieved when packets from that port are forwarded
    to the CPU). Since each port has a unique pvid when performing
    autonomous forwarding, the switch must be configured for Shared VLAN
    Learning (SVL) such that the VLAN ID itself is ignored when performing
    FDB lookups. Without SVL, packets would always be flooded.
    
    First of all, to make tag_8021q more palatable to switches which might
    not support Shared VLAN Learning, let's just use a common VLAN for all
    ports that are under a bridge.
    
    Secondly, using Shared VLAN Learning means that FDB isolation can never
    be enforced. But now, when all ports under the same VLAN-unaware bridge
    share the same VLAN ID, it can.
    
    The disadvantage is that the CPU port can no longer perform precise
    source port identification for these packets. But at least we have a
    mechanism which has proven to be adequate for that situation: imprecise
    RX, which is what we use for VLAN-aware bridging.
    
    The VLAN ID that VLAN-unaware bridges will use with tag_8021q is the
    same one as we were previously using for imprecise TX (bridge TX
    forwarding offload). It is already allocated, it is just a matter of
    using it.
    
    Note that because now all ports under the same bridge share the same
    VLAN, the complexity of performing a tag_8021q bridge join decreases
    dramatically. We no longer have to install the RX VLAN of a newly
    joining port into the port membership of the existing bridge ports.
    The newly joining port just becomes a member of the VLAN corresponding
    to that bridge, and the other ports are already members of it. So
    forwarding works properly.
    
    This means that we can unhook dsa_tag_8021q_bridge_{join,leave} from the
    cross-chip notifier level dsa_switch_bridge_{join,leave}. We can put
    these calls directly into the sja1105 driver.
    
    With this new mode of operation, a port controlled by tag_8021q can have
    two pvids whereas before it could only have one. The pvid for standalone
    operation is different from the pvid used for VLAN-unaware bridging.
    This is done, again, so that FDB isolation can be enforced.
    Let tag_8021q manage this by deleting the standalone pvid when a port
    joins a bridge, and restoring it when it leaves it.
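    The pvid bookkeeping can be sketched as (hypothetical names and example
    VIDs):

```python
# Sketch: a tag_8021q port swaps between its standalone pvid and the
# per-bridge pvid. The standalone pvid is deleted on bridge join and
# restored on leave, so FDB isolation holds in both states.

class Tag8021qPort:
    def __init__(self, standalone_vid):
        self.standalone_vid = standalone_vid
        self.pvid = standalone_vid

    def bridge_join(self, bridge_vid):
        self.pvid = bridge_vid            # standalone pvid removed

    def bridge_leave(self):
        self.pvid = self.standalone_vid   # standalone pvid restored
```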
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  13. net: dsa: handle SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE synchronously

    Since the switchdev FDB entry notifications are now blocking and
    deferred by switchdev and not by us, switchdev will also wait for us to
    finish, which means we can proceed with our FDB isolation mechanism
    based on dp->bridge_num.
    
    It also means that the ordered workqueue is no longer needed, drop it
    and simply call the driver.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  14. net: switchdev: don't assume RCU context in switchdev_handle_fdb_{add,del}_to_device

    Now that the SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE events are blocking, it
    would be nice if callers of the fan-out helper functions (i.e. DSA)
    could benefit from that blocking context.
    
    But at the moment, switchdev_handle_fdb_{add,del}_to_device use some
    netdev adjacency list checking functions that assume RCU protection.
    Switch over to their rtnl_mutex equivalents, since we are also running
    with that taken, and drop the surrounding rcu_read_lock from the callers.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  15. net: switchdev: drop the atomic notifier block from switchdev_bridge_port_{,un}offload

    Now that br_fdb_replay() uses the blocking_nb, there is no point in
    passing the atomic nb anymore.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  16. net: bridge: switchdev: make br_fdb_replay offer sleepable context to consumers

    Now that the SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE events are notified on
    the blocking chain, it would be nice if we could also drop the
    rcu_read_lock() atomic context from br_fdb_replay() so that drivers can
    actually benefit from the blocking context and simplify their logic.
    
    Do something similar to what is done in br_mdb_queue_one/br_mdb_replay_one,
    except that FDB entries are held in a hash list.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  17. net: switchdev: move SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE to the blocking notifier chain

    Currently, br_switchdev_fdb_notify() uses call_switchdev_notifiers (and
    br_fdb_replay() open-codes the same thing). This means that drivers
    handle the SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE events on the atomic
    switchdev notifier block.
    
    Most existing switchdev drivers either talk to firmware, or to a device
    over a bus where the I/O is sleepable (SPI, I2C, MDIO etc). So there
    exists an (anti)pattern where drivers make a sleepable context for
    offloading the given FDB entry by registering an ordered workqueue and
    scheduling work items on it, and doing all the work from there.
    
    The problem is the inherent limitation that this design imposes upon
    what a switchdev driver can do with those FDB entries.
    
    For example, a switchdev driver might want to perform FDB isolation,
    i.e. associate each FDB entry with the bridge it belongs to. Maybe the
    driver associates each bridge with a number, allocating that number when
    the first port of the driver joins that bridge, and freeing it when the
    last port leaves it.
    
    And this is where the problem is. When user space deletes a bridge and
    all the ports leave, the bridge will notify us of the deletion of all
    FDB entries in atomic context, and switchdev drivers will schedule their
    private work items on their private workqueue.
    
    The FDB entry deletion notifications will succeed, the bridge will then
    finish deleting itself, but the switchdev work items have not run yet.
    When they will eventually get scheduled, the aforementioned association
    between the bridge_dev and a number will have already been broken by the
    switchdev driver. All ports are standalone now, the bridge is a foreign
    interface!
    
    One might say "why don't you cache all your associations while you're
    still in the atomic context and they're still valid, pass them by value
    through your switchdev_work and work with the cached values as opposed
    to the current ones?"
    
    This option smells of poor design, because instead of fixing a central
    problem, we add tens of lateral workarounds to avoid it. It should be
    easier to use switchdev, not harder, and we should look at the common
    patterns which lead to code duplication and eliminate them.
    
    In this case, we must notice that
    (a) switchdev already has the concept of notifiers emitted from the fast
        path that are still processed by drivers from blocking context. This
        is accomplished through the SWITCHDEV_F_DEFER flag which is used by
        e.g. SWITCHDEV_OBJ_ID_HOST_MDB.
    (b) the bridge del_nbp() function already calls switchdev_deferred_process().
        So if we could hook into that, we could have a chance that the
        bridge simply waits for our FDB entry offloading procedure to finish
        before it calls netdev_upper_dev_unlink() - which is almost
        immediately afterwards, and also when switchdev drivers typically
        break their stateful associations between the bridge upper and
        private data.
    
    So it is in fact possible to use switchdev's generic
    switchdev_deferred_enqueue mechanism to get a sleepable callback, and
    from there we can call_switchdev_blocking_notifiers().
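    The defer-then-process pattern can be sketched as follows (an
    illustration of the idea only, not the kernel's switchdev API):

```python
# Sketch: events emitted from the fast path are queued, and the queue is
# drained later from a sleepable context, where blocking notifiers can run.

from collections import deque

class DeferredNotifier:
    def __init__(self):
        self.queue = deque()
        self.blocking_handlers = []

    def enqueue(self, event):
        # callable from atomic context: just remember the event
        self.queue.append(event)

    def process(self):
        # called from blocking context (e.g. the bridge's del_nbp() path
        # calling switchdev_deferred_process() in the kernel)
        while self.queue:
            event = self.queue.popleft()
            for handler in self.blocking_handlers:
                handler(event)
```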
    
    In the case of br_fdb_replay(), the only code path is from
    switchdev_bridge_port_offload(), which is already in blocking context.
    So we don't need to go through switchdev_deferred_enqueue, and we can
    just call the blocking notifier block directly.
    
    To preserve the same behavior as before, all drivers need to have their
    SWITCHDEV_FDB_{ADD,DEL}_TO_DEVICE handlers moved from their switchdev
    atomic notifier blocks to the blocking ones. This patch attempts to make
    that trivial movement. Note that now they might schedule a work item for
    nothing (since they are now called from a work item themselves), but I
    don't have the energy or hardware to test all of them, so this will have
    to do.
    
    Note that previously, we were under rcu_read_lock() but now we're not.
    I have eyeballed the drivers that make any sort of RCU assumption and
    enclosed them between a private rcu_read_lock()/rcu_read_unlock(). This
    can be dropped when the drivers themselves are reworked.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  18. net: dsa: propagate the bridge_num to driver .port_bridge_{join,leave} methods

    If the driver needs to do something to isolate FDBs of different
    bridges, it must be able to reliably get a FDB identifier for each
    bridge.
    
    So one might ask: why is the driver not able to call something like
    dsa_bridge_num_find(bridge_dev) and find the associated FDB identifier
    already provided by the DSA core if it needs to, and not change anything
    if it doesn't?
    
    The issue is that drivers might need to do something with the FDB
    identifier on .port_bridge_leave too, and the dsa_bridge_num_find
    function is stateful: it only retrieves a valid bridge_num if there is
    at least one port which has dp->bridge_dev == br.
    
    But the dsa_port_bridge_leave() method first clears dp->bridge_dev and
    dp->bridge_num, and only then notifies the driver. The bridge that the
    port just left is only present inside the cross-chip notifier attribute,
    and is passed by value to the switch driver.
    
    So the bridge_num of the bridge we just left needs to be passed by value
    too, just like the bridge_dev itself. And from there, .port_bridge_join
    follows the same prototype mostly for symmetry.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  19. net: dsa: assign a bridge number even without TX forwarding offload

    The service where DSA assigns a unique bridge number for each forwarding
    domain is useful even for drivers which do not implement the TX
    forwarding offload feature.
    
    For example, drivers might use the dp->bridge_num for FDB isolation.
    
    So rename ds->num_fwd_offloading_bridges to ds->max_num_bridges, and
    calculate a unique bridge_num for all drivers that set this value.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  20. net: dsa: track unique bridge numbers across all DSA switch trees

    Right now, cross-tree bridging setups work somewhat by mistake.
    
    In the case of cross-tree bridging with sja1105, all switch instances
    need to agree upon a common VLAN ID for forwarding a packet that belongs
    to a certain bridging domain.
    
    With TX forwarding offload, the VLAN ID is the bridge VLAN for
    VLAN-aware bridging, and the tag_8021q TX forwarding offload VID
    (a VLAN which has non-zero VBID bits) for VLAN-unaware bridging.
    
    The VBID for VLAN-unaware bridging is derived from the dp->bridge_num
    value calculated by DSA independently for each switch tree.
    
    If ports from one tree join one bridge, and ports from another tree join
    another bridge, DSA will assign them the same bridge_num, even though
    the bridges are different. If cross-tree bridging is supported, this
    is an issue.
    
    Modify DSA to calculate the bridge_num globally across all switch trees.
    This has the implication for a driver that the dp->bridge_num value that
    DSA will assign to its ports might not be contiguous, if there are
    boards with multiple DSA drivers instantiated. Additionally, all
    bridge_num values eat up towards each switch's
    ds->num_fwd_offloading_bridges maximum, which is potentially unfortunate,
    and can be seen as a limitation introduced by this patch. However, that
    is the lesser evil for now.
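    A sketch of a single global pool (hypothetical helper; per-switch
    details beyond the cap are ignored):

```python
# Sketch: bridge numbers are drawn from one pool shared by all switch
# trees, so the same bridge always maps to the same number and two
# different bridges never collide, even across trees.

class GlobalBridgeNums:
    def __init__(self, max_bridges):
        self.max_bridges = max_bridges    # smallest advertised maximum
        self.used = {}                    # bridge_dev -> bridge_num

    def get(self, bridge_dev):
        if bridge_dev in self.used:
            return self.used[bridge_dev]
        for num in range(1, self.max_bridges + 1):
            if num not in self.used.values():
                self.used[bridge_dev] = num
                return num
        return 0                          # pool exhausted
```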
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    vladimiroltean authored and intel-lab-lkp committed Aug 18, 2021
  21. octeontx2-pf: Allow VLAN priority also in ntuple filters

    VLAN TCI is a 16-bit field which includes Priority (3 bits),
    CFI (1 bit) and VID (12 bits). Currently, ntuple filters support
    installing rules to steer packets based on the VID only.
    This patch extends that support so that filters can
    be installed for the entire VLAN TCI.
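    For reference, the TCI layout described above (per IEEE 802.1Q) can be
    decoded as:

```python
# 16-bit VLAN TCI: bits 15:13 = priority (PCP), bit 12 = CFI/DEI,
# bits 11:0 = VID.

def parse_vlan_tci(tci: int):
    prio = (tci >> 13) & 0x7
    cfi = (tci >> 12) & 0x1
    vid = tci & 0xFFF
    return prio, cfi, vid
```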
    
    Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
    Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Subbaraya Sundeep authored and davem330 committed Aug 18, 2021
  22. selftests: vrf: Add test for SNAT over VRF

    Commit 09e856d ("vrf: Reset skb conntrack connection on VRF rcv")
    fixes the "reverse-DNAT" of an SNAT-ed packet over a VRF.
    
    This patch adds a test for this scenario.
    
    Signed-off-by: Lahav Schlesinger <lschlesinger@drivenets.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    lschlesinger-dn authored and davem330 committed Aug 18, 2021
  23. net: net_namespace: Optimize the code

    There is only one caller of ops_free(), so inline it.
    Separate net_drop_ns() and net_free(), so that net_free()
    can be called directly.
    Add a free_exit_list() helper function for freeing net_exit_list.
    
    ====================
    v2:
     - v1 does not apply, rebase it.
    ====================
    
    Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yajun Deng authored and davem330 committed Aug 18, 2021
  24. net: dsa: tag_sja1105: be dsa_loop-safe

    Add support for tag_sja1105 running on non-sja1105 DSA ports, by making
    sure that every time we dereference dp->priv, we check the switch's
    dsa_switch_ops (otherwise we access a struct sja1105_port structure that
    is in fact something else).
    
    This adds an unconditional build-time dependency between sja1105 being
    built as module => tag_sja1105 must also be built as module. This was
    there only for PTP before.
    
    Some sane defaults must also take place when not running on sja1105
    hardware. These are:
    
    - sja1105_xmit_tpid: the sja1105 driver uses different VLAN protocols
      depending on VLAN awareness and switch revision (when an encapsulated
      VLAN must be sent). Default to 0x8100.
    
    - sja1105_rcv_meta_state_machine: this aggregates PTP frames with their
      metadata timestamp frames. When running on non-sja1105 hardware, don't
      do that and accept all frames unmodified.
    
    - sja1105_defer_xmit: calls sja1105_port_deferred_xmit in sja1105_main.c
      which writes a management route over SPI. When not running on sja1105
      hardware, bypass the SPI write and send the frame as-is.
    
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    vladimiroltean authored and davem330 committed Aug 18, 2021
  25. Merge branch 'nci-ext'

    Bongsu Jeon says:
    
    ====================
    Update the virtual NCI device driver and add the NCI testcase
    
    This series updates the virtual NCI device driver and NCI selftest code
    and add the NCI test case in selftests.
    
    1/8 to use wait queue in virtual device driver.
    2/8 to remove the polling code in selftests.
    3/8 to fix a typo.
    4/8 to fix the next nlattr offset calculation.
    5/8 to fix the wrong condition in if statement.
    6/8 to add a flag parameter to the Netlink send function.
    7/8 to extract the start/stop discovery function.
    8/8 to add the NCI testcase in selftests.
    ====================
    
    Signed-off-by: David S. Miller <davem@davemloft.net>
    davem330 committed Aug 18, 2021
  26. selftests: nci: Add the NCI testcase reading T4T Tag

    Add an NCI testcase that reads a T4T Tag containing "NFC TEST" in
    plain text. The virtual device application acts as the T4T Tag in
    this testcase.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  27. selftests: nci: Extract the start/stop discovery function

    Extract the start/stop discovery code so it can be reused in other testcases.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  28. selftests: nci: Add the flags parameter for the send_cmd_mt_nla

    To reuse the send_cmd_mt_nla for NLM_F_REQUEST and NLM_F_DUMP flag,
    add the flags parameter to the function.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  29. selftests: nci: Fix the wrong condition

    memcpy should be executed only when nla_len's value is greater than 0.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  30. selftests: nci: Fix the code for next nlattr offset

    An nlattr can be padded for 4-byte alignment, so the next nla's offset
    should be calculated including that padding.
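    That alignment rule, equivalent to the kernel's NLA_ALIGN macro:

```python
# Netlink attributes are aligned to 4 bytes; the next attribute starts
# at the padded end of the current one.

NLA_ALIGNTO = 4

def nla_align(length: int) -> int:
    return (length + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1)

def next_nla_offset(offset: int, nla_len: int) -> int:
    return offset + nla_align(nla_len)
```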
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  31. selftests: nci: Fix the typo

    Fix typo: rep_len -> resp_len
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  32. selftests: nci: Remove the polling code to read a NCI frame

    Because the virtual NCI device uses a wait queue, the virtual device
    application doesn't need to poll for NCI frames.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  33. nfc: virtual_ncidev: Use wait queue instead of polling

    Previously, the user-level virtual device application that used this
    driver had to poll to read NCI frames. Use a wait queue to remove
    this polling scheme.
    
    Signed-off-by: Bongsu Jeon <bongsu.jeon@samsung.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Bongsu Jeon authored and davem330 committed Aug 18, 2021
  34. net: procfs: add seq_puts() statement for dev_mcast

    Add a seq_puts() statement for dev_mcast to make it more readable.
    Also, keep vertical alignment for {dev, ptype, dev_mcast} under
    /proc/net.
    
    Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Yajun Deng authored and davem330 committed Aug 18, 2021
  35. net: RxRPC: make dependent Kconfig symbols be shown indented

    Make all dependent RxRPC Kconfig entries depend on AF_RXRPC
    so that they are presented (indented) after AF_RXRPC instead
    of at the same indentation level.
    
    Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Marc Dionne <marc.dionne@auristor.com>
    Cc: linux-afs@lists.infradead.org
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jakub Kicinski <kuba@kernel.org>
    Cc: netdev@vger.kernel.org
    Signed-off-by: David S. Miller <davem@davemloft.net>
    rddunlap authored and davem330 committed Aug 18, 2021