
more fixes #2

Merged 3 commits into openwrt:master on Feb 26, 2015
Conversation

kuba-moo
Contributor

This time I offer corrections for errors I found while using some of the code in my own driver. I correct a bug introduced in the previous batch, where the field width instead of the precision was used to limit the length of strings read from the firmware header. Apart from that, there is a correction of the aggregation statistics' offsets and a spelling fix.

Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
It is the precision, not the field width, in the format
which limits the length of the printed string.

Fixes: 28f93be ("remove get_string helper")
Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
nbd168 pushed a commit that referenced this pull request Feb 26, 2015
@nbd168 nbd168 merged commit 56f2798 into openwrt:master Feb 26, 2015
@wintonliuwen wintonliuwen mentioned this pull request Mar 2, 2016
LorenzoBianconi referenced this pull request in LorenzoBianconi/mt76 May 14, 2018
not tested yet

[  338.068131] ======================================================
[  338.068133] WARNING: possible circular locking dependency detected
[  338.068136] 4.17.0-rc1-wdn-src+ #8 Tainted: G        W
[  338.068138] ------------------------------------------------------
[  338.068140] kworker/u4:83/14351 is trying to acquire lock:
[  338.068142] 0000000041c58f17 (&(&q->lock)->rlock#2){+.-.}, at: mt76_wake_tx_queue+0x1d/0x60 [mt76]
[  338.068154]
               but task is already holding lock:
[  338.068156] 00000000461f7579 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x2a/0x1e0 [mac80211]
[  338.068192]
               which lock already depends on the new lock.

[  338.068195]
               the existing dependency chain (in reverse order) is:
[  338.068197]
               -> #2 (&(&sta->lock)->rlock){+.-.}:
[  338.068214]        ieee80211_start_tx_ba_session+0xc5/0x300 [mac80211]
[  338.068229]        minstrel_ht_get_rate+0x3b8/0x480 [mac80211]
[  338.068243]        rate_control_get_rate+0x121/0x140 [mac80211]
[  338.068258]        ieee80211_tx_h_rate_ctrl+0x18a/0x3d0 [mac80211]
[  338.068273]        ieee80211_xmit_fast+0x38b/0x880 [mac80211]
[  338.068288]        __ieee80211_subif_start_xmit+0xfc/0x300 [mac80211]
[  338.068304]        ieee80211_subif_start_xmit+0x4b/0x3f0 [mac80211]
[  338.068309]        dev_hard_start_xmit+0x85/0x110
[  338.068312]        __dev_queue_xmit+0x61a/0x890
[  338.068316]        ip6_finish_output2+0x26a/0x740
[  338.068318]        mld_sendpack+0x1cb/0x390
[  338.068321]        mld_ifc_timer_expire+0x1a0/0x2d0
[  338.068325]        call_timer_fn+0x75/0xf0
[  338.068328]        run_timer_softirq+0x317/0x360
[  338.068331]        __do_softirq+0xfd/0x246
[  338.068335]        irq_exit+0xb6/0xc0
[  338.068337]        smp_apic_timer_interrupt+0x59/0x90
[  338.068340]        apic_timer_interrupt+0xf/0x20
[  338.068343]        lock_release+0x102/0x360
[  338.068347]        unmap_page_range+0x513/0x900
[  338.068349]        unmap_vmas+0x47/0xa0
[  338.068352]        exit_mmap+0x73/0x140
[  338.068355]        mmput+0x58/0x140
[  338.068358]        flush_old_exec+0x561/0x7e0
[  338.068361]        load_elf_binary+0x25d/0x1090
[  338.068363]        search_binary_handler.part.39+0x5e/0x210
[  338.068366]        do_execveat_common.isra.43+0x756/0x8d0
[  338.068368]        __x64_sys_execve+0x2d/0x40
[  338.068371]        do_syscall_64+0x55/0x190
[  338.068374]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  338.068375]
               -> #1 (&(&sta->rate_ctrl_lock)->rlock){+.-.}:
[  338.068393]        rate_control_tx_status+0x4a/0xa0 [mac80211]
[  338.068404]        __ieee80211_tx_status+0x4be/0xa60 [mac80211]
[  338.068416]        ieee80211_tx_status+0xb1/0x160 [mac80211]
[  338.068420]        mt76x2u_tx_complete_skb+0x7b/0x140 [mt76x2u]
[  338.068423]        tasklet_action_common.isra.18+0x51/0xc0
[  338.068425]        __do_softirq+0xfd/0x246
[  338.068427]        irq_exit+0xb6/0xc0
[  338.068430]        do_IRQ+0x8a/0x100
[  338.068432]        ret_from_intr+0x0/0x1d
[  338.068435]        cpuidle_enter_state+0x12e/0x210
[  338.068439]        do_idle+0x1c3/0x220
[  338.068441]        cpu_startup_entry+0x5a/0x60
[  338.068444]        start_secondary+0x189/0x1c0
[  338.068447]        secondary_startup_64+0xa5/0xb0
[  338.068449]
               -> #0 (&(&q->lock)->rlock#2){+.-.}:
[  338.068455]        _raw_spin_lock_bh+0x33/0x70
[  338.068459]        mt76_wake_tx_queue+0x1d/0x60 [mt76]
[  338.068472]        ieee80211_agg_start_txq+0xb2/0x1a0 [mac80211]
[  338.068485]        ieee80211_stop_tx_ba_cb+0xc8/0x1e0 [mac80211]
[  338.068498]        ieee80211_ba_session_work+0x1f6/0x2c0 [mac80211]
[  338.068501]        process_one_work+0x231/0x430
[  338.068503]        worker_thread+0x32/0x3f0
[  338.068507]        kthread+0x117/0x130
[  338.068509]        ret_from_fork+0x3a/0x50
[  338.068511]
               other info that might help us debug this:

[  338.068513] Chain exists of:
                 &(&q->lock)->rlock#2 --> &(&sta->rate_ctrl_lock)->rlock --> &(&sta->lock)->rlock

[  338.068520]  Possible unsafe locking scenario:

[  338.068522]        CPU0                    CPU1
[  338.068524]        ----                    ----
[  338.068525]   lock(&(&sta->lock)->rlock);
[  338.068528]                                lock(&(&sta->rate_ctrl_lock)->rlock);
[  338.068531]                                lock(&(&sta->lock)->rlock);
[  338.068533]   lock(&(&q->lock)->rlock#2);
[  338.068537]
                *** DEADLOCK ***

[  338.068540] 5 locks held by kworker/u4:83/14351:
[  338.068541]  #0: 000000007a890c9a ((wq_completion)"%s"wiphy_name(local->hw.wiphy)){+.+.}, at: process_one_work+0x1ca/0x430
[  338.068547]  #1: 00000000346af92b ((work_completion)(&sta->ampdu_mlme.work)){+.+.}, at: process_one_work+0x1ca/0x430
[  338.068552]  #2: 0000000073e81805 (&sta->ampdu_mlme.mtx){+.+.}, at: ieee80211_ba_session_work+0x4c/0x2c0 [mac80211]
[  338.068569]  #3: 00000000461f7579 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x2a/0x1e0 [mac80211]
[  338.068585]  #4: 00000000c03d1aae (rcu_read_lock){....}, at: ieee80211_agg_start_txq+0x28/0x1a0 [mac80211]
               stack backtrace:
[  338.068605] CPU: 1 PID: 14351 Comm: kworker/u4:83 Tainted: G        W         4.17.0-rc1-wdn-src+ #8
[  338.068607] Hardware name: Dell Inc. Studio XPS 1340/0K183D, BIOS A11 09/08/2009
[  338.068621] Workqueue: phy4 ieee80211_ba_session_work [mac80211]
[  338.068624] Call Trace:
[  338.068629]  dump_stack+0x67/0x9b
[  338.068633]  print_circular_bug.isra.44+0x1ce/0x1db
[  338.068636]  __lock_acquire+0x126e/0x1310
[  338.068639]  ? sched_clock_local+0x12/0x80
[  338.068643]  ? lock_acquire+0x43/0x60
[  338.068645]  lock_acquire+0x43/0x60
[  338.068649]  ? mt76_wake_tx_queue+0x1d/0x60 [mt76]
[  338.068652]  _raw_spin_lock_bh+0x33/0x70
[  338.068656]  ? mt76_wake_tx_queue+0x1d/0x60 [mt76]
[  338.068659]  mt76_wake_tx_queue+0x1d/0x60 [mt76]
[  338.068673]  ieee80211_agg_start_txq+0xb2/0x1a0 [mac80211]
[  338.068687]  ieee80211_stop_tx_ba_cb+0xc8/0x1e0 [mac80211]
[  338.068700]  ieee80211_ba_session_work+0x1f6/0x2c0 [mac80211]
[  338.068703]  process_one_work+0x231/0x430
[  338.068706]  ? process_one_work+0x1ca/0x430
[  338.068709]  worker_thread+0x32/0x3f0
[  338.068712]  ? current_work+0x30/0x30
[  338.068714]  kthread+0x117/0x130
[  338.068717]  ? kthread_create_worker_on_cpu+0x40/0x40

Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
LorenzoBianconi referenced this pull request in LorenzoBianconi/mt76 May 20, 2018
LorenzoBianconi referenced this pull request in LorenzoBianconi/mt76 May 20, 2018
LorenzoBianconi referenced this pull request in LorenzoBianconi/mt76 May 20, 2018
nbd168 pushed a commit that referenced this pull request May 11, 2019
Move ieee80211_tx_status_ext() outside of the status_list lock section
in order to avoid a locking dependency and possible deadlock reported by
LOCKDEP in the warning below.

Also take mt76_tx_status_lock() just before it is needed.

[  440.224832] WARNING: possible circular locking dependency detected
[  440.224833] 5.1.0-rc2+ #22 Not tainted
[  440.224834] ------------------------------------------------------
[  440.224835] kworker/u16:28/2362 is trying to acquire lock:
[  440.224836] 0000000089b8cacf (&(&q->lock)->rlock#2){+.-.}, at: mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.224842]
               but task is already holding lock:
[  440.224842] 000000002cfedc59 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x32/0x1f0 [mac80211]
[  440.224863]
               which lock already depends on the new lock.

[  440.224863]
               the existing dependency chain (in reverse order) is:
[  440.224864]
               -> #3 (&(&sta->lock)->rlock){+.-.}:
[  440.224869]        _raw_spin_lock_bh+0x34/0x40
[  440.224880]        ieee80211_start_tx_ba_session+0xe4/0x3d0 [mac80211]
[  440.224894]        minstrel_ht_get_rate+0x45c/0x510 [mac80211]
[  440.224906]        rate_control_get_rate+0xc1/0x140 [mac80211]
[  440.224918]        ieee80211_tx_h_rate_ctrl+0x195/0x3c0 [mac80211]
[  440.224930]        ieee80211_xmit_fast+0x26d/0xa50 [mac80211]
[  440.224942]        __ieee80211_subif_start_xmit+0xfc/0x310 [mac80211]
[  440.224954]        ieee80211_subif_start_xmit+0x38/0x390 [mac80211]
[  440.224956]        dev_hard_start_xmit+0xb8/0x300
[  440.224957]        __dev_queue_xmit+0x7d4/0xbb0
[  440.224968]        ip6_finish_output2+0x246/0x860 [ipv6]
[  440.224978]        mld_sendpack+0x1bd/0x360 [ipv6]
[  440.224987]        mld_ifc_timer_expire+0x1a4/0x2f0 [ipv6]
[  440.224989]        call_timer_fn+0x89/0x2a0
[  440.224990]        run_timer_softirq+0x1bd/0x4d0
[  440.224992]        __do_softirq+0xdb/0x47c
[  440.224994]        irq_exit+0xfa/0x100
[  440.224996]        smp_apic_timer_interrupt+0x9a/0x220
[  440.224997]        apic_timer_interrupt+0xf/0x20
[  440.224999]        cpuidle_enter_state+0xc1/0x470
[  440.225000]        do_idle+0x21a/0x260
[  440.225001]        cpu_startup_entry+0x19/0x20
[  440.225004]        start_secondary+0x135/0x170
[  440.225006]        secondary_startup_64+0xa4/0xb0
[  440.225007]
               -> #2 (&(&sta->rate_ctrl_lock)->rlock){+.-.}:
[  440.225009]        _raw_spin_lock_bh+0x34/0x40
[  440.225022]        rate_control_tx_status+0x4f/0xb0 [mac80211]
[  440.225031]        ieee80211_tx_status_ext+0x142/0x1a0 [mac80211]
[  440.225035]        mt76x02_send_tx_status+0x2e4/0x340 [mt76x02_lib]
[  440.225037]        mt76x02_tx_status_data+0x31/0x40 [mt76x02_lib]
[  440.225040]        mt76u_tx_status_data+0x51/0xa0 [mt76_usb]
[  440.225042]        process_one_work+0x237/0x5d0
[  440.225043]        worker_thread+0x3c/0x390
[  440.225045]        kthread+0x11d/0x140
[  440.225046]        ret_from_fork+0x3a/0x50
[  440.225047]
               -> #1 (&(&list->lock)->rlock#8){+.-.}:
[  440.225049]        _raw_spin_lock_bh+0x34/0x40
[  440.225052]        mt76_tx_status_skb_add+0x51/0x100 [mt76]
[  440.225054]        mt76x02u_tx_prepare_skb+0xbd/0x116 [mt76x02_usb]
[  440.225056]        mt76u_tx_queue_skb+0x5f/0x180 [mt76_usb]
[  440.225058]        mt76_tx+0x93/0x190 [mt76]
[  440.225070]        ieee80211_tx_frags+0x148/0x210 [mac80211]
[  440.225081]        __ieee80211_tx+0x75/0x1b0 [mac80211]
[  440.225092]        ieee80211_tx+0xde/0x110 [mac80211]
[  440.225105]        __ieee80211_tx_skb_tid_band+0x72/0x90 [mac80211]
[  440.225122]        ieee80211_send_auth+0x1f3/0x360 [mac80211]
[  440.225141]        ieee80211_auth.cold.40+0x6c/0x100 [mac80211]
[  440.225156]        ieee80211_mgd_auth.cold.50+0x132/0x15f [mac80211]
[  440.225171]        cfg80211_mlme_auth+0x149/0x360 [cfg80211]
[  440.225181]        nl80211_authenticate+0x273/0x2e0 [cfg80211]
[  440.225183]        genl_family_rcv_msg+0x196/0x3a0
[  440.225184]        genl_rcv_msg+0x47/0x8e
[  440.225185]        netlink_rcv_skb+0x3a/0xf0
[  440.225187]        genl_rcv+0x24/0x40
[  440.225188]        netlink_unicast+0x16d/0x210
[  440.225189]        netlink_sendmsg+0x204/0x3b0
[  440.225191]        sock_sendmsg+0x36/0x40
[  440.225193]        ___sys_sendmsg+0x259/0x2b0
[  440.225194]        __sys_sendmsg+0x47/0x80
[  440.225196]        do_syscall_64+0x60/0x1f0
[  440.225197]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  440.225198]
               -> #0 (&(&q->lock)->rlock#2){+.-.}:
[  440.225200]        lock_acquire+0xb9/0x1a0
[  440.225202]        _raw_spin_lock_bh+0x34/0x40
[  440.225204]        mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225215]        ieee80211_agg_start_txq+0xe8/0x2b0 [mac80211]
[  440.225225]        ieee80211_stop_tx_ba_cb+0xb8/0x1f0 [mac80211]
[  440.225235]        ieee80211_ba_session_work+0x1c1/0x2f0 [mac80211]
[  440.225236]        process_one_work+0x237/0x5d0
[  440.225237]        worker_thread+0x3c/0x390
[  440.225239]        kthread+0x11d/0x140
[  440.225240]        ret_from_fork+0x3a/0x50
[  440.225240]
               other info that might help us debug this:

[  440.225241] Chain exists of:
                 &(&q->lock)->rlock#2 --> &(&sta->rate_ctrl_lock)->rlock --> &(&sta->lock)->rlock

[  440.225243]  Possible unsafe locking scenario:

[  440.225244]        CPU0                    CPU1
[  440.225244]        ----                    ----
[  440.225245]   lock(&(&sta->lock)->rlock);
[  440.225245]                                lock(&(&sta->rate_ctrl_lock)->rlock);
[  440.225246]                                lock(&(&sta->lock)->rlock);
[  440.225247]   lock(&(&q->lock)->rlock#2);
[  440.225248]
                *** DEADLOCK ***

[  440.225249] 5 locks held by kworker/u16:28/2362:
[  440.225250]  #0: 0000000048fcd291 ((wq_completion)phy0){+.+.}, at: process_one_work+0x1b5/0x5d0
[  440.225252]  #1: 00000000f1c6828f ((work_completion)(&sta->ampdu_mlme.work)){+.+.}, at: process_one_work+0x1b5/0x5d0
[  440.225254]  #2: 00000000433d2b2c (&sta->ampdu_mlme.mtx){+.+.}, at: ieee80211_ba_session_work+0x5c/0x2f0 [mac80211]
[  440.225265]  #3: 000000002cfedc59 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x32/0x1f0 [mac80211]
[  440.225276]  #4: 000000009d7b9a44 (rcu_read_lock){....}, at: ieee80211_agg_start_txq+0x33/0x2b0 [mac80211]
[  440.225286]
               stack backtrace:
[  440.225288] CPU: 2 PID: 2362 Comm: kworker/u16:28 Not tainted 5.1.0-rc2+ #22
[  440.225289] Hardware name: LENOVO 20KGS23S0P/20KGS23S0P, BIOS N23ET55W (1.30 ) 08/31/2018
[  440.225300] Workqueue: phy0 ieee80211_ba_session_work [mac80211]
[  440.225301] Call Trace:
[  440.225304]  dump_stack+0x85/0xc0
[  440.225306]  print_circular_bug.isra.38.cold.58+0x15c/0x195
[  440.225307]  check_prev_add.constprop.48+0x5f0/0xc00
[  440.225309]  ? check_prev_add.constprop.48+0x39d/0xc00
[  440.225311]  ? __lock_acquire+0x41d/0x1100
[  440.225312]  __lock_acquire+0xd98/0x1100
[  440.225313]  ? __lock_acquire+0x41d/0x1100
[  440.225315]  lock_acquire+0xb9/0x1a0
[  440.225317]  ? mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225319]  _raw_spin_lock_bh+0x34/0x40
[  440.225321]  ? mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225323]  mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225334]  ieee80211_agg_start_txq+0xe8/0x2b0 [mac80211]
[  440.225344]  ieee80211_stop_tx_ba_cb+0xb8/0x1f0 [mac80211]
[  440.225354]  ieee80211_ba_session_work+0x1c1/0x2f0 [mac80211]
[  440.225356]  process_one_work+0x237/0x5d0
[  440.225358]  worker_thread+0x3c/0x390
[  440.225359]  ? wq_calc_node_cpumask+0x70/0x70
[  440.225360]  kthread+0x11d/0x140
[  440.225362]  ? kthread_create_on_node+0x40/0x40
[  440.225363]  ret_from_fork+0x3a/0x50

Cc: stable@vger.kernel.org
Fixes: 88046b2c9f6d ("mt76: add support for reporting tx status with skb")
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
nbd168 pushed a commit that referenced this pull request Apr 13, 2021
Introduce an RCU section in mt7921_mcu_tx_rate_report before dereferencing
the wcid pointer, otherwise lockdep will report the following issue:

[  115.245740] =============================
[  115.245754] WARNING: suspicious RCU usage
[  115.245771] 5.10.20 #0 Not tainted
[  115.245784] -----------------------------
[  115.245816] other info that might help us debug this:
[  115.245830] rcu_scheduler_active = 2, debug_locks = 1
[  115.245845] 3 locks held by kworker/u4:1/20:
[  115.245858]  #0: ffffff80065ab138 ((wq_completion)phy0){+.+.}-{0:0}, at: process_one_work+0x1f8/0x6b8
[  115.245948]  #1: ffffffc01198bdd8 ((work_completion)(&(&dev->mphy.mac_work)->work)){+.+.}-{0:0}, at: process_one_8
[  115.246027]  #2: ffffff8006543ce8 (&dev->mutex#2){+.+.}-{3:3}, at: mt7921_mac_work+0x60/0x2b0 [mt7921e]
[  115.246125]
[  115.246125] stack backtrace:
[  115.246142] CPU: 1 PID: 20 Comm: kworker/u4:1 Not tainted 5.10.20 #0
[  115.246152] Hardware name: MediaTek MT7622 RFB1 board (DT)
[  115.246168] Workqueue: phy0 mt7921_mac_work [mt7921e]
[  115.246188] Call trace:
[  115.246201]  dump_backtrace+0x0/0x1a8
[  115.246213]  show_stack+0x14/0x30
[  115.246228]  dump_stack+0xec/0x134
[  115.246240]  lockdep_rcu_suspicious+0xcc/0xdc
[  115.246255]  mt7921_get_wtbl_info+0x2a4/0x310 [mt7921e]
[  115.246269]  mt7921_mac_work+0x284/0x2b0 [mt7921e]
[  115.246281]  process_one_work+0x2a0/0x6b8
[  115.246293]  worker_thread+0x40/0x440
[  115.246305]  kthread+0x144/0x148
[  115.246317]  ret_from_fork+0x10/0x18

Fixes: 1c099ab44727c ("mt76: mt7921: add MCU support")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Felix Fietkau <nbd@nbd.name>
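The shape of the fix is the standard RCU read-side pattern. A kernel-style fragment (not runnable standalone; identifier names are inferred from the trace, not copied from the mt7921 source):

```c
/* Hold an RCU read-side critical section across the wcid lookup and
 * dereference, so the entry cannot be freed while we read from it. */
rcu_read_lock();
wcid = rcu_dereference(dev->mt76.wcid[idx]);
if (wcid) {
	/* ... read tx rate info from wcid ... */
}
rcu_read_unlock();
```

Without the rcu_read_lock()/rcu_read_unlock() pair, rcu_dereference() is called outside any RCU read-side critical section, which is exactly the "suspicious RCU usage" lockdep complains about above.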