Mame4All v1.2 Slow and unplayable with sound issues. #4

Open

qbertaddict opened this issue Sep 1, 2010 · 6 comments

Comments

@qbertaddict

Mame4All v1.2 by slaaneesh is so slow that a lot of games are now unplayable when scaling is enabled. Without scaling, games run at a decent speed for a while but suffer from stutter and slowdown. These games do not have the same issue on the original rootfs and kernel. Changing the sound frequency does not help. The sound is so high-pitched that the voice in Gorf sounds like Alvin and the Chipmunks are making fun of me instead of the Gorfian overlord.

@mthuurne
Owner

mthuurne commented Sep 1, 2010

Could you try starting the emulator from telnet, running "export SDL_AUDIODRIVER=dummy" first? That disables SDL's audio output, to get an indication whether the audio is causing the slowdown or whether these are two separate issues.
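
For reference, a minimal sketch of such a session (the binary name and ROM here are assumptions for illustration; adjust to your setup):

  export SDL_AUDIODRIVER=dummy   # SDL discards all audio output
  ./mame ghoulsu                 # then compare speed with and without it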

@SiENcE

SiENcE commented Sep 2, 2010

Gnuboy4D shows the same behaviour: sound causes big slowdowns, and that's why Gnuboy4D is much slower.

To force SDL to use a specific audio driver, use:

export SDL_AUDIODRIVER=alsa
or try
export SDL_AUDIODRIVER=dsp
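
If it helps, the candidate drivers can be A/B-tested with a small loop (the emulator binary and game file here are illustrative):

  for drv in alsa dsp dummy ; do
  	echo "trying SDL audio driver: $drv"
  	SDL_AUDIODRIVER=$drv ./gnuboy4d game.gb
  done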

@SiENcE

SiENcE commented Sep 2, 2010

Someone else reported this:

export SDL_PATH_DSP="/dev/dsp"
export SDL_AUDIODRIVER="alsa"
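
To confirm the variables actually reach the emulator, check the environment in the same shell before launching it:

  env | grep SDL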

@qbertaddict
Author

The ROM tested was Ghouls n Ghosts (USA); the following settings were used:
mame ghoulsu -clock 420 -depth 8 -bestscale -nodirty --brightness 70 -samplerate 16000
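
For anyone unfamiliar with the flags, a rough annotation (my reading of the MAME4ALL options; the meanings are assumptions, not authoritative):

  -clock 420         # CPU clock in MHz (overclock)
  -depth 8           # 8-bit colour depth
  -bestscale         # scaling mode (the scaling mentioned above)
  -nodirty           # disable dirty-rectangle updates
  --brightness 70    # screen brightness
  -samplerate 16000  # audio sample rate in Hz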

SDL_AUDIODRIVER=dummy
Ouch my ears

export SDL_AUDIODRIVER=alsa
Resulted in horrible screeching

export SDL_AUDIODRIVER=dsp
Resulted in high-pitched audio and slowdowns

export SDL_PATH_DSP="/dev/dsp"
Aaaaahhhh my ears again

export SDL_AUDIODRIVER="alsa"
Resulted in high-pitched audio but not as much slowdown

@SiENcE

SiENcE commented Sep 2, 2010

Could you try export SDL_AUDIODRIVER="alsa" together with -samplerate 44100?
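
That is, keeping the other flags from above the same but raising the sample rate:

  export SDL_AUDIODRIVER="alsa"
  mame ghoulsu -clock 420 -depth 8 -bestscale -nodirty --brightness 70 -samplerate 44100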

@SiENcE

SiENcE commented Sep 2, 2010

Please reopen.

mthuurne pushed a commit that referenced this issue Jun 6, 2011
===================================================
[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
net/mac80211/sta_info.c:125 invoked rcu_dereference_check() without protection!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
5 locks held by wpa_supplicant/468:
 #0:  (rtnl_mutex){+.+.+.}, at: [<c1465d84>] rtnl_lock+0x14/0x20
 #1:  (&rdev->mtx){+.+.+.}, at: [<f84b8c2b>] cfg80211_mgd_wext_siwfreq+0x6b/0x170 [cfg80211]
 #2:  (&rdev->devlist_mtx){+.+.+.}, at: [<f84b8c37>] cfg80211_mgd_wext_siwfreq+0x77/0x170 [cfg80211]
 #3:  (&wdev->mtx){+.+.+.}, at: [<f84b8c44>] cfg80211_mgd_wext_siwfreq+0x84/0x170 [cfg80211]
 #4:  (&rtlpriv->locks.conf_mutex){+.+.+.}, at: [<f8506476>] rtl_op_bss_info_changed+0x26/0xc10 [rtlwifi]

stack backtrace:
Pid: 468, comm: wpa_supplicant Not tainted 2.6.38-rc6+ #79
Call Trace:
 [<c108806a>] ? lockdep_rcu_dereference+0xaa/0xb0
 [<f8523d2c>] ? sta_info_get_bss+0x19c/0x1b0 [mac80211]
 [<f8523d62>] ? ieee80211_find_sta+0x22/0x40 [mac80211]
 [<f850661c>] ? rtl_op_bss_info_changed+0x1cc/0xc10 [rtlwifi]
 [<c153671c>] ? __mutex_unlock_slowpath+0x14c/0x160
 [<c153673d>] ? mutex_unlock+0xd/0x10
 [<f8507180>] ? rtl_op_config+0x120/0x310 [rtlwifi]
 [<c10896db>] ? trace_hardirqs_on+0xb/0x10
 [<f8522169>] ? ieee80211_bss_info_change_notify+0xf9/0x1f0 [mac80211]
 [<f8506450>] ? rtl_op_bss_info_changed+0x0/0xc10 [rtlwifi]
 [<f853646f>] ? ieee80211_set_channel+0xbf/0xd0 [mac80211]
 [<f84b5f41>] ? cfg80211_set_freq+0x121/0x180 [cfg80211]
 [<f85363b0>] ? ieee80211_set_channel+0x0/0xd0 [mac80211]
 [<f84b8ceb>] ? cfg80211_mgd_wext_siwfreq+0x12b/0x170 [cfg80211]
 [<f84b87eb>] ? cfg80211_wext_siwfreq+0x9b/0x100 [cfg80211]
 [<c153b98b>] ? sub_preempt_count+0x7b/0xb0
 [<c150f874>] ? ioctl_standard_call+0x74/0x3b0
 [<c1465d84>] ? rtnl_lock+0x14/0x20
 [<f84b8750>] ? cfg80211_wext_siwfreq+0x0/0x100 [cfg80211]
 [<c14568bd>] ? __dev_get_by_name+0x8d/0xb0
 [<c150fddb>] ? wext_handle_ioctl+0x16b/0x180
 [<f84b8750>] ? cfg80211_wext_siwfreq+0x0/0x100 [cfg80211]
 [<c145bc7a>] ? dev_ioctl+0x5ba/0x720
 [<c108a947>] ? __lock_acquire+0x3e7/0x19b0
 [<c1443b0b>] ? sock_ioctl+0x1eb/0x290
 [<c108bfa5>] ? lock_release_non_nested+0x95/0x2f0
 [<c1443920>] ? sock_ioctl+0x0/0x290
 [<c114d74d>] ? do_vfs_ioctl+0x7d/0x5c0
 [<c1112232>] ? might_fault+0x62/0xb0
 [<c113e3c6>] ? fget_light+0x226/0x390
 [<c1112278>] ? might_fault+0xa8/0xb0
 [<c114dd17>] ? sys_ioctl+0x87/0x90
 [<c1002f9f>] ? sysenter_do_call+0x12/0x38

This work was supported by a hardware donation from the CE Linux Forum.

Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
mthuurne pushed a commit that referenced this issue Jun 6, 2011
Konstantin Khlebnikov reports that a dangerous race between umount and
shmem_writepage can be reproduced by this script:

  for i in {1..300} ; do
	mkdir $i
	while true ; do
		mount -t tmpfs none $i
		dd if=/dev/zero of=$i/test bs=1M count=$(($RANDOM % 100))
		umount $i
	done &
  done

on a 6xCPU node with 8Gb RAM: kernel very unstable after this accident. =)

Kernel log:

  VFS: Busy inodes after unmount of tmpfs.
                 Self-destruct in 5 seconds.  Have a nice day...

  WARNING: at lib/list_debug.c:53 __list_del_entry+0x8d/0x98()
  list_del corruption. prev->next should be ffff880222fdaac8, but was (null)
  Pid: 11222, comm: mount.tmpfs Not tainted 2.6.39-rc2+ #4
  Call Trace:
   warn_slowpath_common+0x80/0x98
   warn_slowpath_fmt+0x41/0x43
   __list_del_entry+0x8d/0x98
   evict+0x50/0x113
   iput+0x138/0x141
  ...
  BUG: unable to handle kernel paging request at ffffffffffffffff
  IP: shmem_free_blocks+0x18/0x4c
  Pid: 10422, comm: dd Tainted: G        W   2.6.39-rc2+ #4
  Call Trace:
   shmem_recalc_inode+0x61/0x66
   shmem_writepage+0xba/0x1dc
   pageout+0x13c/0x24c
   shrink_page_list+0x28e/0x4be
   shrink_inactive_list+0x21f/0x382
  ...

shmem_writepage() calls igrab() on the inode for the page which came from
page reclaim, to add it later into shmem_swaplist for swapoff operation.

This igrab() can race with super-block deactivating process:

  shrink_inactive_list()          deactivate_super()
  pageout()                       tmpfs_fs_type->kill_sb()
  shmem_writepage()               kill_litter_super()
                                  generic_shutdown_super()
                                   evict_inodes()
   igrab()
                                    atomic_read(&inode->i_count)
                                     skip-inode
   iput()
                                   if (!list_empty(&sb->s_inodes))
                                          printk("VFS: Busy inodes after...

This igrab-iput pair was added in commit 1b1b32f "tmpfs: fix
shmem_swaplist races" based on incorrect assumptions: igrab() protects the
inode from concurrent eviction by deletion, but it does nothing to protect
it from concurrent unmounting, which goes ahead despite the raised
i_count.

So this use of igrab() was wrong all along, but the race was made much worse
in 2.6.37 when commit 63997e9 "split invalidate_inodes()" replaced
two attempts at invalidate_inodes() by a single evict_inodes().

Konstantin posted a plausible patch, raising sb->s_active too: I'm unsure
whether it was correct or not; but burnt once by igrab(), I am sure that
we don't want to rely more deeply upon externals here.

Fix it by adding the inode to shmem_swaplist earlier, while the page lock
on page in page cache still secures the inode against eviction, without
artificially raising i_count.  It was originally added later because
shmem_unuse_inode() is liable to remove an inode from the list while it's
unswapped; but we can guard against that by taking spinlock before
dropping mutex.

Reported-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Tested-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
Commit d09b62d fixed grace-period synchronization, but left some smp_mb()
invocations in rcu_process_callbacks() that are no longer needed, but
sheer paranoia prevented them from being removed.  This commit removes
them and provides a proof of correctness in their absence.  It also adds
a memory barrier to rcu_report_qs_rsp() immediately before the update to
rsp->completed in order to handle the theoretical possibility that the
compiler or CPU might move massive quantities of code into a lock-based
critical section.  This also proves that the sheer paranoia was not
entirely unjustified, at least from a theoretical point of view.

In addition, the old dyntick-idle synchronization depended on the fact
that grace periods were many milliseconds in duration, so that it could
be assumed that no dyntick-idle CPU could reorder a memory reference
across an entire grace period.  Unfortunately for this design, the
addition of expedited grace periods breaks this assumption, which has
the unfortunate side-effect of requiring atomic operations in the
functions that track dyntick-idle state for RCU.  (There is some hope
that the algorithms used in user-level RCU might be applied here, but
some work is required to handle the NMIs that user-space applications
can happily ignore.  For the short term, better safe than sorry.)

This proof assumes that neither compiler nor CPU will allow a lock
acquisition and release to be reordered, as doing so can result in
deadlock.  The proof is as follows:

1.	A given CPU declares a quiescent state under the protection of
	its leaf rcu_node's lock.

2.	If there is more than one level of rcu_node hierarchy, the
	last CPU to declare a quiescent state will also acquire the
	->lock of the next rcu_node up in the hierarchy,  but only
	after releasing the lower level's lock.  The acquisition of this
	lock clearly cannot occur prior to the acquisition of the leaf
	node's lock.

3.	Step 2 repeats until we reach the root rcu_node structure.
	Please note again that only one lock is held at a time through
	this process.  The acquisition of the root rcu_node's ->lock
	must occur after the release of that of the leaf rcu_node.

4.	At this point, we set the ->completed field in the rcu_state
	structure in rcu_report_qs_rsp().  However, if the rcu_node
	hierarchy contains only one rcu_node, then in theory the code
	preceding the quiescent state could leak into the critical
	section.  We therefore precede the update of ->completed with a
	memory barrier.  All CPUs will therefore agree that any updates
	preceding any report of a quiescent state will have happened
	before the update of ->completed.

5.	Regardless of whether a new grace period is needed, rcu_start_gp()
	will propagate the new value of ->completed to all of the leaf
	rcu_node structures, under the protection of each rcu_node's ->lock.
	If a new grace period is needed immediately, this propagation
	will occur in the same critical section that ->completed was
	set in, but courtesy of the memory barrier in #4 above, is still
	seen to follow any pre-quiescent-state activity.

6.	When a given CPU invokes __rcu_process_gp_end(), it becomes
	aware of the end of the old grace period and therefore makes
	any RCU callbacks that were waiting on that grace period eligible
	for invocation.

	If this CPU is the same one that detected the end of the grace
	period, and if there is but a single rcu_node in the hierarchy,
	we will still be in the single critical section.  In this case,
	the memory barrier in step #4 guarantees that all callbacks will
	be seen to execute after each CPU's quiescent state.

	On the other hand, if this is a different CPU, it will acquire
	the leaf rcu_node's ->lock, and will again be serialized after
	each CPU's quiescent state for the old grace period.

On the strength of this proof, this commit therefore removes the memory
barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
The effect is to reduce the number of memory barriers by one and to
reduce the frequency of execution from about once per scheduling tick
per CPU to once per grace period.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
…(try #4)

Use invalidate_inode_pages2, which doesn't leave pages behind even if
shrink_page_list() has a temporary reference on them. This prevents a data
coherency problem where cifs_invalidate_mapping didn't invalidate pages but
the client thinks that data from the cache is uptodate according to an
oplock level (exclusive or II).

Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
(Note: this was reverted, and is now being re-applied in pieces, with
this being the fifth and final piece.  See below for the reason that
it is now felt to be safe to re-apply this.)

Commit d09b62d fixed grace-period synchronization, but left some smp_mb()
invocations in rcu_process_callbacks() that are no longer needed, but
sheer paranoia prevented them from being removed.  This commit removes
them and provides a proof of correctness in their absence.  It also adds
a memory barrier to rcu_report_qs_rsp() immediately before the update to
rsp->completed in order to handle the theoretical possibility that the
compiler or CPU might move massive quantities of code into a lock-based
critical section.  This also proves that the sheer paranoia was not
entirely unjustified, at least from a theoretical point of view.

In addition, the old dyntick-idle synchronization depended on the fact
that grace periods were many milliseconds in duration, so that it could
be assumed that no dyntick-idle CPU could reorder a memory reference
across an entire grace period.  Unfortunately for this design, the
addition of expedited grace periods breaks this assumption, which has
the unfortunate side-effect of requiring atomic operations in the
functions that track dyntick-idle state for RCU.  (There is some hope
that the algorithms used in user-level RCU might be applied here, but
some work is required to handle the NMIs that user-space applications
can happily ignore.  For the short term, better safe than sorry.)

This proof assumes that neither compiler nor CPU will allow a lock
acquisition and release to be reordered, as doing so can result in
deadlock.  The proof is as follows:

1.	A given CPU declares a quiescent state under the protection of
	its leaf rcu_node's lock.

2.	If there is more than one level of rcu_node hierarchy, the
	last CPU to declare a quiescent state will also acquire the
	->lock of the next rcu_node up in the hierarchy,  but only
	after releasing the lower level's lock.  The acquisition of this
	lock clearly cannot occur prior to the acquisition of the leaf
	node's lock.

3.	Step 2 repeats until we reach the root rcu_node structure.
	Please note again that only one lock is held at a time through
	this process.  The acquisition of the root rcu_node's ->lock
	must occur after the release of that of the leaf rcu_node.

4.	At this point, we set the ->completed field in the rcu_state
	structure in rcu_report_qs_rsp().  However, if the rcu_node
	hierarchy contains only one rcu_node, then in theory the code
	preceding the quiescent state could leak into the critical
	section.  We therefore precede the update of ->completed with a
	memory barrier.  All CPUs will therefore agree that any updates
	preceding any report of a quiescent state will have happened
	before the update of ->completed.

5.	Regardless of whether a new grace period is needed, rcu_start_gp()
	will propagate the new value of ->completed to all of the leaf
	rcu_node structures, under the protection of each rcu_node's ->lock.
	If a new grace period is needed immediately, this propagation
	will occur in the same critical section that ->completed was
	set in, but courtesy of the memory barrier in #4 above, is still
	seen to follow any pre-quiescent-state activity.

6.	When a given CPU invokes __rcu_process_gp_end(), it becomes
	aware of the end of the old grace period and therefore makes
	any RCU callbacks that were waiting on that grace period eligible
	for invocation.

	If this CPU is the same one that detected the end of the grace
	period, and if there is but a single rcu_node in the hierarchy,
	we will still be in the single critical section.  In this case,
	the memory barrier in step #4 guarantees that all callbacks will
	be seen to execute after each CPU's quiescent state.

	On the other hand, if this is a different CPU, it will acquire
	the leaf rcu_node's ->lock, and will again be serialized after
	each CPU's quiescent state for the old grace period.

On the strength of this proof, this commit therefore removes the memory
barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
The effect is to reduce the number of memory barriers by one and to
reduce the frequency of execution from about once per scheduling tick
per CPU to once per grace period.

This was reverted due to hangs found during testing by Yinghai Lu and
Ingo Molnar.  Frederic Weisbecker supplied Yinghai with tracing that
located the underlying problem, and Frederic also provided the fix.

The underlying problem was that the HARDIRQ_ENTER() macro from
lib/locking-selftest.c invoked irq_enter(), which in turn invokes
rcu_irq_enter(), but HARDIRQ_EXIT() invoked __irq_exit(), which
does not invoke rcu_irq_exit().  This situation resulted in calls
to rcu_irq_enter() that were not balanced by the required calls to
rcu_irq_exit().  Therefore, after these locking selftests completed,
RCU's dyntick-idle nesting count was a large number (for example,
72), which caused RCU to conclude that the affected CPU was not in
dyntick-idle mode when in fact it was.

RCU would therefore incorrectly wait for this dyntick-idle CPU, resulting
in hangs.

In contrast, with Frederic's patch, which replaces the irq_enter()
in HARDIRQ_ENTER() with an __irq_enter(), these tests don't ever call
either rcu_irq_enter() or rcu_irq_exit(), which works because the CPU
running the test is already marked as not being in dyntick-idle mode.
This means that the rcu_irq_enter() and rcu_irq_exit() calls remain
balanced, and RCU then has no problem working out which CPUs are in
dyntick-idle mode and which are not.

The reason that the imbalance was not noticed before the barrier patch
was applied is that the old implementation of rcu_enter_nohz() ignored
the nesting depth.  This could still result in delays, but much shorter
ones.  Whenever there was a delay, RCU would IPI the CPU with the
unbalanced nesting level, which would eventually result in rcu_enter_nohz()
being called, which in turn would force RCU to see that the CPU was in
dyntick-idle mode.

The reason that very few people noticed the problem is that the mismatched
irq_enter() vs. __irq_exit() occurred only when the kernel was built with
CONFIG_DEBUG_LOCKING_API_SELFTESTS.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
Running ktest.pl, I hit this bug:

[   19.780654] BUG: unable to handle kernel NULL pointer dereference at 0000000c
[   19.780660] IP: [<c112efcd>] dev_get_drvdata+0xc/0x46
[   19.780669] *pdpt = 0000000031daf001 *pde = 0000000000000000
[   19.780673] Oops: 0000 [#1] SMP
[   19.780680] Dumping ftrace buffer:
[   19.780685]    (ftrace buffer empty)
[   19.780687] Modules linked in: ide_pci_generic firewire_ohci firewire_core evbug crc_itu_t e1000 ide_core i2c_i801 iTCO_wdt
[   19.780697]
[   19.780700] Pid: 346, comm: v4l_id Not tainted 2.6.39-test-02740-gcaebc16-dirty #4                  /DG965MQ
[   19.780706] EIP: 0060:[<c112efcd>] EFLAGS: 00010202 CPU: 0
[   19.780709] EIP is at dev_get_drvdata+0xc/0x46
[   19.780712] EAX: 00000008 EBX: f1e37da4 ECX: 00000000 EDX: 00000000
[   19.780715] ESI: f1c3f200 EDI: c33ec95c EBP: f1e37d80 ESP: f1e37d80
[   19.780718]  DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[   19.780721] Process v4l_id (pid: 346, ti=f1e36000 task=f2bc2a60 task.ti=f1e36000)
[   19.780723] Stack:
[   19.780725]  f1e37d8c c117d395 c33ec93c f1e37db4 c117a0f9 00000002 00000000 c1725e54
[   19.780732]  00000001 00000007 f2918c90 f1c3f200 c33ec95c f1e37dd4 c1789d3d 22222222
[   19.780740]  22222222 22222222 f2918c90 f1c3f200 f29194f4 f1e37de8 c178d5c4 c1725e54
[   19.780747] Call Trace:
[   19.780752]  [<c117d395>] st_kim_ref+0x28/0x41
[   19.780756]  [<c117a0f9>] st_register+0x29/0x562
[   19.780761]  [<c1725e54>] ? v4l2_open+0x111/0x1e3
[   19.780766]  [<c1789d3d>] fmc_prepare+0x97/0x424
[   19.780770]  [<c178d5c4>] fm_v4l2_fops_open+0x70/0x106
[   19.780773]  [<c1725e54>] ? v4l2_open+0x111/0x1e3
[   19.780777]  [<c1725e9b>] v4l2_open+0x158/0x1e3
[   19.780782]  [<c065173b>] chrdev_open+0x22c/0x276
[   19.780787]  [<c0647c4e>] __dentry_open+0x35c/0x581
[   19.780792]  [<c06498f9>] nameidata_to_filp+0x7c/0x96
[   19.780795]  [<c065150f>] ? cdev_put+0x57/0x57
[   19.780800]  [<c0660cad>] do_last+0x743/0x9d4
[   19.780804]  [<c065d5fc>] ? path_init+0x1ee/0x596
[   19.780808]  [<c0661481>] path_openat+0x10c/0x597
[   19.780813]  [<c05204a1>] ? trace_hardirqs_off+0x27/0x37
[   19.780817]  [<c0509651>] ? local_clock+0x78/0xc7
[   19.780821]  [<c0661945>] do_filp_open+0x39/0xc2
 [   19.780827]  [<c1cabc76>] ? _raw_spin_unlock+0x4c/0x5d
[   19.780831]  [<c0674ccd>] ? alloc_fd+0x19e/0x1b7
[   19.780836]  [<c06499ca>] do_sys_open+0xb7/0x1bd
[   19.780840]  [<c0608eea>] ? sys_munmap+0x78/0x8d
[   19.780844]  [<c0649b06>] sys_open+0x36/0x58
[   19.780849]  [<c1cb809f>] sysenter_do_call+0x12/0x38
[   19.780852] Code: d8 2f 20 c3 01 83 15 dc 2f 20 c3 00 f0 ff 00 83 05 e0 2f 20 c3 01 83 15 e4 2f 20 c3 00 5d c3 55 89 e5 3e 8d 74 26 00 85 c0 74 28 <8b> 40 04 83 05 e8 2f 20 c3 01 83 15 ec 2f 20 c3 00 85 c0 74 13
[   19.780889] EIP: [<c112efcd>] dev_get_drvdata+0xc/0x46 SS:ESP 0068:f1e37d80
[   19.780894] CR2: 000000000000000c
[   19.780898] ---[ end trace e7d1d0f6a2d1d390 ]---

The id of 0 passed to st_kim_ref() found no device, keeping pdev NULL
and causing the subsequent pdev->dev access to trigger a NULL pointer
dereference. After making st_kim_ref() check for NULL, the
st_unregister() function needed to be updated to handle the case where
st_gdata was not set by st_kim_ref().

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
…ning (try #4)

Hopefully last version. Base signing check on CAP_UNIX instead of
tcon->unix_ext, also clean up the comments a bit more.

According to Hongwei Sun's blog posting here:

    http://blogs.msdn.com/b/openspecification/archive/2009/04/10/smb-maximum-transmit-buffer-size-and-performance-tuning.aspx

CAP_LARGE_WRITEX is ignored when signing is active. Also, the maximum
size for a write without CAP_LARGE_WRITEX should be the maxBuf that
the server sent in the NEGOTIATE request.

Fix the wsize negotiation to take this into account. While we're at it,
alter the other wsize definitions to use sizeof(WRITE_REQ) to allow for
slightly larger amounts of data to potentially be written per request.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
mthuurne pushed a commit that referenced this issue Aug 2, 2011
* git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
  cifs: fix wsize negotiation to respect max buffer size and active signing (try #4)
  CIFS: Fix problem with 3.0-rc1 null user mount failure