@@ -157,13 +157,13 @@ For example, smp_mb__before_atomic_dec() can be used like so:
 	smp_mb__before_atomic_dec();
 	atomic_dec(&obj->ref_count);
 
-It makes sure that all memory operations preceeding the atomic_dec()
+It makes sure that all memory operations preceding the atomic_dec()
 call are strongly ordered with respect to the atomic counter
-operation.  In the above example, it guarentees that the assignment of
+operation.  In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicitl smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic_dec() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
 
@@ -173,11 +173,11 @@ ordering with respect to memory operations after an atomic_dec() call
 (smp_mb__{before,after}_atomic_inc()).
 
 A missing memory barrier in the cases where they are required by the
-atomic_t implementation above can have disasterous results.  Here is
-an example, which follows a pattern occuring frequently in the Linux
+atomic_t implementation above can have disastrous results.  Here is
+an example, which follows a pattern occurring frequently in the Linux
 kernel.  It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
-be guarenteed that no other entity can be accessing the object:
+be guaranteed that no other entity can be accessing the object:
 
 static void obj_list_add(struct obj *obj)
 {
@@ -291,19 +291,19 @@ to the size of an "unsigned long" C data type, and are least of that
 size.  The endianness of the bits within each "unsigned long" are the
 native endianness of the cpu.
 
-	void set_bit(unsigned long nr, volatils unsigned long *addr);
-	void clear_bit(unsigned long nr, volatils unsigned long *addr);
-	void change_bit(unsigned long nr, volatils unsigned long *addr);
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 These routines set, clear, and change, respectively, the bit number
 indicated by "nr" on the bit mask pointed to by "ADDR".
 
 They must execute atomically, yet there are no implicit memory barrier
 semantics required of these interfaces.
 
-	int test_and_set_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_clear_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_change_bit(unsigned long nr, volatils unsigned long *addr);
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
 
 Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
@@ -335,7 +335,7 @@ subsequent memory operation is made visible. For example:
 		/* ... */;
 	obj->killed = 1;
 
-The implementation of test_and_set_bit() must guarentee that
+The implementation of test_and_set_bit() must guarantee that
 "obj->dead = 1;" is visible to cpus before the atomic memory operation
 done by test_and_set_bit() becomes visible.  Likewise, the atomic
 memory operation done by test_and_set_bit() must become visible before
@@ -474,7 +474,7 @@ Now, as far as memory barriers go, as long as spin_lock()
 strictly orders all subsequent memory operations (including
 the cas()) with respect to itself, things will be fine.
 
-Said another way, _atomic_dec_and_lock() must guarentee that
+Said another way, _atomic_dec_and_lock() must guarantee that
 a counter dropping to zero is never made visible before the
 spinlock being acquired.
 