Comparing changes

Choose two branches to see what's changed or to start a new pull request. If you need to, you can also compare across forks.

base fork: m-labs/linux-milkymist
base: 919c3d59ec3b
...
head fork: m-labs/linux-milkymist
compare: e7f8b2eb45d1
  • 15 commits
  • 11 files changed
  • 0 commit comments
  • 1 contributor
Commits on Feb 26, 2013
@larsclausen larsclausen lm32: Get rid of pm_idle
This is currently unused and will be gone in upstream soon.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
64d30a6
@larsclausen larsclausen lm32: Use _save_altstack helper
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
b533446
@larsclausen larsclausen lm32: switch to generic sys_sigaltstack
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
a94271a
@larsclausen larsclausen lm32: sys_rt_sigreturn: Use current_pt_regs()
Use current_pt_regs() to get a pointer to the registers of the current process.
None of the syscalls expect a pointer to registers of the current process in r7
anymore, so we can also get rid of that.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
d2374d9
@larsclausen larsclausen lm32: Properly update stack pointer in copy_thread()
If copy_thread() receives a new stack pointer address for the new thread, we
need to update the register set with the new stack address.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
fa7fba2
@larsclausen larsclausen lm32: Don't clobber userspace stack during syscall
We shouldn't use the userspace stack to back up the registers that we use
during the early syscall code; this only works by chance.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
0a92396
@larsclausen larsclausen lm32: Switch to kernel stack during interrupts
Don't run the interrupt handlers on the userspace stack; this is quite wrong,
quite dangerous, and may cause random process corruption. Instead, switch to
the kernel stack as soon as we enter kernel space.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
fbc42a6
@larsclausen larsclausen lm32: Simplify current_thread_info()
Now that we are always on the kernel stack in kernel mode, we can calculate
the current thread_info address from the stack pointer. The thread_info is
always stored at the lowest address of a process's kernel stack.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
b1ea119
@larsclausen larsclausen lm32: We are always on kernel stack in resume
We are always on the kernel stack now and always resume to the kernel stack
during a context switch, so there is no need to check which stack we are on.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
664ec9d
@larsclausen larsclausen lm32: Inline _{save,restore}_irq_frame
There is really no point in having these as separate functions since each has
only one invocation. Making them macros also means we can reuse them in
_{save,restore}_syscall_frame to get rid of some duplicated code.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
d5c38bd
@larsclausen larsclausen lm32: Remove usp field from thread_struct struct
The usp is always stored in the sp field of the pt_regs of the process. No need
to track it separately.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
c6e4d95
@larsclausen larsclausen lm32: Simplify mode switching
Whenever we enter kernel mode we switch to the kernel stack. We always start
at the bottom of the kernel stack, and once we leave kernel mode we are at the
bottom of the kernel stack again. Since we can calculate the kernel stack
address from the current thread_info address (which is always the lowest
address of the kernel stack), we no longer have to track the kernel stack
address of a process separately. The which_stack field is also redundant,
since we are always on the kernel stack in kernel mode and always on the user
stack in user mode, so we can remove it altogether as well.

Finally, as a minor optimization, put all the global variables used during
mode switching into a common struct. For one, this is quite cache friendly,
and we also only have to load the address of the struct once and can then use
relative addressing to access the members, instead of loading the address of
each global variable individually.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
50c3718
@larsclausen larsclausen lm32: Save the process state in the thread_info struct
Put the process state that is saved during process switching in the
thread_info struct instead of on top of the stack. This has the advantage that
we always know where the registers are saved and don't need to track it, so we
can finally get rid of the ksp field of the thread_struct. Also, only save the
callee-saved registers; all other registers will already have been saved in
previous stack frames. As a result, copy_thread() also looks much nicer.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
52d0953
@larsclausen larsclausen lm32: Cleanup start_thread()
There is no need to call set_fs(USER_DS) in start_thread() since this is
already done in generic code. Also, don't memset regs to 0, since some of the
callers pass in preinitialized registers.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
9a504d6
@larsclausen larsclausen lm32: Cleanup processor.h/process.c a bit
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
e7f8b2e
1  arch/lm32/Kconfig
@@ -8,6 +8,7 @@ config LM32
select GENERIC_CPU_DEVICES
select GENERIC_SYSCALL_TABLE
select GENERIC_ATOMIC64
+ select GENERIC_SIGALTSTACK
select ARCH_REQUIRE_GPIOLIB
select OF
select OF_EARLY_FLATTREE
40 arch/lm32/include/asm/processor.h
@@ -54,29 +54,14 @@
*/
#define TASK_UNMAPPED_BASE 0
-/*
- * if you change this structure, you must change the code and offsets
- * in asm-offsets.c
- */
-
-struct thread_struct {
- unsigned long ksp; /* kernel stack pointer */
- unsigned long usp; /* user stack pointer */
- unsigned long which_stack; /* 0 if we are on kernel stack, 1 if we are on user stack */
-};
+struct thread_struct {};
+#define INIT_THREAD {}
#define KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + THREAD_SIZE - 32)
#define task_pt_regs(tsk) ((struct pt_regs *)KSTK_TOS(tsk) - 1)
#define KSTK_EIP(tsk) 0
#define KSTK_ESP(tsk) 0
-#define INIT_THREAD { \
- sizeof(init_stack) + (unsigned long) init_stack, 0, \
- 0, \
- 0 \
-}
-
-#define reformat(_regs) do { } while (0)
/*
* Do necessary setup to start up a newly executed thread.
@@ -86,26 +71,19 @@ extern void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long
/* Forward declaration, a strange C thing */
struct task_struct;
-/* Free all resources held by a thread. */
-static inline void release_thread(struct task_struct *dead_task)
+static inline void release_thread(struct task_struct *dead_task) { }
+static inline void exit_thread(void) { }
+
+static inline unsigned long thread_saved_pc(struct task_struct *tsk)
{
+ return 0;
}
-/* Prepare to copy thread state - unlazy all lazy status */
-#define prepare_to_copy(tsk) do { } while (0)
-
-extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-
-/*
- * Free current thread data structures etc..
- */
-static inline void exit_thread(void)
+static inline unsigned long get_wchan(struct task_struct *p)
{
+ return 0;
}
-unsigned long thread_saved_pc(struct task_struct *tsk);
-unsigned long get_wchan(struct task_struct *p);
-
#define cpu_relax() barrier()
#endif
15 arch/lm32/include/asm/switch_to.h
@@ -1,16 +1,15 @@
#ifndef __LM32_SYSTEM_H
#define __LM32_SYSTEM_H
-#include <linux/thread_info.h>
#include <linux/linkage.h>
+#include <linux/thread_info.h>
-struct task_struct;
-extern asmlinkage struct task_struct* resume(struct task_struct* last, struct task_struct* next);
+extern asmlinkage struct task_struct* _switch_to(struct task_struct *,
+ struct thread_info *, struct thread_info *);
-#define switch_to(prev, next, last) \
- do { \
- lm32_current_thread = task_thread_info(next); \
- ((last) = resume((prev), (next))); \
- } while (0)
+#define switch_to(prev,next,last) \
+do { \
+ last = _switch_to(prev, task_thread_info(prev), task_thread_info(next)); \
+} while (0)
#endif
40 arch/lm32/include/asm/thread_info.h
@@ -25,6 +25,30 @@ typedef struct {
unsigned long seg;
} mm_segment_t;
+struct cpu_context_save {
+ unsigned long r11;
+ unsigned long r12;
+ unsigned long r13;
+ unsigned long r14;
+ unsigned long r15;
+ unsigned long r16;
+ unsigned long r17;
+ unsigned long r18;
+ unsigned long r19;
+ unsigned long r20;
+ unsigned long r21;
+ unsigned long r22;
+ unsigned long r23;
+ unsigned long r24;
+ unsigned long r25;
+ unsigned long gp;
+ unsigned long fp;
+ unsigned long sp;
+ unsigned long ra;
+ unsigned long ea;
+ unsigned long ba;
+};
+
/*
* low level task data.
* If you change this, change the TI_* offsets below to match.
@@ -37,6 +61,7 @@ struct thread_info {
int preempt_count; /* 0 => preemptable, <0 => BUG */
struct restart_block restart_block;
mm_segment_t addr_limit;
+ struct cpu_context_save cpu_context;
};
#define init_thread_info (init_thread_union.thread_info)
@@ -44,13 +69,22 @@ struct thread_info {
/* how to get the thread information struct from C */
-static inline struct thread_info *current_thread_info(void) __attribute_const__;
+static inline struct thread_info *current_thread_info(void) __pure;
+
+struct lm32_state {
+ struct thread_info *current_thread;
+ unsigned long kernel_mode;
+ unsigned long saved_r9;
+ unsigned long saved_r10;
+ unsigned long saved_r11;
+};
-extern struct thread_info* lm32_current_thread;
+extern struct lm32_state lm32_state;
static inline struct thread_info *current_thread_info(void)
{
- return lm32_current_thread;
+ register unsigned long sp asm ("sp");
+ return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
34 arch/lm32/kernel/asm-offsets.c
@@ -27,10 +27,6 @@ int main(void)
DEFINE(TASK_MM, offsetof(struct task_struct, mm));
DEFINE(TASK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
- DEFINE(TASK_KSP, offsetof(struct task_struct, thread.ksp));
- DEFINE(TASK_USP, offsetof(struct task_struct, thread.usp));
- DEFINE(TASK_WHICH_STACK, offsetof(struct task_struct, thread.which_stack));
-
DEFINE(PT_R0, offsetof(struct pt_regs, r0));
DEFINE(PT_R1, offsetof(struct pt_regs, r1));
DEFINE(PT_R2, offsetof(struct pt_regs, r2));
@@ -72,9 +68,33 @@ int main(void)
DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
DEFINE(_THREAD_SIZE, THREAD_SIZE);
- DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
- DEFINE(THREAD_USP, offsetof(struct thread_struct, usp));
- DEFINE(THREAD_WHICH_STACK, offsetof(struct thread_struct, which_stack));
+ DEFINE(TI_CC_R11, offsetof(struct thread_info, cpu_context.r11));
+ DEFINE(TI_CC_R12, offsetof(struct thread_info, cpu_context.r12));
+ DEFINE(TI_CC_R13, offsetof(struct thread_info, cpu_context.r13));
+ DEFINE(TI_CC_R14, offsetof(struct thread_info, cpu_context.r14));
+ DEFINE(TI_CC_R15, offsetof(struct thread_info, cpu_context.r15));
+ DEFINE(TI_CC_R16, offsetof(struct thread_info, cpu_context.r16));
+ DEFINE(TI_CC_R17, offsetof(struct thread_info, cpu_context.r17));
+ DEFINE(TI_CC_R18, offsetof(struct thread_info, cpu_context.r18));
+ DEFINE(TI_CC_R19, offsetof(struct thread_info, cpu_context.r19));
+ DEFINE(TI_CC_R20, offsetof(struct thread_info, cpu_context.r20));
+ DEFINE(TI_CC_R21, offsetof(struct thread_info, cpu_context.r21));
+ DEFINE(TI_CC_R22, offsetof(struct thread_info, cpu_context.r22));
+ DEFINE(TI_CC_R23, offsetof(struct thread_info, cpu_context.r23));
+ DEFINE(TI_CC_R24, offsetof(struct thread_info, cpu_context.r24));
+ DEFINE(TI_CC_R25, offsetof(struct thread_info, cpu_context.r25));
+ DEFINE(TI_CC_GP, offsetof(struct thread_info, cpu_context.gp));
+ DEFINE(TI_CC_FP, offsetof(struct thread_info, cpu_context.fp));
+ DEFINE(TI_CC_SP, offsetof(struct thread_info, cpu_context.sp));
+ DEFINE(TI_CC_RA, offsetof(struct thread_info, cpu_context.ra));
+ DEFINE(TI_CC_EA, offsetof(struct thread_info, cpu_context.ea));
+ DEFINE(TI_CC_BA, offsetof(struct thread_info, cpu_context.ba));
+
+ DEFINE(STATE_CURRENT_THREAD, offsetof(struct lm32_state, current_thread));
+ DEFINE(STATE_KERNEL_MODE, offsetof(struct lm32_state, kernel_mode));
+ DEFINE(STATE_SAVED_R9, offsetof(struct lm32_state, saved_r9));
+ DEFINE(STATE_SAVED_R10, offsetof(struct lm32_state, saved_r10));
+ DEFINE(STATE_SAVED_R11, offsetof(struct lm32_state, saved_r11));
return 0;
}
406 arch/lm32/kernel/entry.S
@@ -62,51 +62,71 @@ ENTRY(interrupt_handler)
nop
nop
-ENTRY(system_call)
- /* break */
- /* store away r9,r10 so that we can use it here TODO: use clobbered ones*/
- sw (sp+0), r9 /* needed for various */
- sw (sp+-4), r10 /* needed for current = current_thread_info()->task */
- sw (sp+-8), r11 /* needed for user stack pointer, if switching */
-
- /* test if already on kernel stack: test current_thread_info->task->which_stack */
- mvhi r9, hi(lm32_current_thread)
- ori r9, r9, lo(lm32_current_thread)
- lw r9, (r9+0) /* dereference lm32_current_thread */
- lw r10, (r9+TI_TASK) /* load pointer to task */
- lw r9, (r10+TASK_WHICH_STACK)
-
- mv r11, sp /* remember sp for restoring r9, r10, r11 */
-
- be r9, r0, 1f
-
- /* we are on user stack, have to switch */
- sw (r10+TASK_USP), sp /* store usp */
- lw sp, (r10+TASK_KSP) /* load ksp */
- sw (r10+TASK_WHICH_STACK), r0 /* set which_stack to 0 */
-
+.macro switch_to_kernel_mode
+ /*
+ * Store away r9, r10 and r11 so that we can use them here. The tricky part is
+ * that we need to do this without clobbering any other registers. The r0
+ * register is supposed to always be 0. Since we are running with interrupts
+ * disabled we can allow ourselves to temporarily change its value. Note though
+ * that r0 is also used in pseudo instructions like 'mv', so we need to restore
+ * it immediately afterwards.
+ */
+ mvhi r0, hi(lm32_state)
+ ori r0, r0, lo(lm32_state)
+ sw (r0+STATE_SAVED_R9), r9
+ sw (r0+STATE_SAVED_R10), r10
+ sw (r0+STATE_SAVED_R11), r11
+ mv r9, r0 /* mv is 'or rX, rY, r0', so this works */
+ xor r0, r0, r0
+
+ /*
+ * store the current kernel_mode value to the stack frame and set
+ * kernel_mode to 1
+ */
+ lw r10, (r9+STATE_KERNEL_MODE)
+
+ mv r11, sp
+
+ bne r10, r0, 1f
+
+ lw sp, (r9+STATE_CURRENT_THREAD)
+ addi sp, sp, THREAD_SIZE - 36
1:/* already on kernel stack */
addi sp, sp, -132
+
+ /* save pt_mode, stack pointer and ra in current stack frame */
+ sw (sp+132), r10
sw (sp+116), r11
+ sw (sp+120), ra
+
+ mvi r10, PT_MODE_KERNEL
+ sw (r9+STATE_KERNEL_MODE), r10
- /* restore r9, r10, r11 */
- lw r9, (r11+0)
- lw r10, (r11+-4)
- lw r11, (r11+-8)
+ lw r11, (r9+STATE_SAVED_R11)
+ lw r10, (r9+STATE_SAVED_R10)
+ lw r9, (r9+STATE_SAVED_R9)
+.endm
- /* save registers */
- sw (sp + 120), ra
+.macro switch_to_user_mode
+ rcsr r2, IE
+ andi r3, r2, 0xfffe
+ wcsr IE, r3
+ lw r2, (sp+132)
+ mvhi r1, hi(lm32_state)
+ ori r1, r1, lo(lm32_state)
+ sw (r1+STATE_KERNEL_MODE), r2
+.endm
+
+ENTRY(system_call)
+ switch_to_kernel_mode
+ /* save registers */
calli _save_syscall_frame
rcsr r11, IE
ori r11, r11, 1
wcsr IE, r11
- /* r7 always holds the pointer to struct pt_regs */
- addi r7, sp, 4
- #addi r4, sp, 4
-
/* r8 always holds the syscall number */
/* check if syscall number is valid */
mvi r9, __NR_syscalls
@@ -167,21 +187,44 @@ ENTRY(ret_from_kernel_thread)
ori ra, ra, lo(syscall_tail)
b r11
-ENTRY(sys_rt_sigreturn)
- mv r1, r7
- bi _sys_rt_sigreturn
+.macro save_irq_frame
+ sw (sp+8), r1
+ sw (sp+12), r2
+ sw (sp+16), r3
+ sw (sp+20), r4
+ sw (sp+24), r5
+ sw (sp+28), r6
+ sw (sp+32), r7
+ sw (sp+36), r8
+ sw (sp+40), r9
+ sw (sp+44), r10
+ /* ra (sp + 120) has already been written */
+ sw (sp+124), ea
+.endm
-ENTRY(sys_sigaltstack)
- lw r3, (r7+112)
- bi do_sigaltstack
+/* restore all caller saved registers saved in save_irq_frame */
+.macro restore_irq_frame
+ lw r1, (sp+8);
+ lw r2, (sp+12);
+ lw r3, (sp+16);
+ lw r4, (sp+20);
+ lw r5, (sp+24);
+ lw r6, (sp+28);
+ lw r7, (sp+32);
+ lw r8, (sp+36);
+ lw r9, (sp+40);
+ lw r10, (sp+44);
+ lw ra, (sp+120)
+ lw ea, (sp+124)
+ lw sp, (sp+116)
+.endm
/* in IRQ we call a function between save and restore */
/* we therefore only save and restore the caller saved registers */
/* (r1-r10, ra, ea because an interrupt could interrupt another one) */
_long_interrupt_handler:
- addi sp, sp, -132
- sw (sp+120), ra
- calli _save_irq_frame
+ switch_to_kernel_mode
+ save_irq_frame
/* Workaround hardware hazard. Sometimes the interrupt handler is entered
* although interrupts are disabled */
@@ -225,66 +268,12 @@ _long_interrupt_handler:
addi r1, sp, 4
calli manage_signals_irq
6:
- bi _restore_irq_frame_and_return
-
-_save_irq_frame:
- sw (sp+8), r1
- sw (sp+12), r2
- sw (sp+16), r3
- sw (sp+20), r4
- sw (sp+24), r5
- sw (sp+28), r6
- sw (sp+32), r7
- sw (sp+36), r8
- sw (sp+40), r9
- sw (sp+44), r10
- /* ra (sp + 120) has already been written */
- sw (sp+124), ea
-
- mvhi r1, hi(kernel_mode)
- ori r1, r1, lo(kernel_mode)
- lw r2, (r1+0)
- sw (sp+132), r2
- mvi r2, PT_MODE_KERNEL
- sw (r1+0), r2
-ret
-
-/* restore all caller saved registers saved in _save_irq_frame and return from exception */
-_restore_irq_frame_and_return:
- rcsr r2, IE
- andi r3, r2, 0xfffe
- wcsr IE, r3
- lw r2, (sp+132)
- mvhi r1, hi(kernel_mode)
- ori r1, r1, lo(kernel_mode)
- sw (r1+0), r2
-
- lw r1, (sp+8);
- lw r2, (sp+12);
- lw r3, (sp+16);
- lw r4, (sp+20);
- lw r5, (sp+24);
- lw r6, (sp+28);
- lw r7, (sp+32);
- lw r8, (sp+36);
- lw r9, (sp+40);
- lw r10, (sp+44);
- lw ra, (sp+120)
- lw ea, (sp+124)
- addi sp, sp, 132
+ switch_to_user_mode
+ restore_irq_frame
eret
_save_syscall_frame:
- sw (sp+8), r1
- sw (sp+12), r2
- sw (sp+16), r3
- sw (sp+20), r4
- sw (sp+24), r5
- sw (sp+28), r6
- sw (sp+32), r7
- sw (sp+36), r8
- sw (sp+40), r9
- sw (sp+44), r10
+ save_irq_frame
sw (sp+48), r11
sw (sp+52), r12
sw (sp+56), r13
@@ -303,15 +292,8 @@ _save_syscall_frame:
sw (sp+108), r26
sw (sp+112), r27
/* ra (sp + 120) has already been written */
- sw (sp+124), ea
sw (sp+128), ba
- mvhi r11, hi(kernel_mode)
- ori r11, r11, lo(kernel_mode)
- lw r12, (r11+0)
- sw (sp+132), r12
- mvi r12, PT_MODE_KERNEL
- sw (r11+0), r12
ret
/************************/
@@ -325,68 +307,29 @@ _save_syscall_frame:
#define RETURN_FROM_SYSCALL_OR_EXCEPTION(label, addr_register, return_instr) \
label: \
- rcsr r2, IE; \
- andi r3, r2, 0xfffe; \
- wcsr IE, r3; \
- lw r2, (sp+132); \
- mvhi r1, hi(kernel_mode); \
- ori r1, r1, lo(kernel_mode); \
- sw (r1+0), r2; \
- /* prepare switch to user stack but keep kernel stack pointer in r11 */ \
- /* r9: scratch register */ \
- /* r10: current = current_thread_info()->task */ \
- /* r11: ksp backup */ \
- /* setup r10 = current */ \
- addi sp, sp, 132; \
- bne r2, r0, 1f; \
- mvhi r9, hi(lm32_current_thread); \
- ori r9, r9, lo(lm32_current_thread); \
- lw r9, (r9+0); /* dereference lm32_current_thread */ \
- lw r10, (r9+TI_TASK); /* load pointer to task */ \
- /* set task->thread.which_stack to 1 (user stack) */ \
- mvi r9, TASK_USP - TASK_KSP; \
- sw (r10+TASK_WHICH_STACK), r9; \
- /* store ksp (after restore of frame) into task->thread.ksp */ \
- sw (r10+TASK_KSP), sp; \
- /* save sp into r11 */ \
- /* get usp into sp*/ \
- 1: \
- addi r11, sp, -132; \
+ switch_to_user_mode; \
/* restore frame from original kernel stack */ \
/* restore r1 as the return value is stored onto the stack */ \
- lw r1, (r11+8); \
- lw r2, (r11+12); \
- lw r3, (r11+16); \
- lw r4, (r11+20); \
- lw r5, (r11+24); \
- lw r6, (r11+28); \
- lw r7, (r11+32); \
- lw r8, (r11+36); \
- lw r9, (r11+40); \
- lw r10, (r11+44); \
- /* skip r11 */; \
- lw r12, (r11+52); \
- lw r13, (r11+56); \
- lw r14, (r11+60); \
- lw r15, (r11+64); \
- lw r16, (r11+68); \
- lw r17, (r11+72); \
- lw r18, (r11+76); \
- lw r19, (r11+80); \
- lw r20, (r11+84); \
- lw r21, (r11+88); \
- lw r22, (r11+92); \
- lw r23, (r11+96); \
- lw r24, (r11+100); \
- lw r25, (r11+104); \
- lw r26, (r11+108); \
- lw r27, (r11+112); \
- lw sp, (r11+116); \
- lw ra, (r11+120); \
- lw ea, (r11+124); \
- lw ba, (r11+128); \
- /* r11 must be restored last */ \
- lw r11, (r11+48); \
+ lw r11, (sp+48); \
+ lw r12, (sp+52); \
+ lw r13, (sp+56); \
+ lw r14, (sp+60); \
+ lw r15, (sp+64); \
+ lw r16, (sp+68); \
+ lw r17, (sp+72); \
+ lw r18, (sp+76); \
+ lw r19, (sp+80); \
+ lw r20, (sp+84); \
+ lw r21, (sp+88); \
+ lw r22, (sp+92); \
+ lw r23, (sp+96); \
+ lw r24, (sp+100); \
+ lw r25, (sp+104); \
+ lw r26, (sp+108); \
+ lw r27, (sp+112); \
+ lw ra, (sp+120); \
+ lw ba, (sp+128); \
+ restore_irq_frame; \
/* scall stores pc into ea/ba register, not pc+4, so we have to add 4 */ \
addi addr_register, addr_register, 4; \
return_instr
@@ -394,88 +337,61 @@ label: \
RETURN_FROM_SYSCALL_OR_EXCEPTION(_restore_and_return_exception,ea,eret)
/*
- * struct task_struct* resume(struct task_struct* prev, struct task_struct* next)
+ * struct task_struct* switch_to(struct task_struct* prev,
+ * struct thread_info *prev_ti, struct thread_info *next_ti)
* Returns the previous task
*/
-ENTRY(resume)
- /* store whole state to current stack (may be usp or ksp) */
- addi sp, sp, -132
- sw (sp+16), r3
- sw (sp+20), r4
- sw (sp+24), r5
- sw (sp+28), r6
- sw (sp+32), r7
- sw (sp+36), r8
- sw (sp+40), r9
- sw (sp+44), r10
- sw (sp+48), r11
- sw (sp+52), r12
- sw (sp+56), r13
- sw (sp+60), r14
- sw (sp+64), r15
- sw (sp+68), r16
- sw (sp+72), r17
- sw (sp+76), r18
- sw (sp+80), r19
- sw (sp+84), r20
- sw (sp+88), r21
- sw (sp+92), r22
- sw (sp+96), r23
- sw (sp+100), r24
- sw (sp+104), r25
- sw (sp+108), r26
- sw (sp+112), r27
- addi r3, sp, 132 /* special case for stack pointer */
- sw (sp+116), r3 /* special case for stack pointer */
- sw (sp+120), ra
-/* sw (sp+124), ea
- sw (sp+128), ba */
-
-
- /* TODO: Aren't we always on kernel stack at this point? */
-
- /* find out whether we are on kernel or user stack */
- lw r3, (r1 + TASK_WHICH_STACK)
- add r3, r3, r1
- sw (r3 + TASK_KSP), sp
+ENTRY(_switch_to)
+
+ /* r1 gets passed through unmodified */
+
+ sw (r2+TI_CC_R11), r11
+ sw (r2+TI_CC_R12), r12
+ sw (r2+TI_CC_R13), r13
+ sw (r2+TI_CC_R14), r14
+ sw (r2+TI_CC_R15), r15
+ sw (r2+TI_CC_R16), r16
+ sw (r2+TI_CC_R17), r17
+ sw (r2+TI_CC_R18), r18
+ sw (r2+TI_CC_R19), r19
+ sw (r2+TI_CC_R20), r20
+ sw (r2+TI_CC_R21), r21
+ sw (r2+TI_CC_R22), r22
+ sw (r2+TI_CC_R23), r23
+ sw (r2+TI_CC_R24), r24
+ sw (r2+TI_CC_R25), r25
+ sw (r2+TI_CC_GP), r26
+ sw (r2+TI_CC_FP), r27
+ sw (r2+TI_CC_SP), sp
+ sw (r2+TI_CC_RA), ra
+ sw (r2+TI_CC_EA), ea
+ sw (r2+TI_CC_BA), ba
+
+ mvhi r4, hi(lm32_state)
+ ori r4, r4, lo(lm32_state)
+ sw (r4+STATE_CURRENT_THREAD), r3
/* restore next */
-
- /* find out whether we will be on kernel or user stack */
- lw r3, (r2 + TASK_WHICH_STACK)
- add r3, r3, r2
- lw sp, (r3 + TASK_KSP)
-
- lw r3, (sp+16)
- lw r4, (sp+20)
- lw r5, (sp+24)
- lw r6, (sp+28)
- lw r7, (sp+32)
- lw r8, (sp+36)
- lw r9, (sp+40)
- lw r10, (sp+44)
- lw r11, (sp+48)
- lw r12, (sp+52)
- lw r13, (sp+56)
- lw r14, (sp+60)
- lw r15, (sp+64)
- lw r16, (sp+68)
- lw r17, (sp+72)
- lw r18, (sp+76)
- lw r19, (sp+80)
- lw r20, (sp+84)
- lw r21, (sp+88)
- lw r22, (sp+92)
- lw r23, (sp+96)
- lw r24, (sp+100)
- lw r25, (sp+104)
- lw r26, (sp+108)
- lw r27, (sp+112)
- /* skip sp for now */
- lw ra, (sp+120)
-/* lw ea, (sp+124)
- lw ba, (sp+128) */
- /* Stack pointer must be restored last --- it will be updated */
- lw sp, (sp+116)
+ lw r11, (r3+TI_CC_R11)
+ lw r12, (r3+TI_CC_R12)
+ lw r13, (r3+TI_CC_R13)
+ lw r14, (r3+TI_CC_R14)
+ lw r15, (r3+TI_CC_R15)
+ lw r16, (r3+TI_CC_R16)
+ lw r17, (r3+TI_CC_R17)
+ lw r18, (r3+TI_CC_R18)
+ lw r19, (r3+TI_CC_R19)
+ lw r20, (r3+TI_CC_R20)
+ lw r21, (r3+TI_CC_R21)
+ lw r22, (r3+TI_CC_R22)
+ lw r23, (r3+TI_CC_R23)
+ lw r24, (r3+TI_CC_R24)
+ lw r25, (r3+TI_CC_R25)
+ lw r26, (r3+TI_CC_GP)
+ lw r27, (r3+TI_CC_FP)
+ lw sp, (r3+TI_CC_SP)
+ lw ra, (r3+TI_CC_RA)
+ lw ea, (r3+TI_CC_EA)
+ lw ba, (r3+TI_CC_BA)
ret
104 arch/lm32/kernel/process.c
@@ -50,14 +50,6 @@ asmlinkage void ret_from_fork(void);
asmlinkage void ret_from_kernel_thread(void);
asmlinkage void syscall_tail(void);
-struct thread_info* lm32_current_thread;
-
-/*
- * The following aren't currently used.
- */
-void (*pm_idle)(void);
-EXPORT_SYMBOL(pm_idle);
-
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
@@ -70,8 +62,6 @@ static void default_idle(void)
__asm__ __volatile__("and r0, r0, r0" ::: "memory");
}
-void (*idle)(void) = default_idle;
-
/*
* The idle thread. There's no useful work to be
* done, so just try to conserve power and have a
@@ -82,7 +72,7 @@ void cpu_idle(void)
{
/* endless idle loop with no priority at all */
while (1) {
- idle();
+ default_idle();
preempt_enable_no_resched();
schedule();
preempt_disable();
@@ -132,80 +122,28 @@ void flush_thread(void)
{
}
-/* no stack unwinding */
-unsigned long get_wchan(struct task_struct *p)
-{
- return 0;
-}
-
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
- return 0;
-}
-
-int copy_thread(unsigned long clone_flags,
- unsigned long usp_thread_fn, unsigned long thread_fn_arg,
- struct task_struct *p)
+int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
+ unsigned long thread_fn_arg, struct task_struct *p)
{
- unsigned long child_tos = KSTK_TOS(p);
struct pt_regs *childregs = task_pt_regs(p);
+ struct cpu_context_save *cc = &task_thread_info(p)->cpu_context;
- if (p->flags & PF_KTHREAD) {
- /* kernel thread */
-
- childregs = (struct pt_regs *)(child_tos) - 1;
- memset(childregs, 0, sizeof(childregs));
- childregs->r11 = usp_thread_fn;
- childregs->r12 = thread_fn_arg;
- /* childregs = full task switch frame on kernel stack of child */
-
- /* return via ret_from_fork */
- childregs->ra = (unsigned long)ret_from_kernel_thread;
+ memset(cc, 0, sizeof(*cc));
+ cc->sp = (unsigned long)childregs - 4;
- /* setup ksp/usp */
- p->thread.ksp = (unsigned long)childregs - 4; /* perhaps not necessary */
- childregs->sp = p->thread.ksp;
- p->thread.usp = 0;
- p->thread.which_stack = 0; /* kernel stack */
+ if (p->flags & PF_KTHREAD) {
+ memset(childregs, 0, sizeof(*childregs));
+ childregs->pt_mode = PT_MODE_KERNEL;
- //printk("copy_thread1: ->pid=%d tsp=%lx r5=%lx p->thread.ksp=%lx p->thread.usp=%lx\n",
- // p->pid, task_stack_page(p), childregs->r5, p->thread.ksp, p->thread.usp);
+ cc->r11 = usp_thread_fn;
+ cc->r12 = thread_fn_arg;
+ cc->ra = (unsigned long)ret_from_kernel_thread;
} else {
- /* userspace thread (vfork, clone) */
-
- struct pt_regs* childsyscallregs;
-
- /* childsyscallregs = full syscall frame on kernel stack of child */
- childsyscallregs = (struct pt_regs *)(child_tos) - 1; /* 32 = safety */
- /* child shall have same syscall context to restore as parent has ... */
- *childsyscallregs = *current_pt_regs();
-
- /* childregs = full task switch frame on kernel stack of child below * childsyscallregs */
- childregs = childsyscallregs - 1;
- memset(childregs, 0, sizeof(childregs));
-
- /* user stack pointer is shared with the parent per definition of vfork */
- p->thread.usp = usp_thread_fn;
-
- /* kernel stack pointer is not shared with parent, it is the beginning of
- * the just created new task switch segment on the kernel stack */
- p->thread.ksp = (unsigned long)childregs - 4;
- p->thread.which_stack = 0; /* resume from ksp */
-
- /* child returns via ret_from_fork */
- childregs->ra = (unsigned long)ret_from_fork;
- /* child shall return to where sys_vfork_wrapper has been called */
- childregs->r13 = (unsigned long)syscall_tail;
- /* child gets zero as return value from syscall */
- childregs->r11 = 0;
- /* after task switch segment return the stack pointer shall point to the
- * syscall frame */
- childregs->sp = (unsigned long)childsyscallregs - 4;
-
- /*printk("copy_thread2: ->pid=%d p=%lx regs=%lx childregs=%lx r5=%lx ra=%lx "
- "dsf=%lx p->thread.ksp=%lx p->thread.usp=%lx\n",
- p->pid, p, regs, childregs, childregs->r5, childregs->ra,
- dup_syscallframe, p->thread.ksp, p->thread.usp);*/
+ *childregs = *current_pt_regs();
+ if (usp_thread_fn)
+ childregs->sp = usp_thread_fn;
+
+ cc->ra = (unsigned long)ret_from_fork;
}
return 0;
@@ -214,20 +152,12 @@ int copy_thread(unsigned long clone_flags,
/* start userspace thread */
void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long usp)
{
- set_fs(USER_DS);
-
- memset(regs, 0, sizeof(regs));
-
/* -4 because we will add 4 later in ret_from_syscall */
regs->ea = pc - 4;
#ifdef CONFIG_BINFMT_ELF_FDPIC
regs->r7 = current->mm->context.exec_fdpic_loadmap;
#endif
regs->sp = usp;
- current->thread.usp = usp;
regs->fp = current->mm->start_data;
regs->pt_mode = PT_MODE_USER;
-
- /*printk("start_thread: current=%lx usp=%lx\n", current, usp);*/
}
-
32 arch/lm32/kernel/ptrace.c
@@ -32,29 +32,15 @@ void ptrace_disable(struct task_struct *child)
static int ptrace_getregs(struct task_struct *child, unsigned long __user *data)
{
struct pt_regs *regs = task_pt_regs(child);
- int ret;
- ret = copy_to_user(data, regs, sizeof(regs));
- if (!ret) {
- /* special case: sp: we always want to get the USP! */
- __put_user (current->thread.usp, data + 28);
- }
-
- return ret;
+ return copy_to_user(data, regs, sizeof(regs));
}
static int ptrace_setregs (struct task_struct *child, unsigned long __user *data)
{
struct pt_regs *regs = task_pt_regs(child);
- int ret;
-
- ret = copy_from_user(regs, data, sizeof(regs));
- if (!ret) {
- /* special case: sp: we always want to set the USP! */
- child->thread.usp = regs->sp;
- }
- return ret;
+ return copy_from_user(regs, data, sizeof(regs));
}
long arch_ptrace(struct task_struct *child, long request, unsigned long addr,
@@ -68,14 +54,9 @@ long arch_ptrace(struct task_struct *child, long request, unsigned long addr,
switch (addr) {
- case 0 ... 27:
- case 29 ... 31:
+ case 0 ... 31:
tmp = *(((unsigned long *)task_pt_regs(child)) + addr);
break;
- case 28: /* sp */
- /* special case: sp: we always want to get the USP! */
- tmp = child->thread.usp;
- break;
case PT_TEXT_ADDR:
tmp = child->mm->start_code;
break;
@@ -93,14 +74,9 @@ long arch_ptrace(struct task_struct *child, long request, unsigned long addr,
}
case PTRACE_POKEUSR:
switch (addr) {
- case 0 ... 27:
- case 29 ... 31:
+ case 0 ... 31:
*(((unsigned long *)task_pt_regs(child)) + addr) = data;
break;
- case 28: /* sp */
- /* special case: sp: we always want to set the USP! */
- child->thread.usp = data;
- break;
default:
printk("ptrace attempted to POKEUSR at %lx\n", addr);
return -EIO;
10 arch/lm32/kernel/setup.c
@@ -50,7 +50,10 @@
#include <asm/page.h>
#include <asm/setup.h>
-unsigned int kernel_mode = PT_MODE_KERNEL;
+struct lm32_state lm32_state = {
+ .current_thread = (struct thread_info*)&init_thread_union,
+ .kernel_mode = PT_MODE_KERNEL,
+};
char __initdata cmd_line[COMMAND_LINE_SIZE];
@@ -66,11 +69,6 @@ void __init __weak plat_setup_arch(void)
void __init setup_arch(char **cmdline_p)
{
- /*
- * init "current thread structure" pointer
- */
- lm32_current_thread = (struct thread_info*)&init_thread_union;
-
/* populate memory_start and memory_end, needed for bootmem_init() */
early_init_devtree(__dtb_start);
7 arch/lm32/kernel/signal.c
@@ -74,8 +74,9 @@ static int restore_sigcontext(struct pt_regs *regs,
return __copy_from_user(regs, &sc->regs, sizeof(*regs));
}
-asmlinkage int _sys_rt_sigreturn(struct pt_regs *regs)
+asmlinkage int sys_rt_sigreturn(void)
{
+ struct pt_regs *regs = current_pt_regs();
struct rt_sigframe __user *frame = (struct rt_sigframe __user *)(regs->sp + 4);
sigset_t set;
stack_t st;
@@ -143,9 +144,7 @@ static int setup_rt_frame(int sig, struct k_sigaction *ka,
goto give_sigsegv;
err |= __clear_user(&frame->uc, sizeof(frame->uc));
- err |= __put_user((void *)current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
- err |= __put_user(sas_ss_flags(regs->sp), &frame->uc.uc_stack.ss_flags);
- err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0]);
err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
2  arch/lm32/kernel/traps.c
@@ -83,7 +83,7 @@ void show_stack(struct task_struct *task, unsigned long *stack)
if (!stack) {
if (task)
- stack = (unsigned long *)task->thread.ksp;
+ stack = (unsigned long *)task_thread_info(task)->cpu_context.sp;
else
stack = (unsigned long *)&stack;
}
