fix: hold sched_lock through context_switch to prevent timer race
Root cause of rare kernel panics with EIP on the kernel stack:
When schedule() was called from process context (waitpid, sleep),
irq_flags had IF=1. spin_unlock_irqrestore() re-enabled interrupts
BEFORE context_switch(). If a timer fired in this window:
1. current_process was already set to 'next' (line 835)
2. But we were still executing on prev's stack
3. Nested schedule() treated 'next' as prev and saved prev's ESP
   into next->sp, CORRUPTING next->sp
4. Future context_switch to 'next' loaded the wrong stack offset,
popping garbage registers and a garbage return address
5. EIP ended up pointing into the kernel stack → PAGE FAULT
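The window in steps 1-3 can be modeled in plain userspace C. This is a
hypothetical simulation, not kernel code: current_process, sp, and the
function names mirror the commit text, while live_esp stands in for the
ESP of the stack actually in use.

```c
#include <stdint.h>

/* Minimal model of the race: current_process is updated to 'next'
 * before the stack switch, so a nested schedule() from a timer saves
 * prev's ESP into next->sp. */
struct proc { uintptr_t sp; };

static struct proc *current_process;
static uintptr_t live_esp;     /* ESP of the stack we are really on */

static void nested_schedule_from_timer(void)
{
    /* The timer path believes current_process is the running task,
     * but we are still executing on prev's stack, so prev's ESP is
     * written into next->sp, clobbering it. */
    current_process->sp = live_esp;
}

static void buggy_schedule(struct proc *prev, struct proc *next,
                           int timer_fires_in_window)
{
    (void)prev;
    current_process = next;              /* step 1 of the walkthrough */
    /* spin_unlock_irqrestore() here re-enables interrupts (IF=1)... */
    if (timer_fires_in_window)
        nested_schedule_from_timer();    /* steps 2-3: next->sp hit */
    /* ...and only then would context_switch(prev, next) run. */
}
```

A later switch to 'next' loads the corrupted sp, which is how the
garbage return address and the stack-pointing EIP arise.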
Fix (three parts):
1. schedule(): move context_switch BEFORE spin_unlock_irqrestore.
After context_switch we are on the new process's stack, and its
saved irq_flags correctly releases the lock.
2. arch_kstack_init: set initial EFLAGS to 0x002 (IF=0) instead of
0x202 so popf in context_switch doesn't enable interrupts while
the lock is held.
3. thread_wrapper: release sched_lock and enable interrupts, since
new processes arrive here via context_switch's ret (bypassing
the spin_unlock_irqrestore after context_switch).
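The combined effect of parts 1-3 can be sketched as a userspace model;
sched_lock and IF are plain flags here, context_switch is a stand-in
for the real stack switch, and all names are illustrative.

```c
#include <assert.h>

static int sched_lock_held;
static int interrupts_enabled;

static void context_switch(void)
{
    /* In the real kernel this swaps stacks; popf restores EFLAGS
     * from the new kstack, which now carries IF=0 (part 2), so no
     * interrupt can fire while sched_lock is still held. */
    assert(sched_lock_held && !interrupts_enabled);
}

/* Part 1: unlock only AFTER the switch, on the new process's stack,
 * using that process's saved irq_flags. */
static void fixed_schedule(void)
{
    sched_lock_held = 1;             /* spin_lock_irqsave */
    interrupts_enabled = 0;
    context_switch();
    sched_lock_held = 0;             /* spin_unlock_irqrestore */
    interrupts_enabled = 1;
}

/* Part 3: a brand-new process enters via context_switch's ret and
 * never reaches the unlock above, so it must unlock itself. */
static void thread_wrapper_prologue(void)
{
    sched_lock_held = 0;
    interrupts_enabled = 1;
}
```

Either exit path (the unlock in schedule() or the one in
thread_wrapper) leaves the system with the lock released and
interrupts on, closing the window entirely.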
Also: removed get_next_ready_process(), which incorrectly returned
fallback processes not in rq_active, causing rq_dequeue to corrupt
the runqueue bitmap. The logic is now inlined correctly in schedule().
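A sketch of the invariant the inlined logic must preserve, as a
standalone model (the bitmap layout and names are assumptions, not the
actual runqueue code): only tasks whose bit is set in the active
bitmap may be dequeued, and an empty runqueue falls back to idle
rather than to a process outside rq_active.

```c
#include <stdint.h>

#define NPRIO 32
static uint32_t rq_bitmap;      /* bit p set => queue p is non-empty */

static void rq_enqueue(int prio) { rq_bitmap |= 1u << prio; }
static void rq_dequeue(int prio) { rq_bitmap &= ~(1u << prio); }

/* Returns the highest-priority (lowest-index) runnable prio, or -1
 * when the runqueue is empty. The caller switches to the idle task
 * in the -1 case instead of dequeuing a task that was never
 * enqueued, which is what corrupted the bitmap before. */
static int pick_next_prio(void)
{
    if (rq_bitmap == 0)
        return -1;
    return __builtin_ctz(rq_bitmap);   /* find-first-set bit */
}
```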
Verified: 20/20 boots without 'ring3', zero panics.
Build: clean, cppcheck: clean, smoke: 19/19 pass