# Tools: Process, Threads and Goroutines

2026-02-19
## What Is a Process?

A program is an executable file on disk. When the operating system loads that file into memory and starts execution, it becomes a process. A process has its own:

- Memory address space
- Program counter
- Stack and heap memory
- I/O and system resources

The key property is isolation. Each process has its own resources, and one process cannot access or interact with another process directly.
Although it is possible to communicate with another process via IPC (Inter-Process Communication) mechanisms such as pipes, those channels must be set up intentionally.

Google Chrome runs each tab in a separate process. If one tab crashes or malfunctions, the other tabs keep running. That isolation comes from process boundaries.

This isolation has a cost. Processes are:

- Heavyweight
- Expensive to context switch
- Higher in memory usage

On every context switch between processes, the operating system must save registers, memory mappings, and kernel data structures, also known as PCBs (Process Control Blocks).

## What Is a Thread?

A thread is the unit of execution inside a process. Think of it as a subset of the process that runs within the process boundaries. Every process has at least one thread, called the main thread, and many processes run multiple threads.

Threads within the same process share the same:

- Memory address space
- Heap and global variables

However, each thread has its own stack and registers.

Because threads share memory, communication between them is fast; no inter-process communication is needed. But this introduces risk: any faulty thread can crash the entire process.

Context switching between threads is faster than between processes because:

- No memory context switch is needed
- Fewer kernel-level operations are involved

Still, thread switching requires kernel involvement, and that overhead adds up in highly concurrent systems.

## The Problem With Heavy Threads

OS threads are expensive:

- Large stack memory, typically around 1 MB
- Kernel managed
- Expensive to create and destroy
- A blocking system call blocks the entire thread

If you spawn thousands of threads, memory usage explodes. This is where Go changes the model.

## What Is a Goroutine?

A goroutine is a lightweight unit of execution managed by the Go runtime, not the operating system. You start one by prefixing a function call with the `go` keyword:

```go
package main

import (
	"fmt"
	"time"
)

func speak(msg string) {
	fmt.Println(msg)
}

func main() {
	go speak("Hello World")
	// Sleep so main does not exit before the goroutine runs.
	time.Sleep(1 * time.Second)
}
```

The main function itself runs as a goroutine.

Go uses an M:N scheduling model:

- M goroutines
- N OS threads

M is usually much larger than N, which means the Go runtime scheduler multiplexes many goroutines onto fewer OS threads.
## How Goroutines Work

Each scheduler thread works like this:

- One OS thread runs
- One goroutine executes on it at a time
- Many runnable goroutines wait in a local queue

When a goroutine blocks on I/O, the runtime:

- Parks the thread
- Moves the other runnable goroutines from that thread to another thread

This keeps CPU cores busy and avoids wasting resources.

## Why Goroutines Are Lightweight

- Initial stack is around 2 KB
- The stack grows dynamically
- Context switching happens in user space
- No full kernel switch is needed for goroutine scheduling

You can run a large number of goroutines in a single program with modest memory usage.

## Work Stealing Scheduler

Go uses a work-stealing scheduler. If one logical CPU runs out of work:

- It steals runnable goroutines from another CPU's queue
- This balances load across cores

If no local work exists:

- The scheduler checks a global queue

This keeps CPUs busy without manual tuning.

## Blocking and System Calls

When a goroutine performs a blocking system call:

- The OS blocks the thread
- The runtime detaches the other goroutines
- Another thread takes over their execution

Some operations, like network polling, use dedicated threads. This reduces unnecessary thread parking. You write code that looks synchronous; the runtime handles the concurrency and scheduling.
## Fork Join Model in Go

Go follows the fork-join idea:

- The parent goroutine spawns child goroutines
- Child goroutines run concurrently
- After execution, each child joins back with the parent using synchronization primitives like `WaitGroup` or channels

If `main` exits early, the program terminates before the child goroutines finish, so you must coordinate execution.

Goroutines can share memory and communicate, but in that case they must synchronize access using:

- Mutexes
- Channels
- Atomic operations

Ignoring synchronization can lead to race conditions.

## When To Use What

Use processes when the priority is:

- Strong isolation
- Security boundaries
- Crash containment

Use threads when:

- You need shared memory
- You operate outside Go
- You rely on OS-level primitives

Use goroutines when:

- You build concurrent services in Go
- You need high concurrency with low overhead
- You want simple concurrency syntax

If you build network services, background workers, or streaming systems in Go, goroutines give you scale without thread explosion.
## Final Takeaway

A process isolates. A thread executes inside a process. A goroutine executes inside a thread but is managed by the Go runtime. Isolation decreases as you move down the stack, and efficiency increases.

## References

- ByteByte Go
- Goroutines blog