wtf series - wtf is a goroutine?

This is part of a series of posts explaining cryptic tech terms in an introductory way.

Disclaimer: this series is not intended to be a main learning source. However, there might be follow up posts with hands-on experiments or deeper technical content for some of these topics.

Before explaining what goroutines are, let's define some concepts first.

Concurrency: The ability of different parts of a computer program to execute out of order, or in partial order, without affecting the end result. Concurrency is not parallelism.

Parallelism: The simultaneous execution of different parts of a computer program across multiple cores, improving overall performance. Concurrency is about dealing with lots of things at once; parallelism is about doing lots of things at once.

Goroutines

Goroutines are a way of doing tasks concurrently in Go. They allow us to create and run multiple functions or methods concurrently in the same address space, inexpensively. They are lightweight abstractions over threads: creating and destroying a goroutine is very cheap compared to a thread, and goroutines are scheduled onto OS threads by the Go runtime. Executing a function in the background is as easy as prefixing the call with the go keyword. Go achieves parallelism by multiplexing goroutines onto multiple OS threads, so if one blocks, for example while waiting for I/O, the others continue to run. This design hides much of the complexity of thread creation and management and delegates it to the Go runtime scheduler.
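
As a minimal sketch (the say helper and the sleep at the end are just for illustration; a real program would synchronize properly), starting a function in the background looks like this:

package main

import (
  "fmt"
  "time"
)

func say(msg string) {
  fmt.Println(msg)
}

func main() {
  go say("from a goroutine") // runs concurrently with the rest of main
  say("from main")

  // crude wait so the program doesn't exit before the goroutine gets to run;
  // real code would use a channel or sync.WaitGroup instead (see below)
  time.Sleep(100 * time.Millisecond)
}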

There are a number of differences between goroutines and native threads:

  • Memory consumption: Goroutines need very little space to start (~2 KB of stack) and grow by allocating memory from the heap as needed. Threads are greedy: they need ~1 MB of memory plus a guard page, which you can think of as a wall between the memory of different threads.
  • Setup/teardown: Threads cost significantly more to set up and tear down because they request resources from the OS and have to give them back when they finish. Goroutines are created and destroyed by the Go runtime, so they are very cheap to create and destroy.
  • Context switching: When a thread blocks, on I/O for example, another thread has to take its place. This operation is called a context switch, and during it the OS has to save the thread's state: all of the registers (ballpark ~40; I don't know the exact number, so don't hesitate to message me if you have a more accurate one), the program counter, the stack pointer and the co-processor state. Goroutines are scheduled cooperatively, and when a switch occurs only three registers need to be saved and restored: the program counter, the stack pointer and the data registers (DX). The cost is much lower.
  • Communication: Goroutines come with channels and wait groups, built-in primitives for communication and synchronization. One Google search will show you how much harder that is with native threads. A minimal channel sketch follows right below this list.
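
As a taste of how lightweight that communication is, here is a minimal sketch of one goroutine handing a value to another over a channel (the "ping" message is just a placeholder):

package main

import "fmt"

func main() {
  messages := make(chan string) // an unbuffered channel of strings

  go func() {
    messages <- "ping" // send a value into the channel
  }()

  fmt.Println(<-messages) // receive; blocks until the goroutine has sent
}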

Quick hands-on: fetch JSON from a list of URLs in goroutines and only print the results once all the goroutines have finished.
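
Both versions below call a fetchUrl helper that isn't defined in the snippets; a possible sketch, with deliberately minimal error handling, might look like the following (the full examples also need "fmt", and the WaitGroup version needs "sync"):

package main

import (
  "io"
  "net/http"
)

// fetchUrl fetches a URL and returns the response body as a string.
// Errors are returned as plain strings just to keep the example short.
func fetchUrl(url string) string {
  resp, err := http.Get(url)
  if err != nil {
    return err.Error()
  }
  defer resp.Body.Close()

  body, err := io.ReadAll(resp.Body)
  if err != nil {
    return err.Error()
  }
  return string(body)
}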

Channels version (not recommended for this task, just here to demonstrate how channels work and for the sake of completeness):

func main() {
  urls := []string{
    "url1",
    "url2",
    // ...
  }
  done := make(chan bool)                   // no data needed, just a completion signal per goroutine
  responses := make(chan string, len(urls)) // buffered so sends don't block while main waits on done

  for _, url := range urls {
      go func(url string) {
          responses <- fetchUrl(url)
          done <- true // signal that this goroutine has completed
      }(url)
  }

  // Since we started len(urls) goroutines, receive len(urls) messages.
  // This blocks the main goroutine until all of them are received.
  for i := 0; i < len(urls); i++ {
      <-done
  }
  close(responses) // close so the range below knows when to stop

  for response := range responses { // ranging over a channel receives values until it is closed
      fmt.Println(response)
  }
}
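
Two details worth calling out: responses is buffered with len(urls) so each goroutine can send its result without waiting for a reader, and the channel is closed once every done signal has arrived, which is what lets the final range loop terminate.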

Waitgroups version:

func main() {
  urls := []string{
    "url1",
    "url2",
    // ...
  }
  responses := make(chan string) // responses coming from the URLs
  var wg sync.WaitGroup
  wg.Add(len(urls)) // increment the WaitGroup counter by len(urls)

  for _, url := range urls {
      go func(url string) {
          defer wg.Done() // defer runs wg.Done() when this goroutine returns; Done decrements the counter
          responses <- fetchUrl(url)
      }(url)
  }

  go func() {
      wg.Wait()        // blocks until the counter reaches zero
      close(responses) // then close the channel so the range below terminates
  }()

  for response := range responses { // receive and print until the channel is closed
      fmt.Println(response)
  }
}
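
Here the close happens in a helper goroutine once wg.Wait() returns, so main can simply range over responses and is guaranteed to print every response before exiting.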

What's next?

Keywords:
buffered vs unbuffered channels, reusing golang waitgroups, non-blocking channel operations, channel timeouts, worker pools, ticker, mutexes, atomic counters, user space threads vs kernel threads.

References

Rob Pike - Concurrency is not parallelism

Effective Go - Goroutines

Go by example

Essam Hassan

A pragmatic software engineer, cyber security enthusiast and a Linux geek. I curse at my machine on a daily basis. My views are my own.
Zurich, Switzerland