Coroutines in Kotlin

This article describes what a coroutine is, what problem coroutines are trying to solve, and how coroutines are implemented in the latest release (1.1) of the JVM language Kotlin.

What is the problem we are trying to solve?

To explain what the problem is and why it should be solved, let's consider a simple web application. It receives a request, reads a local file, uploads it to some server and returns the URL of the uploaded file back to the client. Of course, this is not a real application, but let's focus on the idea rather than the functionality. The picture below shows how our hypothetical application might work in single-threaded mode. Green shows that the thread is running, yellow that it is waiting - in our case, waiting for the file read and for the file upload. So far so good; however, this only works for a single concurrent request. While the working thread is busy handling request A, it won't be able to respond to request B.

Single thread

To scale this, we could simply create a new thread per request - but as we all know, thread creation is expensive, and this approach has a natural limit: the number of threads the OS can manage concurrently. Another solution is to introduce a pool of worker threads and distribute requests between them. The picture below shows how a pool of four threads handles four concurrent requests started with some delay from each other. The problem appears when a 5th request comes in: there is simply no available thread to use, even though all 4 threads in the pool are just waiting. So in this situation the whole system is blocked, and the client has to wait again.

Thread pool

But if we think about it for a second, a question we might ask is: if a thread is doing nothing but waiting for an I/O operation (file or network), why not just reuse it in the meanwhile? If we remove all the waiting time and group all the executing code into a simple pipeline, the picture might look like this:

Thread pool

Basically, our application code for 4 concurrent requests can run in a single thread.

Single thread

Note that there are two important points highlighted in our request handler lifecycle - yield and continue. Our function is now split into chunks, and it can release the thread after running a chunk, then continue with the next chunk once the result it needs is ready (the file is read or written, the network request is sent or received, etc.). Also, our function decides when to actually yield - i.e. return control over the current thread to someone else. This is different from how OS thread scheduling works - the OS just takes the thread back without asking the function (preemptive multitasking). Our functions, in contrast, cooperate to use the thread effectively, and this approach is therefore called cooperative, or non-preemptive, multitasking. And if you are still with me at this point - congratulations, we have just invented coroutines! So a coroutine is a mechanism for suspending and resuming a function (subroutine) in a non-preemptive multitasking way.
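The yield-and-continue idea can be made concrete with Kotlin's sequence builder, which is itself built on coroutines (this is a minimal sketch using the modern stdlib syntax; the 1.1-era equivalent was the experimental buildSequence). Each yield() suspends the block and hands control back to the caller, which decides when to resume it:

```kotlin
fun main() {
    val worker = sequence {
        println("chunk 1: receive request")
        yield(1) // suspend: give the thread back to the caller
        println("chunk 2: process file")
        yield(2)
        println("chunk 3: send response")
        yield(3)
    }
    // The caller (our "scheduler") pulls chunks one at a time and could
    // interleave other work between them - all on the same thread.
    for (step in worker) {
        println("scheduler: worker completed step $step")
    }
}
```

Nothing preempts the block between yields; it cooperates by suspending voluntarily.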

So the answer to the question in the title is: coroutines try to solve the problem of efficiently utilising a resource (the thread) using cooperative multitasking.

Where does all the I/O run, then?

It is quite easy to just remove the blocking I/O operations from the diagram; the question is where they actually happen in a real system. Well, this is a really good question, and the answer is "it depends". Non-blocking I/O is a big subject, and the mechanism depends on the runtime, library, OS, etc. Some I/O libraries use their own thread pools (libuv, Java NIO), some use low-level OS capabilities (epoll, kqueue). The best part is that this is an implementation detail, and it can be managed outside of the application's code.

Where does a coroutine return control to?

Now it is time to mention a very important concept in the non-blocking coroutines world - the event loop. To understand it, we need to ask ourselves one simple question: when we say "the coroutine returns control over the current thread", where does it actually return control to? Consider the web application from our example - there should be a piece of code which listens for network connections and then calls the appropriate handlers. Similarly, in UI applications there should be a UI thread which handles all user interactions and invokes certain handlers in reaction to them. So the main purpose of this code, for both web and UI apps, is to wait for an event (an HTTP request, a button click) and then dispatch it to the application code. And that is what an event loop is - code that runs in a loop in the main application thread and dispatches events. In most cases we don't even need to manage the event loop manually; the underlying framework (e.g. Spring or JavaFX) creates and controls it.
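To make the concept concrete, here is a toy event loop (a sketch only - real frameworks like Netty or JavaFX implement this for us, with actual event sources instead of a pre-filled queue):

```kotlin
import java.util.ArrayDeque

fun main() {
    // The event queue: each "event" is represented by its handler.
    val queue = ArrayDeque<() -> Unit>()

    // Events arriving: an HTTP request, a button click...
    queue.add { println("handling: GET /image") }
    queue.add { println("handling: button click") }

    // The event loop itself: runs on the main thread and dispatches
    // events until there is nothing left to do.
    while (queue.isNotEmpty()) {
        val handler = queue.poll()
        handler() // application code runs here, then returns control to the loop
    }
}
```

When a coroutine yields, this loop is what regains the thread and picks the next event to dispatch.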

How to define a coroutine?

As we found out above, coroutines need to cooperate in order to efficiently use the thread they are running on. To achieve this, they can suspend (and yield control) and continue execution later. And the next practical question is: how can the compiler "slice" a function into chunks, also known as continuations? Well, there are a few ways to do that.
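The most manual way is continuation-passing style: every piece of code after a blocking call becomes a callback. Here is a sketch of our handler written that way - openFileAsync and sendBytesAsync are hypothetical non-blocking APIs, stubbed to complete immediately so the sketch runs:

```kotlin
// Hypothetical non-blocking APIs; a real implementation would complete
// these callbacks from an I/O thread or an event loop.
fun openFileAsync(url: String, callback: (ByteArray) -> Unit) =
    callback(ByteArray(0)) // stub: "file contents"

fun sendBytesAsync(file: ByteArray, callback: (String) -> Unit) =
    callback("https://example.com/image") // stub: uploaded file URL

fun handler(url: String, respond: (String) -> Unit) {
    openFileAsync(url) { file ->            // chunk 2: runs when the file is read
        sendBytesAsync(file) { imageUrl ->  // chunk 3: runs when the upload completes
            respond(imageUrl)               // finally answer the client
        }
    }
}

fun main() = handler("local/file.png") { println("responding with $it") }
```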

Callback-based slicing is still not exactly what we need: the code is very hard to read and maintain. Using promises and building a call chain with then(…) helps, but it is still not perfect. The worst thing about it is that instead of linear imperative code we end up composing callbacks.
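On the JVM the promise approach looks like a CompletableFuture chain. The pipeline is flatter than nested callbacks, but still not plain imperative code (openFileFuture and sendBytesFuture are hypothetical APIs, stubbed for illustration):

```kotlin
import java.util.concurrent.CompletableFuture

// Stubs standing in for real non-blocking file and upload APIs.
fun openFileFuture(url: String): CompletableFuture<ByteArray> =
    CompletableFuture.completedFuture(ByteArray(0))

fun sendBytesFuture(file: ByteArray): CompletableFuture<String> =
    CompletableFuture.completedFuture("https://example.com/image")

fun handler(url: String): CompletableFuture<String> =
    openFileFuture(url)
        .thenCompose { file -> sendBytesFuture(file) } // chain the next chunk

fun main() {
    println(handler("local/file.png").join())
}
```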

Coroutines in Kotlin

Now that we know what a coroutine is, let's look into the Kotlin implementation. Kotlin uses the suspend keyword to mark a suspendable function. So if we rewrite our example as real Kotlin source, it will look like this:

suspend fun handler(url: String): String {
    val file = openFile(url)
    val imageUrl = sendBytes(file)
    return imageUrl
}

suspend fun openFile(url: String): ByteArray { ... }
suspend fun sendBytes(file: ByteArray): String { ... }

And now it is time to talk about the beauty of the Kotlin implementation of coroutines. The trick is that the Kotlin compiler only takes care of the first step - it automatically extracts continuations from the function code, expressed through the kotlin.coroutines.experimental.Continuation interface. How and where to run these continuations is managed by runtime implementations!
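Roughly, each suspension point resumes the rest of the function through a continuation object. The toy version below illustrates the shape of that contract (the real Kotlin 1.1 interface lives in kotlin.coroutines.experimental and also carries a coroutine context; readFileAsync is a hypothetical API, completed immediately for illustration):

```kotlin
// Simplified stand-in for Kotlin's Continuation interface.
interface Continuation<in T> {
    fun resume(value: T)
    fun resumeWithException(exception: Throwable)
}

// A hypothetical non-blocking read; a real runtime would complete the
// continuation from an I/O thread or an event loop.
fun readFileAsync(path: String, cont: Continuation<String>) {
    cont.resume("bytes of $path")
}

fun main() {
    readFileAsync("image.png", object : Continuation<String> {
        override fun resume(value: String) = println("continued with: $value")
        override fun resumeWithException(exception: Throwable) = exception.printStackTrace()
    })
}
```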

So basically, by default, Kotlin doesn't force a developer to use a particular runtime for coroutines (the way C# does with the Task Parallel Library, part of .NET) but delegates executing a coroutine to a third-party library. However, it still does the dirty work of splitting function code into chunks, so the developer can just write linear imperative code without callbacks. Out of the box, Kotlin provides its own coroutines runtime implementation - kotlinx.coroutines.experimental (note the x in the root package name).


So far we have been talking about single-threaded coroutines, with the limitation that each continuation (function "chunk") runs on the same thread. But since kotlinx.coroutines manages the state of these coroutines quite well, a coroutine itself can run on top of a thread pool! To achieve this, Kotlin uses contexts. To learn more about them, please refer to the official (and just amazing) Kotlin documentation.
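A minimal sketch of a context in action, using the kotlinx.coroutines library with its modern API (the 1.1-era experimental API passed the pool directly, e.g. launch(CommonPool)); the continuations before and after the suspension may land on different threads of the pool:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch(Dispatchers.Default) { // run on the shared thread pool
        println("before suspension on ${Thread.currentThread().name}")
        delay(100) // suspends; the pool thread is free to run other coroutines
        println("after suspension on ${Thread.currentThread().name}")
    }
    job.join()
}
```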

What about Futures for non-blocking calls?

You may have noticed that the idea of non-blocking function execution is not new, and there are quite a few libraries which try to do the same. The obvious examples are Java's Future and CompletableFuture, as well as Java NIO with its non-blocking I/O API. Third-party libraries worth mentioning are RxJava and Reactive Streams (part of Java 9). The good news is that Kotlin with its reference runtime implementation (kotlinx.coroutines) doesn't ignore them, but provides libraries to seamlessly integrate existing code into the coroutines runtime.
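For example, the kotlinx-coroutines-jdk8 integration module adds an await() extension so an existing CompletableFuture can be consumed from a coroutine without blocking a thread (legacyUpload here is a hypothetical legacy API, stubbed for illustration):

```kotlin
import java.util.concurrent.CompletableFuture
import kotlinx.coroutines.*
import kotlinx.coroutines.future.await

// A pre-existing future-based API we want to reuse from coroutines.
fun legacyUpload(): CompletableFuture<String> =
    CompletableFuture.completedFuture("https://example.com/image") // stub

fun main() = runBlocking {
    // Suspends (rather than blocks) until the future completes.
    val url = legacyUpload().await()
    println("uploaded to $url")
}
```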

Is this about I/O only?

Yes and no. Yes - you can use coroutines for any user code. And no - without an external event loop for I/O, running CPU-bound operations in coroutines will quickly exhaust the thread pool, and the whole system becomes blocked.

What next?

In the next article we will take a look at how to unlock the power of Kotlin's coroutines in the upcoming Spring 5 reactive web framework.