
Grand Central Dispatch (GCD) in iOS: The Developer's Guide

by Maksim Niagolov, July 3rd, 2023

Too Long; Didn't Read

Apple provides two methods for multitasking in iOS development, Grand Central Dispatch (GCD) and NSOperation. In this article, we will look at GCD. The main idea behind GCD is to move the thread management closer to the operating system. In essence, GCD decides which thread should be used to execute any given task.

Modern Central Processing Units (CPUs) contain multiple cores which allow for several threads to be run in parallel and more than one task to be performed at a time. This enables your iPhone app to, for instance, download a large file in the background while the user interface remains responsive.


Apple provides two methods for multitasking in iOS development, Grand Central Dispatch (GCD) and NSOperation. In this article, we will look at GCD. If you need a refresher on the basics of concurrency in iOS development, check out this article.


GCD was introduced in iOS 4 and offers the developer flexibility when using concurrency in their application.


The main idea behind GCD is to move thread management closer to the operating system. It abstracts threads away from the developer, who will not need to consider so many details. In essence, GCD decides which thread should be used to execute any given task.


Content Overview

  • Dispatch Queues
  • Quality of Service (QOS)
  • Background
  • Utility
  • User-Initiated
  • User-interactive
  • Order and Manner of Dispatch
  • Synchronous vs Asynchronous
  • Serial vs Concurrent
  • Serial Queues
  • Concurrent Queues
  • Global Queues
  • Dispatch Methods
  • Conclusion

Dispatch Queues

A term that is frequently used when discussing GCD is dispatch queue. A dispatch queue is an abstraction layer on top of a queue that contains tasks to be executed following the FIFO (first in, first out) algorithm.


GCD manages a collection of dispatch queues. The work items submitted to these dispatch queues are executed on a pool of threads. There is a dispatch queue that represents the application’s main thread (also known as the UI thread) and multiple dispatch queues which represent the many background threads.
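To see the FIFO behavior concretely, here is a minimal sketch (the queue label is an arbitrary example) that enqueues three work items on a dispatch queue and observes the order in which they run:

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.fifo") // serial by default
let group = DispatchGroup()
var order: [Int] = []

// Enqueue three work items; the queue dequeues them in FIFO order.
for i in 1...3 {
    queue.async(group: group) {
        order.append(i)
    }
}

group.wait() // block until all three items have executed
print(order) // [1, 2, 3]
```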

Quality Of Service (QOS)

A quality-of-service (QOS) class categorizes the work items which are performed on a dispatch queue. With this parameter, the priority of a given task is specified. There are four QOS classes that may be assigned to a queue.

Background

These are tasks that can take minutes or hours to complete such as loading or processing large amounts of data. They are not time-critical and the user needs to be able to do other things while they are happening. Such tasks are mapped to a low-priority queue.

Utility

These tasks take seconds or minutes and do not need to finish immediately. An example of a utility task would be one with a loading bar, such as a download.

User-initiated

These are tasks initiated by the user which should be executed right away, such as opening a document. They will be mapped to a high-priority queue and should only take a few seconds or less.

User-interactive

These are UI tasks that need to be finished immediately to ensure that the user is able to continue with the next interaction. They are mapped to the queue with the highest priority; the main thread itself runs at this QOS level.
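The four classes map directly onto the qos parameter of DispatchQueue. A brief sketch, with illustrative queue labels:

```swift
import Dispatch

// One queue per QOS class; the labels are arbitrary examples.
let backgroundQueue      = DispatchQueue(label: "com.example.bg",   qos: .background)
let utilityQueue         = DispatchQueue(label: "com.example.util", qos: .utility)
let userInitiatedQueue   = DispatchQueue(label: "com.example.init", qos: .userInitiated)
let userInteractiveQueue = DispatchQueue(label: "com.example.ui",   qos: .userInteractive)

print(utilityQueue.qos == .utility) // true
```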


Order and Manner of Dispatch

When using queues, the order and manner in which tasks are dispatched need to be chosen. GCD queues can be serial or concurrent and pushing tasks to them can happen synchronously or asynchronously.


Synchronous vs Asynchronous

When a work item is scheduled synchronously, the execution of code on the current thread is paused until the item’s execution is completed. It is important not to dispatch a work item synchronously onto the queue you are currently running on — for example, calling sync on the main queue from the main thread — as this results in a deadlock.


When it is scheduled asynchronously, the code on the current thread continues to be executed while the work item is scheduled to run on another thread.
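The difference can be sketched in a few lines (the queue label is an arbitrary example):

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.syncasync")

// sync: the caller blocks until the work item has finished.
var value = 0
queue.sync {
    value = 1
}
print(value) // 1 — guaranteed, because sync waits for completion

// async: the caller moves on immediately; the item runs later on the queue.
queue.async {
    print("runs on the queue while the caller continues")
}
print("this line does not wait for the async item")
```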


Serial vs Concurrent

In a serial queue, tasks are executed one after the other according to the FIFO algorithm. The serial queue just needs one thread since only one task is running at a time.

A concurrent queue can run several tasks at once, on multiple threads drawn from a system-managed pool. The developer does not control how many threads are created or when.


Serial Queues

An example of a serial queue can be found below.

func mySerialQueue() {
  let serialQueue = DispatchQueue(label: "com.kraken.serial")
  serialQueue.async {
    sleep(1)
    print("Task 1")
  }
  serialQueue.async {
    print("Task 2")
  }
}


Despite the sleep call within the first task, the output will be:

Task 1
Task 2


This is because the tasks are executed serially in the order in which they were added to the queue.


Serial execution is not ideal since one task has to wait for the other. Even if the task that is first in the queue takes much longer than the task behind it, the first task will be given priority.

Think of it like two people waiting in line at the grocery store with the first person making a large purchase for the whole week while the person behind them is only buying one small item. In this case, it makes sense to let the person with the small item pass first so that they can go on with their day without any unnecessary delay.


Concurrent Queues

Let us look more closely at how multiple tasks, or work items, can be run at the same time by utilizing a concurrent queue.


Using the .concurrent flag creates a queue that allows many tasks to be performed simultaneously. One task does not have to wait for another to finish.


func concurrentQueues() {
    let myQueue = DispatchQueue(label: "com.multithreading.concurr", qos: .utility, attributes: .concurrent)
    myQueue.async {
        for _ in 0..<10 {
            print("1")
        }
    }
    myQueue.async {
        for _ in 0..<10 {
            print("2")
        }
    }
}


In this case, the output will typically interleave “1”s and “2”s as the two tasks run at the same time and the system switches between their threads. The exact order is not guaranteed.


With concurrent execution, it is important not to call methods that block the current thread. If a task that blocks a thread is scheduled by a concurrent dispatch queue, it will cause the system to create new threads to run other concurrent tasks in the queue. As a result, the system could run out of threads.


Apps can also use up too many threads if too many private concurrent dispatch queues are created. Each dispatch queue consumes thread resources. Instead of creating private concurrent queues, tasks can be submitted to one of the global concurrent dispatch queues.


Global Queues

GCD provides a set of ready-made concurrent dispatch queues known as global queues. They can be used like custom queues, without having to be created first.

let myGlobalQueue = DispatchQueue.global()


The QOS class can be set by the developer. If this parameter is omitted, the .default class is used.

let myGlobalQueue = DispatchQueue.global(qos: .userInitiated)


Accessing the main queue to perform tasks such as updating the UI can be done by calling the main queue and specifying if a synchronous or asynchronous call should be made.

DispatchQueue.main.async {}


Dispatch Methods

Let us look at some of the different functions GCD provides to dispatch tasks.


Dispatch Group

A dispatch group is a group of tasks that is monitored as one unit. This way, collections of tasks can be aggregated. Multiple work items are attached to a group and scheduled for asynchronous execution. This way, multiple processes can be started but only one event is needed which occurs when all tasks have been completed.


For instance, a dispatch group can be useful when multiple calls to APIs need to be made on a background thread before the UI can be updated on the main thread.

func myTask1(dispatchGroup:DispatchGroup){
    DispatchQueue.global().async {
        print("Task 1 finished")
        dispatchGroup.leave()
    }
}

func myTask2(dispatchGroup:DispatchGroup){
    DispatchQueue.global().async {
        print("Task 2 finished")
        dispatchGroup.leave()
    }
}
 
func myDispatchGroup(){
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    myTask1(dispatchGroup: dispatchGroup)
    dispatchGroup.enter()
    myTask2(dispatchGroup: dispatchGroup)
 
    dispatchGroup.notify(queue: .main) {
        print("All tasks finished.")
    }
}


In the given example, the two tasks are grouped together and the main queue is only notified once they have both been completed.
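As an alternative to notify, a group can block the current thread with wait(), optionally bounded by a timeout. Note that passing the group directly to async pairs enter and leave automatically. A minimal sketch with illustrative labels:

```swift
import Dispatch

let group = DispatchGroup()
let queue = DispatchQueue(label: "com.example.group", attributes: .concurrent)

for i in 1...3 {
    // async(group:) handles enter() and leave() for us.
    queue.async(group: group) {
        print("Task \(i) finished")
    }
}

// Block the current thread until the group is empty, or give up after 2 seconds.
let result = group.wait(timeout: .now() + 2)
print(result == .success ? "All tasks finished." : "Timed out.")
```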


Dispatch Work Item

A dispatch work item encapsulates a block of code and provides the flexibility to cancel the given task.


Imagine that you are using the search function in an app. With every typed letter, a new search call is made and the previous one is canceled.

var workItem: DispatchWorkItem?  // stored property holding the pending search so it can be canceled

func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
    workItem?.cancel()
 
    let searchWorkItem = DispatchWorkItem {
        print("Run search call with text: \(searchText)")
    }
 
    workItem = searchWorkItem
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(100), execute: searchWorkItem)
}


Dispatch Semaphore

The dispatch semaphore uses a counting semaphore to control access to a resource that can be accessed by multiple threads. This is a way to implement the critical section, where a shared resource is accessed by a process that would not operate correctly if the resource was accessed concurrently by another process.

let mySemaphore = DispatchSemaphore(value: 1)
mySemaphore.wait()          // acquire access to the shared resource
task { (result) in          // `task` stands in for any asynchronous operation
    mySemaphore.signal()    // release the resource once the work completes
}


wait() is called to request access to a shared resource; it decrements the semaphore’s counter and blocks if the counter drops below zero. signal() increments the counter and is called when the resource should be released. The value passed to DispatchSemaphore is the maximum number of tasks allowed to access the resource concurrently.
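To make this concrete, here is a self-contained sketch (labels and timings are illustrative) in which a semaphore initialized with a value of 2 caps how many work items can be inside the critical section at once:

```swift
import Dispatch
import Foundation

let semaphore = DispatchSemaphore(value: 2) // at most 2 concurrent accesses
let queue = DispatchQueue(label: "com.example.sem", attributes: .concurrent)
let group = DispatchGroup()
let counterQueue = DispatchQueue(label: "com.example.counter") // protects the counters
var active = 0
var maxObserved = 0

for _ in 1...6 {
    queue.async(group: group) {
        semaphore.wait()                        // acquire one of the 2 slots
        counterQueue.sync {
            active += 1
            maxObserved = max(maxObserved, active)
        }
        usleep(10_000)                          // simulate work in the critical section
        counterQueue.sync { active -= 1 }
        semaphore.signal()                      // release the slot
    }
}

group.wait()
print(maxObserved) // never greater than 2
```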


Dispatch Barrier

Another way to implement the critical section is a dispatch barrier which ensures that no other task is being processed while the given one is executed.

func myDispatchBarrier() {
    let myConcQueue = DispatchQueue(label: "com.kraken.barrier", attributes: .concurrent)
 
    for _ in 1...3 {
        myConcQueue.async {
            print("Asynchronous Task")
        }
    }
 
    for _ in 1...3 {
        myConcQueue.async(flags: .barrier) {
            print("Barrier")
        }
    }
}


The three “Barrier” printouts in the output will not be interrupted by an “Asynchronous Task” printout at any point.



Async after

This method can be utilized to delay the execution of a task. GCD enables the developer to set the amount of time after which the task should be run.


let time = 2.0
DispatchQueue.main.asyncAfter(deadline: .now() + time){
    print("Delayed task")
}


Conclusion

With GCD, Apple has provided a multithreading model that is efficient and minimizes the risk of issues related to concurrency such as deadlocks.


Expensive tasks are performed in the background, leaving the main thread unaffected and thus improving the responsiveness of the application.


As thread management is taken on by the operating system, the developer does not have to implement it by hand and can rely on GCD to match the running applications to the available system resources in a balanced manner.




The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "ios development”.