Concurrency with Modern C++
What every professional C++ programmer should know about concurrency.
C++11 was the first C++ standard to deal with concurrency. The story continues with C++17, C++20, and C++23.
I'll give you detailed insight into the current and upcoming concurrency features of C++, combining theory with a lot of practice.
About the Book
- C++11 and C++14 have the basic building blocks for creating concurrent or parallel programs.
- With C++17, we got the parallel algorithms of the Standard Template Library (STL): most STL algorithms can now be executed sequentially, in parallel, or vectorized (a short sketch follows this list).
- The concurrency story in C++ goes on. With C++20, we got coroutines, atomic smart pointers, semaphores, latches, and barriers (see the second sketch below).
- C++23 adds the first concrete coroutine type to the standard library: std::generator (see the third sketch below).
- With future C++ standards, we can hope for executors, extended futures, transactional memory, and more.
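To give you a first impression of the parallel STL, here is a minimal sketch (my illustration, not an excerpt from the book) that runs std::sort under the three standard execution policies:

```cpp
// Minimal sketch: one algorithm, three execution policies (C++17).
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v(1'000'000);
    std::iota(v.begin(), v.end(), 0);                           // 0, 1, 2, ...

    std::sort(std::execution::seq, v.begin(), v.end());         // sequential
    std::sort(std::execution::par, v.begin(), v.end());         // parallel
    std::sort(std::execution::par_unseq, v.begin(), v.end());   // parallel and vectorized
}
```

Depending on the compiler, you may need an additional support library; GCC's implementation of the parallel algorithms, for example, relies on Intel TBB.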
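The second sketch illustrates two of the C++20 additions, std::latch and std::jthread. Again, this is an illustrative example under simplified assumptions, not code from the book:

```cpp
// Minimal sketch: a std::latch as a "wait for all workers" gate (C++20).
#include <iostream>
#include <latch>
#include <thread>

int main() {
    std::latch allDone{3};                        // counter starts at 3
    auto worker = [&allDone](int id) {
        std::cout << "worker " << id << " done\n";
        allDone.count_down();                     // decrement the counter
    };
    std::jthread t1(worker, 1), t2(worker, 2), t3(worker, 3);
    allDone.wait();                               // blocks until the counter reaches zero
    std::cout << "all workers finished\n";
}                                                 // the jthreads join automatically
```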
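The third sketch shows the C++23 std::generator: co_yield turns an ordinary function into a lazily evaluated range. It is an illustration of the feature, not an excerpt from the book, and requires a standard library that already ships the <generator> header:

```cpp
// Minimal sketch: an infinite, lazily evaluated range with std::generator (C++23).
#include <generator>
#include <iostream>

std::generator<int> naturals() {
    for (int i = 0; ; ++i)
        co_yield i;            // suspend and hand the next value to the caller
}

int main() {
    for (int n : naturals()) { // values are produced on demand
        if (n == 5) break;
        std::cout << n << ' ';
    }
}
```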
This book explains the details of concurrency in modern C++ and gives you nearly 200 running code examples. Therefore, you can combine theory with practice and get the most out of it.
Because this book is about concurrency, I present many pitfalls and show you how to overcome them.
The book is 100% finished, but I will update it regularly. The next update will probably cover C++26. Furthermore, I will write about lock-free concurrent data structures and patterns for parallelization.
Pick Your Package
All packages include the ebook in the following formats: PDF, EPUB, and Web
The Book
- Minimum price: $33.00
- Suggested price: $41.00
- Source Code
Concurrency with Modern C++ Team Edition: Five Copies
- Minimum price: $99.00
- Suggested price: $123.00
- Get five copies for the price of three. This package includes all code examples.
- Source Code
About the Author
Rainer Grimm
I've worked as a software architect, team lead, and instructor since 1999. In 2002, I started an in-house further-education program at my company and have given training courses ever since. My first tutorials were about proprietary management software, but soon after, I began teaching Python and C++. In my spare time, I like to write articles about C++, Python, and Haskell. I also like to speak at conferences. I publish weekly on my English blog https://www.modernescpp.com.
Since 2016, I have been an independent instructor giving seminars about modern C++ and Python. I have published several books in various languages about modern C++ and in particular, concurrency. Due to my profession, I always search for the best way to teach modern C++.
My books "C++ 11 für Programmierer ", "C++" and "C++ Standardbibliothek kurz & gut" for the "kurz & gut" series were published by Pearson and O'Reilly. They are available in German, English, Korean, and Persian. In summer 2018 I published a new book on Leanpub: "Concurrency with Modern C++". This book is also available in German: "Modernes C++: Concurrency meistern".
Table of Contents
Reader Testimonials
Introduction
- Conventions
- Special Fonts
- Special Symbols
- Special Boxes
- Tip Headline
- Warning Headline
- Distilled Information
- Source Code
- Run the Programs
- How should you read the book?
- Personal Notes
- Acknowledgment
- About Me
- I A Quick Overview
1. Concurrency with Modern C++
- 1.1 C++11 and C++14: The Foundation
- 1.1.1 Memory Model
- 1.1.2 Multithreading
- 1.2 C++17: Parallel Algorithms of the Standard Template Library
- 1.2.1 Execution Policy
- 1.2.2 New Algorithms
- 1.3 Coroutines
- 1.4 Case Studies
- 1.4.1 Calculating the Sum of a Vector
- 1.4.2 The Dining Philosophers Problem by Andre Adrian
- 1.4.3 Thread-Safe Initialization of a Singleton
- 1.4.4 Ongoing Optimization with CppMem
- 1.4.5 Fast Synchronization of Threads
- 1.5 Variations of Futures
- 1.6 Modification and Generalization of a Generator
- 1.7 Various Job Workflows
- 1.8 The Future of C++
- 1.8.1 Executors
- 1.8.2 Extended Futures
- 1.8.3 Transactional Memory
- 1.8.4 Task Blocks
- 1.8.5 Data-Parallel Vector Library
- 1.9 Patterns and Best Practices
- 1.9.1 Synchronization
- 1.9.2 Concurrent Architecture
- 1.9.3 Best Practices
- 1.10 Data Structures
- 1.11 Challenges
- 1.12 Time Library
- 1.13 CppMem
- 1.14 Glossary
- II The Details
2. Memory Model
- 2.1 Basics of the Memory Model
- 2.1.1 What is a memory location?
- 2.1.2 What happens if two threads access the same memory location?
- 2.2 The Contract
- 2.2.1 The Foundation
- 2.2.2 The Challenges
- 2.3 Atomics
- 2.3.1 Strong versus Weak Memory Model
- 2.3.2 The Atomic Flag
- Initialization of a std::atomic_flag in C++11
- 2.3.3 std::atomic
- atomic is not volatile
- Push versus Pull Principle
- Check the type properties at compile time
- The Importance of being Thread-Safe
- The fetch_mult algorithm is lock_free
- 2.3.4 All Atomic Operations
- 2.3.5 Free Atomic Functions
- Atomic Smart Pointers with C++20
- 2.3.6 std::atomic_ref (C++20)
- 2.4 The Synchronization and Ordering Constraints
- 2.4.1 The Six Variants of Memory Orderings in C++
- 2.4.2 Sequential Consistency
- 2.4.3 Acquire-Release Semantic
- The memory model for a deeper understanding of multithreading
- Release Sequence
- 2.4.4 std::memory_order_consume
- 2.4.5 Relaxed Semantics
- The add algorithm is wait-free
- 2.5 Fences
- 2.5.1 std::atomic_thread_fence
- Synchronization between the release fence and the acquire fence
- 2.5.2 std::atomic_signal_fence
- Distilled Information
3. Multithreading
- 3.1 The Basic Thread std::thread
- 3.1.1 Thread Creation
- 3.1.2 Thread Lifetime
- The Challenge of detach
- scoped_thread by Anthony Williams
- 3.1.3 Thread Arguments
- Thread arguments by reference
- 3.1.4 Member Functions
- Access to the system-specific implementation
- 3.2 The Improved Thread std::jthread (C++20)
- 3.2.1 Automatically Joining
- 3.2.2 Cooperative Interruption of a std::jthread
- 3.3 Shared Data
- 3.3.1 Mutexes
- std::cout is thread-safe
- 3.3.2 Locks
- 3.3.3 std::lock
- Resolving the deadlock with a std::scoped_lock
- 3.3.4 Thread-safe Initialization
- Thread-safe Initialization in the main thread
- default and delete
- Know your Compiler support for static
- 3.4 Thread-Local Data
- From a Single-Threaded to a Multithreaded Program
- 3.5 Condition Variables
- std::condition_variable_any
- 3.5.1 The Predicate
- 3.5.2 Lost Wakeup and Spurious Wakeup
- 3.5.3 The Wait Workflow
- Use a mutex to protect the shared variable
- 3.6 Cooperative Interruption (C++20)
- Killing a Thread is Dangerous
- 3.6.1 std::stop_source
- 3.6.2 std::stop_token
- 3.6.3 std::stop_callback
- 3.6.4 A General Mechanism to Send Signals
- 3.6.5 Additional Functionality of std::jthread
- 3.6.6 New wait Overloads for the condition_variable_any
- 3.7 Semaphores (C++20)
- Edsger W. Dijkstra invented semaphores
- 3.8 Latches and Barriers (C++20)
- 3.8.1 std::latch
- 3.8.2 std::barrier
- 3.9 Tasks
- Regard tasks as data channels between communication endpoints
- 3.9.1 Tasks versus Threads
- 3.9.2 std::async
- std::async should be your first choice
- Eager versus lazy evaluation
- 3.9.3 std::packaged_task
- 3.9.4 std::promise and std::future
- 3.9.5 std::shared_future
- 3.9.6 Exceptions
- std::current_exception and std::make_exception_ptr
- 3.9.7 Notifications
- 3.10 Synchronized Outputstreams (C++20)
- Distilled Information
4. Parallel Algorithms of the Standard Template Library
- 4.1 Execution Policies
- 4.1.1 Parallel and Vectorized Execution
- 4.1.2 Exceptions
- 4.1.3 Hazards of Data Races and Deadlocks
- 4.2 Algorithms
- 4.3 The New Algorithms
- transform_reduce becomes map_reduce
- 4.3.1 More overloads
- 4.3.2 The functional Heritage
- 4.4 Compiler Support
- 4.4.1 Microsoft Visual Compiler
- 4.4.2 GCC Compiler
- 4.4.3 Further Implementations of the Parallel STL
- 4.5 Performance
- Compiler Comparison
- 4.5.1 Microsoft Visual Compiler
- 4.5.2 GCC Compiler
- Distilled Information
5. Coroutines (C++20)
- The Challenge of Understanding Coroutines
- 5.1 A Generator Function
- 5.2 Characteristics
- 5.2.1 Typical Use Cases
- 5.2.2 Underlying Concepts
- 5.2.3 Design Goals
- 5.2.4 Becoming a Coroutine
- Distinguish Between the Coroutine Factory and the Coroutine Object
- 5.3 The Framework
- 5.3.1 Promise Object
- 5.3.2 Coroutine Handle
- The resumable object requires an inner type promise_type
- 5.3.3 Coroutine Frame
- 5.4 Awaitables and Awaiters
- 5.4.1 Awaitables
- 5.4.2 The Concept Awaiter
- 5.4.3 std::suspend_always and std::suspend_never
- 5.4.4 initial_suspend
- 5.4.5 final_suspend
- 5.4.6 Awaiter
- awaiter = awaitable
- 5.5 The Workflows
- 5.5.1 The Promise Workflow
- 5.5.2 The Awaiter Workflow
- 5.6 co_return
- 5.6.1 A Future
- 5.7 co_yield
- 5.7.1 An Infinite Data Stream
- 5.8 co_await
- 5.8.1 Starting a Job on Request
- 5.8.2 Thread Synchronization
- 5.9 std::generator (C++23)
- Distilled Information
6. Case Studies
- The Reference PCs
- 6.1 Calculating the Sum of a Vector
- 6.1.1 Single-Threaded Addition of a Vector
- 6.1.2 Multi-threaded Summation with a Shared Variable
- Reduced Source Files
- 6.1.3 Thread-Local Summation
- 6.1.4 Summation of a Vector: The Conclusion
- 6.2 The Dining Philosophers Problem by Andre Adrian
- 6.2.1 Multiple Resource Use
- 6.2.2 Multiple Resource Use with Logging
- 6.2.3 Erroneous Busy Waiting without Resource Hierarchy
- 6.2.4 Erroneous Busy Waiting with Resource Hierarchy
- 6.2.5 Still Erroneous Busy Waiting with Resource Hierarchy
- 6.2.6 Correct Busy Waiting with Resource Hierarchy
- 6.2.7 Good low CPU load Busy Waiting with Resource Hierarchy
- 6.2.8 std::mutex with Resource Hierarchy
- 6.2.9 std::lock_guard with Resource Hierarchy
- 6.2.10 std::lock_guard and Synchronized Output with Resource Hierarchy
- 6.2.11 std::lock_guard and Synchronized Output with Resource Hierarchy and a count
- 6.2.12 A std::unique_lock using deferred locking
- 6.2.13 A std::scoped_lock with Resource Hierarchy
- 6.2.14 The Original Dining Philosophers Problem using Semaphores
- 6.2.15 A C++20 Compatible Semaphore
- 6.3 Thread-Safe Initialization of a Singleton
- Thoughts about Singletons
- 6.3.1 Double-Checked Locking Pattern
- 6.3.2 Performance Measurement
- The volatile Variable dummy
- 6.3.3 Thread-Safe Meyers Singleton
- I reduce the examples to the singleton implementation
- 6.3.4 std::lock_guard
- 6.3.5 std::call_once with std::once_flag
- 6.3.6 Atomics
- 6.3.7 Performance Numbers of the various Thread-Safe Singleton Implementations
- 6.4 Ongoing Optimization with CppMem
- 6.4.1 CppMem: Non-Atomic Variables
- Guarantees for int variables
- Why is the execution consistent?
- Using volatile
- 6.4.2 CppMem: Locks
- Using std::lock_guard in CppMem
- 6.4.3 CppMem: Atomics with Sequential Consistency
- 6.4.4 CppMem: Atomics with Acquire-Release Semantics
- 6.4.5 CppMem: Atomics with Non-atomics
- 6.4.6 CppMem: Atomics with Relaxed Semantic
- 6.4.7 Conclusion
- 6.5 Fast Synchronization of Threads
- About the Numbers
- 6.5.1 Condition Variables
- 6.5.2 std::atomic_flag
- 6.5.3 std::atomic<bool>
- 6.5.4 Semaphores
- 6.5.5 All Numbers
- 6.6 Variations of Futures
- 6.6.1 A Lazy Future
- Lifetime Challenges of Coroutines
- 6.6.2 Execution on Another Thread
- 6.7 Modification and Generalization of a Generator
- 6.7.1 Modifications
- 6.7.2 Generalization
- 6.8 Various Job Workflows
- 6.8.1 The Transparent Awaiter Workflow
- 6.8.2 Automatically Resuming the Awaiter
- 6.8.3 Automatically Resuming the Awaiter on a Separate Thread
- 6.9 Thread-Safe Queue
- Distilled Information
7. The Future of C++
- 7.1 Executors
- 7.1.1 A long Way
- 7.1.2 What is an Executor?
- Executors are the Building Blocks
- 7.1.3 First Examples
- 7.1.4 Goals of an Executor Concept
- 7.1.5 Terminology
- 7.1.6 Execution Functions
- 7.1.7 A Prototype Implementation
- 7.2 Extended Futures
- 7.2.1 Concurrency TS v1
- The proposal N3721
- 7.2.2 Unified Futures
- 7.3 Transactional Memory
- 7.3.1 ACI(D)
- 7.3.2 Synchronized and Atomic Blocks
- 7.3.3 transaction_safe versus transaction_unsafe Code
- 7.4 Task Blocks
- 7.4.1 Fork and Join
- HPX (High Performance ParalleX)
- 7.4.2 define_task_block versus define_task_block_restore_thread
- 7.4.3 The Interface
- 7.4.4 The Scheduler
- 7.5 Data-Parallel Vector Library
- Auto-Vectorization
- 7.5.1 Data-Parallel Vectors
- 7.5.2 The Interface of the Data-Parallel Vectors
- Distilled Information
- III Patterns
8. Patterns and Best Practices
- 8.1 History
- 8.2 Invaluable Value
- 8.3 Pattern versus Best Practices
- 8.4 Anti-Pattern
- Distilled Information
9. Synchronization Patterns
- 9.1 Dealing with Sharing
- 9.1.1 Copied Value
- Value Object
- 9.1.2 Thread-Specific Storage
- Use the Algorithms of the Standard Template Library
- 9.1.3 Future
- 9.2 Dealing with Mutation
- 9.2.1 Scoped Locking
- 9.2.2 Strategized Locking
- Null Object
- 9.2.3 Thread-Safe Interface
- Inline static data members
- 9.2.4 Guarded Suspension
- Distilled Information
10. Concurrent Architecture
- 10.1 Active Object
- 10.1.1 Challenges
- 10.1.2 Solution
- 10.1.3 Components
- 10.1.4 Dynamic Behavior
- Proxy
- 10.1.5 Advantages and Disadvantages
- 10.1.6 Implementation
- 10.2 Monitor Object
- 10.2.1 Challenges
- 10.2.2 Solution
- 10.2.3 Components
- 10.2.4 Dynamic Behavior
- 10.2.5 Advantages and Disadvantages
- 10.2.6 Implementation
- Thread-Safe Queue - Two Serious Errors
- 10.3 Half-Sync/Half-Async
- 10.3.1 Challenges
- 10.3.2 Solution
- 10.3.3 Components
- 10.3.4 Dynamic Behavior
- 10.3.5 Advantages and Disadvantages
- 10.3.6 Example
- 10.4 Reactor
- 10.4.1 Challenges
- 10.4.2 Solution
- 10.4.3 Components
- Synchronous Event Demultiplexer
- 10.4.4 Dynamic Behavior
- 10.4.5 Advantages and Disadvantages
- 10.4.6 Example
- The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP)
- Acceptor-Connector
- 10.5 Proactor
- 10.5.1 Challenges
- 10.5.2 Solution
- 10.5.3 Components
- 10.5.4 Advantages and Disadvantages
- Asio
- 10.5.5 Example
- 10.6 Further Information
- Distilled Information
11. Best Practices
- 11.1 General
- 11.1.1 Code Reviews
- 11.1.2 Minimize Sharing of Mutable Data
- 11.1.3 Minimize Waiting
- 11.1.4 Prefer Immutable Data
- 11.1.5 Use pure functions
- 11.1.6 Look for the Right Abstraction
- 11.1.7 Use Static Code Analysis Tools
- 11.1.8 Use Dynamic Enforcement Tools
- 11.2 Multithreading
- 11.2.1 Threads
- 11.2.2 Data Sharing
- 11.2.3 Condition Variables
- 11.2.4 Promises and Futures
- 11.3 Memory Model
- 11.3.1 Don't use volatile for synchronization
- 11.3.2 Don't program Lock Free
- 11.3.3 If you program Lock-Free, use well-established patterns
- 11.3.4 Don't build your abstraction, use guarantees of the language
- 11.3.5 Don't reinvent the wheel
- Distilled Information
- IV Data Structures
12. General Considerations
- 12.1 Concurrent Stack
- 12.2 Locking Strategy
- 12.3 Granularity of the Interface
- 12.4 Typical Usage Pattern
- 12.4.1 Linux (GCC)
- 12.4.2 Windows (cl.exe)
- 12.5 Avoidance of Loopholes
- 12.6 Contention
- 12.6.1 Single-Threaded Summation without Synchronization
- 12.6.2 Single-Threaded Summation with Synchronization (lock)
- 12.6.3 Single-Threaded Summation with Synchronization (atomic)
- 12.6.4 The Comparison
- 12.7 Scalability
- 12.8 Invariants
- 12.9 Exceptions
- Distilled Information
13. Lock-Based Data Structures
- 13.1 Concurrent Stack
- 13.1.1 A Stack
- 13.2 Concurrent Queue
- 13.2.1 A Queue
- 13.2.2 Coarse-Grained Locking
- 13.2.3 Fine-Grained Locking
- Distilled Information
14. Lock-Free Data Structures
- Designing a Lock-Free Data Structure is Very Challenging
- 14.1 General Considerations
- 14.1.1 The Next Evolutionary Step
- 14.1.2 Sequential Consistency
- 14.2 Concurrent Stack
- 14.2.1 A Simplified Implementation
- push is lock-free but not wait-free
- 14.2.2 A Complete Implementation
- 14.3 Concurrent Queue
- Distilled Information
- V Further Information
15. Challenges
- 15.1 ABA Problem
- Two new proposals
- 15.2 Blocking Issues
- 15.3 Breaking of Program Invariants
- 15.4 Data Races
- 15.5 Deadlocks
- Locking a non-recursive mutex more than once
- 15.6 False Sharing
- The optimizer detects the false sharing
- std::hardware_destructive_interference_size and std::hardware_constructive_interference_size with C++17
- 15.7 Lifetime Issues of Variables
- 15.8 Moving Threads
- 15.9 Race Conditions
16. The Time Library
- 16.1 The Interplay of Time Point, Time Duration, and Clock
- 16.2 Time Point
- 16.2.1 From Time Point to Calendar Time
- 16.2.2 Cross the valid Time Range
- 16.3 Time Duration
- 16.3.1 Calculations
- Evaluation at compile time
- 16.4 Clocks
- No guarantees about the accuracy, starting point, and valid time range
- 16.4.1 Accuracy and Steadiness
- 16.4.2 Epoch
- 16.5 Sleep and Wait
17. CppMem - An Overview
- 17.1 The simplified Overview
- 17.1.1 1. Model
- 17.1.2 2. Program
- 17.1.3 3. Display Relations
- 17.1.4 4. Display Layout
- 17.1.5 5. Model Predicates
- 17.1.6 The Examples
18. Glossary
- 18.1 address_free
- 18.2 ACID
- 18.3 CAS
- 18.4 Callable Unit
- 18.5 Complexity
- 18.6 Concepts
- 18.7 Concurrency
- 18.8 Critical Section
- 18.9 Deadlock
- 18.10 Eager Evaluation
- 18.11 Executor
- 18.12 Function Objects
- Instantiate function objects to use them
- 18.13 Lambda Functions
- Lambda functions should be your first choice
- 18.14 Lazy evaluation
- 18.15 Lock-free
- 18.16 Lock-based
- 18.17 Lost Wakeup
- 18.18 Math Laws
- 18.19 Memory Location
- 18.20 Memory Model
- 18.21 Modification Order
- 18.22 Monad
- 18.23 Non-blocking
- 18.24 obstruction-free
- 18.25 Parallelism
- 18.26 Predicate
- 18.27 Pattern
- 18.28 RAII
- 18.29 Release Sequence
- 18.30 Sequential Consistency
- 18.31 Sequence Point
- 18.32 Spurious Wakeup
- 18.33 Thread
- 18.34 Total order
- 18.35 TriviallyCopyable
- 18.36 Undefined Behavior
- 18.37 volatile
- 18.38 wait-free
Index