# Intro

One of Rust's big promises is _fearless concurrency_: making it easier to write safe, concurrent programs.

We haven't seen much of that yet. All the work we've done so far has been single-threaded.
Time to change that!

In this chapter we'll make our ticket store multithreaded.\
We'll have the opportunity to touch most of Rust's core concurrency features, including:

- Threads, using the `std::thread` module
- Shared state, using `Arc`, `Mutex` and `RwLock`
- `Send` and `Sync`, the traits that encode Rust's concurrency guarantees

We'll also discuss various design patterns for multithreaded systems and some of their trade-offs.

# Threads

Before we start writing multithreaded code, let's take a step back and talk about what threads are
and why we might want to use them.

## What is a thread?

A **thread** is an execution context managed by the underlying operating system.\
Each thread has its own stack and instruction pointer.

A single **process** can manage multiple threads.
These threads share the same memory space, which means they can access the same data.

Threads are a **logical** construct. In the end, you can only run one set of instructions
at a time on a CPU core, the **physical** execution unit.\
Since there can be many more threads than there are CPU cores, the operating system's
**scheduler** is in charge of deciding which thread to run at any given time,
partitioning CPU time among them to maximize throughput and responsiveness.

## `main`

When a Rust program starts, it runs on a single thread, the **main thread**.\
This thread is created by the operating system and is responsible for running the `main`
function.

```rust
fn main() {
    // The code in here runs on the main thread.
    println!("Hello from the main thread!");
}
```

## `std::thread`

Rust's standard library provides a module, `std::thread`, that allows you to create
and manage threads.

### `spawn`

```rust
use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("Hello from the spawned thread: {i}");
            thread::sleep(Duration::from_millis(1));
        }
    });

    for i in 1..5 {
        println!("Hello from the main thread: {i}");
        thread::sleep(Duration::from_millis(1));
    }
}
```

If you execute this program on the [Rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=afedf7062298ca8f5a248bc551062eaa)
you'll see that the main thread and the spawned thread run concurrently.\
Each thread makes progress independently of the other.

### Process termination

When the main thread finishes, the overall process will exit.\
A spawned thread will continue running until it finishes or the main thread finishes.

```rust
use std::thread;
use std::time::Duration;

fn main() {
    thread::spawn(|| {
        loop {
            thread::sleep(Duration::from_millis(200));
            println!("Hello from a thread!");
        }
    });

    thread::sleep(Duration::from_secs(1));
}
```

In the example above, you can expect to see the message "Hello from a thread!" printed roughly five times.\
Then the main thread will finish (when the `sleep` call returns), and the spawned thread will be terminated
since the overall process exits.

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        println!("Hello from a thread!");
    });

    handle.join().unwrap();
}
```

In this example, the main thread will wait for the spawned thread to finish before exiting.\
This introduces a form of **synchronization** between the two threads: you're guaranteed to see the message
"Hello from a thread!" printed before the program exits, because the main thread won't exit
until the spawned thread has finished.

If you try to borrow a variable (say, `v`) from the surrounding environment in a spawned thread, the compiler
will reject the program with ``error[E0597]: `v` does not live long enough``.

`argument requires that v is borrowed for 'static`, what does that mean?

The `'static` lifetime is a special lifetime in Rust.\
It means that the value will be valid for the entire duration of the program.

## Detached threads

A thread launched via `thread::spawn` can **outlive** the thread that spawned it.\
For example:

```rust
use std::thread;
use std::time::Duration;

fn f() {
    thread::spawn(|| {
        thread::spawn(|| {
            loop {
                thread::sleep(Duration::from_secs(1));
                println!("Hello from the detached thread!");
            }
        });
    });
}
```

In this example, the first spawned thread will in turn spawn
a child thread that prints a message every second.\
The first thread will then finish and exit. When that happens,
its child thread will **continue running** for as long as the
overall process is running.\
In Rust's lingo, we say that the child thread has **outlived**
its parent.

Since a spawned thread can:

- run until the program exits

it must not borrow any values that might be dropped before the program exits;
violating this constraint would expose us to a use-after-free bug.\
That's why `std::thread::spawn`'s signature requires that the closure passed to it
has the `'static` lifetime:

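The bounds on `std::thread::spawn` (abridged in the comments below, from the standard library documentation) are what enforce this. A minimal sketch of satisfying them by `move`-ing ownership into the closure:

```rust
use std::thread;

// Abridged signature from the standard library:
//
//     pub fn spawn<F, T>(f: F) -> JoinHandle<T>
//     where
//         F: FnOnce() -> T,
//         F: Send + 'static,
//         T: Send + 'static,
//
// `F: 'static` is the requirement discussed above.
fn main() {
    let v = vec![1, 2, 3];
    // `move` transfers ownership of `v` into the closure:
    // the closure now owns its data, so it satisfies `'static`.
    let handle = thread::spawn(move || v.into_iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);
}
```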
All values in Rust have a lifetime, not just references.

In particular, a type that owns its data (like a `Vec` or a `String`)
satisfies the `'static` constraint: if you own it, you can keep working with it
for as long as you want, even after the function that originally created it
has returned.

You can thus interpret `'static` as a way to say: this value can be kept around
for as long as the program needs it to be.

The most common case is a reference to **static data**, such as string literals:

```rust
let s: &'static str = "Hello world!";
```

Since string literals are known at compile-time, Rust stores them _inside_ your executable,
in a region known as **read-only data segment**.
All references pointing to that region will therefore be valid for as long as
the program runs; they satisfy the `'static` contract.

## Further reading

# Leaking data

The main concern around passing references to spawned threads is use-after-free bugs:
accessing data using a pointer to a memory region that's already been freed/de-allocated.\
If you're working with heap-allocated data, you can avoid the issue by
telling Rust that you'll never reclaim that memory: you choose to **leak memory**,
intentionally.

This can be done, for example, using the `Box::leak` method from Rust's standard library.

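A minimal sketch (the `42` starting value is illustrative):

```rust
fn main() {
    let x = Box::new(42u32);
    // `Box::leak` consumes the box and hands back a `'static` mutable
    // reference: the heap allocation will never be reclaimed
    // while the process is alive.
    let static_ref: &'static mut u32 = Box::leak(x);
    assert_eq!(*static_ref, 42);
}
```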
## Data leakage is process-scoped

Leaking data is dangerous: if you keep leaking memory, you'll eventually
run out and crash with an out-of-memory error.

```rust
// If you leave this running for a while,
// it'll eventually use all the memory available on your machine.
fn oom_trigger() {
    loop {
        let v: Vec<usize> = vec![0; 1024];
        // The allocation is leaked: it will never be reclaimed
        // while the process is alive.
        v.leak();
    }
}
```

At the same time, memory leaked via `Box::leak` is not truly forgotten.\
The operating system can map each memory region to the process responsible for it.
When the process exits, the operating system will reclaim that memory.

Keeping this in mind, it can be OK to leak memory when:

- The amount of memory you need to leak is bounded/known upfront, or
- Your process is short-lived and you're confident you won't exhaust
  all the available memory before it exits

"Let the OS deal with it" is a perfectly valid memory management strategy
if your use case allows for it.

# Scoped threads

All the lifetime issues we discussed so far have a common source:
the spawned thread can outlive its parent.\
We can sidestep this issue by using **scoped threads**.

```rust
let v = vec![1, 2, 3];
let midpoint = v.len() / 2;

std::thread::scope(|scope| {
    scope.spawn(|| {
        let first = &v[..midpoint];
        println!("Here's the first half of v: {first:?}");
    });
    scope.spawn(|| {
        let second = &v[midpoint..];
        println!("Here's the second half of v: {second:?}");
    });
});

println!("Here's v: {v:?}");
```

Let's unpack what's happening.

## `scope`

The `std::thread::scope` function creates a new **scope**.\
`std::thread::scope` takes as input a closure, with a single argument: a `Scope` instance.

## Scoped spawns

`Scope` exposes a `spawn` method.\
Unlike `std::thread::spawn`, all threads spawned using a `Scope` will be
**automatically joined** when the scope ends.

If we were to "translate" the previous example to `std::thread::spawn`,
it'd look like this:

```rust
let v = vec![1, 2, 3];
let midpoint = v.len() / 2;

let handle1 = std::thread::spawn(|| {
    let first = &v[..midpoint];
    println!("Here's the first half of v: {first:?}");
});
let handle2 = std::thread::spawn(|| {
    let second = &v[midpoint..];
    println!("Here's the second half of v: {second:?}");
});

handle1.join().unwrap();
handle2.join().unwrap();

println!("Here's v: {v:?}");
```

The translated example wouldn't compile, though: the compiler would complain
that `&v` can't be used from our spawned threads since its lifetime isn't
`'static`.

That's not an issue with `std::thread::scope`—you can **safely borrow from the environment**.

In our example, `v` is created before the spawning points.
It will only be dropped _after_ `scope` returns. At the same time,
all threads spawned inside `scope` are guaranteed to finish _before_ `scope` returns,
therefore there is no risk of having dangling references.

The compiler won't complain!

# Channels

All our spawned threads have been fairly short-lived so far.\
Get some input, run a computation, return the result, shut down.

For our ticket management system, we want to do something different:
a client-server architecture.

We will have **one long-running server thread**, responsible for managing
our state, the stored tickets.

We will then have **multiple client threads**.\
Each client will be able to send **commands** and **queries** to
the stateful thread, in order to change its state (e.g. add a new ticket)
or retrieve information (e.g. get the status of a ticket).\
Client threads will run concurrently.

## Communication

So far we've only had very limited parent-child communication:

- The spawned thread borrowed/consumed data from the parent context
- The spawned thread returned data to the parent when joined

This isn't enough for a client-server design.\
Clients need to be able to send and receive data from the server thread
_after_ it has been launched.

We can solve the issue using **channels**.

## Channels

Rust's standard library provides **multi-producer, single-consumer** (mpsc) channels
in its `std::sync::mpsc` module.\
There are two channel flavours: bounded and unbounded. We'll stick to the unbounded
version for now, but we'll discuss the pros and cons later on.

```rust
use std::sync::mpsc::channel;

let (sender, receiver) = channel();
```

You get a sender and a receiver.\
You call `send` on the sender to push data into the channel.\
You call `recv` on the receiver to pull data from the channel.

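A short end-to-end sketch (the message value is arbitrary):

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (sender, receiver) = channel();

    // Push data into the channel from another thread...
    thread::spawn(move || {
        sender.send(42).unwrap();
    });

    // ...and pull it out on this one. `recv` blocks until a message
    // arrives (or until all senders have been dropped).
    assert_eq!(receiver.recv().unwrap(), 42);
}
```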
### Multiple senders

`Sender` is clonable: we can create multiple senders (e.g. one for
each client thread) and they will all push data into the same channel.

`Receiver`, instead, is not clonable: there can only be a single receiver
for a given channel.

That's what **mpsc** (multi-producer single-consumer) stands for!

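A sketch with two cloned senders feeding the same receiver from different threads:

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (sender1, receiver) = channel();
    // Cloning gives us a second handle into the same channel.
    let sender2 = sender1.clone();

    // Each clone can be moved to its own thread.
    let t1 = thread::spawn(move || sender1.send(1).unwrap());
    let t2 = thread::spawn(move || sender2.send(2).unwrap());
    t1.join().unwrap();
    t2.join().unwrap();

    // Both messages end up in the same channel,
    // in whatever order the threads ran.
    let mut received: Vec<i32> = receiver.iter().take(2).collect();
    received.sort();
    assert_eq!(received, vec![1, 2]);
}
```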
### Message type

Both `Sender` and `Receiver` are generic over a type parameter `T`.\
That's the type of the _messages_ that can travel on our channel.

It could be a `u64`, a struct, an enum, etc.

### Errors

Both `send` and `recv` can fail.\
`send` returns an error if the receiver has been dropped.\
`recv` returns an error if all senders have been dropped and the channel is empty.

In other words, `send` and `recv` error when the channel is effectively closed.

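Both failure modes in a short sketch:

```rust
use std::sync::mpsc::channel;

fn main() {
    // `send` fails once the receiver is gone...
    let (sender, receiver) = channel();
    drop(receiver);
    assert!(sender.send(1).is_err());

    // ...and `recv` fails once all senders are gone
    // and the channel is empty.
    let (sender, receiver) = channel::<i32>();
    drop(sender);
    assert!(receiver.recv().is_err());
}
```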
```rust
impl<T> Sender<T> {
    pub fn send(&self, t: T) -> Result<(), SendError<T>> {
        // [...]
    }
}
```

`send` takes `&self` as its argument.\
But it's clearly causing a mutation: it's adding a new message to the channel.
What's even more interesting is that `Sender` is cloneable: we can have multiple instances of `Sender`
trying to modify the channel state **at the same time**, from different threads.

That's the key property we are using to build this client-server architecture. But why does it work?
Doesn't it violate Rust's rules about borrowing? How are we performing mutations via an _immutable_ reference?

## Shared rather than immutable references

It would have been more accurate to name them:

- shared references (`&T`)
- exclusive references (`&mut T`)

Immutable/mutable is a mental model that works for the vast majority of cases, and it's a great one to get started
with Rust. But it's not the whole story, as you've just seen: `&T` doesn't actually guarantee that the data it
points to is immutable.\
Don't worry, though: Rust is still keeping its promises.
It's just that the terms are a bit more nuanced than they might seem at first.

## `UnsafeCell`

Whenever a type allows you to mutate data through a shared reference, you're dealing with **interior mutability**.

By default, the Rust compiler assumes that shared references are immutable. It **optimises your code** based on that assumption.\
The compiler can reorder operations, cache values, and do all sorts of magic to make your code faster.

You can tell the compiler "No, this shared reference is actually mutable" by wrapping the data in an `UnsafeCell`.\
Every time you see a type that allows interior mutability, you can be certain that `UnsafeCell` is involved,
either directly or indirectly.\
Using `UnsafeCell`, raw pointers and `unsafe` code, you can mutate data through shared references.

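For a safe taste of what this enables, `Cell` (a `std` wrapper built on top of `UnsafeCell`) lets you replace a value through a shared reference:

```rust
use std::cell::Cell;

fn main() {
    let counter = Cell::new(0);
    let shared: &Cell<i32> = &counter; // a shared reference...

    // ...and yet we can mutate through it: `Cell::set` takes `&self`.
    shared.set(shared.get() + 1);
    shared.set(shared.get() + 1);

    assert_eq!(counter.get(), 2);
}
```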
Let's be clear, though: `UnsafeCell` isn't a magic wand that allows you to ignore the borrow-checker!\
`unsafe` code is still subject to Rust's rules about borrowing and aliasing.
It's an (advanced) tool that you can leverage to build **safe abstractions** whose safety can't be directly expressed
in Rust's type system. Whenever you use the `unsafe` keyword you're telling the compiler:
"I know what I'm doing, I won't violate your invariants, trust me."

Every time you call an `unsafe` function, there will be documentation explaining its **safety preconditions**:
under what circumstances it's safe to execute its `unsafe` block. You can find the ones for `UnsafeCell`
[in `std`'s documentation](https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html).

We won't be using `UnsafeCell` directly in this course, nor will we be writing `unsafe` code.
It's still worth knowing about, since it sits underneath many of the abstractions you use
every day in Rust.

## Key examples

Let's go through a couple of important `std` types that leverage interior mutability.\
These are types that you'll encounter somewhat often in Rust code, especially if you peek under the hood of
some of the libraries you use.

### Reference counting

`Rc` is a reference-counted pointer.\
It wraps around a value and keeps track of how many references to the value exist.
When the last reference is dropped, the value is deallocated.\
The value wrapped in an `Rc` is immutable: you can only get shared references to it.

```rust
use std::rc::Rc;

let a: Rc<String> = Rc::new("My string".to_string());
// Cloning an `Rc` doesn't copy the underlying data:
// it just increments the reference count.
let b = Rc::clone(&a);

assert_eq!(Rc::strong_count(&a), 2);
assert_eq!(Rc::strong_count(&b), 2);
// Both `a` and `b` point to the same string
// and share the same reference counter.
```

`Rc` uses `UnsafeCell` internally to allow shared references to increment and decrement the reference count.

### `RefCell`

It allows you to mutate the value wrapped in a `RefCell` even if you only have an
immutable reference to the `RefCell` itself.

This is done via **runtime borrow checking**.\
The `RefCell` keeps track of the number (and type) of references to the value it contains at runtime.
If you try to borrow the value mutably while it's already borrowed immutably,
the program will panic, ensuring that Rust's borrowing rules are always enforced.

```rust
use std::cell::RefCell;

let x = RefCell::new(42);

let y = x.borrow(); // Immutable borrow
let z = x.borrow_mut(); // Panics! There is an active immutable borrow.
```

# Two-way communication

In our current client-server implementation, communication flows in one direction: from the client to the server.\
The client has no way of knowing if the server received the message, executed it successfully, or failed.
That's not ideal.

To solve this issue, we can introduce a two-way communication system.

## Response channel

We need a way for the server to send a response back to the client.\
There are various ways to do this, but the simplest option is to include a `Sender` channel in
the message that the client sends to the server. After processing the message, the server can use
this channel to send a response back to the client.

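A sketch of the pattern; the `Command` variant and field names here are illustrative stand-ins, not the exercise's actual types:

```rust
use std::sync::mpsc::{channel, Sender};

// Hypothetical message type: the client bundles a `Sender`
// into the request so the server has somewhere to reply.
enum Command {
    Insert {
        title: String,
        response_channel: Sender<u64>,
    },
}

fn main() {
    let (server_tx, server_rx) = channel();

    // Client side: create a dedicated response channel for this request.
    let (response_tx, response_rx) = channel();
    server_tx
        .send(Command::Insert {
            title: "New ticket".into(),
            response_channel: response_tx,
        })
        .unwrap();

    // Server side: process the command, then reply over the embedded sender.
    if let Ok(Command::Insert { response_channel, .. }) = server_rx.recv() {
        response_channel.send(1).unwrap(); // e.g. the new ticket's id
    }

    // Client side: wait for the response.
    assert_eq!(response_rx.recv().unwrap(), 1);
}
```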
This is a fairly common pattern in Rust applications built on top of message-passing primitives.

# A dedicated `Client` type

All the interactions from the client side have been fairly low-level: you have to
manually create a response channel, build the command, send it to the server, and
then call `recv` on the response channel to get the response.

This is a lot of boilerplate code that could be abstracted away, and that's
exactly what we're going to do in this exercise.

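One possible shape for such an abstraction (a sketch; `Command` and the `insert` method are illustrative stand-ins for the exercise's types):

```rust
use std::sync::mpsc::{channel, Sender};

// Illustrative command type, mirroring the response-channel pattern.
enum Command {
    Insert {
        title: String,
        response_channel: Sender<u64>,
    },
}

// The client wraps the sender and hides the channel plumbing.
#[derive(Clone)]
struct Client {
    sender: Sender<Command>,
}

impl Client {
    fn insert(&self, title: String) -> u64 {
        let (response_tx, response_rx) = channel();
        self.sender
            .send(Command::Insert { title, response_channel: response_tx })
            .unwrap();
        response_rx.recv().unwrap()
    }
}

fn main() {
    let (tx, rx) = channel();
    let client = Client { sender: tx };
    // A stand-in server thread that replies `1` to every insert.
    std::thread::spawn(move || {
        while let Ok(Command::Insert { response_channel, .. }) = rx.recv() {
            response_channel.send(1).unwrap();
        }
    });
    assert_eq!(client.insert("A ticket".into()), 1);
}
```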
# Bounded vs unbounded channels

So far we've been using unbounded channels.\
You can send as many messages as you want, and the channel will grow to accommodate them.\
In a multi-producer single-consumer scenario, this can be problematic: if the producers
enqueue messages at a faster rate than the consumer can process them, the channel will
keep growing, potentially consuming all available memory.

Our recommendation is to **never** use an unbounded channel in a production system.\
You should always enforce an upper limit on the number of messages that can be enqueued using a
**bounded channel**.

## Bounded channels

A bounded channel has a fixed capacity.\
You can create one by calling `sync_channel` with a capacity greater than zero:

```rust
use std::sync::mpsc::sync_channel;

let (sender, receiver) = sync_channel(10);
```

`receiver` has the same type as before, `Receiver<T>`.\
`sender`, instead, is an instance of `SyncSender<T>`.

### Sending messages

You have two different methods to send messages through a `SyncSender`:

- `send`: if there is space in the channel, it will enqueue the message and return `Ok(())`.\
  If the channel is full, it will block and wait until there is space available.
- `try_send`: if there is space in the channel, it will enqueue the message and return `Ok(())`.\
  If the channel is full, it will return `Err(TrySendError::Full(value))`, where `value` is the message that couldn't be sent.

Depending on your use case, you might want to use one or the other.

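A sketch contrasting the two, using a capacity-1 channel so the second message finds it full:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity of 1: a single unconsumed message fills the channel.
    let (sender, receiver) = sync_channel(1);

    assert!(sender.try_send(1).is_ok());
    // The channel is full: `try_send` hands the rejected message back
    // instead of blocking like `send` would.
    assert!(matches!(sender.try_send(2), Err(TrySendError::Full(2))));

    // Draining the channel makes room again.
    assert_eq!(receiver.recv().unwrap(), 1);
    assert!(sender.try_send(3).is_ok());
}
```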
### Backpressure

The main advantage of using bounded channels is that they provide a form of **backpressure**.\
They force the producers to slow down if the consumer can't keep up.
The backpressure can then propagate through the system, potentially affecting the whole architecture and
preventing end users from overwhelming the system with requests.

# Update operations

So far we've implemented only insertion and retrieval operations.\
Let's see how we can expand the system to provide an update operation.

## Legacy updates

In the non-threaded version of the system, updates were fairly straightforward: `TicketStore` exposed a
`get_mut` method that allowed the caller to obtain a mutable reference to a ticket, and then modify it.

## Multithreaded updates

The same strategy won't work in the current multi-threaded version,
because the mutable reference would have to be sent over a channel. The borrow checker would
stop us, because `&mut Ticket` doesn't satisfy the `'static` lifetime requirement of `SyncSender::send`.

There are a few ways to work around this limitation. We'll explore a few of them
in the following sections.

### Patching

We can't send a `&mut Ticket` over a channel, therefore we can't mutate on the client-side.\
Can we mutate on the server-side?

We can, if we tell the server what needs to be changed. In other words, if we send a **patch** to the server:

```rust
struct TicketPatch {
    id: TicketId,
    title: Option<TicketTitle>,
    description: Option<TicketDescription>,
    status: Option<Status>,
}
```

The `id` field is mandatory, since it's required to identify the ticket that needs to be updated.\
All other fields are optional:

- If a field is `None`, it means that the field should not be changed.
- If a field is `Some(value)`, it means that the field should be changed to `value`.

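Applying a patch on the server side then boils down to overwriting only the `Some` fields. A sketch with plain stand-in types (the exercise's actual types differ):

```rust
// Simplified stand-ins for the exercise's domain types.
#[derive(Debug, PartialEq)]
enum Status { ToDo, Done }

struct Ticket { title: String, status: Status }

struct TicketPatch { title: Option<String>, status: Option<Status> }

fn apply(ticket: &mut Ticket, patch: TicketPatch) {
    // `None` fields are left untouched; `Some` fields overwrite.
    if let Some(title) = patch.title {
        ticket.title = title;
    }
    if let Some(status) = patch.status {
        ticket.status = status;
    }
}

fn main() {
    let mut ticket = Ticket { title: "Old".into(), status: Status::ToDo };
    apply(&mut ticket, TicketPatch { title: None, status: Some(Status::Done) });
    assert_eq!(ticket.title, "Old"); // untouched
    assert_eq!(ticket.status, Status::Done); // overwritten
}
```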
# Locks, `Send` and `Arc`

The patching strategy you just implemented has a major drawback: it's racy.\
If two clients send patches for the same ticket roughly at the same time, the server will apply them in an arbitrary order.
Whoever enqueues their patch last will overwrite the changes made by the other client.

## Version numbers

We could try to fix this by using a **version number**.\
Each ticket gets assigned a version number upon creation, set to `0`.\
Whenever a client sends a patch, they must include the current version number of the ticket alongside the
desired changes. The server will only apply the patch if the version number matches the one it has stored.

In the scenario described above, the server would reject the second patch, because the version number would
have been incremented by the first patch and thus wouldn't match the one sent by the second client.

This approach is fairly common in distributed systems (e.g. when client and servers don't share memory),
|
||||
and it is known as **optimistic concurrency control**.
|
||||
and it is known as **optimistic concurrency control**.\
|
||||
The idea is that most of the time, conflicts won't happen, so we can optimize for the common case.
|
||||
You know enough about Rust by now to implement this strategy on your own as a bonus exercise, if you want to.
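If you want a starting point without spoiling the full exercise, the core of optimistic concurrency control is a single compare-then-apply check. Everything here (`VersionedTicket`, `try_patch`, the fields) is a hypothetical sketch, not the exercise's solution:

```rust
// A minimal sketch of the version check behind optimistic concurrency control.
struct VersionedTicket {
    version: u64,
    title: String,
}

// Apply the new title only if the client's expected version matches;
// returns whether the patch was accepted.
fn try_patch(ticket: &mut VersionedTicket, expected_version: u64, new_title: String) -> bool {
    if ticket.version != expected_version {
        return false; // stale patch: reject, the client must re-read and retry
    }
    ticket.title = new_title;
    ticket.version += 1;
    true
}

fn main() {
    let mut ticket = VersionedTicket { version: 0, title: "Original".into() };
    assert!(try_patch(&mut ticket, 0, "First edit".into()));
    // A second client still holding version 0 is rejected:
    assert!(!try_patch(&mut ticket, 0, "Conflicting edit".into()));
    assert_eq!(ticket.title, "First edit");
}
```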
## Locking

We can also fix the race condition by introducing a **lock**.
We can also fix the race condition by introducing a **lock**.\
Whenever a client wants to update a ticket, they must first acquire a lock on it. While the lock is active,
no other client can modify the ticket.

Rust's standard library provides two different locking primitives: `Mutex<T>` and `RwLock<T>`.
Rust's standard library provides two different locking primitives: `Mutex<T>` and `RwLock<T>`.\
Let's start with `Mutex<T>`. It stands for **mut**ual **ex**clusion, and it's the simplest kind of lock:
it allows only one thread to access the data, no matter if it's for reading or writing.

`Mutex<T>` wraps the data it protects, and it's therefore generic over the type of the data.
You can't access the data directly: the type system forces you to acquire a lock first using either `Mutex::lock` or
`Mutex<T>` wraps the data it protects, and it's therefore generic over the type of the data.\
You can't access the data directly: the type system forces you to acquire a lock first using either `Mutex::lock` or
`Mutex::try_lock`. The former blocks until the lock is acquired; the latter returns immediately with an error if the lock
can't be acquired.
Both methods return a guard object that dereferences to the data, allowing you to modify it. The lock is released when
can't be acquired.\
Both methods return a guard object that dereferences to the data, allowing you to modify it. The lock is released when
the guard is dropped.
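The lock/guard flow just described can be sketched as follows; a plain `u32` stands in for the ticket:

```rust
use std::sync::Mutex;

fn main() {
    let lock = Mutex::new(0_u32);

    {
        // `lock()` blocks until the lock is acquired and returns a guard.
        let mut guard = lock.lock().unwrap();
        // The guard dereferences to the protected data.
        *guard += 1;
    } // the guard is dropped here, releasing the lock

    // `try_lock()` returns immediately; it succeeds because nobody holds the lock.
    let guard = lock.try_lock().unwrap();
    assert_eq!(*guard, 1);
}
```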
```rust
@@ -57,10 +57,10 @@ drop(guard)

## Locking granularity

What should our `Mutex` wrap?
The simplest option would be to wrap the entire `TicketStore` in a single `Mutex`.
What should our `Mutex` wrap?\
The simplest option would be to wrap the entire `TicketStore` in a single `Mutex`.\
This would work, but it would severely limit the system's performance: you wouldn't be able to read tickets in parallel,
because every read would have to wait for the lock to be released.
because every read would have to wait for the lock to be released.\
This is known as **coarse-grained locking**.

It would be better to use **fine-grained locking**, where each ticket is protected by its own lock.

@@ -73,17 +73,17 @@ struct TicketStore {
}
```
This approach is more efficient, but it has a downside: `TicketStore` has to become **aware** of the multithreaded
nature of the system; up until now, `TicketStore` has blissfully ignored the existence of threads.
This approach is more efficient, but it has a downside: `TicketStore` has to become **aware** of the multithreaded
nature of the system; up until now, `TicketStore` has blissfully ignored the existence of threads.\
Let's go for it anyway.

## Who holds the lock?

For the whole scheme to work, the lock must be passed to the client that wants to modify the ticket.
For the whole scheme to work, the lock must be passed to the client that wants to modify the ticket.\
The client can then directly modify the ticket (as if they had a `&mut Ticket`) and release the lock when they're done.

This is a bit tricky.
We can't send a `Mutex<Ticket>` over a channel, because `Mutex` is not `Clone` and
This is a bit tricky.\
We can't send a `Mutex<Ticket>` over a channel, because `Mutex` is not `Clone` and
we can't move it out of the `TicketStore`. Could we send the `MutexGuard` instead?

Let's test the idea with a small example:

@@ -131,22 +131,22 @@ note: required because it's used within this closure
## `Send`

`Send` is a marker trait that indicates that a type can be safely transferred from one thread to another.
`Send` is a marker trait that indicates that a type can be safely transferred from one thread to another.\
`Send` is also an auto-trait, just like `Sized`; it's automatically implemented (or not implemented) for your type
by the compiler, based on its definition.
You can also implement `Send` manually for your types, but it requires `unsafe` since you have to guarantee that the
by the compiler, based on its definition.\
You can also implement `Send` manually for your types, but it requires `unsafe` since you have to guarantee that the
type is indeed safe to send between threads for reasons that the compiler can't automatically verify.

### Channel requirements

`Sender<T>`, `SyncSender<T>` and `Receiver<T>` are `Send` if and only if `T` is `Send`.
`Sender<T>`, `SyncSender<T>` and `Receiver<T>` are `Send` if and only if `T` is `Send`.\
That's because they are used to send values between threads, and if the value itself is not `Send`, it would be
unsafe to send it between threads.

### `MutexGuard`

`MutexGuard` is not `Send` because the underlying operating system primitives that `Mutex` uses to implement
the lock require (on some platforms) that the lock must be released by the same thread that acquired it.
the lock require (on some platforms) that the lock must be released by the same thread that acquired it.\
If we were to send a `MutexGuard` to another thread, the lock would be released by a different thread, which would
lead to undefined behavior.

@@ -154,13 +154,13 @@ lead to undefined behavior.
Summing it up:

- We can't send a `MutexGuard` over a channel. So we can't lock on the server-side and then modify the ticket on the
- We can't send a `MutexGuard` over a channel. So we can't lock on the server-side and then modify the ticket on the
  client-side.
- We can send a `Mutex` over a channel because it's `Send` as long as the data it protects is `Send`, which is the
  case for `Ticket`.
  At the same time, we can't move the `Mutex` out of the `TicketStore` nor clone it.
- We can send a `Mutex` over a channel because it's `Send` as long as the data it protects is `Send`, which is the
  case for `Ticket`.
  At the same time, we can't move the `Mutex` out of the `TicketStore` nor clone it.

How can we solve this conundrum?
How can we solve this conundrum?\
We need to look at the problem from a different angle.
To lock a `Mutex`, we don't need an owned value. A shared reference is enough, since `Mutex` uses interior mutability:

@@ -173,15 +173,15 @@ impl<T> Mutex<T> {
}
```

It is therefore enough to send a shared reference to the client.
We can't do that directly, though, because the reference would have to be `'static` and that's not the case.
It is therefore enough to send a shared reference to the client.\
We can't do that directly, though, because the reference would have to be `'static` and that's not the case.\
In a way, we need an "owned shared reference". It turns out that Rust has a type that fits the bill: `Arc`.
## `Arc` to the rescue

`Arc` stands for **atomic reference counting**.
`Arc` stands for **atomic reference counting**.\
`Arc` wraps around a value and keeps track of how many references to the value exist.
When the last reference is dropped, the value is deallocated.
When the last reference is dropped, the value is deallocated.\
The value wrapped in an `Arc` is immutable: you can only get shared references to it.

```rust
@@ -196,9 +196,9 @@ let data_ref: &u32 = &data;
```
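A small sketch of `Arc` in action: cloning bumps the reference count (the data itself is not copied), and clones can be moved to other threads:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data: Arc<Vec<u32>> = Arc::new(vec![1, 2, 3]);

    let clone = Arc::clone(&data); // increments the count; the Vec is not copied
    assert_eq!(Arc::strong_count(&data), 2);

    // The clone can be moved to another thread, which reads the shared Vec.
    let handle = thread::spawn(move || clone.iter().sum::<u32>());
    assert_eq!(handle.join().unwrap(), 6);

    drop(data); // last reference dropped here, so the Vec is deallocated
}
```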
If you're having a déjà vu moment, you're right: `Arc` sounds very similar to `Rc`, the reference-counted pointer we
introduced when talking about interior mutability. The difference is thread-safety: `Rc` is not `Send`, while `Arc` is.
It boils down to the way the reference count is implemented: `Rc` uses a "normal" integer, while `Arc` uses an
**atomic** integer, which can be safely shared and modified across threads.
introduced when talking about interior mutability. The difference is thread-safety: `Rc` is not `Send`, while `Arc` is.
It boils down to the way the reference count is implemented: `Rc` uses a "normal" integer, while `Arc` uses an
**atomic** integer, which can be safely shared and modified across threads.

## `Arc<Mutex<T>>`

@@ -209,9 +209,9 @@ If we pair `Arc` with `Mutex`, we finally get a type that:

- `Mutex` is `Send` if `T` is `Send`.
- `T` is `Ticket`, which is `Send`.
- Can be cloned, because `Arc` is `Clone` no matter what `T` is.
  Cloning an `Arc` increments the reference count; the data is not copied.
  Cloning an `Arc` increments the reference count; the data is not copied.
- Can be used to modify the data it wraps, because `Arc` lets you get a shared
  reference to `Mutex<T>` which can in turn be used to acquire a lock.
  reference to `Mutex<T>` which can in turn be used to acquire a lock.

We have all the pieces we need to implement the locking strategy for our ticket store.
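The combination can be sketched like this; a `String` status stands in for a real `Ticket`, and two threads modify it through clones of the same `Arc<Mutex<_>>`:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let ticket = Arc::new(Mutex::new(String::from("To-Do")));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let ticket = Arc::clone(&ticket); // cheap: only bumps the refcount
            thread::spawn(move || {
                // A shared reference to the Mutex is enough to acquire the lock.
                let mut status = ticket.lock().unwrap();
                status.push('!');
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(ticket.lock().unwrap().as_str(), "To-Do!!");
}
```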
@@ -219,4 +219,4 @@ We have all the pieces we need to implement the locking strategy for our ticket

- We won't be covering the details of atomic operations in this course, but you can find more information
  [in the `std` documentation](https://doc.rust-lang.org/std/sync/atomic/index.html) as well as in the
  ["Rust atomics and locks" book](https://marabos.nl/atomics/).
  ["Rust atomics and locks" book](https://marabos.nl/atomics/).
@@ -3,11 +3,11 @@
|
||||
Our new `TicketStore` works, but its read performance is not great: there can only be one client at a time
|
||||
reading a specific ticket, because `Mutex<T>` doesn't distinguish between readers and writers.
|
||||
|
||||
We can solve the issue by using a different locking primitive: `RwLock<T>`.
|
||||
`RwLock<T>` stands for **read-write lock**. It allows **multiple readers** to access the data simultaneously,
|
||||
but only one writer at a time.
|
||||
We can solve the issue by using a different locking primitive: `RwLock<T>`.\
|
||||
`RwLock<T>` stands for **read-write lock**. It allows **multiple readers** to access the data simultaneously,
|
||||
but only one writer at a time.
|
||||
|
||||
`RwLock<T>` has two methods to acquire a lock: `read` and `write`.
|
||||
`RwLock<T>` has two methods to acquire a lock: `read` and `write`.\
|
||||
`read` returns a guard that allows you to read the data, while `write` returns a guard that allows you to modify it.
|
||||
|
||||
```rust
|
||||
@@ -26,20 +26,20 @@ let guard2 = lock.read().unwrap();
## Trade-offs

On the surface, `RwLock<T>` seems like a no-brainer: it provides a superset of the functionality of `Mutex<T>`.
On the surface, `RwLock<T>` seems like a no-brainer: it provides a superset of the functionality of `Mutex<T>`.
Why would you ever use `Mutex<T>` if you can use `RwLock<T>` instead?

There are two key reasons:

- Locking a `RwLock<T>` is more expensive than locking a `Mutex<T>`.
  This is because `RwLock<T>` has to keep track of the number of active readers and writers, while `Mutex<T>`
- Locking a `RwLock<T>` is more expensive than locking a `Mutex<T>`.\
  This is because `RwLock<T>` has to keep track of the number of active readers and writers, while `Mutex<T>`
  only has to keep track of whether the lock is held or not.
  This performance overhead is not an issue if there are more readers than writers, but if the workload
  is write-heavy, `Mutex<T>` might be a better choice.
- `RwLock<T>` can cause **writer starvation**.
  If there are always readers waiting to acquire the lock, writers might never get a chance to run.
  is write-heavy, `Mutex<T>` might be a better choice.
- `RwLock<T>` can cause **writer starvation**.\
  If there are always readers waiting to acquire the lock, writers might never get a chance to run.\
  `RwLock<T>` doesn't provide any guarantees about the order in which readers and writers are granted access to the lock.
  It depends on the policy implemented by the underlying OS, which might not be fair to writers.

In our case, we can expect the workload to be read-heavy (since most clients will be reading tickets, not modifying them),
so `RwLock<T>` is a good choice.
so `RwLock<T>` is a good choice.
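The reader/writer distinction can be sketched in a few lines: two read guards coexist, while a write guard requires exclusive access:

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(vec![1, 2, 3]);

    {
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap(); // a second reader is fine
        assert_eq!(r1.len(), r2.len());
        // A writer can't get in while readers hold the lock:
        assert!(lock.try_write().is_err());
    } // both read guards dropped here

    lock.write().unwrap().push(4); // exclusive access, released at end of statement
    assert_eq!(lock.read().unwrap().len(), 4);
}
```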
@@ -1,30 +1,30 @@

# Design review

Let's take a moment to review the journey we've been through.
Let's take a moment to review the journey we've been through.

## Lockless with channel serialization

Our first implementation of a multithreaded ticket store used:

- a single long-lived thread (server), to hold the shared state
- multiple clients sending requests to it via channels from their own threads.
- multiple clients sending requests to it via channels from their own threads.

No locking of the state was necessary, since the server was the only one modifying the state. That's because
the "inbox" channel naturally **serialized** incoming requests: the server would process them one by one.
We've already discussed the limitations of this approach when it comes to patching behaviour, but we didn't
No locking of the state was necessary, since the server was the only one modifying the state. That's because
the "inbox" channel naturally **serialized** incoming requests: the server would process them one by one.\
We've already discussed the limitations of this approach when it comes to patching behaviour, but we didn't
discuss the performance implications of the original design: the server could only process one request at a time,
including reads.

## Fine-grained locking

We then moved to a more sophisticated design, where each ticket was protected by its own lock and
clients could independently decide if they wanted to read or atomically modify a ticket, acquiring the appropriate lock.
clients could independently decide if they wanted to read or atomically modify a ticket, acquiring the appropriate lock.

This design allows for better parallelism (i.e. multiple clients can read tickets at the same time), but it is
This design allows for better parallelism (i.e. multiple clients can read tickets at the same time), but it is
still fundamentally **serial**: the server processes commands one by one. In particular, it hands out locks to clients
one by one.

Could we remove the channels entirely and allow clients to directly access the `TicketStore`, relying exclusively on
Could we remove the channels entirely and allow clients to directly access the `TicketStore`, relying exclusively on
locks to synchronize access?

## Removing channels

@@ -37,18 +37,18 @@ We have two problems to solve:
### Sharing `TicketStore` across threads

We want all threads to refer to the same state, otherwise we don't really have a multithreaded system—we're just
running multiple single-threaded systems in parallel.
running multiple single-threaded systems in parallel.\
We've already encountered this problem when we tried to share a lock across threads: we can use an `Arc`.

### Synchronizing access to the store

There is one interaction that's still lockless thanks to the serialization provided by the channels: inserting
(or removing) a ticket from the store.
(or removing) a ticket from the store.\
If we remove the channels, we need to introduce (another) lock to synchronize access to the `TicketStore` itself.

If we use a `Mutex`, then it makes no sense to use an additional `RwLock` for each ticket: the `Mutex` will
already serialize access to the entire store, so we wouldn't be able to read tickets in parallel anyway.
already serialize access to the entire store, so we wouldn't be able to read tickets in parallel anyway.\
If we use a `RwLock`, instead, we can read tickets in parallel. We just need to pause all reads while inserting
or removing a ticket.

Let's go down this path and see where it leads us.
Let's go down this path and see where it leads us.
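The shape this design leads to can be sketched as follows. The names (`TicketId`, the `tickets` map, the `title` field) are illustrative assumptions: the outer `RwLock` guards insertions and removals, while each ticket keeps its own lock so operations on different tickets don't contend:

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

type TicketId = u64;

struct Ticket {
    title: String,
}

#[derive(Default)]
struct TicketStore {
    // Outer RwLock (added by the caller) guards the map itself;
    // the per-ticket RwLock guards each ticket's contents.
    tickets: BTreeMap<TicketId, Arc<RwLock<Ticket>>>,
}

fn main() {
    let store = Arc::new(RwLock::new(TicketStore::default()));

    // Insertion takes the outer write lock, briefly pausing all reads.
    store.write().unwrap().tickets.insert(
        1,
        Arc::new(RwLock::new(Ticket { title: "First".into() })),
    );

    // Reading a ticket: outer read lock to find it, inner read lock to use it.
    let ticket = Arc::clone(store.read().unwrap().tickets.get(&1).unwrap());
    assert_eq!(ticket.read().unwrap().title, "First");
}
```

Cloning the inner `Arc` lets a client keep using the ticket after the outer read guard is released, which keeps the store-wide lock held only briefly.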
@@ -1,28 +1,28 @@

# `Sync`

Before we wrap up this chapter, let's talk about another key trait in Rust's standard library: `Sync`.
Before we wrap up this chapter, let's talk about another key trait in Rust's standard library: `Sync`.

`Sync` is an auto trait, just like `Send`.
`Sync` is an auto trait, just like `Send`.\
It is automatically implemented by all types that can be safely **shared** between threads.

In other words: `T: Sync` means that `&T` is `Send`.
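A small sketch of what "`T: Sync` means that `&T` is `Send`" buys us: `Vec<u32>` is `Sync`, so scoped threads can each hold a shared reference to the same value:

```rust
use std::thread;

fn main() {
    let data = vec![1_u32, 2, 3];

    let total: u32 = thread::scope(|s| {
        let h1 = s.spawn(|| data.iter().sum::<u32>()); // shares &data
        let h2 = s.spawn(|| data.len() as u32); // shares &data too
        h1.join().unwrap() + h2.join().unwrap()
    });

    assert_eq!(total, 9); // 6 + 3
}
```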
## `Sync` doesn't imply `Send`

It's important to note that `Sync` doesn't imply `Send`.
For example: `MutexGuard` is not `Send`, but it is `Sync`.
It's important to note that `Sync` doesn't imply `Send`.\
For example: `MutexGuard` is not `Send`, but it is `Sync`.

It isn't `Send` because the lock must be released on the same thread that acquired it; therefore we don't
want `MutexGuard` to be dropped on a different thread.
It isn't `Send` because the lock must be released on the same thread that acquired it; therefore we don't
want `MutexGuard` to be dropped on a different thread.\
But it is `Sync`, because giving a `&MutexGuard` to another thread has no impact on where the lock is released.

## `Send` doesn't imply `Sync`

The opposite is also true: `Send` doesn't imply `Sync`.
For example: `RefCell<T>` is `Send` (if `T` is `Send`), but it is not `Sync`.
The opposite is also true: `Send` doesn't imply `Sync`.\
For example: `RefCell<T>` is `Send` (if `T` is `Send`), but it is not `Sync`.

`RefCell<T>` performs runtime borrow checking, but the counters it uses to track borrows are not thread-safe.
Therefore, having multiple threads holding a `&RefCell` would lead to a data race, with potentially
multiple threads obtaining mutable references to the same data. Hence `RefCell` is not `Sync`.
multiple threads obtaining mutable references to the same data. Hence `RefCell` is not `Sync`.\
`Send` is fine, instead, because when we send a `RefCell` to another thread we're not
leaving behind any references to the data it contains, hence no risk of concurrent mutable access.
leaving behind any references to the data it contains, hence no risk of concurrent mutable access.
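The `Send`-but-not-`Sync` half can be demonstrated directly: we can *move* a `RefCell` into another thread (no references stay behind) and use it there. Sharing a `&RefCell` across threads, by contrast, would be rejected at compile time:

```rust
use std::cell::RefCell;
use std::thread;

fn main() {
    let cell = RefCell::new(0_u32);

    let handle = thread::spawn(move || {
        // The cell now lives entirely on this thread, so runtime
        // borrow checking never races with anyone.
        *cell.borrow_mut() += 1;
        cell.into_inner()
    });

    assert_eq!(handle.join().unwrap(), 1);
}
```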