Hands-On Concurrency with Rust: Confidently build memory-safe, parallel, and efficient software in Rust


"Make it work, then make it beautiful, then if you really, really have to, make it fast."

- Joe Armstrong

In the previous chapter, we discussed the basics of modern computer architectures—the CPU and its function, memory hierarchies, and their interplay. We left off with a brief introduction to debugging and performance analysis of Rust programs. In this chapter, we'll continue that discussion, digging into the performance characteristics of sequential...

This chapter requires a working Rust installation. The details of verifying your installation are covered in Chapter 1, Preliminaries – Machine Architecture and Getting Started with Rust. The Valgrind suite of tools will be used here; many operating systems bundle Valgrind packages, but you can find further installation instructions for your system at valgrind.org. Linux perf is also used and is bundled by many Linux distributions. Any other software required for this chapter is installed as part of the text.
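A quick way to confirm the tooling is in place before going further is to check that each binary reports a version. These commands are only a sanity check, not the book's own instructions; exact package names and install steps vary by system:

> rustc --version
> cargo --version
> valgrind --version
> perf --version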

You can find the source code for this book's projects on GitHub: https://github.com/PacktPublishing/Hands-On-Concurrency-with-Rust. The source code for this chapter is under Chapter02.

The hard truth is that there's a diminishing return when applying more and more concurrent computational resources to a problem. Performing parallel computations implies some coordination overhead—spawning new threads, chunking data, and memory bus issues in the presence of barriers or fences, depending on your CPU. Parallel computing is not free. Consider this Hello, world! program:
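The usual way to make this diminishing return precise is Amdahl's law, which the excerpt does not name but which is the standard reference point: if a fraction p of a program's work can be parallelized across N workers, the best possible speedup is

    S(N) = 1 / ((1 - p) + p / N)

which approaches 1 / (1 - p) as N grows, no matter how much hardware you throw at the problem. The coordination overhead described above only eats further into that bound.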

fn main() {
    println!("GREETINGS, HUMANS");
}

Straightforward enough, yeah? Compile and run it 100 times:

hello_worlds > rustc -C opt-level=3 sequential_hello_world.rs
hello_worlds > time for i in {1..100}; do ./sequential_hello_world > /dev/null; done

real    0m0.091s
user    0m0.004s
sys     0m0.012s

Now, consider basically the same program but involving the overhead of spawning a thread:

use std::thread;

fn main() { ...
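The excerpt cuts off here. For context, a minimal sketch of what such a threaded variant might look like follows; it is a reconstruction for illustration, not necessarily the book's exact listing:

use std::thread;

fn main() {
    // Spawn a child thread to do the printing, then wait for it to finish.
    thread::spawn(|| {
        println!("GREETINGS, HUMANS");
    })
    .join()
    .unwrap();
}

Timing 100 runs of this version in the same way exposes the extra per-process cost of creating and joining a thread on top of the earlier baseline.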

In this section, we'll focus on the serial performance of a common data structure—associative arrays. We'll apply the tools we learned about in the previous chapter to probe different implementations. We'll focus on the associative array because it is fairly well-trod territory, studied in introductory computer science courses, and is available in most higher-level languages by default, Rust being no exception save the higher-level bit. We'll look at Rust's associative array first, which is called std::collections::HashMap.
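Before poking at internals, it may help to fix in mind the interface whose costs we will be measuring. Here is a minimal sketch of the basic std::collections::HashMap operations; the key and value types are arbitrary choices for illustration, not anything the chapter prescribes:

use std::collections::HashMap;

fn main() {
    // An associative array mapping string keys to integer counts.
    let mut counts: HashMap<&str, u64> = HashMap::new();

    counts.insert("apple", 3);
    counts.insert("banana", 5);

    // Lookups return Option<&V>, since the key may be absent.
    if let Some(n) = counts.get("apple") {
        println!("apple -> {}", n);
    }

    // Update in place via the entry API.
    *counts.entry("apple").or_insert(0) += 1;

    counts.remove("banana");
    println!("total keys: {}", counts.len());
}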

Let's poke around in HashMap's internals. A good starting place, I find, for inspecting unfamiliar Rust data structures is...

What should we understand from all of this? To produce software that operates at the edge of the machine's ability, you must understand some important things. Firstly, if you aren't measuring your program, you're only guessing. Measuring runtime, as criterion does, is important but a coarse insight. Where is my program spending its time? is a question the Valgrind suite and perf can answer, but you've got to have benchmarks in place to contextualize your questions. Measuring and then validating behavior is also an important chunk of this work, which is why we spent so much time on QuickCheck and AFL. Secondly, have a goal in mind. In this chapter, we've made the speed of the standard library HashMap our goal but, in an actual code base, there are always going to be places to polish and improve. What matters is knowing what needs to happen to...
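As a concrete reminder of what having benchmarks in place looks like, here is a minimal criterion benchmark sketch for HashMap insertion. It assumes criterion is declared as a dev-dependency and that the file lives under benches/; it is an illustration, not the book's own benchmark code:

use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::collections::HashMap;

// Measure the cost of inserting 10,000 integer keys into a fresh HashMap.
fn bench_hashmap_insert(c: &mut Criterion) {
    c.bench_function("hashmap_insert_10k", |b| {
        b.iter(|| {
            let mut map = HashMap::new();
            for i in 0..10_000u64 {
                map.insert(black_box(i), black_box(i));
            }
            map
        })
    });
}

criterion_group!(benches, bench_hashmap_insert);
criterion_main!(benches);

Run it with cargo bench. The specific numbers matter less than having a stable baseline against which Valgrind and perf findings can be checked.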

In this chapter, we covered measuring and improving the performance of a serial Rust program while demonstrating the program's fitness for purpose. This is a huge area of work and there's a deep well of literature to pull from.


Key benefits

Description

Most programming languages can really complicate things, especially with regard to unsafe memory access. The burden on you, the programmer, lies across two domains: understanding the modern machine and your language's pain points. This book will teach you how to manage program performance on modern machines and build fast, memory-safe, and concurrent software in Rust. It starts with the fundamentals of Rust and discusses machine architecture concepts. You will be taken through ways to measure and improve the performance of Rust code systematically and how to write collections with confidence. You will learn about the Sync and Send traits applied to threads, and coordinate thread execution with locks, atomic primitives, data-parallelism, and more. The book will show you how to efficiently embed Rust in C++ code and explore the functionalities of various crates for multithreaded applications. It explores implementations in depth: you will learn how a mutex works and build several yourself. You will master the radically different approaches that exist in the ecosystem for structuring and managing high-scale systems. By the end of the book, you will feel comfortable designing safe, consistent, parallel, and high-performance applications in Rust.

Who is this book for?

This book is aimed at software engineers with a basic understanding of Rust who want to exploit the parallel and concurrent nature of modern computing environments, safely.

What you will learn


Publication date: May 31, 2018

Length: 462 pages

Edition: 1st

Language: English

ISBN-13: 9781788399975
