reqwest
In this chapter, we will finally get around to using the reqwest crate. As you read through this chapter, you’ll soon see why we didn’t learn it until now: it’s because the reqwest crate is the first one we have encountered that involves async Rust! Well, sort of. Read on to find out.
While we’re at it, we’ll also learn about feature flags, which let you bring in just part of an external crate and thereby help keep compilation time down.
Back in chapter 17, we had a code sample that included a Client (http://mng.bz/mjv4) from the reqwest crate in one of our structs. We didn’t use it at the time because (among other reasons) the Rust Playground doesn’t allow you to make HTTP requests. The code looked like this:
use reqwest::Client;

struct Logger {
    logs: Vec<Log>,
    url: String,
    client: Client,
}
Let’s simplify this even more by removing the Logger struct and just creating a Client:
use reqwest::Client;

fn main() {
    let client = Client::default();
}
That was easy. So how do we use it? We can use our Client to .post() data, .get() it, .delete() it, and so on. The easiest method to use is .get(). With this, we can ask a server to give us the HTML for a website or a response in a form like JSON. The .get() method is pretty simple:
pub fn get<U: IntoUrl>(&self, url: U) -> RequestBuilder
This IntoUrl trait is one that the reqwest crate made, not the standard library, so you don’t have to remember it. But you can guess from the name that IntoUrl means anything that can become a URL, and it’s implemented for both &str and String. In other words, we can use .get() and stick a website URL inside. The .get() method gives us a RequestBuilder, which is a struct that has a lot of configuration methods like .timeout(), .body(), .headers(), and so on. But one of them is called .send(), and since we don’t need to configure anything in particular to use it, that’s the one we want.
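Just to see what those configuration methods look like, here is a minimal sketch that builds, but doesn’t send, a request. The .timeout() and .header() calls are real RequestBuilder methods, but the values here are made up for illustration.

use std::time::Duration;
use reqwest::Client;

fn main() {
    let client = Client::default();
    // Build (but don't yet send) a request with a couple of configuration
    // methods. The underscore keeps the compiler from warning about an
    // unused variable.
    let _request = client
        .get("https://www.rust-lang.org")
        .timeout(Duration::from_secs(5))
        .header("User-Agent", "my-test-app");
}

We don’t need any of that for our simple request, though, so let’s just call .send():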
use reqwest::Client;

fn main() {
    let client = Client::default();
    client.get("https://www.rust-lang.org").send().unwrap();
}
Surprisingly, we get a cryptic error!
no method named `unwrap` found for opaque type `impl Future<Output = Result<Response, reqwest::Error>>` in the current scope
 --> src\main.rs:5:52
  |
5 |     client.get("https://www.rust-lang.org").send().unwrap();
  |                                                     ^^^^^^ method not found in `impl Future<Output = Result<Response, reqwest::Error>>`
  |
help: consider `await`ing on the `Future` and calling the method on its `Output`
  |
5 |     client.get("https://www.rust-lang.org").send().await.unwrap();
  |                                                    ++++++
It seems to be returning a type called impl Future<Output = Result<Response, reqwest::Error>>! The Future trait is used in async Rust, which we haven’t learned yet. We’ll learn about this return type in the next section and see what Future and async mean. But in the meantime, let’s go back to the main page of reqwest and see if it can help. On the page, we see the following information:
The reqwest::Client is asynchronous. For applications wishing to only make a few HTTP requests, the reqwest::blocking API may be more convenient.
Okay, so it looks like there is a so-called “blocking” Client that isn’t async. We still have no idea what async is, but the documentation suggests a blocking Client as an option, so we’ll go with that. The blocking Client can be found at reqwest::blocking::Client, so we’ll give it a try.

However, the message here has given us a hint about what async is because we have seen the word blocking in places like the .lock() method for Mutex, which “acquires a mutex, blocking the current thread until it is able to do so” (http://mng.bz/5o7a). So it’s reasonable to assume that blocking means blocking the current thread. And if regular Rust is blocking (operations block the thread until they are done), then async Rust must be non-blocking (they don’t block the thread). But more on that later. Let’s try the blocking Client:
fn main() {
    let client = reqwest::blocking::Client::default();
    client.get("https://www.rust-lang.org").send();
}
error[E0433]: failed to resolve: could not find `blocking` in `reqwest`
 --> src\main.rs:2:37
  |
2 |     let client = reqwest::blocking::Client::default();
  |                                     ^^^^^^ not found in `reqwest::blocking`
  |
help: consider importing this struct
  |
1 | use reqwest::Client;
  |
help: if you import `Client`, refer to it directly
  |
2 -     let client = reqwest::blocking::Client::default();
2 +     let client = Client::default();
  |
Now, this is certainly odd. The blocking Client is right there in the documentation (http://mng.bz/5oda), clear as day. But why can’t the compiler find it? To find out, we’ll take a very short detour and learn what feature flags are.
Rust code can sometimes take a while to compile. To reduce compile times as much as possible, a lot of crates use something called feature flags, which let you compile just part of the crate. Crates that use feature flags have some code enabled by default, and if you want more functionality, you have to enable the extra features inside Cargo.toml.
We didn’t need to do this in the Playground because the Playground has all features enabled for every crate. But in our own projects, we don’t want to spend time compiling things we won’t use and must be more selective when it comes to which features we want to enable.
This is where the problem came up in the previous section: as far as Rust is concerned, if a feature flag isn’t enabled, the code doesn’t exist. When we tried to create a blocking Client, there simply wasn’t any code for the compiler to look at, which is why there was no nice error message suggesting that we enable the feature flag. For the compiler to give a nice error message, it would first need to pull in the code, and pulling in the code would increase compile time, which nobody wants. The end result is that Rust users sometimes need to look at the source code directly to see whether a feature is hidden behind a feature flag.
Let’s try using the command cargo add reqwest again. This command adds the reqwest crate but also shows which features are enabled, which is particularly useful here. The features that are enabled by default have a + to the left, and those that aren’t enabled have a - instead. One of them is called blocking:
    Adding reqwest v0.11.18 to dependencies.
             Features:
             + __tls
             + default-tls
             + hyper-tls
             + native-tls-crate
             + tokio-native-tls
             - __internal_proxy_sys_no_cache
             - __rustls
             - async-compression
             - blocking
             - brotli
             - cookie_crate
             - cookie_store
             - cookies
             - deflate
             - gzip
             - hyper-rustls
             - json
             - mime_guess
             - multipart
             - native-tls
             - native-tls-alpn
             - native-tls-vendored
             - proc-macro-hack
             - rustls
             - rustls-native-certs
             - rustls-pemfile
             - rustls-tls
             - rustls-tls-manual-roots
             - rustls-tls-native-roots
             - rustls-tls-webpki-roots
             - serde_json
             - socks
             - stream
             - tokio-rustls
             - tokio-socks
             - tokio-util
             - trust-dns
             - trust-dns-resolver
             - webpki-roots
Now you can see why most features aren’t enabled by default. All we want to do is make a simple HTTP request, and we certainly don’t want to bring in code for cookies, gzip, cookie_store, socks, and so on.
To see feature flags in the documentation, click on the Feature Flags button on the top near the center. The page begins as follows:
reqwest

This version has 42 feature flags, 5 of them enabled by default.

default:
  default-tls

default-tls:
  hyper-tls
  native-tls-crate
  __tls
  tokio-native-tls

... (and many others)
It has a flag called default-tls that enables four other flags. Fine, but how do we get the blocking Client? With cargo add, it’s pretty easy: change cargo add reqwest to cargo add reqwest --features blocking, and now it will be there. Or, inside Cargo.toml, you can manually change

reqwest = "0.11.22"

to

reqwest = { version = "0.11.22", features = ["blocking"] }
Besides looking at the documentation, you can also find out whether a feature is behind a feature flag by looking through the source code for the attribute #[cfg(feature = "feature_name")]. You’ll usually find this in a crate’s lib.rs file where the module declarations are. A sample from the reqwest crate (http://mng.bz/vPy4) shows the exact location where the blocking feature is being hidden behind a feature flag:
async_impl; cfg(feature = "blocking")] mod blocking; connect; cfg(feature = "cookies")] mod cookie; mod dns; proxy; mod redirect; cfg(feature = "__tls")] mod tls; util;
In short, if Rust can’t find something, check to see whether there’s a feature flag for it.
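Feature flags aren’t only for big crates like reqwest, either: you can hide your own code behind them in exactly the same way. Here is a minimal sketch, with a made-up feature name called logging:

// In Cargo.toml, your own crate would declare the feature:
//
// [features]
// logging = []

// This function only exists when the crate is built with that feature
// enabled, for example with `cargo build --features logging`.
#[cfg(feature = "logging")]
pub fn log(message: &str) {
    println!("LOG: {message}");
}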
Armed with this knowledge, we can get back to the blocking Client. With the feature enabled, this code no longer gives an error:
fn main() {
    let client = reqwest::blocking::Client::default();
    client.get("https://www.rust-lang.org").send();
}
The compiler warns us that there is a Result we haven’t used. We’ll just unwrap for now. That gives us a struct called a Response—the response to our .get(). The Response struct (http://mng.bz/6n8A) has its own methods, too, like .status(), .content_length(), and so on, but the one we are interested in is .text(): it gives a Result<String>. Let’s unwrap that and print it out:
fn main() {
    let client = reqwest::blocking::Client::default();
    let response = client.get("https://www.rust-lang.org").send().unwrap();
    println!("{}", response.text().unwrap());
}
Success! Our output starts with this:
<!doctype html>
<html lang="en-US">
<head>
<meta charset="utf-8">
<title>
Rust Programming Language
</title>
<meta name="viewport" content="width=device-width,initial-scale=1.0">
<meta name="description" content="A language empowering everyone to
➥build reliable and efficient software.">
And much, much more. It gave us the text of the whole home page.
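The other Response methods work the same way. Here is a quick sketch that checks the status and content length before taking the text (still just unwrapping for brevity):

fn main() {
    let client = reqwest::blocking::Client::default();
    let response = client.get("https://www.rust-lang.org").send().unwrap();
    // A few of the other methods on Response, called before .text()
    // because .text() consumes the Response.
    println!("Status: {}", response.status());
    println!("Content length: {:?}", response.content_length());
    println!("{}", response.text().unwrap());
}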
If you are using reqwest, you probably already know what you want to use it for, so take a look around the documentation to see what fits your needs. If you want to post something in JSON format, for example, you can use a method called .json() (http://mng.bz/orQp). At least here, it lets us know that it is behind a feature flag:
Available on crate feature json only.
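With the blocking and json features enabled, plus the serde_json crate for its json! macro, posting some JSON might look like the following sketch. The URL here is httpbin.org, a public testing service, used purely for illustration:

use serde_json::json;

fn main() {
    let client = reqwest::blocking::Client::default();
    // .json() sets the request body and the Content-Type header for us.
    let response = client
        .post("https://httpbin.org/post")
        .json(&json!({ "name": "Billy", "age": 10 }))
        .send()
        .unwrap();
    println!("{}", response.text().unwrap());
}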
So, that was reqwest, or at least part of it. However, the Client in reqwest is async by default, so it looks like it’s time to learn what async is about.
We saw that regular Rust code will block the thread it is in while waiting. Async Rust is the opposite: it doesn’t block. The reqwest crate is the perfect example of why async Rust is often used: what if you send a get or a post that takes a long time? Rust code is extremely fast, but if you have to wait around for a server somewhere to respond, you aren’t getting the full benefit of that speed. One of the solutions is async: letting other parts of the code take care of other tasks while you wait. Let’s see how this is done.
Async Rust is possible through a trait called Future. (Some languages have something similar and call it a “promise,” but the underlying structure is different.) The Future trait is well named, as it refers to a value that will be available at some time in the future. The “future” might be 1 microsecond away (in other words, basically instantaneous), or it might be 10 seconds away.
The Future trait is interesting because the Poll type you get when polling one looks sort of like Option. If a Future is Ready, it will have a value inside, and if it’s still Pending (not ready), there will naturally be no value to access:
pub enum Poll<T> {
    Ready(T),
    Pending,
}
Here is the signature for the trait:
pub trait Future {
    type Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
Pin is used to pin the memory in place, the reasons for which are explained quite well in the book Asynchronous Programming in Rust (http://mng.bz/n1m2). But a deep understanding of Pin isn’t necessary to use async in Rust, so feel free to ignore it for the time being unless you are really curious.
What is important is that there is an associated type called Output and that the main method is called poll—in other words, you poll a Future to check whether it’s ready. We’ll look at poll in more detail shortly.
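To make the shape of poll a little more concrete, here is a minimal hand-written sketch of a type that implements Future and is ready right away. (The name GiveEight is made up, and you almost never write this yourself—async fn does it for you.)

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A Future that is ready the very first time it is polled.
struct GiveEight;

impl Future for GiveEight {
    type Output = u8;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // A future that wasn't finished yet would return Poll::Pending here
        // and arrange to be polled again later.
        Poll::Ready(8)
    }
}

fn main() {
    // We can't actually run this future yet: that needs .await and an
    // async run time, both of which we are about to learn.
    let _future = GiveEight;
}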
The first big difference you’ll notice in async is that functions begin with async fn instead of fn. Interestingly, though, the return types look the same!
fn give_8() -> u8 {
    8
}

async fn async_give_8() -> u8 {
    8
}
Both functions return a u8 but in different ways. The fn function returns one right away, but the async fn returns something that will be a u8 when it’s done. Maybe it’ll be done right away, or maybe it won’t. And because it’s async, if it’s not done yet, your code can do other work as it waits.
Rust is actually hiding something here. Our async fn async_give_8() -> u8 is not returning just a u8. Let’s use our trusty method to see the true type by making the compiler mad via a method that doesn’t exist:
async fn async_give_8() -> u8 {
    8
}

fn main() {
    let y = async_give_8();  ①
    y.thoethoe();            ②
}
① Gets the output from async_give_8
② Makes up a method that doesn’t exist to see the error
error[E0599]: no method named `thoethoe` found for opaque type `impl Future<Output = u8>` in the current scope
  --> src/main.rs:12:7
   |
12 |     y.thoethoe();
   |       ^^^^^^^^ method not found in `impl Future<Output = u8>`
So there’s the type. It’s not a u8; it’s an impl Future<Output = u8>! That’s the actual type signature that Rust hides from us. The makers of async Rust decided that this would be better than making people type impl Future<Output = u8> all the time.
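In fact, you can write that hidden return type out yourself if you want to. Here is a minimal sketch (also_gives_8 is a made-up name) showing that an async fn is roughly the same as a regular fn that returns impl Future built from an async block:

use std::future::Future;

// What we write:
async fn async_give_8() -> u8 {
    8
}

// Roughly what the compiler sees: a regular fn that returns a Future,
// made here with an async block.
fn also_gives_8() -> impl Future<Output = u8> {
    async { 8 }
}

fn main() {
    // Neither call has produced a u8 yet - all we have is two Futures.
    let _first = async_give_8();
    let _second = also_gives_8();
}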
Now comes the poll method. Poll means to ask whether a Future is ready and, if it’s not ready, to come back later to check again. The main way to poll a future in Rust is by adding the .await keyword, which gets the run time to handle the polling. (More on what an async run time is in the next section.) And every time a future is polled, it will return one of two things:
This is the part that looks like Option:

Option has None if there’s nothing, while poll has Pending if there’s nothing yet. None isn’t holding a value, and neither is Pending.

Option has Some(T) if there’s something, while poll has Ready(T) if the Future is ready. Some holds a value, and so does Ready.
Okay, let’s give it a try. We’ll add .await to try to turn this impl Future<Output = u8> into an actual u8. There’s no complex code inside the function, so the poll should resolve right away:
async fn async_give_8() -> u8 {
    8
}

fn main() {
    let some_number = async_give_8().await;
}
It doesn’t work yet! This is why:
error[E0728]: `await` is only allowed inside `async` functions and blocks
 --> src/main.rs:6:37
  |
5 | fn main() {
  |    ---- this is not `async`
6 |     let some_number = async_give_8().await;
  |                                     ^^^^^^ only allowed inside `async` functions and blocks
Ah, so .await can only be used inside a function or block that has the async keyword. And since we are trying to use .await in main, which is a function, main should be an async fn, too. Let’s try it again. Change fn main() to async fn main():
error[E0752]: `main` function is not allowed to be `async`
 --> src/main.rs:5:1
  |
5 | async fn main() {
  | ^^^^^^^^^^^^^^^ `main` function is not allowed to be `async`
On second thought, this sort of makes sense because main can only return a (), a Result, or an ExitCode (http://mng.bz/n152). But an async fn returns a Future, which is not one of those three return types. Plus, if main returned a Future, wouldn’t that mean that something else would have to call .await on that Future? Where does it end?
On top of this, remember how .await polls a future and then comes back later to ask again if it’s not ready yet? Who decides this? The answer to both of these questions is that you need an async run time, something that takes care of all of this. Rust doesn’t have an official async run time, but as of 2023, almost everything uses a crate called Tokio (https://tokio.rs/), so it can be thought of as Rust’s de facto default async run time.
After all this explaining, fortunately, the solution is quite simple: you can make main into an async main through Tokio by adding #[tokio::main] above it. Do this, and the code will work:
use tokio;
async fn async_give_8() -> u8 {
8
}
#[tokio::main] ①
async fn main() {
let some_number = async_give_8().await;
}
① The Playground enables all feature flags automatically, so this code will run there as is. On your computer, you need to enable two feature flags: "macros" to bring in the macro above main and "rt-multi-thread" to enable Tokio’s multithreaded run time. All together, adding this to Cargo.toml will make the code compile: tokio = { version = "1.35.0", features = ["macros", "rt-multi-thread"] }.
Now some_number ends up as a regular u8, and the program finishes.
So how does async suddenly, magically work without needing to poll main? Tokio does this by invisibly making a scope inside main where it does all of its async work. After it’s done, it exits and goes back into the regular main function, and the program exits. It’s sort of a fake async main, but for our purposes it’s real.
In fact, we can see this in the Playground by clicking on Tools > Expand Macros. Let’s see what this async fn main() actually is! We’ll use almost the same code but add an extra .await and print out the result:
use tokio;

async fn async_give_8() -> u8 {
    8
}

#[tokio::main]
async fn main() {
    let some_number = async_give_8().await;
    let second_number = async_give_8().await;
    println!("{some_number}, {second_number}");
}
Here is the expanded code (with unrelated parts removed):
use tokio;

async fn async_give_8() -> u8 {
    8
}

fn main() {                                                    ①
    let body = async {                                         ②
        let some_number = async_give_8().await;
        let second_number = async_give_8().await;
        {
            ::std::io::_print(format_args!("{0}, {1}\n", some_number, second_number));
        };
    };
    {
        return tokio::runtime::Builder::new_multi_thread()     ③
            .enable_all()
            .build()
            .expect("Failed building the Runtime")
            .block_on(body);                                   ④
    }
}
① Look here—async fn is a lie! It’s actually just a regular fn main(). As far as Rust is concerned, the main() function is not async at all.
② First, everything gets enclosed inside a big async block called body. The .await keyword can be used inside here.
③ Now the Tokio run time starts. It uses the builder pattern to set some configuration.
④ And, finally, the part that matters: a method called block_on(). Tokio is actually just blocking until everything has been resolved!
So, at the end of the day, an async fn main() is just a regular fn main() that Tokio manages by blocking until everything inside has run to completion. And when it’s done, it returns whatever the output of the async block is, and fn main(), along with the entire program, is also done.
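To prove there is no magic here, you can write the same thing yourself instead of using the attribute. A minimal sketch (it uses the same "rt-multi-thread" feature as before, although "macros" isn’t needed here):

async fn async_give_8() -> u8 {
    8
}

fn main() {
    // Doing by hand what #[tokio::main] generates for us.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("Failed building the Runtime");
    let some_number = runtime.block_on(async { async_give_8().await });
    println!("{some_number}");
}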
These are the main points when getting started with async:

You need to be inside an async fn or an async block to use the .await keyword.

Type .await to turn output into a concrete type again. (You don’t need to manually use the poll method.)

You need a run time to manage the polling, which usually means adding #[tokio::main].
Regular functions can’t await async functions, so a regular function that needs to call an async function will have to become async itself. Once you start to use async, you’ll see a lot of your other functions becoming async as well (see the sketch after this list).
async functions can call regular functions. This is usually no problem, but remember that regular functions will block the thread until they are done.
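Here is a minimal sketch of how async spreads through your code, with made-up function names. A regular function can’t .await, so anything that needs the output of an async function tends to become async itself:

async fn fetch_number() -> u8 {
    8
}

// This would NOT compile: `.await` is only allowed inside `async` functions.
// fn double_number() -> u8 {
//     fetch_number().await * 2
// }

// So the caller ends up becoming async, too.
async fn double_number() -> u8 {
    fetch_number().await * 2
}

#[tokio::main]
async fn main() {
    println!("{}", double_number().await);
}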
Knowing this, let’s try reqwest again. This time, we are finally using the default Client, which is async. It’s now pretty easy:
use reqwest;
use tokio;

#[tokio::main]
async fn main() {
    let client = reqwest::Client::default();
    let response = client
        .get("https://www.rust-lang.org")
        .send()
        .await
        .unwrap();
    println!("{}", response.text().await.unwrap());
}
See the difference? Each async function has an .await after it. And here we are just unwrapping, but in real code, you would want to handle errors properly, which usually means using the ? operator. That’s why you see .await? everywhere in async code.
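For example, here is a sketch of the same request written the way you would more often see it in real code: main returns a Result, so every fallible .await gets a ? instead of an .unwrap().

use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = Client::default();
    let text = client
        .get("https://www.rust-lang.org")
        .send()
        .await?
        .text()
        .await?;
    println!("Got {} bytes of HTML", text.len());
    Ok(())
}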
You might have noticed that we still haven’t used async Rust in a very async way just yet. So far, our code has just used .await to resolve values before moving on to the next line. Technically, this isn’t a problem, as the code still compiles and works just fine. But to take advantage of async Rust, we’ll need to set up our code to poll many futures at the same time. One of the ways to do this is by using the join! macro.
First, let’s look at an example that doesn’t use this macro. We’ll make a function that uses rand to wait a bit and then return a u8. Inside tokio is an async function called sleep() that results in a non-blocking sleep—in this case, between 1 and 99 milliseconds. (We’ll learn about sleep() and Duration in the next section.) After the sleep is over, it gives the number. Then we’ll get three numbers and see what order we get them in:
use std::time::Duration;
use rand::*;
use tokio::time::sleep; ①
async fn wait_and_give_u8(num: u8) -> u8 {
let mut rng = rand::thread_rng();
let wait_time = rng.gen_range(1..100);
sleep(Duration::from_millis(wait_time)).await;
println!("Got a number! {num}");
num
}
#[tokio::main]
async fn main() {
let num1 = wait_and_give_u8(1).await;
let num2 = wait_and_give_u8(2).await;
let num3 = wait_and_give_u8(3).await;
println!("{num1}, {num2}, {num3}");
}
① This function is behind another feature flag called "time," so add that to Cargo.toml if you are running this code on your computer.
When you run this, it will always be the same:
Got a number! 1
Got a number! 2
Got a number! 3
1, 2, 3
So we await one value, get it, and then call the next function, await it, and so on. It will always be 1, then 2, and then 3.
Now, let’s change it a bit by joining them. Instead of calling .await on each one, we’ll use the join! macro, which will poll them all at the same time. Change the code to this:
use std::time::Duration;
use rand::*;
use tokio::join;

async fn wait_and_give_u8(num: u8) -> u8 {
    let mut rng = rand::thread_rng();
    let wait_time = rng.gen_range(1..100);
    tokio::time::sleep(Duration::from_millis(wait_time)).await;
    println!("Got a number! {num}");
    num
}

#[tokio::main]
async fn main() {
    let nums = join!(
        wait_and_give_u8(1),
        wait_and_give_u8(2),
        wait_and_give_u8(3)
    );
    println!("{nums:?}");
}
Here, too, the numbers (inside the nums variable) will always be (1, 2, 3), but the println! shows us that it is now polling in an async way. Sometimes it will print this:
Got a number! 1
Got a number! 2
Got a number! 3
(1, 2, 3)
But other times, it might print this:
Got a number! 1
Got a number! 3
Got a number! 2
(1, 2, 3)
That’s because each function waits for a random length of time, and one might finish before another. As soon as each one finishes, it prints out its number, and once all of them are done, the polling is done. So join! is what you want to use to get as much speed out of your async code as possible.
As you use async code, you might want to do more than just use .await and the join! macro. For example, what if you have multiple functions that you want to poll at the same time and just take the first one that finishes? You can do that with a macro called select!. This macro uses its own syntax that looks like this:
name_of_variable = future => handle_variable
In other words, you first assign a name to the future you are polling and then add a => and decide what to do with the output. This is particularly useful when polling futures that don’t return the same type because you can modify the output to return the same type, which will allow the code to compile.

This is best understood with an example. Here, we will poll four futures at the same time. Three of them sleep for very similar lengths of time, so the output will differ depending on which one finishes first. The fourth future has no name and simply returns after 100 milliseconds have passed, indicating a timeout. Try changing the sleep times (lowering the timeout duration, for example) to see different results:
use std::time::Duration;
use tokio::{select, time::sleep};

async fn sleep_then_string(sleep_time: u64) -> String {    ①
    sleep(Duration::from_millis(sleep_time)).await;
    format!("Slept for {sleep_time} millis!")
}

async fn sleep_then_num(sleep_time: u64) -> u64 {    ②
    sleep(Duration::from_millis(sleep_time)).await;
    sleep_time
}

#[tokio::main]
async fn main() {
    let num = select!(    ③
        first = sleep_then_string(10) => first,
        second = sleep_then_string(11) => second,
        third = sleep_then_num(12) => format!("Slept for {third} millis!"),    ④
        _ = sleep(Duration::from_millis(100)) => format!("Timed out after 100 millis!")    ⑤
    );
    println!("{num}");
}
① This async function sleeps and returns a String.
② But this async function sleeps and returns a u64.
③ The first three futures in this select! sleep for almost the same length of time, so it’s not certain which one will return first.
④ The variable num has to be a String, so we can’t just pass on the variable third here. But with a quick format!, it is now a String, too.
⑤ Finally, we’ll add a timeout to the select. If none of the first three futures return before 100 milliseconds have passed, the select will finish with a timeout message.
There are many other similar macros, such as try_join!, which joins unless one of the futures fails, in which case it returns an Err. Here is a quick example of the try_join! macro:
use tokio::try_join;

async fn wait_then_u8(num: u8, worked: bool) -> Result<u8, &'static str> {
    if worked {
        Ok(num)
    } else {
        Err("Oops, didn't work")
    }
}

#[tokio::main]
async fn main() {
    let failed_join = try_join!(
        wait_then_u8(1, true),
        wait_then_u8(2, false),
        wait_then_u8(3, true)
    );
    let successful_join = try_join!(
        wait_then_u8(1, true),
        wait_then_u8(2, true),
        wait_then_u8(3, true)
    );
    println!("{failed_join:?}");
    println!("{successful_join:?}");
}
Err("Oops, didn't work") Ok((1, 2, 3))
Async is a large subject in Rust, but hopefully this has made it less mysterious. The async ecosystem in Rust is still somewhat new, so a lot of it takes place in external crates (the main one is the futures crate; https://docs.rs/futures/latest/futures/). The futures_concurrency crate (http://mng.bz/or6p) is another convenient crate that contains traits to deal with joining, chaining, merging, zipping, and other such methods on futures. And, of course, Tokio (https://docs.rs/tokio/latest/tokio/index.html) is filled to the brim with ways to work with async code.
Much of the async ecosystem is slowly moving into the standard library. For example, the Stream trait in the futures crate showed up as an experimental AsyncIterator trait in the standard library in 2022 (http://mng.bz/6nBA). Another example is the async_trait crate (https://docs.rs/async-trait/latest/async_trait/), which contains a macro that allows traits to be async. This crate was needed because async traits simply weren’t possible until Rust 1.75, which was released just a few days before the end of 2023. Because it was the only way to make async traits before version 1.75, you will still see the async_trait crate in a lot of code. So, by the time you read this book, some of the macros or traits inside the async external crates might be in the standard library!
With this introduction to async Rust out of the way, we are going to relax a bit by spending the next two chapters on a quick tour of the standard library. There are a lot of modules and types in there that we haven’t come across yet, plus more methods and internal details about types we already know.
If the compiler can’t find a type for no good reason, check to see whether you need a feature flag to enable it.
The most important thing to remember about async is that it doesn’t block threads. Regular functions block them.
An async function just returns a Future, which doesn’t do anything on its own. You have to .await it to get some actual usable output.

There are many ways of working with multiple futures. You can join! them together, use select! to race them against each other and take the first that completes, and so on.

Much of this functionality in the async ecosystem is found in external crates. These often work as staging grounds for testing out new functionality to stabilize and add to the standard library.