What's New in Rust 1.39
Jon Gjengset: Hello, Ben.
Ben Striegel: Hello, Jon. How are you doing today?
Jon: I’m doing pretty well. I’m excited that we’re finally finally doing this big episode of 1.39.
Ben: Yeah, it was so big, 1.39, that we were actually overawed. We just gazed upon the edifice, and it was so large that we couldn't do it. But now we have gathered up our courage and we're about to begin.
Jon: It’s true. And obviously, what we’re going to do when talking about 1.39 is, I’m really excited to talk about references to by-move bindings and match guards.
Ben: I mean, I actually want to talk about the borrow checker migration. It's really cool. These are pretty much the two biggest features of 1.39.
Jon: Yeah. I mean, there’s some, like, other minor things we might get to, but let’s start with the big stuff. Oh, yeah, we did find a typo in the release notes.
Ben: That’s true.
Jon: I promise you all we will get to async/await. But we’re going to leave that to the end of the episode, and we’re going to start out by going over the other interesting things that happened in 1.39, because there are some of those too.
Ben: By-move bindings, you mentioned?
Jon: Yeah. Okay, so this is something that is really annoying if you run into it. But you probably don't know about this problem if you haven't run into it. Specifically, the issue is if you match on a type, and in the pattern, you try to move something out of the match. So this could be something like, you match on an option or you match on a result. And when you pattern match on it, you try to take the value— move the value out of the Err variant— then Rust is going to move that value. If you had a guard that tried to use that value by reference—
Ben: Hold up, what’s a guard?
Jon: So a guard is when you say something like match r, where r is a Result, and you say, Ok(v) if v > 5.
Ben: So it’s— you have
if in your pattern.
Jon: Exactly, yeah. And specifically, previously if you moved in the pattern, but your guard had a reference, then the compiler would yell at you, and say you’re not allowed to do that. Even though it should be totally fine, right? Because you move the value— or, you check the guard before you move the value. And then you move the value if the guard matches. So it should be totally fine. Previously, the compiler didn’t understand this. Now it does.
Ben: I think the fact that you can use ifs in patterns is kind of also one of those more obscure features of Rust that's very useful if you know how to do it. So hopefully this gets more people doing it.
Jon: Oh, yeah. Match guards are— they're real handy when you need them. Sometimes they can just really clean up some, like, hairy pattern matching code.
Ben: Yeah. Um, what’s next? I see one about attributes being enabled on function parameters.
Jon: Yeah. So this is a fun one. This usually happens if you have a crate that needs to be able to run on multiple different platforms or targets. It can also happen in other circumstances. Imagine you have a low level library that operates on, like, operating system specific things. And so you want the argument to have one type on Windows, and one type on Linux, and one type on macOS. Previously, what you had to do is you had to have one instance of the function for each of those sort of configs. And then you had to replicate the entire signature and body of the function, only making the changes you needed.
Ben: And because it's kind of, you know, a little bit of boilerplate, people would probably often use macros for this. Before, you would just put an attribute on the function, and then have multiple copies of the function, each with different attributes for config'ing them on and off. Now you can just have whatever parameters need to be config'ed on and off in the one function, which should hopefully result in less duplication and maybe a few fewer macros lying around.
Jon: Yeah. One thing that’s important to note here, though, is— and we talked about this a little bit off line before we started— is that you need to be careful when you’re operating with configs this way. Because it can be tempting to do something like, only include this argument if this particular feature is enabled—
Ben: Feature in what way? It’s kind of overloaded in Rust world.
Jon: Feature in the sense of a crate-level feature. So for a crate, you can define optional features that might enable additional, well, features of your crate. They might enable additional dependencies, or performance optimizations, or additional APIs. And it might be tempting to do something like: this function takes three parameters, but if this feature is enabled, it takes four. And this is actually not okay to do. The reason for this is that Cargo requires features to be additive. What I mean by that is, say you have two crates A and B that both depend on some crate C. A depends on C with some feature enabled, and B depends on C with that feature disabled. If you have something that depends on both A and B, Cargo will only compile C once, and it will compile it with the feature enabled. If B didn't compile with the feature enabled, that would be a problem. So basically, anything that depends on a crate needs to still compile when all of that crate's features are turned on.
Ben: How dangerous is this, though? Like, it would mostly only cause a compiler error whenever you import the new dependency. And so it's bad to do as a library author, but mostly it's only unsafe in the sense that it will cause frustration for your users down the line.
Jon: It’s actually pretty bad, because— you’re right, that it causes frustration for users. But it causes frustration in a way that they can’t easily fix. Because it means that as a user, what you will see is, you have some dependency from before, you add some new dependency bar, and they happen to like, deep in their dependency graph, both depend on this library that isn’t using features correctly, and now suddenly your crate won’t compile, and you have no way of making it compile, because cargo will try to sort of build that deep dependency with all the features enabled. And one or the other of your dependencies just will not build.
Ben: So I’d say don’t go too crazy with configs. Try and use them sparingly, and only when you, kind of, need to, like, if you have to support different platforms that have fundamentally different types, or some kind of thing like that.
Jon: Exactly. Think of it as, you only want APIs to change based on things that are global to the program, features or not. Whereas whether you’re compiling on Windows or not is sort of global to the entire compilation process, so you can change APIs based on that. You just can’t change it based on sort of user configurable parameters.
Ben: Let's talk about the borrow checker migration. There were previously some warnings in the 2018 edition; now they have become hard errors. I think we talked about this before, in a previous episode, I think it was our episode on 1.36, and I'm not going to go into the entire thing. Basically, there is a new borrow checker that exposes some bugs in the old borrow checker, and during this current ongoing migration period, if the new borrow checker hits an error, it then runs the old borrow checker to see whether the flaw also exists in that one. And if it doesn't, it issues a warning, or at least it did until now. Now it's become a hard error. And so, basically, if you're using Rust 2018, you can no longer fall back to the old borrow checker.
Jon: So you said 2018 there. But what happens with the 2015 edition?
Ben: Because the 2015 edition is kind of designed for people who— like, obviously, if you're still on that edition, you may have trouble upgrading and you're more conservative. And so that edition will eventually get the same behavior, just because of how editions work and how it's all implemented under the hood. But I forget which release that will be in. Not sure if it's near or far. Do you recall?
Jon: I think it’s 1.40, actually. I think it’s going to be the next release.
Ben: OK. So this is kind of your last warning. If you have never tried compiling your code free of warnings in the past year or so, now is a good time. Make sure that there aren’t any bugs because otherwise you will be surprised come December 18th or so, whenever the next Rust release is.
Jon: But Ben, compiler warnings are just warnings. I don’t need to read them.
Ben: Yeah, I just, you know, I just deny— I allow all of them, who cares. If it was a problem, it would be a hard error, obviously.
Jon: Yeah, obviously. So I am curious. Do you know why they want to make this change, like, why not— they obviously don’t want to just switch to the new one, because then a bunch of code would break. But why not keep the old one around forever?
Ben: Well, so there's this editions system in Rust: there's a 2015 edition currently, and a 2018 edition. And contrary to what you might think, if you compile on, say, the 2015 edition, you aren't using some old version of the compiler. You're still using the same underlying version of the compiler that everyone else is using on the 2018 edition. So editions don't imply that you're on, like, a fork of the compiler. The only difference is in the front end. And so that means that if you don't want to include the code for the old borrow checker for the rest of eternity, you have to at some point remove support from 2015 as well. And so I'd say at this point, if it's on track to become a hard error in 1.40, I would assume that code has already been removed, and there are many tens of thousands of lines fewer in the compiler, due to no longer needing to support this old version of the borrow checker.
Jon: And probably a bunch of compiler developers that are very happy.
Ben: Throwing a party, confetti everywhere.
Jon: Exactly. So one other thing that happened in 1.39, and we've started seeing this as sort of a recurring theme, is that there are more const fns.
Ben: More things in the standard library have gone from just being normal fns to becoming const fns.
Jon: Yeah. Why is it exciting that things are const?
Ben: Just because— there are two kinds of contexts in Rust, things that happen at compile time and things that happen at run time, and anything that happens at compile time can also happen at run time. But the reverse isn't true. So just for being maximally useful to everyone, it is good for as many functions as possible, especially in the standard library, to be const. And so the ones that are const this time around: Vec::new, String::new, LinkedList::new. Now you can actually instantiate brand new empty strings and vectors and linked lists in const contexts. You also have str::as_bytes, and some absolute value math functions are now const. So the math ones are pretty easy.
Jon: So why isn’t everything just const?
Ben: So, very good question. It's because— think about how, at compile time, to compute, say, the absolute value of some number, the compiler is going to need to run some code. And that might involve running arbitrary Rust code, which means that you actually have to somehow more or less run Rust code while you're compiling Rust code. So, in fact, the Rust compiler contains an interpreter, or parts of one, called Miri, which is an interpreter for Rust's MIR intermediate representation. And the output of that is then kind of just inserted right into your binary. So that's how we can kind of coalesce these
const things into things that you can then use at runtime. So if you type
2 + 2, for example, Miri will just be like, hey, I can like, figure this out,
and then turn it into 4. And then at run time, you just get the value of 4.
There’s no actual addition in sight.
Jon: And you mentioned that there was an exciting new improvement coming to Miri soon.
Ben: Yeah, well, so the reason that we can’t just do this for everything at
once is because Miri is being developed over time. It’s been in development for
several years now, and in fact, a plug for what we’re currently doing in the
background: I was at Rust Fest talking to one of the developers of Miri,
oli_obk. And I will have an interview out soon regarding Miri and
const fn and
all sorts of fun things. So you should be on the lookout for that. And I was
noticing today, in fact, as we are recording this, there is a brand new PR that
makes it possible to use
match within constant expressions. Behind a
feature flag, so not being stabilized any time soon, but it’s coming along. So
this is a big step for Rust, in terms of— because once you have the ability to match, as you can imagine, there are plenty of functions that wish they were const but cannot be, because they currently contain an if. And ifs are pretty fundamental to a program, as you might imagine.
Jon: Exactly. I guess we can now imagine that over the next coming releases we’ll just see sort of another flurry of additional functions becoming const.
Ben: We've talked about this every single time; there's always some new const fns because of some improvement to Miri. Or in fact, I've even seen some PRs where it's like, actually, we just rewrote this standard library function based on what Miri supports, so now we can make it const.
Jon: Oh, that’s neat.
Ben: It’s pretty funny. This never involves any, kind of like, performance regressions. But sometimes it’s some weird— right now Miri, in stable, mostly supports bitwise operations, and so it sometimes involves weird bitwise things. But that’s not— usually not actually a problem, because if you’re removing branches from your code, it actually optimizes pretty well, so it might actually result in faster code. But you never know. It’s a trade off between fast code and readable code.
Jon: Yeah, that's right. Speaking of readable code, one additional, smaller thing that stabilized in 1.39 is two new methods on std::time::Instant, and those are checked_duration_since and saturating_duration_since.
Ben: And this means nothing to me, but Jon seemed very excited about this.
Jon: Yeah, so if you've ever tried to take an Instant and then subtract another Instant to get a Duration, you will have found that this panics if the result is negative, basically: if the first timestamp is smaller than the second timestamp, then your code just panics. And this can be really surprising, because sometimes you just sort of naturally assume that you can take the difference between two Instants and don't really think about it. It used to be that you had to do this sort of song and dance of: if timestamp 1 is greater than timestamp 2, then subtract 2 from 1, otherwise subtract the other way and, like, keep track of the sign. Whereas now, there's checked_duration_since, which just gives you a None if the subtraction would fail, and saturating_duration_since, which just gives you a zero Duration if one is too large to do the subtraction. It's just one of those nice little additions that makes code that deals with this just a tiny bit nicer.
Ben: There are a few other small things you wanted to note, I believe.
Jon: Yeah, so there are two things, actually. The first thing is the try! macro that we all sort of know and love, if you used Rust a long time ago. This was basically the precursor to the question mark operator. And previously everyone was super excited that we had the try! macro. And now everyone is really excited about the question mark operator, and try! is now— I don't want to say finally, but try! is now going away.
Ben: Well, it’s being deprecated, but it can’t be removed until an edition might remove it sometime in the future.
Jon: Exactly. But now, if you use
try! in your code in the 2018 edition,
it’s now going to issue you a warning.
Ben: On the plus side,
try! is so simple that you could just make your own
macro if you really miss it.
Jon: That is also true. It is a very straightforward macro.
I also wanted to talk a little bit about rustup. So rustup is the tool that many
of us use to manage our Rust installation, especially if you’re sort of
installing beta and nightly or different versions of nightly. Rustup recently
had its 1.20 release, and it came with two pretty cool features, and also
highlighted one feature that many people don’t know about. The two new features
are profiles, and updating to the latest compatible nightly. So profiles are—
the idea here is that you can tell rustup that whenever I install a new
toolchain, whether I install beta or a particular nightly, I want you to always
include the following components. So previously, it used to be that if you
wanted to, like, install a particular nightly, for example, you would get that
nightly, but you would get none of the components. So you then had to do like
rustup component add rust-analysis,
rustup component add clippy,
rustup component add rls. Now you can just do rustup set profile and then give it the name of a profile, either minimal, default, or complete.
And now, any time you install a new toolchain, rustup will automatically install
the necessary components.
This is also really handy in CI builds, where you want rustup to install as
little as possible. You don’t want it to like, download the entire standard
library documentation, for example, which can be pretty large. So there, in your
CI script, you might want to do
rustup set profile minimal so that if you
install any toolchains, they don’t— the CI doesn’t spend a bunch of time
downloading things it doesn’t need.
The other thing is this, updating to the latest compatible nightly. If you work
on nightly, you may have gotten annoyed in the past at running rustup update
and then it telling you "couldn't install latest nightly" because it's missing
the following component that you have installed. Well, with rustup 1.20, if you
rustup update, it will search backwards through time for the latest
nightly, that is newer than the one you have, and that has all of the components
that you have installed. And so now you should just always be able to run
rustup update, and it will update to the latest nightly you could possibly use.
The one additional feature that I don’t think people are aware of is that rustup
has this really handy feature called
rustup doc. So many of us may have run
cargo doc or
cargo doc --open, which opens the documentation,
builds and opens the documentation for the current crate. But a similar thing
exists for looking up things in the standard library. You can run rustup doc
followed by the name of any standard library type or function or macro, and it
will open locally the documentation for that function or method or type. This
means that even if you are offline and don't have an Internet connection, you
can use rustup doc to open the full standard library documentation in your
browser for whatever that type is. And this can be really handy for developing
on the go.
Ben: And how are we doing so far? Is that everything that isn’t async/await related? Is it time?
Jon: I think it might be time. I think it might be time, Ben.
Ben: I think again, the usual disclaimer is warranted, where async/await is one of those features where it's entirely possible that you just don't care about anything that's going on here. There are plenty of folks who just want, like, synchronous single threaded Rust, or code in that style, and to them this is kind of a non-issue. It's kind of a non-event. But for people who do want to do async stuff, this is a very, very big thing, which is why you probably see so much excitement about it in the wild. But it's entirely possible that you don't care about it. You can just tune out right now. We'll see you later.
Jon: Or you can just sort of tune out and just listen to the comforting sound of our voices.
Ben: The dulcet tones of Jon and I. In the meantime, what's first to talk about? So I believe that in the 1.36 podcast we talked about when the Future trait was stabilized.
Jon: Yes, I think that’s right.
Ben: And we talked about how to temper your expectations with how, even though the Future trait was stabilized in the standard library, that doesn't really mean that it's usable yet. In 1.36 the real idea was, Future was stabilized so that libraries could begin to update in preparation for async/await coming out. Now that it's out, does that mean that we're finally, finally ready to actually use it everywhere?
Jon: I really want to say yes.
Ben: “Want” being the key word there.
Jon: Exactly. So, first of all, we should be clear that async/await is a major achievement. Now that async/await has landed, it's going to make it so much nicer to work with any sort of asynchronous, futures-based code. The reason for this is, now you probably don't ever have to implement Future manually yourself. You generally will not have to think about things like pinning. In general, you're just going to write async blocks or async fns. You're going to await any futures that you're given, and things will just sort of work.
The reason why we say that the wait is not quite over is because there are still more things coming down the line that haven't quite landed yet, and are sort of needed for completeness. The two primary ones here are async closures, which currently you can't do; currently your options are async blocks and async functions. And the ability to do streams, and potentially sinks. We'll see what happens with that.
Ben: I would even add a third, too, and say async trait methods.
Jon: Yes, yes, asynchronous trait methods are also going to be a big thing. And that also gets us into, like, impl trait for associated types, which is also going to be a huge boon for the ecosystem.
Ben: But it’ll be a while, anyway. I cut you off with streams. Please go ahead.
Jon: So streams are basically the Future version of iterators, so they're asynchronous iterators. And currently you can work with those using async/await. Usually what you'll do is something like, you'll be given a stream and you will do, sort of, stream.next().await, and then you'll write your own loop around that, and that works just fine. But it would be nice if we had some better idiomatic language support for these things.
Ben: And it would be very nice if we had a Stream trait in the standard library, too, alongside Future.
Jon: Exactly. So that’s something that we don’t currently have, but it is in the works, and that is hopefully something that’s going to come, and that would also include the ability to write functions that are sort of like async functions. But that construct iterators, or future-aware iterators rather than just futures. These are often referred to as generators. Do you want to talk a little bit about what a generator looks like? What are they useful for?
Ben: Sure. And I want to also say here that we're, like, kind of maybe beginning to sound like we're pooh-poohing the idea of async/await. Actually, no, it's really cool, and it took a lot of work to get here. And what you might be wondering is, well, why? And so Jon mentioned generators. So, basically, before we get into it: think of async/await as not really doing much of anything that you couldn't do manually before with the Future trait from the standard library. Mostly what it does is, imagine if you write async fn foo that returns, say, I don't know, a String. You could have probably written that as just fn foo returning something that implements Future, or a Box<Future> of some kind, and the Future type would then output a String, or possibly some error, if you're using an older version of futures like I sadly am.
Ben: It's not time for that. Let's move on. So generators, looking ahead: pretty much the way it works is, a generator— say, maybe you're familiar from, like, Python, where you have the yield keyword— is a function that you can call, and then you can pause it in the middle of the function and then resume it again. So instead of, say, having one single return statement, you can kind of resume the function in the middle of its own execution over and over and over again. And so the idea of Rust's current async/await support is that it is built on generators internally. It would be nice to someday stabilize generators in the sense that you could use them without using async/await. For the moment, the only way to really access this feature is through async/await, and whenever you write an async block or an async function, basically you are creating a generator under the hood.
Jon: It's an interesting little, like, dirty secret that Rust already has generators. It's just that it doesn't have general purpose generators; it just has generators specifically for async functions and async blocks. So you don't get the yield keyword yourself, but you can sort of think of it as: whenever an async block or an async function awaits a future, if that future isn't ready yet, then the function yields. That's basically what's going on. And then it's going to resume from that yield whenever the future that you're awaiting can make progress.
Ben: Yeah, and generators are themselves a really fascinating topic, specifically with how they’re implemented. There are, at least I think, one or two really good blog posts by Tyler Mandry, if you just search for “Tyler Mandry Rust async/await”. He has a series of blog posts, and also again, to plug our upcoming RustFest interviews, Tyler was in Barcelona, and I got to talk to him about some of his generator work on the compiler, so we’ll see about that. Keep an eye out for that in the future. But in the meantime, what is there beyond generators to talk about with async/await? Is that kind of it? Is that just everything?
Jon: No, so I think there are a couple of other things we can talk about. And the first one is the difference between async functions, async blocks, and async closures.
Ben: Right, there are async blocks as well.
Jon: Yeah, so an async function, as we talked about, is essentially a function that can yield whenever it's awaiting a future that isn't ready yet. And then it's just going to resume when it can make progress. An async block is very similar, except that it's sort of like a closure— but a closure that you only run once. So an async block is a Future, and it's going to run top to bottom just once, and within that future, whenever you call await, it acts like a generator. So whenever you await a future in an async block, it basically yields within the future that's generated by that block.
Ben: And it's still a block, in the sense of a normal Rust block, with all the other things you'd expect: it still creates a new scope, it can still have expressions internally, you can assign the async block to a variable, that sort of thing. And so it's conceptually pretty simple.
Jon: In fact, you also have things like the move keyword. You can say that I want to move a variable into the scope, or if you don't say that, it's just going to borrow into that scope. So they really feel a lot like, sort of, a closure that you run immediately, or just a regular Rust scope with curly brackets. And so the question then becomes: what are async closures? So we mentioned that async closures are something we don't have in stable Rust yet. The reasons for that, like why they weren't stabilized with the others, are fairly subtle, so we're not going to go into them now. But async closures are basically async blocks that you can run more than once with arguments, or you could think of them as asynchronous functions that can capture stuff from their environment. So they really are exactly like closures, except that they produce futures when you call them.
Ben: And I think there’s kind of an analog here with generators from, I think Python, where you can call a generator and you can have it resume with different arguments every time that you resume the generator. And so I think that’s the current thing that’s kind of blocking stabilization, which is like, how would we actually do this and model this in Rust? So there’s some open questions there.
Jon: Exactly. Now Ben, is it so that async/await and futures just exist in isolation, like you could just use what’s in the standard library and you need nothing else?
Ben: Sadly, no. If you actually want to do something useful without having
to reimplement the entire world yourself, you’re going to want to use some kind
of library. And people who care about asynchronous programming, usually you’re
doing something web related. So you’re writing a server, say, which is what I do
for my work, and we happen to use async/await. Or rather, we don’t, sadly, we
still use original futures, but we are planning to move to async/await, but the
thing we're currently waiting on is library support: support for things in, say, web servers like rocket, that kind of thing. And then those themselves will want to work on an executor like tokio. Or, I think there are others that ship their own executor; there's actually several out there these days. Fuchsia has their own executor, but that's apart from Tokio. I think there are at least one or two more that I've seen, so that's pretty cool, actually.
Jon: And in fact, the futures-rs library has its own executor as well. So there's, like, a lot of executors out there. So what are executors? Why do we need executors?
Ben: So if you've used async/await, or any kind of green threads in other languages, which are analogous in some ways, you've probably noticed that what's different is that Rust does not bundle any kind of runtime. So you have to actually bring your own. In Node.js, for example, there's an implementation under the hood of something that will actually run your futures and schedule them and poll for updates from ones that are currently waiting, and the same for Go, or Python, or any other language. And what makes Rust different is that there's no one blessed executor. You can bring your own. In fact, you have to, because we can't simply impose that kind of choice on our consumers. Rust is a very low level language, and that means that we are cursed with having to be modular and support many, many use cases. And so no matter what we shipped, somebody would have a problem with it. And so at the end of the day, we have to let you swap in and out different ways of running your green threads.
Jon: Yeah. One way to think about why this is necessary is, imagine that you’re running on, like, an embedded device that maybe doesn’t even have an operating system. You can’t use sort of a normal executor, the same way you would, like, on a Windows or Linux machine, right? Because you might not have access to system calls. You might not have the same ability to do things like epoll or use IOCP or any of these features that the executor might rely on. And so you might want to write a custom executor specifically for that environment. And so if Rust just shipped with an executor that everyone had to use, you wouldn’t have that option.
Ben: Yeah, I believe there's actually a blog post that was written on the official Rust blog when the prior release came out, kind of just talking about various things, including the state of the ecosystem. And so it was talking about, like— I believe Tokio is planning a major release soon. The futures-rs library, which is kind of the playground for things that might someday end up in the standard library, just had a 0.3 release. And async-std, as of about a week or so ago, just released their 1.0. So that's a great thing. I've seen plenty of other libraries start to support async-std; there's a really good developing ecosystem there.
Jon: I thought one thing that was cool from that blog post, which we'll link in the show notes, was this wasm_bindgen_futures crate. So this is a crate that converts between Rust futures and JavaScript promises. So if you're using wasm, it lets you work with promise-based JavaScript APIs— I don't know exactly how the machinery works under the hood.
Ben: Very cool.
Jon: Which is just really cool. The fact that this even possible is amazing.
Ben: What else is there to talk about, do you think? We talked about async closures. We talked about lots of this stuff. Is that kind of it? It didn’t take nearly as long as we thought it might.
Jon: That’s true. There is one thing that I want to mention about async/await, and I think it’s one reason why people are so excited, as an explanation for those who haven’t used async/await and don’t know why they should be excited. One thing that’s really neat about async/await is how uninteresting it ends up making your code. Because with the old futures stuff, you had to, like, write code like and_then, and_then, and_then.
Ben: Yes, I know this well. It’s my everyday.
Jon: There’s just like this cascade of things. Whereas now that we have
async/await, you can just write the code the way you normally would. Sort of straight code, top to bottom, no callbacks, no
and_thens, and you can use
things like the question mark operator, and it will just work. And actually,
if you took a synchronous library that had been ported to futures 0.1, and then ported that to async/await, you would probably find that the code ended up looking the way it did when it was synchronous,
except with, like, a little smattering of
await keywords. And that
is really good, right? It means that the overhead of using the asynchronous
code, the cognitive overhead, is much lower than what it used to be, in like,
the futures 0.1 world.
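The "cascade of and_thens" versus straight-line code that Jon describes is easiest to see with plain `Result`, since the futures 0.1 combinators aren't in the standard library but have the same nested shape. This is an analogy of ours, not code from the episode: the first function chains combinators the way futures 0.1 code did, and the second reads top to bottom with `?`, which is the shape async/await gives you (with `.await` sprinkled in where the code would suspend).

```rust
use std::num::ParseIntError;

// Combinator style: each step nests inside the previous step's closure,
// much like chaining `and_then` on a futures-0.1 future.
fn total_chained(a: &str, b: &str) -> Result<i32, ParseIntError> {
    a.trim()
        .parse::<i32>()
        .and_then(|x| b.trim().parse::<i32>().map(move |y| x + y))
}

// Straight-line style: read top to bottom, no callbacks. The `?`
// operator propagates the error early, just like it does inside an
// `async fn`.
fn total_flat(a: &str, b: &str) -> Result<i32, ParseIntError> {
    let x: i32 = a.trim().parse()?;
    let y: i32 = b.trim().parse()?;
    Ok(x + y)
}

fn main() {
    // Both styles compute the same thing; only the shape differs.
    assert_eq!(total_chained("2", " 40"), Ok(42));
    assert_eq!(total_flat("2", " 40"), Ok(42));
    assert!(total_flat("two", "40").is_err());
}
```

With two steps the difference is mild; with five or six fallible, asynchronous steps, the nested version becomes the "cascade" the hosts are describing, while the flat version stays readable.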
Jon: So what do you think is next? What’s the big next thing now? What are we excited about?
Ben: For async/await specifically? So I think actually, I would propose a
kind of different tack, which is I think that maybe it’s time for async/await to
stop dominating the headlines of every Rust release. We can look forward to
other things in the Rust compiler: things like the new improvements we mentioned, improvements to Miri, things like Polonius, which is the new new borrow checker; I believe Niko Matsakis has a few good blog posts on it. And Chalk, which is, again, a project of Niko Matsakis’s, which would allow things like
generic associated types, which would then allow things like async trait
methods. So there’s a lot of things now, and I believe that we’re finally at a
point where async/await is now stable enough, even though there’s plenty more to
do, that it no longer needs to kind of loom large on the horizon, like it’s here
now, things are developing, and now it’s just part of Rust.
Jon: Yep. I think now, with async/await, what we really want to see is just, like, the cleanup: making the error messages nicer, making the ergonomics nicer.
Ben: Oh yes, absolutely.
Jon: But now the core feature is finally there, and I agree with you, that it would be nice now to— not that we haven’t focused on other things as well, but I think async/await has certainly stolen some of the limelight. And that will hopefully now sort of fade a bit into the background of our lives, and just make us all subtly happy that we have async/await. This brings us into an interesting sort of last point to touch on, though, which is the Rust call for blog posts.
Ben: The Rust 2020 roadmap call for blog posts, to try and figure out what the Rust team should focus on, or really make the priority, for the next year.
Jon: Yeah, do you want to talk a little bit about this initiative? Because we’ve done this before, right?
Ben: I’m not sure we have, on the podcast. So every year, the Rust developers (there’s a six-week cycle for Rust releases, but planning tends to happen every year, just ’cause it’s convenient), sometime around December, will usually say: hey, Rust users, we want your feedback. We know that many of you don’t participate every single day out there on the forums or the bug trackers. If you ever want to give any kind of feedback about Rust, now is the time. Even if you give us only a paragraph-long blog post, or even just a tweet, letting us know what things we should do for the next year, or should begin to prioritize, so that when we make the roadmap for 2020, we can address the actual needs of Rust users. There are plenty of those posts already; I think the call has been open for about a month now. I think last year they had problems with waiting too long to get the blog posts out, or the roadmap out, that is. So they’re probably going to close the call for blog posts around mid-December, or possibly earlier. Don’t trust me; go read up, we’ll have a link somewhere. So hopefully, by the time 2020 actually rolls around, there will promptly be a roadmap to start guiding the way for development for the next year.
Jon: So how careful do I have to be if I want to write one of these blog posts?
Ben: Not at all.
Jon: Do I have to write, like, a 30 page treatise? What kind of thing do they want to see?
Ben: If you want to, like, scribble that thing on a napkin and just toss it in the general direction of a Rust developer near you, go ahead and do that. The idea here, again, is that in open source development of any kind, the squeaky wheel gets the oil, say, and so it often ends up dominated by people who have a lot of time or a lot of effort to put into commenting on every single issue, every single PR, that sort of thing. In this process, a lot more weight is given to a breadth of voices. Because there are plenty of folks out there; you can imagine many of the technologies that you use every day that you’re not passionate about, where it’s like, hey, I’m using, say, some random library, or some terminal thing, or any sort of tool that you use, and if you ever want to give feedback on it, now is the time. The suggestion box is currently open. Slip your thing in while you can.
Jon: Yeah, and I think it’s important to note that the feedback you give doesn’t have to be super specific. It doesn’t have to be like, I want this issue to be fixed. It can also be sort of larger things, like, what do I want Rust to focus on for the next year?
Ben: And it can be the same things as ever, like compile times. Sure, if that’s all your problem is, then let us know. That’s still very useful, to know that despite improvements there’s still a problem, which I don’t think anyone would disagree with, in fact, but it’s good to know.
Jon: Yeah, exactly. And it can also be things like— one thing we saw last year was, a number of calls that went like, I want Rust to be boring for the next year. I want it to just focus on making the things that we have better, rather than new fancy things. It can be more sort of, I don’t want to say lofty goals, but it can be more, sort of, overarching goals for Rust development. It doesn’t have to be concrete proposals.
Ben: So I believe that’s it, unless you have anything else to mention, Jon.
Jon: No, I think you’re right.
Ben: We do have procedural things to get through, which is, I want to thank Jonathan very nicely for making a brand new website for rustacean-station.org. It looks fantastic. I don’t know who made the first website, but it was terrible, so we’ll blame them for not doing a very good job.
Jon: I think that was me as well.
Ben: That was also you. Yeah, I was definitely thinking—
Jon: I think it’s really just, I sat down and had a quick conversation with myself.
Ben: You do a lot of work for the back end of this podcast, and I definitely appreciate not ever having to worry about it. On my end, I just kind of make the release notes, push the PR, and it’s done, and I am so grateful for all of that. And now the website looks very, very nice. I have really enjoyed, especially, the little links in the release notes: you can click on a timestamp, and it will jump to an embedded player, and actually go to the time of whatever I’m talking about. It’s actually really nice. It’s super great.
Jon: Yeah, I’m happy with it as well, even though I feel like whoever designed this new website should be— should be happy with it.
Ben: And commended. You should be given a raise, definitely. Double your salary.
Jon: Double salary. That sounds great.
Ben: In the meantime, if you, like Jon, have any ideas for improvements to the podcast, if you want to participate, if you want to submit audio, it’s literally as easy as kind of, just like, flinging an audio file at us that has to do with Rust. It could be an interview with somebody that you know, that’s doing cool Rust stuff, or really anything at all. This is a crowdsourced, community-based podcast, and so if you have an idea for a thing, you could even just hit us up on any of the venues that are linked in the release notes. We have a Discord, a Twitter. Give us an idea. Introduce us to somebody that you know, that you think is doing cool things, and maybe we’ll get around to interviewing them. So, yeah, that’s the whole point of this lovely, lovely podcast we have here.
Jon: Yeah, and I know that some of the people listening are sitting on some audio clips that they haven’t sent us, whether it’s out of fear that they need to polish them, or because they’re unsure whether the topic will be relevant. Just send them our way. It’s fine for it to be something that’s rough—
Ben: Let’s channel the spirit of the 2020 blog post process. Just give us whatever.
Jon: Exactly, and it doesn’t have to be a cut diamond. It’s not as though Ben and I are professional podcasters.
Ben: No, we’re terrible at this.
Jon: Just— exactly. Just send us what you have, and it’s going to be great. We have a bunch of lovely people who are volunteering to do audio work, so you don’t have to do all of the fancy audio editing. Just send us what you have, and we’ll make an episode out of it.
Ben: Yeah, so I think that’s it for this time. Glad to finally get it done. Apologies for the delay, but I think it turned out pretty well.
Jon: Yeah, and I think, Ben, with that, the future that is this podcast, this podcast episode—
Ben: I should have known.
Jon: It has finally stopped yielding. It is now returning for the last time.
Ben: Well, I’m grateful for that. You can never use an await based pun ever again to close that.
Jon: You say that, you say that. I’m just waiting for streams to land in stable Rust, and then I’ll make another.
Ben: All right. Well, see you around, folks. Thanks for listening. See you next time.
Jon: Until next time. Bye.