The Rustacean Station Podcast

What's New in Rust 1.48 and 1.49

Episode Page with Show Notes

Jon Gjengset: All right, Ben. How about this? We are back for the first time in the new year.

Ben Striegel: Yes. Happy New Year to you. Let’s all hope that 2021 is both more eventful and less eventful than the last year. And the first event of this year is the new Rust release.

Jon: Yeah, I mean, I think we’re guaranteed for this year to be better, because it’s a higher number.

Ben: I think so. I mean, like, every Rust number that comes out that’s a higher number is better than the last. And so I expect years to be the same.

Jon: Yeah. I mean, this is why two is better than one, objectively.

Ben: That’s what we— we all agree. We all know.

Jon: Yeah, exactly.

Ben: Also, why tau is better than pi.

Jon: Exactly. I’m so glad we have a consistent rule we can apply. And also why 1.49 is better than 1.48. Although as we’ll find out, they’re both pretty good.

Ben: But 1.48 is still pretty good. So let’s start talking about that right now.

Jon: Fantastic. Let’s do it. All right. So, 1.48. I’m actually really excited about this release. And the biggest reason is the very first feature that’s listed in the release notes, which is easier linking in rustdoc. And where this comes up is, you’ve probably found yourself, when writing documentation, because you write documentation, right? Always. When you write documentation, you often want to refer to other parts of your crate. This might be other types, other functions, modules, whatever. And previously, the way you had to do this is you put, like, the square brackets around whatever thing you want to link. And then you put some, like, actual path, like you do like ../ or something. And you had to figure out all these paths, you had to keep them up to date, as you’re working, you have to, like, remember to update them if something moves, and it was a huge pain. Now, in 1.48 rustdoc has you covered. It now has a way to basically figure out these paths for you. So if you want to link to a type or a function or whatever, all you do is you type the function or name in backticks, and then put square brackets around them and that’s all you have to do. If they’re in scope in the current file, then rustdoc will just figure out what the path is to that name in the ultimate rustdoc output, and then make it a link there. So you don’t do any round brackets. You don’t actually give the target of the link. You just put square brackets around the type. And this works for anything that can be resolved in a Rust file already. So it can be a module. It can be a path, like you could do, like, crate::foo::bar. It can be a function that’s in the current file. Anything you can think of, you can link to this way.

You can even give it different names. If you wanted some part of your text to link to, say, the type foo, then you would just write square brackets around the text, round brackets, and then give the path to the foo name. So like crate::foo or something. And all of this will just magically work, and it is beautiful. It means that now, actually, interlinking documentation will be nice to work with, which hopefully means that more people will do it. I’m very excited for this. As you can maybe tell.

Ben: Yeah, it’s a good change. rustdoc doesn’t really get a lot of love here in this podcast, because in the release notes, usually it’s overshadowed by, you know, language changes and library changes. But we should definitely shout out to it a bit more. It’s an unsung hero of the Rust world.

Jon: Yeah.

Ben: Can’t live without the documentation. And there’s one more change too, to rustdoc in this release.

Jon: Yeah, I was pretty happy to see rustdoc get some love here. Do you want to talk about this new #[doc(alias)] feature?

Ben: Sure. And so pretty much one of the cool features of rustdoc is it has a built in search bar at the very top there, which, if you’re not using it, you should be. And sometimes it can be hard to find exactly what you’re looking for if you don’t quite remember what you were— like, what the name of the thing is. And so now in this new release, you can, on top of any item that you want to document, you can give it an alias, and you can say, hey, someone might be misremembering the name of this, or they might mis-type it, do a typo. And so you can say, hey, like, if they search for this, show them this instead. So it’s just a nice little quality of life thing, for making the docs a bit easier to navigate.

At some point, we should also go through and, like, talk about other cool ways to use the advanced features of rustdoc searching, like, you can search for return types or function types. If you know that some function returns a String, you can just be like, hey, I want to find all things around here that return a String. Oh, here’s what I wanted in the first place. So even if you don’t know the name of the function or if one even exists, you can kind of explore the standard library, which is pretty big now, kind of getting large with all these convenience types, and find the thing that you’re looking for in there.

Jon: Yeah, it’s funny, because in some sense, this is like, how Haskell documentation to some extent works, is that the function signature is a big part of the documentation, and that is sort of true in Rust as well. Like in many cases, you can give me a function signature without the function name, and I might be able to tell you what that function does just based on the types involved. And rustdoc sort of lets you search in that way, to say, I want a function that, like returns this type and takes arguments of this type, and that might get you pretty far.

Ben: In the release notes here, it talks about how an obvious use of this is for, if you are doing FFI. If you’re writing wrappers for C libraries, where often C types and Rust types might have different names, based on, you know, the naming conventions of the platform or the language. And so in this case you could, say, search for memcpy, and then get the Rust equivalent, which I believe is “copy overlapping” or one of those things. (editor’s note: copy_nonoverlapping.)

Jon: Yeah, I mean, the other thing that actually came up to me, and where I would want to use this, is for sort of symbols and operators, like if you have a plus function, you can alias it to the actual plus symbol. And that might seem trivial. But if you think of something like a linear algebra library, you can now alias your operations to the mathematical symbols that the operation represents.

Ben: That’s a cool idea.

Jon: Which in theory makes it much easier to find things. And especially because this is Unicode, right? So you could put— like, if you have a sum function, you could put the sum symbol in there. It might be hard for people to type to search for, but it’s a neat quality of life improvement.

Ben: I suppose you could. One of these days we’ll get those Unicode identifiers, so we can actually have the function named that in the first place.

Jon: I mean, I think really what I want is to alias my functions to have emojis, so like, search for the happy function.

Ben: Oh, Swift’s had that for years and years. So we’re lagging behind here.

Jon: Yeah, that’s great. The other change that came up is, sort of, I think, the start of a long range of improvements that we’re going to see going forward, which is the start of seeing const generics affect the standard library. We’ve seen this a little bit in that trait implementations for arrays now are implemented for arrays of any length, and that’s really cool. We talked about that, I think, in 1.47. But this, I think, is the first library change where we’re seeing const generics show up in new methods that are available. In particular, there’s now an implementation of TryFrom&lt;Vec&lt;T&gt;&gt; for [T; N]. So the idea here is that if you have a vector of Ts, you can try to turn it into an array of length N, and it will succeed if the vector is also of length N. And this wouldn’t really work without const generics. Or it would be weird because it would only work for vectors of very small lengths. But now we can say this implementation, this method exists for any N, and I think this is really neat. I think this is a cool use of const generics.

Ben: Yeah, it’s really cool because previously it would be kind of, like, awkward and difficult to make this conversion, you’d have to— you’d probably want to, like, initialize the array first and then manually copy, loop over each index and copy into it. And more likely, you would probably just use a slice. And so, in this case, we’re going to start seeing, I believe, a lot of APIs that start actually getting to use these fixed-size arrays, which might not always be the most ergonomic or best way of doing things. But I’m sure there are plenty of, like, places where you don’t wanna use a slice, you’d rather just use the fixed-size array.

Jon: Yeah, we got into, when we were discussing this before we started, we got into, like, what other kind of interesting things might be coming down the line. You want to talk a little bit about the slice functions you found?

Ben: Yeah. Let me pull those up real quick. I believe they were called, slice::array_chunks. Is that correct?

Jon: Yeah. It’s like, as_chunks and array_chunks.

Ben: There’s also an array_windows function somewhere there too? Let’s bring it up real quick.

Jon: Yeah, there’s, like, as_chunks_mut, array_chunks_mut, array_windows. And all of these are— they’re pretty neat because they’re basically a way to say, because I happen to have them open here. If you have a slice and you want to get, sort of, sub-slices of that slice, of different lengths. Like, you have a slice of length 8, and you want every— you want, like, slices of 2, chunks of 2, into that slice. There’s now in nightly, there’s this new method on slices called as_chunks, where it’s generic over the length of the chunks, and it returns you, like, a slice of arrays, where each array is one of those chunks. So if you want to slice into chunks of 2, what you’re going to get back is a slice of arrays of length 2, that are those chunks. And it’s just really neat, I think.

Ben: Yeah, I think it’s one of those really cool things that, once we start seeing it in the wild, and start internalizing the use of these const generics, it’s going to be, I think, like, pretty idiomatic, and pretty widely distributed. So let’s say, get used to it because I believe 1.51 is when min_const_generics is going to finally make it into stable. I believe it was stabilized a few days ago, as of this recording. So we should be having a lot of fun in the future.

And we should probably talk about, once that lands, some crates like the vector math crates which I believe really want to use these to their full potential.

Jon: Yeah. There were a couple of other APIs that were stabilized in this release as well. There’s two in particular that I want to talk about, that are related to the futures ecosystem. So two functions that were stabilized are future::ready and future::pending, and these are kind of interesting. They are functions that you give them a— well, in the case of future::ready, you give it a type T, and it returns you a future of that type T, that just immediately yields Ready with the value you gave. And then future::pending is sort of the dual to that, in that you don’t give it a value. You just give it a generic type T, and it will give you a future of that T that just never resolves. It always returns Pending.

And you might wonder, like, why are these interesting? Like, why wouldn’t I just use— at least for future::ready— why wouldn’t I just use an async block, like an async move { t } because it gives you sort of the same functionality. And the answer here goes back to when we talked in the very beginning, about like, pinning and futures, when async/await landed, which is that async blocks in Rust are by default— not even by default, they just are generators. And generators currently are always not Unpin. So once you start polling them, you can’t move them. And this is what the Pin API enforces. But there are a bunch of APIs that require you to pass in futures that are Unpin. And normally the way you get a not-Unpin future to become Unpin is that you Box it. Because if you put it on the heap, then moving the type you have is fine, because the actual underlying type won’t move because it’s always on the heap. Now ready is a way for you to get a Unpin future that just returns some T. So the difference between a future::ready<T> and an async move { t } is that future::ready<T> will be Unpin and async move { t } will not be Unpin. And so the future::ready one is always more general. It’s always easier to reuse, and it’s named, so you can have types that contain what future::ready returns. Which you can’t really do with an async closure or async block without, like, existential types which we don’t really have yet.

This did end up taking me down a pretty deep rabbit hole though, of why do we have future::pending? This one was added in the same PR as future::ready, and it was added because both the futures crate and the async_std crate already had these sort of helper functions pending and ready. And I tried digging back into the history of futures-rs of, why is pending there? The reason I wanted to dig into this is because pending doesn’t make much sense to me. It’s a future, that if you ever await it, it’s basically an infinite loop, and that doesn’t seem very useful. It’s an infinite loop where you don’t get to choose what happens during the infinite loop. Normally, we tolerate infinite loops, or we use them because we want to run something and take advantage of the side effects. But if you do future::pending.await, there are no side effects. You’re just blocking the current task forever. And that struck me as weird.

And so I traced the sort of lineage of pending all the way back to the very first commit of futures-rs, which does not explain why it’s there. Like, this is a commit by Alex Crichton back in March of 2016, and it’s just there, and then it’s just been carried forward ever since. I think if I were to guess, future::pending is useful in testing cases, where you want to make sure that even if a future is always pending, it doesn’t, like, block the system or anything. But it’s just weird to me, like it seems like such a niche type. So if anyone thinks of a reason why future::pending is useful outside of just some very trivial tests, then please let us know and we’ll try to make sure to talk about this in the next episode as to why I was wrong and why future::pending is essential.

Ben: The only remaining thing with the APIs in this release is that some APIs have been made const. And this is not const generics, just const evaluation.

Jon: What? That’s entirely new.

Ben: This happens every single release. There’s always a few things that become const.

Jon: What? We have const fns?

Ben: No need to be so incredulous. This time, they are a few Option methods, so Option::is_some, Option::is_none. And so if you have an Option you can use if my_option.is_some() to test if it’s Some or None, and this is a bit redundant with the if let syntax, if you’re familiar with that. I believe these methods predate if let. They are ancient. They’ve been around since Option has existed. But now it’s cool that they are const, because if you wanted to use— I believe, was it last release, or the previous one, where now you can use branches, ifs in const fns. And so now these are kind of just expanding on those building blocks. Now the things you might want to use with those if statements are now becoming const. So we’re gradually seeing more and more of the building blocks of all Rust code becoming const. Which opens up more and more chances for more things to become const. So ongoing constification of std continues unabated.

Jon: I think it might even be that it’s because the constification work happened to enable conditionals. That’s how these functions are able to be const now. Right? Because if you think about internally, is_some and is_none sort of need to branch on whether the inner type is Some or None, right?

Ben: I imagine they’re a match statement internally.

Jon: Yeah, exactly.

Ben: Same with Result::is_ok, Result::is_err, and then a few other things: Ordering::reverse, Ordering::then. Did you have something else to say?

Jon: Yeah, I was just going to say, I think what we’re seeing with most of these is, as you mentioned, like, the continuing trend from before, I do wonder, like, what the next step is for constification. We got some limited amount of looping and conditionals. Are we going to get, like, compile time heap memory or something, soon? Like, that would be really cool. Like, what would it take to make, like—

Ben: Maybe we should interview someone who’s working on this and find out?

Jon: Oh, yeah. We should chat to Oli for example. That’d be really cool. Or if you, as the listener, want to talk to Oli about this, you should reach out, and we will happily help you make an episode with Oli where we talk about constification. That sounds great.

Ben: Yeah, it’s a great time to segue into the fact that Rustacean Station is a community podcast. If you ever want to help out with any of this, making episodes, hosting, interviewing, or just submitting ideas for episodes, let us know via our contact info, which you’ll see above and below the podcast.

Jon: And we will be extremely excited.

Ben: We always are. You know us.

And I think that’s all for 1.48. Unless you have—

Jon: I do have one more thing for 1.48 actually, which is—

Ben: One more thing, ooh.

Jon: Yeah, I did, as always, I dug into the detailed release notes.

Ben: Oh yes, this.

Jon: Yeah, and I found some interesting things. One of them is not that interesting in some sense. But it’s that mem::zeroed and mem::uninitialized— remember, mem::uninitialized is a deprecated function now, that has been sort of superseded by MaybeUninit, which is a lot easier to use safely. But mem::zeroed and mem::uninitialized will now panic if you ever try to use them on a type that contains a type that is not allowed to be uninitialized. So, for example, in Rust, it is undefined behavior, and it is just not okay to have an invalid value. And what that means is, for example, if you have a bool, its value must be 0 or 1. You’re not allowed to have a bool that contains, say, the bit pattern for a 2. Even though, like in C, this would be totally fine, in Rust it is not. Similarly, if you have a reference, then that reference must not be dangling; it must not be null; it must point to a valid object. And so you can’t use mem::zeroed for a reference. You can’t use mem::uninitialized for a bool. It is just not okay. And now, those functions will panic if you try to use them in that way. And it’s interesting, because the documentation for them doesn’t actually say that they panic. But they do say that it’s undefined behavior, and panic is one form of valid replacement for something that’s undefined behavior. So this is like an interesting case where the compiler’s trying to guard you against undefined behavior even when it’s allowed to just be entirely unhelpful.

Ben: Also a reminder that mem::uninitialized is deprecated in favor of the MaybeUninit type. So if you are using uninitialized, hopefully you aren’t hitting it, but please stop using it.

Jon: Yes, please use MaybeUninit instead.

Ben: Yes. And then one more thing you have to talk about.

Jon: Yeah, this one’s interesting. So this one led me down another rabbit hole. I don’t know why. I just really like going down these.

This is an issue that was filed in August of 2015 and had been open ever since, until 1.48. It used to be that if you had bounds on an associated type of a trait, and you constructed a trait object over that trait, the bound on the associated type was just not checked. So what this meant was, you could write a function that, say, took some type that implements a trait, and call it with a trait object whose associated type didn’t implement the bounds of the associated type. So for example, more concretely here: Let’s say you have a trait Foo that has an associated type Bar. And in the definition of Bar, you say that Bar must implement Copy. And then you write some function that takes a Foo. It takes an instance of the trait. And then you call [that function] with a dyn Foo where Bar = T and it’s generic over T. Notice here that there’s no bound that says that the Bar has to be Copy on this function. It just takes any T. That function is allowed to call the function that consumes a Foo, and the function that consumes a Foo does not recheck, it doesn’t actually validate that the input type it got implements Copy. It just assumes that it implements Copy. And so this meant that ever since 2015, you’ve been able to take a type and have it just sort of implement any trait you want. It was just assumed that it implements any trait you want, which is obviously not okay. This was fixed in 1.48, which is really cool. Unfortunately, it is backwards incompatible with certain implementations, because what people were doing— and they had no idea that this was the case— they assumed that this worked because the compiler would infer the bounds, that you just didn’t have to name the bounds for associated types, because the compiler would infer them for you. In fact, the compiler was just never checking them. And so now in 1.49, in the release notes, as we’ll get to in a second, there’s a note that this is backwards-incompatible, and you may now need to modify your code to include additional bounds.
But it was sort of funny to trace back the lineage here: the reason this worked is because of a soundness bug, and people assumed that it was actually working correctly because they thought the compiler supported inferred bounds, when in fact it did not. It just didn’t check them.

I think the next thing I wanted to touch on briefly, before we get to 1.49, is rustup 1.23, which came out shortly after 1.48. There’s not too much interesting stuff in 1.23. There is support for Apple’s M1 devices, just in rustup itself. There’s support for installing minor releases so you can install, like, Rust 1.45.1 instead of just 1.45. And this is one that’s kind of cool, which is, if you use rustup, you can create a file in your project that says which toolchain you need. Like you might say, this compiles using nightly-2020-07-10, or something. And then, if someone tries to compile that project, rustup will automatically use that toolchain. And what happened in rustup 1.23 is that that file, called rust-toolchain, now can be a TOML file, where you can not only specify which toolchain you want, you can also say, I want to have the following components installed, I want to have the following targets installed, and rustup will make sure to install those, and use those for anyone who runs any cargo command on the project, or rustup command, I should say. So it’s a nice quality of life improvement.
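
For example, the new TOML form of the rust-toolchain file can look like this (the specific channel, components, and targets here are just an illustration):

```toml
[toolchain]
channel = "nightly-2020-07-10"
components = ["rustfmt", "clippy"]
targets = ["wasm32-unknown-unknown"]
```

Anyone who runs a cargo or rustup command in that project gets the pinned toolchain, with those components and targets installed automatically.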

Ben: And is that it for 1.48?

Jon: I think that’s it for 1.48. I think we’re ready to move to the higher number.

Ben: The next largest number, 1.49.

Jon: Released on the very last day.

Ben: The last day of 2020. So a great farewell to the year. The headline feature of this release is not to do with the language itself, but with the implementation. So there are tiers of support for Rust. There are— there’s Tier 1, Tier 2, and Tier 3. And what these tiers mean is the level of guaranteed— I’d say quality of the release artifacts, whenever they are provided, and also of the code itself. So a Tier 1 platform is something like Windows on Intel 64-bit; the big platforms that you expect it to work on, you know, Linux, Mac, Windows, for Intel. And then there’s Tier 2, which are guaranteed to build, so that, you know, whenever Rust has a release, if those targets don’t build, then they roll back the PR that caused it to fail and then fix it again. But they don’t necessarily run tests for all PRs for all those platforms, because as you’d imagine, getting enough machines for all these platforms that are often more obscure than Windows, Mac, or Linux, can be a bit of a daunting infrastructural challenge.

There’s also Tier 3, which means we have support for this in the code base, but we don’t really test it, and it’s kind of up to the community to really make sure that these things keep working. Often these are, kind of, more obscure embedded processors or homebrew OSes, these kinds of things. And so the headline feature of this release is that 64-bit ARM Linux now has Tier 1 support. And so that means that there are now builders in the cloud, testing every single PR to Rust, making sure that every single PR both builds and passes all tests, and with every stable release of Rust, it also ensures that Crater passes on this platform. And if you don’t know, Crater is the tool that the Rust project uses to— basically what it does is, on crates.io, Rust’s crate host, which is where the community puts all the open-source goodness that you all know and love, it downloads every single crate, and it tests each one before and after the change, and if there are any regressions, it will say, hey, we failed. We need to fix this, or, you know, take a look at this, something’s happened. This is how, for example, when Jon mentioned previously, with the situation where, oh, we need to do this bug fix where we weren’t checking these bounds; now we do check the bounds. You can actually detect, we have code in the wild that is failing now that we’ve fixed this bug, so let’s not just fix the bug blithely and then shove it out. Let’s maybe go through and, like, alert people who own these packages. Let’s try and fix their code, let’s actually manually submit PRs that fix their code and then maybe even wait a bit, until they have new versions pushed out. Trying to minimize the impact of any breaking changes that are due to bug fixes. And so it’s an immensely useful tool for making sure that Rust does not regress or cause any undue breakage. And so ARM Linux becoming Tier 1 really is a big deal, because it’s also the first non-x86 platform to become Tier 1.

And also as well, macOS and Windows 64-bit ARM are now Tier 2, which means that they’re now guaranteed to build, but they aren’t yet running the test suite or doing full Crater runs for releases. But you can also get the binaries from rustup. Go on, Jon.

Jon: Yeah, I think this is going to be a big deal going forward, to have even just one ARM platform as Tier 1, because it means that suddenly there’s going to be a lot more exposure to ARM for the community at large. I think one thing that happens with, like, Intel 64-bit processors is that everyone just sort of implicitly bakes in the assumption that they’re running on that, especially things that use concurrency primitives. It’s very easy to just rely on the fairly strong memory ordering guarantees that the Intel platform gives you. And I think that exposing more people to ARM through things like Crater runs is going to be great for the ecosystem, to sort of shore up some of those corner cases where, maybe it turns out your code is only correct on Intel, because you used too weak of a primitive underlying it. So a classic example of this might be something like using Ordering::Relaxed for all of your memory ordering operations, which generally will probably work fine on Intel processors, because they provide pretty strong guarantees by default that you can’t really turn off. Whereas on ARM, Relaxed is fairly relaxed, and I wonder how many bugs we’re going to see crop up as a result of this, from developers not knowing that they needed to use a stronger ordering.

Ben: On a similar note too, like, do you know how SIMD support shapes up for Rust on ARM, as opposed to x86?

Jon: I’m not sure. I think SIMD support is set up to be, per CPU feature, so I think it’s like, per-architecture. I don’t know that there’s a general SIMD feature that is, like, architecture-independent at the moment, and I don’t know whether we’ll see one easily, because I think that SIMD works fairly differently on the different architectures as well. So I think for now it’s really providing the intrinsics, more so than providing high level wrappers, although hopefully maybe that—

Ben: Regardless, it’s good to have it. Because now when we write our SIMD abstractions, we’ll be forced to consider more than one architecture, which is, you know, a pretty big deal.

Jon: Yeah, exactly.

Ben: Next up?

Jon: There were some— yeah, there were some technical changes in 1.49 as well, and this one’s kind of interesting. So you might have, in the past, run cargo test. At least, I hope you have. And if you have, you might have noticed that by default, it doesn’t really print anything. Like, if you throw a bunch of printlns in your library or in your tests, nothing really gets printed unless the test fails. And that should feel like magic to you, right? Like, if I call println and my test ran, why doesn’t it print? How does this work? And you might think like, oh, maybe the testing suite is just like, special, but— and to some extent it is. But there’s actually a pretty fascinating, I want to call it a hack, because it sort of is a hack in the standard library to enable this feature, which is that the standard library has a thread-local that— it’s public but unstable and hidden. And it’s a hook that the test crate— that is what Rust uses to run your tests— it basically sets that thread-local to true before it runs the test, and then sets it to false after. And the print function in the standard library, the thing that actually prints to stdout or stderr, has logic in it to check this thread-local. And if it’s true, it writes it to, like, a buffer in memory instead of writing it to the actual stdout.

And this, the fact that this is sort of a little bit of a hack has been visible, if you just knew where to dig. One example of this is if you manually open, like, stdout or stderr and then write to it, then that will get printed if you run cargo test. The other is that if you spawn a thread and then print from inside the thread, that would still show up in your test output. And now that you understand the underlying mechanism, you might see why. It’s because there’s a thread-local that determines whether the standard library is going to be using this in-memory buffering or actually write to the real stdout. When you spawn a thread, that new thread has no value for that thread-local, and so it just defaults to writing to stdout as it would in any sort of normal run of your program.

But in 1.49 there’s a change that landed, that basically propagates this variable out through thread::spawn. So if you do a thread::spawn, it will first check whether that thread-local has been set, and if it has been set, then it also sets it in the new thread that you spawned. So this way you now actually do get to capture the output, even from spawned threads. There are still ways around it, of course, like you can still manually open stdout if you really, really need to. Or you can turn off this feature with cargo test -- --nocapture. But it is interesting to dig into why— what is the underlying mechanism that enables the testing suite to do this, and the realization of why it didn’t work with threads, and why it does now. I just thought that was kind of cool.

Ben: There are a few library changes. Not very many. There are also two more constification ones, for Poll::is_ready and Poll::is_pending. Not very interesting, but still, they’re there. Did you wanna talk more about those?

Jon: No, I think those are fairly straightforward.

Ben: We’ll link to them if you want to read about them.

Jon: Yeah, there are some new stable functions for slice, which are basically, like, implementation details for sorting algorithms. If I remember correctly— so, select_nth_unstable, for example, which is one of the ones that stabilized, is: reorder the slice such that the element at the index you give ends up at its final sorted position. So these are highly specialized functions that—

Ben: Oh, jeez.

Jon: Yeah, it’s like— I think it’s basically for, if you want to implement quicksort, you need this function. I’m not entirely sure why it’s in the standard library, but I guess it’s useful for different implementations. Not entirely sure.
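For the curious, here is a small illustration of what select_nth_unstable does. After the call, the element that would sit at the given index in a fully sorted slice is in place, with smaller-or-equal elements before it and greater-or-equal elements after it, though neither side is itself sorted.

```rust
fn main() {
    let mut v = [5, 1, 4, 2, 3];

    // Put the element with sorted index 2 (the median here) in its
    // final position. Returns (elements before, the element, elements after).
    let (before, median, after) = v.select_nth_unstable(2);

    assert_eq!(*median, 3);
    // Everything before is <= the median; everything after is >= it.
    assert!(before.iter().all(|&x| x <= 3));
    assert!(after.iter().all(|&x| x >= 3));
}
```

This is exactly the partition step that quickselect (and quicksort) is built on, which is why it reads like an implementation detail.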

There are, though, a couple of other interesting things in the changelog, that—

Ben: Hidden away, tucked away.

Jon: Sorry, say that again?

Ben: They’re hidden away, they’re tucked away secretly behind the link into the detailed change notes.

Jon: Yeah, and often things are hidden away for a good reason, like they’re very niche and only people like me start digging into them. But this time there’s actually one that’s really cool, and that is, in 1.49 move_ref_pattern was stabilized. And that might not mean much to you, but I promise you it’s cool. So you may sometimes have written, like, a match statement— let’s say you’re matching on foo, and foo is a type that happens to have multiple fields. Let’s say it has the fields x and y. Previously, you had the choice of either binding those fields by reference or binding them by move, right? So if you do a match foo, then you can write the pattern as Foo {x, y}, and then in the match arm, you can either sort of move x and y out of foo and into that scope, or you can just take them by reference, so that after the match exits you can continue to use foo. And that’s how things were, right up until 1.49.

But what you were unable to do before was to say, I want to move y out of foo, but I want to take x by reference. And if you think about it, this is a little bit of a weird operation, because what state is foo in, during and after those scopes, right? Like, if you do match foo and the actual pattern is Foo {ref x, y}, then inside of the scope, if you now try to refer to foo, will that work or not? Because you’ve moved y out of foo. So it’s sort of a partial move. The x is still there, and you have a reference to it, but the y is not. And this is actually something that they wanted to support— this sort of mixing of move and ref bindings— all the way back before Rust 1.0. But the borrow checker just wasn’t up to the task of ensuring that you wouldn’t try to then use foo.y, directly or indirectly, later on in your code. And so it’s only in somewhat recent history that we’ve been able to enable this feature. And then it wasn’t really enabled for a while, just because they wanted to be conservative about what they supported. But now, in 1.49, this is supported. So you can now match on a type, and move some of the fields and reference others. This is really handy especially for things like state machines, where you might want to move out some of the fields because you’re going to reassign to self later. And it also comes up in other cases where you want to, like, match on, say, the previous state, but then mutate some other field. If you’ve run across this, this is a huge win, and if you haven’t, that’s okay. It might not matter too much to you, but this is a thing that I’ve wanted for a long time, and I’m happy to see it stabilized.
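Here is what that looks like in practice, with a made-up Foo type. The pattern binds x by reference while moving y out, leaving foo partially moved afterwards; this only compiles on Rust 1.49 and later.

```rust
struct Foo {
    x: String,
    y: String,
}

fn main() {
    let foo = Foo {
        x: String::from("hello"),
        y: String::from("world"),
    };

    match foo {
        // New in 1.49: mixed binding modes in a single pattern.
        // `x` is bound by reference; `y` is moved out of `foo`.
        Foo { ref x, y } => {
            assert_eq!(x.as_str(), "hello"); // borrowed from foo
            assert_eq!(y.as_str(), "world"); // owned: moved out of foo
        }
    }

    // `foo` is now partially moved: `foo.y` is gone, but `foo.x` survives.
    assert_eq!(foo.x.as_str(), "hello");
    // Using `foo` as a whole (or touching `foo.y`) here would be a
    // compile error, which is exactly what the borrow checker now tracks.
}
```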

Ben: There’s one more thing hidden away in these change notes here, which is that unions can now implement Drop. And there is a kind of long and somewhat boring history of why unions previously had this restriction. But the idea here is that that’s finally been lifted. And by unions, I mean the union keyword, that’s very similar to C-style unions, as distinct from Rust enums. Mostly used for C interop, but there are a few places where, for extreme memory savings, you might want to use this. The idea being that, here in this new 1.49 release, you can now have fields of your union that are wrapped in ManuallyDrop, which is a wrapper type that suppresses the automatic destructor— the idea is you have to actually trigger the destructor manually. And you can also now implement Drop on the union itself. One of the big deals about this is that previously, the way this was enforced, you just couldn’t have any field of your union that didn’t implement Copy. There was no way, in the standard library at least, to directly say, hey, this field must not implement Drop. But because Copy and Drop are incompatible, and that is compiler-enforced, you could just say, hey, all these fields must implement Copy, so they can’t implement Drop, and the problem is solved. And so now, even if you don’t need a field that implements Drop in your union, you can still have fields that aren’t Copy, because Copy is obviously opt-in on any structure. So it’s kind of just lifting a restriction there.
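A short sketch of what 1.49 allows, using a made-up StringOrInt union: a non-Copy field is fine as long as it is wrapped in ManuallyDrop, and the union itself can implement Drop.

```rust
use std::mem::ManuallyDrop;

// Since 1.49, a union may contain non-Copy field types, as long as
// they are wrapped in ManuallyDrop (a wrapper type, not a trait),
// and the union itself may implement Drop.
union StringOrInt {
    s: ManuallyDrop<String>,
    n: u64,
}

impl Drop for StringOrInt {
    fn drop(&mut self) {
        // The union cannot know which field is active, so this Drop
        // does nothing; the owner must run the right destructor by hand.
    }
}

fn main() {
    let u = StringOrInt { n: 7 };
    // Reading a union field is unsafe: the caller asserts which field is active.
    unsafe { assert_eq!(u.n, 7) };

    let mut v = StringOrInt {
        s: ManuallyDrop::new(String::from("hi")),
    };
    unsafe {
        assert_eq!(v.s.as_str(), "hi");
        // Run the String's destructor manually, as ManuallyDrop requires.
        ManuallyDrop::drop(&mut v.s);
    }
}
```

This is the shape of the trick crates like smallvec rely on: the union overlaps heap and inline storage, and the crate takes responsibility for dropping whichever variant is live.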

And also, there are a few crates whose authors I saw being very happy about this. So, for example, the smallvec crate, which lets you do inline vectors— if you have a very small vector, you get a memory savings from not having to actually put things on the heap. You can just store the elements inline, where normally the pointer and size information for the vector would be. And this new feature lets them actually save a little bit of space, so now there should be no size overhead compared to using the actual built-in vector, which is pretty cool. Just one of these little things that lifts a restriction and makes everyone’s life a little bit better.

Jon: Yeah, it’s funny. I feel like the longer we get into these Rust release cycles, the more we see, in some sense, niche fixes that happen to unblock a decent amount of work. And I think what we’re seeing, really, is that the ecosystem is growing. People are taking advantage of more features of the language, and there are just more users that are using more advanced functionality. And so more and more of these corner cases end up being actual pain points that are worth addressing. And I think it’s cool to see that the process seems to be scaling. Like, we are still improving things that matter to people.

Ben: Also, one thing that the change notes rarely talk about is compiler performance. And so I’ve seen people talking about how 1.49 has about a 5 to 10% reduction in compile times on most of the common benchmarks that they use to determine compiler speed, so that should be pretty good. There was a bit of a regression last year with the upgrade to a new version of LLVM, and I believe those issues have been resolved. So I’m not sure if it’s actually faster than it was, like, a year ago, but it’s definitely regained what it lost earlier this year. And there are always these performance enhancements coming in. Rarely do they get the spotlight in the release notes, because usually every six weeks there’s maybe, like, a few percent improvement, but it adds up over time. And it’s certainly pretty welcome.

Jon: Yeah, I mean, I would highly recommend looking at perf.rust-lang.org. It’s a really cool page for just looking at compiler performance over time. I find it pretty funny to open it every now and again and just look at the curves and see how things are going.

Speaking of how things are going, one thing that is not really related to a Rust release, but it is a release nonetheless. The 2020 Rust survey, the results are out. They came out December 16th, I think, and we’re not going to go through them in detail here now, but I highly recommend that you read through it. There’s some interesting results on how Rust use has evolved over time, what the current pain points are, and sort of, where do we go from here? What are the trends? What does this community look like? And I think it’s a worthwhile read.

Ben: And I think that’s all we have for today.

Jon: I think so.

Ben: Do you have anything else to share, Jon?

Jon: No, I think we’ve been through a lot of numbers today. We’ve gone through 1.48, 1.23, 1.49, and 2020. It’s pretty good. I think we’re doing well. I think we’re doing well, Ben.

Ben: Yeah, we’re definitely— we’re mastering the number line. Pretty gradually, we’re getting there. There are only a few left, I think. So, we’re running out.

Jon: Yeah, I think, actually, what we should do is, once 1.51 comes out, we can use const generics to just handle all of them immediately.

Ben: Yeah, we can just be generic. We can make the release after that just be for all future Rust releases simultaneously.

Jon: I think we should do that. And just— actually, what we should do is just speak in very general terms. And then it will just remain valid forever.

Ben: Well, I think it will be approximately April by then. So it could be a great April Fool’s Day prank just to talk about incredibly generic improvements of the compiler.

Jon: Oh, that’s a great idea. That’s a great idea.

Ben: We’ve got some constification this time. Here’s a few new APIs. Here’s a bug fix that, you know, breaks some weird code.

Jon: Yeah, and we could be outraged at, how dare they demote this Tier 1 thing to Tier 2. I can’t believe that happened.

Ben: And then we’re done. Podcast over.

Jon: Yeah, well, we’ll just re-release that same episode, month after month. From that point forward.

Ben: Great, Rust saves the day yet again.

Jon: Nice. Nice. I like it. And we can actually—

Ben: That’s how we do it. All right.

Jon: Nice. All right, Ben.

Ben: Have a great day.

Jon: Take care until next time.

Ben: Once again, happy New Year to all our listeners.

Jon: Bye, everyone.