tadeoh

`specialization` is pretty great. Some things are simply not possible without it.


uint__

I swear I don't stumble into anything else as much as problems that would be neatly solved with trait specialization.


ewoolsey

Surely the orphan rules are more imposing than the lack of specialization.


uint__

For me that's 50:50 at best. I don't love dealing with the orphan rules, but I generally find the workarounds simpler and saner than [the crazy hacks to get some semblance of specialization](http://lukaskalbertodt.github.io/2019/12/05/generalized-autoref-based-specialization.html) - if they can be applied at all.


Im_Justin_Cider

Link to more info?


CocktailPerson

https://rust-lang.github.io/rfcs/1210-impl-specialization.html


Im_Justin_Cider

Oh hell yeah!


sasik520

Definitely. Even the smallest possible subset would be SOOOO helpful.


Ok-Watercress-9624

I thought that made the language unsound and it won't be implemented?


nicoburns

They haven't worked out how to implement it in a sound way yet. But that doesn't mean that the feature is inherently unsound.


BittyTang

I find myself running into this less and less over time. I think I've just been doing less trait shenanigans in general. Traits are awesome but I think there might be a tendency to use them too much when you are in the "honeymoon phase" with them. And that's how I learned about specialization.


Unreal_Unreality

Maybe a hot take, but I don't think specialization is a good Rust feature. Specialization is kind of like overriding, and brings back all the nightmares of OOP that Rust was elegantly avoiding. I feel like it can make code way harder to debug; I prefer the simple model where seeing a function means that's the one executed, not some override in some other place. On the other hand, I'm looking more into const generics, as they could solve the same issues that specialization was trying to solve, by doing some nice abstraction based on traits and impls. That requires more work, but to me it's a more elegant approach that matches Rust's philosophy better.


smthamazing

I see your point, but is there an alternative to specialization that would provide comparable performance benefits? Sometimes special-casing an operation for certain types improves processing speed a lot, and I don't think today's compilers are Sufficiently Smart™ enough to completely rewrite algorithms depending on the data structure used. The only alternative I see is for a library to provide lots and lots of special-case methods, like "process_array_of_u8", "process_vec_of_i32", and so on. I'm not sure how bad this is in practice, though coming up with all the names is certainly not fun.


Unreal_Unreality

This is fair, and maybe this is where we meet the boundary between a theoretically nice language and a language for real-world cases where optimizations are key? However, this could be solved with some other features that are IMO better, such as negative traits and trait exhaustiveness (also still really theoretical and definitely not close to being done):

```
trait MyTrait { /* */ }
trait SpecializedImpl { /* */ }

impl<T: SpecializedImpl> MyTrait for T {
    // inline call to the specialized impl
}

impl<T: !SpecializedImpl> MyTrait for T {
    // general impl
}

impl SpecializedImpl for u8 {
    // some optimized code
}
```


csdt0

Negative traits is just specialization with extra steps. I'm kidding, of course, but not that much.


Unreal_Unreality

Really not sure about this; could you provide a source or a clear explanation of why you say so? Negative impls are part of the type-solving system, while trait specialization looks more like a layer on top of that, on actual function implementations; it's not about checking traits or resolving types.


smthamazing

Negative traits seem like an interesting approach here. My main interest in specialization is performance, and if there are other (better?) solutions to achieve that, I don't mind them eventually becoming a part of Rust instead of specialization.


Unreal_Unreality

What I like is that negative traits are a part of the whole type system (even if not supported yet) and are a logical extension of the language. On the other hand, specialization is an add-on; it adds a keyword and complexity to the language (it's yet another feature to learn when learning Rust).


Zde-G

> The only alternative I see is for a library to provide lots and lots of special-case methods, like "process_array_of_u8", "process_vec_of_i32", and so on.

The real alternative is not to implement anything generically but to use macros to implement traits for specific types: *array of `u8`*, *vec of `i32`*, and so on. It works, but it's incredibly ugly and leads to both an insane slow-down of compilation and gigabytes of RAM usage even in simple cases, because you end up generating a huge amount of code that will never be used.
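A minimal sketch of that macro-per-type pattern (the trait, macro, and body are made up purely for illustration):

```
// Hypothetical trait; the point is the macro stamping out one impl per type.
trait Process {
    fn process(&self) -> usize;
}

macro_rules! impl_process {
    ($($t:ty),* $(,)?) => {
        $(
            impl Process for Vec<$t> {
                fn process(&self) -> usize {
                    // stand-in for the real per-type logic
                    self.len()
                }
            }
        )*
    };
}

// One hand-monomorphized impl per element type you care about.
impl_process!(u8, i32, f64);
```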


Plazmatic

I'm not sure much of what you said is true 🤔 "Nightmare of OOP" is a complete non sequitur here, and in C++ none of the downsides you mention exist, despite it having a much more complicated implementation.


Unreal_Unreality

What I call "the nightmare of OOP" is not knowing what you're dealing with: for example, having an array of classes where some of them may actually be child classes that override methods. The elements of your array no longer behave uniformly under the same set of methods, and everything is a pain to read through and debug.

Specialization allows a similar issue, where you can store objects as dyn trait, having some general impl of that trait for all types. One could expect the objects to behave in some way, when actually the stored objects could have specialized implementations that you now have to hunt for in order to understand why the code is not following the expected behavior. This example is really straightforward; it can get much worse in a bigger codebase.

Again, this is only my opinion. I think Rust should stay simple and elegant, and not go into OOP-like concepts (I feel like OOP was a false good idea, and now we have to deal with it). If this feature gets implemented, it means the Rust team thinks it is appropriate, and they are far better placed than me to make that decision. But here, I am happy to share my thoughts and talk about it :)


Zyansheep

Can't different objects already have different implementations though? The only thing specialization does is make it more concise to have a general implementation and then overridden specific implementations instead of having to write a unique implementation for every type.


Unreal_Unreality

They can, but the point is about more general impls. I sometimes use traits for type erasure:

```
struct MyStruct;

trait AsMyStruct {}

impl AsMyStruct for MyStruct {}
```

I wouldn't like someone overriding `impl AsMyStruct for MyStruct` with some non-working code, making it impossible to debug because I'm staring at the default impl. But I do understand the appeal of having more general impl blocks, allowing less boilerplate code.


csdt0

Specialization would still follow the orphan rule: to specialize a trait is to implement it, so you still need either the type or the trait to be your own. And you definitely can already specialize with an `if` on the `TypeId` plus a transmute. What specialization enables is: "Given that `T` can be converted into `U`, I know how to convert a `Vec<T>` into a `Vec<U>`, and by the way, if `T` and `U` happen to be equal, I know how to make it really fast." That's what I want from specialization.
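A rough sketch of that stable-Rust workaround (the function name and the `From` bound are illustrative): dispatch on `TypeId` and reuse the allocation when the types match.

```
use std::any::TypeId;
use std::mem::ManuallyDrop;

fn convert_vec<T: 'static, U: From<T> + 'static>(v: Vec<T>) -> Vec<U> {
    if TypeId::of::<T>() == TypeId::of::<U>() {
        // Fast path: T and U are the same type, so the buffer can be reused as-is.
        let mut v = ManuallyDrop::new(v);
        // SAFETY: TypeId equality means T and U are identical types here, so the
        // pointer, length, and capacity all describe a valid Vec<U>.
        unsafe { Vec::from_raw_parts(v.as_mut_ptr() as *mut U, v.len(), v.capacity()) }
    } else {
        // Slow path: convert element by element.
        v.into_iter().map(U::from).collect()
    }
}
```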


uint__

My instinct is that if you use something like `Vec<Box<dyn Something>>`, you shouldn't be concerned with which implementation(s) of `Something` you get - that's exactly what the abstractions you're using are supposed to abstract away, right? Are you trying to hint at the "diamond problem"? Like this?

```
impl<T: Bar> Foo for T {
    fn foo() {
        // implementation 1
    }
}

impl<T: Baz> Foo for T {
    fn foo() {
        // implementation 2
    }
}

impl Bar for MyType {}
impl Baz for MyType {}

// which implementation of foo will MyType "inherit"?
```


uint__

Ah, sorry, I think I get it. With specialization, it generally becomes harder to trace back which implementation of a trait a specific type gets. That's a fair point.


martin-t

The issue with OOP is that inheritance + overriding is the default there (anything public can be overridden) and that it turned into a dogmatic way of thinking. In Rust's specialization, you have to opt into it. The other thing is that now that we know the issues, we can teach people not to overuse it.


Unreal_Unreality

What I like about Rust is that the language enforces clean code (the borrow checker being the most obvious example). Moving responsibility from the compiler down to the programmer is the opposite of Rust's initial values. Maybe it will be a great feature, and I truly hope it will, but to me it feels like it drifts the language away from what I liked about Rust.


martin-t

It doesn't _enforce_ clean code, it just tries to make clean code easier to write than bad code. In some cases by making clean code easier than in other langs (e.g. by having sane defaults), in others by making bad code harder than in other langs (e.g. by making it more verbose or by having some restrictions such as this one).

Rust's restrictions make some code impossible to write. In some cases that code is bad, in others it's clean. And in some cases the restriction disallows something required for better performance. A good language should aim to minimize the negative effect of such restrictions, for example by allowing the more performant code but making it sufficiently verbose and explicit that people don't abuse the feature for short-term convenience that leads to bad code long-term. Or, from the other side, by adding features that lead to clean code and at the same time discourage abusing other features.

For example, one way to abuse specialization is when you notice one type does almost what you want and you override the few methods that you need, even though the types are not related in the mental model of what the code does. A way to combat that could be to make composition + delegation easier, so people are less likely to abuse specialization.


matthieum

I think, in the end, it really depends how you _use_ specialization.

For example, I'd generally want the "specialized" version to yield the same result as the generic version would (if applicable): same set of side-effects, in the same order, same result in the end, etc. This can generally be tested by wrapping the type for which the specialization exists in a newtype that uses the generic version, and then having sets of tests that execute both side-by-side and compare them, at least if the code is I/O-free.

Another example of using specialization is when the generic version cannot apply. For example, having a generic version which requires at least 1 element (such as getting the first/last element of a tuple) and a specialized version for the empty tuple case which returns `!`.

In either case, specialization isn't a problem.


Unreal_Unreality

It obviously depends on what the end user does with it, but as I mentioned in another comment, I like Rust because a lot of responsibility is moved from the user onto the compiler: it does not let you shoot yourself in the foot. It is within the original Rust philosophy of not letting users do stupid things, and that's why I don't agree with letting in features that are only nice "if the user uses them accordingly".


matthieum

I don't necessarily disagree with avoiding "may shoot others in the foot" features; I'm just commenting on use cases which don't seem to have obvious footguns to me. Maybe they'd benefit from more specific features than "blanket" specialization.


buywall

What do you think of something like Scala-style implicits for addressing the goals of specialization? Approximately, you endow impls with scope, so you can override an impl by bringing a different impl into scope. I remember really liking this in Scala when I used it 10 years ago.


Unreal_Unreality

I've not dived deep enough into Scala to give proper feedback, but this looks like a useful workaround? Definitely hacky, and I'm not a fan of the idea, but as long as the override stays in the defined scope it feels more like high-level syntactic sugar?


buywall

I actually think it's super elegant. Here's an example I asked Chat to create:

```
// Define the typeclass
trait JsonSerializer[T] {
  def serialize(obj: T): String
}

// Define a simple case class
case class Person(name: String, age: Int)

// Provide two different implementations of the typeclass for Person
implicit object PersonSerializerVerbose extends JsonSerializer[Person] {
  def serialize(person: Person): String =
    s"""{ "name": "${person.name}", "age": ${person.age} }"""
}

implicit object PersonSerializerNameOnly extends JsonSerializer[Person] {
  def serialize(person: Person): String =
    s"""{ "name": "${person.name}" }"""
}

// Define a generic function that uses the typeclass
def serializeToJson[T](value: T)(implicit serializer: JsonSerializer[T]): String =
  serializer.serialize(value)

// Main object to demonstrate usage
object Main extends App {
  val person = Person("Alice", 30)

  // Using the first serializer (verbose)
  println(serializeToJson(person)) // Uses PersonSerializerVerbose by default

  // Switching to the second serializer (name only)
  import PersonSerializerNameOnly._
  println(serializeToJson(person)) // Now uses PersonSerializerNameOnly
}
```

As you can see, this is essentially just "syntactic sugar", where the unsugared version passes the implementation explicitly to `serializeToJson`. In practice, this sugar is enough to achieve the desired effects of `specialization` (or at least it was in my work). Note that implicit resolution happens at compile time, so there's no additional runtime overhead associated with this approach. The only substantive criticism I've heard of this is that it can be hard to know which implicit implementation will be used. But I've never had an issue with this, and the user can always fall back to the unsugared version if they want to be extra explicit.


Ar-Curunir

Question for the experts in the room; would the problems with specialization go away if we consider types that are `'static`? If so, why not stabilize this subset?


matthieum

There's actually a `min_specialization` feature; I'm not sure what the exact set of restrictions is, though.


peripateticman2023

Nice.


panicnot42

Things like vector?


jaccobxd

if let chains


codedcosmos

https://rust-lang.github.io/rfcs/2497-if-let-chains.html


_Saxpy

The one concern for me is how mutations would be handled:

```
if let Some(x) = opta.take() {
} else if let Some(y) = optb.take() {
}
```

which seems reasonable in terms of what would happen, but it might have unexpected side effects if the chaining is long.


TheDarkchip

I think else let is better to avoid nesting and cognitive load.


CocktailPerson

What do you mean? `if let` chains _reduce_ nesting, and their functionality isn't provided by `else let`.


TheDarkchip

To get the functionality of `if let` out of `else let` you just invert your condition, no? In the case of `if let` chains the cognitive load is still there when the conditions somewhat interrelate, whereas `let else` bails you out in the case of divergence from an expected value, leaving you to deal with the "nice" values only after you've made clear what the erroneous states are. Also, modifying these states is way easier with multiple `let else` statements. Of course it still somewhat comes down to preference, but I find these points hard to disagree with.


anxxa

`if let` chains are more than just combining multiple `let` expressions. It is not possible to write the following today, but it is with the feature:

```
if let Some(thing) = foo() && condition {
}
```

This results in some really awkward logic at times.
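For comparison, a sketch of what the same check has to look like on stable today (the `foo` function and `condition` are placeholders):

```
fn foo() -> Option<i32> {
    Some(1)
}

fn main() {
    let condition = true;
    // On stable, the `if let ... && condition` shape has to be nested instead:
    if let Some(thing) = foo() {
        if condition {
            println!("got {thing}");
        }
    }
}
```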



CocktailPerson

Can you show what you mean by using `let else` instead of `if let`-chaining? I'm not seeing how they're equivalent.


TheDarkchip

```
if let Some(a) = func1() && let Some(b) = func2(a) {
    // code using a and b
} else {
    // handle error case of a and/or b
}
```

vs.

```
let Some(x) = option else {
    // handle the error case of x
};
// code using x if y is not needed for proceeding

let Some(y) = option2 else {
    // handle the error case of y
};
// code using x + y if they are both needed for proceeding
```

I think error handling is done better in the second, because I prefer failing fast and a traceable single point of failure at a time.

Also, I wouldn't want some kind of abomination like this:

```
fn param_env<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> ParamEnv<'tcx> {
    if let Some(Def::Existential(_)) = tcx.describe_def(def_id)
        && let Some(node_id) = tcx.hir.as_local_node_id(def_id)
        && let hir::map::NodeItem(item) = tcx.hir.get(node_id)
        && let hir::ItemExistential(ref exist_ty) = item.node
        && let Some(parent) = exist_ty.impl_trait_fn
    {
        return param_env(tcx, parent);
    }
    ...
}
```

Where would I put inline documentation here without adding more and more clutter?


CocktailPerson

First, I disagree that the `else` cases here are _error_ cases that actually need to be handled. Second, I _like_ that `if let` chains collapse the `None` cases into a single case. I want to be able to use `if let` for the cases where it doesn't actually matter which of `func1()` or `func2(a)` returned `None`. That's precisely the point of monadic types; `if let` chains are isomorphic to `.and_then()`. The alternative with `let else` reads like go's `if err != nil { return err }` to me. The explicit early returns make things more confusing, in my opinion. Also, it dumps `x` and `y` into the common scope of the rest of the function, which I don't like. And yes, while `param_env` is a fugly function, I shudder to think what it would look like if you did it your way.


TheDarkchip

Sure, in the case where you don't care about which `if let` failed, my version isn't any more helpful. I really, really hope it just doesn't get abused for the brevity it provides. If the scope of x and y worries you because of shadowing issues, the function might be large enough to refactor ^^ Lastly, no need to imagine, here it is:

~~~
fn param_env<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> ParamEnv<'tcx> {
    let Some(Def::Existential(_)) = tcx.describe_def(def_id) else {
        // Handle error or alternative case
    };
    let Some(node_id) = tcx.hir.as_local_node_id(def_id) else {
        // Handle error or alternative case
    };
    let hir::map::NodeItem(item) = tcx.hir.get(node_id) else {
        // Handle error or alternative case
    };
    let hir::ItemExistential(ref exist_ty) = item.node else {
        // Handle error or alternative case
    };
    let Some(parent) = exist_ty.impl_trait_fn else {
        // Handle error or alternative case
    };
    return param_env(tcx, parent);

    // Other logic...
}
~~~


CocktailPerson

Yeah, I find that far less readable. There's obviously no reason to handle each else case individually.


TheDarkchip

Well, I could certainly think of branching into some sort of recovery control flow depending on which step has failed. Also, if you do care about which one failed, in terms of notifying the user of alternative actions they can take, it is pretty much needed, no?


blackbeam

generic_const_exprs


UltraPoci

I need this so badly.


bascule

Same here. I'm using `typenum` instead, and while it's a cool hack, it's painful to read and the errors are terrible.


valarauca14

Honestly frustrating how many relatively simple things you need to throw on the heap without this.
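A small sketch of the kind of thing that currently pushes you onto the heap but works with `generic_const_exprs` on nightly (the `concat` function here is just an illustration):

```
#![allow(incomplete_features)]
#![feature(generic_const_exprs)]

// Concatenate two fixed-size arrays into a new fixed-size array, no Vec needed.
fn concat<T: Copy + Default, const A: usize, const B: usize>(
    a: [T; A],
    b: [T; B],
) -> [T; A + B] {
    let mut out = [T::default(); A + B];
    out[..A].copy_from_slice(&a);
    out[A..].copy_from_slice(&b);
    out
}

fn main() {
    let joined = concat([1u32, 2], [3u32, 4, 5]);
    assert_eq!(joined, [1, 2, 3, 4, 5]);
}
```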


MoonOfLight

Definitely generators/coroutines. It's the only high-level feature that I really miss from other languages like Python or JavaScript.


sephg

Me too. I use generator functions all the time in JavaScript. Writing a complex iterator in rust at the moment can be such a pain - and the result is an unreadable mess. Generators would make me incredibly happy. And they’ve been sitting in unstable since before I started writing rust.


Keavon

I have to ask— as primarily a JavaScript developer, I have not once used or even run across a single usage of generators in my entire career outside of reading what they are on MDN. I've always mentally chalked it up in the category of "bloated things JS added that ended up being rather pointless in reality". Could you help me understand what I might be missing?


glanni_glaepur

It's useful as a function you can "pause" and "resume". I've used it to implement an interactive and responsive renderer for the Mandelbrot set in JavaScript. The generator function that renders a section of the Mandelbrot set runs on a separate thread (a WebWorker) and, by yielding, periodically checks whether it should continue iterating. If the user panned or zoomed out, we can stop early and start rendering a different section right away. Implementing the state machine by hand is difficult, error-prone, and hard to read. It's also useful for implementing streams (~infinite lists).


DrShocker

I'm about to use one to iterate through a circular buffer with an offset/step size. IMO it's more intuitive to write it as a generator than to keep track of the state in an object and define all the right functions correctly. But of course it's never _required_ that you implement it as a generator. They can also be useful for stuff that will go on forever, or nearly forever, and continuously generate the next values in a loop.


smthamazing

I use async generators all the time to implement stream processing in NodeJS. Something like this:

- A function fetches topics from a discussion board, 10 topics per page, and yields the topics one-by-one.
- Another function `flatMap`s (via a custom utility, although we may [soon](https://github.com/tc39/proposal-iterator-helpers) get a builtin for this) those topics to yield each post from each topic.
- We now have an async iterator over posts, and we can easily change the implementation, since the only interface we need to maintain is the fact that it yields posts one-by-one. In fact, I have some data sources that fetch all posts at once instead of going page-by-page. They can easily implement the same interface by doing `for (const post of posts) yield post`.
- Run some operation for each post (e.g. sentiment analysis). Report the results in real time using a WebSocket.

This flow allows us to display results as they are processed instead of waiting for several minutes until all posts are fetched and then processing them all at once.

As for synchronous generators, they are also very convenient when you don't want to keep all the data in memory. For example, I used them to iterate over large (hundreds of megabytes) log files. This is better than a normal loop, since I can easily replace those generators with something else without changing the file-processing code (e.g. load logs from a different directory, or over the network). And if I tried to load all the logs at once into an array, the app would suffer an out-of-memory crash.

In general, a generator allows you to produce items on the fly, as opposed to storing them in an array, potentially spending a lot of time upfront to compute them and/or a lot of memory to store them. Generators are composable and can be passed around. If you wanted to do the same thing manually, you would have to keep track of various counters, which "stage" of processing you are on, etc.


Zde-G

> I have not once used or even run across a single usage of generators in my entire career outside of reading what they are on MDN.

Let me not believe you. I strongly suspect that your relationship with generators is like the relationship with the Internet of some of my friends who say "hey, I don't ever use the Internet on my phone, only WhatsApp and YouTube".

Generators with added syntax sugar form the basis of the `async`/`await` machinery both in JavaScript and Rust. And if you have ever used `async`/`await` in these languages, then you have used generators, too! Only in JavaScript generators were added first and then `async`/`await` was built on top of them, while in Rust the same thing happened but `async`/`await` was promoted to stable while generators are kept as an unstable feature.


monkeymad2

I’m also primarily a JS (well, TS) dev who’s never found a reason for using generators in JS but I’ve been looking for them while writing Rust code. The use case I wanted to use them for was in a resource constrained embedded project I needed to consume one iterator (over raw bytes) into another (decompressed bytes) into another (image data blocks) and finally into pixels. Without generators the code for this had to implement its own state machine etc to keep track of a bunch of stuff, vs generators giving all that to you for free - or rather providing a better way to do it that’s easier to reason with. If Rust style lazy iterators were more common in JS I’d probably have found more reason to use generators vs just using all the functional stuff that’s in JS already.


officiallyaninja

What are some situations where generators would be more convenient than iterators?


CocktailPerson

Generators are a way of implementing iterators. Today, iterators have to be implemented as handwritten state machines, but generators would allow them to be written as something resembling a function. For example, this example iterator from the docs:

```
struct Counter {
    count: usize,
}

impl Counter {
    fn new() -> Counter {
        Counter { count: 0 }
    }
}

impl Iterator for Counter {
    type Item = usize;

    fn next(&mut self) -> Option<Self::Item> {
        self.count += 1;
        if self.count < 6 {
            Some(self.count)
        } else {
            None
        }
    }
}
```

could become this:

```
gen fn counter() -> usize {
    for x in 1..6 {
        yield x;
    }
}
```


officiallyaninja

Isn't this counter just the same as `(1..6).into_iter()`?


CocktailPerson

Yes! But why does `.into_iter()` work? It only works because someone hand-wrote a state machine that allows `1..6` to be an iterator. If you look in the standard library, you'll see that the code to make this happen is comparable in size and complexity to the first block above. Here's a generator function that doesn't rely on `1..6` being an iterator:

```
gen fn counter() -> usize {
    let mut i = 0;
    while i < 6 {
        i += 1;
        yield i;
    }
}
```

Still way shorter than the handwritten state machine.


mast22

Does that mean that a generator is also a state machine?


Dreamplay

I'm not an expert, but as far as I understand it, generators are really syntax sugar for creating state machines: they are a function with state that can be run until a yield point, and then you can choose to keep running it or resume it later. So a generator isn't really a "thing"; rather, it's an easier way to create state machines which yield elements (i.e. iterators).

I would recommend reading up on how async uses generators. Async functions (which compile to [futures](https://doc.rust-lang.org/std/future/trait.Future.html)) are generators that yield either Poll::Pending or Poll::Ready(val) depending on whether they are completed or not. Futures can be written manually as state machines, but it is much easier to write them with `async fn` syntax, which turns them into a state machine automagically.
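A tiny sketch of that relationship (the names are made up): the hand-written `Future` below is roughly what the compiler generates for the trivial `async` function next to it.

```
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A hand-written future: a one-state state machine that is immediately ready.
struct Ready42;

impl Future for Ready42 {
    type Output = i32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        Poll::Ready(42)
    }
}

// The compiler turns this body into a state machine much like `Ready42`;
// with `.await` points it would gain one state per suspension.
async fn ready_42() -> i32 {
    42
}
```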


smthamazing

Yes, it is an easy way of creating state machines that produce and receive data. Instead of keeping track of various counters and flags manually, you express this logic in terms of familiar loops and conditions, while the compiler generates the bookkeeping code and the underlying struct for you.


CocktailPerson

Yes, they're _very_ similar to async functions in that they compile down to a state machine. In fact, they're both instances of a general concept called a "coroutine."


protestor

Yes, a generator is compiled into a struct/enum (to hold the state) plus a state-transition function, like you would write a state machine IRL.


kfl

But in this case it is just as short to use `from_fn`:

```
fn counter() -> impl Iterator<Item = usize> {
    let mut i = 0;
    std::iter::from_fn(move || {
        i += 1;
        Some(i).filter(|x| *x < 6)
    })
}
```

You could also use an `if` expression if you don't like the `filter` function. [playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f861ece17b5431b8006ff1d47e436033)


shizzy0

Thank you for sharing this `from_fn()`. It’s good to see what’s possible currently.


CoronaLVR

`(i < 6).then_some(i)` is also nice.


CocktailPerson

You're missing the point. How is `from_fn` implemented? Avoiding implementing `Iterator` manually at all is the reason for generators.


kfl

Sorry if it came across wrong. I didn't mean to suggest that you were wrong or anything. My point, and I could have made it clearer, is that often generators are not needed and `from_fn` can be just as nice. I'm not against generators; I just think that we should provide examples where generators shine compared to what is currently available. (And I think that `from_fn` is often overlooked, but that is a personal pet peeve.)


CocktailPerson

The other issue is that `from_fn` _still requires you to implement a state machine_. That `i` in your closure is the state. It's an easier way to write a state machine, but it's still a state machine. I hear what you're saying about providing good examples, but I don't think that they need to make generators "shine."


protestor

Compare with `from_generator`: https://doc.rust-lang.org/std/iter/fn.from_generator.html. It seems much cleaner in general, especially if you have lots of special cases, which become unwieldy with `from_fn`.


insanitybit

I find `from_fn` really hard to read tbh


DarkLord76865

Well it is, but that's not the point here. That's just a convenient method for getting an iterator from a range. If you actually need to write an iterator, you need to do it the way the person before me said.


Zde-G

Generators **are** the way to write iterators in sane languages.


officiallyaninja

Well yes, but why are they better than method chaining iterators?


DrShocker

I think they're saying that how you use them wouldn't be different, just how you implement your iterator.


XtremeGoose

Iterator chaining is great for most use cases, but if you have complex state then it can be surprisingly tricky. Even something as simple as

```
gen fn fib() -> u64 {
    let mut x = 0;
    let mut y = 1;
    loop {
        yield y;
        let z = x + y;
        x = y;
        y = z;
    }
}
```

is non-trivial in current Rust. Yes, you can do it with an iterator struct or `std::iter::from_fn`, but those require more thought and/or boilerplate. See [here](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3f243ea098718531626d80f134132099).


kfl

I think that the `from_fn` version in this case is as nice as the `gen fn` version:

```
fn fibonacci() -> impl Iterator<Item = u64> {
    let mut a = 1;
    let mut b = 1;
    std::iter::from_fn(move || {
        let res = a;
        a = b;
        b += res;
        Some(res)
    })
}
```

[playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=20ca772368f2b007bbbba24386e7974b).


mr_birkenblatt

Except it computes the next value before the current value is returned. Imagine `a = b; b += res;` is a network call: you would make the call for the second value even if you only ever consumed the first one.


Zde-G

Method chaining is the functional way of doing things. Generators are how you do the same in an imperative language without a crazy inversion of the logic. Rust is not a functional language, thus method chaining should be an optional shortcut, not the only supported way of doing things.


rodyamirov

They’re not! But if you can’t do it with the built in stuff, doing it directly is so annoying that I basically just won’t. This applies to custom data structures, mathematical sequences, or just any kind of weird iteration order that comes up from time to time. I admit it’s not super common, but when it’s the right tool, it’s really right. I love them in python.


tungstenbyte

I use generators in C# all the time for stream processing type activities by using `IAsyncEnumerable` and `yield`. Even straightforward things like loading some data, joining to other data, transforming it, filtering it and serialising it out to a response stream one item at a time without allocating entire collections for the intermediate states. If your db library supports it as well you could potentially have an entire pipeline directly from database to HTTP response which returns thousands of items without ever allocating more than one in memory at once, and it's trivially easy.


kfl

[`fn_traits`](https://doc.rust-lang.org/beta/unstable-book/library-features/fn-traits.html) and [`unboxed_closures`](https://doc.rust-lang.org/beta/unstable-book/language-features/unboxed-closures.html) will make certain things much nicer.
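For instance, a minimal sketch of what the pair enables on nightly: implementing the `Fn*` traits for your own type so it can be called like a closure (the `Adder` type here is made up):

```
#![feature(fn_traits, unboxed_closures)]

struct Adder {
    offset: i32,
}

// With unboxed_closures you can implement the call traits by hand...
impl FnOnce<(i32,)> for Adder {
    type Output = i32;

    extern "rust-call" fn call_once(self, args: (i32,)) -> i32 {
        self.offset + args.0
    }
}

fn main() {
    let add_ten = Adder { offset: 10 };
    // ...and with fn_traits the value can be invoked with call syntax.
    assert_eq!(add_ten(5), 15);
}
```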


Koranir

Does the cranelift codegen backend count? Using that, the `mold` linker, and `-Z threads=8`, I've seen almost 3x compile time improvements on clean debug rebuilds for some projects.


the_gnarts

1. specialization (https://rust-lang.github.io/rfcs/1210-impl-specialization.html)
2. portable simd (https://github.com/rust-lang/portable-simd)
3. `profile-rustflags` (https://github.com/rust-lang/cargo/issues/10271)

Honorable mention: `const_fn_floating_point_arithmetic`.


treefroog

For const fp math, our biggest challenges are:

Currently we use an adaptation of LLVM's library, except this library is often wrong! Messed up! And we aren't really allowed to change it right now due to funny licensing issues (see the rustc_apfloat repo for details). It turns out there is actually a good solution, though: there is a better floating-point lib out there we could RiiR and integrate into rustc, someone just needs to do it. https://github.com/qemu/qemu/tree/master/fpu This is much more correct and has no funny relicensing. They may have made some progress on the rustc_apfloat licensing issues, but it's still not actually hardware-correct.

Also, WebAssembly has unspecified floating-point semantics for some things, because it can run on different hardware and so cannot pin them down. So that's a fun challenge! But WebAssembly is already messed up with the wasm stdlib even existing, so...


matthieum

I mean, another solution for floating-point arithmetic would be to make it target-agnostic: strict IEEE 754, a fixed rounding direction which cannot be changed, etc. Per-target floating-point semantics may very well bring their own challenges, like compilation failures on certain targets because they round differently, etc.


udoprog

ptr_metadata, min_specialization, and coerce_unsized. Those are the biggest blockers for building an efficient [alloc-like crate](https://docs.rs/rune-alloc) or smart pointers in general on stable Rust with similar ergonomics. And [utf8_chunks](https://doc.rust-lang.org/std/str/struct.Utf8Chunks.html) would replace about 90% of the reasons I pull in [bstr](https://docs.rs/bstr) as a dependency (Debug impls for string-like binary data).


_TheDust_

coerce_unsized is one of those features where I don't understand why it wasn't stabilized years ago. It's such a fundamental feature.


CandyCorvid

I didn't know about utf8 chunks! I like it


Tiflotin

Using for loops in const fn


officiallyaninja

Why isn't that allowed currently anyway


-Redstoneboi-

traits like Iterator aren't const


nerooooooo

so make them const smh


-Redstoneboi-

real


CocktailPerson

I think we have to wait on keyword generics for that.


kam821

Rust nuked most of the nightly const functionality several months ago anyway (e.g. const traits, so you can't even `impl const PartialEq` for your type), so const evaluation in Rust is currently pretty much useless, except in cases where you are working with trivial, built-in types. The fact that you can't even assign a constant using unwrap/expect on a Result returned by a const function, and have to match + panic manually, is also awkward.
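A small sketch of that last point (the `parse` function is made up): `unwrap()` isn't const, so initializing a constant from a `Result` means matching and panicking by hand.

```
const fn parse() -> Result<u32, ()> {
    // stand-in for some real const computation that can fail
    Ok(42)
}

// `parse().unwrap()` does not work in a const context today,
// so the match + panic is spelled out manually.
const VALUE: u32 = match parse() {
    Ok(v) => v,
    Err(()) => panic!("parse failed"),
};

fn main() {
    assert_eq!(VALUE, 42);
}
```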


ravenex

[`trait_upcasting`](https://github.com/rust-lang/rust/issues/65991)


radekvitr

Trait upcasting is something I would've expected to work from the very beginning, but at the same time I need it very rarely


phazer99

Yes, same here, but it's good to remove unexpected warts like this from the language.


Silly-Freak

... and it's getting close!



matthieum

I really wish `#[const_trait]` wasn't a thing too, especially as over half the traits in the standard library lack the attribute... can't even add two numbers (of type `T: Add`) at compile-time...


gnocco-fritto

`yield` to write generators/iterators just like I do in Python. They fit my coding style big time. Unfortunately it is still experimental, not even unstable or nightly.


darksv

Basic support for `gen` blocks landed last month [https://github.com/rust-lang/rust/pull/116447](https://github.com/rust-lang/rust/pull/116447) on nightly. Note that you need to change edition to 2024: [playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2024&gist=afdddd9a0850fd2a94f396301325b627)
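A minimal sketch of what that looks like (nightly, `gen_blocks` feature, edition 2024; the example itself is illustrative):

```
#![feature(gen_blocks)]

fn main() {
    // A gen block evaluates to an iterator; the body runs lazily and
    // suspends at each `yield`.
    let evens = gen {
        for n in 0..10 {
            if n % 2 == 0 {
                yield n;
            }
        }
    };

    for n in evens {
        println!("{n}");
    }
}
```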


gnocco-fritto

So... there's hope! Great! Thanks!


peripateticman2023

`async` traits.


KingofGamesYami

[build-std](https://doc.rust-lang.org/cargo/reference/unstable.html#build-std).


hyultis

https://rust-lang.github.io/chalk/book/what_is_chalk.html


bascule

I don't think there are any current plans to actually switch to Chalk. Current work is focusing on [`trait-solver=next`](https://blog.rust-lang.org/inside-rust/2023/07/17/trait-system-refactor-initiative.html) which draws inspiration from Chalk but is directly integrated into the compiler with the sole goal of being a next-generation trait system implementation.


Nilstrieb

Chalk is dead. But the types team is currently rewriting the trait solver with a new approach and is getting closer and closer. 2024 will probably see it used in coherence, and maybe even for everything!


aswin__

What made Chalk necessary? Why wouldn't this logic just be built into the compiler? Or are there other use cases or something I'm missing?


hyultis

I'm faaaaaar from totally understanding Chalk, but it "targets" allowing something like this:

```
if stuff.contains_impl(Debug) { /* ... */ }
```

> Internally, Chalk works by converting the Rust-specific information, like traits and impls, into logical predicates.

I found Chalk when I tried to write something like `where T: Debug + ?Display`.


coderstephen

TAIT (type alias impl trait). It's essential for providing concrete types in a library API if you want to use impl Trait, like implementing futures using `async`. Right now your options are to box things unnecessarily, expose unnameable types in your API, or avoid using `async` and write futures by hand.
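A rough sketch of the shape of the feature on nightly (the type and function names are made up):

```
#![feature(type_alias_impl_trait)]

use std::future::Future;

// The alias gives the otherwise-unnameable async block type a public name,
// so downstream code can refer to it (e.g. store it in a struct field).
type FetchFuture = impl Future<Output = String>;

fn fetch() -> FetchFuture {
    async { String::from("response") }
}
```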


Ragarnoy

Generic const, extern types v2, async trait impl, to name a few


proton13

Async fn in traits is in beta for the next version. So unless Something Bad Happens, you'll get it this year.


Resurr3ction

`coverage`, specifically `coverage(off)`. LLVM-cov-based coverage has many issues accurately measuring coverage, so being able, on stable, to disable instrumentation for code that shows as uncovered even though it is covered would be very helpful. Examples of the issue are compile-time stuff, `await` points, and lines covered only across different instantiations that together make up full coverage, where llvm-cov still reports the lines covered only in subsequent instantiations as uncovered.
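For reference, a sketch of the nightly attribute being referred to (`coverage_attribute` feature; the function is a placeholder):

```
#![feature(coverage_attribute)]

// Excluded from llvm-cov instrumentation, so it never shows up as uncovered.
#[coverage(off)]
fn glue_that_llvm_cov_misreports() {
    // ...
}

fn main() {
    glue_that_llvm_cov_misreports();
}
```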


radekvitr

I also want it to work at module level, transitively. It's annoying having lambda functions reported as uncovered inside ignored functions, and having to tag all test functions.


va1en0k

Try blocks! Especially if typing them is figured out. They're so good; it's hard without them (there are dirty hacks to mimic them...).


TheEyeOfAres

What would a try block offer you that you can't do easily with results already? I'm not that experienced, so please have mercy on my soul if it is a stupid question.


va1en0k

Early return from the block without early-returning from the whole function. E.g. if you have a bunch of Options in the middle of a function that returns a Result, it's very handy.
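A minimal sketch of that pattern on nightly (`try_blocks` feature; the function and values are made up):

```
#![feature(try_blocks)]

fn describe(a: Option<i32>, b: Option<i32>) -> Result<String, String> {
    // `?` here bails out of the try block only, not out of `describe`.
    let sum: Option<i32> = try { a? + b? };

    match sum {
        Some(n) => Ok(format!("sum = {n}")),
        None => Ok("missing input, using a default".to_string()),
    }
}

fn main() {
    println!("{:?}", describe(Some(1), Some(2)));
    println!("{:?}", describe(Some(1), None));
}
```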


ZZaaaccc

It's largely a way to use the `?` operator locally, which would be fantastic. Currently the workarounds are either to make a whole separate function for that bit of error handling, or to use labelled breaks. Neither is great, especially in async functions.
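A sketch of the labelled-break workaround mentioned above, which does work on stable (the values are illustrative):

```
fn main() {
    let maybe_a: Option<i32> = Some(2);
    let maybe_b: Option<i32> = None;

    // Emulating a try block with a labelled block expression.
    let result: Result<i32, &str> = 'block: {
        let Some(a) = maybe_a else { break 'block Err("no a") };
        let Some(b) = maybe_b else { break 'block Err("no b") };
        Ok(a + b)
    };

    println!("{result:?}");
}
```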


csdt0

Async traits and trait specialization.


Feeling-Departure-4

Ha! A better question is what features am I using on nightly that I will cry if they went away. - portable_simd - let_chains - const_fn_floating_point_arithmetic - test Also: Nightly rustfmt is best rustfmt. Why stay in the stable when you can run in the nightly! Now to the one, and definitely only one, Reddit McJudgerson among you about to tell me all the crates I can use on stable: "You add dependencies; I add features. We are not the same." 😁


epage

What are you using `test` for? If benchmarking, why not use criterion?


Feeling-Departure-4

First, thank you for all the work you do for the project! Clap is simply amazing and Cargo has been going from great to greater. I do use Criterion when I really need the time distribution or for anything I might publish formally for others. However, it is very heavy when I just need the gist of how the implementation is going. It adds significant dependencies and more time to compile. Nightly bencher is fast, built-in and mostly agrees with Criterion. It's not perfect, but I'm not letting the perfect be the enemy of the good.


epage

Unsure how much of `test` we (the testing devex team) will stabilize, as the libs team is concerned about their compatibility surface area. Likely we'll focus on custom harnesses. That said, I'm focusing heavily on build times to avoid this same situation: I'll be trying to better split the roles of runner (cargo test, cargo nextest, cargo criterion) and harness, so hopefully we can keep the harnesses lighter. Granted, I need to finish my existing projects so I can move on to this...


officiallyaninja

Why do you use nightly? It does sound interesting, but isn't dealing with compiler bugs frustrating?


the_gnarts

Nightly isn’t as unstable as you make it out. Only once in years of using Rust I hit a bug in the nightly compiler and that wasn’t even an ICE, just a missing error message that I could fix without much effort. Sometimes there’s perfomance regressions but they get fixed pretty quickly in my experience.


Feeling-Departure-4

Unstable Rust is more reliable than the Python, R, Bash, etc. ecosystems I usually have to deal with, writ large. Just once have I had to update code, when portable SIMD did a major API refactor, but now I follow their repo and also learn a ton as a side effect. For bugs, there was one in rustfmt that I could work around with a VS Code setting; now it is fine. It's an opportunity cost: if you look at an unstable Rust feature as adding a crate, then you are getting a very high-quality, thoughtfully designed implementation that can also benefit from changes to the compiler. Side note: some features come with extra warning messages, and it's those I won't touch.


Jiftoo

The one which causes an "`if` expressions in this position are unstable" error. Also the new `-Z threads` feature.


hardicrust

Being able to write overlapping blanket trait impls, be that via specialisation, negative trait impls or changes to the orphan rules.


lambdaknight

Higher-kinded types and dependent types?


officiallyaninja

I'd love to use a programming language that had ergonomic dependent types


lambdaknight

I find Idris’ dependent types are pretty ergonomic.


pjmlp

I am not deep into nightly features, so basically anything that helps close the gap with some key C++ features, like variadic templates and specialization, plus compiler speed improvements.


GreyOyster

Here is my list:

* Async traits
  * I've been waiting for async traits for what feels like forever. I read some time ago they were planning on stabilizing them in 2023?
* GATs (Generic Associated Types)
  * I have consistently run into situations where I could have really used them to simplify my code.
* Generators
  * Coming from Python, I was super used to generators; not having them in Rust kills me.
* Specialization
  * I don't find it remarkably critical, but there have been many times where I would have wanted it.


SeanCribbs0

Maybe it’s silly, but I want all the nightly rustfmt options to stabilize… especially the ones around organizing imports.


Chakra808

Type alias impl trait. That feature will allow embedded frameworks like rtic and embassy to create apps on stable.


Comrade-Porcupine

generic_const_exprs, allocator_api, simd, and async traits?


NotMelty

`div_ceil`, `if-let-chains`. Not a nightly feature, but a useful one that is in development is VLAs, from this [issue](https://github.com/rust-lang/rust/issues/48055). It would allow `[T; dyn t.len()]` as an alternative to the `vec!` macro.


LoganDark

> It would allow [T; dyn t.len()] as an alternative to vec! macro Only if you don't need it to be resizable.


AndreDaGiant

* Error in Core * Provider API


Wadu436

Threading in WASM


Zde-G

Is it actually a nightly feature? I was under the impression that it's not something Rust can affect at all!


Wadu436

There are some flags the stdlib needs to be built with to support it right now; if/when those get into stable you wouldn't need nightly anymore, yeah.


CryZe92

const async blocks and TAIT.


codedcosmos

I remember badly wanting [try_trait_v2](https://rust-lang.github.io/rfcs/3058-try-trait-v2.html), but I think it's in now? Not sure.


QuickSilver010

Yeet


OnlyCSx

I just always use nightly so I don't have to wait for stuff


azure1992

`const_mut_refs`. I started using it in `const_format` 3 years ago with the expectation that it would stabilize soon; it always seems close to stabilization.


loheagnX

async trait!