
Comments (64)

  • kelnos
    This is one of the (several?) things that make me very worried about Rust long-term. I love the language, and reach for it even when it sometimes isn't the most appropriate thing. But reading some of the made-up syntax in the "Removing Coherence" section makes my head hurt.

    When I used to write Scala, I accepted the fact that I don't have a background in type/set/etc. theory, that there were some facets of the language I'd probably never understand, and that there was some code others had written that I'd probably never understand.

    With a language like Rust, I feel like we're getting there. Certain GAT syntaxes sometimes take me some time to wrap my head around when I encounter them. Rust feels like it shouldn't be a language where you need serious credentials to understand all its features and syntax.

    On the other end we have Go, which was explicitly designed to be easy to learn (and which, unrelatedly, I don't like for quite a few reasons). But I was hoping we could have a middle ground here, and that Rust could be a fully graspable systems-level language.

    Then again, for more comparison, I haven't used C++ since before they added lambdas. I wonder if C++ has some hairy concepts and syntax today on par with Rust's more difficult parts.
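For readers who haven't bumped into GATs, a minimal sketch of the syntax in question: a "lending" iterator whose items borrow from the iterator itself. `LendingIterator` and `Counter` are illustrative names, not a standard API.

```rust
// A generic associated type (GAT): the associated type itself takes a
// lifetime parameter, letting `next` hand out items that borrow `self`.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}

struct Counter {
    n: u32,
}

impl LendingIterator for Counter {
    // Each item is a reference into the iterator's own state.
    type Item<'a>
        = &'a u32
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        self.n += 1;
        Some(&self.n)
    }
}

fn main() {
    let mut c = Counter { n: 0 };
    assert_eq!(*c.next().unwrap(), 1);
    assert_eq!(*c.next().unwrap(), 2);
    println!("ok");
}
```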
  • amluto
    I'm not convinced the problem is actually a problem. Suppose someone writes a type PairOfNumbers with a couple of fields. The author did not define a serialization.

    You use it in another type and want to serialize it as: { "a": 1, "b": 2 }

    I use it and want to serialize it as: [ 1, 2 ]

    What we're doing is fine. You should get your serialization and I should get mine. But if either of us declares, process-wide, that one of us has determined the One True Serialization of PairOfNumbers, I think we are wrong.

    Sure, maybe current Rust and current serde make it awkward to declare non-global serializers, but that doesn't mean that coherence is a mistake.
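The "your serialization and mine" idea can be sketched without serde, using two local wrapper views over the same data. All names here (`PairOfNumbers`, `AsObject`, `AsArray`) are invented for illustration; the point is only that each call site picks its own representation without any process-wide claim.

```rust
use std::fmt;

struct PairOfNumbers {
    a: i32,
    b: i32,
}

// Your view: render the pair as a JSON-ish object.
struct AsObject<'p>(&'p PairOfNumbers);
impl fmt::Display for AsObject<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{{ \"a\": {}, \"b\": {} }}", self.0.a, self.0.b)
    }
}

// My view: render the same pair as an array.
struct AsArray<'p>(&'p PairOfNumbers);
impl fmt::Display for AsArray<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[ {}, {} ]", self.0.a, self.0.b)
    }
}

fn main() {
    let p = PairOfNumbers { a: 1, b: 2 };
    // Two serializations of the same value, neither one "the" serialization.
    assert_eq!(AsObject(&p).to_string(), "{ \"a\": 1, \"b\": 2 }");
    assert_eq!(AsArray(&p).to_string(), "[ 1, 2 ]");
    println!("{} / {}", AsObject(&p), AsArray(&p));
}
```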
  • ekidd
    There's a well-known (and frequently encouraged) workaround for the orphan rule: create a wrapper type.

    Let's say you have one library with:

        pub struct TypeWithSomeSerialization { /* public fields here */ }

    And you want to define a custom serialization. In this case, you can write:

        pub struct TypeWithDifferentSerialization(TypeWithSomeSerialization);

    Then you just implement Serialize and Deserialize for TypeWithDifferentSerialization.

    This covers most occasional cases where you need to work around the orphan rule. And semantically, it's pretty reasonable: if a type behaves differently, then it really isn't the same type.

    The alternative is a situation where library A defines a data type, library B defines an interface, and library C implements the interface from B for the type from A. Very few languages actually allow this, because you run into the problem where library D tries to do the same thing library C did, but does it differently. There are workarounds, but they add complexity and confusion, which may not be worth it.
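A concrete sketch of the wrapper-type (newtype) workaround using only the standard library: the orphan rule rejects `impl fmt::Display for Vec<i32>` directly, since both the trait and the type are foreign, but a local newtype makes the impl legal. `CsvList` is an invented name for illustration.

```rust
use std::fmt;

// `Display` and `Vec<i32>` are both defined outside this crate, so
// `impl fmt::Display for Vec<i32>` would violate the orphan rule.
// Wrapping the foreign type in a local newtype makes the impl local:
struct CsvList(Vec<i32>);

impl fmt::Display for CsvList {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let parts: Vec<String> = self.0.iter().map(|n| n.to_string()).collect();
        write!(f, "{}", parts.join(","))
    }
}

fn main() {
    let list = CsvList(vec![1, 2, 3]);
    assert_eq!(list.to_string(), "1,2,3");
    println!("{}", list);
}
```

The same shape works for Serialize/Deserialize; serde even documents this pattern for foreign types.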
  • smj-edison
    I feel like encapsulation and composition are in strong tension, and this is one place where it boils over.

    I've written a decent bit of Rust, and am currently messing around with Zig, so the comparison is pretty fresh in my mind. In Rust, you can have private fields; in Zig, all fields are public. The consequences are pretty well shown by how the two languages print structs: in Rust, you derive Debug, which is a macro that implements the Debug trait at the definition site. In Zig, the printing function uses reflection to enumerate the provided struct's fields and builds a print string from that. So Rust has the display logic at the definition site, while Zig has it at the call site.

    It's similar with hash maps: in Rust you derive/implement the Hash and PartialEq traits; in Zig you provide the hash and eq functions at the call site.

    Each approach has pretty stark downsides. Zig: since everything is public, you can't guarantee that your invariants hold; anyone can mess around with your internals. Rust: once a field is private (which is the convention), nobody else can mess with the internals. This means outside modules can't access internal state, so if the API is bad, you're pretty screwed.

    Honestly, I'm not sure there is a way to resolve this tension.

    EDIT: one more thought: the Zig vs. Rust split also shows up in how object destruction is handled. In Rust you implement the Drop trait, so each object can have only one way to be destroyed. In Zig you use defer/errdefer, so you can choose which destructor runs, but this also means you can mess up destruction in subtle ways.
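The Drop point can be sketched in a few lines: the destructor is attached to the type once, and runs automatically at scope exit. `Guard` and its `closed` flag are made-up names for illustration.

```rust
use std::cell::Cell;
use std::rc::Rc;

// The type itself decides how it is destroyed: there is exactly one Drop impl.
struct Guard {
    closed: Rc<Cell<bool>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Record that cleanup ran (a stand-in for closing a file, socket, etc.).
        self.closed.set(true);
    }
}

fn main() {
    let closed = Rc::new(Cell::new(false));
    {
        let _g = Guard { closed: Rc::clone(&closed) };
        assert!(!closed.get()); // still alive inside the scope
    } // `_g` goes out of scope here; Drop::drop runs exactly once, automatically
    assert!(closed.get());
    println!("guard cleaned up");
}
```

In Zig, by contrast, the caller writes `defer someCleanup(x)` and could pick a different cleanup function (or forget one entirely); that is the flexibility/safety trade-off the comment describes.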
  • hdevalence
    I don’t think Rust needs this; Rust has done great for the last decade with the coherence rules it has. I am glad to not have to worry about this, and to not have to worry about any of the downstream problems (like linker errors) that coherence structurally eliminates.
  • encody
    "Note that nonbinary crates still obey the orphan rules."I find it slightly humorous that this sentence contains three words which would be understood completely differently by the majority of the English-speaking population.
  • bitbasher
    I used Rust for ~14 months and released one profitable SaaS product built entirely in Rust (actix-web, sqlx, askama).

    I won't be using Rust moving forward. I do like the language, but it's complicated (hard to hold in your head). I feel useless without the LSP, and I don't like how taxing the compiler and LSP are on my system. It feels really wasteful to burn CPU and spin up fans every time I save a file, and I find it hard to justify using 30+ GB of memory to run an LSP and compiler. I know those are tooling complaints and not really the fault of the language, but they go hand in hand. I've tried a ctags-based workflow using vim's built-in compiler/makeprg, but it's less than ideal.

    I also dislike the crates.io ecosystem. I hate how crates.io requires a GitHub account to publish anything. We are already centralized around GitHub and Microsoft; why give them more power? There's an open issue on crates.io to support email-based signups, but it has been open for a decade.
  • nixpulvis
    I don't think explicit naming of impls is wise. They will regularly be named TraitImpl or similar and add no real value. If you want to distinguish traits, perhaps force them to be within separate modules and use a mod_a::mod_b::<Trait for Type> syntax.

    > An interesting outcome of removing coherence and having trait bound parameters is that there becomes a meaningful difference between having a trait bound on an impl or on a struct:

    This seems unfortunate to me.
  • dathinab
    This isn't a new discussion; it was around in the early Rust days too. And IMHO, coherence and the orphan rules have contributed majorly to the quality of the ecosystem.
  • egorelik
    Similar to, but not exactly the same as, named impls: I'd really like to see a language handle this by separating implementing a trait from making a particular existing implementation the implicit default. Orphan rules can apply to the latter, but can be overridden in a local scope by any choice of implementation.

    This is largely based on a paper I read a long time ago on how one might build a typeclass/trait system on top of an ML-style module system. But I suspect such a setup can be beneficial even without the full module system.
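One way to approximate "the implementation is a value, chosen at the call site" in today's Rust is to pass the dictionary explicitly, ML-module style, instead of relying on a single coherent impl. `Ord2` and `max_by` are invented names for this sketch, not a proposal for real syntax.

```rust
use std::cmp::Ordering;

// An explicit "trait dictionary": an ordinary value holding the operation.
// Nothing is global; every call site names which implementation it wants.
struct Ord2<T> {
    cmp: fn(&T, &T) -> Ordering,
}

fn max_by<T: Copy>(xs: &[T], ord: &Ord2<T>) -> Option<T> {
    xs.iter()
        .copied()
        .reduce(|a, b| if (ord.cmp)(&a, &b) == Ordering::Less { b } else { a })
}

fn main() {
    // Two competing "impls" of ordering for i32, coexisting peacefully.
    let ascending = Ord2 { cmp: |a: &i32, b: &i32| a.cmp(b) };
    let by_abs = Ord2 { cmp: |a: &i32, b: &i32| a.abs().cmp(&b.abs()) };

    let xs = [3, -7, 5];
    assert_eq!(max_by(&xs, &ascending), Some(5));
    assert_eq!(max_by(&xs, &by_abs), Some(-7));
    println!("ok");
}
```

The proposal in the comment would add the missing half: a way to also bless one such value as the implicit default within a scope, which this sketch cannot express.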
  • faresahmed
    Take a look at https://contextgeneric.dev, it's as close as one can get to solving this issue without modifying rustc.
  • ozgrakkurt
    It is fundamentally difficult to have an "ecosystem". I would much rather see a bunch of libraries that implement everything for a given use case, like web dev, embedded, etc. Unfortunately this is hard to do in Rust, because it is hard to implement the low-level primitives.

    The language's goal should be to make building things easier, IMO. It should be simple to build a serde or a tokio. From what I have seen in Rust, people tend to over-engineer a single library to the absolute limit instead of just building a bunch of libraries and moving on.

    As an example, if it is easy to build a btreemap, then you don't need a bunch of traits from a bunch of different libraries pre-implemented on it. You can just copy it, adapt it a bit, and move on. Then you can have a complete thing that gives you everything you need to write a web server, and it just works.
  • Animats
    Note the use case: someone wants the ability to replace a base-level crate such as serde.

    When something near the bottom needs work, should there be a process for fixing it, which is a people problem? Or should there be a mechanism for bypassing it, which is a technical solution to a people problem? This is one of the curses of open source. The first approach means there will be confrontations which must be resolved. The second means a proliferation of very similar packages.

    This is part of the life cycle of an open source language. Early on, you don't have enough packages to get anything done, and you're grateful that someone took the time to code something. Then it becomes clear that the early packages lacked something, and additional packages appear. Over time, you're drowning in cruft. In a previous posting, I mentioned ten years of getting a single standard ISO 8601 date parser adopted, instead of six packages with different bugs. Someone else went through the same exercise with JavaScript.

    Go tends to take the first approach, while Python takes the second. One of Go's strengths is that most of the core packages are maintained and used internally by Google, so you know they've been well-exercised.

    Between GitHub and AI, it's all too easy to create minor variants of packages. Plus, we now have package supply chain attacks. Curation has thus become more important. At this point in history, it's probably good to push towards the first approach.
  • mastax
    Language changes could help, for sure. There's a library implementation we can use right now, though: https://facet.rs/, which is basically a derive macro for reflection. Yes, it's one (more) trait to derive on all your types, but then users can use it to do reflection or pretty printing or diffing or whatever they want.
  • mbo
    I never understood why Rust couldn't figure this shit out. Scala did.

    > If a crate doesn't implement serde's traits for its types then those types can't be used with serde as downstream crates cannot implement serde's traits for another crate's types.

    You are allowed to do this in Scala.

    > Worse yet, if someone publishes an alternative to serde (say, nextserde) then all crates which have added support for serde also need to add support for nextserde. Adding support for every new serialization library in existence is unrealistic and a lot of work for crate authors.

    You can easily auto-derive a new typeclass instance. With Scala 3, that would be:

        trait Hash[A]:
          extension (a: A) def hash: Int

        trait PrettyPrint[A]:
          extension (a: A) def pretty: String

        // If you have Hash for A, you automatically get PrettyPrint for A
        given autoDerive[A](using h: Hash[A]): PrettyPrint[A] with
          extension (a: A) def pretty: String = s"<#${a.hash.toHexString}>"

    > Here we have two overlapping trait impls which specify different values for the associated type Assoc.

        trait Trait[A]:
          type Assoc

        object A:
          given instance: Trait[Unit] with
            type Assoc = Long
          def makeAssoc: instance.Assoc = 0L

        object B:
          given instance: Trait[Unit] with
            type Assoc = String
          def dropAssoc(a: instance.Assoc): Unit =
            val s: String = a
            println(s.length)

        @main def entry(): Unit =
          B.dropAssoc(A.makeAssoc)
        // Found:    Playground.A.instance.Assoc
        // Required: Playground.B.instance².Assoc²

    Scala catches this too.
  • nmilo
    I will never stop hating on the orphan rule; it's a perfect summary of what's behind a lot of Rust decisions. Purism and perfectionism at the cost of making a useful language: there's no better way to torpedo your ecosystem and make adding dependencies really annoying for no reason. Not even a --dangerously-disable-the-orphan-rule escape hatch, just no concessions here.
  • grougnax
    If you think Rust has problems, it's because you haven't understood Rust well.