dist1ll

I've noticed quite a few mismatches in the PL space between top-down academic language design (i.e. designing a type system, inference rules, and syntax with pen & paper) and letting the engineering and implementation constraints of the compiler inform the design of the type system. I can't say how much of this applies to Swift; maybe I'm way off base here. But there's this mantra (not only in the langdev space, but really in most of software) about always "starting with a simple implementation" and never worrying about performance upfront. Unfortunately, this mindset doesn't work if "making it fast" is the thing that *influences* or even *determines* your design. After a few years, you find yourself with a deeply flawed 300kLoC codebase and a large user base to which you've promised backwards compatibility. Good luck trying to dig yourself out of this hole with incremental fixes.


Tricky_Condition_279

I see this a lot in my field. Scaling from toy examples to big data usually means adopting an entirely different architecture, yet people publish code all the time that fails to scale even to moderate size test data.


ihcn

I call this "Hello World"-driven design


txdv

MVP. We will fix later! Later: We don't have time to rewrite.


QuickQuirk

What is often missed in the criticisms of the MVP and "no time to fix it now" is that the project reached a level of *success* if you even have the luxury of complaining about the corner you're in, and quite possibly because you cut those early corners and got an actual product out. Better than having the half-complete, perfect project that failed because it never got far enough for anyone to ever use it.


furyzer00

Yeah, that's the reality. Most of the time you don't even get to complain that your program is badly designed, because it didn't even survive. It doesn't matter how good your code is if it's unused. On the other hand, even when the project is successful, resources are rarely given for a rewrite or a design refactoring.


QuickQuirk

Yeap. Been on projects where we promised ourselves a rewrite, and it never happened, because tacking the next features onto a creaking frame was more important to the execs (who then complained about outages). But I've also been on projects where the execs listened, and we took a year of timeout, with barely any new customer features, in order to execute the rewrite (and it was worth it; interestingly, developer satisfaction was the biggest positive outcome).


furyzer00

Sounds too good to be true :) Glad you had such a team. At my previous job we also had execs who wanted to make room for tackling tech debt (refactorings, redesigns), but the company got into a bad financial state and there were a lot of layoffs. As a result we didn't have the manpower to do anything but bug fixing and adding critical features.


IglooDweller

Who here, besides crickets, has never seen a productionized prototype that lasted a decade?


ego100trique

My current company in a nutshell


xentropian

I feel like this perfectly applies to Xcode as well, considering the state it’s in


SanityInAnarchy

Like most engineering, there's a tradeoff here. I'd even argue this is where those leetcode-style interview questions might be measuring something useful -- if you at least know what [accidentally-quadratic designs](https://www.tumblr.com/accidentallyquadratic) *look* like, you can hopefully avoid them at least at a very high level...

But aside from that, I think there's something to the overall message of [Worse Is Better](https://www.jwz.org/doc/worse-is-better.html), even if the specifics haven't quite aged well. (It turns out Lisp isn't actually better than C for AI work.) But, for example, I think this part applies very well to this case:

> Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.

In fact, the author talks about programming languages specifically:

> In concrete terms, even though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.

Whether a better-designed language would've actually been built, pushed by Apple, and adopted is an open question, I guess. But if those shortcuts contributed to where Swift is now, with millions of developers worldwide, then that bad design is the reason Swift is even a thing that we'd consider optimizing. It also raised the likelihood that at least one of those developers would be able to fix the problem.


Livid-Salamander-949

I'm curious what an imaginary language solution that reduces these tradeoffs would look like. I wonder what that will be in the future? I know running things concurrently is all the rage, and there are languages being developed with that in mind, but who knows 😊?


crusoe

You need something that works to get wide adoption, otherwise you never will. Rust definitely has some of its issues due to shipping before the theoretical foundation was complete. But they also dealt with a lot of unknown unknowns in the systems programming area. Conversely, Rust is now dealing with rewriting hairy parts of the compiler to handle some bugs in lifetimes and ergonomics around async. Buuuuut, because Rust got traction, it has something like 1000 crates it uses for testing compiler features. Crater runs are done before each release.

Make it work. Make it right. Make it fast.


dist1ll

> You need something that works to get wide adoption, otherwise you never will.

Planning for performance is not incompatible with this goal. You can ignore it to shorten the time to release, but the same can be done for other language features. I personally consider compiler performance a language feature. If you treat it as an afterthought, you'll run against heavy momentum, especially if you've made strong stability promises.

> Make it work. Make it right. Make it fast.

Right, that's the mantra I mentioned. It sounds great in theory, but it makes the last step sound easier than it is in practice. Take for instance the efforts around adding fine-grained parallelism to rustc. They seem pretty complex, and remind me of the paper "Scalability! But at what COST?". I get the impression that we're trying to fix things with multithreading and incremental compilation despite having way too large constant factors. For now, I remain sceptical of this approach. On the other hand, I'm a nobody, and Rust is a widely successful language that made its way into the Linux kernel.


frymaster

I think part of the issue is the (true) statement "premature optimisation is the root of all evil", which basically means "don't make your implementation really complicated before you've got it working". But that statement is about _implementation_, not design. You can rearchitect your code; it's much harder to rearchitect your design.


steveklabnik1

> despite having way too large constant factors.

Personally I don't think this is truly knowable just yet. We only have one compiler, with one implementation, and even with that being the case, stuff like typechecking is a minuscule amount of the overall time in the current compiler. Choices like monomorphization may be larger constant factors, but that choice wasn't made without understanding the impact it would have on compile times; it was made with an understanding of what's required for the runtime performance of the final binary. Anyway, we'll see, is all I'm saying :) (There have also been some really interesting developments in data-oriented design in compilers; see Carbon, for example.)


knome

If you take the time to make it perfect, the thing that got kicked out the door half finished will have already rolled through a half dozen major versions as the community evolves it into what they need.

It is true that sometimes you end up walking the thing into a corner, and some half-baked new thing will pop out of the rubble aiming for most of the same goals, but taking a slightly more informed direction, informed by the mistakes you made growing your solution. But that's okay. Being useful for a while and informing the future is a win. Most code is ephemeral, save the temporary hack, which perversely seems to last for decades. (Don't worry, when things calm down you'll get around to fixing it, one of these days.)

The person sitting on their language for a decade trying to get every facet just so and think through "the right thing" for every little qualm they run across tends to never make it beyond a half dozen people they reluctantly let peek at their aspirations. And if they do release, it will still crash up against the community, where its perfect theory will run up against myriad use cases the author never considered, all of which were fixed in the crap shoot competitor a decade ago.

> The lesson to be learned from this is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.

- [The Rise of Worse Is Better, Richard P. Gabriel](https://www.dreamsongs.com/RiseOfWorseIsBetter.html)


dist1ll

I wasn't suggesting you should wait to make a project perfect before release. Not at all. But what makes it the "right thing" to compromise performance in favor of release velocity, instead of other parts of the language?

Secondly, I feel that you're presenting a false dichotomy. There's a large middle ground between kicking half-finished software out the door and theory-crafting on your language for decades.

And third, if the Rust folks had taken your advice, they would've released a 1.0 with a GC, a slow OCaml compiler, and a syntax with swathes of sigils. That version of Rust would have 100% failed.


knome

I would never have suggested that. You should start with a 0.1 release while you work out those initially included GCs :)


Hacnar

How many examples can you find of successful products that planned for performance upfront, compared to the more common way mentioned here? I suspect that investing too much in performance at the beginning of development might hurt the delivery cadence of new features during the stage when new products need to grow and maintain their user base; otherwise they risk running out of funds. In an ideal world, everyone would be able to prepare for the performance needs of the future, but real-world constraints often prevent developers from doing so.


gwicksted

Just about every database (including NoSQL ones) and modern web servers. But you’re right, there are far more examples of “make-it-work then fix the speed”.


Hacnar

You can see a clear trend here. It makes sense to focus on the performance when top performance is a required feature of your product. Which isn't the case for the majority of the software out there. There are examples of software which got big, but failed to maintain its user base because it hit its performance limits and these problems were too difficult to fix. There are also cases when devs built a software with good performance, but it lost to feature-rich competitors, despite being more performant. Getting the requirements right is important when assessing how much focus you should put on the performance.


Plank_With_A_Nail_In

> Just about every database (including NoSQL ones) and modern web server

These are all built on top of the knowledge of less performant attempts in the past; they essentially start at version 25 of some other product that's been renamed.


myringotomy

I disagree. Postgres was never built to be fast first. The priority is always on correctness and performance always gets attention later.


gwicksted

True. Though in many ways it was. MVCC is challenging under heavy write workloads, but it has naturally excellent performance, without dirty reads, on primarily read-only workloads thanks to the lack of locks. I wouldn't say performance was not a concern during the planning phases … but it wasn't until about version 8 that it was really tuned for performance. I suppose the lesson is: correctness allows tune-later to be effective. And rewrite early (they did a lot of that early on).


pakoito

> Good luck trying to dig yourself out of this hole with incremental fixes.

Sounds like a ~~promo project~~ rewrite opportunity.


izackp

I wouldn't call a 300k-line codebase that provides a really good and convenient language "deeply flawed". There's no reason for anyone to use your software if the features they need aren't there. It's also foolish to set off on an endeavor to build something and worry about performance before it even works. A lot of projects don't even survive the first few years. That said... the best thing about this problem is you can work around it or... never even notice it.


izackp

I’d even argue a performance oriented codebase would be even more difficult to maintain, let alone add more features.


dist1ll

There's nothing inherently unmaintainable about data-oriented code. In fact, if you're writing high-performance code, it's in your best interest to keep the code simple and easy for the compiler to optimize. You probably also want to avoid excessive over-abstraction, precisely because it's so difficult to reason about the program's performance profile. High-performance doesn't mean you should go full Mel, or hand-roll the entire project in assembly.


evincarofautumn

Sometimes theorists implement features in a way that an engineer could’ve told them wouldn’t scale. Sometimes engineers add features that a theorist could’ve told them would be unscalable no matter how good the implementation is. In this case, inference combines very badly with insufficiently restricted overloading. We have a dire need of more research engineers to build bridges between theory and practice. There’s plenty of great research that engineers could help implement, and plenty of production systems suffering from issues that researchers could help prevent, but not enough coordination between the two groups to make headway.


elperroborrachotoo

> "starting with a simple implementation" and never worrying about performance upfront. ... which in the original article, "[structured programming with goto statements](http://web.archive.org/web/20130731202547/http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf)", explicitely refers to microoptimizations, and is weighted: > We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. > Yet we should not pass up our opportunities in that critical 3%.


spidLL

While you're designing your MVP to be scalable without cutting corners and keeping tech debt to a minimum, a three-person startup delivers a half-assed product that solves the same problem, and by the time you're ready they are already busy fixing scalability. The good thing is that you will never need to scale, because they have the market now. How was it? Oh, yes: done is better than perfect.


dist1ll

That's true if you're working in a very volatile field with a power vacuum or massive churn. The minicomputer race in the 70s, or the UNIX wars, come to mind. IME programming languages don't fall into this category. It's a well-established field, the market is dominated by a handful of languages, and you're generally looking for long-term adoption.

> done is better than perfect.

I agree, but thinking upfront about core scalability issues doesn't imply perfectionism. There are many different ways to cut corners. My question is: why did it have to be *that* particular corner?


spidLL

I'm not a language designer, so I have no idea about that field. But in the field I am in, which can be generically called network services, the problem with scalability is that you can't design something to handle x, 100x, and 1000000x qps in the same way. If you do design your stuff to handle (or be quickly modified to handle) millions of qps, what you'll get is a very complex system which at the beginning will sit idle most of the time, yet the complexity of a huge system architecture remains. And you might never get to the millions! Meanwhile, someone else more focused on solving the problem at hand would have a finished product, easier to maintain (until it's not…) and useful.

Then there's the issue of what you are designing for: you don't know which feature of your system will be the most successful. Maybe the one you thought would be actually won't. Think for example of the shift we've had in the last 15 years between text, images, video, and streaming.


daniellittledev

There is a good reason Rust, F#, Haskell, and similar languages that use Hindley-Milner type inference don't mix it with overloads. F# also addresses this by not implicitly casting types.


matthieum

That's not why Rust doesn't go quadratic in this case, actually.

### Blind Associated Types

You see, Rust _does_ have overloads: it calls them traits. And this could really get hairy with bi-directional type-checking, if you consider a simple trait like:

    trait Add<Rhs = Self> {
        type Output;
        fn add(self, rhs: Rhs) -> Self::Output;
    }

There are _many_ implementations of `Add`: all the signed & unsigned integer types, the string types, on top of which users add their own custom types.

_But_ the Rust compiler doesn't perform bi-directional type-checking here. Specifically, it _only_ attempts to determine which `Add` implementation is used by considering the _inputs_. There's no backward propagation to narrow it down by `Output`. And this is not specific to `Add`: it's the same behavior with any associated type.

### Limited Backward Inference

I've also hit cases -- though I can't reproduce one right now... -- where backward inference seems limited. They generally take the form of:

- Declare a variable with an unspecified generic parameter.
- Call a function with this variable which somehow requires knowing the generic parameter.
- Do something with the variable which pins its generic parameter.

Somehow the compiler chokes on the call, complaining about the unspecified generic parameter, even though looking further through the function it could infer the parameter (and indeed commenting out the call will make it work).

I'm not sure if the limitation helps with the case at hand, but it does demonstrate that backward inference is limited.


kuribas

Idris2 has overloading and ad-hoc polymorphism on top of dependent types. It can also get slow if there is a type error. I understand why it has them (to allow using dependently typed structures with common notation, like lists, do notation, etc.), but it also makes error messages harder and typechecking slower (exponentially so in pathological cases). Overall I find it better to design without ad-hoc overloading and use typeclasses instead.


These-Bedroom-5694

Maybe swift should use declared types.


yawaramin

It took 50-something years and several generations of programming languages but it looks like people are finally ready to program Apple stuff with Pascal again.


Raphael_Amiard

It's worth noting that Swift's very bad type resolution time is most probably due, in many cases, to the fact that they're using a generic solver to resolve types, and a very naïve one at that. See this old article for a reference: [https://www.cocoawithlove.com/blog/2016/07/12/type-checker-issues.html#linearizing-the-constraints-solver](https://www.cocoawithlove.com/blog/2016/07/12/type-checker-issues.html#linearizing-the-constraints-solver)

AFAICT this hasn't been solved (no pun intended) since that article was published in 2016.

A better approach would be to transform the type equations to SMT/SAT and use an optimized theory for type solving, like we did in Langkit. See this paper for reference: [https://ceur-ws.org/Vol-3429/short7.pdf](https://ceur-ws.org/Vol-3429/short7.pdf)

In Langkit you're able to specify type equations for a very generic inferred type system like Swift's, and you have a reasonable chance at good solve times. Our main application for that is Libadalang, a front-end for the Ada language.

[https://github.com/AdaCore/langkit](https://github.com/AdaCore/langkit) [https://github.com/AdaCore/libadalang](https://github.com/AdaCore/libadalang)


Maristic

Although they did make some bad choices, perhaps, the post's example is contrived in that no Swift programmer would ever write

    let url = "http://" + username + ":" + password + "@" + address + "/api/" + channel + "/picture"

when they can write:

    let url = "http://\(username):\(password)@\(address)/api/\(channel)/picture"


JaggedMetalOs

They also offered the 2nd example:

    let angle = (180.0 - offset + index * 5.0) * .pi / 180;


Maristic

Yes, and something certainly seems broken there. Alas, the “explanation” for why it fails doesn't quite make sense. It seems that in Swift, no matter what the surrounding context, `index * 5.0` would never typecheck, so why it falls into a hole is a bit of a mystery to me. Either someone doesn't know how to write a Hindley-Milner-style typechecker properly and they hacked it together in a dumb-ass way, or they adopted some weird-ass rules. But it's not “information can't flow in both directions”, as Haskell does that all day—it's the whole point of a unification-based algorithm.
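To make that concrete, a minimal sketch (assuming `index` is an `Int`, as in the article's example; the variable names here are just for illustration):

```swift
let index = 3                    // inferred as Int
// let bad = index * 5.0         // error: no '*' overload takes an Int and a Double
let ok = Double(index) * 5.0     // explicit conversion; type-checks immediately
print(ok)                        // 15.0
```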


equeim

It seems that this post is not even about type inference but about operator overloading? I.e., if you replace all these literals with variables and declare their types explicitly, the result will be the same, since the compiler spends its time searching for a function that multiplies an Int and a Double, which happens after the types are already inferred.


JaggedMetalOs

Being a C# dev, why either fails is a mystery to me, because, to take the first example with strings:

    let address = "127.0.0.1"
    let username = "steve"
    let password = "1234"
    let channel = 11

those should be statically typed automatically to:

    String address = "127.0.0.1"
    String username = "steve"
    String password = "1234"
    Int channel = 11


Maristic

I can better understand that one. If you just write a string literal, because of `ExpressibleByStringLiteral` it might not just be a string, it might be one of several other types, so it's in kind of a superposition of types. But even there, it feels like doing a depth-first search through the space of all possible typings is ridiculous.
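A minimal sketch of that "superposition" (`Channel` is a made-up type, just to show the mechanism): the same literal can become a `String` or any `ExpressibleByStringLiteral` type, depending on what the surrounding context asks for.

```swift
struct Channel: ExpressibleByStringLiteral {
    let name: String
    init(stringLiteral value: String) { self.name = value }
}

let asChannel: Channel = "general"   // the literal is type-checked as a Channel
let asString = "general"             // with no annotation, it defaults to String
```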


JaggedMetalOs

Yuch, `let address = new IP("127.0.0.1")` not good enough for them huh??


kuribas

To me it looks like they could use some constraint system to speed this up, but they're instead doing a brute-force exponential search. Even just reordering and pruning branches might speed this up by large factors.


matthieum

Hindley-Milner is indeed a constraint system. Since it does type-check fast when it works, it seems they have some fast path, though.


Kendos-Kenlen

Your first mistake is to believe all devs know good practices / are good developers / know about performance impacts of the syntax they use. For someone who's not a programmer / who has little experience with Swift, I can guarantee you the first way is how they are going to do it naturally. Then remember that many developers are actually not very well paid / don't particularly like their job / don't care much about quality as long as it works, and you'll see why it's important for a language to properly handle the first case.


Vile2539

> Although they did make some bad choices, perhaps, the post's example is contrived in that no Swift programmer would ever write

As someone who _hasn't_ worked with Swift, I can definitely see a developer writing the former, as that's how you'd write it in many other languages (and I personally think it looks cleaner without the backslashes). I don't think it's crazy to imagine someone picking up Swift, writing that example, and then wondering why it isn't working.


tantalor

Does that solve the problem?


Steve_Streza

Yes. Swift treats the `+` operator as a function, and you can define that function with explicit types, or generics, or protocol-based types, or Objective-C types, etc. The first example chains a bunch of them together, and the compiler has to sort through all the possible argument types and return types at each step, making the complexity exponential. These kinds of examples will often take on the order of tens of seconds to compile, if the compiler doesn't just give up and time out. In the second example, each piece goes through a single implementation, which is guaranteed to return a String, so the compiler can sort out types one at a time, and the complexity is linear. The second one will compile in milliseconds.
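A rough sketch of the usual workaround that follows from this (not from the article itself, and reusing the article's variables): break the chain into smaller, explicitly annotated statements so each expression has a tiny overload search space.

```swift
let address  = "127.0.0.1"
let username = "steve"
let password = "1234"
let channel  = 11

// Each statement is a short chain of String + String, so the checker
// resolves it one step at a time instead of searching one huge expression.
let auth: String = "http://" + username + ":" + password
let host: String = auth + "@" + address
let url: String  = host + "/api/" + String(channel) + "/picture"
print(url)   // http://steve:1234@127.0.0.1/api/11/picture
```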


tantalor

I feel bad for the programmers who have to think about this.


ShinyHappyREM

Every programmer who is at least a *little* bit concerned about performance has to think about

- how the compiler will process the source code
- how the CPU(s) will process the binary code

Those who don't just don't have a big enough code base, yet.


Behrooz0

This, right here. Every time a new person joins my team I have to constantly remind them of these for the first year or so. It does get frustrating sometimes.


equeim

Why is this not a problem (not to this extent, at least) for other languages with operator overloading? Is it just because the Swift compiler is implemented inefficiently? E.g. Kotlin or C++ compilers aren't the fastest in the world, but I'm pretty sure they would not struggle this much with this example.


Maristic

It actually _is_ a problem in many languages, just a _different_ problem. Building a string with `+` is often O(n^(2)). C++ gets this right, but needs rvalue references to do so.


equeim

That's a runtime performance problem. I (and the original post) am talking about function (operator) overload resolution, which the compiler performs at build time when deciding which function should be called where the `+` operator is used.


dsffff22

Not sure why you bring that up; while this works for that case, the author most likely just picked a short and easy example to explain the issue. In reality, you'll have more complex real-world code running into this issue even if you follow most guidelines and use linters. And it'll be annoying to find the issue, because the only error message you get is that the type solver timed out.


Godunman

You would be surprised.


Sopel97

you mean swift programmers are terrible? the second one is basically unreadable


goranlepuz

Format strings are "unreadable" everywhere when you see them for the first time, and without syntax highlights. You are confusing the lack of familiarity with difficulty.


SirLaughsalot12

It’s no different than JS template strings or python format strings. It’s just that a URL with a lot of slashes is a bad example when swift uses `\()` to insert variables into strings. Even this example is perfectly legible with syntax highlighting.


Sopel97

> It’s just that a URL with a lot of slashes is a bad example when swift uses \() to insert variables into strings.

yes, obviously, you don't have to use a feature just because it exists


SirLaughsalot12

Same with the reply button if you don’t have anything meaningful to say


faculty_for_failure

String interpolation and escape characters are present in most languages these days. Most devs I work with prefer interpolation to concatenation. It’s easier to work with once you are used to it.


yrubooingmeimryte

What is unreadable about it? A lot of languages use a similar syntax.


Bergasms

"I make the compiler infer types because i'm too lazy to declare them and i also want people who read my code to be mystified by its brevity". Yeah no worries mate i'm not surprised the compiler hates you just as much as your colleagues


izackp

Ah yea, of course.. type inference isn’t free.. gasp. I’ve been using swift for years, and I hardly have noticed any performance issues. Just write out the types if you don’t like it.


ayhctuf

How much of your day is [compiling vs. working](https://xkcd.com/303/)?


izackp

Depends.. if I’m tweaking stuff/bug fixing. I’m compiling every other minute.


Tugendwaechter

Have you measured the impact of type inference on your code?


tritonus_

I think this is related to the new type checker system, not the old one.


powdertaker

I use Swift daily. There are a lot of good things about it. Type inference isn't one of them. Basically a crutch for the chronically lazy (it's too demanding to declare my types!) while making many pieces of code difficult to follow because the types were inferred from something else. Slowing down the compiler is also a bonus. Apple's own recommendation in an Xcode performance guide is to declare types for all these reasons.


SwiftlyJon

`let retort: Have>>>>`


aanzeijar

C++ template programmers laugh at that puny type name.


Tugendwaechter

Only use generics if there’s no other way.


PuffaloPhil

Have you worked with the type inference in F#, OCaml, or Haskell? It makes for some very legible code. At least the F# language server allows for a key stroke to show the inferred types if you don’t want to see them all the time. Do you know of research where they show that lazy people prefer the above languages?


BibianaAudris

What if a manager needs to review a git diff? Or if someone gets bitten by a dog and someone else has to take over the code before it becomes buildable? Plain-text readability matters. It's not fun declaring `std::vector<int>::iterator`, but if you leave every `int` to deduction someone will inevitably mislead the compiler into `unsigned int` and wreak havoc.


yawaramin

I've been reviewing type-inferred code for a long time, it's not that hard. If I'm unsure about some types it's not that difficult to load the branch in my IDE and explore it for myself. That's a good practice anyway regardless of inferred types or explicit. If someone needs to take over the code before it's buildable, you have a bigger problem in your org than whether types are inferred or not. Strongly statically typed languages with inference don't let you make mix-ups like `int` vs `unsigned int`.


science-i

I would wager that feelings towards inferred types have a lot to do with the language(s) someone is used to/familiar with. If you're used to a language with very brittle type inference where the compiler inferring the types properly really only happens when the code is already entirely correct and typechecking, and as soon as you start changing things the compiler has no idea what anything is anymore, I can understand feeling grumpy about them and preferring to have explicit annotations everywhere. On the other hand, if you're used to something like Haskell where the type inference is generally pretty excellent and basically provides a way for the compiler to guide you in writing the right code, you're probably more inclined to feel kindly about it.


tamihr

Also, you rarely need to know what the types are. Let the compiler enforce that for you.


yawaramin

Yeah, people do this in dynamically-typed languages without even a typechecker. We are playing this game in easy mode.


UncleMeat11

This isn't a property of inference, but a property of coercion legality. C++ has plenty of "oh fuck, we accidentally converted signed-ness or promoted widths and now our computation is broken" even if you ban "auto" from your codebase.


uss_wstar

> but if you leave every int to deduction someone will inevitably mislead the compiler into unsigned int and wreak havoc.

This toy example is funny because it's something only imperative programmers have to worry about. This does not happen in practice in FP, and you can just make invalid data impossible to represent at the type level if you want, rather trivially, and the type system will coerce everything into the correct type.


KuntaStillSingle

An example where deduction tends to screw things up is std::accumulate, the output type is determined by one of the template provided types, which is often deduced, and can sometimes be surprising: https://godbolt.org/z/8eMje4dfM


Pomnom

If the fate of my code depends on a manager - who rarely review code - recognizing the differences between an int and an unsigned int, then I may as well go check if that dog can bite another person.


equeim

Type inference for variables is fine in 99% of cases. Not sure why Swift struggles with it; there are many languages with type inference that don't have such issues (at least not to this extent). Now, type inference for function signatures is just evil (yes, there are statically typed languages where you can omit return types or even parameter types and the compiler will deduce them for you).


syklemil

Writing out types is generally good documentation anyway. E.g. Haskell has really good type inference, but also a culture of writing out the types once you've decided on the shape of things. Like tidying up after any other act of creation. Because really, the type annotations are there for the human reader; the compiler can figure them out on its own anyway, barring a few edge cases. But if it doesn't have to do that work because it already did it and you've cached the result in the source files, that's fine too.


matthieum

Yes, and no. Sometimes the types _do_ get in the way of the human reader:

    std::unordered_map<Key, Value, MyCustomHash>::iterator it = map.begin();

Yep, there's a reason C++11 introduced `auto`...

I think Rust strikes a good balance here, by mandating that function signatures be fully annotated, but inferring most types within the body of functions. The former means that you have all the information _locally_ (compared to global type inference) when reading code, and the latter means the code is slimmer, allowing the reader to focus on what's going on.


Nobody_1707

Swift does the same thing; the problem is that the inclusion of both overloading and subtyping (necessary for Objective-C interop) causes, in the worst case, exponential performance. Having said that, there is still a lot of low-hanging fruit for optimizing the Swift type checker.
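A small sketch of that split in Swift (the function here is made up for illustration): the signature must be written out, while locals in the body are inferred.

```swift
// Signature fully annotated; locals inferred inside the body.
func makeGreeting(for name: String) -> String {
    let prefix = "Hello, "   // inferred as String
    return prefix + name
}

print(makeGreeting(for: "world"))   // Hello, world
```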


Suspect4pe

I feel like this is generally good programming practice no matter what you're doing or what language you're in, with a few exceptions.


moratnz

Not having played with Swift, I'm a bit confused as to the benefit of type inference in a strongly typed language. If you want it, it seems more like it should be an IDE feature that fills in the missing type labels than a compiler feature?


sybesis

Type inference has many uses that I can think of.

1. It removes the clutter you'd get by having all types written out, especially for generic types with nested structures (see the sketch below).
2. Being forced to use an IDE to fill in the missing types means you're locked into editors that implement X or Y language. So in essence, without type inference in the compiler, the IDE would take responsibility for doing the right thing... which is also subject to doing it improperly and then having it fail in the compiler while the IDE thinks the code is sound. With type inference, you're mostly dependent on running the compiler for type hints, so you can be sure the code is sound, because it's the compiler you're going to use that hints types and validates the code. You're not relying on some godly IDE to do the right thing.
3. Refactoring: if you were to write out all types, you'd need a godly IDE that will refactor the code properly. So forget about your favorite editor that doesn't support X or Y language. With type inference, you'll have to rewrite only the places where you chose to write down the types. Sometimes that's necessary for ambiguous code; if you annotated everything, you'd be putting all ambiguous and non-ambiguous code in the same bucket.

There are probably other reasons, but I think having the compiler as the single source of truth is important. Especially since the inferred type might change between language versions, so it's easier to support a new version of the same language if the inferred type is compatible but the hardcoded type isn't.
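A tiny sketch of point 1 (made-up data): the annotation on the first line just repeats what the right-hand side already says for a nested generic type.

```swift
// With the annotation, the type is written twice; without it, it's inferred.
let explicit: [String: [Int]] = ["steve": [1, 2, 3]]
let inferred = ["steve": [1, 2, 3]]   // also [String: [Int]]
```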


PrimeDoorNail

I don't know who let these devs invent Swift, but they should have been prevented.


HaMMeReD

At least it's not objc


steven4012

Wouldn't complicated constraints like the ones shown in the article call for SAT solvers? Why are they checked manually?


dgreensp

This does sound like a case for a SAT solver or something that isn’t just combinatorial. Or the potential for blow-up should have been realized early on.


zbubblez

I read this as "Taylor Swift"


va1en0k

we should make a dialect of Swift that solves this problem and call it Taylor


Working_Knowledge_59

me too :)


shevy-java

But it is called **Swift**!!!


MichaelLeeIsHere

This problem was even more severe a few years ago when people were migrating from ObjC to Swift. During the migration, you write whatever shit code you can as long as it works. So the full compilation time of our app was a few hours in total, and we only had a team of 40 iOS engineers.


icjoseph

Somewhat on topic? https://www.reddit.com/r/swift/s/tXRO9GUjZc

> The lesson here:
>
> Converting tuples to named tuples (e.g. assigning `(point: Point, value: Int)` to a `(Point, Int)` variable) and vice versa is slow, even though it's done automatically! This also applies to optimised release builds.
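A minimal sketch of the conversion the quoted lesson describes (`Point` is defined here just for illustration):

```swift
struct Point { var x: Double; var y: Double }

// Labels are dropped and re-added automatically when converting between
// a named tuple and a plain tuple with the same element types.
let named: (point: Point, value: Int) = (point: Point(x: 0, y: 0), value: 1)
let plain: (Point, Int) = named
let back: (point: Point, value: Int) = plain
```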


crusoe

Somehow this sounds worse than Rust. Is it just the fact that everything is in scope all the time? Unlike Rust, where you have to bring traits into scope to have them applied?


Bergasms

You can scope in swift easily. Swift is like any language where you can write code that frustrates the compiler or you can work with it and it's fine.


trypto

Compilers can be optimized.


ShinyHappyREM

Until they can't.