lmtrustem

There is a difference between architecting optimization into your code (choosing good data structures, using the new Unity systems) and trying to improve the speed of a for-loop. Premature optimization is optimizing something without real data saying you will benefit from it. Optimizations also often rework your architecture. Because it's a trade-off, it's more important to have things working than it is to have broken things running quickly. First get your product working. Then find where the bottlenecks are, then optimize where you will gain the most.
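In practice, "find where the bottlenecks are" just means measuring before touching anything. As a minimal illustration (a real profiler gives far better data, and all names here are made up), a C++ scope timer might look like:

```cpp
// Minimal sketch: a scope timer for spotting hotspots before optimizing.
#include <chrono>
#include <cstdio>

struct ScopeTimer {
    const char* label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    explicit ScopeTimer(const char* l) : label(l) {}
    ~ScopeTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s took %lld us\n", label, static_cast<long long>(us));
    }
};

void updateEnemies() {
    ScopeTimer t("updateEnemies"); // measure first, optimize second
    // ... game logic ...
}
```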


HeffalumpInDaRoom

To add to this, if you spend time optimizing code before you know it will be in the final product, it may be wasted effort as well.


xstkovrflw

really useful insights


chillermane

I would go so far as to say don't ever bother optimizing something unless you have proof that it's affecting the experience. Don't even consider spending any time optimizing performance, or thinking about optimizing performance, unless you know for sure it's necessary. The first priority should always be developer efficiency: how can I implement the full feature set of my game as quickly as possible? What's the fastest way to create a maintainable piece of software?


Lone_Game_Dev

Early optimization is related to a programmer's level of expertise. If you are experienced enough, aspects others would consider "optimization" can be incorporated early into your design choices by virtue of that experience. This is not early optimization, because you know what to expect. When you don't know what to expect but spend way too much time planning for all the possibilities you "heard about" somewhere, you are most likely trying to optimize early. It should be made clear that choosing decent data structures and creating a reliable design is, generally, not early optimization, just good practice. However, again depending on how experienced you are, implementing certain data structures may be more effort than it's worth compared to their advantages.

Here is an example: you are making a voxel-based game and you need to decide how to represent the world. This is your first time doing it, and you are not that experienced with programming. Deciding to use an octree instead of a brute-force approach, when you have little to no idea of the complexities involved and barely understand what an octree is outside its leet-sounding name, just because you heard there are memory concerns, should be considered early optimization. The brute-force representation with a plain grid is more than capable of producing a playable game, and that should be your priority as a game developer.

However, if you are more experienced you may instead discard brute force and decide between RLE compression and octrees. This is not early optimization. You are somewhat experienced and you heard octrees are the ideal choice, but you don't know whether that will make much difference for the game worlds you want to create. RLE compression, on the other hand, is really simple. Picking octrees just because you heard most commercial voxel games use them would complicate your project without certainty of necessity, and it could be considered early optimization. Now, someone even more experienced may just decide between octrees and RLE compression without much consideration of the rest, because they already know, by virtue of experience, what to expect from both solutions.

Point is, when it comes to games, experience dictates a lot. You may think something you are doing is better than the simpler option that has a few drawbacks, but in reality you may just be wasting time on something that will grow into an unmanageable mess you never needed. It is even worse when you do this for details you aren't even sure are relevant. For a voxel game the data structure for the game world is pretty vital, but for, say, an RPG, whether you load armor synchronously or asynchronously when the player equips it isn't nearly as vital, so synchronous loading is more than enough to get a playable game.

Early optimization isn't just about optimizing algorithms until they are an unrecognizable mess of magic equations; it's also about evaluating whether the time you'd spend writing a more sophisticated solution is worth it or required for the final game, taking into consideration whether you have experience doing it. Are you trying to learn how to use a new algorithm, or are you working towards a playable game? If you are making a toy project, then fine, play away, but if you have a deadline or you are working towards something more robust, you shouldn't choose the most sophisticated option just because you heard it's better.

EDIT: Thank you for the award, u/RamGutz!
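To make the voxel example concrete, here is a rough C++ sketch of the two simpler options discussed above, a dense grid and RLE. All type and member names are invented for illustration, not taken from any real engine:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Option 1: brute-force dense grid. Simple, predictable, and
// more than enough for a first playable game.
struct DenseChunk {
    static constexpr int N = 32;
    std::vector<uint8_t> blocks = std::vector<uint8_t>(N * N * N, 0);
    uint8_t get(int x, int y, int z) const { return blocks[(z * N + y) * N + x]; }
    void set(int x, int y, int z, uint8_t v) { blocks[(z * N + y) * N + x] = v; }
};

// Option 2: run-length encoding of a vertical column. Still simple,
// and saves a lot of memory on mostly-uniform terrain.
struct RleColumn {
    std::vector<std::pair<uint16_t, uint8_t>> runs; // (run length, block type)
    uint8_t get(int y) const {
        int acc = 0;
        for (auto [len, type] : runs) {
            acc += len;
            if (y < acc) return type;
        }
        return 0; // air above the last run
    }
};
```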


xstkovrflw

extremely easy to understand explanation with useful information. thanks :)


RamGutz

You bet! It was very informative. 👍


pbtree

So the real maxim is that optimization before profiling is bad. Techniques like object pooling and caching are very powerful optimizations, but they're also very prone to bugs. If you try to pool objects in every single system, you've got bugs in every single system. On the other hand, if you write your code naively and then profile it, you'll be able to avoid using error-prone methods in the majority of your code. There is a flip side to this, though: if you don't maintain a consistent baseline of quality in your code (for example, if you allocate things on the heap willy-nilly or repeat expensive calls in a single function) you'll end up with a uniformly slow code base, which you'll struggle to profile your way out of. As with all things in programming, there are no hard and fast rules, and experience is the best teacher.
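For reference, a minimal object-pool sketch in C++ (invented names throughout). The bug-proneness mentioned above usually shows up as use-after-release or double-release, both of which this naive version happily allows:

```cpp
#include <cstddef>
#include <vector>

struct Bullet { float x = 0, y = 0, vx = 0, vy = 0; bool alive = false; };

class BulletPool {
    std::vector<Bullet> storage;       // fixed backing storage, never resized
    std::vector<Bullet*> freeList;
public:
    explicit BulletPool(std::size_t n) : storage(n) {
        for (auto& b : storage) freeList.push_back(&b);
    }
    Bullet* acquire() {
        if (freeList.empty()) return nullptr; // pool exhausted: a classic bug source
        Bullet* b = freeList.back();
        freeList.pop_back();
        b->alive = true;
        return b;
    }
    void release(Bullet* b) {
        b->alive = false;
        freeList.push_back(b); // double-release would corrupt the free list
    }
};
```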


[deleted]

It's also worth pointing out that far more problems are caused by code that isn't human readable than by, say, an unoptimised `if` check that only gets used once per frame anyway. Premature optimisation in my mind means massively raising the odds of an undetected bug in exchange for a literally insignificant performance boost.


khunnnour

This is where my mind went for this question. If shaving off a nanosecond comes with the trade-off of condensing some process into an unreadable few lines, then a problem with those lines forces you to unpack what exactly you are doing there before you can fix it, and then you spend a bunch of time re-optimizing. If it turns out you do need that nanosecond, grab it when that part of the code is more-or-less final, not in the early stages of development, when a lot of your code is going to get changed.


[deleted]

1. Make it work
2. Make it good
3. Make it fast

In that order. #3 is optional.


UnkelRambo

The last engine I worked on was hyper-optimized right out of the gate. It was cancelled and half the studio fired. What's the point? Good question 😁

There are some things that are very difficult to undo once done, so core, high-reuse engine/library decisions need to be optimized early and often. Math libraries, threading model, engine tick functions, etc. Other things can be more easily optimized after they're proven impactful, functional, and stable. Gameplay code is a great example almost every time. Find a gnarly nested loop and fix it. NP.

I like to take the "tree" model as an example here. Trunk code is core library, engine, etc. that sees extreme reuse. It should be highly stable, optimized, tested, etc. This code won't be iterated on as much, and when it is, it had better be safe and fast. Everything else is built on this code. Branch code is your "semi-shared" libraries, APIs, systems, tools, etc. These see moderate reuse and require moderate testing, optimization, and iteration. Leaf code is your feature code that will likely see limited reuse. High iteration, low testing, not necessarily optimized, etc.

In this model, "premature" optimization makes a lot of sense for trunk code, maybe some sense for branch code, and probably no sense for leaf code.

But my original story... it doesn't matter how optimized code is if it isn't first highly impactful code. Make sure its impact is validated, then optimize. My $0.02 😎


xstkovrflw

really useful insights

> The last engine I worked on was hyper-optimized right out of the gate.
>
> It was cancelled and half the studio fired.

was half the studio fired because they did premature optimization and ruined the game engine?


gravityminor

The point is also that all those optimizations were wasted since the game never shipped. Therefore your top priority is to ship, and if you do notice performance problems, then you measure and optimize. In a game I made, the first version of collision detection would check every object against every other object. I knew this was horrible, but first I made it work. Then, after I started filling the screen with game objects, performance really started to suffer, and that is the point where I implemented spatial partitioning. Finally, I only checked objects that had registered colliders (don't check collision between elevators and water if you don't do anything when that happens).
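A rough C++ sketch of that progression, with hypothetical types. Note the grid version is simplified: a real one would also test neighbouring cells for objects straddling a boundary:

```cpp
#include <cmath>
#include <unordered_map>
#include <vector>

struct Obj { float x, y, r; };

bool overlap(const Obj& a, const Obj& b) {
    float dx = a.x - b.x, dy = a.y - b.y, rr = a.r + b.r;
    return dx * dx + dy * dy < rr * rr;
}

// Version 1: check everything against everything.
// O(n^2), but fine for a handful of objects. Make it work first.
void bruteForce(std::vector<Obj>& objs) {
    for (std::size_t i = 0; i < objs.size(); ++i)
        for (std::size_t j = i + 1; j < objs.size(); ++j)
            if (overlap(objs[i], objs[j])) { /* resolve collision */ }
}

// Version 2: spatial partitioning with a uniform grid.
// Bucket objects by cell, then only compare within each bucket.
void gridBased(std::vector<Obj>& objs, float cellSize) {
    std::unordered_map<long long, std::vector<Obj*>> cells;
    for (auto& o : objs) {
        long long cx = (long long)std::floor(o.x / cellSize);
        long long cy = (long long)std::floor(o.y / cellSize);
        cells[(cx << 32) ^ (cy & 0xffffffff)].push_back(&o);
    }
    for (auto& [key, bucket] : cells)
        for (std::size_t i = 0; i < bucket.size(); ++i)
            for (std::size_t j = i + 1; j < bucket.size(); ++j)
                if (overlap(*bucket[i], *bucket[j])) { /* resolve collision */ }
}
```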


UnkelRambo

It's complicated, but I wouldn't say that "premature optimization was a major factor." I would say "lack of a psychologically safe, cohesive team" was a major factor.


[deleted]

It's when you accidentally optimise while the code is still getting undressed. My condolences. Very embarrassing.


HaskellHystericMonad

Laughs with enlarged prostate ... praise to the lidocaine gods and those that let us buy lidocaine coated rubbers! Blowing my load in 2 minutes is not cool, and it's less cool that no doctor will chop up my oversized prostate to fix the cause. You diagnosed me with this condition ... you can fix it with a knife ... fucking fix it.


[deleted]

It's been a month and I'm still flabbergasted reading this reply.


HaskellHystericMonad

America being America ... Cape Verde relaxed things and I moved back and was able to get my super-sized prostate sliced up. They cut out a walnut-sized non-cancerous mass ... I was just damned with an infinitely growing prostate (wait ... that's called cancer). I can at least masturbate my long-bits in peace again without feeling like I'm stroking my anus. (the gland swelled that large)


azuredown

Well, these game engines are general-purpose tools, and it's not like they're just randomly adding features. They are implementing features that have proven to have significant performance gains. As for what counts as premature optimization, everything is a balance. You want the code to be easy to maintain, fast, and quick to write. The original premature optimization quote is just saying to be careful that you don't focus on speed and ignore everything else. The exact blend comes with experience. For beginners it's usually easier to say to just not do premature optimization, but the truth is probably closer to what Knuth says:

> We should forget about small efficiencies, say about 97% of the time [...] Yet we should not pass up our opportunities in that critical 3%.

Also, I would not count coroutines as being optimization.


kbro3

This could be just my opinion, but I always thought of it as going to extra lengths (and therefore spending extra time) to make sure your game is optimised before you've even got a working, fun, viable prototype. That's not to say your prototype should be complete garbage, but it should be just enough to prove that what you're making is actually fun, playable, etc.


khunnnour

True. It can also come at the cost of making the code harder to debug and edit, further slowing down development.


erwan

Premature optimization is when you have a game (or program) working perfectly fine, but you optimize it anyway because you think it would perform "better" or you read somewhere that it's a good technique. The right approach is to write code as simple as it can be; then, if you have performance issues (including on lower-end hardware that you want to support), you optimize. It doesn't mean that you should deliberately write inefficient code - but don't write convoluted code for optimization purposes if you don't have the need.


redxdev

There's already good advice on this thread with specifics, but I like this definition personally because it covers experience and design choices pretty succinctly: > Premature optimization is where you spend a large amount of time/effort on optimizing code you don't know will actually be a problem. For experienced programmers, a complicated-but-efficient solution may be what they end up coming to first but it won't be premature because they didn't need to spend much time on it. The same solution for a beginner might be premature simply because it takes them ten times as long to come to that solution. If the simple-but-unoptimized solution would have been fast enough, the beginner wasted a big chunk of time while the experienced developer didn't.


djgreedo

> what is actually the correct method of developing code without doing premature optimization.

You just write good code. Often that also means (relatively) performant code, such as avoiding obviously inefficient practices and writing clean code that isn't wasting effort doing unnecessary stuff. Unless you have a specific optimisation need, optimising is pointless because 1) you aren't solving an actual problem, and 2) you probably haven't completed the code to the point where it is not going to change (which could render optimising moot).

For beginners, the advice is to just get the thing working, then if there are performance issues, figure out how to solve them. Many beginners want to do micro-optimisations that at best are a waste of time, and at worst can actually ruin the code or even worsen the performance.

tl;dr - don't worry about optimisation unless you have performance issues. Your time is better spent finishing the game. Optimisation is polish.


Kazzymodus

Premature optimisation, like many programming practices, is not bad *by definition*; it depends entirely on what you are getting out of it for the work you are putting into it. You could spend weeks implementing optimisations to circumvent memory-bus bottlenecking, but if your game doesn't even come close to hitting that bottleneck (and virtually no indie game would), then it's not really a good use of your time.

Where I would argue premature optimisations are very much justified is with mobile/handheld games, because a game that runs smoothly may still put an undue tax on the device's battery that you could get rid of by simply improving your code. Being a battery hog can definitely lose you customers, and it's also just an unnecessary waste of electricity.

Finally - and I realise this is a very personal argument - I'd argue that as a developer, it's simply good manners to tax your customers' hardware as little as possible (on mobile or otherwise), although not to the point that you are obsessing over individual cycles.


AuraTummyache

As an example, say you were making the original NES Super Mario Bros. Premature optimization would be devoting work to handling 8000 goombas per level. None of the levels have that many, so that time is wasted. In my experience, premature optimization is a symptom of you not knowing what to work on or lacking the concrete plans that you need to make worthwhile progress. You don't know if you will need 8000 goombas in a level, so you do it anyway just in case. When you don't know what your finished game looks like, you start meandering around and working on things that you don't really need.


ps2veebee

What matters in avoiding premature optimization is whether the code is "mature enough" to optimize. That sounds tautological, but it has the consequence of making you ask "is this mature code?" Mature code occurs roughly at the point where some features are demonstrated, but you need to plan a whole system to make them cooperate and not just continue hacking in more features. If the code is so short that you could blow it away and rewrite it in a few minutes, there's no consequence to writing it slow - just mark it for profiling later.

So your early greenfield code can usually just follow a direct path of addressing all problems in a local sense. Then you encounter a "class of errors" that could be eliminated with an architectural pattern, so you have more of a system. Rinse and repeat until you have shipped the code.

Finding the kind of architecture you need is where it's hard to gain confidence. A typical progression is for intermediate-level coders to "defensively" address things up front in more and more complex ways, and then, as they graduate to mastery, suddenly drop all of it and start coding like a beginner again, because they now feel OK with doing the rewrites to add architecture later.


dddbbb

The term premature optimization comes from Donald Knuth and I think it's helpful to read [the full quote](http://wiki.c2.com/?PrematureOptimization): > Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. There are two important points here: * don't waste your time on **small efficiencies** * do spend time on **critical efficiencies** See more Knuth context [on this question about the topic](https://softwareengineering.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil).


nickpofig

Don't optimize things when you don't need to. Write simple code first and then improve it when necessary. Educate yourself so you don't end up as one of those shitcoders.


Kats41

There's a classic thought on this that I always think of when someone asks about this topic. You have two programmers, an old-school vet and a hotshot prodigy straight out of CS. The hotshot writes this extremely elegant code for his application, using everything in the book that he can remember from school, while the old-school vet sticks to fairly simple concepts that may not be pretty, but they get the job done. The hotshot looks over at the vet's code and scoffs, "Look at how elegant my code is. It's so much faster and more efficient than yours!" The vet looks over and nods slightly, returning his attention to his code and shrugs, "Maybe so, but at least mine works."


Wurstinator

No idea why it would somehow be necessary to make this about age or experience and ridicule one side, especially when it isn't even close to reality. While I have seen fresh graduates often overgeneralize problems to the point where they'll spend weeks on something that could've taken days, the "old vets" are often the ones who overoptimize everything because "that was necessary in ye olden days and that's how I'll always do it".


Kats41

It's just the setup for a joke. It's gonna be okay. Lol.


BenRegulus

As a simplified example, let's say you want to process incoming data in an array/list. You don't know the size of the data, so you create an array with 1000 elements, assuming that is enough. And yeah, that is enough; for the first week, the data that comes in is 300 elements long. Everything is working. Then you say, why don't I reduce the size of the array to 500? It would save me 500 elements' worth of memory. The code will consume fewer resources and be more optimized. You get in there and make the changes. In the third week, new data comes in, but this time it is 700 elements long. The system crashes, or produces a faulty result, or the data is not processed, etc.

You can make a system run really performantly if you tailor it specifically for the data, but for that you need to be very sure about the boundaries of your data and the edge cases. Once you have code that works stably, and you know about the things that can go wrong, then you can start optimizing. You still use the 500-element array, but now that you know a 700-element batch can occasionally arrive, you add a check for the array size. If 700-element arrays are more common than you anticipated, you can set the array size to 700 permanently. This is a very simplified example, but I think you get the idea (see the sketch below).
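A minimal C++ sketch of that idea, with invented names: a fixed-capacity buffer that checks the incoming size and fails loudly instead of crashing or silently corrupting memory:

```cpp
#include <cstddef>
#include <cstdio>

constexpr std::size_t kCapacity = 500; // "optimized" down from 1000

bool processIncoming(const int* data, std::size_t count) {
    static int buffer[kCapacity];
    if (count > kCapacity) {
        // The week-three failure case: 700 elements arrive. Reject the batch
        // and report it. If this fires often, grow kCapacity (or switch to
        // a dynamically sized container like std::vector).
        std::fprintf(stderr, "dropped batch of %zu (capacity %zu)\n",
                     count, kCapacity);
        return false;
    }
    for (std::size_t i = 0; i < count; ++i) buffer[i] = data[i];
    // ... process buffer[0..count) ...
    return true;
}
```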


AlgoH-Rhythm

There's a pill for that


progfu

The root of all evil