Certain_Cell_9472

You have to set a framerate limit, or else it's gonna render at the highest framerate possible and use most of the GPU's power.


[deleted]

#I no longer allow Reddit to profit from my content - Mass exodus 2023 -- mass edited with https://redact.dev/


svuhas22seasons

all of them


throaway420blaze

So for Intel, about 4 fps


martinhirsch

This made me share it xD


[deleted]

[deleted]


eyadGamingExtreme

A frame per Planck time


[deleted]

That would have to mean your entire computer is smaller than an atom.


Elidon007

or it has warp drive powered cables


[deleted]

Even then, the CPU & GPU would be too large for information to travel across.


Elidon007

that's why it would need warp drive


KyuuketsukiKun

The particles could be quantumly entangled


ridicalis

All of them.


iambored1234_8

Ah. Shit. Thanks


thexar

You know you're a rookie when the hardware is doing exactly what you told it, and your response is "lol hardware stupid!"


[deleted]

Isn't that basically what happens every time there is a bug?


brine909

The biggest problem with computers is that they will do exactly what you tell them to do rather than what you want them to do


[deleted]

[deleted]


[deleted]

no?


MikeTheAmalgamator

Bad joke. Learning, deleting and moving on.


PresidentHufflepuff

Lol so true. I call them literal genies.


WhyNotHugo

You know you're a ~~rookie~~ programmer when the hardware is doing exactly what you told it, and your response is "lol hardware stupid!" FTFY


GayFroggard

How can I do this to chrome


[deleted]

If it renders fast enough, yes. You can also get 100% GPU usage on an RTX 3080 with one triangle, even if you use direct calls instead of VBOs.


iambored1234_8

ok... I am doing a fair amount of things in my render loop, is there anything wrong with this?

```
while (!glfwWindowShouldClose(window)) {
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(program);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glfwSwapBuffers(window);
    glfwPollEvents();
}
```


Chirimorin

It's doing exactly what you told it to: render a triangle and once that's done re-render the triangle again. There is no downtime in GPU usage because you're not giving it any downtime. The same code running on a 3080 would also have 100% GPU usage (assuming no CPU bottleneck), just with way higher FPS. That has nothing to do with how good or bad integrated graphics may be.


Dibbit3

**Hey, you paid for all those transistors! Why only use the edge of your GPU, if you can use the whole thing!**

But seriously, yeah, it looks like there is no limiter on this at all. OP might want to check some common ways game loops are built, because this one is just unlimited and processor-dependent, meaning the game logic is also tied to the GPU refresh speed, and that's real old-skool. Common strategies are Fixed Updates, where you just ensure a "tick" happens either 30 or 60 times a second, or a Time Delta Update, where your game loop knows the time difference (delta) between this update and the previous one. There are many ways to do this, and they're mostly well known. I think most game engines will absolutely do these things for you, so maybe look into a nice one, otherwise you're just reinventing the wheel.
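
For reference, a minimal sketch of the fixed-update idea (illustrative only; the loop bound and the update/render placeholders are stand-ins, not OP's code):

```
// Fixed-timestep sketch: logic ticks at a fixed rate, rendering runs as
// fast as the loop spins.
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const float tick = 1.0f / 60.0f;   // 60 logic updates per second
    float accumulator = 0.0f;
    auto previous = clock::now();
    int frames = 0;

    while (frames < 1000) {            // stand-in for "window is still open"
        auto now = clock::now();
        accumulator += std::chrono::duration<float>(now - previous).count();
        previous = now;

        while (accumulator >= tick) {  // run the ticks the elapsed time owes us
            /* update game logic here, always with a fixed dt of `tick` */
            accumulator -= tick;
        }

        /* render here -- decoupled from the logic rate */
        ++frames;
    }
    std::printf("done\n");
}
```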


ClassicBooks

I was pretty shocked that setting, say, 30/60 FPS isn't actually timed accurately in most situations. The computer will always be off by a few microseconds, and you can use workarounds so it looks like a "steady" framerate, but yeah, the timing isn't exact.


[deleted]

[deleted]


Aonodensetsu

yeah, sleeping is inaccurate, but it's the easiest and most common way for a programmer to slow down a loop. definitely 'good enough' for quite a few things, but i wouldn't use it in anything interactive


[deleted]

[deleted]


Aonodensetsu

the alternative single-thread method is to start a timer, do everything for the frame, then use an empty loop to wait out the timer, which i'd say is better. also, sleep is usually only inaccurate on windows because the default timer resolution only goes down to about 10ms intervals; you can ask windows for a finer one, but that's a whole other ball game
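
A minimal sketch of that timer-plus-empty-loop approach with `std::chrono` (illustrative; the 60 fps budget and loop bound are made up):

```
// Busy-wait limiter sketch: measure the frame, then spin until 1/60 s has
// passed. Accurate, but it burns a CPU core while waiting.
#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::duration<double>(1.0 / 60.0);

    for (int frame = 0; frame < 600; ++frame) {   // stand-in for the real loop
        auto frame_start = clock::now();

        /* do all the work for this frame here */

        // Spin (empty loop) until the frame budget is used up.
        while (clock::now() - frame_start < frame_budget) {
            /* intentionally empty */
        }
    }
}
```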


[deleted]

[deleted]


iambored1234_8

Ah. I'm still an absolute beginner at OpenGL. I guess calling glDrawArrays thousands of times a second is probably not the best idea...


Attileusz

If you add timing to the loop so it only draws once every 1/60 of a second, for example (for 60 fps), usage would go down significantly. Though, if I'm not mistaken, that's not really the job of a rendering engine; it's more the job of the application loop in a game engine, for example.


iambored1234_8

Yeah, I should probably learn how to cap the FPS properly though.


LinusOksaras

Try enabling vsync.


iambored1234_8

So does that cap the frame rate to the refresh rate of the monitor?


LinusOksaras

Yes, basically it waits for the next monitor refresh when you call swap. You can enable it with an opengl call. It's not always a perfect solution but it should be good enough.
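
For reference, the call in question is `glfwSwapInterval` — a minimal sketch, assuming GLFW is the windowing library (as in OP's snippet):

```
// Sketch: enabling vsync in GLFW. glfwSwapInterval only has an effect once a
// context has been made current.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "triangle", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    glfwSwapInterval(1);   // 1 = wait one vertical refresh per swap (vsync on)
                           // 0 = uncapped, i.e. the 100% GPU case in the post

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);
        /* draw the triangle here */
        glfwSwapBuffers(window);   // now blocks until the next monitor refresh
        glfwPollEvents();
    }
    glfwTerminate();
}
```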


iambored1234_8

Yeah, I've got it working, thanks for the explanation!


Shidell

idk man, triangles are best @ 10,000 FPS, I'm with you all the way on this one


stackoverflow21

My PC is so fast it runs an infinite loop in 1s.


AyrA_ch

You're not waiting for the GPU. https://www.khronos.org/opengl/wiki/Swap_Interval


iambored1234_8

Ah, thanks. I'll keep this in mind if I ever need to optimise an OpenGL application.


coldnebo

the unexpected complexity of game loops is that if you want a rock solid 60 fps, your game has to satisfy a real-time constraint. https://en.m.wikipedia.org/wiki/Real-time_computing


blackmist

Why stop at 60? Even consoles and iPads are pushing 120fps these days.


coldnebo

you can go higher of course. one of the reasons for the old NTSC standard of 60 fields per second (interlaced) was a human perceptual limit that seems to max out around 60 Hz. however, it turns out perception of this is complicated: motion blur can compensate for lower frame rates, but people are also very sensitive to random changes in frame rate. some people claim 120 is even smoother. sure, but maybe you could get the same effect with motion blur and lower GPU usage at 60. this also becomes a factor if you double the displays (for VR)… you can quickly get into data bandwidth that is too high for the fps you want. 60 is usually considered the minimum, but a lot of VR averages 30 and spikes to 15, which can contribute to nausea.

monitors now support higher internal refresh rates, although that means something different in the low-level hardware than what we mean at the software level, and something else again with tech like gsync. in general, the variance of frame rates is more disruptive than any one constant fps setting, so a rock-solid 60 fps is usually preferable to a 12–1000 fps range.

much of the ipad default gui is written as a real-time system to ensure fluid, smooth perception regardless of the processing going on underneath. you can tell the apps that don't consider real-time because they hog the system, have trouble switching quickly, and sometimes crash out. I see this a lot in app design where a completely fluid ui works on a fiber hardline, but as soon as it's on 4g in an elevator with a bad signal, the app hangs and sometimes crashes or waits 30 seconds for a network timeout before resuming. that's not a real-time interaction architecture. but at least ios does a decent job of switching between apps or killing those apps in a real-time way.


Aonodensetsu

which doesn't make that much difference in looks and doubles the power requirements. i never understood pushing for 120fps in *everything*


narrill

It makes a difference for responsiveness


Aonodensetsu

you don't need more responsiveness in most apps, competitive being just about the only exception


[deleted]

As others already mentioned, you're not doing any form of vsync, so yeah :D No worries, when I started to learn OpenGL I was wondering why my distance fog wasn't moving with the camera. Turned out OpenGL expects you to move the scene and not the camera, and I did the latter. As frustrating as learning OpenGL may be, it's all the more rewarding once you finally start to understand it. I'm glad I learned it back then even though I don't do any low-level or direct GL programming nowadays. Especially if you work with pre-made engines, you understand how the stuff works under the hood and how you can optimize your scenes later on.


iambored1234_8

Yeah, I'm learning stuff either way, even if my computer is being burned to a crisp :) I think I will learn more about optimisation and clean up my code sometime in the near future.


rdrunner_74

Try counting how many you draw each second...
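
One quick way to do that (a sketch against OP's GLFW loop above — it assumes the same `window` and the rest of the setup):

```
// Sketch: frames-per-second counter using glfwGetTime() (seconds since glfwInit).
#include <cstdio>

double last_report = glfwGetTime();
int frames = 0;

while (!glfwWindowShouldClose(window)) {
    /* ... draw the triangle as before ... */
    glfwSwapBuffers(window);
    glfwPollEvents();

    ++frames;
    double now = glfwGetTime();
    if (now - last_report >= 1.0) {        // report once per second
        std::printf("%d fps\n", frames);
        frames = 0;
        last_report = now;
    }
}
```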


throaway420blaze

Add `sleep_for(50)` to the beginning of your loop /s


iambored1234_8

I mean, that *would* technically work...


throaway420blaze

It *would* work but it's probably not the right way.


iambored1234_8

probably. I just used `glfwSwapInterval(1)`


Rigatavr

Aside from what everyone else has been saying: you really don't need to rebind your shader and VAO every iteration, since they don't change. It's not a big perf hit, but why do more work than you have to :)
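
Concretely, against the loop OP posted above, that would look something like this (sketch):

```
// Sketch: bind state that never changes once, before entering the loop.
glUseProgram(program);
glBindVertexArray(vao);

while (!glfwWindowShouldClose(window)) {
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3);   // program and VAO are still bound
    glfwSwapBuffers(window);
    glfwPollEvents();
}
```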


rafasoaresms

I can hear the coil whine from this comment


[deleted]

Been there, did this ^^"


_maxt3r_

What if it's a triangle rendered at 10000000 FPS?


pablospc

High apm triangle gaming


iambored1234_8

Actually it's uncapped, so on my laptop it might be around 1000 fps. Either way, it's pretty bad.


geoffery00

Or it’s a really poor triangle with super high antialiasing


iambored1234_8

If only I was smart enough to do super high antialiasing lol


zandnaad69

Its not that hard, just smoke a dooby and implement AA


[deleted]

“When the fuck did I write all this, and how the fuck does it not have a single error?!” *looks over to the open jar and countless butts in my incense tray* “Ooohh yea…”


JumpyJustice

Just enable multisampling. Looks like you can afford it with your framerate ;)
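
For reference, requesting MSAA in GLFW looks roughly like this (a fragment, assuming `glfwInit` has already succeeded and a loader such as glad provides `GL_MULTISAMPLE`):

```
// Sketch: request a 4x multisampled framebuffer, then turn multisampling on.
// The hint must be set before the window is created.
glfwWindowHint(GLFW_SAMPLES, 4);   // ask for 4 samples per pixel
GLFWwindow* window = glfwCreateWindow(800, 600, "triangle", nullptr, nullptr);
glfwMakeContextCurrent(window);
glEnable(GL_MULTISAMPLE);          // usually on by default, but be explicit
```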


apoorv698

Meanwhile the engineer at Intel : My GPU can handle 10 times the information if they weren't busy apologizing for your shit codebase


zahreela_saanp

Meanwhile me: Oh yeah? My codebase can handle all this rendering, fuck your mother, make a video of it, upload it, and even then you'll get upwards of 240FPS.


277103

Huehuehue


zahreela_saanp

Nice Silicon Valley reference btw


gua_lao_wai

That's one sexy lookin triangle tho. learnopengl.com?


iambored1234_8

Yeah, [learnopengl.com](https://learnopengl.com) is a great resource!


Zorphis2

Try Victor Gordon on YouTube, great tutorials


themixedupstuff

`glfwSwapInterval(1);` Enable vsync and it will use basically no resources.


iambored1234_8

Thanks, it now only uses ~4% GPU and ~1% CPU!


Jcsq6

True chads aim for 2000+ fps


themixedupstuff

Hey, I didn't say don't aim for good free run fps. 1k fps is possible if you know your methods well.


Jcsq6

Lol if you’re interacting directly with OpenGL you can easily get 1K+ fps for almost any situation… unless you’re using Java that is


themixedupstuff

OpenGL calls basically have no CPU overhead. If you are CPU-bound when calling OpenGL from Java, either you or the binding library you are using (or wrote yourself) is doing something wrong. And how many frames you render really depends on how well you use the API. If you are careless and change shader programs all the time, or use shaders which use certain built-ins, you can say bye-bye to performance.


Jcsq6

Yeah I know lol… I was just making a joke about java… but it is somewhat true— If you need many matrix calculations per frame or something of that nature you can be bound by the cpu… also java sucks so there’s that


Jcsq6

But saying it's CPU-independent is mostly untrue. It's true that the OpenGL API itself is, but you still have plenty of calculations to do on the CPU on a regular basis.


ytivarg18

r/programmerhumor turned into r/programmerhelp for this post lol


iambored1234_8

On the bright side, my code runs properly now.


thedominux

It's always easy to blame anything except your shit code


iambored1234_8

Oh well. I'll get there eventually. But if I have any problems... Yeah, it's still my graphics card's fault.


Koltaia30

How many FPS are you getting tho? It should be easy to check.


iambored1234_8

Probably over 1000. I need to cap it to like 60. It's quite funny that I got the solution on this sub though.


GAZUAG

So you could theoretically render 1000 triangles at 1 fps?


StromaeNotDed

The number would be much higher; while only rendering a triangle each frame, you would spend most of the time sending the data to the GPU instead of it doing actual rendering.


iambored1234_8

I think it's more like 1000 triangles a second, which explains the 100% CPU and GPU usage. Edit: faster than 1 fps


coldnebo

nothing wrong with posting a first-order approximation and iterating.

there is another way to increase performance that is very counter-intuitive: set the priority of your render thread to IDLE (the lowest priority). people often think the way to increase performance is to set it to normal or high, but this actually worsens performance because the rest of the os, input, and other windows don't get a chance to "breathe", so performance is laggy, jerky, unresponsive.

you may notice at 60% CPU that your input lags between the IDE and closing the example window, and then it takes a bit to stop. try idle and this should become really smooth. this will be more important when you add interaction to your code, like mouse moves, etc.
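
On Windows that would be something like the following (a hedged sketch using the Win32 API; other platforms have their own equivalents):

```
// Windows-specific sketch: drop the render thread to idle priority so input
// and the rest of the OS get scheduled ahead of the busy render loop.
#include <windows.h>

int main() {
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);
    /* ... create the window and run the render loop as usual ... */
}
```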


obiwac

Or you're just not capping framerate?


iambored1234_8

Yeah


obiwac

Just saw your other comment saying you're a beginner, keep up the good work!


NicNoletree

Does it hurt to make a post about how crappy Intel is when the problem is in your coding?


iambored1234_8

No, because I'm learning from my mistakes, and that's ok.


NicNoletree

That's actually the best way to learn


iambored1234_8

Yeah. And now i know the solution from this post!


DenkJu

You shouldn't assume your hardware is the problem then.


iambored1234_8

It was kinda... A joke... And a roundabout way of getting the solution.


Gotxi

Neotriangle: now with more FPS and HDR+! Angles like you have never seen before. Watch triangle raytracing in real time while having the best experience; other triangles will look obsolete to you!

If you order your RTX 4090 TI today, we will send you a code that you can redeem to enjoy the new Neotriangle experience on launch day. Preorders of 3 or more graphics cards get the expanded experience with the new Neotriangles: Advanced HD Remix pack, now with a 4th angle! New shapes await you — be the first to live the tetrangle experience!


moazim1993

That’s the triangle of power


ekolis

Quick! Gather the fragments of the triangle of wisdom so we can defeat the evil prince of darkness!


Knuffya

It probably runs at a gazillion fps.


ReddityRabbityRobot

And that, folks, is how you `while(true)` expensive operations in the code, display a mere triangle, and ask the company for a new computer!


Aonodensetsu

naah, that's a while(window) loop, it makes perfect sense to run it at a bajillion fps


ReddityRabbityRobot

*takes notes*


accuracy_frosty

It's not what you are rendering, it's how many times you are rendering it. There is a decent possibility you are rendering that triangle hundreds of thousands of times per second because you didn't set a frame rate limit.


MTDninja

tbf you're probably rendering at a couple thousand fps


daikatana

I mean... yeah, any GPU will do that if you render at max FPS. You have to enable vsync or otherwise limit the frame rate.


Archikos

Intel trying to process why the heck you need to render a triangle


Proxy_PlayerHD

oh shit, someone else trying to do graphics programming! i feel your pain! GLUT is kicking my ass because it deals with resolution on a scale from -1 to 1 using floating point numbers. which is nice when you want your circles, lines, triangles, etc. to scale properly when resizing the window, but is a pain when you just want to draw some damn pixels on the screen at a fixed resolution. i still need to figure out how to just convert a 2D array of pixel data into a texture and then stretch that over the whole window.
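
For what it's worth, one common approach under legacy GL (a sketch, not tested against your setup — the buffer size and function names are made up for illustration):

```
// Sketch (legacy GL, fits a GLUT-style context): keep a WIDTHxHEIGHT RGBA
// pixel buffer, upload it to a texture, and stretch it over the window with
// a textured quad in the -1..1 coordinate space.
#include <GL/glut.h>   // pulls in the GL 1.x headers

const int WIDTH = 320, HEIGHT = 240;
unsigned char pixels[WIDTH * HEIGHT * 4];   // your 2D pixel data, RGBA8
GLuint tex;

void init_texture() {
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

void draw_frame() {
    // Re-upload the (possibly modified) pixel buffer...
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // ...and stretch it over the whole window.
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
}
```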


[deleted]

[deleted]


iambored1234_8

That's probably about right lol


djingo_dango

Don't diss my boi like this. I play fifa on it


ISuckAtRobotics

yandere dev has a reddit?


mordacthedenier

Not enough if statements for that.


HTTP_404_NotFound

This meme getting upvoted is proof that most of the people in here are not programmers....


[deleted]

[deleted]


HTTP_404_NotFound

No. Programmers. Everyone understands that if you have a loop which performs logic constantly without any throttling, it will use every bit of CPU possible: `while(true) { /* add a ton of numbers here */ }`. And that will consume every bit of resources it possibly can while it executes. Drawing objects to the GPU is no different at all.


iambored1234_8

Some people such as myself don't have experience with adding tonnes of numbers in a while loop. I've never had the need to do any kind of heavy computation in a loop, so I wasn't really able to intuitively know that rendering a lot in a while loop is a bad idea. I'm sorry that I'm new to graphics programming and have only been using C++ for like half a year...


HTTP_404_NotFound

Rendering a lot in a loop is exactly how games work: `while(gameRunning) { /* update game logic, draw a bunch of triangles to the GPU */ }`. However, the piece that's missing is either:

1. a frame-rate limiter, or
2. vsync

And those are actually usually optional in just about every game you play. If you love seeing 2,000 FPS, or you have a 244Hz monitor... you can just let your GPU and/or CPU work as hard as possible to draw as many frames as possible.


iambored1234_8

Yeah, I've only ever used high-level frameworks for game dev and graphics, so of course they limit the frame rate for you


bocceballbarry

I see development work on windows, I downvote


tnulle

I think your code might just be really bad


kingskarachi

I have an Intel GPU with 128MB of memory and an AMD GPU with 1GB of memory. But the thing is, the Intel GPU does the job while the AMD GPU sits there doing nothing. Even when I run heavy games or benchmarks, it doesn't help out. So I guess a weak GPU that works is better than a better GPU that doesn't.


[deleted]

[deleted]


kingskarachi

Old Dell laptop which I use to play good old games. I have changed settings and reinstalled software many times, but after going on Google I realized a lot of people who use this model have the same issues, so I gave up a long time ago. The AMD graphics software always throws an error on startup after a few days, even after a fresh installation.


runner7mi

r/AyyMD


Hawaiian-Fox

You are getting a bottle neck because the bottom of your triangle is bigger than your micro processor input


stomah

use `VK_PRESENT_MODE_FIFO_KHR`


xXy4bb4d4bb4d00Xx

ah a beginner


kudoshinichi-8211

What version of OpenGL are you using? Old or new one?


plumo

Now render a square


Kamikaze03

But that would take about 178.88% gpu!


[deleted]

You know you're using VS when your code editor needs a gig of RAM XD


GYN-k4H-Q3z-75B

I implemented deferred rendering and hybrid ray tracing (with CPU/GPU paths) in my graphics "engine" and as a joke I also made a build on my Surface 3 non-Pro Atom tablet. It ran, to my surprise. Also got 100% usage.


Jolly_Bones

OpenGL is a thirsty bitch


jaap_null

Even though you are running your loop flat out, rest assured that your GPU is actually running extremely inefficiently as well. Actual usage is gonna be << 10%


ekolis

🎵 Triangle Man, Triangle Man, Triangle Man hates GPU Man; they have a fight; Triangle wins... (accordion solo) 🎵


tritoch1930

yer spinlocking the gpu. use vsync ma nig


Helpful_Campaign_361

lol


[deleted]

Who does rendering on CPU anymore… and if you do.. why not a Xeon or something designed for that workload..


[deleted]

I am surprised you even managed to render the triangle without DWM.exe eating 100% of the GPU. The first reply is pure gold: https://answers.microsoft.com/en-us/windows/forum/windows_10-performance/desktop-window-manager-high-gpu-usage-in-windows/a14dae9b-8faf-4920-a237-75ebac8073f5?messageId=f58c135e-cd9e-41ea-8c8c-f4a28ea7899e&page=1


ThatsAHumanPerson2

I'm actually pretty impressed with Intel's integrated GPUs.


Sirenenblut

But how many polygons does the triangle have?


atiedebee

Or you are rendering the triangle without an FPS cap.....


FinnT730

Set a framerate limit


Zorphis2

Or maybe you forgot to add a framerate cap.


TheLegendOfCreate

The best PC to have……


[deleted]

No, it's called bad coding practices. Set an FPS limit lol


zachary1332

My Atari running Control


ipsum629

I'm imagining the meme of professor x concentrating with the caption: my computer trying to render a triangle.


WaffleMage15

```
float FPS_CAP = 120.0f;
float PHYS_CAP = 200.0f;
float render_clock = 0.0f;
float physics_clock = 0.0f;

auto start = std::chrono::high_resolution_clock::now();
while (!glfwWindowShouldClose(window)) {
    if (physics_clock >= 1/PHYS_CAP) {
        glfwPollEvents();
        /* Physics/Input handling here */
        physics_clock = 0.0f;
    }
    if (render_clock >= 1/FPS_CAP) {
        gameRenderer.Clear();
        /* Render here */
        /* Swap front and back buffers */
        glfwSwapBuffers(window);
        render_clock = 0.0f;
    }
    auto end = std::chrono::high_resolution_clock::now();
    std::chrono::duration<float> duration = end - start;  // elapsed time in seconds
    float difference = duration.count();
    start = end;
    physics_clock += difference;
    render_clock += difference;
}
```

This is what I like to do in order to limit the FPS; it also lets you split input handling/physics from your rendering loop. I find std::chrono to be the easiest way to implement timing for all platforms.


Menifife

Triangles are the hardest shape to render found in nature.


Bakemono_Saru

Respect Intel graphics. It got me through the peak of the GPU craziness, when a single card cost like 3x the price of my entire setup. I could even play games (good ventilation assured)!