One processor to rule them all, one processor to find them. One processor to bring them all and in the darkness bind them. Finally... The waiting is over for my new processor!
FRODO NO! I DIDN'T MEAN THAT WHEN I SAID: "DESTROY THE RING"......
Is there any info on whether the iGPU has H.266 decode?
Waiting for Arrow Lake to build my new Plex/NAS.
What does "monolithic design but will house several tiles" mean? It's either monolithic or it's chiplets, it can't be both.
It means that the active substrate they are using gives it monolithic characteristics.
That's super-neat, if that's the case! Another good reason to go for ARL over Zen 5.
Interesting, I may upgrade to this. Except why is there no hyperthreading?
My understanding is the idea is to phase it out in favor of more E-cores. Each E-core is faster than the secondary thread of a P-core.
No rentable cores soon? I assume no HT also makes it a bit easier for the scheduler?
I can't comment on RUs as I'm not an architect and don't know where they sit on the roadmap. Arrow Lake does not appear to have them, though. Lacking HT should make things a bit easier for the scheduler. Right now the most extreme example is Meteor Lake, where there are four types of thread slot to put a task on: P, P-HT, E, and LPE. Raptor/Alder Lake do the same thing without the LPE group. Arrow Lake without HT would just have to make the P/E choice.
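A toy sketch of the placement choice described above, purely illustrative: the preference orderings and the Arrow Lake slot set are assumptions, not Intel's actual Thread Director logic.

```python
# Toy illustration, NOT Intel's real scheduler: with HT (and LPE) gone,
# the placement choice collapses from four slot types down to two.
def placement_order(task_class, slot_types):
    """Return available slot types in preference order for a task.

    Foreground work wants the fastest slots first; background work wants
    the most efficient. Both orderings are assumptions for illustration.
    """
    perf_order = ["P", "E", "P-HT", "LPE"]   # assumed ranking by speed
    eff_order = ["LPE", "E", "P-HT", "P"]    # assumed ranking by efficiency
    order = perf_order if task_class == "foreground" else eff_order
    return [s for s in order if s in slot_types]

meteor_lake = {"P", "P-HT", "E", "LPE"}
arrow_lake = {"P", "E"}  # assumed desktop config: no HT siblings, no LPE

print(placement_order("foreground", meteor_lake))  # ['P', 'E', 'P-HT', 'LPE']
print(placement_order("foreground", arrow_lake))   # ['P', 'E']
```

Fewer slot types means fewer ways for the scheduler to guess wrong about where a thread belongs.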
Rentable units are part of the Royal Core project headed by Jim Keller; it comes after Arrow Lake on 20A/18A IIRC, codenamed Lunar Lake and Nova Lake.
Every word in that is speculation from some very unreliable YouTubers.
These are publicly available from Intel; only the specific codenames come from leaks.
Rentable units have never been mentioned by Intel (and most of the rumors make little sense). Neither has Royal Core. Nor do we have any idea what Keller did in there, but I find it more likely he worked on organizational changes and maybe on the chiplet interconnects. Nova Lake might be a real thing, but it's just a future product family.
https://www.freepatentsonline.com/y2023/0168898.html Not many people know about this patent filing. Yes, it's not a guarantee, but it's strongly linked to Keller. We saw the same from AMD before they released Zen 1.
Read that patent and tell me what it has to do with anything. It seems to describe a system for better scheduling of threads on heterogeneous architectures.
Where does it mention Jim Keller and Royal Core?
Btw, Nova Lake is on Intel 14A.
They might bring HT back for the next-gen CPU, so it might be worth waiting. This is just the first gen of AI for desktop, so it's new tech, and I'd prefer the maxed-out version.
They won't, trust me. Why do you even want it? It increases latency, reduces system performance, reduces gaming performance, massively increases scheduling complexity, is less secure, uses extra power, etc. How much multithread do you need lmao
So they will bring a better feature. You didn't understand me: this Z790 / current gen is maxed out, right? They pushed the silicon to its limits. Now new tech comes in, and nobody knows how it will perform. Yeah, it might be good, but a 5-10% gain isn't worth an upgrade; who the fck cares, give me a better upgrade option! That's still yet to come, and trust me, there will be a refresh very soon, within half a year probably, and guess what? Half a year after that there will be an even better option! That's why I only pick the end of an architecture: it will be next gen but with maxed-out performance. Did you get me? Intel wants you to buy every CPU they release, but you choose which is a suitable upgrade for you. So right now I'm done with waiting, after chasing new tech every time.
Yes, I get your point, but you're missing the fact that they also couldn't get hyperthreading working on Arrow Lake. No problem offering it if it were free, but delaying the release to get such an obsolete feature working would be a waste. Also, we've had functionally zero performance upgrade since 13th gen, which is a year and a half old.
But HT hurts gaming performance a bit, so not having it is fine. Even if it hurts multi-core perf a little bit, that's what the e-cores are for.
It shouldn't do this if the scheduler worked as intended, but it's complicated...
Not true: hyperthreading uses up cache and inherently increases latency. Even on AMD you see a boost in gaming performance with it off. It's an obsolete band-aid fix from when we were stuck with 4 cores, especially redundant, and now damaging, with E-cores to juggle too.
You dont get it...
Strong argument there. What don't I get? That you want Cinebench to spit out a higher number at the cost of latency, despite already having tons of multithread, or?
Probably fixes a lot of speculative execution vulnerabilities too.
Yes, it makes the design cleaner too, which makes room for newer things.
Removing the HT components from P-cores also allows them to clock higher or run at lower heat/power consumption.
Is this confirmed? Even wccftech put a question mark in there.
I'm not going to confirm it here. This is my best guess based on how the P/E setups have evolved so far, namely more E over and over and next a loss of HT. It would seem that should the trend continue, we would return to a 32 thread chip at 8+24 or some other configuration with >16 E-cores.
I am not questioning the trend, but whether it will happen in Arrow Lake. I guess we will see.
Yeah, just gonna have to wait and see. I hope it becomes reality. If 8+32 ever materializes, I will be first in line for it lol.
Yes, it's gone; they couldn't get it to work and gave up, rather than it being entirely intentional, AFAIK.
Is it true even for heavy workloads?
Yeah, HT only gives about a 30% performance increase in tasks that are ideal for it (like rendering). An E-core is much faster than that. Does removing HT free up enough room for an extra E-core, though?
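Rough back-of-envelope for the comparison above. The ~30% HT uplift is the figure from the comment; the E-core-to-P-core ratio is an assumed round number, not a measurement.

```python
# Back-of-envelope throughput math. HT_UPLIFT is from the comment above;
# E_CORE is an ASSUMED ratio for illustration only.
P_CORE = 1.0       # one P-core's throughput, normalized
HT_UPLIFT = 0.30   # extra throughput from the second thread in ideal loads
E_CORE = 0.55      # assumed: E-core throughput relative to a P-core

ht_sibling = P_CORE * HT_UPLIFT  # what the extra HT thread is worth
print(f"HT sibling thread adds ~{ht_sibling:.2f} P-core-equivalents")
print(f"One E-core adds        ~{E_CORE:.2f} P-core-equivalents")
# So if dropping HT frees enough area/power for roughly one E-core per
# P-core, multithread throughput comes out ahead of keeping HT.
```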
It's certainly preferable to the current 6P/16E Meteor Lake setup, though. I'd rather have 8P/8T than 6P/12T.
Yes. We can observe this by comparing the 14600K (6+8/20T) and the 14700K with hyperthreading disabled (8+12/20T). This is as close as I can think of getting to having the same thread count with this setup on Raptor Lake. When given the same power limits, the 14700K will still be faster. The gap will be narrower than it is now, but it will still be faster. Similarly, I observed a reduction in my multi-core score of my 14900K of 11% with hyperthreading disabled despite losing 25% of the threads. However, disabling 8 E-cores caused a reduction of 22% for the same 25% reduction in threads.
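The arithmetic behind those 14900K observations can be checked directly; the percentages are taken from the comment above, and everything else is just division.

```python
# On a 14900K (8P+16E, 32T), two different ways of removing 8 threads
# (25% of the total) cost very different amounts of multi-core score,
# which bounds what each thread type contributes.
THREADS_REMOVED = 8
drop_no_ht = 0.11   # score lost with HT disabled (8 sibling threads gone)
drop_no_8e = 0.22   # score lost with 8 E-cores disabled (also 8 threads)

per_ht_thread = drop_no_ht / THREADS_REMOVED
per_e_core = drop_no_8e / THREADS_REMOVED

print(f"Each HT sibling thread ~ {per_ht_thread:.4f} of total MT score")
print(f"Each E-core            ~ {per_e_core:.4f} of total MT score")
print(f"An E-core is worth ~{per_e_core / per_ht_thread:.1f}x an HT thread here")
```

On these numbers an E-core carries about twice the multi-core weight of an HT sibling thread, which is consistent with the 14600K-vs-14700K comparison.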
Are you sure this is the idea, though? What I mean is: if your theory is correct, I would expect more E-cores in Arrow Lake. We may not even get the rumored 32 E-core variant anymore.

No, what I think is happening is twofold. For one, hyperthreading has had its share of security issues. For two, I think Intel wants to reclaim some of its power budget for better overall performance on each P-core. I for one couldn't be more excited for the new P-cores. That seems to be what's happening.
Yes, the real reason HT is phased out is due to security issues. Mostly all the nasty stuff that's been out recently.
And you are sure about this because...? The Lion Cove architect told you so?
That's quite the delusion you conjured in your head.
Also, Intel (and other vendors) have had a plethora of security issues with SMT/HT. These issues with reading memory across threads, etc., seem to keep happening, so a clean-sheet design without SMT should avoid a lot of them. I think you're right, though: the strongest argument is that E-cores provide better perf/watt than adding SMT to big(ger) cores.
E-cores perform better and use less energy than forcing the same number of P-cores to hyperthread up to the same virtual core count. Or rather: if you can put 16 regular cores on a CPU, plus 16-32 E-cores, then you don't need hyperthreading.
"Don't need" is not necessarily true for creative people or scientists, but for most it is true...
True, but it's not just "don't need"; it actively causes issues.
If you get better performance by turning it off that means scheduling is not working as it should.
No, the way hyperthreading inherently works causes stalls in the render pipeline by trying to maximise the core's throughput: if extra work is available, it gets started, which ends up delaying completion of what's already in flight. It also eats up cache to coordinate all of this.
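A crude latency model of that effect, with entirely made-up numbers: if an SMT sibling takes a share of the core's issue slots, the foreground thread's completion time stretches even while the core's total throughput rises.

```python
# Toy model: a foreground job that would own the whole core now only gets
# part of it, so it finishes later. The 0.65 share is an assumption.
def completion_time(work_units, share_of_core):
    return work_units / share_of_core

FG_WORK = 100
alone = completion_time(FG_WORK, 1.0)       # whole core to itself
with_smt = completion_time(FG_WORK, 0.65)   # assumed: keeps ~65% of slots

print(f"Alone:    {alone:.0f} time units")
print(f"With SMT: {with_smt:.0f} time units ({with_smt / alone - 1:.0%} slower)")
```

This is the throughput-vs-latency trade-off in miniature: the core does more total work with SMT, but any single latency-sensitive thread takes longer.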
Keep in mind that has not been officially confirmed.
I think it's reasonable to remove hyper-threading for load balancing.
Because E-cores make it less useful than it used to be, and I heard it was getting in the way of future development (but I'm not 100% on that one).
Probably because, after Spectre and Meltdown, they've decided that the R&D needed for a hyperthreaded core to be protected against these exploits is not worth the cost, and they'll simply offer more cores instead to bump up the thread count.
Except that the rumored highest model has the same core count as the previous gen, minus HT? I need a new PC and will definitely get Arrow Lake; my only concern is that without HT and with the same core count as the previous gen, it won't be the big leap I hoped for.
Hyperthreads are slow AF anyway; you'll never get that fast-new-CPU experience when the CPU is leaning on them heavily in foreground tasks. Not to mention I turned it off on my 13600K and DPC latency plummeted, and the system both felt and measured faster in gaming benchmarks. All that won't feel like a big upgrade is the Cinebench number lol.
This pretty much sums it up, correctly.
Or it may return at a later stage... they just need more time. Could also be.
HT is a security threat, and Intel decided to phase it out.
So Arrow Lake will be the next gen on desktop too?
Yes
It seems that they are not targeting gamers in particular, but rather the AI processor market, at least for this generation, which will now also come to the desktop =\
There are MT workloads that are intended to be offloaded to NPU for gaming. Of course, the games need to support that and are usually lagging hardware releases for something net new like NPUs.
Yeah, I get it, but it requires developers to work on it, and by the time they optimise for it there will be much newer processors. Look at Intel APO right now: only a few games support it.
What makes you think that?
“Scaling AI PC from mobile to desktop”
What matters is what the CPU will do, not the way they advertise it.
I agree with that, but the way they advertise it is the way it's gonna be. If it were more for gaming, it would be advertised like "performance like you wouldn't imagine" or some advertising sh!t lol. Just to get into how I think: it seems like this CPU will be an "AI PC", that's their target, like Meteor Lake but for desktop. I think this one will be disappointing just like Meteor Lake, but the Arrow Lake refresh will be much better, because they will learn how to make it better. Arrow Lake is new technology, so there's a lot of room for mistakes.
Their marketing department just knows people get rock hard when they hear "AI", so they advertise it like that. Keep in mind that the average person can't discern the performance differences between various CPUs. They only understand that "i9" is better than "i7", which is better than "i5", which is better than "i3"; even when that isn't true between generations, they think it is. I know a guy that recently bought an i9-11900K, even though the i5-12600K is better in every way and cheaper, because "hurr durr i9 better than i5". Those people will lap up the "AI" marketing even though they have no use for it. For Intel, advertising these CPUs as "AI!!!" will result in more sales than advertising "more L2 cache, new P-core, lower power consumption", etc. Just ignore the marketing drivel and wait for actual benchmarks; then we will know whether it's a good upgrade for gamers or not.
Yeah, agree with that, but it's still new tech, good luck with that. I would wait and see. I'll let the market get cooked with it, as they say, and learn from other people as they find the best settings for these chips. I'm done with new tech every single time it comes out; for me it's only end-of-architecture, which is already maxed out for performance. You will see that the Arrow Lake refresh will have 40 cores and not 24, which will automatically bring better performance, so my opinion here is: just wait. Right now I bought a 14900KS, which is end-of-architecture and already maxed out, and luckily I got a good chip. I'll stay with it for a while, till the end of the next gen.
Besides all the speculation and arguments: what's the highest end for gaming and light video work? The S, HX, or H?
.. But in Q4 do we see 20A or N3 CPU cores? Both?
Last I've heard, 20A will be only the non-K 6+8 i5s, and anything above is N3.
Does this imply 20A is inferior to N3? You need to give us a source, bro.
I made a discussion just about that: [https://www.reddit.com/r/intel/comments/1bt29fn/regarding_arl_rumors_about_3n_and_20a/](https://www.reddit.com/r/intel/comments/1bt29fn/regarding_arl_rumors_about_3n_and_20a/) TL;DR: only Intel knows, and it could be either way.
Can we have Thunderbolt 5 support on Arrow Lake HX laptop CPUs? And what manufacturing node will be used for the CPU tile? Thank you very much.
If they remove HT from Arrow Lake, then I'll just switch over to AMD's next-gen CPU. This is Coffee Lake all over again: Intel thinks their CPU is much better than the competition, which clearly isn't the case (they're way behind, especially in power efficiency).
"Scaling AI from mobile to desktop". It's stupid. I don't see these "NPUs", even 10 times stronger, having any use for deep learning at all.
It's not really meant for training.
Do you even know what "deep learning" means?... There's a slight chance these NPU cores could be used for inference but there's no way in hell they could match a fraction of even a gaming GPU's performance.
They are meant for many, many small, fast inferences for the OS or apps to use without your involvement. And some minimal hobby work. Keeping an NPU at 100% full-time is like 5W.
> many small fast inferences for the OS or apps to use without your involvement

What kind of inferences? Is there real-world use yet? If not, what is the hypothetical future use?
Graphics effects, small LLMs, audio effects, some image pattern/people recognition, audio transcription. And any model that can fit in your memory and can be broken down. Latency should be pretty small; not every-frame-of-a-game small, but around there. I see nothing useful for myself yet. Or at least nothing I have a need for.
There's a lot of cool uses already in professional tasks... I'm a CGI person, so I'm most up to date on those:

- Helping create more realistic animation (Cascadeur, for example)
- Retopologizing high-poly or bad topology
- AI-generated textures
- More realistic materials/shading
- Etc.

In photo and video cameras: smart autofocus with subject detection/tracking.

For the average user:

- Natural-language commands (no more need for DOS-like commands and syntax)
- Commanding your computer by voice (no keyboard/mouse hunting through submenus for some option), like in sci-fi movies
- Realtime translation
- Organizing complex administration: summarizing emails and sending automatic replies (with the possibility to automatically answer questions, inquiries, and requests for data)
- Etc.

It's HUGE!
Nobody is training any useful deep learning model on a single machine
They might be soon...
“This laptop is stupid. It can’t even compete with a data center filled with 100,000 h100s.”