I'm hardly likely to explain all the theory and the varying implementations of Hyper-Threading (HT) in one sentence :P
However, the operating system is responsible for scheduling threads onto the CPU, and more recent operating systems are a lot better at making sure physical cores are prioritised over their hyper-threaded logical siblings. From a programmer's point of view, all of that scheduling is abstracted behind the concept of a Thread and handed off to the operating system to manage.
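To make that concrete, here's a minimal Python sketch of the abstraction: the program just creates Thread objects, and which logical core each one actually lands on is entirely the OS scheduler's call (the worker function and counts here are made up for illustration):

[code]
import os
import threading

def worker(n):
    # CPU-bound busywork; which core (physical or HT logical) runs it
    # is decided by the OS scheduler, not by this code.
    total = sum(i * i for i in range(n))
    print(f"{threading.current_thread().name} finished: {total}")

# os.cpu_count() reports logical cores, so a hyper-threaded quad core
# typically shows up as 8 here.
print(f"logical cores visible to the OS: {os.cpu_count()}")

threads = [threading.Thread(target=worker, args=(1_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
[/code]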
The place I've really seen HT kill performance is in running virtual machines: two busy vCPUs scheduled onto logical cores that share one physical core end up fighting over the same execution units and caches.
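The usual workaround is to pin the VM (or any hot process) to one logical CPU per physical core. A Linux-only sketch follows; note the core numbering is an assumption, since the real sibling layout varies by machine and lives in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list:

[code]
import os

# ASSUMPTION: logical CPUs 0-3 are the first siblings of four physical
# cores and 4-7 are their HT twins; verify against the sysfs topology
# files before relying on this on a real box.
one_sibling_per_core = {0, 1, 2, 3}

# Restrict the current process (pid 0) to those CPUs so two of its busy
# threads never share one core's execution units. Linux-only call.
os.sched_setaffinity(0, one_sibling_per_core)
print("now allowed on CPUs:", os.sched_getaffinity(0))
[/code]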
Bottom line about my comment, though, is that I'm running a two-year-old CPU architecture with a top-of-the-line gfx card, and the bottleneck is still the GPU, not the CPU or the memory bus. And if there's any question about disk speed: running a mid-range SanDisk SSD, I'm almost always the first person to spawn in on a map load.
Obviously there are places where the CPU makes a bigger difference. Database servers show a noticeable improvement in performance with more CPU cores available (provided the database is properly tuned: indexes, data organisation with clustered indexes, RAID, enough RAM, disk access speed, etc.). CPU cores make a big difference when compiling binaries as well. Those are tasks that grind the CPU orders of magnitude harder than Battlefield 4 ever will.
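As a toy illustration of why those workloads scale with cores, here's a Python sketch (multiprocessing rather than threading, to sidestep the GIL; the job sizes are arbitrary and the timings will vary wildly by machine):

[code]
import multiprocessing as mp
import time

def crunch(n):
    # Deliberately CPU-bound busywork, standing in for index rebuilds
    # or compilation units.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    # Compare one worker against one worker per logical core.
    for workers in (1, mp.cpu_count()):
        start = time.perf_counter()
        with mp.Pool(workers) as pool:
            pool.map(crunch, jobs)
        print(f"{workers} worker(s): {time.perf_counter() - start:.2f}s")
[/code]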
When it comes to 3D graphics, given a reasonably specced CPU, the GPU is the next most important thing to throw money at.
With all of that said, though, I agree with Skoup's last paragraph about spend over time. Getting onto a newer architecture now means you'll have more freedom to upgrade later at lower cost: you could pick up a second graphics card down the line at a pretty decent price and keep frame rates acceptable. If you want MOAR FPS NOW, drop more on the graphics card now and worry about newer architecture when it becomes the bottleneck.