TF2 benchmarks
#691

2639 frames 29.779 seconds 88.62 fps (11.28 ms/f) 8.722 fps variability
GTX 960 with latest geforce experience driver (522.25, 10/12/22)

running mastercomfig med-low preset with the following overrides:
decals_art=on
sprays=on
gibs=medium_low
ragdolls=high
3dsky=on
jigglebones=on
hud_player_model=on
decals_models=low
decals=medium
hud_achievement=on
htmlmotd=on
dynamic_background=preload
bandwidth=4.0Mbps

#692

i5 13600kf @5.4
gtx 1660ti
2x16GB 3600@cl16
mastercoms low dx81
nohats bgum
1920x1080

benchmark_test (mercenarypark)
old 10700K: 4812 frames 12.835 seconds 374.92 fps ( 2.67 ms/f) 54.588 fps variability
new 13600K: 4812 frames 8.308 seconds 579.20 fps ( 1.73 ms/f) 81.025 fps variability
benchmark1 (dustbowl)
old 10700K: 2639 frames 8.677 seconds 304.15 fps ( 3.29 ms/f) 24.750 fps variability
new 13600K: 2639 frames 5.994 seconds 440.30 fps ( 2.27 ms/f) 35.378 fps variability

i'm including this one because i don't usually trust the results of the other two for various reasons. this one is just a demo of a normal pub in a cpu-bound spot where the fps would dip. god knows we don't need yet another benchmark demo but this is the one i personally use for tweaking my own settings since i've found it the most representative in its results

spybm (borneo): https://drive.google.com/file/d/1KHRNxBNsxHuQ4WVphcUqM9v8IKtIShf-/view?usp=sharing
old 10700k: 2301 frames 8.942 seconds 257.33 fps ( 3.89 ms/f) 26.195 fps variability
new 13600k: 2301 frames 5.740 seconds 400.88 fps ( 2.49 ms/f) 36.416 fps variability
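
(For reference, the lines above are the raw output of TF2's timedemo console command, e.g. "timedemo benchmark1". Below is a minimal Python sketch for parsing a result line and comparing two runs; the helper name and the regex are my own illustration, not anything from this thread.)

[code]
import re

# Format of the console output quoted above, e.g.
# "2639 frames 8.677 seconds 304.15 fps ( 3.29 ms/f) 24.750 fps variability"
TIMEDEMO_LINE = re.compile(
    r"(?P<frames>\d+) frames\s+(?P<seconds>[\d.]+) seconds\s+"
    r"(?P<fps>[\d.]+) fps\s+\(\s*(?P<ms_per_frame>[\d.]+) ms/f\)\s+"
    r"(?P<variability>[\d.]+) fps variability"
)

def parse_timedemo(line):
    """Hypothetical helper: pull the numbers out of one timedemo result line."""
    match = TIMEDEMO_LINE.search(line)
    if match is None:
        raise ValueError("not a timedemo result line: " + line)
    return {key: float(value) for key, value in match.groupdict().items()}

old = parse_timedemo("2639 frames 8.677 seconds 304.15 fps ( 3.29 ms/f) 24.750 fps variability")
new = parse_timedemo("2639 frames 5.994 seconds 440.30 fps ( 2.27 ms/f) 35.378 fps variability")
print("benchmark1 uplift: {:.1%}".format(new["fps"] / old["fps"] - 1))  # ~44.8%
[/code]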
#693

kindred:
i5 13600kf @5.4
gtx 1660ti
2x16GB 3600@cl16
mastercoms low dx81
nohats bgum
1920x1080
benchmark1 (dustbowl)
old 10700K: 2639 frames 8.677 seconds 304.15 fps ( 3.29 ms/f) 24.750 fps variability
new 13600K: 2639 frames 5.994 seconds 440.30 fps ( 2.27 ms/f) 35.378 fps variability

i'm including this one because i don't usually trust the results of the other two for various reasons. it's just a demo of a normal pub in a cpu-bound spot where the fps would dip. god knows we don't need yet another benchmark demo but this is the one i personally use for tweaking my own settings since i've found it the most representative in its results

I'm gonna continue to primarily rely on benchmark1 just because of the sheer sample size it carries for me and people I have coerced into running it, as well as the decade of results from it from this thread. But also because as you said

it's just a demo of a normal pub in a cpu-bound spot where the fps would dip

I find it more true to form when it comes to what I am more likely to see in game in terms of the numbers.

Also wondering what the uplift would be like with an expensive DDR5 kit, because from all the benchmarks I have seen on youtube it seems to be like a 7-10% difference from DDR4, but also with a 4000 series NVIDIA card. Now before Setsul rightly reminds me that you can run this game at close to max frame output at like 960/1050ti level, it is undeniable the newer 30 and 40 series cards are just plain faster all round (basing my thesis off of CSGO benchmarks as well). Higher clocks, more and stronger compute units, wider bus, way faster memory that goes through that wider bus at way higher speeds and feeds the CPU data faster (the GTX 1650 for example has GDDR5 and GDDR6 versions with a performance delta that isn't negligible, and that card is effectively a piece of shit in its own right, and we have GDDR6X now that's even faster). All at a per-part price equivalent to a whole computer that can run this game at 240hz if you know what you're doing, so the point of diminishing returns definitely applies heavily here in this hard CPU-bound title.

Guess I'll have to wait and see once some of the more financially accomplished gamers put that to the test in this thread.

All in all from what I can tell, TF2 is now officially 480hz ready today and that brings me joy that perhaps someday I will be able to experience that.
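
(A back-of-the-envelope check of that 7-10% figure against the benchmark1 number quoted above; this is just my arithmetic, not a measurement.)

[code]
ddr4_fps = 440.30            # kindred's 13600K benchmark1 result above (DDR4 3600 CL16)
for uplift in (0.07, 0.10):  # the 7-10% DDR4 -> DDR5 range quoted from youtube benchmarks
    print("{:.0%}: {:.0f} fps".format(uplift, ddr4_fps * (1 + uplift)))
# 7%: 471 fps, 10%: 484 fps -- roughly where the "480hz ready" hope lands
[/code]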

#694

A better GPU doesn't help at all though if the game is already running into the CPU limit. This is not some cooperative effort, the GPU only starts working once the CPU is done. That's that mysterious "maximum pre-rendered frames" setting.
Higher clocks, more and stronger compute units, wider bus, faster memory, three other ways to say faster memory, they all don't matter. Whether the GPU needs 1ms or 0.5ms to render each frame does not matter when the CPU only sends a new one every 2ms.

You're going to see a couple more fps with faster GPUs, but once you're really all the way at the CPU limit, no secret sauce in the GPU will make any difference. E.g. if a 1070 gets 480 fps and a 1080 490 fps and a 2080 495 fps and a 2080 Ti 496 fps then you can damn well be sure that a 4090 isn't going to get 550 fps but rather 499 fps.
Not sure what CS:GO benchmarks you've seen that contradict that.
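
(To make that concrete, here is a toy throughput model of the point above, assuming the CPU and GPU stages pipeline via the pre-rendered frame queue; the function and the numbers are my illustration, not Setsul's.)

[code]
def fps_estimate(cpu_ms_per_frame, gpu_ms_per_frame):
    """Toy model: with the stages pipelined, throughput is set by the slower one."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# CPU needs 2 ms per frame; halving the GPU's render time changes nothing.
print(fps_estimate(cpu_ms_per_frame=2.0, gpu_ms_per_frame=1.0))  # 500.0
print(fps_estimate(cpu_ms_per_frame=2.0, gpu_ms_per_frame=0.5))  # 500.0
# Only once the GPU becomes the slower stage does it set the frame rate.
print(fps_estimate(cpu_ms_per_frame=2.0, gpu_ms_per_frame=4.0))  # 250.0
[/code]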

In fact, the newer drivers are probably going to make things worse for a while.
Most don't bother benchmarking CS:GO anymore because there's nothing to learn from getting almost the same fps with half the cards you're testing and it screws with averages, so the only thing I've got to offer is this:

https://tech4gamers.com/wp-content/uploads/2022/10/RTX-4090-vs-RTX-3090-Ti-Test-in-9-Games-5-28-screenshot.png

Doesn't seem all that great, does it?

tl;dr
Are you going to see a noticeable difference between a 960 and a 4090? Yes.
Are you going to see a noticeable difference between a 1080 and a 4090? No.
The difference comes from having a card a bit faster than just about reaching the CPU limit, not from having one that's extra new, extra shiny, or 3 times faster than needed.

#695

Setsul:
Most don't bother benchmarking CS:GO anymore because there's nothing to learn from getting almost the same fps with half the cards you're testing and it screws with averages, so the only thing I've got to offer is this:
https://tech4gamers.com/wp-content/uploads/2022/10/RTX-4090-vs-RTX-3090-Ti-Test-in-9-Games-5-28-screenshot.png
Doesn't seem all that great, does it?

Sorry I'm a majority shareholder at the copium production fab.

Looking at that GPU load I had a tingling sensation that something seemed off. Using the URL of the screenshot I located the website it was sourced from. Splendid. A dive into the article for further analysis. Bingo. A link to the youtube channel where they sourced the data from. All that's left is to locate the benchmark video (https://youtu.be/FTxIJgx0pxU?t=307). The settings used are 4K High. Now I have a problem with that for a plethora of reasons, but I will let you figure those out on your own. Rest assured the main goal of the testing methodology in this video is not to push the maximum amount of frames for 1080p competitive oriented gaming. It is to push the card's usage to see generational improvements at full load at high resolution and high settings, like in the rest of the titles, and see what the product is about for recreational gamers with deep pockets.

That makes it ever so difficult for me to consider this benchmark as an argument for the topic at hand.

You best believe 1080p low results are several hundred fps higher. I don't want to fill this post with screenshots but take my words at face value that at 4K High 4090 strangely does like 2% worse than 3090ti in CSGO, but in 1080p you can trace the generational and per GPU model framerate differences of these cards.

#696

Setsul:

This image makes b4nny getting a 4090 even funnier

#697

this is the most insanely annoying thing when trying to compare RAM and CPUs. all the benchmarks are 4k on ultra in some game you wouldn't even play for 30 minutes if you got it for free with your new graphics card

Wait, I can turn Apex Legends up to Ultra with my new GTX 4090 SUPER TI! Now I can't see anything and my mouse input feels like dog shit!

#698

New setup

  • CPU i5-13600k
  • RAM DDR5 2x16 GB @5600MHz (36-36-36-76)
  • Video Card EVGA GTX 1080 Ti

DxLevel 81 - Mastercomfig Medium-Low Preset

Benchmark1.dem
2639 frames 6.971 seconds 378.54 fps ( 2.64 ms/f) 39.739 fps variability

Benchmark_test.dem
4812 frames 10.726 seconds 448.61 fps ( 2.23 ms/f) 83.439 fps variability

-----------------------------------------------------------------------------------------------------------------
Old setup

  • CPU i7-7700k
  • RAM DDR4 2x16 GB @3200MHz (16-20-20-38)
  • Video Card EVGA GTX 1080 Ti

DxLevel 81 - Mastercomfig Medium-Low Preset

Benchmark1.dem
2639 frames 11.861 seconds 222.49 fps ( 4.49 ms/f) 19.482 fps variability

Benchmark_test.dem
4812 frames 18.558 seconds 259.30 fps ( 3.86 ms/f) 38.363 fps variability

Very happy with the results. About 70% increase in average fps!
I use a 240Hz monitor, and with the 7700k, the frames would dip way below that quite often.
However with the 13600k now, I'd say 90% of the time I'm over 240 fps
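
(Sanity-checking the ~70% figure from the numbers above; throwaway arithmetic on my part, not part of the original post.)

[code]
old_fps = {"benchmark1": 222.49, "benchmark_test": 259.30}  # 7700k results above
new_fps = {"benchmark1": 378.54, "benchmark_test": 448.61}  # 13600k results above

for demo in old_fps:
    print("{}: {:+.1%}".format(demo, new_fps[demo] / old_fps[demo] - 1))
# benchmark1: +70.1%, benchmark_test: +73.0%
[/code]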

#699

jnki:
Sorry I'm a majority shareholder at the copium production fab.
[...]
You best believe 1080p low results are several hundred fps higher. I don't want to fill this post with screenshots but take my words at face value that at 4K High 4090 strangely does like 2% worse than 3090ti in CSGO, but in 1080p you can trace the generational and per GPU model framerate differences of these cards.

So you've got benchmarks that prove what you claim, but you don't want to show them because even a single screenshot would be too much.
Yeah, that's copium.

#700

Alright, let's take the console output from the uLLeticaL benchmark test map as the indicator of performance, since that seems to be what every benchmark is basing its Average FPS off of. Sorry, I couldn't get 100% matching specs.

12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
https://www.youtube.com/watch?v=bzT2HPdBuvw

12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps
https://www.youtube.com/watch?v=7znCbCXE5_8

12900k 5.5 3090 DDR5 6400 30-37-37 1080p High 758.47fps
https://www.youtube.com/watch?v=1V9UMDIKO8o

12900k 5.1 3090 1440p Low 742.38fps
https://www.youtube.com/watch?v=n_10dyViEX4

12900k 5.1 3080ti DDR4 4000 15-16-16-36 1152x864 Low 905.26fps
https://www.youtube.com/watch?v=J-bt-ElKGTA

While this is neither a 12900k nor a 3090/3090ti/4090, this is finally a video that tests 1080p, 1440p and 4K at both High and Low, all on one given setup.

https://www.youtube.com/watch?v=vD5Fy677IsQ
10900kf ?ghz 3080
1080p Low 548.50fps - 1080p High 492.06fps
1440p Low 526.17fps - 1440p High 437.03fps
4K Low 420.52 - 4K High 306.83fps
If the game truly is CPU bound, what is up with the graphical setting and resolution framerate delta, and why is it so vast, especially at 1440p and 4K compared to 1080p? And while I agree the 10900k is no longer a top CPU in 2022 and has been slaughtered by the 12900k (13900k) and the lower tier K CPUs from 12th-13th gen, if we are talking strictly CPU bound, why does a powerful GPU like a 3080 give such different results?

Finally, while it's outside the aforementioned criteria (uLLeticaL), here is a video of an identical setup where the only difference is the GPU, but the testing methodology is questionable and I understand if you find it inadmissible.

12900k 4090
https://www.youtube.com/watch?v=vX7Odutb35g
12900k 3080
https://www.youtube.com/watch?v=pcWeNCDno7g
Where do the extra 46% FPS come from?

Gamers Nexus didn't benchmark the 4090 in CSGO at all. When they do benchmark CSGO they do it for their CPU reviews, but with a 3090ti at 1080p High, so again not really throwing me a bone here.

Hardware Unboxed skimped out on CSGO in their 4090 review as well. Like you said, they did not want to skew their game averages with CSGO results. They did however benchmark the 4090 in CSGO in their 13900k review, but not on uLLeticaL, instead on a demo from a pro game, which obviously has a lot more variation than a static map benchmark.

LTT, again, did their own fancy benchmark that neither uses uLLeticaL nor tests 1080p at all. But looking at the screenshots of their 3090ti and 4090 results at 4K, they look basically on par with those that you used in #697, don't you think?

https://i.imgur.com/NB2AJeC.png

But then, as soon as they drop the resolution to 1440p it behaves predictably, with the 4090 on top, followed by the 3090ti and finally the 3090. Surely the GPU should not have been influencing framerates that much, by your account. Sure, the increase from 3090 to 4090 is just 12.3% and from 3090ti to 4090 only 5.8%, but we are talking a several dozen FPS difference, not "a couple more" FPS as I understand from #697, and that's not even taking Low settings or 1080p into account.

https://i.imgur.com/DLssiBI.png
#701

cheetaz:
Very happy with the results. About 70% increase in average fps!
I use a 240Hz monitor, and with the 7700k, the frames would dip way below that quite often.
However with the 13600k now, I'd say 90% of the time I'm over 240 fps

I’m surprised to hear this, especially with DDR5.
I would double check the configurations because I wouldn’t expect to see dips below 300 on dx8, even on an upward pub

#702

You need to understand that a GPU does not conjure details out of thin air. Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.

So you complained about 4K High benchmarks and pulled out 4K and 1440p Very High benchmarks instead. Not impressed, I must say.

Let's go through the benchmarks in reverse order:
We're not going to play some mystery guessing game, so the LTT benchmarks are completely worthless.
As you can see, at 4K the 4090 driver is fucked, at 4K the 6950XT is the fastest and at 1440p it falls behind even the 3090. So the only apples to apples comparison are the 3090 Ti and 3090 and their difference is ... 39 fps, regardless of resolution and total fps. Logically the prediction would be that at 1080p Very Low we'd see something like 1000 fps vs 1039 fps. Not very impressive.

The 3080 vs 4090 comparison is pretty scuffed: the 4090 shows a 450-500 fps average on 1080p low and 500-550 average on high (no idea how the current fps is so much higher than the average for the entire run), while the 3080 shows 650-ish on low and 670-ish on high. So I'm not sure what you're trying to show.
Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.
No idea where you're getting 46% more fps from, but it's most definitely not an identical setup and not even the same benchmark.
Again, worthless.

Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is surprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.

Which brings us the only benchmark that could've been relevant.
12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5Ghz / E Cores 4.3Ghz / Cache 4.5Ghz
the other is DDR5 7200 30-41-40-28 12900K (SP103) / P Cores 5.7Ghz / E Cores off / Cache 5.2Ghz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.
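
(If anyone wants to test the E-core scheduling part of that claim themselves, here's a rough sketch. The core layout is an assumption: on a 12900K, logical CPUs 0-15 are usually the P-core threads and 16-23 the E-cores; check your own topology in Task Manager or with lscpu --extended before copying this, and the PID is obviously hypothetical.)

[code]
import psutil  # third-party: pip install psutil

P_CORE_THREADS = list(range(0, 16))   # assumed 12900K layout: 8 P-cores x 2 threads
game = psutil.Process(pid=12345)      # hypothetical PID of the game process
game.cpu_affinity(P_CORE_THREADS)     # pin it to the P-cores only
print(game.cpu_affinity())            # verify the new affinity mask
[/code]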

#703

Setsul:
So you complained about 4K High benchmarks and pulled out 4K and 1440p Very High benchmarks instead. Not impressed, I must say.

What the fuck is Very High? CSGO doesn't have rendering distance sliders, hairworks, weather, NPC density, ray tracing, etc. There is only one possible thing that can be set to "Very High" and it is the model quality. And let me tell you right now, the difference that makes is 100% pure snake oil. All the reviewers and benchmarkers make up their own boogie way of saying Highest/Ultra Settings/Ultra High/Very High, so for all intents and purposes "High" in Source is essentially Ultra Settings.

Setsul:
Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.

I disagree. If in GPU heavy titles many different CPUs level out their performance at 4K and half the time at 1440p, while at 1080p they show incremental differences, then why should a CPU heavy game care whatsoever about your resolution if no new objects are being drawn, only each frame being scaled up in resolution by the GPU, increasing its load. So from my standpoint, in CPU heavy titles, differences in performance between GPUs come either from class step-ups/downs or architectural differences.

Setsul:
Let's go through the benchmarks in reverse order:
We're not going to play some mystery guessing game, so the LTT benchmarks are completely worthless.
As you can see, at 4K the 4090 driver is fucked, at 4K the 6950XT is the fastest and at 1440p it falls behind even the 3090. So the only apples to apples comparison are the 3090 Ti and 3090 and their difference is ... 39 fps, regardless of resolution and total fps. Logically the prediction would be that at 1080p Very Low we'd see something like 1000 fps vs 1039 fps. Not very impressive.

Setsul: >Driver fucked

Pretty obvious, that's why we didn't see any difference whatsoever on either your screenshot or LTT's at 4K.

Setsul: >So the only apples to apples comparison are the 3090 Ti and 3090

I'm happy that you accept that comparison graciously and acknowledge the 39fps difference, but it is essentially two of the same cards, except one clocks marginally higher and that's why they sell it overpriced with a Ti label slapped onto it. Shouldn't we also expect the same level of performance scaling down to the 3080 and 3080Ti? What about the 2080Ti, which has been found to be slower than a 3070Ti? What about the 1080Ti? Let's revisit your initial claim:

E.g. if a 1070 gets 480 fps and a 1080 490 fps and a 2080 495 fps and a 2080 Ti 496 fps then you can damn well be sure that a 4090 isn't going to get 550 fps but rather 499 fps.

But then miraculously the 3090 and 3090ti have 39fps between them in the same generation, so what gives? According to that quote, the gap should have been 5fps, or, as you said, going from a 2080ti to a 4090 will be 496->499 fps? Isn't it reasonable to deduce that the generational leap is much greater than that, especially starting with the 30 series, and now again with the 40 series? Because the 2080Ti was like intel in the past decade, feeding you 10% more at twice the price. The 30 series were a reasonable performance bump at the cost of heavily inflated power draw, but finally the new product actually was meaningfully faster. Now the same thing is happening with the 40 series, but the pricing is outrageous.

Setsul:
Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is surprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.

Well, yes. That's the point in the first place: all of these show that at 1080p the performance is vastly different from 4K, and that settings also play a role, albeit a smaller one. Your initial post fed me 4K High results bordering 480fps. Why are you explaining to me that the gains are negligible while showing 4K High results when:

  1. The generational improvements can barely be traced, if at all
  2. The ports on the card cap out at 4K 120hz
  3. Who plays at 4K High professionally or competitively in the first place?

And the point is that at 1080p or even at 1440p the performance difference is pronounced, not in your 5fps increments.

Setsul:
The 3080 vs 4090 comparison is pretty scuffed: the 4090 shows a 450-500 fps average on 1080p low and 500-550 average on high (no idea how the current fps is so much higher than the average for the entire run), while the 3080 shows 650-ish on low and 670-ish on high. So I'm not sure what you're trying to show.
No idea where you're getting 46% more fps from, but it's most definitely not an identical setup and not even the same benchmark.

I was looking at the current framerate without looking at the average, forgot to mention that, and was fully prepared to discard that benchmark altogether because, again, it's not even uLLeticaL but a guy running around a bot-filled dust2, so I'm gonna concede this.

Setsul:
Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.

In regards to the lower average, I think the NVIDIA driver is at play once again. So allegedly +200mhz on the all-core and faster and tighter RAM, yet the average is lower by 100fps while the current fps is 300 higher? 1% and .1% lows driver shit is at play again.

Setsul:
12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5Ghz / E Cores 4.3Ghz / Cache 4.5Ghz
the other is DDR5 7200 30-41-40-28 12900K (SP103) / P Cores 5.7Ghz / E Cores off / Cache 5.2Ghz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.

https://www.youtube.com/watch?v=-jkgFwydrPA
Alright, 12900KS at an unspecified clock, allegedly turbos to 5.5, unspecified DDR5 RAM, let's pretend that it's JEDEC 4800, and a 3060ti at 1080p Low.
So according to what I consider previously agreed-upon data, a 3090 and a 3090ti have about 39fps between them in a sterile environment. Which should mean the difference the faster DDR5 kit, +200 all-core and E-Cores off provide is 52.75fps, with the remaining 39fps taken out of the equation. Then how does a 3060ti score 136.67 fps below a 3090 with an identically shitty JEDEC kit of RAM at the same resolution and graphical settings, with the main or even the only difference being a step down in GPU class?
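
(Reproducing the arithmetic in that paragraph with the numbers already quoted in #700 above; the 39 fps figure is the LTT-derived 3090 Ti vs 3090 gap both posters accepted.)

[code]
fps_3090ti_tuned = 928.18   # 12900K 5.7, E-cores off, DDR5 7200, 1080p Low (quoted in #700)
fps_3090_stock   = 836.43   # 12900K 5.5, E-cores on,  DDR5 6400, 1080p Low
gpu_gap          = 39.0     # accepted 3090 Ti vs 3090 difference

total_gap     = fps_3090ti_tuned - fps_3090_stock   # 91.75 fps
cpu_ram_share = total_gap - gpu_gap                 # 52.75 fps attributed to CPU/RAM tuning
print(total_gap, cpu_ram_share)
[/code]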

dm me your discord or something we are clogging the thread with this nerd shit, understandable should you not choose to engage with me out of time practicality or own mental well being concerns

#704

jnki:
What the fuck is Very High? CSGO doesn't have rendering distance sliders, hairworks, weather, NPC density, ray tracing, etc. There is only one possible thing that can be set to "Very High" and it is the model quality. And let me tell you right now, the difference that makes is 100% pure snake oil. All the reviewers and benchmarkers make up their own boogie way of saying Highest/Ultra Settings/Ultra High/Very High, so for all intents and purposes "High" in Source is essentially Ultra Settings.

Very High is literally what the LTT screenshot says. Not my fault you linked that.

jnki:
Setsul: Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.
I disagree. If in GPU heavy titles many different CPUs level out their performance at 4K and half the time at 1440p, while at 1080p they show incremental differences, then why should a CPU heavy game care whatsoever about your resolution if no new objects are being drawn, only each frame being scaled up in resolution by the GPU, increasing its load. So from my standpoint, in CPU heavy titles, differences in performance between GPUs come either from class step-ups/downs or architectural differences.

You can disagree with how games work, it's not going to change the facts.
And no, the frames aren't just scaled up. That would be DLSS.

jnki:
I'm happy that you accept that comparison graciously and acknowledge the 39fps difference, but it is essentially two of the same cards, except one clocks marginally higher and that's why they sell it overpriced with a Ti label slapped onto it. Shouldn't we also expect the same level of performance scaling down to the 3080 and 3080Ti? What about the 2080Ti, which has been found to be slower than a 3070Ti? What about the 1080Ti? Let's revisit your initial claim:
E.g. if a 1070 gets 480 fps and a 1080 490 fps and a 2080 495 fps and a 2080 Ti 496 fps then you can damn well be sure that a 4090 isn't going to get 550 fps but rather 499 fps.
But then miraculously the 3090 and 3090ti have 39fps between them in the same generation, so what gives? According to that quote, the gap should have been 5fps, or, as you said, going from a 2080ti to a 4090 will be 496->499 fps? Isn't it reasonable to deduce that the generational leap is much greater than that, especially starting with the 30 series, and now again with the 40 series? Because the 2080Ti was like intel in the past decade, feeding you 10% more at twice the price. The 30 series were a reasonable performance bump at the cost of heavily inflated power draw, but finally the new product actually was meaningfully faster. Now the same thing is happening with the 40 series, but the pricing is outrageous.

1. Yes, the numbers I pulled out of my ass aren't gospel. A 3090 gets different fps in CS:GO at 4K than what I predicted for a 1080 in TF2 at 1080p on the benchmark demo. What is your point?
2. Nice rant about nVidia. What is your point?
3. You accept that the whole benchmark is fucked and not useable, but want to extrapolate from that what it means for a different game, at different settings, with different GPUs just because "that's how it should be" aka "things should be how I want them to be". What is your point?

jnki:
Setsul: Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is surprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.
Well, yes. That's the point in the first place: all of these show that at 1080p the performance is vastly different from 4K, and that settings also play a role, albeit a smaller one. Your initial post fed me 4K High results bordering 480fps. Why are you explaining to me that the gains are negligible while showing 4K High results when:
  1. The generational improvements can barely be traced if at all
  2. The ports on the card cap out at 4K 120hz
  3. Who plays competitve at 4K High professionally or competitively in the first place
And the point is that at 1080p or even at 1440p the performance difference is pronounced, not in your 5fps increments.

Can you shut up about "generational improvements"? No one cares.
What does the refresh rate have to do with this? Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?
Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?
Yes, the difference between 1080p and 4K or low and high settings is more than 5fps on the same card. That is literally the opposite of what we're arguing about. Again, why bother linking all that? What is your point?

jnki:
I was looking at the current framerate without looking at the average, forgot to mention that, and was fully prepared to discard that benchmark altogether because, again, it's not even uLLeticaL but a guy running around a bot-filled dust2, so I'm gonna concede this.
Setsul: Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.
In regards to the lower average, I think the NVIDIA driver is at play once again. So allegedly +200mhz on the all-core and faster and tighter RAM, yet the average is lower by 100fps while the current fps is 300 higher? 1% and .1% lows driver shit is at play again.

Again, it's a different setup with a different, unspecified overclock, likely DDR4 vs DDR5, and the current fps counter is completely bogus. At one point it shows 3.9ms frametime and 900+ fps.

You can't argue that 4 out of 5 values are too low because the driver fucked up, but the 1 value that you want to be true is beyond reproach and that current fps should be used instead of average fps because it's the one true average.

jnki:
Setsul: 12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5Ghz / E Cores 4.3Ghz / Cache 4.5Ghz
the other is DDR5 7200 30-41-40-28 12900K (SP103) / P Cores 5.7Ghz / E Cores off / Cache 5.2Ghz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.
https://www.youtube.com/watch?v=-jkgFwydrPA
Alright, 12900KS unspecified clock, allegedly turbos to 5.5, unspecified DDR5 RAM, lets pretend that its JEDEC 4800, and a 3060ti at 1080p Low.
So according to previously I consider agreed upon data, a 3090 and a 3090ti have about 39fps between them in sterile environment. Which should mean the difference the faster DDR5 kit, +200 allcore and E-Cores off provide is 52.75fps with the remaining 39fps taken out of the equation. Then how does a 3060ti score 136.67 fps below a 3090 with identically shitty JEDEC kit of RAM at the same resolution and graphical settings with the main or even the only difference being a stepdown in a GPU class?

dm me your discord or something we are clogging the thread with this nerd shit, understandable should you not choose to engage with me out of time practicality or own mental well being concerns

No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHz CL36 RAM to a 12900KS with unknown clockspeed and unknown RAM, on a completely different benchmark, and comparing that with yet another 12900K setup is completely meaningless.

You're arguing from your conclusion, that the fps are different because the GPU is different, while completely ignoring that CPU, RAM, and the benchmark itself are also different.
It's just pure garbage.

[quote=jnki]What the fuck is a Very High? CSGO doesnt have rendering distance sliders, hairworks, weather, NPC density, ray tracing etc. There is only one possible thing that can be set to "Very High" and it is the model quality. And let me tell you right now the difference that makes is 100% pure snake oil. All the reviewers and benchmarkers make up their own boogie way of saying Highest/Ultra Settings/Ultra High/Very High, so for all intents and purposes "High" in Source is essentially Ultra Settings.
[/quote]
Very High is literally what the LTT screenshot says. Not my fault you linked that.
[quote=jnki]
[quote=Setsul]Higher settings and higher resolution also mean more CPU load. The difference in GPU load is usually larger, so you're more likely to be GPU bound, but it's not always the case. Source engine is especially bad about this, where way too much is done on the CPU.[/quote]
I disagree. If at GPU heavy titles many different CPUs level out their performance at 4K and half the time 1440p, while at 1080p show incremental differences, then why should a CPU heavy game care whatsoever about your resolution if no new objects are being drawn, only each frame being scaled up in the resolution by the GPU increasing its load. So from my standpoint at CPU heavy titles, differences in performance from GPUs come either from a class step-up/downs or architectural differences.
[/quote]
You can disagree with how games work, it's not going to change the facts.
And no, the frames aren't just scaled up. That would be DLSS.
[quote=jnki]
I'm happy that you accept that comparison graciously and acknowledge the 39fps difference, but it is essentially two of the same cards, except the one clocks marginally higher and thats why they sell it overpriced with a Ti label slapped onto it. Shouldn't we also expect the same level of performance scaling down to 3080 and 3080Ti? What about 2080Ti that has been found to be slower than 3070Ti? What about 1080Ti. Lets revisit your initial claim [quote=E.g. if a 1070 gets 480 fps and a 1080 490 fps and a 2080 495 fps and a 2080 Ti 496 fps then you can damn well be sure that a 4090 isn't going to get 550 fps but rather 499 fps.][/quote]But then miraculously 3090 and 3090ti have 39fps between in the same generation, so what gives? According to that quote, the gap should have been 5fps, or as you said, going from 2080ti to 4090 will be 496->499 fps? Isn't it reasonable to deduct that the generational leap is much greater than that, especially starting 30 series, and now again at 40 series. Because 2080Ti was like intel in the past decade, feeding you 10% more at twice the price. 30 series were a reasonable performance bump at a cost of heavily inflated power draw, but finally the new product actually was meaningfully faster. Now the same thing is happening with the 40 series, but the pricing is outrageous.
[/quote]
1. Yes, the numbers I pulled out of my ass aren't gospel. A 3090 gets different fps in CS:GO at 4K than what I predicted for a 1080 in TF2 at 1080p on the benchmark demo. What is your point?
2. Nice rant about nVidia. What is your point?
3. You accept that the whole benchmark is fucked and not useable, but want to extrapolate from that what is means for a different game, at different settings, with different GPUs just because "that's how it should be" aka "things should be how I want them to be". What is your point?
[quote=jnki]
[quote=Setsul]Then you got a bunch of benchmarks showing that lower details/resolution lead to higher fps. No one is suprised by that. See right at the start, GPUs aren't magic devices that pull more pixels out of the aether. Higher details/resolution also mean more CPU load.[/quote]
Well, yes. That's the point in the first place, all of these show that at 1080p the performance is vastly different from 4K, and that settings play albeit a smaller role as well. Your initial post fed me 4K High results bordering 480fps. Why are you explaining to me the gains are negligible while showing 4K High results when:
[olist]
[*] The generational improvements can barely be traced if at all
[*] The ports on the card cap out at 4K 120hz
[*] Who plays competitve at 4K High professionally or competitively in the first place
[/olist]And the point is that at 1080p or even at 1440p the performance difference is pronounced, not in your 5fps increments.
[/quote]
Can you shut up about "generational improvements"? No one cares.
What does the refresh rate have to do with this? Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?
Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?
Yes, the difference between 1080p and 4K or low and high settings is more than 5fps on the same card. That is literally the opposite of what we're arguing about. Again, why bother linking all that? What is your point?
[quote=jnki]
I was looking at the current framerate without looking at the average, forgot to mention that, and was fully prepared to discard that benchmark altogether because, again, it's not even uLLeticaL but a guy running around a bot-filled dust2, so I'm gonna concede this.
[quote=Setsul]Where could the difference be coming from? I don't know, maybe it's the fact that the 3080 was run with 4800 MHz RAM and the 4090 with 6200 MHz? CPU clock is also different.
[/quote]In regards to the lower average, I think the NVIDIA driver is at play once again. So allegedly 200MHz more on the all-core and faster, tighter RAM, yet the average is lower by 100fps while the current fps is 300 higher? The driver's 1% and 0.1% lows issue is at play again.
[/quote]
Again, it's a different setup with a different, unspecified overclock, likely DDR4 vs DDR5, and the current fps counter is completely bogus. At one point it shows 3.9ms frametime and 900+ fps.

You can't argue that 4 out of 5 values are too low because the driver fucked up, but the 1 value that you want to be true is beyond reproach and that current fps should be used instead of average fps because it's the one true average.
[quote=jnki]
[quote=Setsul]12900K + 3090 vs 12900K + 3090 Ti.
Except one is DDR5 6400 30-37-37-26; 12900K (SP103) / P Cores 5.5Ghz / E Cores 4.3Ghz / Cache 4.5Ghz
the other is DDR5 7200 30-41-40-28 12900K (SP103) / P Cores 5.7Ghz / E Cores off / Cache 5.2Ghz
And there's your answer.
CS:GO getting scheduled on E-Cores sucks and source engine loves fast RAM and Cache. That's all there is to it.[/quote]
https://www.youtube.com/watch?v=-jkgFwydrPA
Alright, a 12900KS at an unspecified clock, allegedly turbos to 5.5, unspecified DDR5 RAM, let's pretend it's JEDEC 4800, and a 3060Ti at 1080p Low.
So according to data I previously considered agreed upon, a 3090 and a 3090Ti have about 39fps between them in a sterile environment. Which should mean the difference provided by the faster DDR5 kit, +200MHz all-core and E-Cores off is 52.75fps, with the remaining 39fps taken out of the equation. Then how does a 3060Ti score 136.67fps below a 3090 with an identically shitty JEDEC kit of RAM, at the same resolution and graphical settings, when the main or even the only difference is a step down in GPU class?

DM me your discord or something, we are clogging the thread with this nerd shit. Understandable if you choose not to engage with me for the sake of your time or your own mental well-being[/quote]
No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHZ CL36 RAM to an 12900KS with unknown clockspeed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.

You're arguing from your conclusion, that the fps are different because the GPU is different, while completely ignoring that CPU, RAM, and the benchmark itself are also different.
It's just pure garbage.
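A quick way to see why the on-screen counter can't be trusted, using only arithmetic on the numbers mentioned above. A minimal sketch, nothing here is specific to either setup:

[code]
# fps and frametime are reciprocals; a timedemo average is just frames / seconds.

def fps_from_frametime_ms(ms_per_frame: float) -> float:
    return 1000.0 / ms_per_frame

def timedemo_average_fps(frames: int, seconds: float) -> float:
    return frames / seconds

# The on-screen counter in that video shows 3.9 ms/f next to 900+ fps:
print(fps_from_frametime_ms(3.9))   # ~256 fps, nowhere near 900, so the counter
                                    # contradicts itself

# The "N frames S seconds F fps" summaries posted in this thread are the
# self-consistent number: F = N / S.
print(timedemo_average_fps(4812, 18.662))  # ~257.9 fps
[/code]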
705
#705
0 Frags +
SetsulYes, the numbers I pulled out of my ass aren't gospel.

Thank you for your blunt honesty

SetsulAnd no, the frames aren't just scaled up. That would be DLSS.

Yes, I'm aware they aren't magically "scaled up" but rendered at a substantially higher resolution, using more bandwidth and CPU time. I was simply making an exaggerated oversimplification.

SetsulYou can disagree with how games work, it's not going to change the facts.

Neither will it change the fact that somehow CPU heavy games still react to GPUs getting better?

SetsulNice rant about nVidia. What is your point?

That we finally get meaningful uplifts thanks to the generational improvements that can likely improve fps even in CPU heavy shit we play but uh

SetsulCan you shut up about "generational improvements"? No one cares.SetsulAre you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?

Do I? No. Mentioning the current upper bandwidth cap is a byproduct of me trying to tell you that we have neither the screens nor the interfaces capable of running 4K 480Hz. In fact we don't even want to, at least not yet. AKA "No one cares". Then why did you show 4K results in the first place, when 1080p is all the majority cares about in competitive shooters?

SetsulWhy are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?

I'm upset because you pull up with 4K results when, and especially in Source,

No one cares.

pull some numbers out of your ass and then

Are you going to see a noticeable difference between a 1080 and a 4090? No.

when the card is three times faster

SetsulYes, the difference between 1080p and 4K or low and high settings is more than 5fps on the same card. That is literally the opposite of what we're arguing about. Again, why bother linking all that? What is your point?

Frankly, I don't find it the complete opposite of what we were arguing about. Right off the bat you show me 4K High results, where it's obvious that the card, instead of pushing more frames at 1080p, has to render frames four times the size we are interested in; then I find equally scuffed 4K results, and we see that the performance difference between the top cards at that resolution is tiny at best, if present at all. I try to make the point that

  1. Since we play at 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p
  2. Since the framerate difference between 4K and 1080p is enormous, ranging in the hundreds on the same card, why would you, in sound mind, show me 4K results and then proceed to talk about how incremental the performance uplift would be
  3. Since the difference between top cards at 1080p is actually noticeable, isn't that worth focusing on
SetsulNo, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHZ CL36 RAM to an 12900KS with unknown clockspeed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.

The 39fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that

  1. There is still headroom at 1080p, which is likely going to be greater; it's just a matter of how quantifiable it is
  2. Since we only see the performance of 1000$+ cards, shouldn't we ask ourselves how much bigger the gap is to a definitely slower 2080Ti? What about a 2060? A 1060? How much performance should a current-generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU heavy title.

Now regarding the mysteriously clocked 12900KS. It was more of a comparison to the

12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps

I was aiming to take the 39fps out of the equation because it was present in the "scuffed benchmark". Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (which should be in line with the 5.5 12900K with the 3090), with JEDEC DDR5 and a 3060Ti, scored 699fps at 1080p Low. So then you are telling me that simply going from JEDEC DDR5 to 6400 gives you 136fps, and that 7200, +200 on the clock and dropping the efficiency cores nets you a 228fps total uplift. That is an Australian Christmas fucking miracle, don't you find? I may be delusional, but there is no way going from a 3060Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.

The funniest thing is: the best screen tech we have on hand today is 360Hz, which we can drive with a 12/13600K, so not even I fully understand why I am shilling 30 and 40 series cards when those cost more than the computer able to drive the game at 360Hz. In my defence, however, I was romanticising the possibility of 480Hz gaming in the very near future, and it appears to me that if the monitor were to be released tomorrow, the only way to get the last remaining performance droplets is by getting a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.

[quote=Setsul]Yes, the numbers I pulled out of my ass aren't gospel.[/quote]
Thank you for your blunt honesty
[quote=Setsul]And no, the frames aren't just scaled up. That would be DLSS.[/quote]
Yes, I'm aware they aren't magically "scaled up" but rendered at a substantially higher resolution, using more bandwidth and CPU time. I was simply making an exaggerated oversimplification.
[quote=Setsul]You can disagree with how games work, it's not going to change the facts.[/quote]
Neither will it change the fact that somehow CPU heavy games still react to GPUs getting better?
[quote=Setsul]Nice rant about nVidia. What is your point?[/quote]
That we finally get meaningful uplifts thanks to the generational improvements that can likely improve fps even in CPU heavy shit we play but uh
[quote=Setsul]Can you shut up about "generational improvements"? No one cares.[/quote]
[quote=Setsul]Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?[/quote]
Do I? No. Mentioning the current upper bandwidth cap is a byproduct of me trying to tell you that we have neither the screens nor the interfaces capable of running 4K 480Hz. In fact we don't even want to, at least not yet. AKA "No one cares". Then why did you show 4K results in the first place, when 1080p is all the majority cares about in competitive shooters?
[quote=Setsul]Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?[/quote]
I'm upset because you pull up with 4K results when, and especially in Source,[quote=]No one cares.[/quote]pull some numbers out of your ass and then[quote=Are you going to see a noticeable difference between a 1080 and a 4090? No.][/quote]when the card is three times faster
[quote=Setsul]Yes, the difference between 1080p and 4K or low and high settings is more than 5fps on the same card. That is literally the opposite of what we're arguing about. Again, why bother linking all that? What is your point?[/quote]
Frankly, I don't find it the complete opposite of what we were arguing about. Right off the bat you show me 4K High results, where it's obvious that the card, instead of pushing more frames at 1080p, has to render frames four times the size we are interested in; then I find equally scuffed 4K results, and we see that the performance difference between the top cards at that resolution is tiny at best, if present at all. I try to make the point that
[olist]
[*] Since we play at 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p
[*] Since the framerate difference between 4K and 1080p is enormous, ranging in the hundreds on the same card, why would you, in sound mind, show me 4K results and then proceed to talk about how incremental the performance uplift would be
[*] Since the difference between top cards at 1080p is actually noticeable, isn't that worth focusing on
[/olist][quote=Setsul]No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHZ CL36 RAM to an 12900KS with unknown clockspeed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.[/quote]The 39fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that
[olist]
[*] There is still headroom at 1080p, which is likely going to be greater; it's just a matter of how quantifiable it is
[*] Since we only see the performance of 1000$+ cards, shouldn't we ask ourselves how much bigger the gap is to a definitely slower 2080Ti? What about a 2060? A 1060? How much performance should a current-generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU heavy title.[/olist] Now regarding the mysteriously clocked 12900KS. It was more of a comparison to the
[quote]12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps[/quote]
I was aiming to take the 39fps out of the equation because it was present in the "scuffed benchmark". Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (which should be in line with the 5.5 12900K with the 3090), with JEDEC DDR5 and a 3060Ti, scored 699fps at 1080p Low. So then you are telling me that simply going from JEDEC DDR5 to 6400 gives you 136fps, and that 7200, +200 on the clock and dropping the efficiency cores nets you a 228fps total uplift. That is an Australian Christmas fucking miracle, don't you find? I may be delusional, but there is no way going from a 3060Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.

The funniest thing is: the best screen tech we have on hand [b]today[/b] is 360Hz, which we can drive with a [i]12/13[/i]600K, so not even I fully understand why I am shilling 30 and 40 series cards when those cost more than the computer able to drive the game at 360Hz. In my defence, however, I was romanticising the possibility of 480Hz gaming in the very near future, and it appears to me that if the monitor were to be released tomorrow, the only way to get the last remaining performance droplets is by getting a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.
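As an aside, the "we don't have the interfaces" point is easy to put rough numbers on. The sketch below only does the uncompressed pixel-rate arithmetic for 8-bit RGB and compares it against approximate effective link rates; it ignores blanking intervals and DSC, so treat the link figures as ballpark values rather than spec quotes:

[code]
# Rough check: uncompressed bandwidth needed for a given mode vs. typical link rates.
# Ignores blanking and DSC, so real requirements are somewhat higher than this.

def required_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int = 24) -> float:
    return width * height * refresh_hz * bits_per_pixel / 1e9

links = {  # approximate effective data rates in Gbit/s
    "DP 1.4 (HBR3)": 25.9,
    "HDMI 2.1": 42.6,
    "DP 2.1 (UHBR20)": 77.4,
}

for w, h, hz in [(1920, 1080, 360), (1920, 1080, 480), (3840, 2160, 480)]:
    need = required_gbps(w, h, hz)
    fits = [name for name, rate in links.items() if rate >= need]
    print(f"{w}x{h}@{hz}Hz needs ~{need:.1f} Gbit/s -> fits: {fits or 'none without DSC'}")
[/code]

4K at 480Hz lands around 95 Gbit/s of pixel data alone, which is why it doesn't fit any current link without compression.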
706
#706
2 Frags +
jnkiSetsulAre you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?Do I? No. Mentioning the current upper bandwidth cap is a byproduct of me trying to tell you that we have neither the screens nor the interfaces capable of running 4K 480Hz. In fact we don't even want to, at least not yet. AKA "No one cares". Then why did you show 4K results in the first place, when 1080p is all the majority cares about in competitive shooters?
[...]
The funniest thing is: the best screen tech we have on hand today is 360Hz, which we can drive with a 12/13600K, so not even I fully understand why I am shilling 30 and 40 series cards when those cost more than the computer able to drive the game at 360Hz. In my defence, however, I was romanticising the possibility of 480Hz gaming in the very near future, and it appears to me that if the monitor were to be released tomorrow, the only way to get the last remaining performance droplets is by getting a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.

This is just schizophrenic. "4K benchmarks above 120fps don't matter because we don't have the monitors"
"But the current fps counter says 900, ignore that the average says 500, this is really important"

jnkiSetsulWhy are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?I'm upset because you pull up with 4K results when and especially in SourceNo one cares.pull some numbers out of your ass and thenAre you going to see a noticeable difference between a 1080 and a 4090? No.when the card is three times faster

But your 4K high and 1440p high benchmarks I'm supposed to care about?

You started this whole bullshit with your theory that a 4090 would give more fps in TF2 at 1080p on low settings because of some secret sauce.
I said it's not going to make a difference.
I don't see how the 4090 being "3 times as fast" in completely different games on 4K is going to matter. The fps do not magically transfer.

jnki
  1. Since we play at 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p
  2. Since the framerate difference between 4K and 1080p is enormous, ranging in the hundreds on the same card, why would you, in sound mind, show me 4K results and then proceed to talk about how incremental the performance uplift would be
  3. Since the difference between top cards at 1080p is actually noticeable, isn't that worth focusing on
  1. Then why the fuck do you keep digging up 1440p high and other benchmarks?
  2. Show me a single fucking benchmark that's actually using the same system and benchmark, just with different GPUs, instead of mixing completely different setups until they "prove" your theory based on shitty math
  3. Is it? Show me a single fucking benchmark.
jnkiSetsulNo, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHZ CL36 RAM to an 12900KS with unknown clockspeed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.
The 39fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that
  1. There is still headroom at 1080p, which is likely going to be greater; it's just a matter of how quantifiable it is
  2. Since we only see the performance of 1000$+ cards, shouldn't we ask ourselves how much bigger the gap is to a definitely slower 2080Ti? What about a 2060? A 1060? How much performance should a current-generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU heavy title.

So there is a difference at 1440p on Very High Settings in CS:GO between a 3090 and a 4090. Why exactly does this mean there will be one in TF2 at 1080p on low?
Or are you trying to prove that the 4090 is significantly faster in general so it must always be significantly faster and the CPU limit is just a myth? What are you trying to show here?

And no, none of that is safe to assume.
Like I said, only the 3090 Ti and 3090 are apples to apples. If you extrapolate from those, the difference between the two should always be 39 fps no matter the resolution, which is obviously garbage.
If you extrapolate from the nonsensical 4090 4K results to "prove" that the difference between 4090 and 3090 (Ti) should be larger at 1080p then extrapolating in the other direction "proves" that at 8K the 3090 should actually be faster than a 4090, which is also complete garbage.

Garbage in, garbage out.

jnkiNow regarding the mysteriously clocked 12900KS. It was more of a comparison to the 12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps
I was aiming to take the 39fps out of the equation because it was present in the "scuffed benchmark". Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (which should be in line with the 5.5 12900K with the 3090), with JEDEC DDR5 and a 3060Ti, scored 699fps at 1080p Low. So then you are telling me that simply going from JEDEC DDR5 to 6400 gives you 136fps, and that 7200, +200 on the clock and dropping the efficiency cores nets you a 228fps total uplift. That is an Australian Christmas fucking miracle, don't you find? I may be delusional, but there is no way going from a 3060Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.

It's fucking bogus math.

Let me do the exact same thing. You had a benchmark of a 12900KS, presumably with stock clocks, which can sometimes boost to 5.5 on at most two cores, otherwise 5.2, some RAM, DDR5 JEDEC RAM according to you but it could just as well be DDR4, and a 3060 Ti, on unknown settings that look pretty low, "competitive settings" supposedly, versus a benchmark of a 12900K at 5.5/4.3 constant, DDR5 6400 MHz, and a 3090, on "low".

We can look at the thread and compare a 13600KF + 1660 Ti with a 13600K + 1080 Ti.
5.4 GHz, DDR4 3600 MHz, "competitive settings" for the former, stock clocks, DDR5 5600 MHz and "low" on the latter.

kindredbenchmark_test (mercenarypark)
new 13600K: 4812 frames 8.308 seconds 579.20 fps ( 1.73 ms/f) 81.025 fps variability
cheetazBenchmark_test.dem
4812 frames 10.726 seconds 448.61 fps ( 2.23 ms/f) 83.439 fps variability

Slightly higher CPU clocks but much lower RAM clocks, so I conclude that at least 35% of those extra 131 fps are from "generational improvements" of the 1660 Ti.

In conclusion, the 1660 Ti is faster than a 1080 Ti because it's newer.
That's what you're doing.
Benchmarks on vastly different setups are fucking worthless for these comparisons.
Stop doing it.

[quote=jnki]
[quote=Setsul]Are you arguing that anyone with a 120 Hz monitor should cap TF2 at 120 fps?[/quote]
Do I? No. Mentioning the current upper bandwidth cap is a byproduct of me trying to tell you that we have neither the screens nor the interfaces capable of running 4K 480Hz. In fact we don't even want to, at least not yet. AKA "No one cares". Then why did you show 4K results in the first place, when 1080p is all the majority cares about in competitive shooters?
[...]
The funniest thing is: the best screen tech we have on hand [b]today[/b] is 360Hz, which we can drive with a [i]12/13[/i]600K, so not even I fully understand why I am shilling 30 and 40 series cards when those cost more than the computer able to drive the game at 360Hz. In my defence, however, I was romanticising the possibility of 480Hz gaming in the very near future, and it appears to me that if the monitor were to be released tomorrow, the only way to get the last remaining performance droplets is by getting a two fucking thousand dollar PCB with a side topping of silicon covered in two kilos of aluminium and copper.
[/quote]
This is just schizophrenic. "4K benchmarks above 120fps don't matter because we don't have the monitors"
"But the current fps counter says 900, ignore that the average says 500, this is really important"

[quote=jnki]
[quote=Setsul]Why are you so upset about high resolution benchmarks when you link them yourself because there's nothing else available?[/quote]
I'm upset because you pull up with 4K results when and especially in Source[quote=]No one cares.[/quote]pull some numbers out of your ass and then[quote=Are you going to see a noticeable difference between a 1080 and a 4090? No.][/quote]when the card is three times faster
[/quote]
But your 4K high and 1440p high benchmarks I'm supposed to care about?

You started this whole bullshit with your theory that a 4090 would give more fps in TF2 at 1080p on low settings because of some secret sauce.
I said it's not going to make a difference.
I don't see how the 4090 being "3 times as fast" in completely different games on 4K is going to matter. The fps do not magically transfer.

[quote=jnki]
[olist]
[*] Since we play at 1080p and this thread is about 1080p, shouldn't we pay more attention to 1080p
[*] Since the framerate difference between 4K and 1080p is enormous, ranging in the hundreds on the same card, why would you, in sound mind, show me 4K results and then proceed to talk about how incremental the performance uplift would be
[*] Since the difference between top cards at 1080p is actually noticeable, isn't that worth focusing on
[/olist]
[/quote]
[olist]
[*] Then why the fuck do you keep digging up 1440p high and other benchmarks?
[*] Show me a single fucking benchmark that's actually using the same system and benchmark, just with different GPUs, instead of mixing completely different setups until they "prove" your theory based on shitty math
[*] Is it? Show me a single fucking benchmark.
[/olist]
[quote=jnki]
[quote=Setsul]No, we saw a 39 fps difference in a completely fucked up environment.
Trying to extrapolate from behaviour on a 7950X with 6000MHZ CL36 RAM to an 12900KS with unknown clockspeed, unknown RAM, on a completely different benchmark and comparing that with yet another 12900K setup is completely meaningless.[/quote]The 39fps at equal conditions indicates there is a difference in the first place. The 4090 managed an even greater difference, and at a lower resolution: 1440p. Therefore it is safe to assume that
[olist]
[*] There is still headroom at 1080p, which is likely going to be greater; it's just a matter of how quantifiable it is
[*] Since we only see the performance of 1000$+ cards, shouldn't we ask ourselves how much bigger the gap is to a definitely slower 2080Ti? What about a 2060? A 1060? How much performance should a current-generation high-end CPU owner expect from upgrading his 4-6 year old GPU if he wishes to improve his frames even further in a CPU heavy title.[/olist]
[/quote]
So there is a difference at 1440p on Very High Settings in CS:GO between a 3090 and a 4090. Why exactly does this mean there will be one in TF2 at 1080p on low?
Or are you trying to prove that the 4090 is significantly faster in general so it must always be significantly faster and the CPU limit is just a myth? What are you trying to show here?

And no, none of that is safe to assume.
Like I said, only the 3090 Ti and 3090 are apples to apples. If you extrapolate from those, the difference between the two should always be 39 fps no matter the resolution, which is obviously garbage.
If you extrapolate from the nonsensical 4090 4K results to "prove" that the difference between 4090 and 3090 (Ti) should be larger at 1080p then extrapolating in the other direction "proves" that at 8K the 3090 should actually be faster than a 4090, which is also complete garbage.

Garbage in, garbage out.

[quote=jnki]Now regarding the mysteriously clocked 12900KS. It was more of a comparison to the
[quote]12900k 5.7 3090ti DDR5 7200 30-41-40-28 1080p Low 928.18fps
12900k 5.5 3090 DDR5 6400 30-37-37-26 1080p Low 836.43fps[/quote]
I was aiming to take the 39fps out of the equation because it was present in the "scuffed benchmark". Very well. A 12900KS at stock clocks (I've found other game benchmarks on the channel), which according to Intel ARK should boost to 5.5 give or take (which should be in line with the 5.5 12900K with the 3090), with JEDEC DDR5 and a 3060Ti, scored 699fps at 1080p Low. So then you are telling me that simply going from JEDEC DDR5 to 6400 gives you 136fps, and that 7200, +200 on the clock and dropping the efficiency cores nets you a 228fps total uplift. That is an Australian Christmas fucking miracle, don't you find? I may be delusional, but there is no way going from a 3060Ti to a 3090 isn't responsible for at least 35% of that total gain at 1080p Low.
[/quote]
It's fucking bogus math.

Let me do the exact same thing. You had a benchmark of a 12900KS, presumably with stock clocks, which can sometimes boost to 5.5 on at most two cores, otherwise 5.2, some RAM, DDR5 JEDEC RAM according to you but it could just as well be DDR4, and a 3060 Ti, on unknown settings that look pretty low, "competitive settings" supposedly, versus a benchmark of a 12900K at 5.5/4.3 constant, DDR5 6400 MHz, and a 3090, on "low".

We can look at the thread and compare a 13600KF + 1660 Ti with a 13600K + 1080 Ti.
5.4 GHz, DDR4 3600 MHz, "competitive settings" for the former, stock clocks, DDR5 5600 MHz and "low" on the latter.
[quote=kindred]
[u]benchmark_test[/u] (mercenarypark)
new 13600K: 4812 frames 8.308 seconds 579.20 fps ( 1.73 ms/f) 81.025 fps variability[/quote]
[quote=cheetaz]
[b]Benchmark_test.dem[/b]
4812 frames 10.726 seconds 448.61 fps ( 2.23 ms/f) 83.439 fps variability[/quote]
Slightly higher CPU clocks but much lower RAM clocks, so I conclude that at least 35% of those extra 131 fps are from "generational improvements" of the 1660 Ti.

In conclusion, the 1660 Ti is faster than a 1080 Ti because it's newer.
That's what you're doing.
Benchmarks on vastly different setups are fucking worthless for these comparisons.
Stop doing it.
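For anyone skimming, the arithmetic being argued over boils down to the sketch below. Every fps value is taken from the two quoted runs; the 39 fps "GPU share" is jnki's assumption, which is exactly the disputed step:

[code]
# The disagreement is over how to attribute an fps delta between two runs that
# differ in several variables at once. The subtraction is trivial; splitting the
# result into "GPU share" and "RAM/clock share" is the unsupported step, because
# no pair of runs here changes only one variable.

fps_3090ti_7200 = 928.18   # 12900K 5.7 GHz, E-cores off, DDR5 7200, 3090 Ti
fps_3090_6400   = 836.43   # 12900K 5.5 GHz, E-cores on,  DDR5 6400, 3090

delta = fps_3090ti_7200 - fps_3090_6400
print(f"total gap: {delta:.2f} fps")   # ~91.75 fps

# jnki's split: 39 fps attributed to "3090 Ti vs 3090" (taken from a 4K run on a
# different CPU), with the remainder attributed to RAM/clocks/E-cores.
assumed_gpu_share = 39.0
print(f"implied RAM/clock share: {delta - assumed_gpu_share:.2f} fps")  # ~52.75 fps
# Nothing in these two runs tells you whether that 39/52.75 split is right;
# that's the "garbage in, garbage out" objection.
[/code]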
707
#707
21 Frags +

https://media.tenor.com/rgegj1AOR1IAAAAC/awkward-black.gif

[img]https://media.tenor.com/rgegj1AOR1IAAAAC/awkward-black.gif[/img]
708
#708
1 Frags +

i use 99999 hz monitor

i use 99999 hz monitor
709
#709
1 Frags +

but can it play dustbowl 32 player servers above 240 fps all the time?
can someone benchmark the game on a dxlevel that doesn't make the game look like a potato?
would love to see some new ryzen benchmarks too
or a video where i can see the fps fluctuation and the dips to 100 fps that completely destroy this game

but can it play dustbowl 32 player servers above 240 fps all the time?
can someone benchmark the game on a dxlevel that doesn't make the game look like a potato?
would love to see some new ryzen benchmarks too
or a video where i can see the fps fluctuation and the dips to 100 fps that completely destroy this game
710
#710
0 Frags +

i7-8700 @ 3.2 GHz
gtx 1050 ti
16x2 ddr4 @2666MHz
windows 10
1920x1080
headsfeet
benchmark_test

Comanglia Toaster with high quality shadows dx81

4812 frames 18.662 seconds 257.86 fps ( 3.88 ms/f) 52.694 fps variability

Comanglia Toaster with high quality shadows dx100

4812 frames 19.884 seconds 242.00 fps ( 4.13 ms/f) 47.922 fps variability

Mastercoms low dx81

4812 frames 18.881 seconds 254.85 fps ( 3.92 ms/f) 52.006 fps variability

I did not expect the one with shadows to have higher fps, weird

i7-8700 @ 3.2 GHz
gtx 1050 ti
16x2 ddr4 @2666MHz
windows 10
1920x1080
headsfeet
benchmark_test

Comanglia Toaster with high quality shadows dx81
[code]4812 frames 18.662 seconds 257.86 fps ( 3.88 ms/f) 52.694 fps variability[/code]

Comanglia Toaster with high quality shadows dx100
[code]4812 frames 19.884 seconds 242.00 fps ( 4.13 ms/f) 47.922 fps variability[/code]

Mastercoms low dx81
[code]4812 frames 18.881 seconds 254.85 fps ( 3.92 ms/f) 52.006 fps variability[/code]


I did not expect the one with shadows to have higher fps, weird
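Side note for anyone comparing runs like these: the timedemo summary lines are easy to parse, so you don't have to eyeball the differences. A minimal sketch; the regex just matches the "N frames S seconds F fps" format, and the three result strings are copied from this post:

[code]
import re

# Parse TF2 timedemo summary lines and compare average fps between runs.
LINE = re.compile(r"([\d.]+) frames ([\d.]+) seconds ([\d.]+) fps")

results = {
    "comanglia dx81 + shadows": "4812 frames 18.662 seconds 257.86 fps ( 3.88 ms/f) 52.694 fps variability",
    "comanglia dx100 + shadows": "4812 frames 19.884 seconds 242.00 fps ( 4.13 ms/f) 47.922 fps variability",
    "mastercoms low dx81": "4812 frames 18.881 seconds 254.85 fps ( 3.92 ms/f) 52.006 fps variability",
}

parsed = {name: float(LINE.search(line).group(3)) for name, line in results.items()}
baseline = parsed["comanglia dx81 + shadows"]
for name, fps in parsed.items():
    print(f"{name}: {fps:.2f} fps ({(fps / baseline - 1) * 100:+.1f}% vs dx81+shadows)")
[/code]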
711
#711
-3 Frags +
kindredcheetazVery happy with the results. About 70% increase in average fps!
I use a 240Hz monitor, and with the 7700k, the frames would dip way below that quite often.
However with the 13600k now, I'd say 90% of the time I'm over 240 fps

I’m surprised to hear this, especially with DDR5.
I would double check the configurations because I wouldn’t expect to see dips below 300 on dx8, even on an upward pub

I made a test benchmark video in Swiftwater, which I found to always have the lowest FPS, even more than upward.
You will see some dips as low as ~210 FPS on Swiftwater 4th point, and that's without it having all 24 players on it.
Do read the description of the video for some extra details

https://www.youtube.com/watch?v=145Y6LM2m5M

[quote=kindred][quote=cheetaz]
Very happy with the results. About 70% increase in average fps!
I use a 240Hz monitor, and with the 7700k, the frames would dip way below that quite often.
However with the 13600k now, I'd say 90% of the time I'm over 240 fps[/quote]

I’m surprised to hear this, especially with DDR5.
I would double check the configurations because I wouldn’t expect to see dips below 300 on dx8, even on an upward pub[/quote]

I made a test benchmark video in Swiftwater, which I found to always have the lowest FPS, even more than upward.
You will see some dips as low as ~210 FPS on Swiftwater 4th point, and that's without it having all 24 players on it.
Do read the description of the video for some extra details

[youtube]https://www.youtube.com/watch?v=145Y6LM2m5M[/youtube]
712
#712
2 Frags +

7600x stock
rtx3070
32gb 6000 cl36 (running on 5600, 6000 is not very stable)
windows 10
1920x1080
headsfeet
flat textures
mastercomfig low

benchmark1
2639 frames 5.820 seconds 453.43 fps ( 2.21 ms/f) 46.448 fps variability
benchmark_test
4812 frames 9.658 seconds 498.26 fps ( 2.01 ms/f) 84.618 fps variability

no windows optimizations yet, no OC, just fast bench test after building the rig

7600x stock
rtx3070
32gb 6000 cl36 (running on 5600, 6000 is not very stable)
windows 10
1920x1080
headsfeet
flat textures
mastercomfig low

benchmark1
2639 frames 5.820 seconds 453.43 fps ( 2.21 ms/f) 46.448 fps variability
benchmark_test
4812 frames 9.658 seconds 498.26 fps ( 2.01 ms/f) 84.618 fps variability

no windows optimizations yet, no OC, just fast bench test after building the rig
713
#713
-2 Frags +
cheetaz

how do i get my gun to be in the middle like that

[quote=cheetaz][/quote]
how do i get my gun to be in the middle like that
714
#714
0 Frags +
adysky7600x stock
rtx3070
32gb 6000 cl36 (running on 5600, 6000 is not very stable)
windows 10
1920x1080
headsfeet
flat textures
mastercomfig low

benchmark1
2639 frames 5.820 seconds 453.43 fps ( 2.21 ms/f) 46.448 fps variability
benchmark_test
4812 frames 9.658 seconds 498.26 fps ( 2.01 ms/f) 84.618 fps variability

no windows optimizations yet, no OC, just fast bench test after building the rig

dx8 or 9 ?

[quote=adysky]7600x stock
rtx3070
32gb 6000 cl36 (running on 5600, 6000 is not very stable)
windows 10
1920x1080
headsfeet
flat textures
mastercomfig low

benchmark1
2639 frames 5.820 seconds 453.43 fps ( 2.21 ms/f) 46.448 fps variability
benchmark_test
4812 frames 9.658 seconds 498.26 fps ( 2.01 ms/f) 84.618 fps variability

no windows optimizations yet, no OC, just fast bench test after building the rig[/quote]

dx8 or 9 ?
715
#715
1 Frags +
mousiopedx8 or 9 ?

dx8, after upgrading to windows 11 both benches are -20 fps, with 6000Mhz RAM now

edit: nvm im stupid, removed headsfeet when running the bench on win11

win11 bench with 6000mhz ram and headsfeet:
benchmark1

2639 frames 5.696 seconds 463.30 fps ( 2.16 ms/f) 45.660 fps variability

benchmark_test

4812 frames 9.281 seconds 518.46 fps ( 1.93 ms/f) 80.219 fps variability

[quote=mousiope]dx8 or 9 ?[/quote]
dx8, after upgrading to windows 11 both benches are -20 fps, with 6000Mhz RAM now

edit: nvm im stupid, removed headsfeet when running the bench on win11

win11 bench with 6000mhz ram and headsfeet:
benchmark1

2639 frames 5.696 seconds 463.30 fps ( 2.16 ms/f) 45.660 fps variability

benchmark_test

4812 frames 9.281 seconds 518.46 fps ( 1.93 ms/f) 80.219 fps variability
716
#716
2 Frags +

7900x stock
rtx 3080
32 GB ddr5 6000 MHz cl30
headfeet
mastercomfig low
dx 81

benchmark test
4812 frames 9.579 seconds 502.34 fps ( 1.99 ms/f) 74.454 fps variability

I normally play with mastercomfig medium low and shadows on for 460 FPS

7900x stock
rtx 3080
32 GB ddr5 6000 MHz cl30
headfeet
mastercomfig low
dx 81

benchmark test
4812 frames 9.579 seconds 502.34 fps ( 1.99 ms/f) 74.454 fps variability

I normally play with mastercomfig medium low and shadows on for 460 FPS
717
#717
0 Frags +

A quick question, if anyone has some good advice for me: the plan is to build around a proper CPU with 3200MHz CL16 RAM.
I have to choose between the R5 7600X and the i7-13700K - which one would benefit me the most?
(The gear is: 16 GB of DDR4 3200MHz RAM + an RTX3060 OC (the CPU is irrelevant now, but it's a 5900X atm))

A quick question, if anyone has some good advice for me: the plan is to build around a proper CPU with 3200MHz CL16 RAM.
I have to choose between the R5 7600X and the i7-13700K - which one would benefit me the most?
(The gear is: 16 GB of DDR4 3200MHz RAM + an RTX3060 OC (the CPU is irrelevant now, but it's a 5900X atm))
718
#718
4 Frags +

7600x is DDR5 RAM only, so you don't have a choice buddy

7600x is DDR5 RAM only, so you don't have a choice buddy
719
#719
1 Frags +

i7 12700kf 5.0
3080
2x8 3800cl15
win11

--------------------------------------------------------------------------------------------------
headsfeet
mastercomfig low
dx81
1920x1080

benchmark1
2639 frames 5.719 seconds 461.41 fps ( 2.17 ms/f) 40.155 fps variability
--------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------------------
mastercomfig low
lod=high
lighting_ex=high
anti_aliasing=msaa_8x
texture_filter=aniso16x
textures=ultra
outlines=high
dx98
1920x1080

benchmark1
2639 frames 6.390 seconds 412.97 fps ( 2.42 ms/f) 33.817 fps variability
--------------------------------------------------------------------------------------------------

i7 12700kf 5.0
3080
2x8 3800cl15
win11


--------------------------------------------------------------------------------------------------
headsfeet
mastercomfig low
dx81
1920x1080

benchmark1
2639 frames 5.719 seconds 461.41 fps ( 2.17 ms/f) 40.155 fps variability
--------------------------------------------------------------------------------------------------


--------------------------------------------------------------------------------------------------
mastercomfig low
lod=high
lighting_ex=high
anti_aliasing=msaa_8x
texture_filter=aniso16x
textures=ultra
outlines=high
dx98
1920x1080

benchmark1
2639 frames 6.390 seconds 412.97 fps ( 2.42 ms/f) 33.817 fps variability
--------------------------------------------------------------------------------------------------
720
#720
0 Frags +

8700k@5.0ghz ~1.35v
1070 6gb
2x8gb c vengeance 3200mhz 16cl xmp
stab.cfg (shadows disabled)
dx8.0
process lasso 1-11
viewmodels enabled (disabled for 03 03 2023 benchmarks)
lighthud
nohats

fps averages around 350 without nvidia inspector up and results below are with it up, I have no idea why it has an effect. 03032023: Can't recreate this so I likely forgot my pills

benchmark_test
4812 frames 12.145 seconds 396.22 fps ( 2.52 ms/f) 64.513 fps variability
4812 frames 11.790 seconds 408.14 fps ( 2.45 ms/f) 52.910 fps variability
4812 frames 11.842 seconds 406.36 fps ( 2.46 ms/f) 52.436 fps variability
4812 frames 11.731 seconds 410.20 fps ( 2.44 ms/f) 52.877 fps variability

benchmark_test 03 03 2023
4812 frames 12.766 seconds 376.94 fps ( 2.65 ms/f) 61.507 fps variability
4812 frames 12.516 seconds 384.46 fps ( 2.60 ms/f) 50.520 fps variability
4812 frames 11.547 seconds 416.72 fps ( 2.40 ms/f) 52.116 fps variability
4812 frames 11.595 seconds 415.02 fps ( 2.41 ms/f) 51.807 fps variability
benchmark1 03 03 2023
2639 frames 8.608 seconds 306.57 fps ( 3.26 ms/f) 30.969 fps variability
2639 frames 8.609 seconds 306.54 fps ( 3.26 ms/f) 28.821 fps variability
2639 frames 8.631 seconds 305.75 fps ( 3.27 ms/f) 29.601 fps variability
changes: 16:10 1680/1050 > 16:9 1920/1080
All 03 03 2023 benchmarks taken in one session, stable as of third benchmark
Thank you Iso!

8700k@5.0ghz ~1.35v
1070 6gb
2x8gb c vengeance 3200mhz 16cl xmp
stab.cfg (shadows disabled)
dx8.0
process lasso 1-11
viewmodels enabled (disabled for 03 03 2023 benchmarks)
lighthud
nohats

[s]fps averages around 350 without nvidia inspector up and results below are with it up, I have no idea why it has an effect. [/s] 03032023: Can't recreate this so I likely forgot my pills

benchmark_test
4812 frames 12.145 seconds 396.22 fps ( 2.52 ms/f) 64.513 fps variability
4812 frames 11.790 seconds 408.14 fps ( 2.45 ms/f) 52.910 fps variability
4812 frames 11.842 seconds 406.36 fps ( 2.46 ms/f) 52.436 fps variability
4812 frames 11.731 seconds 410.20 fps ( 2.44 ms/f) 52.877 fps variability

benchmark_test 03 03 2023
4812 frames 12.766 seconds 376.94 fps ( 2.65 ms/f) 61.507 fps variability
4812 frames 12.516 seconds 384.46 fps ( 2.60 ms/f) 50.520 fps variability
4812 frames 11.547 seconds 416.72 fps ( 2.40 ms/f) 52.116 fps variability
4812 frames 11.595 seconds 415.02 fps ( 2.41 ms/f) 51.807 fps variability
benchmark1 03 03 2023
2639 frames 8.608 seconds 306.57 fps ( 3.26 ms/f) 30.969 fps variability
2639 frames 8.609 seconds 306.54 fps ( 3.26 ms/f) 28.821 fps variability
2639 frames 8.631 seconds 305.75 fps ( 3.27 ms/f) 29.601 fps variability
changes: 16:10 1680/1050 > 16:9 1920/1080
All 03 03 2023 benchmarks taken in one session, stable as of third benchmark
Thank you Iso!
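Side note on repeated runs like these: a small sketch for summarising them, using the four benchmark_test results from the 03 03 2023 session above. Treating the first run as warm-up is a convention, not something this data alone proves:

[code]
from statistics import mean, stdev

# benchmark_test, 03 03 2023 session (average fps from the four runs above)
runs = [376.94, 384.46, 416.72, 415.02]

print(f"all runs:         mean {mean(runs):.1f} fps, stdev {stdev(runs):.1f}")
print(f"discarding first: mean {mean(runs[1:]):.1f} fps, stdev {stdev(runs[1:]):.1f}")
# The first run or two tend to be slower (often blamed on caching); reporting the
# mean of the later, stable runs gives a more repeatable number to compare with.
[/code]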