Actually looks somewhat reasonable
Account Details | |
---|---|
SteamID64 | 76561198009358827 |
SteamID3 | [U:1:49093099] |
SteamID32 | STEAM_0:1:24546549 |
Country | United States |
Signed Up | August 23, 2012 |
Last Posted | April 22, 2020 at 6:24 PM |
Posts | 2041 (0.5 per day) |
Game Settings | |
---|---|
In-game Sensitivity | 9 in./360 plus accel |
Windows Sensitivity | 6 |
Raw Input | 1 |
DPI | 1600 |
Resolution | 1680x1050 |
Refresh Rate | 250fps/60hz |
Hardware Peripherals | |
---|---|
Mouse | Razer Deathadder |
Keyboard | Quickfire TK Green |
Mousepad | Generic |
Headphones | Generic |
Monitor | Generic |
Somehow, QWTF is more player-friendly gameplay-wise than TFC (and thus FF) is, to the extent that I haven't seen a TFC/FF community last long without becoming reclusive, while QWTF's only real barrier is a "difficult" engine.
I would say in a wishy-washy abstract sense that it has the opposite problems of TF2.
I should shut up and get around to writing a retrospective.
Has the DA 2013's smoothing been proven or debunked?
Does it introduce input lag? Does it apply on low DPI steps?
I unironically want gamerfood, did they go out of business or something?
>(Oh no! The worst case would be a whole 1.3ms if G-Sync added no latency at all! What a tragedy!)
For reference, that's only 30% more than the difference between 500hz and 1000hz mouse input sampling. You won't notice it. If smoothness is what you're after, the jump from 120hz to 144hz is a bigger deal.
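Rough arithmetic for that 30% figure, as a sketch (assuming you compare the 1.3ms worst case against the full gap between polling intervals, 2ms at 500hz vs 1ms at 1000hz):

```python
# Back-of-envelope for the "30% more" claim above.
gsync_worst_case_ms = 1.3
polling_gap_ms = (1000 / 500) - (1000 / 1000)  # 2 ms - 1 ms = 1 ms
print(gsync_worst_case_ms / polling_gap_ms)    # 1.3x, i.e. ~30% more
```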
-
Any 16:10 120hz+ monitors that I wouldn't have to buy secondhand? The two in the OP are "out of stock".
Okay, we'll need more information to help, then.
Are you running anything that would interfere with the kernel? Do you have a realtime AV installed? What kinds of overlays do you have enabled, e.g. mumble, fraps, etc? Have you tried deleting (after backing up) your config and starting from scratch with single-threaded and otherwise default graphics settings?
"my name starts with a w so manual hacks never actually get to me" strikes again!
Did you try turning it off and on again?
>And how do you know it's lower latency? And no "I can clearly see it" doesn't count. Your brain is way too biased to be objective.
"Can't trust nuffin"
My 100hz is right there.
>Corruption is the best excuse I've ever heard for a spelling error. Stuff like that really doesn't make you look good in a paper.
This ain't a paper.
>So if a render takes 1-y and render 0 started at 0+x, the next render won't start until 1+x and won't finish until 2+x-y.
That would be true but we end up drawing over the waiting frame.
Also, your use of y here is slightly misleading, but assuming a perfectly consistent framerate, let's roll with it.
>Up until the tearing you get the higher input lag. Now tell me that having part of the frame with higher input lag and part of it with lower input lag and having screen tearing is good and feels smooth and keep a straight face while saying it.
It's not smooth, not at all.
Vsyncing without triple buffering makes your render start at point 1, then transfers the frame across points 2 to 3 (assuming we're pushing our panel or cable). Your crosshair (if it's in the middle) will show up at point 2.5, 1.5 into the future.
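If it helps, here's that timeline as a toy sketch (units are refresh intervals; it assumes input is sampled when the render starts and that the panel scans top to bottom over one full interval):

```python
# Toy model of double-buffered vsync: render starts on a sync point, the frame
# is swapped in at the next sync, and the panel scans it out over one interval.
def vsync_crosshair_latency(render_start=1.0):
    scanout_start = render_start + 1.0       # displayed starting at the next sync
    crosshair_drawn = scanout_start + 0.5    # middle of the screen is hit halfway
    return crosshair_drawn - render_start

print(vsync_crosshair_latency())  # 1.5 refresh intervals, i.e. 25 ms at 60 Hz
```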
TF2 tears with the "future" on the bottom (at least on my machine -- and yes, I checked). Assuming the most extreme case for tearing, you could have a render from 0.5 end at 1.5, then another render start at 1.5 and go up to the middle of the screen before being buffer swapped at 2.0. Now you'll have data from 0.5 display from 2.0 to 2.5, and data from 1.5 display from 2.5 to 3.0. You'll notice that the latency ranges from 1.5 to 2.0 and from 1.0 to 1.5.
However, this only happens when the render gets buffer swapped prematurely. If the engine renders the entire buffer in the span of 1.5~2.0, you'll get data from 1.5 displaying from 2.0 to 3.0. That gives the crosshair a nice 1.0 latency, down from the 1.5 earlier.
In this case it's only during the durations when a render overlaps the sync request that you get laggy tearing, which is totally dependent on how much time the game spends making data. Since the theoretical range is "zero time" to "it takes a frame", this puts the median frame latency at "half a frame plus half of however long it takes your monitor to draw a frame". That's where I pulled my 16ms vs 8ms from, and I was stupid to not make it more clear.
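Quick sketch of that "half a frame plus half a refresh" figure, assuming a 60hz panel and treating render time as the only other variable:

```python
# Median unsynced latency ~= half the render time plus half the scanout time.
def median_unsynced_latency_ms(fps, refresh_hz=60):
    frame_ms = 1000 / fps
    scanout_ms = 1000 / refresh_hz
    return frame_ms / 2 + scanout_ms / 2

print(median_unsynced_latency_ms(1000))  # ~8.8 ms -- the "8 ms" end
print(median_unsynced_latency_ms(60))    # ~16.7 ms -- no better than vsync
```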
When writing this I realized that the tearing on "easy" frames doesn't actually time like that, but I can't be assed to pull out the math for that; just note that my numbers for tearing on the 1.5~2.0 full render time frame's average latency are smaller than they should be. It's probably more like sqrt(16.66/framerendertime), but I pulled that out of my ass with "it wouldn't be higher for an infinitely fast frame".
However, this is moot, because good luck getting a modern DX game engine to give you true flip-buffered vsync. "Double buffered" vsync in CSGO of all games, which is the one game where they should try as hard as possible to give low latency options, uses a render queue. The input lag is horrendous. A 45FPS cap and no vsync is somehow better.
>Still can't verify whether or not it helped and that's a real problem.
Maybe you can't, but I can just go play at fullscreen 800x600 at any time for lower latency at the same framerate. It's not that hard. It's like the difference a 100hz monitor makes, minus the extra smoothness.
>addemendum
Corruption of addendum; so?
>I am aware of the problems with VSYNC and although the picture shows those quite nicely, it's irrelevant since we were talking about triple buffered VSYNC.
Triple buffered vsync under DX still starts rendering on the sync point. It doesn't overcome this basic restriction. Show me a DX9-10 game that doesn't behave like this in vsync.
>a cap at the refresh actually has higher input lag than normal VSYNC as long as the fps don't drop.
No, a cap at or just above the refresh rate has unstable input lag, not higher input lag. Only when you dip below native hz does non-synced input lag become higher due to buffer clash. When you're running at or above native hz, there are points in time where the frames will start to render further into the future than with vsync, resulting in less lag. The higher the framerate the game is capable of, the better.
>the capped one could have started anywhere between one render time before refresh 2 and slightly after refresh 0.
It would only have started rendering before refresh 1 and displayed on refresh 2 if you're running below native fps.
>And that worst case is that the frame starts rendering somewhere between refresh 0 and 1, the cap delays the new render and the next frame doesn't finish rendering until after refresh 2.
You know what caps dependent on monitor vertical phase are called? "Vsync". You know what kind isn't? The kind we're talking about when we say cap. The cap won't delay the next frame until after refresh 2, it'll delay the next frame until 1/caprate after the previous render started; if the render lasts longer than 1/caprate, that doesn't make it take 2/caprate. It just starts overwriting the frame it's already built and starts causing tearing. (note: this is a vast oversimplification; point being that "it works out")
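A minimal sketch of that pacing rule, assuming an idealized limiter that only spaces out render starts and never looks at the monitor's vertical phase:

```python
# An fps cap only enforces "next render starts 1/caprate after the previous
# one started"; a render that runs long just pushes the next start back by
# however long it actually took, never by a whole extra interval.
def next_render_start(prev_start_s, render_time_s, cap_fps):
    interval_s = 1.0 / cap_fps
    return prev_start_s + max(interval_s, render_time_s)

print(next_render_start(0.0, 0.004, 60))  # fast render: next start at ~16.7 ms
print(next_render_start(0.0, 0.020, 60))  # slow render: next start at 20 ms, not 33 ms
```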
>Yes, it all comes down to implementation, but there's also the option of letting the driver do the triple buffered VSYNC, not the API.
Then you get the nightmare that is the 5770 driver. Thanks, Microsoft!
>But you go around and recommend lowering the resolution to reduce input lag when it might in fact do the opposite for most.
No, I said that if you have a shitty connection to a great monitor, it can help, and one should try it. I didn't recommend everyone switch to 800x600 just because. You completely misunderstand me.
>Latin spelling is not a matter of choice. It hasn't changed in the last two millennia and you won't change it now.
/badlinguistics/
Spelling != words, and these are different words. Where did the word "romance" come from?
>Yet somehow in that picture rendering always starts at the same time as a refresh. Coincidence? I think not. It's normal VSYNC.
This is literally the fundamental "flaw" with modern vsync that I was showing. Old game consoles don't have this problem because they can program game logic in sync with the display. You can't do that with modern games because you have no idea how long a frame is going to last. If you run at a high framerate without vsync, you generate frames with inputs that are, on average, halfway closer to the point they're displayed. The point was to counter this:
"That means a normal 60fps cap is actually worse than VSYNC if you can consistently get 60fps. VSYNC eliminates that random delay."
With vsync, you're making the delay *always* be 16ms, when it's "on average" (actually depends on the rendering time) 8 otherwise. In a perfect world, it would be different.
Apologies for not connecting points well, but I thought order of inclusion would be enough.
>Nice argument from authority.
No, it was advice. There was no argument there.
>In that case you can use an fps cap to force a delay before the next frame starts rendering if the previous render time was short.
http://i.imgur.com/HXXI3ty.png
It would be so easy to eliminate tearing and "extra lag" if we just constantly rendered to one of three buffers OGL style, but nobody actually does that.
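Something like this, as a toy model (no real graphics API, just the bookkeeping):

```python
# Flip-style triple buffering: keep rendering into whichever buffer is neither
# on screen nor the newest finished frame; at each vsync the display grabs the
# newest finished one, so no queue of stale frames ever builds up.
buffers = {0, 1, 2}
front = 0        # buffer currently being scanned out
newest = None    # most recently completed render, if any

def pick_render_target():
    # never draw over the frame on screen or the one queued to show next
    return next(iter(buffers - {front, newest}))

def finish_render(buf):
    global newest
    newest = buf            # overwrites any pending frame instead of queueing it

def on_vsync():
    global front, newest
    if newest is not None:  # nothing new finished? keep showing the old frame
        front, newest = newest, None
```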
>And let's keep in mind that like I said, DirectX 9 isn't exactly state of the art anymore.
It would be nice if we didn't have any DX9-only games, but we do, and vsync buffering is almost never implemented properly.
The one time I used linux for TF2 I had noticeably lower input lag despite having half the frames. Take that how you will.
start at 7 inches per 360, adjust according to comfort.
>Progressive scan -> doesn't matter, limited by refresh rate
Limited by whichever is slower between your panel and the connection. Not some magic number that's the same between monitors.
>2. good monitor in terms of latency that uses the maximum transfer rate but for some reason is using a buffer which totally makes no sense if they care about latency so those monitors are rare/nonexistent -> normal latency + transfer time, still worse than progressive scan.
Let's say I have a monitor where it takes 1/90th of a second for it to blit the whole panel once it has its buffer, and it takes a millisecond to resize the buffer in its internal RAM. That's ~12ms to resize and display after getting the image. If it normally takes 1/60th of a second for it to receive a 1080p24 image, then getting an 800x600x15(16) image should take roughly a sixth of that time; that puts us at only ~15ms to send, resize, and display the low resolution image.
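Same numbers, spelled out (all of these are the assumptions from the example, not measurements):

```python
blit_ms = 1000 / 90            # panel scan-out once it has a full buffer: ~11.1 ms
rescale_ms = 1.0               # scaler work in the monitor's RAM
link_1080p_ms = 1000 / 60      # time to push a 1920x1080, 24-bit frame: ~16.7 ms

bits_low = 800 * 600 * 16
bits_full = 1920 * 1080 * 24
send_low_ms = link_1080p_ms * bits_low / bits_full  # ~2.6 ms, roughly a sixth

total_ms = send_low_ms + rescale_ms + blit_ms       # ~14.7 ms
print(round(send_low_ms, 1), round(total_ms, 1))
```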
For people who have *very good monitors* but do *not* have the right cables to go with them, this is a perfectly legitimate option. These are people that exist.
>DX9 does support triple buffering.
It has three buffers, but it's a queue. Hence "the order of buffer swapping is fixed". I don't see what's wrong here. Have fun with your two frames of forced input lag :)
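If you model the swap chain as a FIFO the game keeps full, the built-in lag falls straight out of the queue depth (toy model, not any actual API):

```python
# A render queue of depth N, where the display pops the oldest frame each
# refresh and the game pushes a new one as soon as a slot frees up: a frame
# pushed at refresh t isn't scanned out until refresh t + (N - 1).
def queued_frame_lag(depth):
    return depth - 1

print(queued_frame_lag(3))  # DX-style "triple buffering": 2 refreshes of baked-in lag
print(queued_frame_lag(2))  # plain double-buffered vsync: 1 refresh
```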
API edge cases that drivers don't implement properly, beware! Hail Vulkan!
>3. You've mixed up two cases.
Nearly every scaler is the full-buffer case. I'm very well aware of the others, but the fact that slow connections can make even fully buffered low-resolution images make sense is already the "worst" case.
>This is only an improvement if scaler latency + small res transfer time is less than large res transfer time and if the monitor buffers a frame no matter what instead of using progressive scan for the native resolution.
Again, these situations exist. It's not unicorns we're talking about.
>Is it a TV?
God, no. Throw all latency worries out the window.
>Did you mean addendum?
Everyone in my house says addemendum, so that's the word to me.
>[posts picture about normal vsync]
You would notice that's actually triple buffered vsync if you understood the dropped frame. Normally, sending and rendering the latest frame can't be done at the same time. That's one of the reasons that triple buffering exists.
(The fact that there are triple buffering implementations that don't involve timing sync on the gaming engine is off-topic. I was just showing the best case for DX vsync.)
>Re:image
You have no idea what you just did to that image. Come back after you write an emulator for 3d gaming hardware.
>Normal vsync line is completely wrong
>triple buffer missing refreshes
Misinterpreted this, move on.
>"You'll have to live with one added refresh time of input lag, even with uncapped fps."
That completely ignores the fact that DX vsync starts rendering the next frame as early as possible instead of as late as possible, regardless of how much performance the game could actually squeeze out of it.
>DX triple + cap @ 2x refresh
u wot m8? Why the fuck would you make DX triple-buffer without vsync, and if that's not what you're doing, how the hell do you make DX vsync to "2xrefresh"? The only thing I can even think of is playing the game in a window without an FPS cap with Aero enabled. That's two extra layers of indirection where literally anything could go wrong latency-wise.