>And how do you know it's lower latency? And no "I can clearly see it" doesn't count. Your brain is way too biased to be objective.
"Can't trust nuffin"
My 100Hz is right there.
>Corruption is the best excuse I've ever heard for a spelling error. Stuff like that really doesn't make you look good in a paper.
This ain't a paper.
>So if a render takes 1-y and render 0 started at 0+x, the next render won't start until 1+x and won't finish until 2+x-y.
That would be true, but we end up drawing over the waiting frame.
Also, your use of y here is slightly misleading, but assuming a perfectly consistent framerate, let's roll with it.
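To spell out the "drawing over the waiting frame" part, here's a throwaway python sketch of the two models (times in refresh intervals, constant render time, everything made up by me for illustration):

import math

def blocking_vsync(render_time, n=5):
    # the quoted model: render, stall until the vblank swap, only then start the next render
    starts, t = [], 0.0
    for _ in range(n):
        starts.append(round(t, 2))
        t = math.floor(t + render_time) + 1.0
    return starts

def draw_over_waiting(render_time, n=5):
    # what I'm describing: keep rendering into the back buffer, overwriting the frame
    # that's sitting there waiting for the next vblank
    starts, t = [], 0.0
    for _ in range(n):
        starts.append(round(t, 2))
        t += render_time
    return starts

print(blocking_vsync(0.7))      # [0.0, 1.0, 2.0, 3.0, 4.0] -- render cadence pinned to the refresh
print(draw_over_waiting(0.7))   # [0.0, 0.7, 1.4, 2.1, 2.8] -- renders keep coming at full speed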
>Up until the tearing you get the higher input lag. Now tell me that having part of the frame with higher input lag and part of it with lower input lag and having screen tearing is good and feels smooth and keep a straight face while saying it.
It's not smooth, not at all.
Vsyncing without triple buffering makes your render start at point 1, then transfer the frame across points 2 to 3 (assuming we're pushing our panel or cable). Your crosshair (if it's in the middle) will show up at point 2.5, 1.5 into the future.
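Worked out in the same made-up units, the 1.5 is just where the crosshair lands on the scanout minus when its data was sampled:

input_time    = 1.0   # render starts, input sampled
scanout_start = 2.0   # frame gets swapped in at the next vblank
scanout_end   = 3.0   # panel finishes drawing it one refresh later
crosshair_pos = 0.5   # crosshair sits halfway down the screen
crosshair_shown = scanout_start + crosshair_pos * (scanout_end - scanout_start)
print(crosshair_shown - input_time)   # 1.5 refresh intervals behind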
TF2 tears with the "future" on the bottom (at least on my machine -- and yes, I checked). Assuming the most extreme case for tearing, you could have a render from 0.5 end at 1.5, then another render start at 1.5 and only make it up to the middle of the screen before being buffer swapped at 2.0. Now you'll have data from 0.5 displaying from 2.0 to 2.5, and data from 1.5 displaying from 2.5 to 3.0. You'll notice that the latency ranges from 1.5 to 2.0 on the top half and from 1.0 to 1.5 on the bottom.
However, this only happens when the render gets buffer swapped prematurely. If the engine renders the entire buffer in the span of 1.5~2.0, you'll get data from 1.5 displaying from 2.0 to 3.0. That gives the crosshair a nice 1.0 latency, down from the 1.5 earlier.
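If you want to check my numbers, it's just "when each band of the screen gets drawn" minus "when its data was sampled" (same units as before, helper name made up):

def band_latency(data_time, display_start, display_end):
    # latency range across a band of the screen = scanline draw time minus data time
    return (display_start - data_time, display_end - data_time)

print(band_latency(0.5, 2.0, 2.5))   # (1.5, 2.0) -- stale top half in the torn case
print(band_latency(1.5, 2.5, 3.0))   # (1.0, 1.5) -- fresher bottom half in the torn case
print(band_latency(1.5, 2.0, 3.0))   # (0.5, 1.5) -- whole screen when the render finishes in time; crosshair lands at 1.0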
In this case you only get laggy tearing during the stretches where a render overlaps the sync request, which depends entirely on how much time the game spends making data. Since the theoretical range is "zero time" to "it takes a frame", this puts the median frame latency at "half a frame plus half of however long it takes your monitor to draw a frame". That's where I pulled my 16ms vs 8ms from, and I was stupid not to make it clearer.
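For the record, the 16ms vs 8ms is nothing more sophisticated than that arithmetic at a 60Hz refresh (my back-of-the-envelope numbers, not measurements):

refresh_ms = 1000 / 60        # ~16.7ms for the monitor to draw one frame at 60Hz

def median_latency_no_vsync(render_ms):
    # half a frame of render time plus half a refresh of scanout, as described above
    return render_ms / 2 + refresh_ms / 2

print(round(refresh_ms, 1))                   # 16.7 -- the ballpark "one frame" of vsync latency
print(round(median_latency_no_vsync(0), 1))   # 8.3  -- the no-vsync median with an instant render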
When writing this I realized that the tearing on "easy" frames doesn't actually time like that, but I can't be assed to pull out the math for it; just note that my average-latency numbers for the frame that takes the full 1.5~2.0 to render are smaller than they should be. It's probably more like sqrt(16.66/framerendertime), but I pulled that out of my ass from "it wouldn't be higher for an infinitely fast frame".
However, this is all moot, because good luck getting a modern DX game engine to give you true flip-buffered vsync. "Double buffered" vsync in CSGO of all games, the one game where they should be trying as hard as possible to give low-latency options, uses a render queue. The input lag is horrendous. A 45FPS cap and no vsync is somehow better.