#60
Correct. 1 for projectiles, 2 for hitscan. You should consider 2 for sniper since it's the class where hitreg matters the most.
Is there a reason that people are setting both cl_interp and cl_interp_ratio?
My understanding is that cl_interp was essentially discontinued, and that you should set cl_interp to 0 so that cl_interp_ratio is consistently used (where cl_interp_ratio "x" roughly translates to the previous cl_interp "x/updaterate")
You are right.
cl_interp_ratio "x" translates EXACTLY to the previous cl_interp "x/updaterate"
The problem is that one programmer forgot to remove cl_interp and some configs force cl_interp_ratio to stupid values (e.g. the etf2l config forces cl_interp_ratio 1, I don't know why) whereas cl_interp can't be limited to certain values.
cl_interp_ratio "x" translates EXACTLY to the previous cl_interp "x/updaterate"
The problem is that one programmer forgot to remove cl_interp and some configs force cl_interp_ratio to stupid values (e.g. the etf2l config forces cl_interp_ratio 1, I don't know why) whereas cl_interp can't be limited to certain values.
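A quick worked example of that translation, assuming the common cl_updaterate 66 (purely illustrative numbers):
cl_updaterate 66
cl_interp_ratio 2 // effective lerp = 2 / 66 ≈ 0.0303 s (≈30.3 ms)
cl_interp 0 // the old way to get the same lerp was to set cl_interp 0.0303 directly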
1. net_graph doesn't show the correct cmdrate on listen servers
2. Therefore both updaterate, cmdrate and tickrate of dedicated servers have to be 66 and not 66.67.
No. That's not what I was saying at all. I was specifically saying that when you set the updaterate to 66, the server is *not* sending at the game's tickrate, as you originally claimed it does in order to make cl_interp_ratio 1 span more than a single update window. I was specifically saying that reading off net_graph's active rates to know what rate the game is actually using is a bad idea because it's inaccurate, and I decided to figure that out because it's the only reasonable source of that claim you made that I could think of.
net_graph's active rates are off by (roughly, actually a little more) 1%. net_graph shows 66/s settings as 66.7/s. net_graph shows ~66.6 settings as ~67.5.
When you play on your own listen server, the game uses its internal tickrate for the input rate, regardless of your client and server settings. net_graph shows ~67.5/s for the active input rate when on your own listen server. If the game sent more than its internal tickrate of input messages, it would break, because it would be doing the exact same thing as speedhacking, literally. Therefore, the only reasonable conclusion to draw is that net_graph is inaccurate about active rates by ~1%, and that when you set your updaterate to 66, the server does not in fact send slightly more messages than that in order to make up for low interp.
Because I know that, I can say that the extrapolation spike meter is also inaccurate, because true random jitter absolutely positively must cause extrapolation on roughly 50% of frames due to how interp works. The extent of this extrapolation is dependent on refresh rate phase vs update phase. That's what your ms paint diagrams are showing to me. I already agreed to that.
However, the practical randomness of all this when you're using a monitor below ~66 Hz is part of how awful it is. The higher your refresh rate gets beyond that, the further along that extrapolation necessarily must be shown. That's not an excuse to ignore extrapolation at low refresh rates, however, because it's still a problem that happens and can at least be mitigated trivially by raising your interp slightly, to something like ratio 1.2 instead of 1.0 (so that gradual, constant increases in ping don't cause extrapolation on every frame).
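To put rough numbers on that suggestion, assuming cl_updaterate 66 and the 15 ms tick interval discussed later in the thread:
cl_interp_ratio 1.0 // lerp = 1.0 / 66 ≈ 15.2 ms, only ~0.2 ms above a 15 ms snapshot interval
cl_interp_ratio 1.2 // lerp = 1.2 / 66 ≈ 18.2 ms, ~3 ms of slack before jitter forces extrapolation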
You have no way of measuring the real tickrate, yet you claim to know exactly how much net_graph is off by.
wareya: net_graph's active rates are off by (roughly, actually a little more) 1%. net_graph shows 66/s settings as 66.7/s. net_graph shows ~66.6 settings as ~67.5.
So if 67.5/s in net_graph corresponds to an actual value of 66.67/s and it shows 67.5/s on a listen server doesn't that mean it's running at 66.67/s=15ms tickrate?
Am I mistaken, or is cl_interp_ratio blocked in most leagues?
ETF2L used to force it to 1, don't know about other leagues. You can't block the cvar itself, just restrict it to certain values.
If it's forced to bs values just talk with the admins about that or send them here.
Setsul: You have no way of measuring the real tickrate, yet you claim to know exactly how much net_graph is off by.
Yes, I do. The tickrate for input on a listen server absolutely, positively, MUST be equal to the game's internal tickrate. THIS ENGINE DOES NOT WORK ANY OTHER WAY. How hard is it to understand something so fundamental? If this is not the case then the game is fundamentally broken. This is literally a fundamental aspect of the Source engine. This is as fundamental as the fact that the DirectX renderer uses DirectX and the OpenGL renderer uses OpenGL. This is as fundamental as the fact that interpolation uses discrete messages. This is as fundamental as the fact that rockets use line collision instead of hull collision. If this aspect of the engine does not work in this way then several things are going to break in snowballing fashion.
The fact that rockets are lines is what lets rockets do consistent damage on directs, because the engine can accurately "map" them onto whatever they hit. That's why grenades don't have consistent damage on directs.
The fact that interpolation is a function of discrete messages allows player movement to be run at a static rate, which allows input prediction to be possible. Without discrete messages at a static rate, input prediction would be prohibitively expensive on the CPU even with optimization.
The fact that the input rate on a local listen server is equal to the game's tickrate (AND NOT HIGHER LIKE NETGRAPH SUGGESTS) allows the listen server to bypass a lot of the network stack. If it were any higher then the game's physics would break as the player's character would literally be doing the same thing as speedhacking. And that is not the case. The game engine can not accurately simulate input at rates above its tick rate. Therefore, net_graph is necessarily wrong.
Here are some other ingame counters that are wrong/inaccurate:
- cl_showfps 1 (measurement is inaccurate by roughly a millisecond per frame, causing the framerate to look like it's rapidly changing between two values when it's at a stable integer)
- ping on the scoreboard (it's reaalllly broken. uses an old calculation from goldsrc which doesn't work properly anymore)
- some HUD elements (like health) show information that's outdated by a frame
CALM DOWN!
Ok, now answer this question: Does TF2 run at a tickrate of 15ms=66.67/s?
Setsul: CALM DOWN!
Ok, now answer this question: Does TF2 run at a tickrate of 15ms=66.67/s?
Yes, TF2, CSS and DoD S run at a tick interval of 15 ms.
This is exactly what I'm trying to tell him, but he doesn't listen.
Except that I've said that several times during the course of the argument, and you fail to understand that I'm arguing about this:
Setsul: TF2 runs at 66.67 tickrate, not 66, which equals 15ms or 1/66.67ths of a second between messages. IF you use 66 rates a server will send you all snapshots, which is one every 15ms, not one every 15.2ms. That means that any ping jitter <0.2ms won't cause extrapolation.
It's a minor side effect.
This is a major point of your argument that cl_interp_ratio 1 is no worse than 1.2 and it's absolutely incorrect. I'm not arguing about the game's tickrate. It's 200 per 3 seconds. It's 66.667~. It's ~67.5 divided by 101% which is ~66.67. If you were paying attention to the content of my arguments at all you would notice that I understand this game does run at a ~66.667 tickrate. It's you who fails to understand what I'm trying to say.
net_graph does indeed show values that are too high, but that is not the point.
On a dedicated server net_graph is showing the same values for both 66 and 67. Using the same assumption that you made, that net_graph scales linearly, which is not a given, I conclude that the TF2 client is actually sending and/or receiving 66.67 packets per second in both cases.
Since snapshots are taken at a rate of one every 15ms, holding them back to get an interval of 15.2ms just to meet user-specified values doesn't make any sense. If TF2 used the values the user set, speedhacking would be ridiculously easy. The fact that TF2 just plain ignores any values higher than 66/67 for cmdrate and updaterate tells me that this isn't the case.
Setsul: net_graph is showing the same values for both 66 and 67
Are you sure you removed the rate max? (set sv_maxupdaterate 67) Because it absolutely isn't doing that for me. 66 is showing 66.7~ and 67 is showing 67.7. Actually, any value above 66.667~ exactly is showing 67.7, regardless of anything that I do. So that's another implication that the rates really can't exceed the game's tickrate.
I never said that anything would exceed the tickrate?
It seems to show pretty random values (68.2 wtf?). Nonetheless the in and out values actually change.
I guess that means most serverconfigs are wrong and I need to set 67 rates.
Will look into this tomorrow, it's getting late here.
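For server operators following that conclusion, a minimal sketch of the relevant server.cfg lines might look like this (the exact values are my own reading of the post above, not something settled in the thread):
sv_maxupdaterate 67 // don't cap clients below the ~66.67 snapshots/s a 15 ms tick produces
sv_maxcmdrate 67
sv_minupdaterate 66
sv_mincmdrate 66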
So should I set my interp to 0 with cl_interp_ratio 1 for projectiles and cl_interp 0.033 with cl_interp_ratio 2 for hitscan? Also, would cl_interp_ratio 2 set interp 0.033 to 0.066?
cl_interp_ratio isn't a factor for cl_interp. It's a factor for 1/cl_updaterate.
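In other words, assuming cl_updaterate 66 (and the usual rule, mentioned again later in the thread, that the game uses the larger of the two resulting values):
cl_interp 0.033
cl_interp_ratio 2 // gives 2 / 66 ≈ 0.0303
// effective lerp = max(0.033, 0.0303) = 0.033, not 0.066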
#74 and #75
http://whisper.ausgamers.com/wiki/images/Net_graph3.gif
"There is a quirk in net_graph 3 that occurs when the average updates sent/received by you per second magically seem to exceed the servers sv_maxupdaterate, servers tickrate, and the clients cl_updaterate which were all set to 100 at the time the screenshot was taken above, but this only occurs because lines 2 and 3 update twice per second and therefore what you see are averages that aren't always in sync with the server."
Source: Whisper's Counter-Strike Wiki. Which is why it references 100/s and not 66/s.
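One way to see how a half-second averaging window can overshoot like that (my own back-of-the-envelope illustration, not from the wiki): if the counter effectively counts packets over each half second and doubles the result, a true 66.67/s stream delivers 33 or 34 packets per window, which displays as 66/s or 68/s, so the readout wobbles around and sometimes above the real rate.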
Setsul and wareya were arguing 66 vs 66.7. The Source wiki states "By default, the timestep is 15ms, so 66.666...". So 66.7 is technically closer to being right, but you're both splitting hairs. I doubt this difference has a discernible effect on hitreg.
There's no universal answer to the issue of hitreg, rates, and interp. We all play on different servers with different connections to all of them. The route your data takes to the server and back, and the performance of the server itself, will have a much greater effect than the difference between 66 and 66.7.
Spend some time testing out different values on your most played servers and find what's right for you. You might find a happy middle-ground. Or maybe you'll need server specific values for your matches.
I've tinkered with all kinds of network values over the years, but have settled on:
General settings:
cl_interp "0"
cl_smooth "0"
rate "100000" <--- probably extreme, but anyone with a reasonable connection isn't going to be troubled by setting it this high. Certainly much better than setting it too low.
Class specific/projectile + Medic:
cl_cmdrate "66"
cl_interp_ratio "1"
cl_updaterate "66"
Class specific/hitscan:
cl_cmdrate "66"
cl_interp_ratio "2"
cl_updaterate "66"
The assumption here being that the server is configured "correctly", and doesn't limit your rate setting to something silly, resulting in your cmd and update rates causing choke.
Of course, I could be wrong. Totally willing to be corrected.
I'm just a 12v12 player who likes watching 6v6 :-)
On that note: these are the values I use for 12v12 play. I do suffer with choke issues from time to time, but I'm willing to do so because the hitreg is, generally speaking, much better than with lower values.
There should be even less of an issue when playing 6v6.
While we're on this topic can someone explain what these 2 spots mean on my netgraph? And maybe how I can fix it? I'm on a wireless connection atm but will switch back to wired again.
http://s21.postimg.org/urqqd3mvr/2013_12_01_00002.jpg
I get a lot of gaps between the graph and my choke is usually around 19-30, sometimes 60.
Why would my interp not change when I change classes?
Ex. I go from sniper having a 0.033 interp to soldier, and net_graph 1 says I still have it at 33 lerp or 0.033 interp instead of what is in my .cfg files, which has soldier at 0.0152 interp.
TerrapinJustice: Why would my interp not change when I change classes?
Ex. I go from sniper having a 0.033 interp to soldier and it says in the net_graph 1 that i have it still at 33 lerp or 0.033 interp instead of what is in my .cfg files which has soldier at 0.0152 interp.
Valve blocked it because of an exploit.
The interp that the game uses is whichever of cl_interp and cl_interp_ratio/cl_updaterate is higher. So what could cause your problem, TJ, is if you have a cl_interp_ratio above 1 (so it overrides cl_interp) or if you have an updaterate of 30 instead of 66 (a good possibility).
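To put numbers on both possibilities, using that max-of-the-two rule:
// possibility 1: cl_interp_ratio 2 with cl_updaterate 66 -> 2 / 66 ≈ 0.0303 s, which beats the class cfg's cl_interp 0.0152
// possibility 2: cl_updaterate 30 -> even ratio 1 gives 1 / 30 ≈ 0.0333 s, i.e. the stuck ~33 ms lerp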
Do I need to change anything if I have a 144hz monitor?
#76
cl_interp 0 for both, setting cl_interp_ratio is enough.
#78
Apparently there was some major misunderstanding. TF2 is indeed running at 15ms, I don't know why we started arguing about that, the problem is that most servers use 66 maxrates instead of 66.67 or 67.
#79
1. Wireless is bad.
2. The default netsettings are bad.
3. Choke is bad, you want it to be 0.
That empty space means you didn't get any packets at all, which is bad too and a result of that massive choke.
Use these netsettings and wired (if possible) then try again.
//netsettings
//change these if necessary
rate 100000 // max bytes/sec your connection can receive
cl_cmdrate 66 // command packets sent to the server per second
cl_updaterate 66 // snapshots requested from the server per second
cl_interp_ratio 2 // lerp = cl_interp_ratio / cl_updaterate ≈ 0.0303 s
//don't change these
cl_interp "0" //set by cl_interp_ratio
cl_lagcompensation 1 // server rewinds hitscan shots to what you saw
cl_pred_optimize "2" // skip re-prediction when there were no errors
cl_smooth "0" // no view smoothing after prediction errors
cl_smoothtime "0.01"
You can use cl_interp_ratio 1 for projectile classes, just add it to the class configs, but don't forget to add cl_interp_ratio 2 to all other class configs if you do this.
#80
Use either cl_interp 0 for all classes with cl_interp_ratio 1 in soldier.cfg and cl_interp_ratio 2 in all other class cfgs, or use cl_interp_ratio 0 with cl_interp 0.01515151515 in soldier.cfg and cl_interp 0.03030303 (or 0.033 if you think it's better) in all other class cfgs.
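As a concrete sketch of the first approach (file names assume the standard per-class config layout, e.g. tf/cfg/soldier.cfg, and cl_updaterate 66):
// soldier.cfg (and other projectile classes)
cl_interp 0
cl_interp_ratio 1 // 1 / 66 ≈ 0.0152 s
// scout.cfg, sniper.cfg, etc. (hitscan classes)
cl_interp 0
cl_interp_ratio 2 // 2 / 66 ≈ 0.0303 s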
#83
No.
Maybe change fps_max to 288 if you use an fps cap.
120Hz Lightboost > 144Hz, so you might want to use LB if you can.
stabby: What you suggested is that I am unintelligent and spreading deleterious information. That's untrue and immature, and I don't appreciate it.
but div2 is not ugc silver
Setsul: #79
Thanks man, the settings helped out a bit on my wireless. Will use them on wired soon.
I changed my cl_interp to 0.0152 in my autoexec.cfg and it doesn't work... when I type cl_interp in the console it says 0.025000 :(
Where is the problem?
#87
To confirm: You're checking your console at the main menu once you've launched the game?
I ask to rule out any class-specific cfg commands you may have in the \my_custom_stuff\cfg\ folder that could change the value upon class selection.
Easiest way to determine what's wrong will be to post the contents of your autoexec.cfg.
Sidenote: I think it's "better" to set your interp to "0", and use your cl_updaterate/cl_interp_ratio values to lower the interp automatically. A cl_interp of "0", a cl_interp_ratio of "1" and a cl_updaterate of "66" would result in an in-game interp of 0.0152.
Back on point:
It's also possible that there's something in your config.cfg overwriting the values from your autoexec.cfg. In addition to this, it could be your Steam Cloud config.cfg, or the local-Cloud config.cfg.
We can consider this if nothing is obviously wrong with your autoexec.cfg
To explain:
In the past I had a bunch of problems with the Cloud-stored "config.cfg" overwriting newer settings (config.cfg would soak up settings from demos I watched), so I decided to disable Cloud storage for TF2. In the process of investigating this I discovered that Valve also have "local" versions* of the Cloud storage files. Found here: Steam\userdata\<your_account_number>\440\remote\cfg\
*Only in the absence of this file will Steam download the actual Cloud file.
Note: 440 is the appid for TF2.
If you want a totally fresh start, disable Steam Cloud for TF2, delete "config.cfg" from Steam\steamapps\common\Team Fortress 2\tf\cfg AND from the local Cloud storage folder I mentioned above.
Steam will automatically generate a new config.cfg upon launching TF2, populating it with any non-default values contained in your autoexec.cfg
Important I: Any custom settings (for example fast weapon switching, graphics settings, download settings, and so on) will be restored to their default values if they're not explicitly stated in your autoexec.cfg.
Important II: TF2 has introduced a bunch of fairly annoying pop-ups for "new users" relating to tutorials, backpack instructions, and who knows what else.
Because you're starting with a new config.cfg, the game will pop these up again. Fortunately, each of these has a console command associated with them, which can be added to your autoexec.cfg, making the process entirely painless.
Let me know if you'd like me to post them here, instead of having to look them up yourself.
I could actually post my own autoexec.cfg for reference purposes. It doesn't have much in the way of graphical commands, but most everything else is covered.
Alright, so I just want a clear answer.
I will change the cl_interp in my crosshair switcher file (I got it from tf2mate). But how do I change the cl_interp for the actual class, or is it the same thing...
And where do I go to change the cl_interp_ratio (can't find it in any cfg file) and what should it be for each class?
Note: I have 80-150 ping and some packet loss in-game; I want the settings that best suit this.
My monitor is 60 Hertz.
Edge:
Projectile: Set to 0.0152/cl_interp_ratio 1, open net_graph 4, and slightly bump it up by around 0.001 until your lerp is a stable orange colour and not flashing to yellow every few seconds.
Hitscan: A lot say 0.033, but I have it as 0.0303/cl_interp_ratio 2 as default. Hitscan interp is annoying, and I find my shots rego differently on different servers, and because of that I fiddle around with it a lot.
Sniper: I have this set to 0.1, and I know a few others that use 0.1 as well. Just find this to rego the most, possibly because it's always been at 0.1 for me.
What do the different flashing colors mean in net_graph (for interp that is)?
Spy: Also at 0.1, used to it.
" I was getting yellow-flashing lerp (means that extrapolation absolutely Happened that frame) "
Is orange just "normal"?