So did you disable C6/C7 already or not?
| Account Details | |
| --- | --- |
| SteamID64 | 76561198042353207 |
| SteamID3 | [U:1:82087479] |
| SteamID32 | STEAM_0:1:41043739 |
| Country | Germany |
| Signed Up | December 16, 2012 |
| Last Posted | April 26, 2024 at 5:56 AM |
| Posts | 3425 (0.8 per day) |
That's literally why these settings exist.
It still has to initialize the buffer with that colour, or the colour won't show up wherever it isn't overdrawn; otherwise gl_clear_randomcolor wouldn't work. There are commands to invalidate the framebuffer, glClear is not one of them.
Even then, assuming it would work like you said, why would it only speed up tiled renderers?
Tiles aren't stitched together either. A pixel is either in one tile or it isn't; it can't be in two tiles.
A GPU will never know how the scene looks before rendering it. It has to start drawing before all draw calls are finished. These are not deferred renderers, they are immediate mode renderers. Instead of the usual IMR MO "get drawcall, rasterize whole triangle, repeat" a TBIMR buffers a bit of geometry, then rasterizes a tile (actually multiple in parallel), ignoring whichever parts of a triangle (or even a whole one) are not inside this tile, then moves on to the next tile. It doesn't know how many triangles there will be and where they will be and it doesn't care. It doesn't have to stitch tiles together.
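To make that concrete, here's a toy C++ sketch of the difference as described above. Triangle, Tile, rasterize() and the batching are all made up for illustration; no real GPU or driver works like this code.

#include <vector>

// Toy mental model only. Triangle, Tile and rasterize() are invented for this sketch.
struct Triangle {};
struct Tile {};

void rasterize( const Triangle& /*tri*/ ) {}                        // whole triangle
void rasterize( const Triangle& /*tri*/, const Tile& /*tile*/ ) {}  // only the part inside the tile

// Classic IMR: get a draw call, rasterize the whole triangle, repeat.
void immediate_mode( const std::vector<Triangle>& drawcalls )
{
    for ( const Triangle& tri : drawcalls )
        rasterize( tri );
}

// TBIMR: buffer a bit of geometry, rasterize it tile by tile (in reality several
// tiles in parallel), ignoring whatever lies outside the current tile, then move on.
// It never needs the whole scene up front and never stitches tiles together afterwards.
void tile_based_immediate_mode( const std::vector<std::vector<Triangle>>& geometryBatches,
                                const std::vector<Tile>& tiles )
{
    for ( const std::vector<Triangle>& batch : geometryBatches )
        for ( const Tile& tile : tiles )
            for ( const Triangle& tri : batch )
                rasterize( tri, tile );
}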
It doesn't have to be system specific.
It's easy. If you want a higher minimum heapsize, set a higher mem_min_heapsize.
If you want a higher maximum heapsize, set a higher mem_max_heapsize.
You wanted 196MB for <=512MB RAM, so set that as the min.
With >800MB you'll automatically have more.
If you want more than 256MB to be available as the max, simply set mem_max_heapsize higher.
There's no way async audio takes up 300MB so that you have to force 512MB heapsize. And if it did then why would you have it enabled while still trying to get the config to run with <512MB RAM and 196MB heapsize?
No, it won't fade any faster. With -5000 it fades: it's still rendered out to 5000, just with alpha blending involved. With 0 it'll be fully opaque and simply disappear at 5000.
mastercoms quoting Setsul: gl_clear clears to the same colour every frame, randomcolor randomizes. You haven't explained why it would help at all and you haven't tested it at all. This isn't even an educated guess.
You didn't explain anything.
You didn't explain why the colour buffer would otherwise be read, or why it would not be read when it's cleared to a solid colour.
You didn't explain how writing to the colour buffer instead of reading from it would speed things up.
You didn't explain why that would happen on TBIMR like Maxwell/Pascal but not on standard IMR.
glClear is most definitely not broken; it has always cleared the colour buffer to GL_COLOR_CLEAR_VALUE.
ZeRo5: Yes please, I would be surprised if 7 could hold up against a pub team in this format.
Can't move backwards if you don't move at all. Kaidus has foreseen this.
You can edit posts, no need to doublepost after 3 minutes.
You're still forcing 512MB heapsize, no matter what. That's terrible with <=512MB RAM. Higher max heapsize and slightly higher min heapsize will stutter less than hammering the pagefile. If you want 196 (still weird, why not 192?) for <512MB then just set min heapsize to 192, because that's what it does. That's why it's named mem_min_heapsize. If you want more than 256MB to be available with >1GB then set higher mem_max_heapsize. It's not that complex.
It disables things that normally shouldn't be disabled. That's why they can't be disabled via other methods.
You've had a year to document it. Why did you choose to randomly release it now if you weren't even close to being done?
So that's what you're trying to do. I thought the 5000 changed in 2013. Anyway now look at the code again.
float flFalloffFactor = 255.0f / (flMaxDist - flMinDist);
int nAlpha = flFalloffFactor * (flMaxDist - flCurrentDistanceSq);
return clamp( nAlpha, 0, 255 );
0 -> max and min the same avoids all alpha shenanigans. Rendering everything that you will have to render anyway fully opaque makes things easier.
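For reference, here's that quoted snippet with some numbers plugged in. The 0 and 5000 are only meant to mirror the values discussed here, I haven't checked what the config actually passes in, and std::clamp stands in for the engine's clamp.

#include <algorithm>
#include <cstdio>

// Direct copy of the quoted falloff math, wrapped so it can be run with sample distances.
int ComputeAlpha( float flMinDist, float flMaxDist, float flCurrentDistanceSq )
{
    float flFalloffFactor = 255.0f / ( flMaxDist - flMinDist );
    int nAlpha = flFalloffFactor * ( flMaxDist - flCurrentDistanceSq );
    return std::clamp( nAlpha, 0, 255 );
}

int main()
{
    // With min 0 and max 5000 the alpha ramps down linearly from the min to the max.
    std::printf( "%d %d %d\n",
                 ComputeAlpha( 0.0f, 5000.0f, 0.0f ),      // ~255, fully opaque
                 ComputeAlpha( 0.0f, 5000.0f, 2500.0f ),   // ~127, about half faded
                 ComputeAlpha( 0.0f, 5000.0f, 5000.0f ) ); // 0, fully faded
}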
gl_clear clears to the same colour every frame, randomcolor randomizes. You haven't explained why it would help at all and you haven't tested it at all. This isn't even an educated guess.
Maybe I'll list the rest of what's wrong in a few days.
Spanns: if I'm not mistaken back when ETF2L had divs, calling a prem player a "div2 player" was considered a pretty big insult
To be fair the reason it doesn't work anymore is because they're actually div3 players now.
smzi: i think its about time for setsul to make the ultimate fps cfg
No, because at best that'll end with octochris levels of autism, testing every single setting in isolation and at worst it'll end with me testing every possible combination on every architecture available only to realize 10 years later that the pyro update finally hit and made all the testing done before that point useless.
I don't have time for either.
I simply accepted that TF2 will never run smoothly, got an i7-4790K for VT-d anyway, overclocked it and filed TF2 fps configs away as SEP/PAL.
I do however like to call out people on their bullshit.
Also this week I'm getting triggered by inconsistencies.
All_Over_RS: better art style then overwatch
better music then overwatch
better animation than overwatch
I am upset.
XL2411Z >>> VG248QE. Can't be bothered to look for the link to my essay on that.
So on one hand you left in settings for a better single player experience, on the other hand you left in a setting that completely breaks single player. That seems a bit inconsistent, doesn't it?
Apparently you read the code, but only 2 or 3 lines and then skipped the next 10 which are kind of important. This will become a recurring theme.
// take one quarter the physical memory
if ( host_parms.memsize <= 512*1024*1024)
{
    host_parms.memsize >>= 2;
    // Apply cap of 64MB for 512MB systems
    // this keeps the code the same as HL2 gold
    // but allows us to use more memory on 1GB+ systems
    if (host_parms.memsize > MAXIMUM_DEDICATED_MEMORY)
    {
        host_parms.memsize = MAXIMUM_DEDICATED_MEMORY;
    }
}
else
{
    // just take one quarter, no cap
    host_parms.memsize >>= 2;
}
// At least MINIMUM_WIN_MEMORY mb, even if we have to swap a lot.
if (host_parms.memsize < MINIMUM_WIN_MEMORY)
{
    host_parms.memsize = MINIMUM_WIN_MEMORY;
}
// Apply cap
if (host_parms.memsize > MAXIMUM_WIN_MEMORY)
{
    host_parms.memsize = MAXIMUM_WIN_MEMORY;
}
I'm going to help you with the math. 512MB >> 2 equals 128MB. So not only is 196MB a weird value, it will always be ignored: with <=512MB of RAM a quarter of host_parms.memsize is at most 128MB, so 196MB can never be less than that quarter and the cap never kicks in.
It will then be overwritten by mem_min_heapsize anyway. Both the default (144) and your setting (512) are larger than 128MB so dedicated will never be used. Also you're forcing 512MB heapsize on systems with <512MB RAM. Off to the pagefiles we go. Think of the poor TLB. Look at this.
http://i.imgur.com/nIH2Glx.png
Does he not suffer enough? Why do you have to do this to him? Think of his family!
In all seriousness though, min_heapsize is set to 144 for a reason, leave it be. On systems with more than 576MB RAM it'll be set higher automatically anyway. If you insist, increase max_heapsize. You can put that at 4GB if you want, since it would only be used with 16GB of RAM available. It'll probably fuck with the TLB so much though that it ends up being slower.
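In other words, as far as I can tell from the quoted code and the two cvars, the heap you end up with is roughly a quarter of physical RAM, floored by mem_min_heapsize and capped by mem_max_heapsize. A back-of-the-envelope sketch of that reading (the exact way the cvars interact is my interpretation, and the 256MB default max is only what this thread implies, not verified against the engine):

#include <algorithm>
#include <cstdio>

// Rough model: heap ~= RAM/4, floored by mem_min_heapsize and capped by mem_max_heapsize.
int EffectiveHeapMB( int physicalRamMB, int minHeapsizeMB, int maxHeapsizeMB )
{
    int quarter = physicalRamMB >> 2;
    return std::clamp( quarter, minHeapsizeMB, maxHeapsizeMB );
}

int main()
{
    // Default min 144: anything above 576MB RAM already raises the heap on its own.
    std::printf( "512MB RAM, defaults: %dMB\n", EffectiveHeapMB( 512, 144, 256 ) );   // 144
    std::printf( "2GB RAM, defaults:  %dMB\n", EffectiveHeapMB( 2048, 144, 256 ) );   // 256
    // Forcing 512 on a 512MB machine: 512MB heap, hello pagefile.
    std::printf( "512MB RAM, forced:  %dMB\n", EffectiveHeapMB( 512, 512, 512 ) );    // 512
}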
Why do you think building_cubemaps 1 is the only way to disable these things? If you want to disable bloom, do it via the settings that are meant for it. building_cubemaps 1 can break things horribly, e.g. epilepsy-inducing flickering because the low pass filter is turned off.
So you document bloom settings and set them to something reasonable, but you don't document the ragdoll settings and set them to completely random values that will result in garbage-tier ragdolls with forcefade 0? Again, this seems rather inconsistent. If you don't want anyone to use those settings, leave them out of the cfg. If you want them to be usable, either set them to sane values for forcefade 0 or set them to non-conflicting values (read: 0) and provide a set of sane values in the comments. The way you're doing it, it's annoying for those who do know what these settings do and useless for those who don't.
Again, you want high model detail, but you want other things to disappear completely? Inconsistent.
Also it won't work. I don't know how you do this, but you always ignore very relevant lines that come immediately after things that you appear to have read.
if( flMinDist > flMaxDist )
{
    V_swap( flMinDist, flMaxDist );
}
// If a negative value is provided for the min fade distance, then base it off the max.
if( flMinDist < 0 )
{
    flMinDist = flMaxDist - 400;
    if( flMinDist < 0 )
    {
        flMinDist = 0;
    }
}
So yes, it'll swap, because a positive number > a negative number, but your negative value will then be thrown away and actually leads to a higher min distance than 0 would. Good job.
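To put numbers on it, here's the quoted logic traced with concrete values. The starting 5000 is an assumption taken from the earlier discussion, not something I checked in the config, and std::swap stands in for V_swap; this is only an illustration of the code above.

#include <cstdio>
#include <utility>

int main()
{
    float flMinDist = 5000.0f;   // assumed value of the other fade distance
    float flMaxDist = -5000.0f;  // the negative value from the config

    if ( flMinDist > flMaxDist )
        std::swap( flMinDist, flMaxDist );   // min = -5000, max = 5000

    // If a negative value is provided for the min fade distance, base it off the max.
    if ( flMinDist < 0 )
    {
        flMinDist = flMaxDist - 400;         // min = 4600
        if ( flMinDist < 0 )
            flMinDist = 0;
    }

    std::printf( "min %.0f max %.0f\n", flMinDist, flMaxDist ); // min 4600 max 5000
}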
I haven't looked this up, but since before DirectX existed user-accessible gl_clear cvars have been used to set the colour buffer to a solid bright colour to find holes in the map. I know that it only sets the colour clear flag in Source, so I have no reason to believe it'll do anything weird beyond that. Stencil and depth buffer are unaffected.
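For reference, at the GL level "clear the colour buffer every frame" boils down to something like this. Generic OpenGL, not Source's actual code path; the magenta is just a deliberately obvious colour so holes stand out.

#include <GL/gl.h>

void clear_colour_buffer()
{
    glClearColor( 1.0f, 0.0f, 1.0f, 1.0f );   // sets GL_COLOR_CLEAR_VALUE
    glClear( GL_COLOR_BUFFER_BIT );           // colour only; depth and stencil untouched
}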
Now how does having to write the whole colour buffer before rendering help?
#74
Because it's incredibly inconsistent.
He documents bloom with a level of detail that is incredibly redundant (the fact that they're the RGB values for bloom and range from 0 to 1 could've been put on a single line instead of spread across four), even though basically no one will use bloom, but then sets ragdolls to random values and doesn't comment them because "they don't matter anyway".
It just doesn't make sense.
Oh, another config with random cvars.
ai_expression_optimization 1
ai_expression_frametime 0
ai_frametime_limit 0.0152
anim_3wayblend 0
func_break_max_pieces 0
host_thread_mode 1
mem_max_heapsize_dedicated 196
Server cvars.
Host thread mode is broken and you know it. Why would you put it in an fps config?
I don't care if you've read the source code if you ignore even the most basic documentation. TF2 itself will tell you:
"mem_max_heapsize_dedicated" = "64"
- Maximum amount of memory to dedicate to engine hunk and datacache, for dedicated server (in mb)
It's utterly useless to change it since it will never affect the TF2 client, ever.
anim_3wayblend isn't the gold standard for uselessness, it's like triple platinum certified useless. It's been broken since before it was (accidentally) released. https://developer.valvesoftware.com/wiki/VCD_Blocking_Tool
// disable HDR and bloom
mat_hdr_level 0
// disable bloom
mat_disable_bloom 1
// ensure bloom is disabled
mat_bloomscale 0
mat_non_hdr_bloom_scalefactor 0
mat_bloom_scalefactor_scalar 0
So you really want to make sure bloom is disabled?
Then what the fuck are these settings supposed to do?
// -- Bloom --
// Tints for bloom (0 to 1)
// red tint for bloom
r_bloomtintr 0.3
// green tint for bloom
r_bloomtintg 0.59
// blue tint for bloom
r_bloomtintb 0.11
// the coefficient for bloom effect
r_bloomtintexponent 2.2
How important is it to set bloom tint when you've tried so hard to make sure that bloom is definitely disabled entirely?
// skip calculating sky glow obstruction
// also disables bloom
building_cubemaps 1
Uhm what? This simply enables the command buildcubemaps. It's most definitely not meant as a way to disable bloom.
// --- Ragdolls ---
// disable ragdoll collisions
cl_ragdoll_collide 0
// duration of the fade out effect of ragdolls, 0 instantly removes
cl_ragdoll_fade_time 5
// start fading ragdolls without delay, even if the player is looking at them
// set to 0 for ragdolls
cl_ragdoll_forcefade 1
// disable ragdoll physics
// huge performance hit
// set to 1 for ragdolls
cl_ragdoll_physics_enable 0
// how fast a ragdoll fades out
// the higher the value, the faster it fades out per frame
g_ragdoll_fadespeed 10000
// how fast a ragdoll fades out in low violence mode
g_ragdoll_lvfadespeed 10000
// ragdolls are irrelevant once they've settled
ragdoll_sleepaftertime 1
cl_ragdoll_forcefade 1 fades ragdolls instantly, so what are you trying to accomplish with a non-zero fade_time and sleepaftertime?
// clamps the highest lod to the set lod for models
// 0 - high; 1 - medium; 2 - maximum performance
r_rootlod 1
[...]
lod_TransitionDist -5000
So on one hand you want high model quality, but on the other hand you want it to switch to lower LOD quality at a negative distance?
// clear each frame before drawing the next one - optimization for tile based
// rendering, found in NVIDIA Maxwell series (GTX700) and above
//gl_clear 1
How would that even work? Apart from the fact that only one chip in the 700 series is actually Maxwell.
I could go on but I'm tired.
If it's because your PSU is shit then the solution is to either disable C6/C7 states or buy a non-shit PSU.
Now that's a valid reason to not care about colours if I've ever seen one.
G-Sync/ULMB should work on Linux, the BenQ Motion Blur Reduction Utility won't, iirc.
Yes, nVidia is still blocking Adaptive-Sync/FreeSync.
Fps < refresh rate -> G-Sync is nice.
Fps > refresh rate -> G-Sync does nothing.
That is not a proper summoning.
You turned it off, then removed the GPU and then tried to restart? That should rule out the GPU for now.
The "problem" started with Haswell, that's why you didn't find anything.
Does that mean you tried booting with all sticks removed? :D
Fire quoting lock: "Doesn't Setsul live in Germany? just summon him with the ritual"
I tried but i guess due to my limited experience it takes a while.
You didn't even try it.
Also try not posting in the middle of the night. I've heard that helps.
Fire: However, since yesterday it got to a point where it will just stay inside that loop forever* (didnt test longer than 5 minutes but i guess thats enough). The only way to fix this is to reset bios by taking out the bios battery for a couple of seconds.
[...]
Also i could technically test whether its the gpu by using built in graphics, however i cant really stand using those for multiple weeks so id rather only do that as a very last backup plan.
Or you know, you could remove the GPU first instead of the CMOS battery next time it happens.
Have you thought about reading the mobo manual to figure out how to disable C6/C7 states on your mobo?
Also try booting with a single RAM stick (maybe try different ones in different slots).