Setsul
Account Details
SteamID64 76561198042353207
SteamID3 [U:1:82087479]
SteamID32 STEAM_0:1:41043739
Country Germany
Signed Up December 16, 2012
Last Posted April 26, 2024 at 5:56 AM
Posts 3425 (0.8 per day)
#9 rsi in TF2 General Discussion

Had it, ignored it for a while, but then an accident fucked up my left hand and I couldn't move the thumb for a month. Everything that required moving the thumb had to be done with my right hand, and that was a bit too much. I used a bandage to immobilize the thumb and wrist while playing and changed to a low sens so I only had to move my arm and couldn't move the wrist or thumb by accident. As soon as my left hand was fine I went back to business as usual and ignored the pain because I was used to it by then. A few days later my thumb locked up, and the wrist and other fingers weren't much better. I didn't touch a PC for a week, used the mouse with my left hand for a month, and kept playing on low sens for almost a year.

tl;dr
Ignoring the pain won't help. Stop doing whatever movement causes pain. If it still hurts after a break, either the break wasn't long enough or you should stop doing that movement completely. Use a bandage to prevent involuntary movement during the breaks.

posted about 9 years ago
#8 My hard-wired slower than wifi? in Hardware

1. Test with the laptop and your cable (wifi off).
Fast -> it's your PC (network card/motherboard), a bandwidth limit set for your PC, or some other setting enabled/disabled on the router. Further troubleshooting will commence.
Slow -> it's the cable or the router. Go to step 2.

2. Test with either the laptop or your PC (the laptop is probably easier) and a different cable on the same port.
Fast -> The cable is defective. Replace it and test again with your PC.
Slow -> It's the router. Go to step 3.

3. Same setup as in step 2, but use a different port now.
Fast -> The original port is defective. Go to step 4.
Slow -> Post which laptop, cables and router you used. The motherboard/network card of your PC, or the model number if it's a pre-built, would be good as well.

4. Use the other port, and test with your PC and the original cable now.
Fast -> Problem solved.
Slow -> The cable might be defective as well. Go to step 5.

5. Your PC, the different cable (same as in 2/3), the different port (same as in 3/4).
Fast -> Cable and port are both defective. Replace the cable and use the other port from now on. Test again.
Slow -> I'll make a new list; this one would get too long. (The whole flow is also sketched as code below.)
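For anyone who wants the whole flow in one place, here's a minimal sketch of the steps above as a yes/no script. Assumptions: "fast" means the wired speed is roughly what you'd expect, and every test is done with wifi off; the messages are just paraphrases of the steps, not anything official.

```python
# Minimal sketch of the troubleshooting flow above as a yes/no script.
# "Fast" = the wired speed is roughly what you expect; anything else counts as slow.
def ask(step):
    return input(step + " fast? [y/n] ").strip().lower().startswith("y")

def diagnose():
    if ask("1. Laptop + your cable (wifi off):"):
        return "PC side: network card/motherboard/driver, or a router setting for your PC."
    if ask("2. Laptop + a different cable, same port:"):
        return "Cable is defective. Replace it and re-test with your PC."
    if not ask("3. Same as step 2, but a different port:"):
        return "Post which laptop, cables and router you used (plus your PC's motherboard/network card)."
    # Original port looks defective, so keep using the other port from here on.
    if ask("4. Your PC + the original cable on the other port:"):
        return "Problem solved: the original port was defective."
    if ask("5. Your PC + the different cable on the other port:"):
        return "Cable and port are both defective. Replace the cable, keep using the other port."
    return "Still slow: this needs a longer checklist."

if __name__ == "__main__":
    print(diagnose())
```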

posted about 9 years ago
#5 Replacing Old Hardware in Hardware

I don't think you'll have to spend $200 on a case, HDD and PSU.

Getting a better case for better airflow (and maybe looks too) might be a good idea. Your mobo seems to be mATX, so you'd be looking at $20-40. Something like the Cougar MG100 or Cougar Spike just to get a decent case, a better one like the Fractal Design Core 1300, or a bigger one like the Cooler Master N200 in case you want to fit a huge GPU in a couple of years.

Your mobo doesn't have SATA III, so I wouldn't get an SSD since you couldn't fully utilize it, but a faster HDD is definitely a nice upgrade. Standard recommendations would be the Seagate Barracuda ST1000DM003 (1TB, $44), ST2000DM001 (2TB, $74) or ST3000DM001 (3TB, $92), depending on how much capacity you need.

There aren't any amazing deals on PSUs right now, and there's no point in replacing a working, acceptable-quality PSU with a newer acceptable-quality PSU, so first of all you should find out what PSU you have and whether or not you have to replace it at all. Could it be that you looked at the sticker on the fan? The TT-8020A is a fan, iirc. If the fan is facing downwards, the model name is probably on the side that faces the back panel of the case. Either way you'll have to open the case.

posted about 9 years ago
#88 Super Bowl in Off Topic

#87
I see you have been on reddit
http://www.reddit.com/r/funny/comments/2uh187/back_to_the_future_was_right/co8g7yo

Why the fuck would they throw?????
For the skins
posted about 9 years ago
#41 RInput in TF2 General Discussion

You should never trust programmers with naming things.

There are exactly two options:
Either you believe he knows what he's doing, or you don't.
If you disregard his opinion because you think he doesn't know what he's doing, then you shouldn't use his fix.

posted about 9 years ago
#38 RInput in TF2 General Discussion

My point was that the guy who thought not having raw input was a significant enough problem to spend time writing a fix now thinks that Valve's raw input is fine and that you don't need his fix anymore.

posted about 9 years ago
#36 RInput in TF2 General Discussion

hooli (#22) summoned me.

I'm with wareya on this one.

People should also know by now how I feel about technical advice from stabby. He's got good intentions and finds something that sounds good on paper, but he doesn't know exactly how it works, since he doesn't have the technical background for that (just like 99.9% of all people). Therefore he's unable to properly verify it, falls victim to the placebo effect, and then recommends it. I can't really blame him, but I still don't like it.

I'm not saying RInput is broken, I'm saying rawinput isn't broken. Don't fix what isn't broken.

One more thing: That thread on ocn is for a GUI launcher that injects RInput into TF2/CS:GO/etc.
Have a look at the source of RInput. Scroll down and look at what the author said:

The code as presented (the last version I released in 2009) supports X86 architecture only. After Valve embedded Raw Input into their games, I did not see the urge of rewriting the projects for the X64 architecture.
posted about 9 years ago
#25 Guide to Streaming with 2 Computers (2PC Setup) in Q/A Help

Best case, around 50% more power for the stream and 20% for TF2.

posted about 9 years ago
#21 Guide to Streaming with 2 Computers (2PC Setup) in Q/A Help

You missed the point again. I'm trying to suggest swapping the streaming and gaming computers around.

The most effective setup would probably be

Gaming Computer:
GPU: GeForce GTX 770
CPU: Intel i7-4790K @ 4.0 GHz
RAM: 8 GB DDR3-2133 RAM
Microphone: Blue Snowball

Streaming Computer:
GPU: GeForce GTX 580
CPU: Intel i7-3930K @ 4.3GHz
RAM: 16 GB DDR3-1600 RAM
Capture Card: AVerMedia C985

My point is you said

MR_SLIN: So as long as I'm using the more powerful setup to stream, then I'm doing it in the most effective way possible.

but you're using the less powerful setup to stream.

posted about 9 years ago
#29 GTX 970 in Hardware

Ok, the Germans that started this are bringing out the pitchforks again because they don't believe Nvidia's benchmarks. I don't feel like dealing with them, but I can at least slow the witch hunt here.

I'll make this as clear as I can:
Most of the time the GTX 970 will use over 3.5GB if it has to.
The last 0.5GB have 30% lower bandwidth and lead to fps drops of <10%.
The driver will avoid this if it can.

However, 3rd party programs might not show VRAM usage correctly, only displaying 3.5GB max. Therefore when people are trying to force >3.5GB they're actually using >4GB. What they are complaining about are the huge performance issues associated with going over 4GB VRAM usage. Those are the same on the GTX 980 and are present on every GPU ever made.

The only legitimate issue is that in Nai's benchmark those 0.5GB don't seem to get used at all. The reason for this might be that CUDA's memory management isn't automatic. To access the second 0.5GB "partition", different/new commands might have to be used, and those might not be implemented in CUDA yet. This issue should not occur in games, since games do not have the same control over the memory that CUDA does.

posted about 9 years ago
#27 GTX 970 in Hardware

You all need to chill out.

I think I'll have to clear up a few things. Maybe I'll make a flowchart.

1. It's not a bait-and-switch. http://en.wikipedia.org/wiki/Bait-and-switch
The other options would have been to just put 3GB on it and cripple the bandwidth across the whole memory, to skip colour compression and cripple the bandwidth that way, or to not sell the GTX 970 at all. Clearly a better deal for everyone involved. Technical explanation further down.

2. All those game benchmarks didn't suddenly change. The GTX 970 still performs the same.

3. Which of you would actually have used exactly 3.5-4GB of VRAM? The driver is actively trying to keep VRAM usage below 3.5GB, so once you manage to go over that you'll probably go over 4GB as well. That means swapping with the RAM and a lot more performance issues.

Technical stuff:
DISCLAIMER: This might not be correct. I haven't confirmed it myself and I won't dissect a cut-down GM204 just to please some people on the internet. So take this with a grain of salt; some of it might be wrong. However, that's also true for most of the accusations.

What everyone thinks is the issue won't be fixed, because they can't fix it.
The issue is real and I hope they'll fix it on the GM200 and maybe on later versions of the GM204, but it's more of a minor inconvenience than the absolutely game-breaking, performance-destroying and possibly life-threatening bug everyone makes it out to be.

Most of the benchmarks were also run incorrectly. People didn't bother to read the instructions. Some caught onto that and are now trying to blame the author, because a program that was coded in 30 minutes isn't foolproof. If the benchmark is run incorrectly it can show lower bandwidth for 1GB. It'll also show lower or infinite bandwidth towards the end of the memory on every single Nvidia card in existence. What it's actually showing there is the RAM/swapping bandwidth.

The whole point of that benchmark was to find out whether the last 0.5GB of VRAM is so much slower than the rest that it could cause the issues people have reported when using >3.5GB VRAM. What it shows when run correctly* is that for some reason the swapping starts with 0.5GB left. It's not actually the VRAM being slow, it's the DRAM (via PCIe) being used instead of the VRAM, which is incredibly slow.
*Headless: the GPU can't be used for a display (use the iGPU), or the OS will reserve VRAM and the weird CUDA memory swapping will show up earlier, or before you actually run out of memory on other cards that don't even have that issue.
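For reference, as far as I understand it Nai's benchmark does this on the device side with CUDA kernels. A very rough Python sketch of the same basic idea (fill the card in 128MB chunks, then time a read of each chunk) could look like the following. This is not Nai's benchmark, just an illustration; it assumes pycuda and numpy are installed, the 128MB chunk size is my own choice, and it should be run headless for the reasons above:

```python
import time
import numpy as np
import pycuda.autoinit            # creates a CUDA context on the default GPU
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray

CHUNK_MB = 128
FLOATS_PER_CHUNK = CHUNK_MB * 1024 * 1024 // 4

# Fill the card with 128MB chunks until allocation fails
# (pycuda raises an out-of-memory error at that point).
chunks = []
while True:
    try:
        chunks.append(gpuarray.zeros(FLOATS_PER_CHUNK, dtype=np.float32))
    except Exception:
        break

print(f"allocated {len(chunks)} chunks ({len(chunks) * CHUNK_MB} MB)")

gpuarray.sum(chunks[0]).get()     # warm-up: compiles the reduction kernel

# Summing each chunk forces the GPU to read it once, giving a rough effective
# read bandwidth per chunk. Roughly flat numbers = healthy; a sharp drop near
# the end of the allocation is the kind of behaviour people are reporting.
for i, chunk in enumerate(chunks):
    start = time.perf_counter()
    gpuarray.sum(chunk).get()     # .get() blocks until the GPU is done
    elapsed = time.perf_counter() - start
    print(f"chunk {i:3d}: ~{CHUNK_MB / 1024 / elapsed:6.1f} GB/s")
```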

My guess at what's happening, as simply as I can describe it (this is highly speculative and might not even come close to the truth):
1. Nvidia "hardwired" the VRAM adresses to the L2 cache adresses for the colour compression. Because of that, intentionally or unintenionally, data in the VRAM can't be moved/swapped. So once you run out of VRAM and data gets put in the DRAM (the normal RAM) it's stuck there. Normally in case of a page fault (data is DRAM instead of VRAM) the data would get swapped and the one you're using gets put in the VRAM. This doesn't work now, so everytime you need that data it's going to be send from the DRAM via PCIe (which is incredibly slow compared to VRAM). Now that would only be a problem when you're using a bit over 4GB (compressed size). In fact it's only a problem under specific circumstances, namely when the total memory consumption is >4GB but the actively used memory for the current application/task is <4GB. Once you start using >4GB for one application you have to swap anyway. So unless you've got some pretty big stuff in the background/minimized or windows is reserving stupid amounts of VRAM, this isn't an issue.

2. Because of the way they cut down the L2 cache (1.75MB instead of 2MB) and the crossbars on the GTX 970, they can't access the last 0.5GB of the VRAM via the normal "hardwired" colour compression path. However, they knew about this and made two "partitions". The first 3.5GB is accessed the "normal" way, like the GTX 980 does for all 4GB. The last 0.5GB is accessed without colour compression. That way you only lose the extra 30% bandwidth from the compression. It's not ideal, but acceptable. For that reason the driver tries to keep the VRAM usage below 3.5GB. (There's a quick arithmetic check right after this list.) The only question is whether they had to cut down the L2 cache. Iirc they didn't cut it down on the GTX 780, so I'm leaning towards yes.

3. Now it's time to get to the actual issue. The whole thing started when people noticed that the GTX 970 wouldn't use more than 3.5GB VRAM unless it was forced to. That behaviour itself is normal, but the limit should have been 4GB like on the GTX 980. Also, a 30% drop in bandwidth shouldn't be able to cause the drastic performance problems that were reported. Nai's benchmark indicates that the GTX 970 either can't access the last 0.5GB or starts swapping to DRAM even though it can access it. That's not supposed to happen. Everything else is.
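Quick arithmetic check on the figures in point 2 (my own arithmetic, nothing official from Nvidia): the disabled fraction of the L2 matches the fraction of VRAM that falls off the normal path.

```python
# 1.75 of 2 MB of L2 enabled, 3.5 of 4 GB of VRAM on the normal (compressed) path.
print(1.75 / 2.0)   # 0.875, i.e. 7/8
print(3.5 / 4.0)    # 0.875, i.e. 7/8
```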

Conclusions:
1. There is an issue. But it's not what people think is the issue, it's only related.
2. Unless Nvidia sent cards with the full 2MB L2 cache to reviewers and then sold cards with 1.75MB while knowing about the problem it would cause, it's not a bait-and-switch. A reviewer with both press and retail versions of the GTX 970 could confirm this.
3. Benchmarks didn't show it because most benchmarks still only use 3GB.*
4. It's mostly an issue for 4K and/or very high settings that the 970 might not be able to handle anyway. It's a bummer in those cases and in SLI, because the 970 might not scale as well as expected since it's running out of VRAM. If we're really lucky it's just a driver glitch and fixable without physical changes.

*We've been there before: people claimed you need Titan Blacks because the 3GB on the 780 Ti isn't enough for 4K, triple 1080p or 1440p once you go to 256xSSAA (that's hyperbole). Turns out that neither of those GPUs can get 60fps with 256xSSAA anyway, and that people prefer 60fps at 4xSSAA to 0.2fps at 256xSSAA. Apparently some of the people working on the drivers actually have a clue about what they're doing. In fact, some of them are so good they even get paid for it. They know what happens when a GPU runs out of VRAM, so unless there is absolutely no way to avoid it, it won't happen.

posted about 9 years ago
#18 Guide to Streaming with 2 Computers (2PC Setup) in Q/A Help

1. Software vs. hardware downsampling (OBS vs. capture card)? I'm always assuming that capture cards are shit, so downsampling in OBS with a good filter might look better.
2. You either didn't read the whole paragraph or missed the point. That's exactly what I said, except that I think the 3930K would be more powerful for streaming. On top of that, the 4790K might run TF2 better. So if I'm right, you'd be doing it the least effective way right now.

I didn't want to be as blunt as dashner before knowing all the facts, but it really seems you just got carried away.
It's a colossal waste of money. I have streamed 1080p 60fps on an i7-4770K, and I just tested it again on a 4790K: 720p 60fps on the faster preset is no problem; only for 1080p 60fps do I have to use veryfast.
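For a rough sense of scale (simple arithmetic, not a benchmark): 1080p60 pushes 2.25 times as many pixels per second through the encoder as 720p60, which lines up with 720p60 working on faster while 1080p60 needs veryfast.

```python
# Pixels per second the encoder has to chew through at each resolution/framerate.
px_1080p60 = 1920 * 1080 * 60
px_720p60  = 1280 * 720 * 60
print(px_1080p60 / px_720p60)   # 2.25
```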

posted about 9 years ago
#9 Guide to Streaming with 2 Computers (2PC Setup) in Q/A Help

Just a few questions.
Are you playing TF2 on 1280*720?
Are you sure using the 3930K for playing and the 4790K for streaming is the best setup? I'd say the 4790K should be able to handle TF2 just fine; depending on how well TF2 deals with 12 threads, maybe even better than the 3930K. I'm not sure how well x264/OBS can take advantage of AVX2 either. Basically, assuming perfect optimization and utilization, your setup would be the correct way, but afaik there are a lot of limitations on 256-bit integer usage in x264, and TF2 doesn't even know what perfect means.
Is the 4790K running at 4.0GHz ignoring turbo boost, or is it "fixed"? Either way you should be running 4.4GHz "fixed"; it's so easy I wouldn't even count it as overclocking, and it should get you a few percent more power.

posted about 9 years ago
#12 GTX 970 in Hardware

Looking only at noise levels, load temperatures and overclocks, my money is still on the MSI 4G and the Gigabyte G1.

I must admit that I haven't really kept up with everything lately.
From memory, might be outdated or simply incorrect:
EVGA had a lot of issues, also a bit low memory OC iirc.
Gainward and Galax temps a bit too high, also on the loud side and coil whine.
Palit temps a bit high and coil whine.
Zotac temps a bit high and coil whine save for the really expensive ones.
Asus Strix: average across the board, no flaws, coil whine levels acceptable and on par with the 4G and G1, but it doesn't OC as high and has higher temps and noise in some cases (case as in housing). Ok, but it didn't impress me and gets beaten by the 4G and G1 in every aspect; granted, it's close in some of them, but it's still "only" 3rd place after the "I don't know which is better" joint 1st place.

posted about 9 years ago
#388 PC Build Thread in Hardware
MR_SLIN: In an effort to condense the number of PC build threads out there and to give people a place to view many TF2 players' PC builds at once, I'd like to suggest that people post in this stickied thread.

Feel free to post your personal build here (and edit your post as you upgrade) either by linking the parts individually or by linking something from a website like www.pcpartpicker.com. It helps if you post the approximate cost of your build as well.

Partlist?

posted about 9 years ago