I’ve thought about related topics in this field (multi-agent reinforcement learning) before, so here’s a partially thought-out answer if you really care.
The most relevant result would be from 2019, when OpenAI Five beat OG, the reigning Dota 2 TI champions, in a best-of-three. This was a big deal: not only is Dota 2 a complex game, but the AI also had to cooperate with the other players on its team while acting adversarially against the enemy team. However, it’s hard in practice to create a TF2 version of this, even if you ignore Sniper/Scout.

The primary barrier is data and computational power. The core idea behind these systems (self-play) is that the AI learns by repeatedly playing games against copies of itself. The amount of compute needed to get the AI to a pretty high level is most likely too expensive for peons. And it’s not just training that’s expensive: it also isn’t cheap to keep the AI running so it can consistently play games against human players. I also don’t know how convincing the Dota 2 match was, since I don’t play that game (it may have suffered the same hallucinations as the early AlphaGo versions), but generally, if you want to improve performance, you just run more iterations of self-play, which costs even more resources. Now, maybe there exist people who have access to these resources and could design a reinforcement learning algorithm for TF2, so it’s theoretically possible.
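To make the self-play idea concrete, here’s a rough sketch of what that training loop looks like. Everything in it (`ToyEnv`, `Policy`, the action names) is a made-up stand-in for illustration, not a real TF2 or Dota API:

```python
import random

class ToyEnv:
    """Toy 2-player environment that ends after a fixed number of steps."""
    num_players = 2

    def reset(self):
        self.t = 0
        return [0.0] * self.num_players  # one dummy observation per player

    def step(self, actions):
        self.t += 1
        rewards = [random.random() for _ in actions]  # placeholder rewards
        done = self.t >= 10
        return [0.0] * self.num_players, rewards, done

class Policy:
    """Toy policy: random actions; a real agent would be a neural network."""
    def act(self, obs):
        return random.choice(["move", "shoot", "capture"])

    def update(self, trajectory):
        pass  # a real implementation would take a gradient step (e.g. PPO) here

def self_play(env, iterations):
    policy = Policy()
    for _ in range(iterations):
        # Every player is controlled by the same policy, so the agent
        # effectively trains by playing games against copies of itself.
        obs, done, trajectory = env.reset(), False, []
        while not done:
            actions = [policy.act(o) for o in obs]
            obs, rewards, done = env.step(actions)
            trajectory.append((actions, rewards))
        policy.update(trajectory)
    return policy

trained = self_play(ToyEnv(), iterations=100)
```

The expensive part in real systems is that `iterations` is enormous and each episode is a full game running in parallel across many machines, which is where the compute bill comes from.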
However, it’s quite an open problem to create an AI that “mimics” real players in games this complex. When these AIs are trained, the objective is to win the game, not to play like a human at some skill level. So if you were able to play against an AI like the one described above, I think you could learn a thing or two, but you would definitely feel that you’re playing against bots and not humans.
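To see why, compare the two objectives. A self-play agent optimizes something like a win/loss return, while mimicking players would need something like behavior cloning against recorded human games. The data formats below are made up for illustration, not from any real codebase:

```python
import math

def rl_return(trajectory):
    """Self-play RL maximizes expected return, e.g. +1 win / -1 loss.
    Nothing in this signal rewards human-like behavior."""
    return sum(reward for _, _, reward in trajectory)

def imitation_loss(policy_probs, human_actions):
    """Behavior cloning minimizes cross-entropy against recorded human
    actions: the agent copies style, but its skill is capped by the dataset."""
    return -sum(math.log(p[a]) for p, a in zip(policy_probs, human_actions))

# Example: the policy puts 90% mass on the action the human actually took.
print(imitation_loss([{"rocket_jump": 0.9, "retreat": 0.1}], ["rocket_jump"]))
```

Nothing forces an agent trained on the first objective to score well on the second, which is why win-optimized bots feel alien even when they’re strong.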
Let me know if you have any questions!