Having participated in competitive programming and compared it to development work, it feels to me like comparing chess tactic puzzles to classical chess. If you're a good classical player you're probably reasonably good at puzzles, but the opposite is not necessarily true.
Competitive coding, despite superficially involving typing code into an editor, has almost nothing to do with working on large pieces of software. It's a lot of rote memorisation: learning algorithms, matching them onto very particular problems, and so on. It's more of a sport. Just like playing too much bullet chess can be bad for your classical chess, I can honestly see how it gets in the way of collaborative work.
It's actually more subtle. Yes, chess is very tactical, but the way you approach tactics in a puzzle is very different from how you mentally approach tactics in a game of chess.
If you already know that there is a tactic in the position, your entire frame of reference changes. This is why puzzle composition is treated very differently from actual play, and a lot of famous composers are not particularly strong players.
This is why I feel it compares well to coding competitions. It looks so similar, but the mindset is very different. And looking only at tactics, just like looking at coding only as a game problem, is, I think, why it may damage your performance at work.
The terminology associated with chess challenges [to use a neutral term] is unfortunate.
"Chess problem" is a term of art that refers to an artificial composed position with a unique solution that is constructed to both be a challenge to the solver and have aesthetic value. They often have constraints on the solution such as that White must deliver checkmate in two moves (three ply). This is what I assume you're referring to.
A position from an actual game (or that easily could have been) that demonstrates a tactic (or combination of them) is generally known as a "chess puzzle", largely because the term "chess problem" was already squatted on.
Somewhere in between the two is the "study": a constructed position, less artificial than a chess problem, but still very carefully made to have a unique solution that walks a tightrope and generally requires absolutely exact calculation rather than working by general tactical principles.
At lower levels like where I'm at, players are prone to mistakes and blunders, so having a good eye for tactics allows you to take advantage of those moments in the game as well as prevent yourself from getting into a bad situation.
But at elite levels, tactics have less importance (in the video he estimates it drops to 50%) as every player at that level is extremely solid.
I was very good at competitions, but terrible at rote memorization, including memorizing algorithms and matching them to particular problems. I'd just create the algorithms on the fly. E.g. I was presented with a maze-solving problem, had never read about maze solving before, and just created my own version of it.
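For concreteness: a maze solver invented from scratch like that usually ends up reinventing breadth-first search. Here's a minimal sketch of what that might look like; the solve_maze function, the grid encoding, and the example maze are my own illustrative assumptions, not the commenter's actual solution:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.

    grid: list of strings where '#' marks a wall and any other
          character is an open cell.
    start, goal: (row, col) tuples.
    Returns a shortest path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parent links back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

maze = [
    "S..#",
    ".#.#",
    "...G",
]
print(solve_maze(maze, (0, 0), (2, 3)))
```

Because BFS explores cells in order of distance from the start, the first time it dequeues the goal the reconstructed path is guaranteed to be a shortest one, with no algorithm textbook required.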
It's easy to make generalizations that minimize or downplay some of these things. But it's no more knowledge than the original study on too little data was.