I'm not sure asking multiple questions is OK here - most stacks require that you ask a single, clear question. Also, this appears to be a joke question rather than a genuine philosophical inquiry. However, it does touch on some common misapprehensions, so it's worth answering.
The Operator is the Sixth Victim (Moral Injury)
[...] I see the lever operator not as an agent of power, but as a victim. [...]
The operator is both. They are the one who gets to make the decision (an agent of power). The decision to act may be hard and morally injurious (a victim), and that moral quandary is exactly what the question is about.
Is it fair to apply categories of "guilt" or "virtue" to someone forcibly placed in a state of zugzwang? Isn't the act of coercion a greater evil than the choice itself?
This claim of "zugzwang" is false: zugzwang means a compulsion to move, and no such compulsion is intended or implied in the problem; there is no "coercion" to pull the lever.
I often see naifs acting as if there were a "correct" answer to this question, as if it were not a dilemma. The OP shows signs of this misapprehension by assuming that there is a correct answer that one is forced to take.
In reality, a "runaway trolley" is the result of negligence, malicious intent or bad design. By forcing us to fixate on the lever, we absolve the system's creators.
The dilemma is: would you personally want to act, taking on moral responsibility, to cause N bad outcomes; or would you prefer inaction, permitting M >> N bad outcomes instead?
The trolley, lever, tracks, ropes and even deaths are "color text". They are there to frame the question in such a way as to prevent answerers from trying to dodge the core dilemma, but are not otherwise relevant to that dilemma. Anyone who will "fixate on the lever" is looking in the wrong place and misunderstanding the question.
And once again, nothing is "forcing" someone to focus on these irrelevancies; that's a "them" problem, a hyper-literalist failure to comprehend the problem as given.
A logically equivalent question, different only in its color text and faced tens of thousands of times a day in real life, is: as a doctor, do you administer a lethal drug rather than a cure to one organ donor with a rare phenotype, if doing so will allow their organs to save five lives?
Resolving this dilemma is why we have rules like "first do no harm", "informed consent", and "bodily autonomy".
Note how the outcome that is near-universally accepted as morally correct in medical reality turns out to be inaction; this disproves anyone who believes that this is a case of "zugzwang". In real-life equivalents of the dilemma, you are ethically required not to deviate from the expected outcome.
The trolley framing is intended to avoid the nit-picks and niggles about the framing that would happen in more realistic scenarios like this: "The Hippocratic oath exists", "You can't guarantee 100% efficacy of the cure, the poison, or the organ transplants!", "What if there are other, more eligible organ donors?", and so on ad infinitum. People, given a dilemma, will always look for a way to create a third option, a way to weasel out of having to make a choice.
The trolley problem places the dilemma on literal rails in an effort to avoid this issue. The lever provides a mechanism that allows for no third option. The deaths stand in for "unquestionably bad moral outcomes". Even with the choice framed as binary as possible, you will see that people asked the question still try to find alternatives: "derail the trolley by switching the lever at just the right time!", "maybe the lives of the people dying aren't all as morally important!", "I would have acted much earlier, to untie them!", and so on.
Who is the "Invisible Killer" that created this situation, and why does philosophy focus on the victim at the lever rather than the architect of the disaster? Somebody tied people to the rails or created the system without brakes, but we tend not to notice this person in the shadow.
There is no invisible killer. Not even in the hypothetical universe of the person making the choice.
This is the nature of hypothetical questions like this: they are restricted solely to the context given. There is no additional context that may be gleaned or inferred.
There is no prime mover, as the past does not exist: the entire scenario is formed ex nihilo at the moment of decision. The rest of the world must be treated as if it doesn't exist; perhaps an infinite empty plane for the rails and lever to rest on may be imagined, but not even that is permissible if the answerer tries to make the existence of that plane relevant to their answer, such as "if there's nobody else on the plane, maximizing population is essential to species survival, so..." or "if there's nothing else on the plane, we'll need something to eat, so...".
So to answer some of the questions asked:
Is this problem somehow not "fair"?
Mu. This is like asking whether the question "Do you want one apple, or seven apples?" is fair. Fairness doesn't even come into it. Why would we have any expectation of fairness? To whom would it be unfair? This is a non sequitur.
Is this problem "a form of institutional gaslighting"?
No. To be institutional, there'd need to be some institution (perhaps the loose brotherhood of philosophers?) responsible.
To be gaslighting, it would have to be deliberately making us believe something which was patently untrue, and I see no sign of it making us believe anything at all. Instead, it calls us to examine our beliefs.
Does it "train us to accept responsibility for systemic failures"?
No, and if anyone experienced that outcome it would be through their own failure to understand the concept of hypothetical dilemmas and framing: a "them" problem. The same goes for anyone driven by this dilemma to go around punching train engineers, or who developed an unhealthy fixation with levers.
Any mental breakdown of an answerer is not the responsibility of this dilemma, nor of those who pose it, as mental harm is not an outcome that could reasonably be expected. And if harm can't reasonably be expected, then the responsibility for not going bonkers lies with the owner of the brain.