We at GetReal Security are fielding questions about a slew of videos emerging from the shooting of Charlie Kirk at Utah Valley University. We have analyzed several of the videos circulating online and find no evidence of manipulation or tampering. At the same time, we are seeing some AI-generated videos of the event that are clearly fake. (I am not linking to the real or fake content; the real content is incredibly graphic, and I don't want to give air-time to the fake content.) This is an example of how fake content can muddy the waters and, in turn, cast doubt on legitimate content.
It could be that I'm being too critical, or that I'm outside your target audience. I agree the videos are horrific. But I think this is the real stress test for a company like yours. To be known as the authority in deepfake identification, I would expect nothing but stoicism and duty to the job when something like this happens. God forbid something like this happens again. Do I turn to GetReal for my facts? Do I tell other people to trust GetReal? We need the facts!
I really respect GetReal's efforts, but transparency is essential in combating misinformation. Forgive me, but the notion that verifying something as false amounts to promoting it seems seriously out of touch with your own mission statement. If you verify that a video is fake, people will use that finding to combat misinformation, especially on X, where Community Notes exist and these videos are propagating. New viewers would arrive informed. Instead, the strategy is to let it fly under the radar to avoid giving it "air-time"? What are we even doing here?