Yes, AIs as they are do need oversight. But it's not possible to do this in real time without AIs. And issuing corrections afterwards when AIs make mistakes is far better than just letting politicians get away with blatant lying. Also, as long as they're supervised, any lines can be vetoed if the supervisor thinks they may be off, keeping the corrections and source statements conservative, since it's obviously better to be silent than to be wrong for this sort of thing.
And the earlier such projects start, the more we can learn to do it better as AIs get better, as well as recognize signs of the AI hallucinating.