With the rapid advances we’re currently seeing in generative AI, we’re also seeing a lot of concern about large-scale misinformation. Any individual with sufficient technical knowledge can now spam a forum with lots of organic-looking voices and generate photos to back them up. Has anyone given some thought to how we can combat this? If so, what do you think the solution should/could look like? How do you personally decide whether you’re looking at a trustworthy source of information? Do you think your approach works, or are there still problems with it?
This brings us to the issue of being reliant on one entity (Bing) to decide whether a source is reliable. How do we know this entity can be trusted, and how would we know if that ever changes? Even assuming we can trust them, this just passes the problem on to someone else: how would this entity decide whether sources are reliable or not before feeding them to us?
Can you elaborate a bit on what you mean by metadata on generated images? What kind of metadata, and what can you do with it?
While you have a point, I’d say Bing’s footnotes are much better than BARD just providing a definitive answer. With footnotes, you can follow Bing’s logic through the sources provided and make your own call on whether it got things right or wrong, whereas BARD is a black box that Google hopes we’ll implicitly trust.
As for the metadata… it at least credits the generator/source in the same way a camera embeds details about the hardware/software used to capture the image. It’s not immediately useful to end users, but if we align on a standard ASAP, those files can be flagged as Gen-AI output across different ecosystems and then tagged or filtered based on personal/platform preferences. Like the search answers, it’s not the be-all and end-all solution to misinformation, but it gives users a chance to make their own decision instead of trusting anything placed in front of them. And it gives platforms a way to scan uploads for Gen-AI content and deal with it as they see fit.
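To make that concrete, here’s a rough sketch (Python with Pillow) of the kind of check a platform could run against the metadata that’s already embedded in images today. The generator names and the file name are just placeholders I made up for illustration, and an agreed-upon standard would presumably rely on signed, tamper-evident fields rather than free-text tags like these:

```python
# Rough sketch, not any particular standard: look for a hint in an image's
# embedded metadata that it came from a generative-AI tool.
# Assumes Pillow is installed; GENERATOR_HINTS and the file name are
# hypothetical placeholders, not an official registry.
from PIL import Image

GENERATOR_HINTS = ("dall-e", "midjourney", "stable diffusion")

def looks_generated(path: str) -> bool:
    img = Image.open(path)

    # EXIF "Software" (tag 305) is where cameras/editors typically record
    # the tool that produced the file; a generator could write itself here.
    software = str(img.getexif().get(305, ""))

    # PNG text chunks (exposed via img.info) are another spot where tools
    # sometimes stash provenance notes.
    extra = " ".join(v for v in img.info.values() if isinstance(v, str))

    blob = f"{software} {extra}".lower()
    return any(hint in blob for hint in GENERATOR_HINTS)

if __name__ == "__main__":
    # Replace with a real file path to try it out.
    print(looks_generated("example.png"))
```

Obviously this only works if the generator cooperates and nobody strips the tags, which is exactly why aligning on a standard (and having platforms enforce it) matters more than any individual check.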