I don't use ANY media for news anymore. I use LLMs. They are trained to seek the truth, avoid hyperbole and opinion, just the facts. I'll read something here with skepticism. Cut and paste the post into an LLM and get the facts.
Similar to you, I use ChatGPT daily instead of Google, but I am curious about the bolded part. My assumption (which could be wrong) is that most LLMs are aggregators, so how would one know whether the news they're searching for or the questions they're asking are returning facts? I have seen a few examples of questions that returned what seemed like fact but wasn't, because the answer was pulled from an inaccurate site. Does your confidence relate to the LLM being used?
Hey look, @belljr.
I decided to drop it in the other thread and not bring it here, but here it is anyway. Thanks, Yambag. Just know I never once visited the political forum, not because of the topic, but because of the tone. I've never had so much as a warning here. I hate online arguments, and I apologize for partly triggering one with you by using the laughing emoji. MY BAD.
For anyone reading, I'll briefly explain. In the Iran thread, @Chadstroma is hating on western media, something I did in the early days of the first Ukraine thread that got deleted. Chad has curated experts he trusts. I've used AI to fact-check some of his more detailed posts, and accordingly, he's posting quality information. Both of us are fed up with western media, from the seemingly professional mainstream to the obviously sensational, biased clickbait on the socials. He's relying on human experts. My expert is AI. For now it's highly likely his are better than mine. Eventually my genius buddies will probably be better than any human experts. That's my belief, atm.
So in that thread, belljr and I did the opposing-AI thing on the topic of using AI as a news source. We temporarily derailed the thread, and I apologized for that too.
So why do I use LLMs for news? Same answer I gave STEADYMOBBIN above for why I've replaced Google with LLMs: speed, accuracy, simplicity. I'm lazy but want to stay reasonably informed. I think the tech is incredible, so I use it. belljr's AI DOES NOT recommend using AI for news. Mine does. His gives a long list of reasons why AI can make mistakes. They're all true. It doesn't change my opinion, for a couple of reasons.
One is explained above already. Chadstroma and I are fed up with western media. He's just not as lazy as me and listens to those experts he's found. What belljr's AI doesn't consider is how awful and untrustworthy human reporting has become. For all its potential flaws, AI seems better than humans at avoiding bias and at fact-checking. It isn't perfect by any stretch, but it is improving rapidly. I go for speed and simplicity and feel reasonably informed. Chad goes for experts and seems very well-informed.
The other reason is that I've put in the effort to finish a couple of prompt engineering courses. Stanford and Anthropic have taught me to prompt AI for the best results. My LLMs know me at this point, so I no longer have to write "educated" prompts as long as it's part of an ongoing conversation. If I ask for current information about China's submarine manufacturing, the LLMs know I want them to follow the 7 principles of prompt engineering. My LLMs are better than yours, so I'll use AI to give a brief overview of prompt engineering.
This is a reply to a simple question about the importance of prompt engineering for using AI as a news source:
Yes, crafting quality prompts is critical for getting accurate and reliable news reporting from large language models (LLMs). From a prompt engineering perspective, well-designed prompts ensure LLMs produce outputs that are factually correct, contextually relevant, and free from bias or speculation.
The 7 principles are:
- Specificity: Define the topic, scope, and desired output clearly. E.g., “Summarize the logistics of moving 400 kg of 60% enriched uranium in Iran, using IAEA and nuclear industry sources, in 100 words.”
- Source Guidance: Instruct LLMs to use authoritative, diverse sources (e.g., “Cite IAEA for Iran’s nuclear activities”). This counters bias and ensures credibility.
- Neutrality: Use neutral language to avoid priming bias. E.g., “Describe Iran’s uranium storage” vs. “Detail Iran’s secret nuclear stockpile.”
- Fact-Checking Directive: Include instructions to verify claims, e.g., “Cross-reference with IAEA reports and note unverified claims.”
- Constraints: Set limits on tone, length, and style, e.g., “Provide a factual, 50-word summary without speculative language.”
- Context Provision: Supply background if needed, e.g., “In June 2025, Israel struck Fordow; summarize uranium transport risks.”
- Iterative Refinement: Test and adjust prompts based on outputs to improve accuracy, e.g., adding “exclude X posts” if speculative sources appear.
A prompt incorporating most of them looks like this:
Summarize the key events of the Israel-Iran conflict from June 13-25, 2025, focusing on military actions, nuclear facility strikes, and ceasefire outcomes. Use verified sources like IAEA, prioritizing factual data over speculative claims. Provide a neutral, 100-word summary, citing casualty figures, major targets (e.g., Fordow), and diplomatic efforts. Note any conflicting narratives (e.g., U.S. vs. Iranian assessments) and avoid sensational language. Cross-check with satellite imagery or official reports for accuracy.
5 of 7 are covered there. I could do better, but I'm lazy. The key is that once the LLM knows you want to filter bias and BS, it continues to do its best without you needing to repeat the principles. Sorry this is so long. I tried to keep it basic and hopefully helpful.
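If anyone wants to bake these principles in once instead of retyping them, here's a rough sketch of the idea in Python. To be clear, this is a hypothetical example, not how I actually do it (I just use the chat window): it assumes the OpenAI Python client, and the model name, wording, and helper function are placeholders.

```python
# Rough sketch: encode the 7 prompting principles once as standing instructions,
# then reuse them for any news question. Hypothetical example using the OpenAI
# Python client; the model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Principles 1-7 baked into a system message so you don't repeat them per question.
PRINCIPLES = (
    "Be specific and stay on the asked topic. "
    "Prefer authoritative, diverse sources (e.g., IAEA for nuclear topics) and cite them. "
    "Use neutral language; no loaded framing. "
    "Cross-check claims and flag anything unverified or conflicting. "
    "Respect requested length, tone, and format. "
    "Use any background context provided. "
    "If sources look speculative, say so rather than guessing."
)

def news_summary(question: str, words: int = 100) -> str:
    """Ask a factual news question with the standing principles applied."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PRINCIPLES},
            {"role": "user", "content": f"{question} Keep it to about {words} words."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(news_summary(
        "Summarize the key events of the Israel-Iran conflict from June 13-25, 2025, "
        "focusing on military actions, nuclear facility strikes, and ceasefire outcomes. "
        "Note conflicting narratives and avoid sensational language."
    ))
```

The point is just that the standing instructions carry the principles, so each individual question can stay short, which is basically what an ongoing chat conversation does for you anyway.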