I'd rather read a dyslexic text than anything AI related.
I also support not calling it AI, but that is the term people understand, sadly.
Suggestion: Why not create an "AI" downvote button for comments?
It would let users flag comments and their images as AI-generated. Users would be told that such content is not wanted here; on a second offence they'd get a week-long ban.
I support a ban, because only a very few users would keep posting such crap; most others would never do it intentionally, as they understand that a forum's worth lies in its real users, their thoughts, and reality.
Yeah, generative AI has been demonstrated to "hallucinate" (for anyone unaware of what this means, it is a phenomenon where an LLM seemingly invents baseless information/claims). One of my law professors was the first to tell me about this several months ago.
It's also worth noting that humans "hallucinate", too (insofar as they share bad info, knowingly or not). People confidently repeat incorrect information all the time (a misread article, bad source on Google, something they misremembered from years ago, genuine delusional thought patterns, etc). The real crux of the issue is if the information posted is accurate and properly vetted. If one copies/pastes information without critical analysis, that’s a user failure, and the same thing happened prior to the existence of generative AI.
As I alluded to in my first post on the topic, I think it's more helpful to focus on the accuracy/quality of the content within a given post, rather than worrying about how it was written. AI detection tools are prone to false positives because the AI itself was trained on human-created content and human language patterns, so of course it can (and does) sound human; it was quite literally trained to do so. On the flip side, humans can communicate in a formulaic way, too.
IMHO we should judge based on substance: are the claims accurate, is the reasoning sound, does the argument hold up, and so forth. That standard is useful regardless of whether someone used an LLM, a search engine, a print publication, their own memory, etc., to acquire the information they are sharing.
OK, so what about posts that quote Gemini (the Google summary bot)? The summary includes links to where it got the info from.
The version of Gemini that Google uses in its search is either an older one or a faster, sillier one, so it is a lot worse than ChatGPT...
The latter also cites sources, when there is anything to cite...
Currently there is no LLM that is trustworthy, and there won't be one for a while, if ever.
This is a forum for humans, and part of the fun is that the clever ones think about what they have read, distill that info, even add to it from their own knowledge, and we get to read the result.
"AI" takes all of that away. Hence it is terrible for forums like this.
And I'd like to add: if you post such AI crap, new AI bots will read it, take that as confirmation that it is good info, and recite it next time.
There are already studies suggesting this is, in a way, the end of the internet: if AI learns from its own crap, it will only get worse.
It should be banned. No one wants to read LLM hallucinations on a discussion forum. We can ask LLMs ourselves if we want to.
It is especially frustrating when someone uses an LLM to support their statements; it's not a source of truth and it shouldn't be treated as such.
I find that to be more often here than not. A while ago someone arguing with me about something I knew they were wrong about admitted they came up with their idea after having “conversations” with ChatGPT. Like, leave the basement sometime people…
Maybe a call to common sense:
a) please don't post AI content
b) if you want to quote text that looks like a PowerPoint presentation… it is AI, please don't post it?
One can hope…
I think your point a) is great. There should be no AI posts here. People can check their opinions against AI, use it for calculations, or use it to find sources; AI is good for all of that. But posting everything in your own words should be the minimum.
I fail to see where people cite purely imagined X posts... I think we get good quotes here, especially from X.
On the other hand: there are no AI-free sources anymore. At my company we write all posts with AI, and most journalists on X will soon do the same. There are also hardly any online journals left that don't use AI for their articles.
But this is fine; I just think posts in a forum should be in your own words.
My suggestion is... AI content is allowed, but only if flagged/labelled as AI. Not that AI content is desired, but it can be posted. What is a good example of why it should be allowed?
Yes, I think if someone pastes a sentence or two from ChatGPT with "ChatGPT gave me this", that is totally OK. Better than some imagined opinion. But it should not spiral back and forth into pages full of AI quotes.
That said, if we’re going to shoot down AI, we should remove kudos farming for people who just share photos or tweets. I think the point is to reward original content / thought, not sharing other people’s.
The problem is that what we call AI these days, the LLMs, are probabilistic models at their core. They don't really know what "truth" is; they have no concept of it. Hallucinations stem from this probabilistic nature combined with the built-in "pressure" to provide an answer. LLMs today will not tell you "I don't know, because I don't have enough data to give you an accurate answer". They will just provide an answer anyway, based on the incomplete training data they have, connecting words that are more likely to "fit" together given the prompt.

There's also heavy prompt bias. If you use an LLM to write comments for you, then what it spews out depends entirely on the tone of how you asked it. You can ask it the same question four times and get four different answers depending on how you phrased it, because it detects what the user "wants" from the phrasing. They're better at "standing their ground" nowadays, but this is still a problem when they're made to answer questions where there's not enough data in the information bank (like deep F1 technical questions). If you ask about the aerodynamic purpose of a specific winglet on a car, it will revert to general "technical-sounding" language that could be broadly correct, but may also be entirely fabricated.
In those cases they just turn into "yes-man" models that will pretty much always align with the prompter's view, even at the expense of accuracy (there are some hard limits on sensitive topics, of course). So it's pointless to have that content here, because it brings no value, at least not as answers to technical questions where the prompter lacks the knowledge to fact-check the response.
And I don't know about the rest of you, but for me personally, it's usually quite easy to spot comments that were written entirely by ChatGPT.
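To make the "probabilistic model" point concrete, here is a minimal, purely illustrative Python sketch; the probability table and the winglet prompt are made up and not taken from any real model. The idea it shows is just this: an LLM scores possible next tokens and samples one, so the same question can yield different answers on each run, and abstaining is not an option unless it happens to be a likely continuation.

```python
import random

# Hypothetical probabilities a model might assign to the next word after
# "The purpose of this winglet is to ..." (made-up numbers for illustration).
next_token_probs = {
    "generate": 0.35,
    "reduce": 0.30,
    "manage": 0.20,
    "condition": 0.10,
    "seal": 0.05,
}

def sample_next_token(probs, temperature=1.0):
    """Pick one token at random, weighted by probability.
    Higher temperature flattens the weights, so repeated calls
    with the same prompt drift apart more."""
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    running = 0.0
    for token, w in weights.items():
        running += w
        if r <= running:
            return token
    return token  # fallback for floating-point edge cases

# "Ask" the same question four times: four samples, possibly four different
# answers, and never "I don't know" -- abstaining isn't in the vocabulary here.
for _ in range(4):
    print(sample_next_token(next_token_probs, temperature=1.2))
```

Real models compute these probabilities with a neural network over a huge vocabulary rather than a hand-written table, but the sampling step, and the fact that it always produces something, is the same basic mechanism.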
I see them as advanced (at worst, glorified) search engines. Use them as such, and they can be useful. Caveat emptor: the convenience of a quick answer can come at the cost of accuracy. Do your own research, and think your own thoughts, as ever. "Outsource your cognitive effort to Silicon Valley server farms," said the Silicon Valley outsourcing barons.

Even the image generators seem to operate by iterating against huge databases of reference material: cycle thousands of times and the image becomes highly polished. It's a brute-force application of processing speed; there are a billion billionths of a second in every second, believe it or not, which is not intuitively graspable, to me at least. But the computer still has to burn through 50k images to get image 50,001. Some will say that's inefficient, others will say it's efficient.
Someone posted an article from a reputable website that had LLM-sounding text. It may be leaking into journalism and other forms of writing; no surprise if it has.
My take on AI-generated content in forums
I think the recent flood of AI-generated posts shows we need some structure around this. AI can be useful for summarizing technical info or answering questions, but when it overwhelms genuine discussion, it hurts the community.
Possible approach:
- Transparency: Require users to disclose if a post is AI-assisted.
- Filtering: Give members an option to hide AI-tagged posts.
- Moderation: Allow mods to remove low-quality or spammy AI content.
- Dedicated space: If people want to share AI summaries, create a separate section for that.
Posts quoting Gemini or similar bots with sources might be okay since they provide verifiable links, but disclosure is still important.
The goal isn’t to ban AI completely, but to keep the forum readable and authentic. What do you all think—should we push for tagging and filtering as a first step?
Why not post the source directly? I find articles using Google searches, but I don't tell you that I used Google to find them. Why wouldn't it be the same when using AI to find content?