• Kwakigra@beehaw.org · 2 months ago

I have two main thoughts on this:

1. LLMs are not, at this time, reliable sources of factual information. The user may be getting something skimmed from factual sources, but the output is often incorrect because the machine can’t “understand” the information it’s outputting.

2. This could potentially be an excellent way to do real research for people whose education never taught them research skills. Conspiracy theorists often start out curious but undisciplined before they fall into the identity aspects of the theories. If a machine using human-like language could report factual information quickly, reliably, and without judgement to people who couldn’t find that information on their own, it could actually be a very useful tool.