You are not doing science research if you stuff an LLM into the critical path of your experiments. You are, instead, producing science-shaped artifacts with the same peripheral relationship to science that LLM text output has to truth.

The reason for this shouldn't be hard to see, but apparently it is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM into what should be the observation part, you are not performing observations relevant to your hypothesis; you are filtering what might have been observations through a black box. If you knew how to deconvolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3) (A sketch of what that would even require is at the end of this post.)

If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it; at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)

If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list (https://retractionwatch.com), not communicating.

#science #LLM #AI #GenAI #GenerativeAI #AIHype #hype

(1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is its backbone, however messy it becomes in real-world practice.
(2) A large number of computer scientists are very sloppy about this process--and I have been in the past too--but that does not mean it should be condoned.
(3) Things are so dire that very few people even seem to entertain the thought that this is something you should try to do.
(4) Yes, you might discover something while watching the LLM glop, but that's *you*, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.
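To make the point in footnote (3) concrete, here is a minimal sketch of the one setting where undoing a response function is actually tractable: classical Wiener deconvolution, which works only when the response is linear, time-invariant, and *known*. Everything here (the toy signal, the Gaussian kernel, the assumed SNR value) is an illustrative assumption, not anything from a real pipeline. An LLM satisfies none of these conditions: its "response function" is nonlinear, stochastic, prompt-dependent, and unpublished.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# The signal you actually care about: what the experiment would have shown you.
signal = np.sin(np.linspace(0, 8 * np.pi, n))

# A known, linear, time-invariant response function (a Gaussian blur kernel).
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
kernel /= kernel.sum()

# What you observe: the signal filtered through the response, plus noise.
H = np.fft.fft(np.fft.ifftshift(kernel))
observed = np.real(np.fft.ifft(np.fft.fft(signal) * H))
observed += 0.01 * rng.standard_normal(n)

# Wiener deconvolution -- possible ONLY because H is known and linear.
snr = 1e4  # assumed signal-to-noise power ratio; illustrative, not measured
G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
recovered = np.real(np.fft.ifft(np.fft.fft(observed) * G))

# Recovery is near-perfect here; with an unknown, nonlinear, stochastic
# black box in place of H, no analogous inverse filter exists.
print(f"correlation(signal, recovered) = {np.corrcoef(signal, recovered)[0, 1]:.4f}")
```

The point of the sketch is the asymmetry: even in this best case you need the exact transfer function H before you can invert it. Nobody has anything like H for an LLM, which is why filtering observations through one destroys their evidentiary value rather than merely attenuating it.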