Large Language Models have taken the cognitive science world by storm. It is
perhaps timely now to take stock of the various research paradigms that have
been used to make scientific inferences about ``cognition'' in these models or
about human cognition. We review several emerging research paradigms --
GPT-ology, LLMs-as-computational-models, and
``silicon sampling'' -- and survey
recent papers that have used LLMs under these paradigms. In doing so, we
discuss their claims as well as challenges to scientific inference under these
various paradigms. We highlight several outstanding issues about LLMs that have
to be addressed to push our science forward: closed-source vs. open-source
models; (the lack of visibility of) training data; and reproducibility in LLM
research, including forming conventions on new task ``hyperparameters'' like
instructions and prompts.