I distinguish four ways in which (rational) skills can be said to be emergent in Large Language Models (LLMs): scaling emergence, curricular emergence, information-theoretic emergence, and ontological emergence. Against the background of Ryle and Wittgenstein, I will adapt the "theory for emergence of complex skills in language models" recently proposed by Arora & Goyal (2023) to sketch a unified account of emergence that may advance our understanding of LLMs as intelligent machines, settle the stochastic parrots controversy, and allow for empirically testing philosophical theories about the relation between language and thought. A detailed handout underlying the talk will be made available in advance here: https://ggbetz.short.gy/erllm