In the previous two sections of this module, we outlined four key insights into how Large Language Models (LLMs) construct their responses:
LLMs are initially reactive: they depend entirely on the input given to them to begin generating output.
LLMs are probabilistic: the output they present to one user can differ markedly from the output they present to another, even when the same prompt is used (illustrated in the sketch after this list).
LLMs do not have a long-term vision: they generate text one token at a time, taking into account only what has already been produced, not an overarching plan for what should logically come next.
LLMs are heavily context-dependent: all previously entered and generated text influences each new response.
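To see how these properties interact, consider the following minimal Python sketch. It is purely illustrative: the vocabulary and probabilities (`NEXT_TOKEN_PROBS`) are invented, and where a real LLM conditions on its entire context window, this toy conditions only on the most recent token. Even so, it demonstrates reactive, probabilistic, one-token-at-a-time generation.

```python
import random

# Toy next-token probabilities. The vocabulary and numbers are invented
# for illustration; a real LLM learns a distribution over tens of
# thousands of tokens and conditions on its entire context window,
# not just the most recent token as this simplified sketch does.
NEXT_TOKEN_PROBS = {
    "Once": {"upon": 0.8, "more": 0.2},
    "upon": {"a": 0.9, "the": 0.1},
    "a": {"time": 0.6, "hill": 0.4},
    "time": {"<end>": 1.0},
    "more": {"<end>": 1.0},
    "the": {"<end>": 1.0},
    "hill": {"<end>": 1.0},
}

def generate(prompt: str) -> str:
    """Extend the prompt one token at a time until <end> is sampled."""
    tokens = prompt.split()
    while True:
        # Reactive: the model only acts on the context it has so far.
        candidates = NEXT_TOKEN_PROBS[tokens[-1]]
        # Probabilistic: the next token is sampled, not chosen
        # deterministically, so repeated runs can diverge.
        next_token = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        if next_token == "<end>":
            return " ".join(tokens)
        # No long-term vision: each step looks backward at what has
        # been produced, never forward at a planned ending.
        tokens.append(next_token)

# The same prompt can yield a different sentence on each run.
for _ in range(3):
    print(generate("Once"))
```

Running the loop several times typically prints different completions of the same prompt, which is exactly the probabilistic behaviour described in the second insight above.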
These insights give rise to several limitations in GenAI output. In this section, we will explore four of them.