Reflecting on the insights gained from learning about the limitations of text-based Generative AI (LLMs), you can use a framework for interacting with these models that works with the way they create output. The CLEAR framework by Leo S. Lo is one such framework (see this paper for more information). It states that any time you interact with an LLM, you and your prompt should be:
Concise - give no more information than needed, leave enough room for a proper response, and avoid unnecessary context; otherwise, the model might get 'distracted';
Logical - write out a plan for the structure of the response to compensate for the model's lack of logical reasoning, especially when you want a long response;
Explicit - give clear instructions as to what you want to see in the response, directing the attention mechanism to the most important aspects of your prompt. You can also specify how you want the model to format the output;
Adaptive - do not reuse the same prompt across different topics, to account for potential gaps in the model's training data and pattern recognition. Try out different formulations and frames for your prompt to get the model to produce different outcomes;
Reflective - weigh the incremental value of a follow-up prompt to decide whether the current conversation session (context) can still lead to better results. Especially at the start of a conversation, following up with adjustments or specifications to your initial prompt can help fine-tune the output.
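To make the first three principles concrete, here is a minimal sketch of how a prompt might be assembled programmatically so that it stays concise, states a logical response structure, and gives explicit formatting instructions. The helper function and its parameters are hypothetical illustrations, not part of the CLEAR framework itself.

```python
# Hypothetical helper: assemble a prompt that follows the Concise,
# Logical, and Explicit principles of the CLEAR framework.
def build_clear_prompt(task: str, structure: list[str], output_format: str) -> str:
    """Combine a concise task description, an explicit response
    structure, and a desired output format into one prompt string."""
    # Logical: spell out the structure of the response up front,
    # as a numbered outline the model can follow.
    outline = "\n".join(f"{i}. {step}" for i, step in enumerate(structure, start=1))
    # Explicit: state the task and the output format directly;
    # Concise: include nothing beyond these three elements.
    return (
        f"Task: {task}\n"
        f"Structure your answer as:\n{outline}\n"
        f"Format: {output_format}"
    )

prompt = build_clear_prompt(
    task="Summarize the CLEAR framework for prompting LLMs.",
    structure=["One-sentence definition", "The five principles", "A short example"],
    output_format="A bulleted list, no more than 150 words.",
)
print(prompt)
```

The resulting string can be sent to any chat model; the point is that the structure and format requests are made explicit rather than left for the model to infer.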