However, do keep in mind that LLMs regularly pull language and library features out of their asses that have no counterpart in practice. I’d use the LLMs to generate small snippets of code, giving them a small and restricted set of requirements to minimize hallucinations.
Yeah, I’ve encountered that as well (depending on the LLM model). Mostly, it’s enough to just feed the exception output back into the LLM thread and it will fix its bugs, or at least tell you why that exception normally occurs.
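For what it’s worth, that feedback loop is easy to script. Here’s a minimal sketch of the idea, assuming a hypothetical `ask_llm` helper standing in for whatever chat API you actually use (the function names and prompt wording are just illustrative, not a real library):

```python
import subprocess
import sys


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a call to your actual LLM client."""
    raise NotImplementedError("wire this up to your LLM of choice")


def debug_loop(source_path: str, max_rounds: int = 3) -> None:
    """Run a script; on failure, feed the traceback back to the LLM
    and write back its suggested fix, up to max_rounds times."""
    for _ in range(max_rounds):
        result = subprocess.run(
            [sys.executable, source_path],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            print("Script ran cleanly.")
            return

        with open(source_path) as f:
            code = f.read()

        # Feed the raw exception output back into the same thread/prompt.
        fixed = ask_llm(
            "This code raised the following exception. "
            "Explain the cause and return a corrected version.\n\n"
            f"--- code ---\n{code}\n\n--- traceback ---\n{result.stderr}"
        )

        with open(source_path, "w") as f:
            f.write(fixed)

    print(f"Still failing after {max_rounds} rounds; time to debug it yourself.")
```

Obviously you still want to review what it writes back, for exactly the hallucination reasons above.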