To be fair, not all of an LLM's knowledge comes from its training data. The other way is to provide context alongside the instructions.

I can imagine someone someday developing a decent way for LLMs to write down their mistakes in a database, plus some clever way to recall the most relevant memories when needed.
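Something like that is already feasible as a retrieval step in front of the prompt. Here's a rough sketch of the idea; the `MistakeMemory` class is made up for illustration, and the bag-of-words cosine similarity is just a stand-in (a real system would use embedding vectors and a vector index):

```python
# Sketch: persist "lessons learned" in SQLite and recall the most
# relevant ones for a new task, to be prepended to the prompt.
import math
import re
import sqlite3
from collections import Counter


def _tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def _cosine(a: Counter, b: Counter) -> float:
    # Toy similarity over word counts; a real system would compare embeddings.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


class MistakeMemory:
    def __init__(self, path: str = "memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS mistakes (context TEXT, lesson TEXT)"
        )

    def record(self, context: str, lesson: str) -> None:
        # Called whenever the model's output gets corrected.
        self.db.execute("INSERT INTO mistakes VALUES (?, ?)", (context, lesson))
        self.db.commit()

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Return the k lessons whose original context best matches the
        # current task, ready to be injected as extra prompt context.
        rows = self.db.execute("SELECT context, lesson FROM mistakes").fetchall()
        q = _tokens(query)
        scored = sorted(rows, key=lambda r: _cosine(q, _tokens(r[0])), reverse=True)
        return [lesson for _, lesson in scored[:k]]
```

So the loop would be: record a lesson each time the model gets something wrong, then call `recall()` on the next task and stuff the results into the context window. The hard part isn't the storage, it's deciding what counts as "relevant".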
There are already existing approaches tackling this problem: https://github.com/MemTensor/MemOS