• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: March 31st, 2024



  • I don’t think it will make enough difference, but RAG stands for Retrieval Augmented Generation.

    There are a few ways to do it, but basically it’s a way to add extra information to the conversation. By default the model only knows what it generates plus what is already in the conversation; RAG adds outside information to the mix.

    The simplest approach is to scan the conversation for keywords and add information based on them.

    So you ask “what is the capital of France?” and instead of the model answering (or hallucinating) on its own, your app could send the full Wikipedia page for France along with your question. The model will then almost always return the correct answer from the Wikipedia page and hallucinate far less. In practice it gets a lot more complicated, and I’m not up to date on recent RAG work, but the idea is the same.
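
    The keyword-scan approach above can be sketched in a few lines of Python. The `docs` store and the prompt wording here are made up for illustration; a real app would fetch pages from Wikipedia or a search index and send the assembled prompt to an actual model:

    ```python
    # Minimal keyword-based RAG sketch: scan the question for known keywords,
    # pull the matching reference text, and prepend it to the prompt.
    # The `docs` dict is a stand-in for a real document source (hypothetical data).

    docs = {
        "france": "France is a country in Western Europe. Its capital is Paris.",
        "germany": "Germany is a country in Central Europe. Its capital is Berlin.",
    }

    def build_rag_prompt(question: str) -> str:
        """Attach any stored documents whose keyword appears in the question."""
        context = [text for keyword, text in docs.items()
                   if keyword in question.lower()]
        if not context:
            return question  # nothing matched; fall back to the bare question
        return ("Answer using only the context below.\n\n"
                "Context:\n" + "\n".join(context) +
                "\n\nQuestion: " + question)

    # The resulting prompt (context + question) is what you would send to the model.
    prompt = build_rag_prompt("What is the capital of France?")
    ```

    Real systems usually replace the keyword match with embedding similarity search, but the structure (retrieve, then stuff into the prompt) is the same.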

  • They make the plan as a training exercise, not as something they actually think will happen. If you ask a new planner to plan a war with China, he’s probably already thought a lot about it, so you learn more about what plans he’s read than about what unique, out-of-the-box thinking he has. If you ask him to plan for a zombie outbreak, Idaho rebelling, or an alien invasion in Montana, you get original answers. So they have thousands of plans that no one ever reads again, but it’s less about competence and more “1999 Q4 training exercise.”