
2024.09.23 – Stream Notes

  • Stream Notes
    • Catch up
      • Helped family move! Had to focus on that for a while
      • Haven’t done much coding due to that and interview stuff
    • Doing
      • Since we kinda finished FastAPI, figured we could try out the Tencent Persona-Hub repo
  • A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
    • New Tasks without Extensive Training
      • Chain of Thought Prompting
        • CoT Prompting can guide LLMs through a logical reasoning chain
          • E.g. show reasoning process and final answer for multi-step math word problem and mimic how humans break down problems into logical intermediate steps
        • There’s also Contrastive Chain of Thought prompting, which pairs valid and invalid reasoning examples so the model can learn from mistakes, though the literature on it is noted as limited
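As a concrete illustration of few-shot CoT, here is a minimal Python sketch built around the classic tennis-ball word problem; the `build_cot_prompt` helper and exemplar wording are my own assumptions, not from the survey:

```python
# Hypothetical sketch: a few-shot CoT prompt shows the reasoning process and
# final answer for one worked problem, so the model mimics how humans break a
# question into intermediate steps.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model imitates step-by-step reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have now?"
)
```

The trailing `A:` leaves the completion open so the model continues with its own reasoning chain.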
      • Automatic Chain of Thought Prompting
        • An enhancement of the above that automatically constructs reasoning-chain demonstrations, improving few-shot learning
      • Self-Consistency
        • Decoding strategy
        • Generates diverse reasoning chains by sampling from the language model’s decoder, then selects the most consistent final answer; useful for complex reasoning tasks with multiple valid paths
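A minimal sketch of the sampling-then-voting idea; the `sample_chain` callable is an assumed stand-in for a temperature-sampled LLM call, and the answer-extraction regex is illustrative:

```python
import itertools
import re
from collections import Counter

def self_consistency(sample_chain, question: str, n: int = 5) -> str:
    """Majority-vote over the final answers of n independently sampled chains.

    sample_chain(question) -> str stands in for a temperature-sampled LLM
    call, injected here so the sketch stays self-contained.
    """
    answers = []
    for _ in range(n):
        chain = sample_chain(question)
        m = re.search(r"answer is ([^\s.]+)", chain)
        if m:
            answers.append(m.group(1))
    return Counter(answers).most_common(1)[0][0]

# A fake sampler that cycles through three chains, two of which agree:
_chains = itertools.cycle([
    "5 + 4 = 9, so the answer is 9.",
    "3 * 3 = 9, thus the answer is 9.",
    "off by one: the answer is 8.",
])
majority = self_consistency(lambda q: next(_chains), "dummy question", n=3)
```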
      • Graph-of-Thought Prompting
        • Framework permits dynamic interplay, backtracking, and evaluation of ideas
      • System 2 Attention Prompting
        • Leverages the reasoning abilities of LLMs in a two-step process to improve attention and response quality: first regenerate the context to filter out irrelevant material, then generate the response from the refined context
          • Not sure why, but it makes me think of when I will try to run something back after a meeting to make sure I understand
            • Actually this might be called Rephrase and Respond Prompting
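The two-step flow can be sketched like this; `llm(prompt) -> str` is a hypothetical completion stub and the prompt wording is my assumption:

```python
def system2_attention(llm, context: str, question: str) -> str:
    """System 2 Attention sketch: (1) regenerate the context, keeping only
    material relevant to the question; (2) answer from the refined context.
    llm(prompt) -> str is a hypothetical completion stub."""
    refined = llm(
        "Rewrite the context, keeping only the parts relevant to the question.\n"
        f"Context: {context}\nQuestion: {question}\nRelevant context:"
    )
    return llm(f"Context: {refined}\nQuestion: {question}\nAnswer:")
```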
    • Reduce Hallucination
      • Retrieval Augmented Generation
        • Analyzes user input, crafts a targeted query, and scours a pre-built knowledge base for relevant resources
        • Retrieved snippets are incorporated into the original prompt
        • Should allow the LLM to generate creative yet factually accurate responses
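The retrieve-then-augment flow in miniature; the word-overlap scoring is a toy stand-in for a real embedding/vector search, and both function names are my own:

```python
import re

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query; a toy stand-in
    for a real embedding or vector-store search."""
    q_words = set(re.findall(r"\w+", query.lower()))
    return sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(re.findall(r"\w+", doc.lower()))),
        reverse=True,
    )[:k]

def build_rag_prompt(query: str, knowledge_base: list[str]) -> str:
    """Incorporate the retrieved snippets into the prompt sent to the LLM."""
    snippets = "\n".join(retrieve(query, knowledge_base))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{snippets}\n\nQuestion: {query}\nAnswer:"
    )
```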
      • Reason and Act (ReAct) Prompting
        • Concurrently generates reasoning traces and task-specific actions
        • Can handle multiple languages
        • Can use Wikipedia API to help control hallucination and error propagation
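A sketch of the ReAct loop, where reasoning traces and tool actions interleave; the `Search[...]`/`Finish[...]` action syntax follows the ReAct paper's style, but the stub `llm` and `tools` interfaces are assumptions:

```python
import re

def react_loop(llm, tools: dict, question: str, max_steps: int = 5) -> str:
    """Interleave reasoning traces with task-specific actions: the model
    emits Thought/Action text; actions like Search[query] are executed
    against the tools dict (e.g. a Wikipedia lookup) and their results are
    appended to the transcript as Observations.
    llm(transcript) -> str is a hypothetical completion stub."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        done = re.search(r"Finish\[(.*)\]", step)
        if done:
            return done.group(1)
        act = re.search(r"(\w+)\[(.*)\]", step)
        if act and act.group(1) in tools:
            observation = tools[act.group(1)](act.group(2))
            transcript += f"Observation: {observation}\n"
    return ""  # ran out of steps without a Finish action
```

Grounding each step in an observation from a real tool is what helps control hallucination and error propagation.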
      • Chain-of-Note Prompting
        • Can handle noisy, irrelevant documents and address unknown scenarios
      • Chain-of-Knowledge Prompting
        • Tries to break down tasks into well-coordinated steps
          • Reasoning preparation stage
          • Dynamic knowledge adaptation phase
    • Knowledge-Based Reasoning and Generation
      • Automatic Reasoning and Tool-Use
    • Metacognition and Self-Reflection
      • Take a Step Back Prompting
        • Extract high-level concepts
          • Wonder if this can be used to get “difficulties” or as an alternate for topic modeling/clustering
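The abstraction-first flow might look like this; `llm(prompt) -> str` is a hypothetical stub and the prompt wording is my own:

```python
def step_back(llm, question: str) -> str:
    """Take a Step Back sketch: first extract the high-level concept or
    principle behind the question, then answer conditioned on that
    abstraction. llm(prompt) -> str is a hypothetical completion stub."""
    concept = llm(
        "What high-level concept or principle underlies this question?\n"
        f"Question: {question}\nConcept:"
    )
    return llm(f"Principle: {concept}\nQuestion: {question}\nAnswer:")
```

The extracted concepts themselves could be collected across a corpus, which is what makes this interesting as a rough alternative to topic modeling or clustering.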
  • A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks
    • Ensemble Refinement
      • Builds on CoT and Self-Consistency
        • Given a few-shot prompt and a query, the LLM produces multiple generations by varying the temperature
        • The LLM is then conditioned on the original prompt, the query, and the concatenated generations from the previous stage to generate a better explanation and answer
          • This is repeated multiple times
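The two-stage loop above can be sketched as follows; the `llm(text, temperature)` stub, temperature values, and prompt wording are assumptions, not the survey's actual implementation:

```python
def ensemble_refinement(llm, prompt: str, query: str,
                        temperatures=(0.3, 0.7, 1.0), rounds: int = 2) -> str:
    """Ensemble Refinement sketch. llm(text, temperature) -> str is a
    hypothetical stub for a sampling-capable completion API."""
    answer = ""
    for _ in range(rounds):
        # Stage 1: multiple generations from the same few-shot prompt,
        # diversified by varying the temperature.
        generations = [llm(f"{prompt}\n{query}", temperature=t)
                       for t in temperatures]
        # Stage 2: condition on the original prompt, the query, and the
        # concatenated generations to produce a refined explanation/answer.
        answer = llm(
            f"{prompt}\n{query}\n"
            "Candidate explanations:\n" + "\n".join(generations) +
            "\nRefined explanation and answer:",
            temperature=0.0,
        )
    return answer
```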
