queirozf.com

Entries by tag: alignment


Paper Summary: Self-Instruct: Aligning Language Models with Self-Generated Instructions  03 Jun 2023    paper-summary language-modeling alignment
Summary of the 2022 article "Self-Instruct: Aligning Language Models with Self-Generated Instructions" by Wang et al. Read More ›

Paper Summary: Training language models to follow instructions with human feedback  05 Feb 2023    paper-summary language-models alignment
Summary of the 2022 article "Training language models to follow instructions with human feedback" by Ouyang et al., also known as the InstructGPT paper. Read More ›