Model-Related Topics

Models

LLM Trends since LLaMA

Encoder-Decoder vs Decoder only

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Sentence-BERT

GPT

XLNet

ALBERT

EXAONE Fine-tuning

Methodology

Contrastive Learning
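A minimal pure-Python sketch of the InfoNCE objective commonly used in contrastive learning (the function name, dot-product similarity, and temperature value are illustrative assumptions, not from the source):

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push negatives away.
    Vectors are plain lists of floats; similarity is the dot product."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Scaled similarities: positive first, then all negatives
    logits = [dot(anchor, positive) / temperature]
    logits += [dot(anchor, n) / temperature for n in negatives]

    # Cross-entropy with the positive at index 0 (numerically stable log-sum-exp)
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[0]
```

The loss is small when the anchor is most similar to its positive, and grows when a negative is closer than the positive.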

Self-Training Methods

Poly-encoder

DAPT & TAPT (Domain-/Task-Adaptive Pretraining)

Inductive Bias

Focal Loss
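A minimal sketch of the binary focal loss from Lin et al., which down-weights easy examples via the (1 - p_t)^gamma factor (the function name and default gamma/alpha values follow the paper's common settings but are assumptions here):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single example.
    p: predicted probability of the positive class, y: label in {0, 1}."""
    # p_t is the probability the model assigns to the true class
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss on easy, well-classified examples
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; larger gamma focuses training on hard misclassified examples.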

VAE

Reparameterization Trick
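A minimal pure-Python sketch of the reparameterization trick used in VAEs: instead of sampling z directly from N(mu, sigma^2), sample eps from N(0, 1) and compute z = mu + sigma * eps, so gradients can flow through mu and log_var (the function name and use of the stdlib `random` module are illustrative assumptions):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1).
    All randomness lives in eps, keeping mu and log_var differentiable."""
    sigma = math.exp(0.5 * log_var)  # log_var = log(sigma^2)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps
```

In a real VAE, mu and log_var come from the encoder network and this sampling step sits between encoder and decoder.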

Prompt Tuning