🚀 About Me


📫 E-mail : [email protected]

📥 Github : 🖱Github

💻 Velog : 📎Velog

📞 Phone : 010-4909-4946

💡 Strength:

🎓 Education

Korea Advanced Institute of Science and Technology (KAIST), M.S.

Lab: Xfact Lab

2024.03 -

Learning to Insert [PAUSE] Tokens for Better Reasoning

Continuation of "Think before you speak: Training Language Models With Pause Tokens" (ICLR 2024, Google Research)

This ongoing research adopts a methodology in which, when generating the (K+1)-th token, the model attends not only to the previous K steps but also to M appended dummy [PAUSE] tokens, increasing the computation to K+M steps.

Unlike the original paper's approach of appending 10 [PAUSE] tokens to the suffix of the prompt, this study focuses on analyzing the effect of the noise tokens by measuring likelihood and attention flows on reasoning tasks.

Moreover, this research employs fine-tuning instead of the pre-training used in previous studies, aiming to improve both performance and efficiency.
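The pause-token mechanism described above can be sketched roughly as follows. This is a minimal plain-Python illustration; the [PAUSE] id and the prompt ids are made-up placeholders, not the paper's actual tokenizer values.

```python
PAUSE_ID = 50257  # hypothetical vocabulary id reserved for the dummy [PAUSE] token

def with_pause_tokens(input_ids, m):
    """Append m dummy [PAUSE] tokens to the prompt suffix so the model runs
    K + m computation steps before decoding the next real token."""
    return input_ids + [PAUSE_ID] * m

prompt = [101, 2023, 2003]              # K = 3 prompt tokens (made-up ids)
padded = with_pause_tokens(prompt, 10)  # the original paper appends 10 [PAUSE] tokens
assert len(padded) == len(prompt) + 10
# At inference, outputs produced at the [PAUSE] positions are discarded;
# only the token generated after the final [PAUSE] is kept.
```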

Korea Univ.

2017.03 - 2024.02

GPA. 4.4 / 4.5 (98.9/100)

Major GPA. 4.35 / 4.5

Scholarship.

Received Korea University’s Best Student Award every semester.

🏢 Experience

LDI Lab. SNU.

Research Intern

~ 2023.08

Code generation with analysis of partial correctness, using variable states as evidence

Extended the idea of the <Learning from Self-Sampled Correct and Partially-Correct Programs> paper and generalized it to general-purpose code.

Conducted research on introducing blocking techniques to derive variable states from general code, and implemented this as a verification tool for training the model.

Among the incorrect codes generated by the model, those that produce the correct intermediate variable states are treated as evidence of partial correctness and reused for resampling.
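That filtering step can be sketched as follows; the function names and the trace format are illustrative assumptions, and the execution/tracing tooling that records variable states is not shown.

```python
def matches_intermediate_state(trace, reference_state):
    """True if any variable state recorded during execution equals the reference."""
    return any(state == reference_state for state in trace)

def partially_correct(candidates, reference_state):
    """Keep wrong-answer codes whose execution trace still reaches the
    reference intermediate state; these feed the resampling pool."""
    return [code for code, trace in candidates
            if matches_intermediate_state(trace, reference_state)]

# Toy example: two candidate codes with their recorded variable-state traces.
candidates = [
    ("total = 0 ...", [{"total": 0}, {"total": 6}]),  # reaches the right state
    ("total = 1 ...", [{"total": 1}, {"total": 7}]),  # never does
]
pool = partially_correct(candidates, {"total": 6})
assert pool == ["total = 0 ..."]
```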


LG AI Research

Language Lab Text Analytics Squad

Internship

2022.08 - 2023.02

Main: AICC (A.I. Contact Center)

Classified consultation types into 127 categories.

Classified VOC types into 320 categories based on STT transcripts.

Performed sentiment analysis on a 5-level scale.

Assist: Generation Span Prediction and NER

Extracted noun phrases with a generation model and used them as prompts for NER.
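One plausible shape of that pipeline is sketched below; the prompt template and function name are illustrative assumptions, not the actual system.

```python
def build_ner_prompt(sentence, noun_phrases):
    """Format noun phrases extracted by the generation model as candidate
    spans for an NER model (hypothetical prompt template)."""
    spans = ", ".join(f'"{np}"' for np in noun_phrases)
    return (f"Sentence: {sentence}\n"
            f"Candidate spans: {spans}\n"
            "Label each candidate span with its entity type:")

prompt = build_ner_prompt("Kim works at LG AI Research.",
                          ["Kim", "LG AI Research"])
assert "Candidate spans" in prompt
```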

Assist: Ko-BERT PLM based on QA

Korean-version PLM experiment based on <From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader>.

✏️ Extracurricular

Google Developer Student Club Korea Univ.(KUGODS)

2023.02 - 2023.06

Participated in the Google Solution Challenge as a core member.


NAVER AI TECH Boostcamp

NLP Track

2022.03 - 2022.06

Learned the essentials for an ML Engineer.

Participated in 3 NLP competitions/projects and 1 data production project.


Korea Univ. Big Data Academic Society(KUBIG)

Manager and Mentor

2022.06 - 2022.12

Led a paper review study on Computer Vision and participated in DACON competitions as a team.


Korea Univ. Computer Club

Manager and Mentor

2021.02 - 2023.06