UPDATED: 6 MAY, 2022

GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative Filtering

by Yoonseok Yang, Kyu Seok Kim, Minsam Kim, Juneyoung Park

Applying pre-trained language models (PLM) to Knowledge Tracing (KT) is an important step toward improving student-assessment accuracy for cold-start items. Simply put, natural language processing (NLP) complements KT by analyzing questions for which no response data exist and on which the KT model has not been trained. The inherent hurdle, however, is that training such a model end-to-end (E2E) is expensive.


Knowledge Tracing models attempt to accurately determine a student's current knowledge state, such as whether the student is likely to answer a given question correctly.
GRadient Accumulation for Multi-modality in content-based collaborative filtering (GRAM) was introduced to reduce the time spent training such models. Empirical evidence suggests that GRAM drastically reduces GPU memory usage and improves training efficiency by up to 146× on several datasets.
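The core observation behind gradient accumulation here is that many interactions in a batch share the same item, so the expensive backward pass through the item encoder need not be repeated per interaction: gradients with respect to each unique item's embedding can be summed first, and the encoder back-propagated once per unique item. A minimal toy sketch of this idea, using a hypothetical linear encoder and squared-error loss (not the paper's actual code or architecture), shows that the two orderings produce identical gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))            # toy "encoder": linear map, embedding dim 4
item_feats = rng.normal(size=(2, 3))   # 2 unique items, feature dim 3
interactions = [0, 0, 1, 0, 1]         # item id per interaction; items repeat
targets = rng.normal(size=(len(interactions), 4))

def embed(W, x):
    return W @ x

# Naive E2E: backprop through the encoder once per interaction.
grad_naive = np.zeros_like(W)
for idx, t in zip(interactions, targets):
    e = embed(W, item_feats[idx])
    dL_de = 2 * (e - t)                         # grad of squared error w.r.t. embedding
    grad_naive += np.outer(dL_de, item_feats[idx])

# GRAM-style: accumulate embedding gradients per unique item first,
# then do one encoder backward per unique item.
acc = {i: np.zeros(4) for i in set(interactions)}
for idx, t in zip(interactions, targets):
    e = embed(W, item_feats[idx])
    acc[idx] += 2 * (e - t)

grad_gram = np.zeros_like(W)
for i, g in acc.items():
    grad_gram += np.outer(g, item_feats[i])

print(np.allclose(grad_naive, grad_gram))       # gradients match exactly
```

With a heavyweight PLM in place of the toy linear map, this reordering cuts the number of encoder backward passes from one per interaction to one per unique item, which is where the memory and speed savings come from.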


