The shift from paper-based to digital assessment formats in education raises fundamental questions about the comparability of reading scores across modes and over time. International large-scale assessments increasingly rely on digital delivery while continuing to report trends on a common scale. It is therefore essential to establish whether paper-based and digital reading assessments support equivalent score interpretations and whether trend comparisons across cycles remain valid. This dissertation examines the comparability and validity of paper-based and digital reading assessments using data from PIRLS 2016 and its digital extension, ePIRLS.