Abstract:
Connectionist Temporal Classification (CTC) loss has become widely used in sequence modeling tasks such as Automatic Speech Recognition (ASR) and Handwritten Text Recognition (HTR) due to its ease of use. CTC itself imposes no architectural constraints, but it is commonly paired with recurrent models that predict labels conditioned on their history in order to relax the conditional independence assumption. However, recent CTC-based sequence models have prioritized speed by removing recurrent structures, thereby losing important context information. This thesis presents Contextualized Connectionist Temporal Classification (CCTC) loss, which induces prediction dependencies in non-recurrent, non-autoregressive neural networks for sequence modeling. CCTC allows the model to implicitly learn a language model by predicting neighboring labels through multi-task learning. Experiments on ASR and HTR tasks in two different languages show that CCTC models achieve relative improvements of 2.2–8.4% over CTC models without incurring extra inference costs.
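To make the multi-task idea concrete, the following is a minimal PyTorch sketch of a CCTC-style training objective, assuming that auxiliary left/right context heads are trained with cross-entropy against neighboring non-blank labels derived from the main head's greedy path, and that the context loss is added to the CTC loss with a weight `alpha`; the head names, encoder, and target construction here are illustrative assumptions, not the thesis's exact formulation. Only the main head is used at inference, which is why no extra decoding cost is incurred.

```python
# Hedged sketch of CCTC multi-task training (illustrative, not the thesis's exact recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

BLANK = 0  # assumed blank index

class CCTCModel(nn.Module):
    def __init__(self, n_feats, n_classes, hidden=256):
        super().__init__()
        # Non-recurrent encoder: position-wise layers only (no LSTM/GRU).
        self.encoder = nn.Sequential(
            nn.Linear(n_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.main_head = nn.Linear(hidden, n_classes)   # current label (CTC)
        self.left_head = nn.Linear(hidden, n_classes)   # previous non-blank label
        self.right_head = nn.Linear(hidden, n_classes)  # next non-blank label

    def forward(self, x):  # x: (batch, time, n_feats)
        h = self.encoder(x)
        return self.main_head(h), self.left_head(h), self.right_head(h)

def neighbor_targets(path, blank=BLANK):
    """Most recent / upcoming distinct non-blank label around each frame of a greedy path."""
    T = path.numel()
    left = torch.full((T,), blank, dtype=torch.long, device=path.device)
    right = torch.full((T,), blank, dtype=torch.long, device=path.device)
    last = blank
    for t in range(T):
        left[t] = last
        if path[t] != blank and path[t] != last:
            last = path[t].item()
    nxt = blank
    for t in range(T - 1, -1, -1):
        right[t] = nxt
        if path[t] != blank and path[t] != nxt:
            nxt = path[t].item()
    return left, right

def cctc_loss(model, x, targets, in_lens, tgt_lens, alpha=0.2):
    main, left, right = model(x)                              # (B, T, C) each
    log_probs = F.log_softmax(main, dim=-1).transpose(0, 1)   # (T, B, C) for ctc_loss
    ctc = F.ctc_loss(log_probs, targets, in_lens, tgt_lens, blank=BLANK)

    # Context-head targets come from the main head's greedy path (no gradient).
    with torch.no_grad():
        paths = main.argmax(dim=-1)                           # (B, T)
    ce = 0.0
    for b in range(paths.size(0)):
        l_tgt, r_tgt = neighbor_targets(paths[b])
        ce = ce + F.cross_entropy(left[b], l_tgt) + F.cross_entropy(right[b], r_tgt)
    ce = ce / paths.size(0)
    return ctc + alpha * ce                                   # multi-task objective
```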