Author: Ghasemi, Hojat
Date accessioned: 2023-01-25
Date available: 2023-01-25
Date issued: 2022-01-13
Handle: https://laurentian.scholaris.ca/handle/10219/3980

Abstract: Recent transformer-based language models pre-trained on huge text corpora have shown great success in performing downstream Natural Language Processing (NLP) tasks such as text summarization when fine-tuned on smaller labelled datasets. However, the impact of fine-tuning on improving the performance of pre-trained language models in summarizing movie storylines has not been explored. Moreover, there is a lack of extensive labelled datasets containing movie storylines that would allow pre-trained language models to delve deeper into this realm. In this research work, we propose a novel labelled dataset containing IMDB movie storylines alongside their summaries to teach pre-trained language models to perform text summarization on movie storylines. Furthermore, we showcase the potential of this dataset by using it to fine-tune a T5-base model. Our results show that fine-tuning a T5-base model on this dataset can significantly improve its performance in summarizing movie storylines.

Language: en
Subjects: Fine-tuning; language model; text summarization; transfer learning; transformer; pre-training; abstractive summarization; IMDB; movie storyline; natural language processing; attention-based models; deep learning
Title: Fine-tuning a general transformer model on story-lines of IMDB movies database
Type: Thesis
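The record does not include the thesis code. The sketch below is a minimal illustration, under assumptions, of the kind of fine-tuning the abstract describes: adapting t5-base to abstractive summarization with the Hugging Face transformers and datasets libraries. The file names, column names ("storyline", "summary"), and hyperparameters are hypothetical and not taken from the thesis.

```python
# Minimal sketch: fine-tuning t5-base for abstractive summarization.
# Assumes a hypothetical CSV dataset with "storyline" and "summary" columns;
# these names and all hyperparameters are illustrative, not from the thesis.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

dataset = load_dataset(
    "csv",
    data_files={"train": "storylines_train.csv",
                "validation": "storylines_val.csv"},
)

def preprocess(batch):
    # T5 is a text-to-text model, so a task prefix tells it what to do.
    inputs = ["summarize: " + s for s in batch["storyline"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"],
                       max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

# Pads inputs and labels dynamically per batch for seq2seq training.
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-movie-storylines",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
    tokenizer=tokenizer,
)
trainer.train()
```

After training, the fine-tuned model can summarize a new storyline via `model.generate` on a "summarize: "-prefixed input; summarization quality is conventionally evaluated with ROUGE scores against the reference summaries, though the abstract does not state which metric the thesis used.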