Pandey, Vishal (2018) Language Model For Sanskrit. MTech thesis.
Restricted to Repository staff only
Natural-language processing (NLP) is an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process large amounts of natural-language data fruitfully. Common challenges in NLP include speech recognition, natural-language understanding, and language modeling. Language modeling is the task of predicting the next word given a preceding sequence of words; it is one of the fundamental tasks in NLP. This report addresses language modeling for Sanskrit, a task that has already been carried out for most major languages but, to the best of my knowledge, not yet for Sanskrit. A deep neural architecture based on the LSTM (Long Short-Term Memory) network, specifically the AWD-LSTM proposed by Stephen Merity, has been used for the task. Several further techniques have then been applied to improve the model, including the cyclical learning rates proposed by Leslie Smith and the cache pointer mechanism proposed by Grave et al. Moreover, the word-embedding vectors obtained from the language model have been studied. The model developed here thus serves two purposes for future studies: it can be used for transfer learning, either to build a language model on new data sets or to fine-tune a classifier for various text-classification tasks.
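The triangular cyclical learning rate policy mentioned above can be sketched as follows. This is a minimal illustration of Smith's schedule, not code from the thesis; the function name `triangular_clr` and the example hyperparameter values are my own.

```python
import math

def triangular_clr(iteration, step_size, base_lr, max_lr):
    """Triangular cyclical learning rate (Smith's policy).

    The rate climbs linearly from base_lr to max_lr over step_size
    iterations, then falls back to base_lr, repeating every
    2 * step_size iterations.
    """
    # Which cycle (of length 2 * step_size) we are in, counting from 1.
    cycle = math.floor(1 + iteration / (2 * step_size))
    # Distance from the peak of the current cycle, scaled to [0, 1].
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# Illustrative values: rate is base_lr at the cycle boundaries and
# max_lr at the midpoint of each half-cycle.
print(triangular_clr(0, 100, 0.001, 0.01))    # start of cycle
print(triangular_clr(100, 100, 0.001, 0.01))  # peak
print(triangular_clr(200, 100, 0.001, 0.01))  # end of cycle
```

In training, this function would be called once per mini-batch to set the optimizer's learning rate before each update step.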
Item Type: Thesis (MTech)
Uncontrolled Keywords: Artificial intelligence; Sanskrit language
Subjects: Engineering and Technology > Computer and Information Science > Data Mining
Divisions: Engineering and Technology > Department of Computer Science Engineering
Deposited By: IR Staff BPCL
Deposited On: 12 Mar 2019 14:38
Last Modified: 12 Mar 2019 14:38
Supervisor(s): Sa, Pankaj K