Study-Unit Description

CODE LLT3511

 
TITLE Deep Learning Approaches to Natural Language Processing

 
UM LEVEL 03 - Years 2, 3, 4 in Modular Undergraduate Course

 
MQF LEVEL 6

 
ECTS CREDITS 5

 
DEPARTMENT Institute of Linguistics and Language Technology

 
DESCRIPTION This study-unit focuses on techniques for solving Natural Language Processing tasks through the design of neural architectures.

The contents of the study-unit can be divided into two main components:

A. Methods:

- The relationship and differences between linear, log-linear and neural models;
- Feed-forward neural models for NLP;
- Convolutional neural network models for NLP;
- Recurrent networks (including Long Short-Term Memory and Gated Recurrent Unit models);
- Encoder-decoder architectures and attention;
- Attention-based (Transformer) networks.
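As an informal illustration of the first methodological topic above (the weights and features below are made up for illustration, not course material), the progression from linear to log-linear to neural scoring can be sketched in plain Python:

```python
import math

def linear_score(weights, features, bias):
    # Linear model: the score is a weighted sum of the features.
    return sum(w * f for w, f in zip(weights, features)) + bias

def softmax(scores):
    # Log-linear models normalise exponentiated linear scores
    # into a probability distribution over classes.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def neural_score(w1, b1, w2, b2, features):
    # A feed-forward neural model inserts a learned nonlinear
    # hidden layer between the input features and the output score.
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return linear_score(w2, hidden, b2)

features = [1.0, 2.0]
scores = [linear_score([0.5, -0.2], features, 0.1),
          linear_score([-0.3, 0.4], features, 0.0)]
probs = softmax(scores)
ns = neural_score([[0.1, 0.2], [0.3, -0.1]], [0.0, 0.0],
                  [1.0, -1.0], 0.0, features)
```

The only difference between the linear and log-linear model here is the softmax normalisation; the neural model differs from both by the nonlinear hidden layer.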

B. NLP Challenges:

The above core methodological topics are in turn applied to a variety of natural language analysis and generation tasks, including, but not limited to:

- Language models;
- Lexical semantics and vector-space semantic models;
- Text classification, including sentiment analysis and topic classification;
- Sequence classification, such as part-of-speech tagging;
- Conditioned generation, including data-to-text generation and machine translation.
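As a toy illustration of the first of these tasks (the two-sentence corpus below is a hypothetical example, not course material), a maximum-likelihood bigram language model fits in a few lines of Python:

```python
from collections import Counter

# Tiny hypothetical corpus with sentence-boundary markers.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"]]

bigrams = Counter()
unigrams = Counter()
for sentence in corpus:
    for prev, word in zip(sentence, sentence[1:]):
        bigrams[(prev, word)] += 1
        unigrams[prev] += 1

def bigram_prob(prev, word):
    # Maximum-likelihood estimate of P(word | prev):
    # count of the bigram divided by the count of the history.
    return bigrams[(prev, word)] / unigrams[prev]
```

Neural language models replace these count-based estimates with probabilities computed by a trained network, but the conditional-probability formulation is the same.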

The study-unit will focus on the use of the Python programming language with PyTorch as a deep learning library. Practical exercises will be provided at the end of each lecture.
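As an indicative sketch only (the class name, layer sizes, and random data below are illustrative assumptions, not the course's actual exercises), a feed-forward text classifier of the kind covered might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class BagOfWordsClassifier(nn.Module):
    # Hypothetical feed-forward text classifier: embeds word indices,
    # averages the embeddings, and maps the result to class scores.
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(embed_dim, embed_dim)
        self.output = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, sequence_length) integer tensor.
        averaged = self.embedding(token_ids).mean(dim=1)
        return self.output(torch.tanh(self.hidden(averaged)))

torch.manual_seed(0)
model = BagOfWordsClassifier(vocab_size=100, embed_dim=16, num_classes=3)
batch = torch.randint(0, 100, (4, 7))  # 4 sentences of 7 token ids each
logits = model(batch)                  # class scores, shape (4, 3)
```

The recurrent and Transformer architectures covered later in the unit replace the averaging step with components that are sensitive to word order.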

Study-Unit Aims:

Contemporary NLP applications increasingly rely on the design of architectures with neural components, which learn from large data sets in a supervised or self-supervised way. Accordingly, this study-unit aims to:

- Give students a thorough grounding in the formal and conceptual foundations of neural methods;
- Show how these methods are applied in the construction of systems for the analysis and generation of Natural Language;
- Pave the way for other study units in speech processing and multilingual computing.

Learning Outcomes:

1. Knowledge & Understanding:
By the end of the study-unit the student will be able to:

- Identify the appropriate neural architecture to address a particular NLP problem;
- Distinguish the strengths and weaknesses of recurrent models and attention-based models;
- Design architectures composed of different components for training in an end-to-end fashion for Natural Language Understanding and Natural Language Generation tasks.

2. Skills:
By the end of the study-unit the student will be able to:

- Implement a machine-learning experiment to address a specific NLP problem;
- Use Python libraries to implement deep learning models that solve classification, tagging, and generation problems in NLP.

Main Text/s and any supplementary readings:

Main Texts:

- D. Jurafsky and J. H. Martin (2009). Speech and Language Processing (2nd ed.). New York: Prentice Hall. [The 2nd edition is available in the library; the 3rd edition is available online at https://web.stanford.edu/~jurafsky/slp3/.]
[Students should always consult the third edition unless otherwise specified.]

 
ADDITIONAL NOTES Pre-requisite Qualification: Prior exposure to machine learning methods.

 
STUDY-UNIT TYPE Lecture and Practicum

 
METHOD OF ASSESSMENT
Assessment Component/s    Assessment Due    Sept. Asst Session    Weighting
Assignment                SEM1              Yes                   100%

 
LECTURER/S Marc Tanti

 

 
The University makes every effort to ensure that the published Courses Plans, Programmes of Study and Study-Unit information are complete and up-to-date at the time of publication. The University reserves the right to make changes in case errors are detected after publication.
The availability of optional units may be subject to timetabling constraints.
Units not attracting a sufficient number of registrations may be withdrawn without notice.
It should be noted that all the information in the description above applies to study-units available during the academic year 2024/5. It may be subject to change in subsequent years.
