XB_0025 · Year 3 · Period 4 · 6 EC · Unknown · intelligent systems · Official study guide

Computational Intelligence

Optimization algorithms for AI: evolutionary algorithms, neural networks (deep learning), reinforcement learning, and neuroevolution.

Learning objectives

Knowledge and understanding of:
- Optimization techniques (Hill Climbing, Local Search, Gradient Descent, SGD)
- Evolutionary Algorithms
- Neural Networks (fully-connected, convolutional)
- Sampling methods (Metropolis-Hastings, Simulated Annealing)
- Reinforcement Learning (Q-learning)
- Neuroevolution (Neural Architecture Search)

Applying knowledge and understanding:
- How optimization algorithms work and where to use them
- How to formulate an evolutionary algorithm for a specific problem
- Which neural network fits a given problem best

Making judgments:
- Which optimization algorithm to use for a given problem

Communication skills:
- Presenting analysis in written form (a short report for each assignment)

In the course Computational Intelligence, we will focus mainly on the computational aspects of Artificial Intelligence, namely optimization algorithms for solving learning problems. Specifically, we will consider problems that cannot be solved using gradient information, due to their combinatorial character or the complexity of the objective function (e.g., non-differentiability, or a black-box objective function). Such problems appear throughout computer science and AI, for example in the identification of biological systems, task scheduling on chips, robotics, and finding optimal neural network architectures.

To tackle these problems, we will introduce several classes of algorithms: hill climbing and local search, and evolutionary algorithms. Additionally, we explain sampling methods (Markov Chain Monte Carlo) and population-based sampling methods, and indicate how they are linked to evolutionary algorithms.

In the second part of the course, we will discuss neural networks as the current state-of-the-art modeling paradigm. We will present the basic components of deep learning, such as different layers (e.g., linear layers, convolutional layers, pooling layers, recurrent layers) and non-linear activation functions (e.g., sigmoid, ReLU), and show how to use them for specific problems.

At the end of the course, we will touch upon an alternative approach to learning, Reinforcement Learning, and conclude with the recently revived field of neuroevolution, which aims to use evolutionary algorithms to train neural networks.
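To make the first family of methods concrete, here is a minimal sketch of hill climbing: repeatedly sample a nearby candidate and keep it only if it improves the objective. The function, step size, and iteration budget are illustrative choices, not part of the course material.

```python
import random

def hill_climb(f, x0, step=0.1, iters=1000, seed=0):
    """Hill climbing: accept a random neighbour only if it improves f (maximisation)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)  # sample a random neighbour
        fc = f(cand)
        if fc > fx:                          # greedy acceptance
            x, fx = cand, fc
    return x, fx

# Maximise f(x) = -(x - 2)^2; the optimum is at x = 2.
best_x, best_f = hill_climb(lambda x: -(x - 2) ** 2, x0=0.0)
```

Note the limitation this illustrates: the greedy acceptance rule gets stuck in the nearest local optimum, which is exactly what motivates the stochastic and population-based methods discussed next.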
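Evolutionary algorithms replace the single candidate of hill climbing with a population. The sketch below is a toy (mu + lambda)-style algorithm for the classic OneMax problem (maximise the number of ones in a bit string); the mutation rate and population size are arbitrary illustrative values.

```python
import random

def one_max(bits):
    """Fitness: the number of ones in the bit string."""
    return sum(bits)

def evolve(n_bits=20, pop_size=20, generations=100, p_mut=0.05, seed=1):
    """Minimal (mu + lambda)-style evolutionary algorithm for OneMax."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: each parent produces one offspring by bit-flip mutation.
        offspring = [[b ^ (rng.random() < p_mut) for b in ind] for ind in pop]
        # Selection: keep the best pop_size individuals from parents + offspring.
        pop = sorted(pop + offspring, key=one_max, reverse=True)[:pop_size]
    return max(pop, key=one_max)

best = evolve()
```

Formulating an EA for a specific problem, as the learning objectives require, amounts to choosing the representation (here: bit strings), the variation operators (here: bit-flip mutation), and the selection scheme (here: elitist truncation).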
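The link to sampling methods can be illustrated with a minimal Metropolis sampler (a special case of Metropolis-Hastings with a symmetric proposal): instead of greedily accepting improvements, worse candidates are accepted with a probability that depends on the density ratio. The target, proposal width, and sample count below are illustrative assumptions.

```python
import math
import random

def metropolis(log_p, x0, n_samples=5000, step=1.0, seed=2):
    """Metropolis sampler with a symmetric uniform proposal, working in log densities."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        cand = x + rng.uniform(-step, step)  # symmetric proposal
        # Accept with probability min(1, p(cand) / p(x)), computed in log space.
        if math.log(rng.random() + 1e-300) < log_p(cand) - log_p(x):
            x = cand
        samples.append(x)
    return samples

# Sample from a standard normal: log p(x) = -x^2 / 2 (up to an additive constant).
samples = metropolis(lambda x: -x * x / 2, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Simulated annealing is the optimisation counterpart of this idea: the same accept-worse-with-some-probability rule, with the acceptance probability gradually tightened by a cooling schedule.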
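The building blocks of the neural-network part (linear layers and non-linear activations) can be sketched in a few lines of pure Python. The 2-2-1 architecture and the hand-picked weights below are only illustrative; in practice these would be learned.

```python
import math

def linear(x, W, b):
    """Fully-connected layer: W[j] is the weight vector of output unit j."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bj for row, bj in zip(W, b)]

def relu(v):
    """Rectified linear unit, applied element-wise."""
    return [max(0.0, z) for z in v]

def sigmoid(v):
    """Logistic sigmoid, applied element-wise."""
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

# A tiny 2-2-1 network: ReLU hidden layer, sigmoid output.
W1, b1 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

def forward(x):
    return sigmoid(linear(relu(linear(x, W1, b1)), W2, b2))

y = forward([1.0, 0.0])
```

Convolutional, pooling, and recurrent layers covered in the course follow the same pattern of stacking parameterised transformations with non-linearities in between; they differ only in how the weights are shared and applied.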
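Finally, tabular Q-learning can be demonstrated on a toy chain environment: the agent starts at the left end, and only the rightmost state gives a reward. The environment, learning rate, and exploration rate are illustrative assumptions, not course-specified values.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=3):
    """Tabular Q-learning on a chain MDP: action 0 moves left, action 1 moves right,
    and reaching the last state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda u: Q[s][u])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
greedy_policy = [max((0, 1), key=lambda u: Q[s][u]) for s in range(len(Q))]
```

After training, the greedy policy moves right in every non-terminal state, which is the optimal behaviour in this environment.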

Assessment

The final grade is calculated from the final exam (50 points) and 5 individual practical assignments (10 points each, 50 points in total). To pass the course, students must obtain at least 25 points on the final exam and at least 55 points in total across the exam and assignments. The exam can be retaken (a resit). Solutions to the practical assignments must be submitted by the given deadlines; there is no resit option for the practical assignments.
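The passing rule combines two thresholds, which a quick check makes concrete. The example scores below are hypothetical:

```python
def passes(exam, assignments):
    """Pass requires at least 25/50 on the exam AND at least 55/100 overall."""
    return exam >= 25 and exam + sum(assignments) >= 55

# Hypothetical scores: 30 exam points plus assignments totalling 28 passes (58 >= 55);
# a perfect assignment score cannot compensate for an exam below 25 points.
ok = passes(30, [6, 5, 6, 5, 6])
not_ok = passes(20, [10, 10, 10, 10, 10])
```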

Teaching methods

Lectures and practical assignments.

Literature

The literature will be made available on Canvas.

Tags: optimization · neural-networks · constrained choice
