- Huffman Coding Tutorial
- Corrected exercises on control structures and data structures
- Corrected exercises on time complexity
- Corrected exercises about time complexity (en)
- Corrected exercises on recursive, terminal, multiple and cross algorithms
- Undergraduate-level corrected exercises on the divide-and-conquer paradigm
- Corrected exercises on algorithms in Divide and Conquer and Dynamic Programming
- Corrected exercises about divide & conquer and dynamic programming
- Branch and Bound exercise corrected step by step on whole number LP
- Corrected Constraint Logic Programming Exercises
- Corrected exercises on sorting algorithms
Algorithmics has been around for a very long time: it is a thought process described in a logical manner. Although this system seems essential today for many sciences, the systematization of algorithmic writing evolved over time and with technology. It began in the time of the Babylonians, and was concretely established by René Descartes in the general method proposed in the Discourse on the Method in 1637.
The modern algorithm, together with a so-called "program" language, made its debut shortly afterwards, in 1677. The term we use today, "algorithm", appeared two centuries later.
Algorithms of the 20th and 21st centuries are often based on formalisms such as the Turing machine, which provides a scientific framework for studying the properties of algorithms.
With computer science, algorithms developed considerably in the second half of the 20th century, and Donald Knuth, author of The Art of Computer Programming, laid a rigorous mathematical foundation for their analysis.
Control structures and data structures
This sequence of instructions is also called the execution flow of a program and can be represented with a control-flow graph or an automaton. Programming paradigms that rely on several processors then face the problem of assigning tasks to the processors.
Every programming language offers control structures and data structures. Algorithms are very often written in terms of widely used languages such as C, Java, etc.
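The two notions can be illustrated together in a few lines. Here is a minimal sketch in Python (rather than the C or Java mentioned above): a stack serves as the data structure, while a loop and conditionals provide the control structures that drive the execution flow. The function name and the parenthesis-matching task are chosen purely for illustration.

```python
def balanced_parentheses(text):
    """Return True if the parentheses in `text` are balanced."""
    stack = []                    # data structure: a stack (here, a Python list)
    for ch in text:               # control structure: a loop over the input
        if ch == "(":             # control structure: a conditional
            stack.append(ch)
        elif ch == ")":
            if not stack:         # closing parenthesis with no matching opening one
                return False
            stack.pop()
    return not stack              # balanced only if every "(" was closed

print(balanced_parentheses("(a(b)c)"))   # True
print(balanced_parentheses("(a(b)c"))    # False
```

Following the loop and conditionals for a given input traces exactly one path through the program's control-flow graph.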
Notions of termination, correction, completeness
Termination is the assurance that the algorithm terminates in finite time. Termination proofs usually exhibit a positive integer function that strictly decreases at each step/iteration of the algorithm.
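A classic illustration of such a strictly decreasing function (often called a variant) is Euclid's algorithm, sketched here in Python:

```python
def gcd(a, b):
    """Greatest common divisor of two non-negative integers (Euclid's algorithm)."""
    # Termination variant: the non-negative integer b strictly decreases at
    # each iteration (since a % b < b), so the loop ends in finite time.
    while b > 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))   # 6
```

Since a strictly decreasing sequence of non-negative integers cannot be infinite, the loop is guaranteed to stop.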
Given the guarantee that an algorithm will finish, the proof of correctness must provide the assurance that if the algorithm ends by giving a proposed solution, then this solution is correct (in the sense that it answers the problem posed).
The proof of completeness guarantees that, for a given problem space, the algorithm, if it finishes, will give a solution for each of the inputs of the algorithm.
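To make these three notions concrete, here is an illustrative Python sketch of binary search (the "dichotomy" mentioned in the complexity table below), with the termination, correctness, and completeness arguments placed as comments at the lines they justify:

```python
def binary_search(arr, target):
    """Return an index of `target` in the sorted list `arr`, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                 # termination: hi - lo strictly decreases
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid              # correctness: arr[mid] == target was just checked
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                       # completeness: every input receives an answer

# Checking correctness against the trivial specification on a small input space:
arr = [1, 3, 5, 7, 9]
for t in range(11):
    i = binary_search(arr, t)
    assert (i == -1 and t not in arr) or arr[i] == t
```

The loop in the check enumerates the whole (small) problem space, which is exactly what a completeness argument must cover.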
Complexity of an algorithm: the "natural" performance of an algorithm, independent of the environment in which it will be implemented.
Complexity classes are sets of problems which all have the same complexity according to a certain criterion. The best-known classes are defined on Turing machines (deterministic or non-deterministic — see the course on automata) with time and space constraints. There is also a relationship between algorithmic complexity and the energy cost of computation, known as Landauer's principle.
Here is a list of the most commonly encountered complexities:
| Complexity | Notation | Typical examples |
|---|---|---|
| Constant | O(1) | an assignment, an elementary calculation |
| Logarithmic | O(log n) | dichotomy, trees |
| Linear | O(n) | a loop |
| Quasi-linear | O(n log n) | divide and conquer, trees |
| Quadratic | O(n²) | nesting two loops |
| Cubic | O(n³) | nesting three loops |
| Polynomial | O(nᵏ) | nesting of loops, without knowledge of states |
| Quasi-polynomial | n^O(log n) | divide and conquer |
| Exponential | O(2ⁿ) | trees, brute force |
| Factorial | O(n!) | naive approach to combinatorial enumeration |
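The difference between the linear and quadratic rows can be observed directly by counting elementary operations. This illustrative Python sketch (the function names are invented for the example) shows that doubling the input size doubles the cost of one loop but quadruples the cost of two nested loops:

```python
def count_linear(n):
    """Count the elementary operations of a single loop: exactly n."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic(n):
    """Count the elementary operations of two nested loops: exactly n * n."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 20, 40):
    print(n, count_linear(n), count_quadratic(n))
# Doubling n doubles the linear count but quadruples the quadratic count.
```

The same counting experiment, applied to dichotomy or to nested loops of depth three, reproduces the logarithmic and cubic rows of the table.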