Algorithmics 101

Algorithmics

Algorithmics has always been around: it is a thought process described in a logical way. Although this system seems essential today for many sciences, the systematization of algorithmic writing has evolved along with time and technology. It began in the time of the Babylonians and was concretely established by René Descartes in the general method proposed in the Discourse on the Method in 1637.

Modern algorithmics, including a so-called "program" language, made its debut shortly afterwards, in 1677. The term we use today, "algorithm", appeared two centuries later.

The algorithmics of the 20th and 21st centuries is often based on formalisms such as the Turing machine, which provides a scientific framework for studying the properties of algorithms.
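As a rough illustration (not from the original text), here is a minimal sketch of a one-tape Turing machine in Python; the transition table, which increments a binary number, is a hypothetical example chosen for brevity.

```python
# A minimal sketch of a deterministic Turing machine. The transition
# table below is a hypothetical example: it increments a binary number.
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Hypothetical rules: walk to the rightmost bit, then add 1 with carry.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_turing_machine("1011", INCREMENT))  # -> 1100
```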

Along with computers, algorithmics developed considerably in the second half of the 20th century. Knuth (1969), author of The Art of Computer Programming, defines five properties that constitute the prerequisites of an algorithm (illustrated in the sketch after the list):

1. Finiteness: an algorithm must terminate after a finite number of steps.

2. Definiteness: each step of an algorithm must be precisely and unambiguously defined.

3. Inputs: quantities that are defined before the algorithm begins.

4. Outputs: quantities that have a specified relationship to the inputs.

5. Effectiveness: every operation the algorithm performs must be basic enough that it could, in principle, be carried out in a finite amount of time by a person using pencil and paper.
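To make these properties concrete, here is a minimal sketch (an example of my own choosing, not Knuth's) that maps each of the five onto a simple algorithm, finding the maximum of a list:

```python
# Each comment points to one of Knuth's five properties.
def maximum(values: list[int]) -> int:
    # 3. Inputs: a non-empty list of integers, defined before execution.
    best = values[0]
    for v in values[1:]:      # 1. Finiteness: one step per element.
        if v > best:          # 2. Definiteness: each step is unambiguous.
            best = v          # 5. Effectiveness: only basic comparisons.
    return best               # 4. Output: a value in a specified relation
                              #    to the inputs (the largest element).

print(maximum([3, 1, 4, 1, 5]))  # -> 5
```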

Markov (1967) defines an algorithm as "an exact prescription, defining a computational process that leads from initial data to the final result".

This implies the following characteristics:

• Precision of the prescription, leaving no room for arbitrariness, and its universal comprehensibility.
• The possibility of starting from initial data, which may vary within given limits.
• The orientation of the algorithm toward a desired result, which must be obtained at the end of the execution from the initial data: this is the conclusiveness property of the algorithm.

For Minsky (1967) the definition is even more direct:
“A set of rules that tell us, from moment to moment, precisely how to behave”

And finally, for Stone (1972):
"An algorithm is a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and the sequence terminates in a finite time."

He further specifies that:
"for those who follow the rules of the algorithm, the rules must be formulated so that they can be followed mechanically, without having to think"

For example:
If the instructions require extracting a square root and are to be carried out by someone who knows arithmetic but does not know how to extract a square root, then a set of rules for extracting a square root must also be provided in order to satisfy the definition of an algorithm.
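Such a "set of rules" might look like the following minimal sketch, which uses Heron's method (my choice, not specified in the text) to reduce square-root extraction to arithmetic the person already knows: addition, division and comparison. The tolerance value is an assumption.

```python
# Heron's method: repeatedly average the guess with x / guess.
def square_root(x: float, tolerance: float = 1e-10) -> float:
    guess = x if x > 1 else 1.0   # any positive starting guess works
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2   # only basic arithmetic rules
    return guess

print(square_root(2.0))  # -> 1.4142135623...
```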

Many formal or theoretical tools have been developed to describe algorithms, study them, express their qualities, and compare them:

  • algorithmic structures: control structures and data structures.
  • the notions of correctness, completeness and termination.
  • the theory of complexity.

Transcribing an algorithm

Natural language: verbose and ambiguous; rarely used for complex algorithms, but not prohibited.
Pseudocode: a structured language, but with no strictly defined rules; independent of any implementation.
Flowchart: unambiguous, very readable; written using standardized symbols.
Programming languages: normally intended to transcribe flowcharts into a form understandable by computers; they can also be used to describe algorithms.
Algorithmic languages: languages using pseudocode with fixed rules, understandable by a computer, allowing both the description and the execution of algorithms.
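As a small sketch (my own example), here is the same algorithm transcribed two ways: first in natural language, then in a programming language.

```python
# Natural language: "go through the list; each time you see the target,
# count one; at the end, report the count."
# The same algorithm, transcribed in a programming language:
def count_occurrences(items: list, target) -> int:
    count = 0
    for item in items:        # go through the list
        if item == target:    # each time you see the target...
            count += 1        # ...count one
    return count              # report the count

print(count_occurrences(["a", "b", "a", "c"], "a"))  # -> 2
```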

Control structures and data structures

A control structure is a command that controls the order in which the various instructions of an algorithm or computer program are executed. At the machine level, the processor keeps track of the address of the next instruction to execute; control structures such as blocks, loops, conditionals (if, etc.) and subroutine calls alter this default sequence.

This sequence of instructions is also called the execution flow of a program, and it can be represented using a control-flow graph or an automaton. Programming paradigms that rely on several processors are then exposed to the problem of assigning tasks to the processors.
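Here is a minimal sketch (an example of my own choosing) showing the classic control structures that alter the default top-to-bottom execution flow:

```python
def classify(numbers: list[int]) -> list[str]:  # subroutine: callers jump in, then back
    labels = []
    for n in numbers:            # loop: repeats a block of instructions
        if n % 2 == 0:           # conditional: selects one of two branches
            labels.append("even")
        else:
            labels.append("odd")
    return labels

print(classify([1, 2, 3]))  # -> ['odd', 'even', 'odd']
```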

A data structure is a logical structure intended to contain data, giving the data an organization that simplifies its processing. There are many types of data structures, depending on the operations to be performed on them; we will not dwell further on this subject.
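As one brief illustration (my own example), here is a stack, a last-in-first-out structure whose organization makes certain processing, such as reversing a sequence, fall out naturally:

```python
# A minimal stack built on a Python list.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):        # add on top
        self._items.append(item)

    def pop(self):               # remove from the top
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

# Usage: pushing then popping reverses the order.
s = Stack()
for ch in "abc":
    s.push(ch)
print(s.pop(), s.pop(), s.pop())  # -> c b a
```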

Every programming language offers control structures and data structures. Algorithms are very often written using widespread languages such as C, Java, etc.

Notions of termination, correctness, completeness

Termination is the assurance that the algorithm will finish in finite time. Termination proofs usually exhibit a positive integer quantity that strictly decreases at each step or iteration of the algorithm.

Given the guarantee that an algorithm will terminate, the correctness proof must provide the assurance that if the algorithm terminates with a proposed solution, then that solution is correct (in the sense that it answers the problem posed).

The proof of completeness guarantees that, for a given problem space, the algorithm, if it terminates, will give a solution for each input of the algorithm.

Termination: the algorithm has a well-defined time complexity.

Correctness: any answer computed by the algorithm is a solution of the problem.

Completeness: the algorithm can compute an answer for any instance of the problem.
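A minimal sketch tying the three notions to one algorithm, Euclid's gcd (a standard example, my choice rather than the text's):

```python
def gcd(a: int, b: int) -> int:
    # Completeness: defined for every instance with a >= 0 and b >= 0.
    assert a >= 0 and b >= 0
    while b != 0:
        # Termination: b is a positive integer that strictly decreases
        # at each iteration (a % b < b), so the loop ends in finite time.
        # Correctness: the invariant gcd(a, b) is preserved, since
        # gcd(a, b) = gcd(b, a mod b).
        a, b = b, a % b
    return a  # when b == 0, gcd(a, 0) = a: the answer is correct

print(gcd(48, 36))  # -> 12
```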

Algorithmic complexity

Algorithm complexity is an assessment of the cost of running an algorithm in terms of time (time complexity) or memory space (space complexity). The efficiency of an algorithm therefore depends on these two criteria, which we will try to minimize.

Algorithmic complexity makes it possible to assess the intrinsic efficiency of an algorithm: the "natural" performance of an algorithm, outside of the environment in which it will be implemented.

There are several ways to analyze the performance of an algorithm, compared in the sketch after this list:

  • Optimistic analysis (best-case analysis).
  • Average-case analysis, which often depends on the programming paradigm used.
  • Pessimistic analysis (worst-case analysis).
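As a minimal sketch (my own example), the three analyses applied to one algorithm, linear search:

```python
def linear_search(items: list, target) -> int:
    for i, item in enumerate(items):
        if item == target:
            return i        # found: stop early
    return -1               # not found

# Best case (optimistic): target is the first element -> 1 comparison.
# Average case: target uniformly placed -> about n / 2 comparisons.
# Worst case (pessimistic): target absent or last -> n comparisons.
print(linear_search([5, 3, 8], 5))   # best case: index 0
print(linear_search([5, 3, 8], 9))   # worst case: -1 after 3 comparisons
```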

Complexity classes are sets of problems that all have the same complexity according to a certain criterion. The best-known classes are those defined on Turing machines (deterministic or non-deterministic; see the course on automata) with time and space constraints. There is also a relationship between algorithmic complexity and the energy cost of computation, known as Landauer's principle.

Here is a list of the most commonly encountered complexities, with their usual notation (some are illustrated in the sketch after the list):

  • O(1), constant complexity: an assignment, an elementary calculation, etc.
  • O(log n), logarithmic complexity: dichotomy, trees.
  • O(n), linear complexity: a single loop.
  • O(n log n), quasi-linear complexity: divide and conquer, trees.
  • O(n²), quadratic complexity: two nested loops.
  • O(n³), cubic complexity: three nested loops.
  • O(n^k), polynomial complexity: nesting of loops without knowledge of states.
  • 2^O((log n)^c), quasi-polynomial complexity: divide and conquer.
  • O(2^n), exponential complexity: trees, brute force.
  • O(n!), factorial complexity: naive combinatorial enumeration.
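Minimal sketches (examples of my own choosing) of four of these classes, operating on the same sorted list:

```python
def first_element(xs):            # O(1): constant, a single access
    return xs[0]

def contains_sorted(xs, t):       # O(log n): logarithmic, dichotomy
    lo, hi = 0, len(xs)
    while lo < hi:                # halves the search interval each turn
        mid = (lo + hi) // 2
        if xs[mid] == t:
            return True
        if xs[mid] < t:
            lo = mid + 1
        else:
            hi = mid
    return False

def total(xs):                    # O(n): linear, one loop
    s = 0
    for x in xs:
        s += x
    return s

def count_pairs_summing_to(xs, t):  # O(n²): quadratic, two nested loops
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs)) if xs[i] + xs[j] == t)

xs = [1, 2, 3, 4]
print(first_element(xs), contains_sorted(xs, 3), total(xs),
      count_pairs_summing_to(xs, 5))  # -> 1 True 10 2
```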
