Code optimization is the modification of a program to improve its efficiency. Typical optimization goals include reducing the amount of code, reducing the RAM the program uses, speeding up execution, and reducing the number of input/output operations.

The main requirement usually imposed on an optimization method is that the optimized program must produce the same result and the same side effects, on the same set of input data, as the non-optimized program. However, this requirement may be relaxed when the gain from the optimization is judged more important than the consequences of the change in the program's behavior.

Types of optimization

Code optimization can be done either manually, by a programmer, or automatically. In the latter case the optimizer can be either a separate piece of software or built into the compiler (a so-called optimizing compiler). Additionally, modern processors can themselves optimize the order in which instructions are executed.

There are such concepts as high-level and low-level optimization. High-level optimization is mostly carried out by the programmer, who, operating with abstract entities (functions, procedures, classes, etc.) and keeping in mind a general model of the problem, can optimize the design of the system. Optimizations at the level of elementary source-code building blocks (loops, branches, etc.) are also usually classed as high-level; some authors place them in a separate ("middle") level (N. Wirth?). Low-level optimization takes place at the stage when source code is turned into machine instructions, and this stage is often automated. However, assembly-language programmers hold that no machine can surpass a good programmer here (while everyone agrees that a bad programmer will do even worse than a machine).

Selection of the area to be optimized

When optimizing code manually there is another problem: one needs to know not only how to optimize, but also where. Typically, due to various factors (slow input operations, the difference in speed between the human operator and the machine, etc.), a mere 10% of the code accounts for as much as 90% of the execution time (this statement is rather speculative, with dubious grounding in the Pareto principle, but E. Tanenbaum makes it sound quite convincing). Since optimization costs extra time, it is better to concentrate on these "critical" 10% than to try to optimize the entire program. Such a piece of code is called a bottleneck (or hotspot), and special programs called profilers, which measure the running time of different parts of a program, are used to find it.

In practice, optimization is often carried out after a stage of "chaotic" programming (with its "we'll figure it out later" and "it will do"), and is therefore a mixture of optimization proper, refactoring, and bug fixing: simplification of "fancy" constructions like strlen(path.c_str()), of logical conditions like (a.x != 0 && a.x != 0), and so on. Profilers are of little use for such optimizations. However, such places can be detected with static analysis tools, programs that look for semantic errors through deep analysis of the source code, because, as the second example shows, inefficient code can be the result of an error (here, most likely a typo: a.x != 0 && a.y != 0 was probably meant). A good analyzer will detect such code and display a warning message.

The harm and benefits of optimizations

Almost everything in programming must be approached rationally, and optimization is no exception. It is believed that an inexperienced assembly programmer typically writes code that is 3-5 times slower than compiler-generated code (Zubkov). There is a widely known remark by Knuth about early, rather low-level optimizations (such as fighting for an extra operator or variable): "Premature optimization is the root of all evil."

Few people object to the optimizations performed by the optimizer, and some of them are practically standard and mandatory, for example tail-call optimization in functional languages (tail recursion is a special kind of recursion that can be reduced to a loop).

However, it should be understood that numerous complex optimizations at the machine-code level can greatly slow down compilation. Moreover, the gain from them can be vanishingly small compared with optimizations of the overall system design (Wirth). It should also be remembered that modern, syntactically and semantically sophisticated languages have many subtleties, and a programmer who does not take them into account may be surprised by the consequences of an optimization.

For example, consider the C++ language and the so-called Return Value Optimization (RVO), the essence of which is that the compiler may omit copies of the temporary object returned by a function. Because the compiler "skips" the copy, this technique is also called "copy elision". So the following code:

    #include <iostream>

    struct C {
        C() {}
        C(const C&) { std::cout << "A copy was made.\n"; }
    };

    C f() {
        return C();
    }

    int main() {
        std::cout << "Hello World!\n";
        C obj = f();
    }

may print "A copy was made." twice, once, or not at all after "Hello World!", depending on whether the compiler elides the copies, even though the program's logic is the same in every case.

