Over time, programming languages constantly change and evolve, driven by new technologies, modern requirements, or a simple desire to refresh the coding style. Reactive programming can be implemented using frameworks such as ReactiveCocoa. It changes the imperative style of Objective-C, and this approach to programming has something to offer beyond the standard paradigm. This, of course, attracts the attention of iOS developers.

ReactiveCocoa brings a declarative style to Objective-C. What do we mean by this? The traditional imperative style, used in languages such as C, C++, Objective-C, and Java, can be described as follows: you write directives for the computer that must be performed in a certain order. In other words, you say "how to do" something. Declarative programming, by contrast, lets you describe "what to do" without defining "how to do it."

Imperative vs Functional programming

The imperative approach to programming involves a detailed description of each step the computer must take to complete a task. In fact, the imperative style closely mirrors how machine code itself works. This, by the way, is a characteristic feature of most programming languages.

In contrast, the functional approach solves problems using a set of functions that need to be performed. You define the input parameters for each function, and what each function returns. These two programming approaches are very different.

Here are the main differences between the two approaches:

1. State changes

In pure functional programming, state changes do not exist because there are no side effects. A side effect is any change of state, in addition to the return value, caused by interaction with something external. Referential transparency of a subexpression is often summarized as "no side effects" and is primarily a property of pure functions: a referentially transparent subexpression can be replaced with its value without changing the behavior of the program. Pure functional programming does not allow a function to access mutable state outside the function, so every subexpression can be treated as a plain function call.

To clarify the point, pure functions have the following attributes:

  • the only observable output is the return value
  • the only dependency on the input is the arguments
  • the arguments are fully determined before any output is generated

Despite the fact that the functional approach minimizes side effects, they cannot be completely avoided since they are an inherent part of any development.

On the other hand, functions in imperative programming do not have referential transparency, and this may be the key difference between the declarative approach and the imperative one. Side effects are widely used to implement state and I/O. Commands in the source language can change state, so the same expression can produce different results at different times.
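To make the contrast concrete, here is a small Haskell sketch of our own (illustrative, not taken from either framework): the first function is pure and referentially transparent, while the second hides mutable state and loses that property.

```haskell
import Data.IORef

-- A pure function: referentially transparent. "square 3 + square 3"
-- can always be rewritten as "let s = square 3 in s + s".
square :: Int -> Int
square x = x * x

-- An impure counterpart: the result depends on hidden mutable state,
-- so the same call returns different values on successive invocations.
impureNext :: IORef Int -> IO Int
impureNext ref = do
  modifyIORef ref (+ 1)
  readIORef ref

main :: IO ()
main = do
  print (square 3 + square 3)  -- 18, no matter when or how often evaluated
  ref <- newIORef 0
  a <- impureNext ref
  b <- impureNext ref
  print (a, b)                 -- (1,2): same expression, different results
```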

How about ReactiveCocoa? It is a functional framework for Objective-C, a conceptually imperative language with no explicit notion of pure functions. It encourages you to avoid mutating state, but it places no restrictions on side effects.

2. First-class functions

In functional programming, functions are first-class objects, just like data. What does that mean? It means a function can be passed as a parameter, assigned to a variable, or returned from another function. Why is this convenient? It makes it easy to manage blocks of execution and to create and combine functions in a variety of ways, without complications such as function pointers (char *(*(**foo)()); - have fun!).

Languages that use the imperative approach have their own ways of approximating first-class functions. What about Objective-C? It has blocks, its implementation of closures. Higher-order functions (HOFs) can be modeled by taking blocks as parameters: the block is a closure, and a higher-order function can be built from a specific set of blocks.

However, manipulating first-class functions in functional languages is more direct and requires fewer lines of code.
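For comparison, this is how naturally first-class functions read in a functional language; the sketch below is our own Haskell illustration (the function names are made up):

```haskell
-- Functions are ordinary values: they can be passed in, stored, returned.
applyTwice :: (a -> a) -> a -> a
applyTwice f = f . f

-- A function that returns a function (a "function factory").
adder :: Int -> (Int -> Int)
adder n = \x -> x + n

main :: IO ()
main = do
  print (applyTwice (+ 3) 10)      -- 16
  print (map (adder 5) [1, 2, 3])  -- [6,7,8]
```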

3. Main flow control

Imperative-style loops are expressed as recursive function calls in functional programming. Iteration in functional languages is usually done through recursion. Why? Because without mutable state there is nothing for a loop counter to mutate. To Objective-C developers, loops look much more programmer-friendly, and recursion can cause problems such as excessive RAM consumption.
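For instance, here is iteration expressed through recursion in Haskell (a sketch of our own, not from the article's frameworks):

```haskell
-- Iteration through recursion: the length of a list, no loop involved.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

main :: IO ()
main = print (len "hello")  -- 5
```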

But! We can write such functions without using loops or recursion at all. For the infinitely many specialized actions that could be applied to the elements of a collection, functional programming offers reusable iteration functions such as "map", "filter", and "fold". These functions are useful for refactoring source code: they reduce duplication and do not require writing a separate function for each case. (Read on, we have more about this!)
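The same idea in a Haskell sketch of our own: a computation built entirely from those reusable iteration functions, with no explicit loop or recursion.

```haskell
-- Sum of the squares of the even numbers, composed from
-- reusable iteration functions instead of a hand-written loop.
sumEvenSquares :: [Int] -> Int
sumEvenSquares = foldr (+) 0 . map (^ 2) . filter even

main :: IO ()
main = print (sumEvenSquares [1 .. 10])  -- 220
```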

4. Order of execution

Declarative expressions describe only the logical relationship between a subexpression's arguments and its result. So, in the absence of side effects, the state transition of each function call occurs independently of the others.

The order of execution of imperative expressions depends on mutable state. Therefore the order of execution matters, and it is implicitly determined by the organization of the source code. Here we can also point out the difference between the evaluation strategies of the two approaches.

Lazy evaluation, or call-by-need, is a common strategy in functional programming languages. Evaluation of an expression is delayed until its value is actually needed, and repeated evaluations are avoided. In other words, an expression is evaluated only when an expression that depends on it is evaluated, so the order of operations becomes indeterminate.
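A minimal Haskell illustration of call-by-need (our own sketch): an infinite list is perfectly fine, because only the part that is actually demanded ever gets computed.

```haskell
-- An infinite list of squares; lazy evaluation computes only
-- the prefix that is demanded.
squares :: [Integer]
squares = map (^ 2) [0 ..]

main :: IO ()
main = do
  print (take 5 squares)            -- [0,1,4,9,16]
  print (takeWhile (< 40) squares)  -- [0,1,4,9,16,25,36]
```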

In contrast, eager (strict) evaluation in an imperative language means that an expression is evaluated as soon as it is bound to a variable, which dictates the order of execution. This makes it easier to determine when subexpressions (including functions) will be evaluated, which matters because subexpressions can have side effects that affect the evaluation of other expressions.

5. Amount of code

This one is simple: the functional approach requires writing less code than the imperative approach. That means fewer crashes, less code to test, and a more productive development cycle. Since a system constantly evolves and grows, this is important.

Main components of ReactiveCocoa

Functional programming works with the concepts of future (a read-only placeholder for a value that may not exist yet) and promise (the writable, single-assignment container that completes a future). What's good about them? In imperative programming you can only work with values that already exist, which leads to the need to synchronize asynchronous code and other difficulties. The concepts of futures and promises let you work with values that have not been created yet: asynchronous code can be written in a synchronous style.
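The mechanics can be sketched in a few lines of Haskell with an MVar (our own toy model, assumed names, not Sodium's or RAC's API): the write side plays the role of the promise, the blocking read side plays the role of the future.

```haskell
import Control.Concurrent

-- The (pretend) expensive computation whose result we are waiting for.
compute :: Int -> Int
compute x = x * 6

main :: IO ()
main = do
  promise <- newEmptyMVar      -- the writable "promise" side
  _ <- forkIO $ do             -- a producer fills it asynchronously
    threadDelay 10000          -- pretend this is a network call
    putMVar promise (compute 7)
  result <- takeMVar promise   -- the read side acts as the "future":
  print result                 -- it blocks until the value exists
```

The consumer code reads top to bottom as if it were synchronous, even though the value is produced on another thread.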


Signal

Futures and promises are represented as signals in reactive programming. The signal is the central component of ReactiveCocoa: it represents a stream of events that will arrive in the future. You subscribe to a signal and get access to the events as they happen over time. A signal is a push-driven stream and can represent a button click, an asynchronous network operation, a timer, other UI events, or anything else that changes over time. Signals can carry the results of asynchronous operations and efficiently combine multiple event sources.

Sequence

Another type of stream is a sequence. Unlike a signal, a sequence is a pull-driven stream. It is a kind of collection with a purpose similar to NSArray. RACSequence performs operations when you need their results, rather than all at once as with an NSArray: values in a sequence are evaluated lazily by default. Using only part of a sequence potentially improves performance. RACSequence lets Cocoa collections be processed in a uniform, declarative way: RAC adds a -rac_sequence method to most Cocoa collection classes so they can be used as RACSequences.

Command

A RACCommand creates and subscribes to a signal in response to some action. This applies primarily to UI interactions: the UIKit categories provided by ReactiveCocoa for most UIKit controls give us a clean way to handle UI events. Imagine we have to register a user in response to a button click. In this case the command can represent the network request: while it executes, the button switches to the "disabled" state, and back again when it finishes. What else? We can pass an "enabled" signal into the command (Reachability is a good example). Then, if the server is unreachable (our "enabled" signal), the command becomes disabled, and every control associated with the command reflects this state.

Examples of basic operations

Here are some diagrams of how basic operations with RACSignals work:

Merge

+ (RACSignal *)merge:(id<NSFastEnumeration>)signals;


The resulting stream contains the events of both source streams combined. So +merge is useful when you don't care which source an event came from but want to process them all in one place. In our example, stateLabel.text is driven by three different signals: progress, completion, and errors.

RACCommand *loginCommand = [[RACCommand alloc] initWithSignalBlock:^RACSignal *(id input) {
    // let's login!
}];
RACSignal *executionSignal = [loginCommand.executionSignals switchToLatest];
RACSignal *completionSignal = [[[executionSignal materialize] filter:^BOOL(RACEvent *event) {
    return event.eventType == RACEventTypeCompleted;
}] map:^id(id value) {
    return @"Done";
}];
RACSignal *errorSignal = loginCommand.errors;

CombineLatest

+ (RACSignal *)combineLatest:(id<NSFastEnumeration>)signals reduce:(id (^)())reduceBlock;

The resulting stream contains the latest values of the source streams. Until every source stream has produced at least one value, the result is empty.


When can we use it? Let's take our previous example and add more logic to it. It's useful to only enable the login button when the user has entered the correct email and password, right? We can declare this rule like this:

RACSignal *enabledSignal = [RACSignal combineLatest:@[self.emailField.rac_textSignal, self.passwordField.rac_textSignal]
                                             reduce:^id (NSString *email, NSString *password) {
    // isValidEmail is a hypothetical validation helper
    return @([email isValidEmail] && password.length > 3);
}];

Now let's change our login command a little and connect it to the actual loginButton:

RACCommand *loginCommand = [[RACCommand alloc] initWithEnabled:enabledSignal
                                                   signalBlock:^RACSignal *(id input) {
    // let's login!
}];
self.loginButton.rac_command = loginCommand;

FlattenMap

- (RACSignal *)flattenMap:(RACStream * (^)(id value))block;

This function creates a new stream for each value of the original stream. The resulting stream delivers the values of the new signals produced from the original values; therefore it can be asynchronous.


Let's imagine that your authorization request to the system consists of two separate parts: receiving data from Facebook (identifier, etc.) and passing it to the Backend. One of the requirements is to be able to cancel login. Therefore, client code must handle the state of the login process to be able to cancel it. This results in a lot of boilerplate code, especially if you can log in from multiple places.

How does ReactiveCocoa help you? This could be a login implementation:

- (RACSignal *)authorizeUsingFacebook {
    return [[[self openFacebookSession]
        flattenMap:^RACStream *(FBSession *session) {
            return [self fetchFacebookProfile:session];
        }]
        flattenMap:^RACStream *(NSDictionary *profile) {
            return [self authorizeWithBackendUsingProfile:profile];
        }];
}

Legend:

-openFacebookSession: a signal that results in an open FBSession; if necessary, it can trigger a Facebook login.

-fetchFacebookProfile:: a signal that retrieves profile data through the session passed to it. (The method names are illustrative.)

The advantage of this approach is that, to the caller, the entire flow is opaque: it is represented by a single signal that can be cancelled at any "stage," be it the Facebook login or the Backend call.

Filter

- (RACSignal *)filter:(BOOL (^)(id value))block;

The resulting stream contains the values of stream "a" that pass the given predicate.


RACSequence *sequence = @[@"Some", @"example", @"of", @"sequence"].rac_sequence;
RACSequence *filteredSequence = [sequence filter:^BOOL(NSString *value) {
    return value.length > 2;
}];

Map

- (RACSignal *)map:(id (^)(id value))block;

Unlike flattenMap, map runs synchronously. Each value of stream "a" is passed through the given function, and the resulting stream contains the mapped values.


Let's say you need to display the title of a model on the screen, applying some attributes to it. Map comes into play when "applying some attributes" is described as a separate function:

RAC(self.titleLabel, text) = [RACObserve(self, model.title) map:^id(NSString *modelTitle) {
    return [[NSAttributedString alloc] initWithString:modelTitle attributes:attributes];
}];

How it works: it binds self.titleLabel.text to changes of model.title, applying our custom attributes to each new value.

Zip

+ (RACSignal *)zip:(id<NSFastEnumeration>)streams reduce:(id (^)())reduceBlock;

An event in the resulting stream is created once every source stream has produced a value. It contains one value from each of the zipped streams.


In practice, zip resembles dispatch_group_notify. For example, you have three separate signals and need to combine their responses at a single point:

// requestA/B/C are hypothetical request signals
NSArray *signals = @[requestA, requestB, requestC];
return [RACSignal zip:signals reduce:^id(id responseA, id responseB, id responseC) {
    // combine the three responses at a single point
    return @[responseA, responseB, responseC];
}];
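The pairing rule behind zip is easy to see on plain Haskell lists (our own pull-driven analogue, not the RAC implementation): values are matched up positionally, and the result ends as soon as the shortest source is exhausted.

```haskell
-- zip3 pairs the n-th elements of three sources; the extra elements
-- of the longer sources are simply never consumed.
main :: IO ()
main = print (zip3 [1, 2, 3, 4 :: Int] "ab" [True, False, True])
  -- [(1,'a',True),(2,'b',False)]
```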

Throttle

- (RACSignal *)throttle:(NSTimeInterval)interval;

A value from stream "a" is passed to the resulting stream only after the given time interval has elapsed without a new value appearing. If a new value arrives within the interval, the previous one is discarded and the timer restarts: only the last value reaches the resulting stream.


A typical case: we need to run a search query as the user types in the searchField. A standard task, right? However, constructing and sending a network request on every text change is inefficient, since the textField can generate many such events per second, and you end up with wasteful network usage.
The usual solution is to add a delay before actually firing the network request, typically with an NSTimer. With ReactiveCocoa it's much easier!

[[self.searchField.rac_textSignal throttle:0.3] subscribeNext:^(NSString *text) {
    // perform network request
}];

The important note here is that all "previous" text values are discarded: only the "latest" one is delivered.
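The timing rule can be modeled in a few lines of pure Haskell (a simplified model of the semantics, with assumed names, not RAC's implementation): given timestamped values, a value survives only if no newer value arrives within the interval after it.

```haskell
-- A value survives only if no newer value arrives within `dt` after it;
-- timestamps are in seconds.
throttleSim :: Double -> [(Double, a)] -> [a]
throttleSim dt xs = [v | ((t, v), next) <- zip xs nexts, quiet t next]
  where
    nexts = map Just (drop 1 xs) ++ [Nothing]
    quiet _ Nothing        = True   -- last value: nothing supersedes it
    quiet t (Just (t', _)) = t' - t >= dt

main :: IO ()
main = print (throttleSim 0.3 [(0.0, "a"), (0.1, "ab"), (0.2, "abc"), (0.9, "abcd")])
  -- ["abc","abcd"]
```

The fast keystrokes "a" and "ab" are superseded within 0.3 s and never reach the result.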

Delay

- (RACSignal *)delay:(NSTimeInterval)interval;

A value received in stream "a" is passed to the resulting stream after the given time interval.


Like -throttle:, -delay: only delays the delivery of "next" and "completed" events.

// messageSignal is a hypothetical text-carrying signal
[[messageSignal delay:2] subscribeNext:^(NSString *text) {
    // react to the value two seconds later
}];

What we love about Reactive Cocoa

  • Introduces Cocoa Bindings to iOS
  • The ability to compose operations on future data (see the theory of futures and promises, e.g. in Scala).
  • The ability to represent asynchronous operations in a synchronous manner. Reactive Cocoa simplifies asynchronous software such as network code.
  • Convenient decomposition. Code that deals with user events and application state changes can become very complex and confusing. ReactiveCocoa makes models of dependent operations especially simple. When we represent operations as composable streams (processing network requests, user events, and so on), we achieve high modularity and loose coupling, which leads to more frequent code reuse.
  • Behaviors and relationships between properties are defined declaratively.
  • It solves synchronization problems: if you combine multiple signals, there is one single place to handle all the results (be it the next value, a completion, or an error).

With the RAC framework, you can create and transform streams of values in a better, higher-level way. RAC makes it easier to manage everything that waits on an asynchronous operation: the network response, a dependent value change, and the subsequent reaction. It may seem difficult to deal with at first, but ReactiveCocoa is infectious!

Today, there are a number of methodologies for programming complex real-time systems. One of them is called FRP (functional reactive programming). It adopts the Observer design pattern on the one hand and, as you might guess, the principles of functional programming on the other. In this article we will look at functional reactive programming through its implementation in the Sodium library for Haskell.

Theoretical part

Traditionally, programs are divided into three classes:

  • Batch
  • Interactive
  • Reactive

The differences between these three classes of programs lie in the type of interaction they have with the outside world.

Batch programs are not synchronized with the outside world in any way: they can be launched at an arbitrary point in time and terminate once the input data has been processed and the result obtained. Unlike batch programs, interactive and reactive programs exchange data with the outside world continuously, from the moment they start until they stop.

A reactive program, unlike an interactive one, is tightly synchronized with the external environment: it must respond to events in the outside world with an acceptable delay, at the rate at which those events occur. An interactive program, by contrast, is allowed to make the environment wait. For example, a text editor is an interactive program: the user who has started spell checking must wait for it to complete before continuing to edit. An autopilot, however, is a reactive program: if an obstacle appears, it must immediately correct the course to fly around it, because the real world cannot be paused the way a text editor's user can be.

These two classes of programs appeared somewhat later, when computers began to be used to control machinery and user interfaces began to develop. Since then many implementation techniques have come and gone, each designed to correct the shortcomings of its predecessors. At first these were simple event-driven programs: the system defined a set of events to react to and created handlers for them. Handlers, in turn, could also generate events that went out into the world. This model was simple and solved a certain range of simple interactive problems.

But over time, interactive and reactive programs grew more complex, and event-driven programming became hell. More advanced tools for building event-driven systems were needed. The methodology had two main flaws:

  • Implicit state
  • Non-determinism

To fix the first flaw, the Observer pattern was introduced, transforming events into time-varying values. The set of these values represents the explicit state the program is in. Event processing was thus replaced by data processing, which is familiar from batch programs. The developer could change observed values and subscribe handlers to their changes. It became possible to implement dependent values that change according to a given rule whenever the values they depend on change.
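A minimal sketch of such an observable value in Haskell (our own toy implementation; the type and function names are made up): subscribers are notified on every change.

```haskell
import Data.IORef

-- An observable cell: the current value plus a list of change handlers.
data Observable a = Observable
  { current  :: IORef a
  , handlers :: IORef [a -> IO ()]
  }

newObservable :: a -> IO (Observable a)
newObservable x = Observable <$> newIORef x <*> newIORef []

-- Subscribe a handler to future changes.
subscribe :: Observable a -> (a -> IO ()) -> IO ()
subscribe obs h = modifyIORef (handlers obs) (h :)

-- Change the value and notify every subscriber.
setValue :: Observable a -> a -> IO ()
setValue obs x = do
  writeIORef (current obs) x
  readIORef (handlers obs) >>= mapM_ ($ x)

main :: IO ()
main = do
  obs <- newObservable (0 :: Int)
  subscribe obs $ \v -> putStrLn ("value changed to " ++ show v)
  setValue obs 1
  setValue obs 2
```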

Although this approach was a step forward, events are still needed: an event does not always imply a change of some value. A timer tick implies that a time counter increments by a second, but an alarm that goes off every day at a certain time does not correspond to any value at all. Of course, you can associate a value with this event, but that would be an artificial device. For example, we could introduce a boolean value alarm_time == remainder_of_division(current_time, 24*60*60). But this is not quite what we are interested in, since the variable is tied to the tick period, and the value actually changes twice per firing. To determine that the alarm has gone off, a subscriber must detect that the value became true, not the other way around. The value stays true for exactly one second, and if we change the tick period from a second to, say, 100 milliseconds, the value will be true for those 100 milliseconds instead.

The functional reactive programming methodology emerged as the functional programmers' response to the Observer pattern. FRP rethinks those approaches: events did not go away, but observed values also appeared and were named characteristics (Behaviors). An event in this methodology is a discretely occurring value, while a characteristic is a continuously existing value. The two are interrelated: characteristics can generate events, and events can serve as sources for characteristics.

The problem of state non-determinism is much harder. It stems from the fact that event-driven systems are de facto asynchronous, which gives rise to intermediate system states that may be unacceptable for some reason. So-called synchronous programming appeared to solve this problem.

Practical part

The Sodium library appeared as a project implementing FRP with a common interface across different programming languages. It contains all the elements of the methodology: the primitives (Event, Behavior) and the patterns for using them.

Primitives and interaction with the outside world

The two main primitives we have to work with are:

  • Event a: an event carrying values of type a
  • Behavior a: a characteristic (a changing value) of type a

We can create new events and values using the functions newEvent and newBehavior:

newEvent    :: Reactive (Event a, a -> Reactive ())
newBehavior :: a -> Reactive (Behavior a, a -> Reactive ())

As we can see, both functions can only be called in the Reactive monad. Each returns the primitive itself together with the function that must be called to fire the event or change the value. The characteristic-creation function takes the initial value as its first argument.

To connect the real world to a reactive program there is the function sync, and to connect the program to the outside world there is listen:

sync   :: Reactive a -> IO a
listen :: Event a -> (a -> IO ()) -> Reactive (IO ())

The first function, as the name suggests, executes reactive code synchronously: it gets you from the IO context into the Reactive context. The second adds handlers, executing in IO, for events that occur in the Reactive context. listen returns an "unlisten" action that must be called to detach the handler.

A distinctive transaction mechanism is implemented this way. Everything we do inside the Reactive monad executes within one transaction, at the moment sync is called. State is deterministic only outside the context of a transaction.

This is the basis of functional reactive programming, and it is sufficient for writing programs. It may be a little confusing that you can only listen to events; that is exactly how it should be, since, as we will see later, events and characteristics are closely related.

Operations on basic primitives

For convenience, the methodology includes additional functions that transform events and characteristics. Let's look at some of them:

-- An event that will never fire (can be used as a stub)
never :: Event a
-- Merge two events of the same type into one
-- (useful for defining a single handler for a class of events)
merge :: Event a -> Event a -> Event a
-- Extract the values from Maybe events (separates the wheat from the chaff)
filterJust :: Event (Maybe a) -> Event a
-- Turn an event into a characteristic with an initial value
-- (the value changes when the event fires)
hold :: a -> Event a -> Reactive (Behavior a)
-- Turn a characteristic into an event (fires when the value changes)
updates :: Behavior a -> Event a
-- Turn a characteristic into an event
-- (also fires an event for the initial value)
value :: Behavior a -> Event a
-- When an event fires, take the characteristic's value,
-- apply the function, and fire the result as an event
snapshot :: (a -> b -> c) -> Event a -> Behavior b -> Event c
-- Get the current value of a characteristic
sample :: Behavior a -> Reactive a
-- Combine repeated events into one
coalesce :: (a -> a -> a) -> Event a -> Event a
-- Suppress all events except the first
once :: Event a -> Event a
-- Split an event carrying a list into several events
split :: Event [a] -> Event a

Spherical examples in a vacuum

Let's try to write something:

import FRP.Sodium

main = do
  sync $ do
    -- create an event
    (e1, triggerE1) <- newEvent
    -- create a characteristic with initial value 0
    (v1, changeV1) <- newBehavior 0
    -- define a handler for the event
    listen e1 $ \_ -> putStrLn "e1 triggered"
    -- define a handler for changes of the characteristic's value
    listen (value v1) $ \v -> putStrLn $ "v1 value: " ++ show v
    -- fire the event (no payload)
    triggerE1 ()
    -- change the characteristic's value
    changeV1 13

Let's install the Sodium package using Cabal and run the example in the interpreter:

# if we want to work in a separate sandbox, create it
> cabal sandbox init
# install
> cabal install sodium
> cabal repl
GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
# load the example
Prelude> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
# run the example
*Main>

Now let's experiment. Comment out the line where we change the value (changeV1 13) and restart the example:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered
v1 value: 0

As you can see, the initial value is still printed; this happens because the function value fires the first event with the characteristic's initial value. Now let's replace value with updates and see what happens:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered

Now the initial value is not printed, but if we uncomment the line that changes the value, the changed value is still printed. Let's put everything back as it was and fire the event e1 twice:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered
e1 triggered
v1 value: 13

As you can see, the handler also fired twice. Let's try to avoid this: in the call to listen, replace the argument e1 with (once e1), thereby creating a new event that fires only once:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered
v1 value: 13

When an event carries no argument, only the fact of its occurrence matters, so once is the right choice for collapsing events. When there is an argument, however, this is not always appropriate. Let's rewrite the example as follows:

(e1, triggerE1) <- newEvent
(v1, changeV1) <- newBehavior 0
listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
listen (value v1) $ \v -> putStrLn $ "v1 value: " ++ show v
triggerE1 "a"
triggerE1 "b"
triggerE1 "c"
changeV1 13

We will receive, as expected, all events with values ​​in the same order in which they were generated:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered with: "a"
e1 triggered with: "b"
e1 triggered with: "c"
v1 value: 13

If we use once with e1, we will only get the first event. So let's try coalesce instead: replace the e1 argument of listen with (coalesce (\_ a -> a) e1):

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered with: "c"
v1 value: 13

And indeed, we only received the last event.

More examples

Let's look at more complex examples:

import FRP.Sodium

main = do
  sync $ do
    (e1, triggerE1) <- newEvent
    -- create a characteristic driven by the event e1
    v1 <- hold 0 e1
    listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
    listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
    -- fire the events
    triggerE1 1
    triggerE1 2
    triggerE1 3

Here's the output:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
e1 triggered with: 1
e1 triggered with: 2
e1 triggered with: 3
v1 value is: 3

The characteristic's value is printed only once, although several events were fired. This is the nature of synchronous programming: characteristics are synchronized with the sync call. To demonstrate, let's modify our example slightly:

main = do
  triggerE1 <- sync $ do
    (e1, triggerE1) <- newEvent
    v1 <- hold 0 e1
    listen e1 $ \v -> putStrLn $ "e1 triggered with: " ++ show v
    listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
    return triggerE1
  sync $ triggerE1 1
  sync $ triggerE1 2
  sync $ triggerE1 3

We simply exposed the event trigger to the outside world and now call it in separate transactions:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
v1 value is: 0
e1 triggered with: 1
v1 value is: 1
e1 triggered with: 2
v1 value is: 2
e1 triggered with: 3
v1 value is: 3

Now each event displays a new value.

Other operations on primitives

Consider the following group of useful functions:

-- Merge events using the function
mergeWith :: (a -> a -> a) -> Event a -> Event a -> Event a
-- Filter events, keeping only those for which the predicate returns True
filterE :: (a -> Bool) -> Event a -> Event a
-- "Turn off" events while the characteristic is False
gate :: Event a -> Behavior Bool -> Event a
-- An event transformer with internal state
collectE :: (a -> s -> (b, s)) -> s -> Event a -> Reactive (Event b)
-- A characteristic transformer with internal state
collect :: (a -> s -> (b, s)) -> s -> Behavior a -> Reactive (Behavior b)
-- Create a characteristic by accumulating events
accum :: a -> Event (a -> a) -> Reactive (Behavior a)

Of course, these are not all the functions provided by the library. There are much more exotic things that are beyond the scope of this article.

Examples

Let's try these functions in action. We'll start with the last one and organize something like a calculator. Suppose we have a value to which we can apply arithmetic functions and read off the result:

import FRP.Sodium

main = do
  triggerE1 <- sync $ do
    (e1, triggerE1) <- newEvent
    -- let the initial value be 1
    v1 <- accum (1 :: Int) e1
    listen (value v1) $ \v -> putStrLn $ "v1 value is: " ++ show v
    return triggerE1
  -- add 1
  sync $ triggerE1 (+ 1)
  -- multiply by 2
  sync $ triggerE1 (* 2)
  -- subtract 3
  sync $ triggerE1 (+ (-3))
  -- add 5
  sync $ triggerE1 (+ 5)
  -- raise to the power of 3
  sync $ triggerE1 (^ 3)

Let's run:

*Main> :l Example.hs
Compiling Main (Example.hs, interpreted)
Ok, modules loaded: Main.
*Main> main
v1 value is: 1
v1 value is: 2
v1 value is: 4
v1 value is: 1
v1 value is: 6
v1 value is: 216

It may seem that this set of functions is quite limited, but in fact it is not. After all, we are dealing with Haskell: applicative functors and monads are here to stay. We can perform the operations we are used to performing on plain values directly on characteristics and events, obtaining new characteristics and events as a result. Characteristics have Functor and Applicative instances; events, for obvious reasons, have only a Functor instance.

For example:

import Control.Applicative ((<$>), (<*>))
import FRP.Sodium

main = do
  (setA, setB) <- sync $ do
    (a, setA) <- newBehavior 0
    (b, setB) <- newBehavior 0
    -- a new characteristic a + b
    let a_add_b = (+) <$> a <*> b
    -- a new characteristic a * b
    let a_mul_b = (*) <$> a <*> b
    listen (value a) $ \v -> putStrLn $ "a = " ++ show v
    listen (value b) $ \v -> putStrLn $ "b = " ++ show v
    listen (value a_add_b) $ \v -> putStrLn $ "a + b = " ++ show v
    listen (value a_mul_b) $ \v -> putStrLn $ "a * b = " ++ show v
    return (setA, setB)
  sync $ do
    setA 2
    setB 3
  sync $ setA 3
  sync $ setB 7

This is what will be output in the interpreter:

λ> main
a = 0
b = 0
a + b = 0
a * b = 0
a = 2
b = 3
a + b = 5
a * b = 6
a = 3
a + b = 6
a * b = 9
b = 7
a + b = 10
a * b = 21

Now let's see how something like this works with events:

import Control.Applicative ((<$>))
import FRP.Sodium

main = do
    sigA <- sync $ do
        (a, sigA) <- newEvent
        let a_mul_2 = (* 2) <$> a
        let a_pow_2 = (^ 2) <$> a
        listen a $ \v -> putStrLn $ "a = " ++ show v
        listen a_mul_2 $ \v -> putStrLn $ "a * 2 = " ++ show v
        listen a_pow_2 $ \v -> putStrLn $ "a ^ 2 = " ++ show v
        return sigA
    sync $ sigA 2
    sync $ sigA 3
    sync $ sigA 7

This is what will be output:

λ> main
a = 2
a * 2 = 4
a ^ 2 = 4
a = 3
a * 2 = 6
a ^ 2 = 9
a = 7
a * 2 = 14
a ^ 2 = 49

The documentation contains a list of class instances implemented for Behavior and Event, but nothing prevents you from implementing instances of the missing classes yourself.
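To see why characteristics admit an Applicative instance while events admit only a Functor, here is a toy model. This is not Sodium's actual implementation, just a sketch of the idea: a characteristic has a value at every moment, so two of them can be combined pointwise, whereas an event is only a set of discrete occurrences, and pure would have no moment to occur at.

```haskell
-- A toy model, NOT Sodium's real types: a characteristic modeled as
-- a function of time, an event as a list of (time, value) pairs.
newtype B t a = B (t -> a)   -- behavior/characteristic: a value at every moment
newtype E t a = E [(t, a)]   -- event: discrete occurrences

instance Functor (B t) where
    fmap f (B g) = B (f . g)

-- A characteristic has a value at every moment, so two of them can be
-- combined pointwise: this is what makes Applicative possible.
instance Applicative (B t) where
    pure x      = B (const x)
    B f <*> B g = B (\t -> f t (g t))

-- Events can only be mapped over; there is no sensible `pure`
-- (an occurrence at which moment?), hence only Functor.
instance Functor (E t) where
    fmap f (E xs) = E [ (t, f x) | (t, x) <- xs ]

at :: B t a -> t -> a
at (B f) = f

occs :: E t a -> [(t, a)]
occs (E xs) = xs

main :: IO ()
main = do
    let a = B (+ 1) :: B Int Int
        b = B (* 2) :: B Int Int
        s = (+) <$> a <*> b        -- pointwise sum, like a_add_b above
    print (s `at` 3)               -- (3+1) + (3*2) = 10
    print (occs (fmap (* 2) (E [(1 :: Int, 3 :: Int)])))
```

The pointwise combination is exactly what `(+) <$> a <*> b` did in the example above; the toy `E` type shows that mapping over occurrences is the only structure an event offers.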

The Downside of Reactivity

Functional reactive programming undoubtedly simplifies the development of complex real-time systems, but there are a number of aspects to keep in mind when using this approach. Here we will look at the problems that occur most often.

Non-simultaneity

Synchronous programming implies a transaction mechanism that ensures the consistency of successively changing system states and, therefore, the absence of unexpected intermediate states. In Sodium, transactions are delimited by sync calls. Although the order of operations inside a transaction is not defined, it cannot be assumed that everything inside it happens simultaneously: values change in a certain order, and this order affects the result. For example, using events and characteristics together may cause unexpected effects. Let's look at an example:

import Control.Applicative ((<$>))
import FRP.Sodium

main = do
    setVal <- sync $ do
        (val, setVal) <- newBehavior 0
        -- create a boolean characteristic val > 2
        let gt2 = (> 2) <$> val
        -- create an event carrying the values that are > 2
        let evt = gate (value val) gt2
        listen (value val) $ \v -> putStrLn $ "val = " ++ show v
        listen (value gt2) $ \v -> putStrLn $ "val > 2 ? " ++ show v
        listen evt $ \v -> putStrLn $ "val > 2: " ++ show v
        return setVal
    sync $ setVal 1
    sync $ setVal 2
    sync $ setVal 3
    sync $ setVal 4
    sync $ setVal 0

You can expect output like this:

val = 0
val > 2 ? False
val = 1
val > 2 ? False
val = 2
val > 2 ? False
val = 3
val > 2 ? True
val > 2: 3
val = 4
val > 2 ? True
val > 2: 4
val = 0
val > 2 ? False

However, in fact the line val > 2: 3 will be missing, and at the end there will be an extra line val > 2: 0. This happens because the value-change event (value val) is generated before the dependent characteristic gt2 has been recalculated, so the event evt does not fire for the value 3. At the end, when we set the value back to 0, the recalculation of gt2 is again one step late, and evt fires with the stale value.

In general, these effects are the same as in analog and digital electronics: signal races, which are fought with various techniques, synchronization in particular. This is what we'll do to make this code work properly:

import Control.Applicative ((<$>))
import FRP.Sodium

main = do
    (sigClk, setVal) <- sync $ do
        -- introduce a new event clk, a synchronization signal,
        -- just like in digital electronics
        (clk, sigClk) <- newEvent
        (val, setVal) <- newBehavior 0
        -- an alternative way of sampling a value on the
        -- synchronization signal; all calls to value
        -- are replaced with value'
        let value' = snapshot (\_ v -> v) clk
        let gt2 = (> 2) <$> val
        let evt = gate (value' val) gt2
        listen (value' val) $ \v -> putStrLn $ "val = " ++ show v
        listen (value' gt2) $ \v -> putStrLn $ "val > 2 ? " ++ show v
        listen evt $ \v -> putStrLn $ "val > 2: " ++ show v
        return (sigClk, setVal)
    -- introduce a new function sync' that raises the synchronization
    -- signal at the end of each transaction; all calls to sync
    -- are replaced with it
    let sync' a = sync $ a >> sigClk ()
    sync' $ setVal 1
    sync' $ setVal 2
    sync' $ setVal 3
    sync' $ setVal 4
    sync' $ setVal 0

Now our output is as expected:

λ> main
val = 0
val > 2 ? False
val = 1
val > 2 ? False
val = 2
val > 2 ? False
val = 3
val > 2 ? True
val > 2: 3
val = 4
val > 2 ? True
val > 2: 4
val = 0
val > 2 ? False

Laziness

Another kind of problem relates to the lazy nature of evaluation in Haskell: when testing code in the interpreter, some of the output at the end may simply be missing. What can be suggested in this case is to perform a useless synchronization step at the end, e.g. sync $ return ().
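The missing output is an artifact of how sodium's listeners interact with the interpreter, but the lazy evaluation underlying it is easy to observe with plain Haskell (a minimal sketch, no sodium involved): nothing is evaluated until something demands it.

```haskell
main :: IO ()
main = do
    -- an infinite list is fine to define: it is never built eagerly
    let doubled = map (* 2) [1 :: Integer ..]
    -- only the demanded prefix is ever evaluated
    print (take 5 doubled)    -- prints [2,4,6,8,10]
```

The same deferral applies to sodium's internal bookkeeping, which is why forcing one last trivial transaction flushes pending work.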

Conclusion

That's enough for now, I think. One of the authors of the Sodium library is currently writing a book about FRP. Let's hope it will fill some of the gaps in this area of programming and help popularize progressive approaches in our ossified minds.


I want to tell you about a modern programming discipline that meets the growing demands for scalability, fault tolerance and responsiveness, and is indispensable in both multi-core environments and cloud computing, and also introduce an open online course on it, which will start in just a few days.

If you haven't heard anything about reactive programming, that's okay. This is a rapidly developing discipline that combines concurrency with event-drivenness and asynchrony. Reactivity is inherent in any web service and distributed system, and serves as the core of many high-performance, highly parallel systems. In short, the course authors propose to consider reactive programming as a natural extension of functional programming (with higher-order functions) to parallel systems with distributed state, coordinated and orchestrated by asynchronous data streams exchanged between active subjects, or actors.

This is described in more understandable terms in the Reactive Manifesto; below I retell its main provisions (a full translation has been published on Habré). According to Wikipedia, the term reactive programming has existed for quite a long time and has practical applications of varying degrees of exoticism, but it received a new impetus for development and distribution quite recently, thanks to the efforts of the authors of the Reactive Manifesto, an initiative group from Typesafe Inc. Typesafe is known in the functional programming community as the company founded by the authors of the elegant Scala language and the revolutionary Akka concurrency platform. They now position their company as the creator of the world's first next-generation reactive platform. Their platform allows you to quickly develop complex user interfaces and provides a new level of abstraction over parallel computing and multi-threading, reducing their inherent risks with guaranteed predictable scaling. It puts the ideas of the Reactive Manifesto into practice and allows the developer to conceptualize and create applications that meet modern needs.

You can become familiar with the platform and with reactive programming by taking the Principles of Reactive Programming massive open online course. This course is a continuation of Martin Odersky's Principles of Functional Programming in Scala, which has over 100,000 participants and one of the highest completion rates in the world for a massive open online course. Together with the creator of the Scala language, the new course is taught by Erik Meijer, who developed the Rx framework for reactive programming on .NET, and Roland Kuhn, who currently leads the Akka development team at Typesafe. The course covers the key elements of reactive programming and shows how they are used to design event-driven systems that are scalable and fault-tolerant. The educational material is illustrated with short programs and accompanied by a set of tasks, each of which is a software project. Participants who complete all tasks successfully receive certificates (both participation and certificates are, of course, free). The course lasts 7 weeks and starts this Monday, November 4th. A detailed outline, as well as an introductory video, are available on the course page: https://www.coursera.org/course/reactive.

For those who are interested or in doubt, I offer a condensed summary of the basic concepts of the Reactive Manifesto. Its authors note significant changes in application requirements in recent years. Today, applications are deployed in every environment from mobile devices to cloud clusters with thousands of multi-core processors, and these environments place new demands on software and technologies. In previous-generation architectures, the emphasis was on managed servers and containers, and scaling was achieved through additional expensive hardware, proprietary solutions, and parallelism via multi-threading. A new architecture is now evolving, in which four important features can be identified that are increasingly prevalent in both consumer and enterprise environments. Systems with such an architecture are event-driven, scalable, fault-tolerant, and fast to respond, i.e. responsive. This provides a seamless user experience with a real-time feel, backed by a self-healing, scalable application stack ready for deployment in multi-core and cloud environments. Each of the four characteristics of a reactive architecture applies to the entire technology stack, which distinguishes them from the tiers of a layered architecture. Let's look at them in a little more detail.


Event-driven applications assume asynchronous communication between components and so encourage a loosely coupled design: the sender and the recipient of a message need no information about each other or about how the message is delivered, which lets them concentrate on the content of the communication. Besides the fact that loosely coupled components significantly improve the maintainability, extensibility, and evolution of the system, the asynchronous, non-blocking nature of their interaction also frees up a significant share of resources, reduces latency, and ensures greater throughput compared to traditional applications. It is the event-driven nature that makes the remaining features of reactive architecture possible.
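This loose coupling can be sketched in a few lines of Haskell (the language used throughout this article) with a channel between two components that know nothing about each other; the Msg type and the component names here are invented purely for illustration.

```haskell
import Control.Concurrent
import Control.Concurrent.Chan

-- The sender only writes messages to a channel; the receiver only
-- reads from it. Neither knows how, or by whom, the other end is run.
data Msg = Order String | Shutdown

producer :: Chan Msg -> IO ()
producer ch = do
    mapM_ (writeChan ch . Order) ["book", "pen"]
    writeChan ch Shutdown

-- The consumer accumulates orders and hands back the result
-- through an MVar when it is told to shut down.
consumer :: Chan Msg -> MVar [String] -> IO ()
consumer ch done = go []
  where
    go acc = do
        m <- readChan ch
        case m of
            Order x  -> go (acc ++ [x])
            Shutdown -> putMVar done acc

main :: IO ()
main = do
    ch   <- newChan
    done <- newEmptyMVar
    _ <- forkIO (consumer ch done)
    producer ch
    orders <- takeMVar done
    print orders    -- prints ["book","pen"]
```

The producer never blocks waiting for the consumer to process a message, which is the non-blocking interaction the manifesto describes.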

Scalability, in the context of reactive programming, is the system's response to changes in load, i.e. elasticity, achieved by the ability to add or release compute nodes as needed. With loose coupling, asynchronous messaging, and location transparency, the application's deployment method and topology become a deployment-time decision, a matter of configuration and load-responsive algorithms. The computer network thus becomes part of an application that is explicitly distributed from the start.

Fault tolerance in a reactive architecture also becomes part of the design, and this makes it significantly different from traditional approaches that ensure continuous system availability through redundant servers that take over in the event of failure. The resilience of such a system is achieved by its ability to respond correctly to failures of individual components, to isolate these failures by storing their context in the form of the messages that caused them, and to pass those messages to another component that can decide how to handle the error. This approach keeps the business logic of the application pure, separating from it the failure-handling logic, which is formulated in an explicit declarative form so that failures are registered, isolated, and handled by the system itself. To build such self-healing systems, components are ordered hierarchically, and a problem is escalated to the level that can solve it.
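The escalation idea can be sketched with plain Haskell values (the names and the retry policy are invented for illustration; real systems such as Akka do this with actor hierarchies): the failure is reified as data, together with the input that caused it, and passed to a supervisor that decides how to handle it.

```haskell
-- The input that a worker processes; kept so a failure can be
-- escalated together with its context.
data Task = Task { taskInput :: Int }

-- The worker's business logic stays pure: it either returns a result
-- or reifies the failure as a value, without handling it itself.
worker :: Task -> Either String Int
worker (Task n)
    | n < 0     = Left ("cannot process " ++ show n)  -- failure + context
    | otherwise = Right (n * 2)

-- The supervisor isolates the failure and applies a recovery policy:
-- here, a toy one that retries with a corrected input.
supervise :: Task -> Int
supervise t = case worker t of
    Right r -> r
    Left _  -> case worker (Task (abs (taskInput t))) of
        Right r -> r
        Left e  -> error e   -- escalate further up the hierarchy

main :: IO ()
main = do
    print (supervise (Task 21))     -- prints 42
    print (supervise (Task (-5)))   -- recovered by the supervisor: prints 10
```

Note how the recovery policy lives entirely in `supervise`; `worker` contains only business logic, which is the separation the paragraph above describes.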

And finally, responsiveness is the ability of the system to react to user input regardless of load and failures. Such applications involve the user in interaction and create a feeling of close connection with the system and of being well enough equipped for the task at hand. Responsiveness is relevant not only in real-time systems; it is necessary for a wide range of applications. Moreover, a system that is unable to respond quickly at the moment of failure cannot be considered fault-tolerant. Responsiveness is achieved by using observable models, event streams, and stateful clients. Observable models generate events when their state changes and enable real-time interaction between users and systems, while event streams provide the abstraction on which this interaction is built, through non-blocking asynchronous transformations and communications.
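An observable model can be sketched in a few lines of Haskell (a toy version built on IORef; the names are made up for illustration): every state change generates an event that is pushed to all subscribed listeners, much like the listen calls in the Sodium examples earlier.

```haskell
import Data.IORef

-- A toy observable model: a piece of state plus a list of listeners
-- that are notified whenever the state changes.
data Observable a = Observable
    { stateRef     :: IORef a
    , listenersRef :: IORef [a -> IO ()]
    }

newObservable :: a -> IO (Observable a)
newObservable x = Observable <$> newIORef x <*> newIORef []

subscribe :: Observable a -> (a -> IO ()) -> IO ()
subscribe o f = modifyIORef (listenersRef o) (++ [f])

-- Every state change generates an event: all listeners are called
-- with the new value.
set :: Observable a -> a -> IO ()
set o x = do
    writeIORef (stateRef o) x
    ls <- readIORef (listenersRef o)
    mapM_ ($ x) ls

main :: IO ()
main = do
    o <- newObservable (0 :: Int)
    subscribe o (\v -> putStrLn ("model changed to " ++ show v))
    set o 1
    set o 2
```

A real reactive platform adds asynchrony, backpressure, and unsubscription on top of this push-based core, but the model-to-listener flow is the same.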

Thus, reactive applications represent a balanced approach to solving a wide range of problems in modern software development. Built on an event-driven foundation, they provide the tools needed to guarantee scalability and fault tolerance and support a rich, responsive user experience. The authors expect that an increasing number of systems will adhere to the principles of the reactive manifesto.

In addition, I provide the course plan without translation. Just in case you've read this far and are still interested.

Week 1: Review of Principles of Functional Programming: substitution model, for-expressions and how they relate to monads. Introduces a new implementation of for-expressions: random value generators. Shows how this can be used in randomized testing and gives an overview of ScalaCheck, a tool which implements this idea.

Week 2: Functional programming and mutable state. What makes an object mutable? How this impacts the substitution model. Extended example: Digital circuit simulation

Week 3: Futures. Introduces futures as another monad, with for-expressions as concrete syntax. Shows how futures can be composed to avoid thread blocking. Discusses cross-thread error handling.

Week 4: Reactive stream processing. Generalizing futures to reactive computations over streams. Stream operators.

Week 5: Actors. Introduces the Actor Model, actors as encapsulated units of consistency, asynchronous message passing, discusses different message delivery semantics (at most once, at least once, exactly once) and eventual consistency.

Week 6: Supervision. Introduces reification of failure, hierarchical failure handling, the Error Kernel pattern, lifecycle monitoring, discusses transient and persistent state.

Week 7: Conversation Patterns. Discusses the management of conversational state between actors and patterns for flow control, routing of messages to pools of actors for resilience or load balancing, acknowledgment of reception to achieve reliable delivery.

