Functional Programming by Example - Side Effects

Long time no see! I want to continue my exploratory series on functional programming with side effects.
After doing some research I came across this excellent talk by Kris Jenkins about functional programming.

In my opinion he does an outstanding job of explaining the meaning of side effects and functional programming in general.
So instead of writing a long blog post, I leave you with this talk. Money quotes:

Hidden inputs and outputs are called ‘side effects’

Functional Programming is eliminating & controlling side effects

Kris Jenkins
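
To make those quotes a bit more concrete, here is a minimal sketch of my own (not from the talk): the first function has a hidden input - the current time - while the second makes that input explicit.

# Hidden input: the current time sneaks in, so the result can
# change between calls with the same argument - a side effect.
defmodule Greeting.Impure do
  def hello(name) do
    if DateTime.utc_now().hour < 12 do
      "Good morning, #{name}!"
    else
      "Hello, #{name}!"
    end
  end
end

# Explicit input: the hour is passed in, so the same arguments
# always give the same result - no hidden dependencies.
defmodule Greeting.Pure do
  def hello(name, hour) do
    if hour < 12 do
      "Good morning, #{name}!"
    else
      "Hello, #{name}!"
    end
  end
end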

Let me know if things in this talk need clarification!

Functional Programming by Example - Higher Order Functions

In my previous post I tried to explain tail call optimization.
If you read it to the end, you saw the map function that takes a collection and a function as its arguments.
That was already a higher order function. The definition is straightforward:

A higher order function takes a function as an argument or returns a function as its result, or both.

So let's say you work at a big company and have to calculate the bonuses for the employees. Of course, not everyone gets the same bonus, but you're too lazy to pass the specific multiplier at every calculation. So you create separate functions.
In this example the C-level members get a bonus of 1.5 times their annual salary. While the salaries may differ, the multiplier stays the same.

defmodule Salary do
  def bonus_calculation(percentage) do
    fn(salary) -> salary * percentage end
  end
end

# c_level_bonuses = Salary.bonus_calculation(1.5)
# c_level_bonuses.(100_000)
# => 150000.0

You may be wondering why our c_level_bonuses function still knows about the percentage, since we haven't explicitly passed that value.
That's because the function retains knowledge of the environment it was created in - it's a closure.

If you want to see an example for a function that takes another function as an argument, head to my previous post and take a look at the map function.

And in case you feel adventurous, try to find a use case to write a function that takes and returns a function =)
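
If you want a starting point, here is a minimal sketch (the module and function names are mine, purely for illustration): a function that takes a function and returns a new one that logs every call before delegating.

defmodule Wrap do
  # Takes a one-argument function and returns a new function
  # that prints its argument before calling the original.
  def with_logging(fun) when is_function(fun, 1) do
    fn(arg) ->
      IO.puts("called with #{inspect(arg)}")
      fun.(arg)
    end
  end
end

# logged_bonuses = Wrap.with_logging(Salary.bonus_calculation(1.5))
# logged_bonuses.(100_000)
# prints "called with 100000" and returns 150000.0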

Functional Programming by Example - Tail Call Optimization

Since Elixir came to my attention, I have really enjoyed functional programming.
This series of posts aims to explain some of the terms used in functional programming contexts using straightforward and (hopefully) interesting examples.
The language of choice is, of course, Elixir (could you have guessed? =)).
If you are not familiar with Elixir, I hope you can still follow along; otherwise you might want to take a look at Elixir's excellent guides.

Recursion

When talking about tail call optimization it might be helpful to have a look at what happens when you call a function recursively.
Let's say we have the following function.

defmodule Recur do
  def multiply_forever(a) do
    b = 5
    b * multiply_forever(a)
  end
end

If you feel adventurous, run this function and see what happens to your RAM usage (spoiler: it will go up fast).

And that's exactly what you'd expect to happen - the function calls itself forever, and every call uses some stack space to save the values that a and b hold on that run.

Tail Call Optimization

Now let's use the same example with a small modification.

defmodule Recur do
  def multiply_forever(a) do
    b = 5
    multiply_forever(a * b)
  end
end

Your RAM usage will not grow indefinitely.
Why?
Because the last thing our function does is call itself.

A function is subject to tail call optimization if the last thing it does is call itself - nothing but itself.
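
To see the pattern with a function that actually terminates, here is a minimal sketch of my own (not from the example above): a tail-recursive countdown where the recursive call is the very last expression.

defmodule Countdown do
  # Base case: nothing left to count down.
  def run(0), do: :done

  # The recursive call is the last thing that happens,
  # so the runtime can reuse the current stack frame.
  def run(n) when n > 0 do
    IO.puts(n)
    run(n - 1)
  end
end

# Countdown.run(3)
# prints 3, 2, 1 and returns :done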

How does it work?

In the first case the compiler may not know whether it needs to keep the values of a and b, so they have to be saved before every new function call.
This might be written down like this:

call multiply_forever(a)
save a,b
call multiply_forever(a)
save a,b
call multiply_forever(a)
save a,b
call multiply_forever(a)
save a,b
call multiply_forever(a)
...

By contrast, in the second example the compiler knows that it only has to care about the new argument to multiply_forever.
So it just updates the argument and JUMPS back to where multiply_forever was first called.
That's the difference - JUMP vs. CALL

multiply_forever(a)
jump back to multiply_forever, update what we know about a
jump back to multiply_forever, update what we know about a
jump back to multiply_forever, update what we know about a
jump back to multiply_forever, update what we know about a
...

That's basically what you have to know about tail call optimization (imho).
The next section is just there for the sake of a more interesting example; feel free to skip it. I hope this article was useful to you.

A “real world example” - writing your own map function

Let's say you want to write your own Enum module; it will need a map function. So we are going to write it.

First, an approach that is not suited to tail call optimization:

defmodule MyEnum do
  def map([], fun) when is_function(fun), do: []
  def map([head | tail], fun) do
    [fun.(head) | map(tail, fun)]
  end
end

Now the “optimized” version (disclaimer: there are better ways to implement this, for sure):

defmodule MyEnum do
  def map(collection, fun) when is_function(fun) and is_list(collection) do
    _map(collection, [], fun)
  end

  defp _map([], acc, _fun), do: Enum.reverse(acc) # yeah, it's cheating to use Enum here
  defp _map([head | tail], acc, fun) do
    _map(tail, [fun.(head) | acc], fun)
  end
end
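
A quick usage sketch to check that both versions behave the same (the doubling function is just an arbitrary example):

# MyEnum.map([1, 2, 3], fn(x) -> x * 2 end)
# => [2, 4, 6]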

As you can see, the tail call version takes more code, is not as straightforward as the first approach, and we even have to reverse our result.
This is just to show that even when tail call optimization is necessary, it sometimes takes a little bit more work.

Merry Christmas