New blog domain: kula.blog

It's still a work in progress, but new posts will be published on https://kula.blog/. Thanks to the 11ty base blog, RSS works from the start, so it should be easy to add to your reader if you still use one :) If not, I hope you'll sign up for the newsletter, where I'll be publishing new posts and interesting articles from time to time. Please use kula.blog from now on.

Book Review: Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths


I liked the idea of the book from the first time I saw the title and, fortunately, it lives up to my expectations. It's not perfect, but it rekindled my curiosity about math and algorithms. Applying them to everyday problems makes them much more interesting.

The only thing I'm missing in the book is the application of algorithms in business settings. For example, when I mentioned this book to Mateusz K, he sent me a blog post by Pieter Hintjens: Amdahl to Zipf: Ten Laws of the Physics of People. Amdahl's Law, for instance, mathematically backs the argument from The Mythical Man-Month that you can't just add new people to a project to deliver it faster, because of the cost of synchronization.

I want this post to be a reminder for myself of the most interesting ideas in the book, so if you want to savor it yourself, just remember that it's worth buying.

Rating: 9/10


Contents:

1. Optimal Stopping

Probably the most interesting idea from this chapter was the 37% rule, which is optimal in the "secretary problem". Given some amount of time for interviews, you should spend the first 37% of it only on exploring. After that, you should pick the first option that is better than anything from the exploration phase. Real life is more complicated than a clean 37%, but the rule to explore only in the beginning was still interesting.
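A quick way to convince yourself of the 37% figure is to simulate it. This is my own sketch, not from the book - the candidate count and number of trials are arbitrary:

```python
import random

def simulate_secretary(n=100, trials=20_000, explore_frac=0.37):
    """Estimate how often the 37% rule picks the single best candidate."""
    wins = 0
    cutoff = int(n * explore_frac)
    for _ in range(trials):
        candidates = list(range(n))           # higher number = better candidate
        random.shuffle(candidates)
        best_seen = max(candidates[:cutoff])  # look-only phase
        # leap phase: take the first candidate better than anything seen so far,
        # or we're stuck with the last one if nobody beats the exploration phase
        pick = next((c for c in candidates[cutoff:] if c > best_seen),
                    candidates[-1])
        wins += (pick == n - 1)
    return wins / trials

print(simulate_secretary())  # hovers around 0.37
```

The neat part is that the success rate converges to roughly the same 37%: the optimal cutoff and the probability of winning are both 1/e.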

2. Explore/Exploit

The most important question when deciding between staying with something that's working or looking for something new is the timespan. How long will you use it? How long are you going to live? For how long will you be able to exploit this thing?
If you're young or early in your career, you should explore a lot (like small children do) - exploration has a cost, but some of the new things may be way better than anything you've seen so far.
When you're old, stick to the things that work, things you already like - it's not worth wasting time and effort learning things you won't have time to use.

A simple strategy to use: minimize regret.
The sad thing is that we're going to accumulate regrets even when we choose the optimal strategy. Minimizing regret should get us fewer new regrets each year compared to the previous one - and that's the best you can get.
The optimal strategy is to pick new opportunities by their maximum possible gains - their "upper confidence bounds". If you're going to gamble, it's best to gamble on the things you could get the most out of. It was really hard for me to wrap my head around, as it means that when in doubt, you should be an optimist.
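Here is a minimal sketch of the idea, using the standard UCB1 formula on a made-up two-armed bandit (the payoff rates are my assumptions, purely for illustration):

```python
import math
import random

def ucb1(pull, n_arms, rounds):
    """UCB1: always play the arm with the highest upper confidence bound,
    i.e. be optimistic about arms you know little about."""
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for t in range(1, rounds + 1):
        if t <= n_arms:                 # play every arm once to start
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        totals[arm] += pull(arm)
        counts[arm] += 1
    return counts

# two hypothetical slot machines: arm 1 pays off more often
payoffs = [0.3, 0.6]
pull = lambda arm: 1.0 if random.random() < payoffs[arm] else 0.0
print(ucb1(pull, n_arms=2, rounds=2000))  # most pulls should go to arm 1
```

The bonus term shrinks as an arm gets pulled more, so uncertainty itself is what makes an option attractive - the "optimism" the book talks about.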

Another idea I first heard about here is "adaptive" trials. You might have heard about A/B tests, where you check two things at the same time, compare results, and pick the better option. It's simple, but what if one of the options kills people? When you test new drugs or medical procedures and see that one option is saving lives while another is taking them, is it still moral to continue with the rest of the trial?
In this situation, you can use another simple strategy called "Win-Stay, Lose-Shift". As long as something works, keep doing it, but switch when it stops working. In medical trials, that means: shift when someone dies.
Overall, people explore too much - at least in the lab - but that's likely because we live in a restless world. It constantly changes, so you can't be sure whether yesterday's worse option is still worse, or whether something has changed.
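"Win-Stay, Lose-Shift" is simple enough to sketch in a few lines. The success rates below are invented for illustration:

```python
import random

def win_stay_lose_shift(pull, n_arms, rounds):
    """Keep using the current option while it works; switch when it fails."""
    arm = 0
    plays = [0] * n_arms
    for _ in range(rounds):
        plays[arm] += 1
        if not pull(arm):                # lose -> shift to the next option
            arm = (arm + 1) % n_arms
    return plays

# hypothetical trial: treatment 1 succeeds more often than treatment 0
rates = [0.4, 0.8]
pull = lambda arm: random.random() < rates[arm]
print(win_stay_lose_shift(pull, n_arms=2, rounds=1000))
```

Even this naive rule ends up spending most of its time on the better treatment, because failures on the worse one keep pushing it away.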


3. Sorting

I've learned that computers were created to sort things. Search still depends on sorting (imagine the best search results being on the 100th page of Google).
It's inefficient to search an unsorted list or stack, but it's also a waste of time to sort things you're never going to search.
Rule of thumb: do not sort.
There are experiments showing that sorting emails into categories is a waste of time. It's better to search for them when needed.
In many sports (like football, or soccer for US readers), the second-best team is pretty much random. Only the best team really is the best. The easiest way to make sure the second and third places are real is for everyone to play everyone - unfortunately, that takes too much time, so few sports do it.
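To put numbers on "takes too much time": single elimination needs only n-1 games but only guarantees first place, while a full round-robin needs n(n-1)/2. A tiny sketch:

```python
def games_needed(n):
    """Compare tournament formats for n teams."""
    return {
        "single elimination": n - 1,       # only first place is reliable
        "round robin": n * (n - 1) // 2,   # everyone plays everyone
    }

print(games_needed(16))  # {'single elimination': 15, 'round robin': 120}
```

The gap grows quadratically, which is why only leagues with a whole season to spare can afford the round-robin format.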

The best strategy is to avoid direct comparisons like matches and "pecking order fights" for establishing a hierarchy, and to switch to races, where something like time can be used to judge who or what is better. Direct comparisons take a lot of time.

4. Caching

Least Recently Used - LRU - is the best caching strategy you can use. It's pretty close to clairvoyance, and you can apply it even to everyday things like what to keep on your desk, nightstand, or in the wardrobe. People aren't like computers, so by using it strategically you can nudge yourself to use things more often just by placing them near you.
Noguchi Filing System - it's LRU for documents or just physical stuff. You have a shelf with a row of documents. Whenever you add a new document or put one back, always put it on the left. New items and recently used items end up on the left-hand side of the shelf - the order takes care of itself.
It turns out our memory works the same way: the forgetting curve is basically LRU - memories you use often are kept closer, and everything else moves into oblivion. Most of the time it's a good thing, and it's worth remembering that when you can't recall something, it might just be far away: searching for it takes time. You may still remember it just fine; the sheer amount of memories makes it hard to find.
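LRU itself fits in a few lines of Python using the standard library's OrderedDict. The desk items below are just an everyday-life stand-in for cached data:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache: evict whatever was used least recently."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)        # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

desk = LRUCache(2)
desk.put("stapler", 1)
desk.put("notebook", 2)
desk.get("stapler")          # stapler is now the most recently used
desk.put("pens", 3)          # over capacity: evicts "notebook"
print(list(desk.items))      # ['stapler', 'pens']
```

This is the same logic as the Noguchi shelf: touching an item moves it to one end, and whatever drifts to the other end is the first to go.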

5. Scheduling

If you have 100 tasks with deadlines, you can decide to finish as many as possible or to limit how late they are going to be - different goals have different optimal strategies.
Personally, I found the idea of thrashing applied to everyday tasks the most intriguing. If you have too many tasks, just managing them takes a lot of your limited time. At some point, management itself takes up most of your time. It sounds a little crazy, but everyone feels it from time to time, like when you're overwhelmed before starting a really big project.

The most interesting strategies I've picked up are:

  1. Do the short tasks first to minimize the number of pending tasks.
  2. People have high context-switching costs - try to group work into 90-minute blocks.
  3. Group small/easy/short tasks together to avoid switching your focus too often.
  4. If a task takes 2 times longer, do it first only if it's 2x more important.
  5. Decide how responsive you want to be, e.g. check emails once a day, and stick to it to avoid context-switching costs.
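Rule 4 amounts to what schedulers call weighted shortest processing time: order tasks by importance per unit of time. A sketch with made-up tasks:

```python
def schedule(tasks):
    """Weighted shortest processing time: sort by importance per hour,
    highest first - a 2x longer task goes first only if it's 2x as important."""
    return sorted(tasks, key=lambda t: t["importance"] / t["hours"], reverse=True)

tasks = [
    {"name": "report",   "hours": 4, "importance": 4},   # ratio 1.0
    {"name": "email",    "hours": 1, "importance": 2},   # ratio 2.0
    {"name": "big deck", "hours": 8, "importance": 12},  # ratio 1.5
]
print([t["name"] for t in schedule(tasks)])  # ['email', 'big deck', 'report']
```

With equal importance everywhere, this collapses to rule 1: shortest task first.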


6. Bayes's Rule

To predict how long something will take (or exist, or live), first decide whether you know how long such things usually take.
If you know the average, e.g. the average life expectancy, then that's your answer. Past the average, e.g. if someone is already older than it, add a couple more years to their current age and that's your answer (Bayes's Rule with a prior belief).
If you don't know anything, it's best to assume you're in the middle of its existence. For example, how long is a building going to last? If it's already 30 years old, your best guess is another 30 years (Copernican Principle).
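The two prediction rules are simple enough to write down directly. This is just my sketch of them; the "+5 years" nudge in the second rule is my own illustrative number, not a figure from the book:

```python
def copernican(current_age):
    """No prior information: assume we're at the midpoint, so predict
    a total lifespan of twice the current age."""
    return 2 * current_age

def average_rule(current_age, average_lifespan):
    """Normal-ish prior (e.g. human lifespans): predict the average, or a
    bit more than the current age once it's already past the average.
    (The +5 is an arbitrary illustrative nudge.)"""
    return max(average_lifespan, current_age + 5)

print(copernican(30))        # 30-year-old building: predict 60 years total
print(average_rule(20, 76))  # a 20-year-old: predict the average, 76
print(average_rule(90, 76))  # a 90-year-old: predict a few more years, 95
```

The interesting part is how the prediction's shape depends on the prior: with no prior the estimate scales with age, while with a strong average it barely moves until the average is exceeded.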

In our lives, we overestimate events that we hear a lot about - if you watch a lot of news, you're going to overestimate the frequency of the events mentioned there, e.g. plane crashes.

The Marshmallow test - which I thought was really about delaying gratification - might be more about expectations. It's easy to assume children should wait for the second marshmallow, but the test assumes that the future is predictable and known. If the child thinks it's likely you're going to break the promise, then it doesn't make sense to wait.


7. Overfitting

I really like the subtitle of this chapter: "The Case Against Complexity". Complexity has a cost, and it might even be damaging. I'm pretty sure it should be taught to IT students instead of Design Patterns.
Given the data in the chart (taken from Wikipedia), notice which function (line) gets you closer to the underlying phenomenon. Around -4.5 there is a huge spike in the more complex (blue) function, caused by overfitting. The polynomial tries too hard to follow the data - it's no longer useful for describing whatever the data points were measuring.
We should penalize complexity. In real life, use Occam's razor - always pick the simplest explanation. Use simple rules even if they aren't perfect: they use fewer cognitive resources and make your choices easier.
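A toy illustration of why complexity deserves a penalty (the data points are made up): a one-parameter model that tolerates some training error generalizes, while a "model" that fits the training data perfectly fails on anything new:

```python
# toy data: the truth is y = 2x, but one training point is noisy
train = [(1, 2), (2, 4), (3, 7)]   # (3, 7) is noise
test  = [(4, 8), (5, 10)]

simple = lambda x: 2 * x                       # one-parameter model
memorizer = dict(train)                        # memorizes every training point
complex_model = lambda x: memorizer.get(x, 0)  # perfect on train, lost elsewhere

def error(model, data):
    return sum(abs(model(x) - y) for x, y in data)

print(error(simple, train), error(simple, test))                # 1 0
print(error(complex_model, train), error(complex_model, test))  # 0 18
```

The complex model "wins" on the training data and loses badly on anything it hasn't seen - the same story as the blue polynomial in the chart.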

When things change, it makes sense to keep some old genes or traditions even if they don't work that well at the moment. If there's a big change, falling back to the old ways means we won't die off that easily. Conservatism is a little bit like evolution - both are restraints against overfitting.

Use "Early Stopping" when you don't know much about the problem. Pick only the single most important metric, to avoid overfitting to barely important ones that can mess up the whole model.


8. Relaxation

Questions like "What would you do if you didn't have to work?" are used to solve intractable problems. It turns out some things are just hard, and by removing constraints you can get closer to the best answer.

Things to try:
1. Remove the constraint
2. Instead of yes/no, try assigning a number between 0 and 1, e.g. 0.4 and 0.6, and compare your options.
3. Ask yourself what's really the cost of breaking the rule. How much does a speeding ticket really cost?
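Point 2 is exactly what "continuous relaxation" means in optimization. For example, the 0/1 knapsack problem is hard, but if you relax the yes/no decision into a fraction between 0 and 1, a simple greedy pass solves it and gives you a bound on the true answer. A sketch with made-up items:

```python
def knapsack_relaxation(items, capacity):
    """Continuous relaxation of the 0/1 knapsack: instead of yes/no,
    allow taking a fraction of an item. The greedy answer is an upper
    bound on the best strict yes/no answer."""
    best = 0.0
    # take items by value density, splitting the one that doesn't fit
    for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
        take = min(weight, capacity)
        best += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return best

items = [(60, 10), (100, 20), (120, 30)]        # (value, weight)
print(knapsack_relaxation(items, capacity=50))  # 240.0; strict 0/1 best is 220
```

The relaxed answer isn't achievable with strict yes/no choices, but it tells you how close any real answer can possibly get - which is often all you need to decide.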


9. Randomness

Instead of understanding the problem sometimes it makes sense to just sample a lot of data.

It's the first time I've learned about the Bloom filter - a set with a false-positive rate of around 1-2% that saves the time and space of keeping all the data.
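A minimal Bloom filter sketch (the sizes and the URL are arbitrary choices of mine; real implementations tune the bit-array size and hash count to hit a target error rate):

```python
import hashlib

class BloomFilter:
    """A set that answers 'definitely not here' or 'probably here',
    trading a small false-positive rate for far less memory."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # derive several bit positions from independent-ish hashes
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

urls = BloomFilter()
urls.add("https://kula.blog/")
print(urls.might_contain("https://kula.blog/"))   # True
print(urls.might_contain("https://example.com"))  # almost certainly False
```

A "no" is always trustworthy; a "yes" is only probably right - which is fine for things like "have I crawled this URL before?" where an occasional wasted check is cheap.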

There was a lot in this chapter about shortest paths and avoiding local maxima with strategies like Simulated Annealing, but I'm not really sure how I could use them in practice outside of programming.

10. Networking

This chapter starts with the Two Generals problem, which shows how hard it is to coordinate over a medium that can fail. It was a good reminder for me that TCP uses a three-way handshake and acknowledgments to confirm that packets arrived.
Another reminder was learning about the early problems of networks caused by new users, and how exponential backoff fixed them: cut your speed in half whenever you notice lost packets. I finally understood why it's cut by half - if there are only two users in the network, the first one should leave half of it for the newly joining user.
Exponential backoff can be a problem when you want to serve a fast website, but it's still a useful algorithm. I was surprised that it's useful in fighting crime - give offenders increasingly long sentences (starting with one day) instead of a bunch of warnings followed by a really long sentence.
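Outside networking, exponential backoff is the standard way to retry any flaky operation. A sketch - `fetch_page` and the choice of `OSError` are placeholders for whatever call you're actually wrapping:

```python
import random
import time

def with_backoff(operation, max_tries=5, base_delay=1.0):
    """Retry a flaky operation, doubling the wait after every failure
    (plus jitter, so competing clients don't all retry in lockstep)."""
    for attempt in range(max_tries):
        try:
            return operation()
        except OSError:
            if attempt == max_tries - 1:
                raise                      # out of tries: give up loudly
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# hypothetical usage:
# with_backoff(fetch_page)  # waits ~1s, ~2s, ~4s, ~8s between failures
```

The doubling means a struggling service gets exponentially more breathing room, while a brief glitch costs you only a second or two.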

Exponential backoff can also help with the Peter Principle, which observes that people in a hierarchy tend to rise to their "level of incompetence". To fight it, organizations use rules like "up or out", but it's highly ineffective to let go of people who were highly effective on a lower rung of the organization. Exponential backoff suggests downgrading them instead of letting them go. It sounds much more humane to me.

I liked the description of bufferbloat. It's not a good thing that your router has so much memory - everything gets slower because it keeps packets for too long. Sometimes never is better than late, and big buffers break this rule.
In general, it's a case against our devices remembering everything for us - for example, emails while you're on vacation. You get overwhelmed by them when you're back. Buffers like those induce a "lack of idleness" and raise our stress levels.


11. Game Theory

The biggest gain for me from this chapter was learning about reverse game theory, also known as mechanism design. The easiest way to explain it is with the tragedy of the commons - if something belongs to everyone, the dominant strategy will ruin it. Paradoxically, the best thing for everyone is to introduce a penalty from the outside. It's about creating new rules to get the results that everyone wants.
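The mechanism-design move can be shown with a toy payoff table (the numbers are invented): defecting dominates until an outside penalty changes the payoffs.

```python
def best_response(payoffs):
    """Given my payoffs for each action (regardless of what the other
    player does), pick the dominant strategy."""
    return max(payoffs, key=payoffs.get)

# the commons: defecting always pays a bit more, so everyone defects
payoffs = {"cooperate": 3, "defect": 4}
print(best_response(payoffs))        # 'defect'

# mechanism design: an outside penalty for defecting flips the
# dominant strategy to what everyone actually wanted
penalty = 2
punished = {**payoffs, "defect": payoffs["defect"] - penalty}
print(best_response(punished))       # 'cooperate'
```

No one's preferences changed - only the rules did - yet the outcome everyone wanted is now also the outcome self-interest produces.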

Beware of information cascades - if you're looking at others to decide what to do, others are probably looking at you to decide what to do themselves.
Stating your contrasting opinion may save the whole group, just because it creates contrast.

All in all, it's a really good book. Hope that my notes will be helpful.

Big thanks to Mateusz K for reviewing this post.

