The Fox in the Machine

Philip Tetlock, the author of “Superforecasting” and a leading researcher on the psychology of human prediction, classified forecasters into two types – hedgehogs and foxes. The archetypal description is as follows:

  • Hedgehogs – people with a specific and highly detailed world view on a topic, with a single lens through which to explain everything. They are experts in that field, and their theory can be used to explain any phenomenon.
  • Foxes – people who maintain many models of the world, and a lot of data even if it contradicts their working model. They answer questions about the world by combining all of their data and models, weighted by probabilities.

We see that machine learning today can be thought of as a foxy way of obtaining solutions, as opposed to many examples in AI history that tried a more hedgehog-like approach. This makes a lot of sense, as ML is mainly successful at, and is measured by, its ability to make statistical predictions, where (as is true in humans) foxes do a much better job.
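
As a toy illustration of this foxy style of prediction (my own sketch, not anything from Tetlock's work), combining several imperfect models, weighted by how much we trust each one, looks roughly like this in Python:

    import numpy as np

    def fox_predict(models, weights, x):
        """Blend several models' probability estimates, weighted by trust in each."""
        preds = np.array([m(x) for m in models])
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                    # normalize the trust weights
        return float(np.dot(w, preds))

    # Three crude "world models" of the chance of rain given humidity in [0, 1].
    models = [
        lambda h: 0.9 * h,                 # rain tracks humidity closely
        lambda h: 0.5,                     # base rate, ignores the evidence
        lambda h: min(1.0, h ** 2 + 0.1),  # rain mostly when very humid
    ]
    weights = [0.5, 0.2, 0.3]              # how much we trust each model so far

    print(fox_predict(models, weights, 0.8))   # a blended, hedged forecast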

However, we do see a benefit in maintaining expert hedgehogs in our society. This may be because humans think verbally and consequentially, and thus a simple (confabulated) explanation for why some phenomenon should occur can help humans solve more problems, as it adds to their arsenal of ideas.

So it may be the case that, in order to achieve more human-like intelligence in machines, we should build confabulatory systems with a hedgehog mind.


Marketing Failure

The amount of money spent worldwide on advertisements is astronomical, reaching about 550 billion dollars annually, and it is expected to keep growing rapidly. In 2016, each of the top 20 companies with the biggest annual advertising budgets spent between 2.7 and 8.3 billion dollars.

Is this a result of a race-to-the-bottom-type market failure? If so, can we solve it?


Solving the Cyber-Security Bubble

There seems to be a big bubble in cyber security: many awful products on the market, and many bad startups easily raising funds. I believe this problem needs to be addressed, either by governmental regulation or by independent companies. In this brief post I lay out the problem and what has been done so far to mitigate it.

Effective Global Scientific Research

Scientific advancements are one of the greatest drivers of improvements in the quality of life of everyone on the globe. In this post I'll present an overview of an idea aimed at improving the allocation of resources for scientific research so that it corresponds better to what is important for society.

We'll start with a bit of background, and then get to the actual idea. The idea is simply to construct a graph of the important objectives in science and of how each problem relates to the others.
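
As a hedged sketch of how such a graph could be represented (the objectives and weights below are made-up placeholders, purely for illustration), a plain adjacency map is enough to start with:

    # Nodes are research objectives; an edge "A -> B" with a weight says roughly
    # how much progress on A is expected to contribute to B.
    # The objectives and weights are illustrative placeholders only.
    objectives_graph = {
        "cheap clean energy":      {"reduce global poverty": 0.6, "slow climate change": 0.8},
        "better disease models":   {"extend healthy lifespan": 0.7},
        "slow climate change":     {"reduce global poverty": 0.4},
        "extend healthy lifespan": {},
        "reduce global poverty":   {},
    }

    def downstream_impact(graph, objective, visited=None):
        """Very rough recursive score of how much an objective feeds into the rest."""
        visited = visited or set()
        if objective in visited:
            return 0.0
        visited.add(objective)
        return sum(w * (1.0 + downstream_impact(graph, child, visited))
                   for child, w in graph[objective].items())

    print(downstream_impact(objectives_graph, "cheap clean energy"))

A real version would of course need a principled way to elicit the objectives and the weights; the point here is only that the data structure itself is simple.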


Security of Google’s Federated Learning

In this post I'll collect some initial thoughts regarding the security of Google's Federated Learning, a method for training a model on a server where the clients do not send their data; instead, each client sends a model update trained on their own device with their own data. The main points are:

  1. Knowing a client's update can give information about their training data.
  2. Knowing the average of some updates is likely to give information about each user's update.
  3. An attacker who can send many updates can get information on a specific client.

The first two points are acknowledged briefly in the article.
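
For context, here is a minimal sketch of the server-side aggregation step (my simplification of federated averaging, not Google's actual implementation). It makes the attack surface easier to see: the server never touches raw data, but it does see each client's update and their weighted average, which is exactly where points 1 and 2 apply.

    import numpy as np

    def server_aggregate(global_params, client_updates, client_weights):
        """Federated-averaging style step: combine client updates, weighted by data size.

        Each update is the change to the model parameters computed locally on a
        client's own data; only these updates (never the data) reach the server.
        """
        total = float(sum(client_weights))
        avg_update = sum((w / total) * u for u, w in zip(client_updates, client_weights))
        return global_params + avg_update

    # Toy example: a 3-parameter model and two clients.
    global_params = np.zeros(3)
    updates = [np.array([0.2, -0.1, 0.0]), np.array([0.4, 0.1, -0.2])]
    weights = [100, 300]                     # e.g. number of local training examples
    print(server_aggregate(global_params, updates, weights))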


A possible improvement for black-box adversarial example attack

This paper presents a cunning adversarial example attack on an unknown DNN model, given a small number of black-box calls to the model (which happen before the input to be deformed is given). The algorithm is basically to build a different model, an adversarial DNN, with some arbitrary choice of architecture and hyperparameters, and to learn its parameters on a dataset obtained by oracle calls to the target model. The choice of inputs to the oracle is made iteratively, by taking the inputs from the previous iteration and choosing nearby points that are closest to the decision boundary of the last learned adversarial DNN.
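
A hedged sketch of that loop, as I read the paper's description (with a scikit-learn MLP standing in for the adversarial DNN, a caller-supplied query_oracle function standing in for the black-box model, and binary classification assumed to keep it short):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_substitute(query_oracle, seed_inputs, n_rounds=5, step=0.1, seed=0):
        """Iteratively train a substitute model on oracle-labelled points.

        `query_oracle(X)` stands for the black-box target model and must return
        one label per row of X; every call to it costs black-box queries.
        """
        rng = np.random.default_rng(seed)
        X = np.asarray(seed_inputs, dtype=float)
        y = query_oracle(X)                               # spend queries on the seeds
        substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        for _ in range(n_rounds):
            substitute.fit(X, y)
            # Several random perturbations of each known point ...
            candidates = np.vstack([X + step * rng.standard_normal(X.shape)
                                    for _ in range(4)])
            # ... keeping the ones the substitute is least sure about, i.e. the
            # ones closest to its current decision boundary.
            margins = np.abs(substitute.predict_proba(candidates)[:, 1] - 0.5)
            new_X = candidates[np.argsort(margins)[: len(X)]]
            X = np.vstack([X, new_X])
            y = np.concatenate([y, query_oracle(new_X)])  # more black-box queries
        substitute.fit(X, y)
        return substitute

The attacker would then craft adversarial examples against the returned substitute and rely on their transferability to fool the real model.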

I think it may be possible to improve the choice of the new inputs. The best choices are inputs that are expected to have a big impact on the decision boundary, weighted by the probability distribution of possible inputs.

Several thoughts regarding “big impact on the decision boundary”:

  1. The work is entirely done during preprocessing, as the (adversarial) model is known.
  2. Points near (at) the decision boundary are very good.
  3. A point on the decision boundary can be approximated in log-time (see the sketch after this list).
  4. It may be possible to find good measures of how much a new input has changed the decision boundary.
    1. For example, maybe a form of regularization that encourages changing as many parameters as possible, by as much as possible, is good enough. (I guess not, but it is very simple to test.)
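
Regarding point 3, the log-time approximation is just a binary search along the segment between two points the model classifies differently; a minimal sketch, written for a generic predict function:

    import numpy as np

    def boundary_point(predict, x_a, x_b, tol=1e-6):
        """Binary search for a point near the decision boundary of `predict`.

        `predict` maps one input to a class label, and `x_a`, `x_b` must be
        classified differently.  Each halving doubles the precision, so the
        number of model evaluations is logarithmic in 1 / tol.
        """
        label_a = predict(x_a)
        assert predict(x_b) != label_a, "endpoints must lie on opposite sides"
        lo, hi = np.asarray(x_a, dtype=float), np.asarray(x_b, dtype=float)
        while np.linalg.norm(hi - lo) > tol:
            mid = (lo + hi) / 2
            if predict(mid) == label_a:
                lo = mid                     # still on x_a's side, move toward x_b
            else:
                hi = mid                     # crossed over, shrink back toward x_a
        return (lo + hi) / 2                 # approximately on the boundary

    # Toy 1-D usage: a threshold classifier with its boundary at x = 0.3.
    predict = lambda x: int(x[0] > 0.3)
    print(boundary_point(predict, np.array([0.0]), np.array([1.0])))   # ~[0.3]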

Several thoughts regarding the probability distribution of possible inputs:

  1. It seems like a very important concept to understand deeply.
  2. It is probably heavily researched.
  3. If there is an available training set, it may be possible to approximate the manifold of the probable inputs (a crude sketch of this follows the list).
    1. Maybe GANs can help with this problem.
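
On point 3, before reaching for GANs, even a crude density estimate fit on the available training set gives a way to down-weight candidate queries that fall far from the data manifold. A minimal sketch, using scikit-learn's KernelDensity as a stand-in for a real generative model:

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def input_plausibility_scorer(train_inputs, bandwidth=0.5):
        """Fit a kernel density estimate on known inputs and return a scoring function.

        The returned function assigns each candidate query a log-density under the
        estimated input distribution, which can be used to weight "impact on the
        decision boundary" by how plausible the input actually is.
        """
        kde = KernelDensity(bandwidth=bandwidth).fit(np.asarray(train_inputs, dtype=float))
        return lambda candidates: kde.score_samples(np.asarray(candidates, dtype=float))

    # Toy usage: 2-D inputs clustered around the origin.
    rng = np.random.default_rng(0)
    score = input_plausibility_scorer(rng.standard_normal((200, 2)))
    print(score([[0.0, 0.0], [5.0, 5.0]]))   # the far-away candidate scores much lower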