Jaron Lanier talks about The Myth of AI

Here are my notes taken while listening to the interview with Edge.org.


  • AI has been heavily mythologized.
  • This myth causes bad decisions and a misallocation of resources, impairing the development of AI.
  • The myth shifts attention from the data to the algorithm learning from it, which leads to a flawed view of data ownership rights.
  • Users of AI-based recommender systems over-trust the algorithm, which harms further improvement and confines the users to a small set of options.

The interview

Corporate Personhood

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn’t been asked to decide, and declare that corporations are people. That’s a cover for making it easier for big money to have an influence in politics.

Jaron is referencing a controversial Supreme Court decision known as “Citizens United” (named after the organization of the same name, whose stated agenda is “Citizens United seeks to reassert the traditional American values of limited government, freedom of enterprise, strong families, and national sovereignty and security”). A surprisingly unbiased introduction can be found at reclaimdemocracy.org/who-are-citizens-united/, where the organization’s views on the subject are written out explicitly.

Algorithmic Personhood – an economic perspective

He goes on to say:

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person?

Since many corporations are essentially built around an algorithm, Jaron claims that we are actually granting “person” status to algorithms. He raises an interesting point, namely that we are in fact giving political power to algorithms, but does not expand much on it. This point of view can be spelled out as follows:

  1. Let C be a company whose main revenue comes from solving a problem P using an algorithm A.
  2. The majority of C’s actions are directed toward advancing the effectiveness of A. By effectiveness I simply mean using A to make more profit: say, by collecting more data and advancing algorithmic research, by expanding the target market through advertisements or a better UI, or even by making P more difficult to solve without A.
  3. Many political actions can affect the effectiveness of the algorithm.
  4. Hence, more political power given to the corporation C can be thought of as political power serving the interests and goals of the algorithm A.

Is this good or bad?

Most of the arguments regarding whether we should give more or less power to wealthy individuals apply here as well. The main difference in our situation lies in the moral arguments about the rights and obligations of the individual, where it is not clear what (if any) moral rights and obligations should be assigned to an algorithm.

Intuitively, I do not see a good reason to assign moral value to algorithms unless doing so serves practical goals. For example, I do not think that it is intrinsically bad for a learning algorithm to learn racist biases (although it may certainly have bad consequences). As another example, should we grant free speech rights to an algorithm? Is it important that the interests of Google’s search engine not be silenced?

Algorithmic Personhood – a sociological perspective

what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people.


There has been a domineering subculture—[that there is] a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us.

I think the most interesting point he makes is that the humanization of algorithms biases our point of view on topics related to AI. This applies to the fears (or hopes) about the consequences of advanced AI.

Human Agency

My feeling about that is it’s a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn’t optimal is the way it talks about an end of human agency.

But it’s a call for increased human agency, so in that sense maybe it’s functional.

I would have liked to hear more. Nick Bostrom did great work arguing why human agency is extremely difficult to maintain once a superintelligent algorithm exists.

Maybe what Jaron meant is that the question should not be “How do we maintain human agency?”, and that we should instead focus on what benefits humanity the most.

Effects of AI mythology

  • Over-promising of AI leads to AI winters, since the general public grows dissatisfied with what seems like too-slow progress. I do not fully agree. The situation today is vastly different than it was 20 years ago: the economic incentive to build better systems is enormous, so even if the bubble bursts I believe there will still be plenty of funds available for advancing basic research as well as applications. One area which may be damaged is theoretical work on AGI.
  • People’s choices are greatly affected by algorithms relying on big data. The mythology adds to the trust placed in systems which rely on bad science and whose results are not good enough to deserve such trust. Moreover, the choices are limited to whatever the algorithm can show you.
  • The hype over recommendation systems also makes them very hard to improve, as there is less and less data on people’s choices that is independent of the recommendation system itself. I am not sure how this is relevant to the AI mythology; maybe the incentive for producing better recommendation systems is too small.
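
This feedback loop can be illustrated with a toy simulation (a sketch with made-up numbers, not a model of any real system): users can only click on what the recommender shows them, so the logged data ends up reflecting the recommender rather than their true preferences.

```python
import random

random.seed(0)  # deterministic toy run

# 100 items; users click uniformly among whatever is shown to them.
ITEMS = list(range(100))

def recommend(click_counts, k=5):
    # Naive "most clicked" recommender.
    return sorted(ITEMS, key=lambda i: -click_counts[i])[:k]

click_counts = {i: 0 for i in ITEMS}
click_counts[0] = 1  # a single accidental early click

for _ in range(1000):
    shown = recommend(click_counts)
    clicked = random.choice(shown)  # users only see what is shown
    click_counts[clicked] += 1

ever_clicked = sum(1 for c in click_counts.values() if c > 0)
print(ever_clicked)  # at most 5: only items ever shown gather data
```

After a thousand interactions, the log contains data about only a handful of items, so a model trained on it cannot learn anything about the other 95.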

Automatic language translation

Given as an example of the dangers of AI hype.

The point is that the AI hype hides the fact that, under the hood, there are many human translators whose translations feed the big data behind the translation algorithm, yet they get no credit for this work. Their contribution is not even acknowledged.

Data Ownership

The data used in big-data-based algorithms does not translate into money for the people who contributed the data.

It is not enough that the data benefits society as a whole if the individual contributor lacks financial stability and has no incentive to contribute more data.

This topic is expanded in his writings.

Assassination drone

A thought experiment designed to show that it is the capabilities that are dangerous, not the AI controlling them.

I disagree with the analogy, as it understates the importance of intelligence in obtaining power. For example, a smart AI could figure out how to do a great deal of harm using existing technologies, in ways that require more coordination than single individuals could manage on their own.

Is superintelligence possible?

His main argument is:

What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being.

This is of course entirely debatable.

Jaron also speaks about the difficulty of extrapolating advances in science and technology. He claims that in many areas our own ignorance is not fully acknowledged, which leads to bad allocation of resources and bad science. While I fully agree, I am not sure about the argument’s implications for AI-safety research.

Singularity as a religion

The comparison is interesting. I think he claims that the thought that an emerging superintelligence will disempower us completely, making us slaves to the machine (whether for good or ill), turns today’s attitude toward AIs into a kind of “deification”, and that through it we would actually serve the interests of the corporations managing those algorithms.

My conclusions

The main point I am taking from this is that it may do harm to generalize and speak in a grandiose manner about AI. I still think that AI-safety research is very important, but I am no longer sure that we should try to publicize those ideas more.

There is a trade-off between preparing better for automation or superintelligence and advancing basic AI research, methodology, and applications.

I think it all depends very much on the public perception of AI. Jaron seems to claim that the tendency today is toward over-hype, but this may only be true for the “elites” and not the general population. This affects some of the arguments (e.g., the problem with recommendation systems).

I think the bottom line is that it is in our best interest that people be aware of the truth. We do not fully understand it (or agree on one hypothesis), but we do have knowledge that is not understood by the general public.

Edge.org “Reality Club Discussion”

Rodney Brooks

He made a very interesting argument regarding the difficulty of creating AGI. It is all presented in a post at rethinkrobotics.org.

He has convinced me to lower my expectations of a superintelligence being created any time soon. This is mainly due to his argument, backed by many examples, that most of the progress in the field has been computational, and that there is no reason to think a major conceptual advance will come easily.

General comment

I was very disappointed with the discussion. Almost no one talked about the “myth of AI” (which is the title!), and most of the discussion was about the concept of AI-safety.

It was painfully clear that most respondents had very strong opinions and a very poor understanding, or at least that the arguments they wrote were very shallow. I would have preferred that the respondents listen to Nick Bostrom, or read his book Superintelligence, before writing their replies, as many of his basic arguments directly contradict much of what is written. Some exceptions are:

  • Lee Smolin (an interesting argument, even though I think it is pretty weak)
  • Rodney Brooks
  • Nathan Myhrvold
  • Sendhil Mullainathan
  • Stuart Russell

Interactive Biometric Identification


Today, we have a problem on the internet: it is terribly difficult to validate another person’s identity. Even figuring out whether an online entity is an actual person can be hard. Someday I will analyze the importance of identification (perhaps as opposed to anonymity, though the two are not mutually exclusive), but for now let’s take it for granted that it is worth discussing. It is also important to define what we mean by identity, which I also won’t do now.

The motivation for this idea came out of listening to David Birch’s lecture on How to use Identity and the Blockchain, where he gave one possible definition of identity and talked about how it relates to the internet and Blockchain technologies.

The idea

The idea is to solve the validation problem without having the users remember (or store) a secret password, using biometric data instead, while avoiding some inherent security problems.

One possible solution would be to send a photo of the face of the person whose identity is to be validated, or a fingerprint, or a voice sample. This data could be validated by an image-recognition model owned by the validating authority. One problem with this approach is that it may be easy to obtain the person’s biometric data, say by viewing her Facebook profile, so we cannot completely trust the biometric data sent.

My solution to the above problem relies on a challenge-response mechanism, in which the biometric data being sent depends on a challenge issued by the validating server. For example, the server might send a random sentence which the person has to say in order to be validated. The server then checks both that the voice belongs to the correct person and that the spoken words correspond to the challenge issued.
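
Schematically, the server-side flow might look like the following Python sketch. The biometric checks are stubbed out: in a real system transcribe() would be a speech-to-text model and same_speaker() a speaker-verification model; every name here is illustrative.

```python
import secrets

WORDS = ["apple", "river", "orange", "cloud", "stone", "violet"]

def make_challenge(n_words=4):
    # Server side: pick a random sentence the user must say aloud.
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def transcribe(audio):
    # Stub: pretend the recording is already transcribed.
    return audio["spoken_text"]

def same_speaker(audio, enrolled_voiceprint):
    # Stub: compare placeholder voiceprint ids.
    return audio["voiceprint"] == enrolled_voiceprint

def validate(audio, challenge, enrolled_voiceprint):
    # Server side: accept only if BOTH checks pass.
    return (transcribe(audio) == challenge
            and same_speaker(audio, enrolled_voiceprint))

challenge = make_challenge()
genuine = {"spoken_text": challenge, "voiceprint": "alice"}
replayed = {"spoken_text": "an old recording", "voiceprint": "alice"}

print(validate(genuine, challenge, "alice"))   # True
print(validate(replayed, challenge, "alice"))  # False: wrong words
```

The key property is that a replayed recording fails even when the voice matches, because the spoken words cannot match a freshly generated challenge.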

Two other ideas are to use the flashlight or the vibration motor of the smartphone. Let’s say the server sends Challenge = 0001010111101111000101010111110111001010; the user then takes a video of himself in which the light (or vibration) is on during every frame corresponding to a 1 bit and off otherwise. In this way the video can be validated as happening in real time rather than being a duplicate, while still enabling biometric identification.
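
The bit-encoding scheme itself fits in a few lines of Python. (Detecting whether the flashlight is actually on in a given video frame is the hard part, and is left out of this sketch.)

```python
import secrets

def make_challenge(n_bits=40):
    # Server side: a random bit string such as "000101011110...".
    return "".join(secrets.choice("01") for _ in range(n_bits))

def encode_frames(challenge):
    # Client side: one flashlight state (on/off) per video frame.
    return [bit == "1" for bit in challenge]

def decode_frames(frames):
    # Server side: recover the bit string from the detected light
    # states; in reality these come from analysing the video frames.
    return "".join("1" if on else "0" for on in frames)

challenge = make_challenge()
print(decode_frames(encode_frames(challenge)) == challenge)  # True
```

The round-trip check stands in for the server comparing the decoded bits against the challenge it issued.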

Pros and Cons

The pros have already been laid out, more or less. Let’s list the cons that make this idea problematic:

  • Computational difficulty of validation. Facial and vocal recognition still have high error rates. Maybe using fingerprints would be easier.
  • High bandwidth usage. Uploading videos consumes a lot of bandwidth.
  • Possibility of attacks. It may still be possible to composite a simulated flashlight onto an existing video. I also know of attempts to produce computer-generated audio that sounds like specific individuals. Moreover, in many cases it is possible to craft fake inputs to machine-learning models that result in a specified classification (see this for example).
  • Inconvenient usage. Just imagine taking a selfie with the flashlight randomly flickering on and off…


While there may be a good idea along these lines, the options laid out have many underlying problems. Because of this I do not think that this idea, as applied to my specific problem, is a very good one. It has been an interesting thought experiment anyway.


Fact Verifier

In the hope of reducing the harm of fake news and erroneous reporting, I want an app that, given a web page, checks the claims made there.
This could be done with search-engine-style techniques, where crawlers look through the web for supporting sources or contradictions and assign credibility ratings. It could also be done with data supplied by users or content producers that supports or contradicts the report.
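
As a sketch of the credibility-rating step (the Source record, the stance labels, and the scoring rule are all made up for illustration; a real system would also need claim extraction and stance detection):

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    trust: float  # 0.0 to 1.0, e.g. from a PageRank-like score
    stance: str   # "support" or "contradict"

def credibility(sources):
    # Trust-weighted vote: +trust for support, -trust for contradict,
    # normalised to [-1, 1]; 0.0 when no sources were found.
    if not sources:
        return 0.0
    score = sum(s.trust if s.stance == "support" else -s.trust
                for s in sources)
    return score / sum(s.trust for s in sources)

found = [Source("a.example", 0.9, "support"),
         Source("b.example", 0.5, "support"),
         Source("c.example", 0.3, "contradict")]
print(round(credibility(found), 2))  # 0.65
```

The rating leans positive here because the supporting sources carry more combined trust than the contradicting one.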