My favorite definition of human consciousness is simply our access to a representation of parts of our internal mental states. In this post we’ll elaborate on this definition, from cognitive psychology’s point of view, and discuss a bit about possible applications for machine learning.
First, go and read this OpenAI blog post. Read it? Good!
In the next 10 minutes, I’ll write as much as I can of my thoughts on the claims made in the above-mentioned post.
I have a slight cognitive dissonance. I got used to thinking that RL is very good, and that the results obtained on the Atari games, for example, are extremely strong. However, it seems that Evolution Strategies (ES), like any kind of “local search” method, are so generic and simple that they should be the lowest baseline for any machine learning algorithm.
Is the takeaway that RL is overall just not very good, and that its success is mostly a story of fast supercomputers?
OpenAI mentions that these kinds of local search methods are not good for supervised learning. This means that we do have tools that are much better than local search, but that they do not transfer easily.
A different explanation could simply be that the Atari games and OpenAI Gym-type environments are specific examples where RL algorithms do not work well. Maybe due to their small action spaces?
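To make the “local search” framing concrete, here is a minimal sketch of the ES update described in the OpenAI post: perturb the parameters with Gaussian noise, evaluate each perturbed copy, and step in the reward-weighted direction of the perturbations. The function names, hyperparameters, and the toy objective below are my own choices, not taken from OpenAI’s code.

```python
import numpy as np

def es_step(theta, reward_fn, npop=50, sigma=0.1, alpha=0.01, rng=None):
    """One Evolution Strategies update: black-box, gradient-free local search."""
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.standard_normal((npop, theta.size))                # Gaussian perturbations
    rewards = np.array([reward_fn(theta + sigma * eps) for eps in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize rewards
    # Weight each perturbation by its normalized reward and step that way.
    return theta + alpha / (npop * sigma) * noise.T @ rewards

# Toy objective: maximize -||theta - target||^2. No gradients are ever computed;
# only reward evaluations are used, which is why ES works on any black box.
target = np.array([3.0, -1.0])
reward = lambda th: -np.sum((th - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(500):
    theta = es_step(theta, reward, rng=rng)
```

Note that nothing here is specific to RL: the same loop applies to any parameter vector and any scalar score, which is exactly why it makes a natural lowest baseline.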
When training ML models, several security considerations can be important. Here are some examples:
Some security goals:
- Training set privacy. An adversary who knows the model cannot extract “any” information about the data points in the training set.
- Model secrecy. An adversary who can query the model as a black box, obtaining predictions for any input, cannot recover information about the model parameters.
- Model reliability. The model should behave in ways that humans can predict.
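The first goal is commonly formalized as differential privacy. As a toy illustration (the query, the value of epsilon, and the helper names below are my own, not any particular library’s API), here is the standard Laplace mechanism applied to a counting query over a training set: because adding or removing one record changes a count by at most 1, adding Laplace noise with scale 1/epsilon makes the released answer epsilon-differentially private.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from a Laplace(0, scale) distribution via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count over the training set with epsilon-differential privacy.

    A counting query has sensitivity 1 (one record changes the count by at
    most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 45, 52, 61, 29, 41]  # hypothetical training records
noisy = private_count(ages, lambda a: a > 40, epsilon=1.0, rng=rng)
```

The point of the sketch is the trade-off: smaller epsilon means more noise and stronger privacy for each individual record, at the cost of a less accurate released statistic.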
Links to related attacks
I’m trying to do a 10MA – “10 Minute Analysis”. The goal is to write the post in just 10 minutes, and see what comes out of it.
This post is about the benefits versus the downsides of writing 10MAs. Hopefully we’ll reach some conclusion.
So why do I write these analyses anyway? The first, and most important, reason is my own self-improvement. The second is that I am a big believer in sharing knowledge and in openness, and I hope some of what I plan to write here will be of use to other people later on.
Effects on my self-improvement:
- Trains intuitive analysis and generating a variety of ideas, as opposed to thorough, more linear thinking. In general I am better at this kind of thinking.
- Trains writing quickly and moving more of my thoughts into text. This is very important to me.
- Leaves less time to learn how to formulate ideas correctly.
Effects on what others will read:
- Quantity instead of quality. Probably not too bad, if I want to spread ideas and let others think for themselves.
This is fun, and it can be helpful to combine it with deeper analysis.
Here are my notes taken while listening to the interview with Edge.org.
- AI has been heavily mythologized.
- This myth causes bad decisions and misallocation of resources, impairing the development of AI.
- The myth shifts attention from the data to the algorithm learning from it. This leads to faulty data-ownership rights.
- Users of AI-based recommender systems will over-trust the algorithm, harming further improvement and confining users to a small set of options.
A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn’t been asked to decide, and declare that corporations are people. That’s a cover for making it easier for big money to have an influence in politics.
Jaron is referencing a controversial Supreme Court decision known as “Citizens United” (named after the committee of the same name, whose agenda reads: “Citizens United seeks to reassert the traditional American values of limited government, freedom of enterprise, strong families, and national sovereignty and security”). A surprisingly unbiased introduction can be found at reclaimdemocracy.org/who-are-citizens-united/, where their views on the subject are explicitly written.
Algorithmic Personhood – an economic perspective
He goes on to say:
The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person?
Since many corporations are based on an algorithm, Jaron claims that we are actually giving “person” status to algorithms. He raises an interesting point, which is that we are in fact giving political power to algorithms, but does not expand much on that. This point of view can be explained as follows:
- Let C be a company whose main revenue comes from solving a problem P using an algorithm A.
- The majority of C’s actions are directed toward advancing the effectiveness of A. By effectiveness I simply mean using A to make more profit: say, by collecting more data and advancing algorithmic research, by expanding the target market through advertisements or a better UI, or even by making P more difficult to solve without A.
- There are many political actions that can affect the effectiveness of the algorithm.
- Hence more political power given to corporations can be thought of as political power serving the interests and goals of A.
Is this good or bad?
Most of the arguments about whether we should give less or more power to individuals with a lot of wealth apply here. Where our situation mainly differs is in the moral arguments about the rights and obligations of the individual, since it is not clear what (if any) moral rights and obligations should be assigned to an algorithm.
Intuitively, I do not see a good reason to assign any moral value to algorithms unless doing so serves practical goals. For example, I do not think it is intrinsically bad that a learning algorithm learns racist biases (although it may certainly have bad consequences). As another example, should we grant free-speech rights to an algorithm? Is it important that the interests of Google’s search engine not be silenced?
Algorithmic Personhood – a sociological perspective
what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people.
There has been a domineering subculture—[that there is] a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us.
I think the most interesting point that he makes is that the humanization of algorithms is biasing our point of view on the topics related to AI. This applies to the fear (or hopes) of the consequences of advanced AI.
My feeling about that is it’s a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn’t optimal is the way it talks about an end of human agency.
But it’s a call for increased human agency, so in that sense maybe it’s functional.
I would have liked to hear more. Nick Bostrom did great work arguing why human agency is extremely difficult to maintain once a superintelligent algorithm exists.
Maybe what Jaron meant is that the question should not be “How do we maintain human agency?”, and that we should instead focus on what benefits humanity the most.
Effects of AI mythology
- Over-promising of AI leading to AI winters, since the general public feels dissatisfied with what seems like too-slow progress. I do not fully agree. The situation today is vastly different than it was 20 years ago: the economic incentive to build better systems is enormous, so even if the bubble bursts, I believe there will still be plenty of funds available for advancing basic research as well as applications. One area that may be damaged is theoretical work on AGI.
- People’s choices are greatly affected by algorithms relying on big data. The mythology adds to the trust in systems that rely on bad science and whose results are not good enough to deserve such trust. Also, people’s choices become limited to what the algorithm can show them.
- The hype over recommendation systems also means that it is very hard to improve them, as there is less data on people’s choices that is independent of the recommendation system itself. I am not sure how this relates to the AI mythology; maybe the incentive for producing better recommendation systems is too small.
Automatic language translation
Given as an example of the dangers of AI hype.
The point is that the AI hype hides the fact that, under the hood, there are many human translators whose translations feed the big data behind the translation algorithm, and they get no credit for this work. Their contribution is not even acknowledged.
The data used in big-data-based algorithms does not translate into money for the data contributors.
It is not enough that the data benefits society as a whole if the individual contributor lacks financial stability and has no incentive to add more data.
This topic is expanded in his writings.
A thought experiment designed to show that it is the capabilities that are dangerous, not the AI controlling it.
I disagree with the analogy, as it understates the importance of intelligence in obtaining power. For example, a smart AI could figure out how to do a lot of harm using existing technologies, in a manner requiring so much coordination that single individuals could never manage it themselves.
Is superintelligence possible?
His main argument is:
What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being.
This is of course entirely debatable.
Jaron also speaks about the difficulty of extrapolating the advances in science and technology. He claims that in many areas our own ignorance is not fully acknowledged, which causes bad allocation of resources and a bad way of doing science. While I fully agree, I am not sure about the argument’s implications to AI-safety research.
Singularity as a religion
The comparison is interesting. I think he claims that the belief that an emerging superintelligence will disempower us completely, making us slaves to the machine (whether that is good or bad for us), shapes today’s attitude toward this deification of AIs, and that by holding it we would actually serve the interests of the corporations managing those algorithms.
The main point I am taking from this is that it may do harm to generalize and speak in a grandiose manner on the subject of AI. I do still think that AI-safety research is very important, but I am not sure anymore that we should try and publicize those ideas more.
There is a trade-off between preparing better to automation or superintelligence and advancing basic AI research, methodology and applications.
I think that it all depends very much on the public perception of AI. Jaron seems to claim that the tendency today is towards over-hype, but this may only be true for the “elites” and not the general population. This affects some of the arguments (e.g the problem with recommendation systems).
I think the bottom line is that it is in our best interest that people would be aware of the truth. We do not understand it (or agree on one hypothesis) but we do have knowledge which is not understood by the general public.
Edge.org “Reality Club Discussion”
He made a very interesting argument regarding the difficulty of creating AGI. It is all presented in a post at rethinkrobotics.org.
He has convinced me to lower my expectations of a superintelligence being created any time soon. This is basically due to the argument, backed by many supporting examples, that most of the progress in the field has been computational, and that there is no reason to think a major conceptual advance will come easily.
I have been very disappointed with the discussion. Almost no one talked about the “myth of AI” (which is the title!), and most of the discussion was about the concept of AI-safety.
It was painfully clear that most respondents had very strong opinions and a very poor understanding, or at least that the arguments they wrote were very shallow. I would have preferred that the respondents had listened to Nick Bostrom, or read his book Superintelligence, before writing their replies, as many of his basic arguments completely contradict much of what is written. Some exceptions are:
- Lee Smolin (Interesting argument, even though I think it is pretty weak)
- Rodney Brooks.
- Nathan Myhrvold
- Sendhil Mullainathan
- Stuart Russell
Here I could put some analysis of the advantages and disadvantages for WP.