Security of Google’s Federated Learning

In this post I'll collect some initial thoughts on the security of Google's Federated Learning, a method for training a model on a server where clients do not send their raw data; instead, each client sends a model update trained on their own device with their own data. The main points are:

  1. Knowing a client's update can give information about their training data.
  2. Knowing the average of some updates is likely to give information about each user's update.
  3. If an attacker can submit many updates, they can extract information about a specific client.

The first two points are acknowledged briefly in the article.
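
To illustrate point 1 with a toy example (this construction is mine, not from the paper): for a logistic-regression model and a client holding a single training example, the update sent to the server is just the client's input scaled by a scalar, so the server learns the example's direction exactly.

```python
import numpy as np

# Toy illustration of point 1 (my own construction, not from the paper):
# for logistic regression, the gradient of the loss on a single example
# (x, y) is (sigmoid(w.x) - y) * x, i.e. the client's private input x
# scaled by a scalar, so a single-example update reveals x's direction.
rng = np.random.default_rng(0)
w = rng.normal(size=5)             # current global model sent to the client
x, y = rng.normal(size=5), 1.0     # the client's private training example

p = 1.0 / (1.0 + np.exp(-w @ x))
update = (p - y) * x               # the "update" the client would send back

cos = abs(update @ x) / (np.linalg.norm(update) * np.linalg.norm(x))
print(np.isclose(cos, 1.0))        # True: the update points exactly along x
```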

A possible improvement for black-box adversarial example attack

This paper presents a cunning adversarial-example attack on an unknown DNN model, using only a small number of black-box calls to the model (all of which happen before the input to be perturbed is given). The algorithm is essentially to build a different model, an adversarial DNN, with some arbitrary choice of architecture and hyperparameters, and to learn its parameters on a dataset obtained by oracle calls to the target model. The inputs to the oracle are chosen iteratively: take the inputs from the previous iteration and pick nearby points that are closest to the decision boundary of the last learned adversarial DNN.
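
Here is a rough sketch of that iterative loop as I understand it; `query_oracle` stands in for the unknown black-box model, a tiny logistic regression stands in for the adversarial DNN, and the augmentation step and constants are my own illustrative choices, not the paper's.

```python
import numpy as np

def query_oracle(x):
    # Stand-in for the remote black-box model's label; in a real attack
    # this would be a call to the target model's prediction API.
    return int(x.sum() > 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_substitute(X, y, lr=0.5, epochs=300):
    # Fit a tiny logistic-regression substitute (stand-in for the adversarial DNN).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))          # small seed set of inputs
lam = 0.1                             # augmentation step size
for _ in range(5):                    # substitute-training iterations
    y = np.array([query_oracle(x) for x in X], dtype=float)   # oracle calls
    w, b = train_substitute(X, y)
    # Nudge each point toward the substitute's current decision boundary:
    # down the score for points labeled 1, up for points labeled 0.
    direction = np.where(y[:, None] == 1.0, -1.0, 1.0) * np.sign(w)
    X = np.vstack([X, X + lam * direction])
```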

I think it may be possible to improve the choice of the new inputs. The best candidates are inputs that are expected to have a big impact on the decision boundary, weighted by the probability distribution of possible inputs.

Several thoughts regarding “big impact on the decision boundary”:

  1. The work is entirely done during preprocessing, since the (adversarial) model is known.
  2. Points near (or on) the decision boundary are very good.
  3. A point on the decision boundary can be approximated in log-time (e.g. by bisection between two differently classified points; see the sketch after this list).
  4. It may be possible to find good measures of the extent to which a new input has changed the decision boundary.
    1. For example, maybe a form of regularization that encourages changing as many parameters as possible, by as much as possible, is good enough. (I guess not, but it is very simple to test.)
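
For point 3, here is a minimal sketch of what I have in mind, assuming a `model_predict` function for the (known) adversarial model; since the model is ours, these prediction calls are free preprocessing work, not oracle calls.

```python
import numpy as np

def point_on_boundary(model_predict, x_in, x_out, tol=1e-6):
    """Approximate a point on the model's decision boundary by bisection.

    x_in and x_out must be classified differently by model_predict; the
    search needs O(log(1/tol)) predictions, which is the "log-time" claim.
    """
    lo, hi = np.asarray(x_in, float), np.asarray(x_out, float)
    label_lo = model_predict(lo)
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2.0
        if model_predict(mid) == label_lo:
            lo = mid          # midpoint is still on x_in's side
        else:
            hi = mid          # midpoint crossed the boundary
    return (lo + hi) / 2.0
```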

Several thoughts regarding the probability distribution of possible inputs:

  1. It seems like a very important concept to understand deeply.
  2. It is probably heavily researched.
  3. If there is an available training set, it may be possible to approximate the manifold of the probable inputs.
    1. Maybe GANs can help with this problem.

10MA – Evolution strategies vs. reinforcement learning

First, go and read this OpenAI blog post. Read it? Good!

In the next 10 minutes, I'll write as much as I can on my thoughts regarding the claims posed in the above-mentioned post.

I have a slight cognitive dissonance. I got used to thinking that RL is very good, and that the results obtained on the Atari games, for example, are extremely impressive. However, it seems that Evolution Strategies (ES), like any kind of "local search" method, are so generic and simple that they should be the lowest baseline for any machine learning algorithm.
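
For reference, here is a minimal NumPy sketch of the ES update described in the OpenAI post; the `reward` function and the constants are toy stand-ins, but it shows how little machinery this kind of "local search" needs.

```python
import numpy as np

def reward(theta):
    # Stand-in for the true black-box objective (e.g. an episode's return
    # for a policy parameterized by theta); no gradients are ever needed.
    return -np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = rng.normal(size=10)          # policy parameters
sigma, alpha, n = 0.1, 0.02, 50      # noise scale, step size, population size

for step in range(200):
    eps = rng.normal(size=(n, theta.size))                 # random perturbations
    scores = np.array([reward(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalize
    theta += alpha / (n * sigma) * eps.T @ scores          # move toward good noise
```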

Is it correct to take away from this that overall RL is just not very good, and that its success is mostly a story of fast supercomputers?

OpenAI mentions that these kinds of local search methods are not good for supervised learning. This means that we do have some tools which are much better than local search, but that they are not easily transferable.

A different explanation could simply be that the Atari games and OpenAI Gym-type tasks are specific examples where RL algorithms do not work well, maybe due to their small action spaces.

10MA – Security and Machine Learning

When training ML models, some security aspects can be important. Here are some examples:

Some security goals

  1. Training-set privacy. An adversary who is familiar with the model cannot get "any" information about the data points in the training set.
  2. Model secrecy. An adversary able to get the model's predictions for any input as a black box cannot obtain information about the model parameters (see the toy sketch after this list).
  3. Model reliability. The model should behave in a way that humans can predict.
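
To make goal 2 concrete, here is a toy sketch in the spirit of the "Stealing Machine Learning Models" paper linked below (the setup and numbers are my own): a black-box logistic regression that returns confidence scores gives its weights away in d+1 queries, because logit(p(x)) = w·x + b is linear in x.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
w_true, b_true = rng.normal(size=d), 0.5      # the "secret" model parameters

def predict_proba(x):
    # The only thing the attacker sees: the prediction API's confidence score.
    return 1.0 / (1.0 + np.exp(-(w_true @ x + b_true)))

# d+1 queries: the origin plus the d unit vectors.
queries = np.vstack([np.zeros(d), np.eye(d)])
logits = np.array([np.log(p / (1 - p)) for p in map(predict_proba, queries)])

b_rec = logits[0]                 # logit at the origin is exactly b
w_rec = logits[1:] - b_rec        # logit at e_i is w_i + b
print(np.allclose(w_rec, w_true), np.isclose(b_rec, b_true))   # True True
```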

Links to related attacks

  1. Membership Inference Attacks against Machine Learning Models
  2. Stealing Machine Learning Models via Prediction APIs
  3. Breaking Linear Classifiers on ImageNet

10MA – Should I do short analyses?

I’m trying to do a 10MA – “10 Minute Analysis”. The goal is to write the post in just 10 minutes, and see what comes out of it.

This post is about the benefits vs the downsides of making 10MAs. Hopefully we’ll reach some conclusion.

So why do I write these analyses anyway? The first, and most important, reason is my own self-improvement. The second reason is that I am a big believer in sharing knowledge and in openness, and I hope some of what I plan to write here will be of use to other people later on.

Effects on my self-improvement:

  • Trains intuitive analysis and coming up with a variety of ideas, as opposed to thorough and more linear thinking. In general I am better at this kind of thinking.
  • Trains writing quickly and moving more of my thoughts into text. This is very important to me.
  • Leaves less time to learn how to formulate things correctly.

Effects on what others will read:

  • Quantity instead of quality. Probably not too bad, if I want to spread ideas and let others think for themselves.

Conclusion:

This is fun, and can be helpful when done alongside deeper analyses.

Single-use code for 3D printing

When 3D printers become powerful and cheap enough, they can cause an enormous economic change. In this post I discuss the main reasons for this change, and ponder some technological concepts which may restrict it. I am not sure whether this restriction is beneficial or not, as we'll discuss in the conclusion.

Digitization ⇒ duplicability

If the information of the product is entirely digital, then there are two main consequences:

  • It will be easy to share the product p2p. We see this today in many areas, such as music, film or electronic books, where downloaded copies can be shared freely as torrents or in file sharing sites.
  • It will be easy to "use" the product more than once. We usually take it for granted that this has to be the case, as music, books and the like can be used repeatedly once owned. Note that this is not a necessity; in fact, there are many alternatives, such as leasing or radio.

Economical implications

The impact of digitization is obviously huge, as can be seen in the case of the music industry. The analysis here is important and must be data-driven, so it requires more careful research on the topic, which I will postpone.

A relevant question which is not analogous to the case of the music industry is: what are the implications of being able to generate an object more than once? I'll leave it open as well.

Single-use code

The challenge is to find a way for users to download a design online and use it immediately to print the object, but in such a way that the majority of users cannot print the design again.

If the printer is stateless (that is, has no intrinsic memory), then sending it the same packets will result in the same printer actions. Hence, even if the printer driver behaves differently each time, a simple way to print the same thing many times is to sniff the communication of the first "legal" print and replay it for subsequent prints. This can be automated fairly easily, and the program for doing so can be made simple enough that many users would use it. Thus, we need some level of sophistication in the driver-printer protocol to prevent this replay attack. It is also clear that the printer's code and internal state need to be tamper-resistant.

A naive idea is to have the printer remember which models it has already printed (say, by storing their hash values) and refuse to print the same model again. This is not good enough, as it is easy to make minor changes to the model so that its hash changes while the printed object stays essentially the same. Even if the printer had a clever algorithm that could tell whether two models are the same, which is very hard to do efficiently, these kinds of protections can always be overcome.

We can try to use cryptography to make sure that the printer will not use the same code twice. Assume that the printer has a secret key shared with the printing company. Whoever wants to publish their design for a unique printing sends it to the printing company, which has a platform for selling designs, and anyone who buys the design receives it encrypted and authenticated so that only their printer can decrypt and verify the code for the model. In this case the model cannot be shared, and the hashing solution above can protect against duplication. This solution assumes that the vast majority of users will not open up their printers and extract the secret key (which can be made extremely difficult). Another version is to sign the model together with the printer ID using public-key cryptography, and have the printer print only what is verified as coming from the company and carrying the correct ID. This version is problematic, as the code itself remains visible.
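
A minimal sketch of the shared-secret-key version, assuming Python's third-party `cryptography` package for authenticated encryption; the names, the toy "design", and the hash-based duplicate check are my own illustration, not a worked-out protocol.

```python
from hashlib import sha256
from cryptography.fernet import Fernet   # authenticated symmetric encryption

# Each printer holds a secret key shared only with the printing company.
printer_key = Fernet.generate_key()
company = Fernet(printer_key)
printer = Fernet(printer_key)

# Company side: encrypt-and-authenticate the purchased design so that
# only this specific printer can decrypt it.
design = b"model/G-code data for the purchased design"
token = company.encrypt(design)           # this is what the buyer downloads

# Printer side: decryption fails on forged or tampered tokens, and the
# printer refuses designs whose hash it has already printed.
printed_hashes = set()                    # the printer's internal state

def print_once(token):
    model = printer.decrypt(token)        # raises InvalidToken if not genuine
    h = sha256(model).hexdigest()
    if h in printed_hashes:
        raise RuntimeError("design already printed on this printer")
    printed_hashes.add(h)
    # ... drive the actual print job with `model` here ...
    return "printed"

print(print_once(token))   # first print succeeds
# print_once(token)        # a second attempt would raise "already printed"
```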

The main technical problem with the above solution is that it does not allow printing of free or home-made models, and here is where it gets interesting. Simply allowing unencrypted models to be printed has the inherent problem that it takes only one person who manages to recover their own key to strip the protection and spread a purchased model. However, doing so would still cost money and effort, so the scheme can still be quite good. Another problem is key management, but that should be manageable.

Conclusion

The above scheme is probably fine, but I think a better solution is possible. Ultimately, the biggest problem for any such solution is that the printer manufacturer and the platform for unique printing of models need to work together and create a large enough community of buyers and sellers that new people will choose to buy these specific printers.

Jaron Lanier talks about The Myth of AI

Here are my notes taken while listening to the interview with Edge.org.

Summary

  • AI has been heavily mythologized.
  • This myth causes bad decisions and a misallocation of resources, impairing the development of AI.
  • The myth shifts attention from the data to the algorithm learning from it. This leads to faulty data-ownership rights.
  • Users of AI-based recommender systems will over-trust the algorithm, harming further improvement and confining users to a small set of options.

The interview

Corporate Personhood

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn’t been asked to decide, and declare that corporations are people. That’s a cover for making it easier for big money to have an influence in politics.

Jaron is referencing a controversial Supreme Court decision known as "Citizens United" (named after the committee of the same name, whose stated agenda is: "Citizens United seeks to reassert the traditional American values of limited government, freedom of enterprise, strong families, and national sovereignty and security"). A surprisingly unbiased introduction can be found at reclaimdemocracy.org/who-are-citizens-united/, where their own views on the subject are stated explicitly.

Algorithmic Personhood – an economic perspective

He goes on to say:

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person?

Since many corporations are based on an algorithm, Jaron claims that we are actually giving “person” status to algorithms. He raises an interesting point, which is that we are in fact giving political power to algorithms, but does not expand much on that. This point of view can be explained as follows:

  1. Let C be a company whose main revenue comes from solving a problem P using an algorithm A.
  2. The majority of C's actions are aimed at advancing the effectiveness of A. By effectiveness I simply mean using A to make more profit: say, by collecting more data and advancing algorithmic research, by expanding the target market through advertisements or a better UI, or even by making P more difficult to solve without A.
  3. There are many political actions that can affect the effectiveness of the algorithm.
  4. Hence, more political power given to the corporation C can be thought of as political power serving the interests and "goals" of the algorithm A.

Is this good or bad?

Most of the arguments regarding whether we should give less or more power to individuals with a lot of wealth apply here. The main arguments where our situation differs are the moral arguments concerning the rights and obligations of the individual, where it is not clear what (if any) moral rights and obligations should be assigned to an algorithm.

Intuitively, I do not see a good reason to assign any moral value to algorithms unless doing so serves practical goals. For example, I do not think that it is intrinsically bad that a learning algorithm learns racist biases (although it may definitely have bad consequences). As another example, should we grant free-speech rights to an algorithm? Is it important that the interests of Google's search engine not be silenced?

Algorithmic Personhood – a sociological perspective

what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people.

[…]

There has been a domineering subculture—[that there is] a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us.

I think the most interesting point that he makes is that the humanization of algorithms is biasing our point of view on the topics related to AI. This applies to the fear (or hopes) of the consequences of advanced AI.

Human Agency

My feeling about that is it’s a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn’t optimal is the way it talks about an end of human agency.

But it’s a call for increased human agency, so in that sense maybe it’s functional.

I would have liked to hear more. Nick Bostrom did great work arguing why human agency is extremely difficult to maintain once a superintelligent algorithm exists.

Maybe what Jaron meant is that the question should not be "How do we maintain human agency?", and that we should instead focus on what benefits humanity the most.

Effects of AI mythology

  • Over-promising AI leads to AI winters, since the general public feels dissatisfied with what seems like too slow progress. I do not fully agree. The situation today is vastly different from 20 years ago: the economic incentive to build better systems is enormous, so even if the bubble bursts I believe there would still be plenty of funds available for advancing basic research as well as applications. One area which may be damaged is theoretical work on AGI.
  • People's choices are greatly affected by algorithms relying on big data. The mythology adds to the trust in systems that rely on bad science and whose results are not good enough to deserve such trust. Also, the choices are limited to what the algorithm can show you.
  • The hype over recommendation systems also means that it is very hard to improve them, as there is less data about people's choices that is not influenced by the recommendation system itself. I am not sure how this is relevant to the AI mythology; maybe the incentive for producing better recommendation systems is simply too small.

Automatic language translation

Given as an example of the dangers of the AI hype.

The point is that the AI hype hides the fact that under the hood there are many human translators whose translations feed the big data behind the translation algorithm, and they get no credit for this work. Their contribution is not even acknowledged.

Data Ownership

The data used in big-data-based algorithms does not translate into money for the people who contributed the data.

It is not enough that the data benefits society as a whole if the individual contributor lacks financial stability and has no incentive to add more data.

This topic is expanded in his writings.

Assassination drone

A thought experiment designed to show that it is the capabilities that are dangerous, not the AI controlling them.

I disagree with the analogy, as it understates the importance of intelligence in obtaining power. For example, a smart AI could figure out how to do a lot of harm using existing technologies, in a manner requiring so much coordination that single individuals could not manage it themselves.

Is superintelligence possible?

His main argument is:

What we don’t have to worry about is the AI algorithm running them, because that’s speculative. There isn’t an AI algorithm that’s good enough to do that for the time being.

This is of course entirely debatable.

Jaron also speaks about the difficulty of extrapolating advances in science and technology. He claims that in many areas our own ignorance is not fully acknowledged, which causes bad allocation of resources and a bad way of doing science. While I fully agree, I am not sure about the argument's implications for AI-safety research.

Singularity as a religion

The comparison is interesting. I think he claims that the belief that an emerging superintelligence will disempower us completely and make us slaves to the machine (whether that is good or bad for us) reinforces today's "deification" of AIs, and that by accepting it we would actually serve the interests of the corporations managing those algorithms.

My conclusions

The main point I am taking from this is that it may do harm to generalize and speak in a grandiose manner on the subject of AI. I do still think that AI-safety research is very important, but I am not sure anymore that we should try to publicize those ideas more.

There is a trade-off between preparing better for automation or superintelligence and advancing basic AI research, methodology, and applications.

I think that it all depends very much on the public perception of AI. Jaron seems to claim that the tendency today is towards over-hype, but this may only be true for the "elites" and not the general population. This affects some of the arguments (e.g. the problem with recommendation systems).

I think the bottom line is that it is in our best interest that people be aware of the truth. We do not fully understand it (or agree on a single hypothesis), but we do have knowledge which is not understood by the general public.

Edge.org “Reality Club Discussion”

Rodney Brooks

He made a very interesting argument regarding the difficulty of creating AGI. It is all presented in a post at rethinkrobotics.org.

He has convinced me to lower my expectations of a superintelligence being created any time soon. This is basically due to the argument, backed by many supporting examples, that most of the progress in the field has been computational, and there is no reason to think that a major advance can be made easily.

General comment

I have been very disappointed with the discussion. Almost no one talked about the "myth of AI" (which is the title...), and most of the discussion was about AI safety.

It was painfully clear that most respondents had very strong opinions and very poor understanding, or at least the arguments they wrote were very shallow. I would have preferred it if the respondents had listened to Nick Bostrom, or read his book Superintelligence, before writing their replies, as many of his basic arguments completely contradict much of what is written. Some exceptions are:

  • Lee Smolin (Interesting argument, even though I think it is pretty weak)
  • Rodney Brooks
  • Nathan Myhrvold
  • Sendhil Mullainathan
  • Stuart Russell