Towards strong AI: a few missing parts.

by Oleksandr Savsunenko, April 29th, 2018

Hey there. I strongly feel that progress towards general/strong AI should be powered not only by computational resources and medical imaging, but also by self-reflection and human psychology. I am a machine learning engineer, and I also practice various kinds of meditation. Here I am sharing a few insights about the missing parts of current AI architectures, based on that inner work.

I could be totally wrong and over-simplifying, and I may have accidentally skipped some logical glue in my arguments, so please ask for clarification where needed.

The Chinese room problem and random number generators.

If you are reading this, it’s quite likely that you know what the Chinese room argument is. If not, please read the Wikipedia article; it’s quite simple. Let’s imagine that a strong AI has been built and placed into a Chinese room. It operates perfectly, it passes the typical Turing test, and you can’t tell whether there is a human or a machine inside the room. I want to postulate two ideas:

  • Given infinite time and resources, an external researcher would be able to map all possible inputs and outputs, along with all possible interconnections, inside the algorithm of the Chinese room. He would therefore be able to reproduce the algorithm inside the room, proving that there is indeed an algorithm, not a human.
  • An external researcher wouldn’t be able to map all possible states of the system in the room if there is a human being inside, and would never reach the conclusion that “there is a machine inside”. Not because of some form of “self-reprogramming” or “consciousness” that only a human is capable of, but because the human’s wet brain wiring has a built-in noise component that drives unpredictability.

And here’s my favorite part. One can argue that an AI algorithm can have a noise component (a dropout layer, for example) intentionally built in. But remember that most of the random number generators in our computers are actually pseudo-random; every engineer and hacker knows that. So, given infinite time, the behavior of any system of any complexity that uses a pseudo-random number generator can be mapped, and it would thus fail the Turing test.
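
To make that point concrete, here is a minimal sketch in Python: reseeding the standard `random` generator replays the exact same “random” sequence, which is why, in principle, any system built on it can be fully mapped.

```python
import random

# A pseudo-random generator is fully determined by its seed: anyone who
# recovers the seed can replay every "random" decision the system makes.
random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

assert first_run == second_run  # identical sequences: nothing was unpredictable
```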

So, my point is: strong AI will be created. Quite likely, big corporations like “F” or “G” already have conversational bots that pass Turing tests of some not-very-strict sort. This AI will evolve, and those algorithms will definitely use random number generators in their functionality. And one day we will have a perfect AI that passes a very complex Turing test. Is this machine going to have a mind or consciousness, in my opinion? No. But if you swap the pseudo-random function for a true random number generator based on quantum noise, I will greet this mind as an equal. I do think that we humans have some kind of noise built in, and that this noise has a very fundamental property.
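
Swapping the source is easy to express in code. Here is a minimal sketch that draws numbers from the operating system’s entropy pool via `os.urandom` instead of a seeded generator; a quantum-noise device would plug in the same way, though `os.urandom` is only a stand-in here, not true quantum randomness.

```python
import os
import struct

def entropy_uniform() -> float:
    """Draw a float in [0, 1) from the OS entropy pool rather than a seeded PRNG."""
    # 8 bytes of entropy -> unsigned 64-bit integer -> scaled into [0, 1)
    (n,) = struct.unpack("<Q", os.urandom(8))
    return n / 2**64

print(entropy_uniform())  # there is no seed, so the sequence cannot be replayed
```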

Another crazy point: while this is part of fringe science and pseudoscience, I think that the role of quantum fluctuations in consciousness is still to be discovered. Go read about Princeton’s Noosphere Project for inspiration.

Solving the motivational problem with opposite loss functions.

OK, swapping the random number source was an ethical rather than a technological question. Here’s another thing, one that’s closer to real-world results. As I loosely explained in my previous post, I am quite unhappy with the direction of current-generation neural network growth. It’s because of the general “filter bubble” effect created by social media, search engines, and social interaction rules.

Neural network training is actually based on that same “filter bubble” idea, perfected to the maximum. Because such a system is built on historical data, it learns to exploit existing trends. And when it is applied to the real world, it starts to reinforce those pre-existing trends, setting off a potentially endless feedback loop. That’s not how the human mind operates.
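
A toy simulation of that loop (my own illustrative assumption, not anything from a real recommender system): a “recommender” that always exploits click history locks in whichever item happened to lead early.

```python
import random

# Toy "filter bubble" loop: a recommender trained on click history keeps
# showing whatever was clicked most, so the initial trend feeds itself.
counts = {"A": 1, "B": 1}  # historical clicks for two competing items

for _ in range(1000):
    shown = max(counts, key=counts.get)  # exploit the existing trend
    if random.random() < 0.9:            # users mostly accept what they are shown
        counts[shown] += 1
    else:                                 # occasionally they pick the other item
        other = "A" if shown == "B" else "B"
        counts[other] += 1

print(counts)  # the early leader ends up dominating almost completely
```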

Think of reinforcement learning as an example. Reinforcement learning (like any other form of supervised learning) has a goal, a metric, a score, whatever you call it, and optimizes itself in order to improve that metric. That’s not how the human mind operates, either. The main difference is that humans don’t have a sole goal metric they are trying to optimize. I argue that we always have at least two, usually opposite to each other. One could say that the human goal can be boiled down to one thing: survival. I think that there is another: the struggle toward death. Freud nailed this with his Eros and Thanatos concept. Evolution without death is impossible, and the death drive is built into each and every one of us. The polarity and the tension between them are what make us wake up every morning, be creative, and evolve.

So, that’s why I love Generative Adversarial Networks. When you look into the training progress of a properly configured GAN, there is always a fight and a tension between the generator and the discriminator. This could somehow be transferred to reinforcement learning in the shape of a Constructor and a Destructor, as sketched below.
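
Here is a minimal sketch of that tension: a toy 1-D GAN in PyTorch where the two networks optimize directly opposed objectives. The tiny setup and the Constructor/Destructor labels are my illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy 1-D GAN: the generator ("Constructor") learns to imitate samples from
# N(4, 1.25) while the discriminator ("Destructor") learns to unmask it.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # The discriminator's loss drops when it separates real from fake...
    fake = G(torch.randn(64, 8))
    d_loss = bce(D(real_batch()), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator's loss drops when the discriminator is fooled:
    # two directly opposed objectives, locked in permanent tension.
    g_loss = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```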

Anyway, here is my big point: progress in general AI can and should be powered by discoveries coming from self-reflective practices like meditation, preferably performed by the researchers themselves.

Please comment, share, and clap to put me into a positive feedback loop and force me to continue writing. Contact me if you’d like to share your thoughts or maybe implement something: sasha (dot) savsunenko (at) gmail (dot) com