
Friday, 25 May 2018

Does true AI require magic ?

To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.

Well, yes and no. Yes, because AI is really just a hugely elaborate metaphorical stereotype of a parrot. It's just repeating and matching up stuff. I'm not seeing the merest glimmer that it in any way resembles an actual parrot, which has some understanding of what it's doing and why. So, yes to the part about it not really being intelligent. But no to the part about creativity : even a purely mechanical device that rapidly explores parameter space could be said to be (in some basic sense) hugely creative, and potentially much faster than any human. The problem is - as with trawling through the digits of an infinite random number - extracting anything from it that isn't simply gibberish.
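To make that concrete, here's a toy sketch of my own (in Python, and nothing to do with how Duplex or any real system actually works) : a blind generator that "creatively" explores the space of five-word letter-strings. It never runs out of novel output, but almost none of it means anything - the hard part is the selecting, not the generating.

    import random
    import string

    # Toy illustration only: blindly explore the space of random "sentences".
    # Generation is trivial and endless; picking out anything meaningful is not.

    def random_word(max_len=8):
        # A word is just a random string of lowercase letters.
        return "".join(random.choices(string.ascii_lowercase,
                                      k=random.randint(1, max_len)))

    def random_sentence(words=5):
        return " ".join(random_word() for _ in range(words))

    for _ in range(5):
        print(random_sentence())
    # Typical output: "qvd kxowpel am zzur tbn" - endlessly novel, rarely sensible.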

From a purely materialistic perspective, AI is undeniably possible. It just requires some careful arrangement of atoms to create a perfect synthetic brain, be that made of silicon or otherwise. It could even potentially emerge from pure code and be hardware-independent. In that sense we already have true (though crude) AI; it's simply experiencing the world in a very different way to ours. Our brains are nothing more than incredibly complex and sophisticated calculators, in that view. Thoughts and feelings are merely electrical currents and chemical reactions; whether that means a calculator has some kind of experience of the world, or if it's somehow only our own specific types of electrochemical reactions that give rise to this, I don't know.

In an idealist view, we may artificially create something that approximates intelligence but always remains strictly artificial in nature. It might become a very convincing fake, but it would never have the sort of understanding a living animal has. In that view, true AI is impossible - there is some other substance to mind and thought that cannot be synthesised. The only way to create it (or in some views, create receivers that can interpret it) is by, well, fucking. Goodness knows what would happen if you could synthesise a zygote atom by atom... In this perspective, thoughts and feelings have some weirder, more mystical aspect than physical constructs.

I'm not gonna tell you which (if any) of these I most align with. Though this always reminds me of the question Alexander the Great supposedly put to an oracle :
"How does a man become a god ?"
To which the response was, "By doing something a man cannot do."


https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html

2 comments:

  1. At first I thought that was a beer can at the bottom.
    That wasn't really me, it was the liquor talking...

  2. ...my intuition on the current state of AI (and its own quest to perhaps, through ourselves, become a god as per the oracle's definition mentioned above) is that some radical lateral insight is yet to be found. A literal Gordian Knot-cutting recombination of conceptual systems, or otherwise unconventional topological slice across a logical possibility-space. I have my own suspicions of what kind of thing it might be, or what kind of enigmatic logic it may resemble or derive from, but here I choose to consciously reflect back to you your own mischievous (public) agnosticism on which side of the materialist/idealist fence you choose to fall.

    I agree that there may be something irreducible in organic sentience. It does not necessarily follow that the organisational principles, symmetries, materials or processes, and the information or energy flows derived from those apparent in (some) organic systems, cannot be replicated given sufficient or appropriate materials, systems or informational sophistication. There is an ancient Chinese aphorism that we are that through which the earth comes to know itself, and in this vein it is not entirely implausible that the Universe, having generated sentience and intelligence in (at least) one specific instance, should be able to self-referentially and self-consciously do so again through us.

    Our greatest contemporary limitation on this quest for authentic AGI may be that the narratives (and their reflexive purposes) which have developed around the application, research and development of AI have unnecessarily bound us to commercial, conflict and statistical surveillance applications. These may be fundamentally misleading as to the actual nature of whatever this thing is that we know as sentience, intelligence and consciousness, consequently leading us astray in our attempts to self-replicate an Artificial General Intelligence.


