Tuesday, 14 August 2018

AI : lack of goals is a symptom of the problem, not the cause

Three rules for any article on AI :

1) AI does not yet have the same kind of understanding as human intelligence.
2) There is no guarantee rule 1 will always hold true.
3) It is not necessary to violate rule 1 for AI to have a massive impact, positive or otherwise, intentional or otherwise.

There's some very interesting social commentary in here, but I shall focus on the main question.

Maslow’s original formulation in 1943 identified five levels. The first level comprised biological needs – such as food, shelter, warmth, sex, and sleep. The second focused on ‘safety’: protection from the environment, law and order, stability, and security. The third level concerned ‘love and belonging’, including friendship, acceptance, love, and being part of a group – not only family, Maslow said, but also at work. Fourth were the needs for ‘esteem’. These included both self-esteem (dignity, achievement, independence) and respect from others (status, prestige). Finally, ‘self-actualisation’ needs covered self-fulfilment, personal growth, and particularly intense, ‘peak’ experiences.

I would imagine that an AI, or indeed anything which propagates, unavoidably has needs much like levels 1 and 2. It may not want them, but it does need them. Those are not the same thing (as the article later points out by distinguishing needing and caring, but much more discussion is - ahem - needed on this crucial point).

A person can survive, unhappily, if only their first- and second-level needs are met. But to thrive, the third level (love/belonging), and the fourth (esteem), must be satisfied too. And to flourish, the top level of self-actualisation must be reached.

Some newly accepted goals are generated from scratch by the individual person. But many are inherited from the surrounding culture. So what motivates us – what we care about – is ultimately grounded in needs, but suffuses throughout the many-levelled purposive structures that organise human behaviour. Those structures are hugely more complex than the most complicated, most greedily data-crunching AI program.

Consider a man queuing outside a baker’s shop with the intention, or goal, of buying a loaf of rye bread. He might be hungry, of course. Or he might (also?) hope to increase his social standing by making cucumber sandwiches for Lady Bracknell. Or he might plan to test the bread biochemically, to confirm a diagnosis of ergot poisoning in the neighbourhood. He might carry his bread around the corner with a lascivious smirk, offering it to a starving and beautiful girl. He might use it as part of a still-life he is painting. Or he might buy 50 loaves, and gobble them down to win an entry in The Guinness Book of Records. Possibly, he is planning to use the bread in a religious ritual, such as the Eucharist.

The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.) Besides this, an AI program’s ‘goals’, ‘priorities’ and ‘values’ don’t matter to the system. When DeepMind’s AlphaGo beat the world champion Lee Sedol in 2016, it felt no satisfaction, still less exultation. And when the then-reigning chess program Stockfish 8 was trounced by AlphaZero a year later (even though AlphaZero had been given no data or advice about how humans play), it wasn’t beset by disappointment or humiliation. Garry Kasparov, by contrast, was devastated when he was beaten at chess by IBM’s Deep Blue in 1997.

All well and good so far.

Moreover, it makes no sense to imagine that future AI might have needs. They don’t need sociality or respect in order to work well. A program either works, or it doesn’t. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems – that is, living organisms. They can’t sensibly be ascribed to artefacts.

Well, why in the world not ? What if you program an AI to have emotions - or at least, something with all the same practical consequences and coherence of emotions ? Possibly you can't do this in software; you might require something more electrochemical in nature - I don't know. But on what grounds can you say that a future AI won't have needs ? What's so special about silicon ? As for sociality, what about programs which are explicitly designed to work with other programs ? The root of the problem is, as usual, that we don't know what our own intelligence is, so we're left confused when discussing artificial versions of it. Again, the difference between needs and desires is crucial here.

However, Omohundro’s argument begs the question at issue here. He assumes that (some) AI systems can be ‘highly motivated’, that they can care about their own preservation and about achieving their various goals. Indeed, he takes it for granted that they can have goals, in the same (caring) sense that we do.

And the author is here taking it for granted that they can't. What's the justification for it ? What fundamental mechanism prevents AI from having desires ? Are we not machines ourselves, and if not, why not ? What about us can't be reproduced artificially ?

A slight diversion from the main topic :

Recent research, however, has shown that the lonely residents’ distress can be significantly lessened by having individually tailored, personal interaction for only one hour per week.... This approach is pure Maslow. It requires caring people to deliver love, belonging and respect – and perhaps to satisfy curiosity, too. It also requires personal sensitivity to identify patients’ interests, and to plan, and then share in, specific activities. It couldn’t be done by computers.

Surely what's required here is not actually love and respect but perception of love and respect. If people genuinely think the robots love and respect them, then the benefit should be the same regardless of whether or not they actually do. Same with humans. Again, not seeing any reason why it couldn't be done by computers, though it certainly can't at the moment.

Personally I'm not the slightest bit worried that the robots will "take over". I'm slightly more concerned that AI will (eventually) become so advanced that human creativity, curiosity and ingenuity will become obsolete, but that ain't happening anytime soon.

https://aeon.co/essays/the-robots-wont-take-over-because-they-couldnt-care-less?utm_source=Aeon+Newsletter&utm_campaign=0a1956c9e9-EMAIL_CAMPAIGN_2018_08_13_03_31&utm_medium=email&utm_term=0_411a82e59d-0a1956c9e9-69415397

40 comments:

  1. Maslow's take on the transactions/motivations that shape the exchange between an individual and reality is too simplistic to be applied to the topic of AI. Its simplicity makes it quite functional if you want to approach populations as a whole, but when looking at AI we are not really talking about populations synthesised into an abstract, over-simplified common denominator (an individual).
    AI as a concept (execution may vary) is based on hive logic, not herd logic. This comes from the nature of the possible implementation. That alone renders Maslow's pyramid irrelevant - there is simply no individuality, while we continue to see directed action that is synchronised with environmental stimulus.

  2. Alistair McHarg Not really. We will never be able to create an artificial personality... which does not mean that we won't try, and succeed to some extent.

  3. I agree with Rhys Taylor's criticism. For me, the critical problem is "computers don’t have goals of their own". But...in what sense do any of us have goals of our own? The supposedly different human goals are just goals programmed into us by our genetic heritage. No human chose to feel hunger or sleepiness any more than a Sidewinder missile chose to seek heat.

    And if an AI is told to maximize paperclip production, by a purposefully malicious human or by some careless typo, then the global AI takeover will be an AI takeover nonetheless.
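
    To make that concrete, here is a minimal sketch (Python, purely illustrative, names invented) of everything a "goal" amounts to in the Sidewinder/paperclip sense: a hard-coded objective plus a policy that increases it.

        # A toy "goal-directed" agent. The "goal" is nothing more than a
        # hard-coded objective function and a policy that raises it.
        class PaperclipMaximiser:
            def __init__(self):
                self.paperclips = 0

            def objective(self):
                # The "goal": a number the program tries to increase.
                return self.paperclips

            def act(self):
                # Greedy policy: take the action that raises the objective.
                self.paperclips += 1

        agent = PaperclipMaximiser()
        for _ in range(10):
            agent.act()
        print(agent.objective())  # 10 -- a goal "pursued", nothing felt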

  4. AI entities are evolving fairly quickly, but they'll never be anything like us. They will achieve identities (Siri, for all that it can do, does not really have an identity), but they were never created to reproduce, as life has done. I foresee a vast interconnected fabric of AI which will grow up around specific areas of expertise. No telling what comes thereafter.

  5. Kiki Jewell has pointed out that Maslow's five-level hierarchy might better be considered as six levels, comprising three need-security pairs.

    The first are basic physical needs, and security concerning them. The second are social needs, and security concerning them, and the third are intellectual or creative needs, and security concerning them. Maslow combines the social need-safety elements. Kiki's insight was that self-actualisation represents intellectual or creative safety.
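
    Written out as a rough sketch (the exact groupings here are only my reading of Kiki's scheme, not a canonical model):

        # Maslow's five levels regrouped as three need/security pairs.
        need_security_pairs = {
            "physical":     {"need": "food, shelter, warmth", "security": "safety, stability"},
            "social":       {"need": "love, belonging",       "security": "esteem, respect"},
            "intellectual": {"need": "curiosity, creativity", "security": "self-actualisation"},
        }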

    At which point we might even see this not as a hierarchy but rather as a set of balanced need/security attributes of which there might be further axes: physical activity, power, community engagement or caring, spiritual, say (just kicking out ideas, don't take these too seriously). This might address a criticism of Maslow's model: that often an inadequacy at lower levels is at least partially compensated for, in some cases, by fulfillment at higher ones.

    This also fits in, somewhat better, with a wants (motivation) / fears (avoidance) psychological model.

    I'm still meaning to read the Aeon piece so can't say that this matters to it, but Rhys's commentary touched on the topic.

  6. I suppose for me the fundamental questions are quite simple : is there any fundamental difference between a bunch of atoms (and their associated relational properties) created organically and another bunch of atoms arranged artificially ? If so, what ?
    If it's just a matter of sheer complexity, then it should be acknowledged that we may eventually succeed in creating a true AI - it's merely very difficult as opposed to being fundamentally impossible. And if it's something more than that, then I should like to know what it is.

    I had the distinct impression from the article that the author favours the latter, but never expanded on this. I could be misreading it, but that's what came across to me.

  7. Rhys Taylor Well, the organically combined atoms have been shaped by four billion years of ruthless evolutionary pressure.

    The artificially combined atoms are under no such pressure. They fill our garbage dumps.

    The machines may eventually be recycled. But creating a life form, well...

  8. Dan Weese But knowing the properties of the atoms subject to billions of years of evolutionary pressure, doesn't that make it (in principle) much easier to recreate in a brand-new bunch of atoms, without the need for the evolutionary pressure ?

  9. Rhys Taylor We're kinda close, when you think about what the polymerase chain reaction is doing: duplicating the cookbook for one of life's essential molecules - as you say, a bunch of atoms without the need for the evolutionary pressure.

    I'm fascinated by prions. Are they alive or not? Hard to say.

  10. Rhys Taylor Be careful - reading it as if there were some "magical" difference is a straw man. There is a difference in complexity, but it is just a difference between one bunch of atoms and another, and just a difference in complexity. No magic. It is about the level and physical quality of the difference.

    The first bunch of atoms is mostly purpose-built by external processes, and the simulation part that is relevant for artificial intelligence is less than a cherry on top. It is like a Giza pyramid, where the pyramid is the part that is built by others (people), and on top of the pyramid there is an abacus that can be used for calculations related to AI. That pyramid is the society, education, component factories, computer and software vendors, and so forth and so on -- and that abacus is some researcher's AI app. So that's the first bunch of atoms. The size of items in the metaphor reflects the size of the energy flows needed to keep it going.

    The second bunch of atoms has spontaneously emerged from its own environment, first by chance, then adapting itself via evolution. Every bit of it participates in the process, lives it, is a part of it. Not because of magic, but because of hard physical reality that you and I can observe -- every bit of it. We can separate one specimen, say a butterfly, and see how small it is compared to all of biodiversity, but we can do this for every part of biodiversity -- no part is more or less dependent on the others -- it is all similar. It is all self-sustaining because of how it was formed, via evolution. There is no Giza-pyramid-sized ball and chain that would have to be deliberately maintained by external processes. Everything in biodiversity is the way it is because it has proven itself to be more energy-efficient - more competitive - freely in nature.

    Both bunches of atoms are the same physical reality, but the way biodiversity has evolved is fundamentally different: millions of years of "nature's research" on efficiency. It is very hard to beat that and create a more efficient design that would outcompete it.

    For size comparison, expressed in simple terms, the abacus can be no bigger than the size of the wallet of those who are funding the pyramid + the abacus. Biodiversity is only limited by available sunshine and the conditions that make it possible to channel it into the processes of life. Also note how those fat guys' wallets are a (tiny) subset of the available sunshine.

    Biodiversity is real and very hard to outcompete in efficiency. Just because you have two bunches of atoms does not mean the first has a good chance of becoming like the second. The obstacles and probabilities are physical, not magical.
  11. Rhys Taylor Possibly -- and mind, this is sheer speculation -- path dependencies and emergent properties such that organic wetware is capable of dynamics that mechanical and electronic systems cannot converge to.

    Whether this is just fancier, or more presently fashionable, dress for various earlier spiritual or essentialist objections (e.g., Roger Penrose), I really can't say.

  12. Sakari Maaranen If I understand you correctly, you're saying that it is indeed possible to create an AI, but monstrously difficult to arrange the necessary components into something with the required complexity. Natural intelligence emerges as much from its external, societal environment as from the organism itself.

    That's definitely one possibility. I don't think the original article is very clear about it though.

    Personally I'm not sure. I remain somewhat skeptical that we need to have things so precisely arranged to create something intelligent, even if the nature of that intelligence is rather different from organic intelligence.

  13. Rhys Taylor Yeah, well, the probabilities are like a normal number vs. a (mathematically speaking) very large number. It is not going to happen, even if we can imagine it in purely logical thought exercises that omit the probabilities. If you are still questioning the straw man of magic, then yeah, there is no magic in this.

  14. Sakari Maaranen Not so much a straw man as a completely absent man. I was genuinely unsure what counter-argument the author was trying to raise, since the article contains a lot of very strong statements (i.e. "impossible" versus "fiendishly difficult"). I don't discount the possibility of Mystical Woo, but that's another story. Thanks for providing the purely physical argument I thought was missing from the original.

  15. I will use that list in a presentation, thx!!!

  16. That's partly because what we, at this moment in time, consider AI is just a decision tree; as such it is neither intelligent nor sentient.
    As long as we have this low level of AI - or better, NSAI - it will be hardly more than a crutch to do quickly what we cannot do, or do not want to spend the time on.
    It is when SAI appears that things will get interesting.
    Looking at what is now called AI, it hardly deserves the name.

  17. Addressing the Aeon article's assertions regarding drives:

    1. AI and computers have, for the most part, not had to develop drives and motivations for basic functions, because those are taken care of by their human operators. And even so, systems do have, at small scales, goal-seeking behaviour, most especially around cooling, but also memory and disk management.

    2. Drives are trivially simple to program (set a goal state, provide feedback - see the sketch after this list). Or they can be evolved. But the underlying need has to exist. Computers are fed electricity and feel no pain on being shut down. The necessary elements for that development -- a need to self-manage -- don't exist.

    3. High-level reward, aversion, and pathological states (elation, dejection, mental breakdown, depression) may simply require more complex internal states before they evolve. In the case of humans, there is a substantial amount of mental infrastructure that is innate. AI remains largely trained rather than evolved; there is no species-level foundation.

    (Training cycles can be considered "evolutionary", though in a different sense.)
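
    A minimal sketch of point 2 (Python, purely illustrative - a thermostat-style loop and nothing more):

        # "Set goal state, provide feedback" is the whole of a programmed drive.
        def drive(state, goal, step=1.0):
            # Nudge the state toward the goal; the "drive" is just the error signal.
            error = goal - state
            if abs(error) < step:
                return goal
            return state + step if error > 0 else state - step

        temperature = 15.0
        while temperature != 20.0:
            temperature = drive(temperature, goal=20.0)
        # The loop "wants" 20 degrees only in the thinnest, metaphorical sense.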

    At what stage in animal minds do we see elation, joy, dejection? Primates? Mammals? Birds? Reptiles? Insects? Fish? Octopus? Jellyfish? Coral?

    Are present AI simply insufficiently developed?

    I actually do believe that if we succeed in producing truly aware AI we likely will see equivalent responses, including mental illness.

    And we won't be able to instruct them simply: "I'm sorry, HAL, but I'm afraid I can't let you do that."

    (One interpretation of 2001 is that conflicting directives forced HAL into a breakdown.)

  18. What are the goals of an ant? Can ants beat humans in particular environments? Is a single ant intelligent?
    I suppose there is no need for religious dispute here. AI at present is no more intelligent than an ordinary microwave oven. However, it is complicated, so a lot of people want to believe it is a wonder comparable to the human brain, which is of course absurd.
    On the other hand, being influential doesn't require being intelligent. You just have to be successful at particular interactions, and it just happens.

  19. "In what sense do any of us have goals" => That is sweeping a hard problem under the rug.

    Try arguing that in court. "Your honor, the prosecution's claim about my motives is nonsense. I don't have any motives. I'm just a bag of atoms. I never make any choices. My actions were all predetermined by physics."

  20. Of course, we can't help but project ourselves onto this possible machine intelligence. A little like the University of Utah camera (shared recently) - data capture and representation or analysis optimised for a machine, without human beings being necessary in the loop at all - why do we suppose that the needs and drivers of biological life should be those of digital or algorithmic entities ? Certainly, there are similarities - information, energy, patterned self-replication - but does Life 2.0 (so to speak) necessarily possess the emotional or material dependencies of Anthropic wetware ?

    What of a notionally "living", self-replicating, adapting and changing autonomous machine or patterned energetic system that is reduced to the streamlined essential fact of pure self-replication (not unlike the horror stories of science fiction where nanobots replicate endlessly) ? What we consider important in intelligence and self-interest may only be (to some extent) the reflexive artefacts of culture and deep historical, biological development - the vessel which has carried us here.

    Human motivations and needs are only those of the machine in as much as we find it difficult to believe that there is anything beyond us, that there could be anything fundamentally other than us. If machines develop emotions or animal dependencies and attachments it will likely only be because we have created them far too much in our own image. The replication of the process of self-replication, naked and existentially alone - bare without sentiment or emotion, is the unadorned fact of human life, of life. Everything else, as much as we all feel we can never live without or beyond our material and affective persistence in language, communication and culture - it is really just decoration.

    (This makes me sound a bit harsh. I am really saying that the effective elemental algorithmic process or patterned self-replication and energetic recursion underlying all of this does not possess values, feelings - if we were to replicate this in the machine - why should we expect there to be anything other than vestigial, incidental remnants of the creators, ourselves, in our creations ?)

  21. Of course their needs would be different, if they had any. Grass has different needs from humans. The difference between grass and AI is that grass does have needs of its own and it grows wherever it is able to fulfill those needs. AI only happens where we, people, use it to fulfill our needs. AI does not grow on its own, because it has no idea of how or why - it has no needs whatsoever that it is aware of. We are aware of what we need to use AI, but those are our needs because we have them and we manage them.

  22. Sakari Maaranen What about programs that evolve by mutation and selection ? Within their own digital world, don't they have the same basic needs of survival that grass has ?
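
    The sort of program I mean, as a minimal sketch (Python, illustrative only):

        # Mutation and selection in a dozen lines: a population of numbers
        # "survives" according to a fitness function.
        import random

        def fitness(x):
            return -abs(x - 42)  # closer to 42 is "fitter"

        population = [random.uniform(0, 100) for _ in range(20)]
        for generation in range(100):
            population.sort(key=fitness, reverse=True)  # selection:
            survivors = population[:10]                 # keep the fitter half
            # Mutation: each survivor spawns a slightly perturbed copy.
            population = survivors + [x + random.gauss(0, 1) for x in survivors]
        print(max(population, key=fitness))  # typically lands very near 42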

  23. In my opinion, the only correct answer to Rhys's last question is "nobody knows for sure."

    If someone wants to believe, on faith alone, that there is no essential difference between physical grass and a very detailed digital simulation of grass, then I cannot rebut that, because it's simply a statement of faith.

    But the reality is that there are aspects of living matter that are not yet fully understood. Scientists cannot create living matter from scratch. Not yet. They don't know how. Not only that, but as far as we know, nature cannot do that, with the sole exception of the origin of life a few billion years ago. How come life doesn't arise spontaneously all the time? Nobody knows the answer to that. Someday we will know, but today we do not know.

    If someone wants to maintain that there is no essential difference between life and a digital simulation of life, then the burden of proof is on them. I'm not saying that this claim is wrong, I am only saying that I cannot tell, one way or the other, whether the claim is true. I won't take it on faith.

  24. John Nolan, firstly, if you do not understand something, that does not mean someone else doesn't. This is independent of the number of people in each group.

    Secondly, you do not get to decide which explanation - an "essential" difference or no such difference - is the default. Making up a word such as "essential" without defining what it even means in the context, and then suggesting people "do not believe in it" "based on faith alone", is a cheap and meaningless shot. For example, if you "believe based on faith alone" that there is (or is no) hopotipomyloy, then I can certainly say that is bullshit, and that is a statement of fact until you define exactly what you mean by "hopotipomyloy". This fact does not change if you replace that nonsensical word with a mystical-sounding string of dictionary words, also lacking a proper definition, such as "essential difference between mind and matter". Just because you, or a million people, put those six words in a string does not mean such a thing comes into factual existence purely out of your imagination. This is independent of how many times it has been imagined or written in various texts, and of whether that first happened before or after paper was invented.

    The default is that there is no hopotipomyloy until you have 1) defined what exactly you mean by it, and 2) demonstrated that such a thing does indeed exist. I do not need to do either if I say bullshit, and that is a statement of fact. We can put any words into any kinds of strings, but reality does not depend on those strings of words, other than some people going around mumbling them and sometimes doing silly things because of these mumblings. That is reality -- that, metaphorically speaking, we have billions of followers of hopotipomyloy, acting on faith alone, and they keep doing all sorts of real things, but none of that changes the fact that hopotipomyloy is bullshit. End of story.

  25. Sakari Maaranen It would help if you at least tried to respond to the arguments rather than simply dismissing/attacking people who disagree or 'don't seem to get it'. That's not helpful.

  26. Worth adding that explaining something based on the complexity of known physical facts is a physical, factual explanation, even without numerical calculations. For example, saying that mammals have a long evolutionary history and myriad evolved features, all the way from the cellular level to species-typical behaviours - which is very, very complex, especially when studied as part of the entire biodiversity in its environment - is a solid statement of physical facts. None of that is based on faith. It is all demonstrated science, easily available online and in public libraries.

    Claiming that this level of complexity is far beyond the capabilities of AI, and noting that the evolution of real life is based on evolution in nature, is a solid statement of all known facts.

  27. Rhys Taylor I have already done that. You are running in circles. I simply refuse to run in those circles with you. Just read again what I have already stated.

  28. Plus, Rhys Taylor, I have not made a single ad hominem. Every statement I have made above is a relevant argument, including the observation that you are running in circles and that repetition is not interesting.

  29. People frequently talk about perfect or near-perfect simulations ("close enough" simulations), without actually producing the simulation. That's the main weakness in a lot of arguments about AI.

    That's what I mean about "faith."

    If someone says to me, "imagine a thinking mind simulated in software...", my response is, um, no, I'm not going to imagine that. You show me that simulation first.

    That's what I meant about taking things on faith.

    If someone proposes that a simulation of X would be just the same as X, then I want to see the simulation first, before I even begin to investigate whether the simulation is faithful. The burden of proof is not on me.

    Because it may very well be that the simulation is in fact impossible, under the assumed constraints.

  30. If someone makes a claim that there may be some sort of magical quintessence of biological life which is not encompassed by known physics and chemistry, then the burden of proof is on that claimant.

    Until then, the scientific expectation is that the behavior of biological life can be predicted via known chemistry and physics, just like the behavior of a steel frame building or passenger jet.

  31. Right, but there is no need for any magical quintessence. Science has plenty of examples of phenomena which were unknown for a long time, and then later discovered. For example, electromagnetic radiation. Or molecular theory. Or cells.

    Without a microscope, you have no hope of understanding life.

    Nowadays, many thinkers are in the habit of dragging out words like "magical quintessence" and "woo woo" and making accusations of spiritualism whenever you suggest to them that there might just possibly be physical forces and phenomena that are not yet known. But this attitude is not justified by the history of science. The history of science is full of examples of investigations revealing new things. Often, the new things were very unexpected, even by astute scientists.

    If someone wants to claim that "sure, of course you can build a working mind just with software and today's computer hardware," the burden of proof is not on me to prove that this is impossible. The burden of proof is on you to show that it is possible. At the moment, there is no compelling reason for me to accept this proposition even in theory.

    It's manifestly true that human brains are built on completely different hardware than computers, and the functioning of that hardware is not yet understood on a physical level. It is not unscientific to point out that "We don't understand that and cannot build it. Maybe someday, but not now."

  32. "The scientific expectation is that the behavior of biological life can be predicted via known chemistry and physics." => The weakness in your statement is this word "known." That expectation is usually true, but is not absolutely true.

    Of course nobody should propose new physics lightly. I am not proposing any new physics or chemistry.

    But I'm also not ruling it out.

    My claim is limited and simple: if you propose that you already have all the science you need to build a mind, then you need to produce a working model. Let's see it. Until then, it's fair for anyone to doubt this assertion.

    Thought experiments about artificial minds are not evidence, and they are not persuasive.

  33. There is also the case of complexity, as explained above in this thread. Complexity is neither magical nor unknown -- just complex known stuff. Suppose we had a ten-billion-qubit quantum computer -- far more powerful than anything we have today or in the foreseeable future. Biodiversity on Earth is an unimaginably large number of times more complex than such a quantum computer could simulate. Reality has as many quantum states as... well... as exist in whatever region we are supposed to simulate. Even if we understood all the laws of physics (and we do not), reality is still too complex to be faithfully simulated, armed with all that understanding: we have neither the capacity to measure it at a sufficient level of detail (to copy the current state as input to our model) nor the capacity to run such simulations. Claiming otherwise is blatant disregard of physical realities, a luxury not allowed to anyone calling themselves a scientist.
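
    For a sense of the numbers: simulating n qubits on classical hardware requires storing 2^n complex amplitudes, so even modest systems outrun any conceivable memory. A back-of-envelope sketch:

        # Classical simulation of n qubits needs 2**n complex amplitudes.
        n = 50
        amplitudes = 2 ** n              # about 1.1e15 amplitudes
        bytes_needed = amplitudes * 16   # 16 bytes per complex128 value
        print(bytes_needed / 1e15)       # ~18 petabytes for just 50 qubits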

  34. Isaac Kuo There's a difference between "consistent with" and "emergent from".

    That principle was expressed clearly by Michael Polanyi in "Life Transcending Physics and Chemistry", and is addressed by much of complexity research.

    The principles of chemistry themselves are only roughly predictable from physics, and physics cannot even solve the general three-body problem in closed form, let alone predict the dynamics of the trillions of interacting bodies within a human body.

    That's not to say that their behaviour violates the laws of physics. It does not. But to insist that physics and preknowledge alone suffice has been shown demonstrably false.
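
    The unpredictability point can be shown with something far simpler than gravity. The logistic map (a textbook chaotic system, standing in here for the three-body case, which would take much longer to integrate) amplifies a trillionth-sized difference in its starting point to order one within about fifty steps:

        # Chaos in one line of arithmetic: iterate x -> 4x(1-x).
        def logistic(x, steps=50):
            for _ in range(steps):
                x = 4.0 * x * (1.0 - x)  # fully chaotic regime
            return x

        print(logistic(0.3))
        print(logistic(0.3 + 1e-12))  # same rule, wildly different answer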

    Biological and artificial intelligence have followed different evolutionary paths. They're converging in some senses, but not in others. That they are shown to have different characteristics and behaviours, and capabilities and applications, does not surprise me.

    http://inters.org/Polanyi-Life-Irreducible-Structure

    https://pubs.acs.org/doi/abs/10.1021/cen-v045n035.p054

  35. It's got nothing to do with AI, because we're talking about something that is supposedly special about bacteria and viruses that somehow doesn't apply to a computer algorithm emulating them.

    The claim is that somehow real living things have real goals, but a computer program can't have real goals. What does this have to do with AI? You don't need to use complex AI in order to emulate the behavior of grass.

    Are there gaps that we don't have perfect explanations for yet? Sure. Does that mean that we must seriously scientifically consider that there's a god of the gaps? No.

    We don't need to demonstrate a perfect simulation of a bacterium to be confident that a bacterium can be simulated.

  36. Isaac Kuo The fact that an argument is bogus (goal-directedness -- I agree with you, the article is idiotic on that point) does not mean the class of arguments (as I've sketched above) is.

    There's a hierarchical process in problem resolution. We've covered awareness and diagnosis, but are stumbling on etiology.

    https://old.reddit.com/r/dredmorbius/comments/2fsr0g/hierarchy_of_failures_in_problem_resolution/

  37. Not "we", Isaac Kuo. You are talking about such a thing. Blocked him for insisting on using straw-man arguments even after having been corrected.

