Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 23 April 2018

The different kinds of AI : turning data into conclusions

A nice overview presenting the differences between, and the importance of, machine learning (where a computer learns from new data, finds new trends, or otherwise acquires new factual information), intelligence augmentation (wherein computers are used to augment human intelligence by doing the more laborious tasks for us, e.g. finding trends in large data sets, but don't choose those tasks for themselves), intelligence infrastructure (the network connecting all the flows of data from different sources, together with an understanding of how they relate to each other), and of course true artificial intelligence in the classical sense (a machine that can genuinely think about and understand data, and make or present choices based on its conclusions). All of these can have important consequences. The last does not yet exist and probably won't anytime soon, but that doesn't negate the others.

The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
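
For concreteness, here's a minimal sketch of what provenance bookkeeping might look like in code. This is purely my own illustration, not anything from the article; the `Provenance` and `Measurement` types and the `infer_mean` function are all hypothetical:

```python
# A toy model of data provenance: every derived quantity carries a
# record of where it came from and what it was computed from.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Provenance:
    source: str            # where the data arose: a lab, a device, a study
    collected: datetime    # when it was recorded or derived
    derived_from: List["Provenance"] = field(default_factory=list)

@dataclass
class Measurement:
    value: float
    provenance: Provenance

def infer_mean(measurements: List[Measurement], label: str) -> Measurement:
    """Derive a new quantity whose provenance chains back to every input,
    so a downstream consumer can ask where the number actually came from."""
    mean = sum(m.value for m in measurements) / len(measurements)
    prov = Provenance(source=label,
                      collected=datetime.now(),
                      derived_from=[m.provenance for m in measurements])
    return Measurement(mean, prov)
```

The point is only that every inference keeps a trail back to its sources - the kind of bookkeeping a trained human does case-by-case, and that a planetary-scale system would have to automate.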

I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

...we might imagine living our lives in a “societal-scale medical system” that sets up data flows, and data-analysis flows, between doctors and devices positioned in and around human bodies, thereby able to aid human intelligence in making diagnoses and providing care. The system would incorporate information from cells in the body, DNA, blood tests, environment, population genetics and the vast scientific literature on drugs and treatments. It would not just focus on a single patient and a doctor, but on relationships among all humans — just as current medical testing allows experiments done on one set of humans (or animals) to be brought to bear in the care of other humans. It would help maintain notions of relevance, provenance and reliability, in the way that the current banking system focuses on such challenges in the domain of finance and payment. And, while one can foresee many problems arising in such a system — involving privacy issues, liability issues, security issues, etc — these problems should properly be viewed as challenges, not show-stoppers.

I do have a quibble with the author's discussion of genuine AI though :

It is sometimes argued that the human-imitative AI aspiration subsumes IA and II aspirations, because a human-imitative AI system would not only be able to solve the classical problems of AI (as embodied, e.g., in the Turing test), but it would also be our best bet for solving IA and II problems. Such an argument has little historical precedent. Did civil engineering develop by envisaging the creation of an artificial carpenter or bricklayer? Should chemical engineering have been framed in terms of creating an artificial chemist?

Well, the historical argument doesn't seem all that relevant to me; different industries have different goals. Labour considerations aside, a bricklaying robot or a robotic chemist seems like a very good idea to me* - just because those industries haven't proceeded with a view to creating a perfect AI doesn't mean others won't.

* Obviously, an artificial astronomer is just a ludicrous idea.

A related argument is that human intelligence is the only kind of intelligence that we know, and that we should aim to mimic it as a first step. But humans are in fact not very good at some kinds of reasoning — we have our lapses, biases and limitations. Moreover, critically, we did not evolve to perform the kinds of large-scale decision-making that modern II systems must face, nor to cope with the kinds of uncertainty that arise in II contexts. One could argue that an AI system would not only imitate human intelligence, but also “correct” it, and would also scale to arbitrarily large problems. But we are now in the realm of science fiction — such speculative arguments, while entertaining in the setting of fiction, should not be our principal strategy going forward in the face of the critical IA and II problems that are beginning to emerge. We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda.

I've always thought that the goal was a human-like intelligence only in the sense that it would have a capacity to understand and reason about the world, to actually think about things and not just correlate data. While I agree with this paragraph, I don't think it's (too) unreasonable to speculate about a machine that can process data, check the validity of the data, correct for any possible biases, and then form an emotionless conclusion with carefully-stated assumptions as a result. That's always been my interpretation of the goal of AI, rather than, or at least in addition to, forming one that's more similar to the madcap blob of goop that is the human brain. I fully agree with the author that such a goal is a long way off, however.
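
As a toy illustration of what the "correct for any possible biases" step could mean mechanically - again entirely my own sketch, nothing from the article, with made-up names and numbers - here is how a known sampling bias can be undone by inverse-probability weighting:

```python
# Toy sketch (not from the article): if we know how likely each data
# point was to end up in the sample, we can reweight by the inverse of
# that probability to recover an unbiased estimate of the population mean.
def weighted_mean(values, sample_probs):
    """Inverse-probability-weighted mean over a biased sample."""
    weights = [1.0 / p for p in sample_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical numbers: one group was sampled at p = 0.8, another at
# p = 0.2, so the naive mean (1.8 here) under-represents the rare group.
values = [1.0, 1.0, 1.0, 1.0, 5.0]
probs = [0.8, 0.8, 0.8, 0.8, 0.2]
print(weighted_mean(values, probs))  # 3.0, the bias-corrected estimate
```

Of course this only works when the bias is known and quantifiable; the hard part, which is exactly where something closer to genuine understanding would help, is spotting the biases you didn't know to model in the first place.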

https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

