Computer Intelligence: Deep Learning
Is true artificial intelligence just around the corner? The journal Nature thinks so.
It’s called deep learning–essentially, teaching a machine to learn the way a three-year-old does. That’s how a Google supercomputer discovered a recurring phenomenon on the internet–cats.
Deep learning involves neural networks that change in response to experience. That’s how a toddler learns–by seeing an event, responding, and seeing what happens, again and again. The game “peekaboo” is an example. In effect, we are teaching supercomputers to play peekaboo. So they grow up to be R2D2.
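For the technically curious, here’s a toy sketch (mine, not from the Nature piece) of that see-respond-adjust loop in Python: a single artificial “neuron” whose weights get nudged every time its guess doesn’t match what actually happened.

```python
import numpy as np

rng = np.random.default_rng(0)

# One artificial "neuron": its weights get adjusted after every experience,
# mirroring the see-respond-see-what-happens loop described above.
w = rng.standard_normal(3) * 0.1
lr = 0.5

def respond(x):
    """The network's current guess for input x (a number between 0 and 1)."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# Each "experience" is an input, the network's response, and the real outcome.
experiences = [(np.array([1.0, 0.0, 1.0]), 1.0),
               (np.array([0.0, 1.0, 0.0]), 0.0)] * 50

for x, outcome in experiences:
    guess = respond(x)
    w += lr * (outcome - guess) * x   # change the weights in response to what happened

print(respond(np.array([1.0, 0.0, 1.0])))   # after practice, close to 1.0
```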
Facial recognition is a major goal, apparently advancing by leaps and bounds. To get there, the computer progresses through four layers: (1) telling light from dark pixels; (2) recognizing edges and circles; (3) recognizing more complex shapes, such as an eye; (4) defining a face.
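And here’s an equally toy sketch of the four-layer idea, with invented layer sizes and random stand-in weights (a real system would learn these from labeled face images), just to show how each layer re-describes the output of the one below it.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy "photo": 32x32 grayscale pixel intensities, flattened into a vector.
pixels = rng.random(32 * 32)

# Random stand-in weights; a trained network would have learned these.
W1 = 0.05 * rng.standard_normal((32 * 32, 256))   # layer 1: light vs. dark pixels
W2 = 0.05 * rng.standard_normal((256, 128))       # layer 2: edges and circles
W3 = 0.05 * rng.standard_normal((128, 64))        # layer 3: parts, such as an eye
W4 = 0.05 * rng.standard_normal((64, 1))          # layer 4: whole face or not

h1 = relu(pixels @ W1)          # each layer re-describes the one below it
h2 = relu(h1 @ W2)
h3 = relu(h2 @ W3)
score = 1.0 / (1.0 + np.exp(-(h3 @ W4)))   # probability-like "this is a face" score

print(f"face score: {score[0]:.3f}")
```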
What is deep learning good for? Translating foreign languages, and testing drug candidates. Forensic identification, no doubt.
Now we’re giving our toddler computer “a stack of scanned textbooks” to “pass standardized elementary-school science tests (ramping up eventually to pre-university exams).” R2D2 goes to college.
BTW, we’ll hear more from R2D2 at Boskone this year. 🙂
Biggest problem–convincing people that the computers are right when their conclusions go against human nature, and/or building a sense of safety into deep learning, so the computer doesn’t rip us apart to find out about anatomy.
The problem with deep learning (DL), as mentioned in the article, is the huge amount of computing power needed. Hinton used a Restricted Boltzmann Machine (RBM) to reduce that requirement, but even then it was huge. RBMs are the current hot approach in machine-learning contests, but very few teams can use this technique.
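For anyone wondering what an RBM update actually looks like, here is a bare-bones sketch of one contrastive-divergence (CD-1) step in NumPy; the layer sizes and learning rate are invented, and a real run would sweep many mini-batches of real data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 784, 128        # e.g. 28x28 binary images; sizes are made up
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)             # visible-unit biases
b_h = np.zeros(n_hidden)              # hidden-unit biases
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a batch of binary visible vectors (rows of v0)."""
    global W, b_v, b_h
    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visibles, then re-activate the hiddens.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Gradient estimate: data correlations minus reconstruction correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Example: one update on a random binary batch of 32 "images".
batch = (rng.random((32, n_visible)) < 0.5).astype(float)
cd1_update(batch)
```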
DL is best suited for pattern finding. In this sense it is similar to neural nets. However, at a more abstract level, we are probably better off with symbolic processing, as this is far more efficient. WATSON was mentioned as a success here. However, Doug Lenat’s Cyc was a predecessor which, despite all the knowledge it had (20+ years of curated input), I don’t believe did that well.
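To make the contrast concrete, here’s a tiny, made-up example of symbolic processing: explicit facts plus one hand-written inference rule, forward-chained to a fixed point. Cyc and WATSON are of course vastly more elaborate, but the flavor is the same, and no training data or learned weights are involved.

```python
# Facts are (subject, relation, object) triples; the single rule below
# encodes the transitivity of "is_a". Contrast with DL, where the "rules"
# are implicit in millions of learned weights.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def forward_chain(facts):
    """Derive new facts from the transitivity of 'is_a' until nothing new follows."""
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                    facts.add((a, "is_a", d))
                    changed = True
    return facts

print(forward_chain(facts))   # now also contains ("cat", "is_a", "animal")
```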
I think the robotic breakthrough happens when we get cheap, high density, neuromorphic chips. This should hugely lower the size, cost and energy demand for DL and allow our machines to be smart and mobile. Couple DL with symbolic processing and you may have a formidable system.
The next problem will be what to do when, in a very short time, work is almost completely filled by robots, displacing a large fraction of the workforce. The transition will be unpleasant. I also worry that agreements not to allow such autonomous robots to make kill decisions on a battlefield will be quickly violated. I just watched Elysium and was nonplussed as to why Max would be put at risk rather than using the available robots. It was a plot driver and a very “in your face” political statement, but it made no contextual real-world sense.
Being a non-tech person, I’ve only read the non-technical articles about DL …
What bothers me, in that I think it’s being missed/overlooked, is that a 3-year-old has built-in value systems, sensory plus emotional, that sift the input (data) even before any attempt to fit inputs together into a comprehensible story/explanation. That’s on an individual level … Then a 3-year-old typically also has other humans nearby with whom he or she can test that learning (theory) at a yet higher level.
What do current designed-for-DL machines/computers have or use that provides a similar multi-level function?
Been a while since my last visit, caught up on some previous – now closed – topics …
Here are a couple of Richard Feynman videos discussing points that are very applicable to this topic (computer intelligence), as well as to what needs to be done re: education reform. Apart from being a Nobel laureate in physics, Feynman was considered one of the best communicators/educators of his era.
Mathematicians vs. Physicists
Feynman: Knowing versus Understanding
Overall, I think that any laundry list of what topics/modes of reasoning kids need to learn in school should come with validated (tested) appraisals of the strengths and weaknesses of each topic/mode of reasoning. Otherwise it’s not education: simple-minded rote learning at best, brainwashing/dogma at worst.