Allie Ford Reviews Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World

Review by Allie Ford

Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World
by Leslie Valiant

Book’s Homepage: http://www.probablyapproximatelycorrect.com

If we are ever to develop artificial intelligence (AI), we first need to understand human intelligence. Even if we ignore the fact that there are multiple forms of intelligence, not just one, we currently struggle to replicate even a single type of intelligence to a reasonable standard. The need for AI is increasing, though, especially as our world is shaped by the data we produce, and by the sheer amount of it. Making sense of this new information landscape will require the ability to mine the data: to explore, recognise, identify and classify useful ‘needles’ among the vast ‘information haystack’ being generated every second. Data mining at this level is never going to be a human-only endeavour; we are going to need computers and programs with such abilities to help us. But with the types and forms of data ever changing, those computers are going to need to learn to process the data, not just be programmed for it by humans.

Leslie Valiant, in his book Probably Approximately Correct, argues that understanding learning is key to understanding, and replicating, intelligence. At present, though, we face numerous barriers to understanding how learning works, in humans or in computers. Some of the difficulties include:

  • Human babies are born encapsulating billions of years of evolution, and are hardwired to learn, making it difficult to separate the ‘hardware’ (evolution-based) and the ‘software’ (individual-based) aspects of what happens during learning. We don’t actually understand, to any real degree, many of the algorithms at play in evolution, cognitively or biologically. Even trying to model evolution is difficult: there are so many possible algorithms that the time taken to model all (or even a small fraction) of them would be immense; if we try to narrow the field to ‘plausible’ algorithms, however, we run the risk of leaving out the very thing we’re looking for, either because we have assumed it away or because we don’t recognise the ‘needle’ when we see it. We don’t even know what it is that we don’t know!
  • We often ‘learn’ indiscriminately, extrapolating from small amounts of data (even from a single experience) because nothing has yet contradicted our assumptions. This occurs even if the data are seriously flawed, or if we do not yet have enough existing knowledge to prepare us to learn the information or to improve a skill: think of kids in school who miss an important class on a topic and are then dropped into the next session with an expectation of full understanding; they need to take shortcuts to assimilate the information coming at them, even if those shortcuts have potentially damaging outcomes in the long term.
  • Effective teaching is often a product of effective individual curriculum design, where teachers identify what aspects of learning would be achievable at a given time, based on the learner’s previous knowledge and performance, and individualise it for each specific learner. This task is tricky for a class full of humans who can display and discuss their knowledge, and who have followed a fairly set curriculum. With robots and computers, it is very time consuming to build the ‘prior’ learning, and it’s difficult to understand their past ‘experience’, let alone recognise the most effective way to build on it. (This doesn’t mean that people aren’t trying to teach computers anyway – and succeeding!)
  • The more obvious something seems, the less likely we are to recognise that it has been learnt, let alone consider how we learnt it! We don’t actually factor everything into our learning, only the information and experiences in memory at that time. The problem is that memory is itself the product of biases, attention and filtering by our inbuilt algorithms – algorithms we don’t even know exist.

It is possible that learning and/or evolutionary algorithms lie at, or near, the limits of computational feasibility, a limit that is currently undefined and could lie far off. This means that we could eventually be in a position to understand learning and evolution mathematically, in the same way that we understand gravity, or that this goal may never be attainable (though we might get close, or get lucky).

Valiant proposes Probably Approximately Correct (PAC) learning as a framework for approaching the challenge of understanding learning and evolution with a limited number of assumptions. He considers both the statistical and the computational components of this challenge, and the limits of each. I confess I didn’t understand everything, especially when the explanation of one algorithm required references to numerous other algorithms! I have a background in physics and mathematics, and a strong interest in artificial intelligence, but this book still went over my head in places due to the sheer amount of information. It’s definitely not light reading or pop science: you need a grounding in the multiple disciplines related to AI research to really get to grips with much of the material on anything more than a superficial level.
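To give a concrete feel for the phrase itself, here is a minimal sketch in Python (my own illustration, not an example from the book, and with all names and numbers made up): a learner sees random labelled examples of an unknown threshold and, given enough of them, its guess is usually (probably) within a small tolerance (approximately) of the truth.

```python
# A minimal, illustrative sketch of the "probably approximately correct" idea:
# learn an unknown threshold on [0, 1] from random labelled examples, then check
# how often the learned hypothesis lands close to the truth.
import random

TRUE_THRESHOLD = 0.37   # hypothetical target concept: x is positive iff x >= 0.37
N_SAMPLES = 200         # labelled examples drawn from the same distribution
N_TRIALS = 1000         # repeat learning to estimate how often it succeeds
EPSILON = 0.05          # "approximately": the tolerated error

successes = 0
for _ in range(N_TRIALS):
    xs = [random.random() for _ in range(N_SAMPLES)]
    data = [(x, x >= TRUE_THRESHOLD) for x in xs]
    # Simple learner: guess the smallest positive example seen as the threshold.
    positives = [x for x, label in data if label]
    guess = min(positives) if positives else 1.0
    # With x uniform on [0, 1], the true error is the gap between guess and truth.
    if abs(guess - TRUE_THRESHOLD) <= EPSILON:
        successes += 1

print(f"Hypothesis within {EPSILON} of the truth in {successes / N_TRIALS:.0%} of trials")
```

With a couple of hundred examples the success rate printed at the end is very close to 100%; with only a handful of examples it drops sharply, which is the trade-off between data, error and confidence that the PAC framework makes precise.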

Overall, though, Valiant raises some questions to provoke thought in a wide range of professions – from teaching to AI research, computer programming to evolutionary biology. I found some of the book a struggle as it is quite abstract and dry in parts. More real-world examples would be useful to readers like me, with an interest in the area, but not a detailed knowledge. The last couple of chapters were much more practical and grounded; examples relating to email spam and natural language handling are probably the most accessible to those without a strong background in this area.

For me, the book didn’t quite live up to its promise – but there were lots of gems hidden away – if you could mine enough of the data packed into the book to recognise them!

Allie Ford

Allie fell in love with science when she spotted test tubes full of different coloured solutions at high school. She studied astrophysics and chemistry at university. She taught Bioastronomy for several years, as well as being an active participant in the Science in Schools program, and touring Australia as a cast member in the RiAus/BBC Science of Doctor Who Live show. Allie loves reading and learning new things, often ‘helped’ by her two parrots (who prefer eating books to reading them).
