June 4, 2006

Machine Learning Success at the Grand Challenge

This story details several machine-learning algorithms and applications that are finally making headway in the software world. The most famous of these is the DARPA Grand Challenge II, won by Stanford's autonomous vehicle, Stanley.

[Photo: stanley_sebastian.jpg (Sebastian Thrun with Stanley)]

Sebastian Thrun, who led the Stanford team, used several new machine-learning techniques in software that literally drove an autonomous car 132 miles across the desert to win the $2 million prize in a recent contest put on by the Defense Advanced Research Projects Agency. The car learned road-surface characteristics as it went. Machine-learning techniques gave his team a productivity boost as well. "I could develop code in a day that would have taken me half a month to develop by hand," Thrun says.
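The article doesn't say how that on-the-fly learning worked, but the general idea can be sketched: use a trusted short-range sensor to label nearby terrain, then train a simple model that classifies the rest of the camera view. Everything below (the class name, the running RGB Gaussian model, the threshold) is an illustrative assumption, not Stanley's actual implementation.

```python
# Sketch of self-supervised road learning: short-range laser data labels
# terrain patches as drivable, and those labels continuously update a
# simple color model used to classify the long-range camera view.
import numpy as np

class RoadColorModel:
    def __init__(self):
        self.mean = np.zeros(3)   # running mean of drivable-pixel RGB
        self.var = np.ones(3)     # running variance per channel
        self.n = 0

    def update(self, drivable_pixels):
        """Fold laser-labeled drivable pixels (N x 3 RGB) into the model."""
        for p in drivable_pixels:
            self.n += 1
            delta = p - self.mean
            self.mean += delta / self.n
            # Welford-style running variance update
            self.var += (delta * (p - self.mean) - self.var) / self.n

    def is_road(self, pixel, k=3.0):
        """A pixel counts as road if it lies within k std devs per channel."""
        return np.all(np.abs(pixel - self.mean) <= k * np.sqrt(self.var))
```

In practice the labels would come from a sensor scanning the ground directly ahead, and the model would be re-estimated continuously as lighting and surface conditions change, which is presumably what "learned as it went" means here.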

Computer scientist Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, says machine learning is useful for the kinds of tasks that humans do easily -- speech and image recognition, for example -- but have trouble explaining explicitly in software rules. In machine-learning applications, software is "trained" on test cases devised and labeled by humans, scored so it knows what it got right and wrong, and then sent out to solve real-world cases.
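That train-score-deploy loop is easy to see in miniature. The sketch below uses a basic perceptron as the stand-in learner; the algorithm choice and the toy data are assumptions made for illustration, not anything from the article.

```python
# Minimal sketch of the workflow Mitchell describes: fit a classifier on
# human-labeled cases, score it on held-out cases so we know what it got
# right and wrong, then apply it to new data.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Fit a linear classifier on labeled training cases (y in {-1, +1})."""
    w = np.zeros(X.shape[1] + 1)               # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified: nudge weights
                w += lr * yi * xi
    return w

def score(w, X, y):
    """Fraction of held-out cases the trained model labels correctly."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean(np.sign(Xb @ w) == y)

# Usage: train on labeled cases, score on held-out ones, then deploy.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])                 # a linearly separable toy rule
w = train_perceptron(X[:150], y[:150])
print("held-out accuracy:", score(w, X[150:], y[150:]))
```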

Mitchell is testing the concept of having two classes of learning algorithms in essence train each other, so that together they can do better than either would alone. For example, one search algorithm classifies a Web page by considering the words on it. A second one looks at the words on the hyperlinks that point to the page. The two share clues about a page and express their confidence in their assessments.

Mitchell's experiments have shown that such "co-training" can reduce errors by more than a factor of two. The breakthrough, he says, is software that learns from training cases labeled not by humans, but by other software.
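A minimal sketch of that co-training loop might look like the following. The fit/confidence/predict interface on the two classifiers and the take-the-top-k rule are assumptions made for illustration, not details of Mitchell's system.

```python
# Co-training sketch: two classifiers, one per "view" of a page (its own
# words vs. the anchor text of links pointing to it). Each round, each
# classifier labels the unlabeled pages it is most confident about, and
# those labels become training data for its partner.
def co_train(clf_a, clf_b, labeled, unlabeled, rounds=10, k=5):
    """labeled: list of ((view_a, view_b), label); unlabeled: list of views."""
    pool = list(unlabeled)
    for _ in range(rounds):
        clf_a.fit([va for (va, _), _ in labeled], [y for _, y in labeled])
        clf_b.fit([vb for (_, vb), _ in labeled], [y for _, y in labeled])
        if not pool:
            break
        for clf, view in ((clf_a, 0), (clf_b, 1)):
            # Nominate the k unlabeled pages this classifier is surest about.
            scored = sorted(pool, key=lambda v: -clf.confidence(v[view]))
            for views in scored[:k]:
                labeled.append((views, clf.predict(views[view])))
                pool.remove(views)
    return clf_a, clf_b
```

The key property is the one Mitchell describes: after the first round, each classifier is learning from labels produced by the other rather than by a human.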

Stuart Russell, a computer science professor at the University of California, Berkeley, is experimenting with languages in which programmers write code for the functions they understand well but leave gaps for murky areas. Into the gaps go machine-learning tools, such as artificial neural networks.

Russell has implemented his "partial programming" concepts in a language called Alisp, an extension of Lisp. "For example, I want to tell you how to get to the airport, but I don't have a map," he says. "So I say, 'Drive along surface streets, stopping at stop signs, until you get to a freeway on-ramp. Drive on the freeway till you get to an airport exit sign. Come off the exit and drive along surface streets till you get to the airport.' There are lots of gaps left in that program, but it's still extremely useful." Researchers specify the learning algorithms at each gap, but techniques might be developed that let the system choose the best method, Russell says.
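Alisp embeds these choice points in Lisp, but the shape of a partial program is easy to sketch in Python. The fixed driving skeleton below leaves one gap, a lane choice on the freeway, filled by a small learned component; the LearnedChoice class, the world interface, and the reward signal are all illustrative assumptions, not Alisp's actual semantics.

```python
# Partial-programming sketch: hand-coded structure with one learned gap.
import random

class LearnedChoice:
    """A gap in the program: picks among options and learns from reward."""
    def __init__(self, options, epsilon=0.1):
        self.q = {o: 0.0 for o in options}    # estimated value per option
        self.epsilon = epsilon

    def pick(self):
        if random.random() < self.epsilon:    # explore occasionally
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)    # otherwise exploit

    def reward(self, option, r, lr=0.2):
        self.q[option] += lr * (r - self.q[option])

# Fixed program skeleton; `world` is a hypothetical simulator interface.
lane_choice = LearnedChoice(["left", "middle", "right"])

def drive_to_airport(world):
    while not world.at_on_ramp():
        world.follow_surface_streets()        # fully hand-coded behavior
    while not world.at_airport_exit():
        lane = lane_choice.pick()             # the gap: learned behavior
        r = world.drive_freeway(lane)         # returns a progress reward
        lane_choice.reward(lane, r)
    world.follow_surface_streets()
```

As Russell describes it, the point is that the hand-coded structure narrows what must be learned down to the few decisions the programmer couldn't specify.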

Posted by elkaim at June 4, 2006 11:56 PM