Saturday, September 20, 2008

A simple and bad algorithm for teaching

This term I'm teaching an undergraduate course in Artificial Intelligence (AI). I got the job just as the semester started, so I didn't get a chance to develop my own approach to teaching this very splendid and worthwhile subject. Of course, as a mixed blessing, there came the inevitable recourse to Russell & Norvig, who have shepherded thousands of similar courses over the last ten years, all over the place.

Book and lecture notes in hand, I've been so far following my first pass at a teaching algorithm:

1. Read the chapter. Make sure you understand it.
2. Prepare lecture slides.
3. Do a couple of exercises. Choose a couple for the next problem set.
4. Go over the slides a couple of times, mumbling things you want to say that are not explicitly written on the slide.
5. Enter classroom; deliver lecture; forget to use all your little "pedagogical devices"; watch students yawn...
6. Goto 1
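In the spirit of the subject, the loop above might as well be written down in Python. Everything here -- the Chapter class, the function, the "slides" -- is made up purely to make the steps concrete, not actual course software:

```python
# A deliberately naive rendering of the teaching algorithm above.
# All names are hypothetical; this is a sketch, not a scheduler.

class Chapter:
    def __init__(self, title, exercises):
        self.title = title
        self.exercises = exercises

def teach_course(chapters):
    lectures_given = []
    for chapter in chapters:                          # step 6: goto 1
        notes = f"notes on {chapter.title}"           # step 1: read, understand
        slides = [notes, "yet another bullet list"]   # step 2: prepare slides
        problem_set = chapter.exercises[:2]           # step 3: do (and assign) a couple
        for _ in range(2):                            # step 4: rehearse, mumbling
            pass
        lectures_given.append((slides, problem_set))  # step 5: yawns ensue
    return lectures_given
```

Note the complete absence of any feedback from the students back into the loop -- which is, of course, the bug.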

This is as boring for the poor souls in my class as it is unsatisfying for me. I've "only" lost 4 students so far (out of a high point of 19), mostly due to a harsh problem set early on. But I need to come up with a better algorithm.

At least I have a nice subroutine in the "deliver lecture" department. At the beginning of each class, I call on random students to tell me, in their own words, about the concepts we discussed last time. In the spirit of modularity and reuse, I took this feature from a library of little gems by a senior faculty member in my department. Thank you, senior faculty member!

Saturday, September 13, 2008

On the complexity of stuff

My last post got me thinking, which in my case means asking lots of questions and refusing to think through the answers, at least at first. Later, arguably, I will forget all about these very pressing questions and so will never find the answers -- until one day I read something written about this elsewhere and bemoan the passing of time and my laziness.

But back to the pressing questions. One afternoon over tea with Dr. Peshkin, he raised an excellent question: what is the complexity of physical objects? In computational complexity theory, there are complexity classes for problems. If an abstract machine (whose abilities can be approximated by a modern PC) can find the answer in time that grows only polynomially with the size of the input, the problem belongs to one complexity class (called the 'polynomial-time' class, or P). If you need a machine capable of pursuing several solution lines at once and non-deterministically switching between them, then the problem belongs to another, intuitively harder class (called NP, for 'nondeterministic polynomial time'). There are more classes, but the point is: you can tell whether a problem fits into one of them. This is a useful thing to know when writing programs: you don't want to tell your computer to do something that's provably going to take from now until the end of time.
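To make the distinction concrete, here's a little Python sketch of my own (not from anywhere in particular): brute-force subset sum tries all 2^n subsets, which is exponential work on a deterministic machine, while *checking* a proposed answer takes only polynomial time -- and that cheap checkability is exactly what puts a problem in NP:

```python
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Deterministic search: tries all 2^n subsets -- exponential time."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to target

def verify_certificate(nums, target, subset):
    """Checking a proposed solution ('certificate') is polynomial time --
    the hallmark of a problem in NP."""
    return all(x in nums for x in subset) and sum(subset) == target
```

A nondeterministic machine, in effect, guesses the right certificate and runs only the cheap check; the rest of us are stuck with the loop.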

So is there a similar classification for objects? Objects are kind of like problems if you try to make them with your new personal fabber (PF). You need to tell the fabber how to make them, of course. And thus I think we can define useful complexity classes for objects in terms of the fabber that can make them and the instructions that it would require.

But what's the complexity of a fabber? Why, it's the class of objects it can make! :-) There's got to be a better way, of course, but maybe we can establish equivalence classes, as in computability theory. Push-down automata recognize exactly the context-free languages. A 2D laser printer can produce planar objects. OK, this is a really uninformative example, but I have to think more to come up with useful classes in the complexity of stuff.
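To unpack the computability half of the analogy: a push-down automaton is just finite control plus a stack, and that's already enough to recognize a context-free language like balanced parentheses. A quick sketch (mine, purely illustrative):

```python
def balanced(s):
    """Recognize the context-free language of balanced parentheses
    using nothing but a stack -- in effect, a push-down automaton."""
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)   # push on an open paren
        elif ch == ')':
            if not stack:      # close with nothing to match: reject
                return False
            stack.pop()        # pop the matching open paren
    return not stack           # accept iff the stack is empty
```

The hope is that fabbers admit a similar hierarchy: each extra capability (a stack here, a third axis or a second material there) buys you a strictly larger class of things you can make.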

For now, let's all agree that stuff is pretty complex.

Friday, September 12, 2008

On the computability of universal rapid-prototyping

The new issue of Seed magazine (No. 25) has an article on machines that can make a copy of themselves. Well, actually, they can only make all the parts (other than batteries) that are needed to make a copy of themselves. That's not half-bad either. A machine that's cheap (and for which the materials are cheap!) and that can make all the parts for a copy of itself -- that's a big step towards universal availability of rapid-prototyping. And that may very well lead to something like personal factories, by analogy with personal computers and a similarly revolutionary idea.

By the way, in addition to RepRap, mentioned in Seed, Technorati, and dozens of other online sources (presumably because the project is affiliated with Google?), Fab@Home and its inventor Evan Malone (then at Cornell) have been pursuing universal rapid-prototyping since at least 2006.

The very idea raises questions about the interplay of the computational and the physical. If you can readily manufacture articulated, controllable things in your home, how do you then make them do useful things? As of now, fabbers (for that is what we shall call these nifty machines) don't make computer chips or electronics as part of the process. But people are working on it, and it doesn't defy the imagination to consider building (programmable?) circuits directly into the fabricated parts. Ah. So many things to ask from a computational perspective. Here's one: what is the computational class of circuits built by a fabber? Meaning: what class of problems can these circuits compute? What about any physically realizable fabber? There's a research project in there, yours for free.

More on all that later.