Tuesday, May 5, 2009

Guaranteed to result in nothing

1. Set up a research meeting
2. Do nothing
3. Until time = t-1
4. Cancel research meeting

Also guaranteed to result in nothing if research meeting is replaced with any sort of deadline. A very generally awful algorithm, really.
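In the spirit of the post, here is the procedure as a (deliberately useless) Python sketch; the function names are mine and stand in for whatever calendar machinery you like:

```python
def schedule_research_meeting():
    # Step 1. The only step that produces anything at all.
    return {"topic": "research", "status": "scheduled"}

def cancel(meeting):
    # Step 4. Undo step 1, restoring the initial state of the world.
    meeting["status"] = "cancelled"
    return meeting

def guaranteed_to_result_in_nothing(t):
    meeting = schedule_research_meeting()  # 1. set up a research meeting
    now = 0
    while now < t - 1:                     # 2-3. do nothing until time = t-1
        now += 1                           #      (busy-waiting counts as nothing)
    return cancel(meeting)                 # 4. cancel research meeting
```

As advertised, the net effect after t time steps is nothing, which generalizes nicely to any deadline-shaped data structure.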

Saturday, February 7, 2009

Engineers who are citizens

After a 2-month hiatus, yesterday's talk by David Douglas inspired me to write again. The talk was entitled "Citizen Engineers, Computing and Clouds". So I went expecting a talk on regular folk who do engineering, perhaps the open source community, perhaps rapid prototyping in your home, perhaps harnessing the ideas and contributions of hobbyists for a greater good. The abstract mentioned engineering and computing's relationship to current social and world problems, specifically environmental impact, so I went hoping for a hint of an idea as to how computing, computer science, could help or hurt sustainability and the fight against climate change.

It was an interesting talk, but Douglas's "citizen engineers" are not regular folk doing engineering in their free time (as one might have guessed by extension from "citizen journalists"). Instead, I heard some interesting stuff about the carbon footprint of data centers (on a par with air travel), the lack of openness about power consumption and carbon footprint by Google, and how companies should incorporate environmental impact reduction into their business practices lest they go out of business.

Which is all very well, but here is what I would like to know. Are there advances in computer science that can be immediately applied to sustainability problems, and how? Is cloud computing better for the environment than older forms of computing, and how? Some obvious things were mentioned: data center computing doesn't require nearly as many plastic components as end-user computing, because who cares what the machines look like inside a data center? But here is what I would have liked to hear and didn't: here's a sustainability problem -- you can cast it in this very cool new way, which immediately suggests a computational solution.

Maybe the deeper wisdom of tackling world problems with computing is that you can't address any abstract, computational aspect of any of them without thinking at the same time about very physical and mundane things like amount of plastic and power required. I'm thinking of this as the analog to the embodied intelligence idea: the physical imprint, the body that carries out the computation will be part of the problem and part of the solution. Maybe that was the message of the talk, that I'm finally getting.

Wednesday, December 10, 2008

Not enough soul

Today I was told that my teaching and research statements (those well-known venues for personality spice) lack a certain je ne sais quoi. I believe the particular words used were "not enough soul".

Which for no reason made me think about a time long gone when I was a first-year CS/CogSci undergrad attending a distinguished lecture by a somewhat famous neuroscientist. Having understood very little of her talk, I decided nonetheless to ask her this pressing question: "What is your take on the mind-body problem?" And then was very disappointed when she started answering by talking about soul (and how she wasn't going to talk about it because it isn't a scientific concept). Sometimes I shudder to think that I'm no less scientifically naive now than I was then.

What's with the soul? I'd like to see an algorithm for injecting soul sprinkles into a research statement (yeah, yeah, I know...). The person who advised me on my soul deficiency had a few suggestions in mind: a telling anecdote from childhood here, a personal revelation there. Which is exactly the opposite of FSP's advice (although her advice is about grad school applications). Funny thing is, my soul advisor is something of a "FSP" herself. Go figure.

Thursday, October 23, 2008

How motor skills are different from language

Some people like to argue that the development and learning of motor skills in humans is a lot like the development and learning of language in humans: intertwined, hard to separate, probably with a large element of hardwired knowledge and only a few parameters to tune through experience.

And sure, it may be helpful to think of motor skills this way, and people talk of motor primitives (kind of like words) and grammars that tell you how to combine them, which further helps with the analogy. From a computational standpoint, of course, natural language is a fascinating system clearly governed by rules that nonetheless admit a large degree of fuzziness in practice. Despite several good linguistic models of language production, it's still really hard to make a computer do a decent job of things like conversation and translation. There is, though, spectacular progress, and a whole body of research on the subject. So if motor skills are like natural language, then maybe we can apply things we know about the latter to the former. "Natural" motor skills (their fluidity, robustness to perturbation, and extremely precise yet very compliant control) are hard for robots perhaps the same way language is hard for computers. And it's quite probable that movements are compositional the way sentences are.

But I'd like to point out three fundamental differences.

1. Kinds of compositionality

Utterances are compositional according to the rules of grammar. Things like "apod ihfoa dhf" or "but the why cow give tree storm" don't count as language, even though they are strings composed of letters or phonemes (or words) strung together. On the other hand, if I start twitching and winking and throwing my limbs about, this is acceptable movement and maybe even a dance. The point is: movement is compositional (composed of little things you can do called motion primitives), but nothing dictates its acceptability other than physical laws. If a motion sequence is impossible for a human being, it's because the joint doesn't bend that way, or the muscle isn't strong enough, or the required degree of freedom is lacking from our body, etc. In contrast, as demonstrated above, I can make any number of utterances that don't mean or communicate anything because they don't obey the restricted, symbolic compositionality rules (grammar) of a language. Why is this distinction relevant?

2. Kinds of available information for learning

How utterances vs. movements are created, and which ones are acceptable, is relevant because it directly bears on the question of how a human infant/child could possibly learn the two sets of skills. It's widely accepted in linguistics that there isn't enough information (enough examples of what does and doesn't constitute a grammatical sentence) to learn (infer) a complete grammar from nothing. The conclusion is that we must be born with a language organ in the brain that essentially has a hardwired grammar, the parameters of which need to be set by learning via exposure to a particular language. In particular, children don't hear nearly enough ungrammatical utterances, nor are they usually corrected when they say ungrammatical things, to reliably identify the grammar of a language.

Does the same hold in the case of motor skills? Let's see. Every waking minute of every day of our lives, we move our muscles and receive sensory feedback on our movements. We actively send neural control signals to our muscles and directly sense what happens as a result for everything that we do, including sitting, standing, breathing and blinking. Clearly, there are instinctive motor skills such as breathing and blinking that don't require any learning. We have the required neural circuitry at birth. But sitting and standing do not come at birth. Nor do any directed limb movements. But those muscles too get commanded relentlessly and at a high rate. And also relentlessly, at a high rate, we perceive the results of our actions in the form of sensory signals that come from proprioception, touch, and vision (and also hearing and smell and taste sometimes). But mostly proprioception. So it seems like a true wealth of experience to draw on for learning how to move.

Notice also that there is a wealth of negative experience, or negative examples of how not to move, for human infants. They lose their balance all the time, they stumble, they fall, they bite their tongues ... They control their muscles in all sorts of ways they shouldn't, and they immediately get a failure signal, courtesy of the laws of physics. Again, there is plenty to learn from. And this disparity is specifically due to the fact that motor skill compositionality is not based on rules for stringing symbols, but directly and only on the physical capabilities of the body and the physical universe.

3. Kinds of goals

Utterances are vehicles for communication. Movements are vehicles for displacement. While the intended meaning of an utterance changes with its grammatical structure, any movement that achieves the desired displacement is generally acceptable.

There is surely more to say on the subject, but these three things lead me to believe motor skills are more learnable and less hardwired than linguistic abilities.

Tuesday, October 21, 2008

Engineering, not computation?

I heard Noah Cowan of JHU give a great talk today. He talked about having an engineer's perspective, a systems perspective, applied to scientific questions in biology. There is so much to be learned about the neural control of animal behavior, and he and his colleagues are figuring out how animals close the sensorimotor feedback loop. It turns out, for example, both theoretically and experimentally, that cockroaches can't be using simple proportional control when they are running at 1.5 m/s along a wall, but a PD (proportional-derivative) model does in fact predict the roach motion pretty well. Or there are these weakly electric knifefish that really like hiding inside tubes. If you move the tube around, the fish follows along as if it were attached to the tube with a spring. And it turns out that a harmonic oscillator (spring-mass) mechanics model correctly predicts the behavior at various frequencies of moving the tube around, whereas a kinematic model does not.
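The qualitative prediction of the spring-mass story is easy to see in simulation. Below is a minimal sketch (my own generic parameters, not Cowan's fitted model): the fish position x is tied to the moving tube r(t) by a spring and damper, and the tracking gain is near 1 at low tube frequencies but falls off well above the system's natural frequency, which is exactly the kind of frequency response a kinematic model wouldn't produce.

```python
import math

def simulate_tracking(freq_hz, m=1.0, b=4.0, k=20.0, dt=0.001, t_end=20.0):
    """Fish position x is 'attached' to the moving tube r(t) by a
    spring (k) and damper (b): m*x'' = -k*(x - r) - b*(x' - r')."""
    x, v = 0.0, 0.0
    peak = 0.0
    t = 0.0
    w = 2 * math.pi * freq_hz
    while t < t_end:
        r = math.sin(w * t)           # tube position (unit amplitude)
        rdot = w * math.cos(w * t)    # tube velocity
        a = (-k * (x - r) - b * (v - rdot)) / m
        v += a * dt                   # semi-implicit Euler integration
        x += v * dt
        t += dt
        if t > t_end / 2:             # measure after transients die out
            peak = max(peak, abs(x))
    return peak                       # steady-state tracking amplitude

low = simulate_tracking(0.1)   # slow tube motion: fish tracks it closely
high = simulate_tracking(3.0)  # fast tube motion: response is attenuated
```

With these (made-up) parameters the natural frequency is about 0.7 Hz, so the 0.1 Hz tube is tracked with gain near 1 while the 3 Hz tube is followed only weakly.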

As I paid close attention to the controller-plant-feedback diagrams, the second-order differential equations and the low-pass, high-pass and band-pass filter discussions, I noticed that all of these useful tools have nothing to do with computation or a computational worldview. The engineer has a formidable toolbox for use in the science of neural control; the progress that can be made is remarkable. But what can a computer scientist bring to the table? What models, what theoretical tools?

It's pretty obvious to me that computation won't be helpful in figuring out the science of animal motion, in all its graceful and fast glory. So what kinds of biology questions might we address armed with our understanding of data structures, algorithms and complexity? Ideally, questions beyond proving this or that purportedly intelligent activity is NP-hard? My hunch (and I'm betting my research on it) is that our computational tools will be best employed in asking and answering questions at a level above that of the neural control of one organism. Instead, we can study the interactions, communications, and group-level behaviors of many organisms.

Bayesian models of human cognition notwithstanding.

Saturday, September 20, 2008

A simple and bad algorithm for teaching

This term I'm teaching an undergraduate course in Artificial Intelligence (AI). I got the job just as the semester started, and so I didn't get a chance to develop my own approach to the teaching of this very splendid and worthwhile subject. Of course, as a mixed blessing, there came the inevitable recourse to Russell & Norvig, who've shepherded thousands of similar courses over the last 10 years, all over the world.

Book and lecture notes in hand, I've been so far following my first pass at a teaching algorithm:

1. Read the chapter. Make sure you understand it.
2. Prepare lecture slides.
3. Do a couple of exercises. Choose a couple for the next problem set.
4. Go over the slides a couple of times, mumbling things you want to say that are not explicitly written on the slide.
5. Enter classroom; deliver lecture; forget to use all your little "pedagogical devices"; watch students yawn...
6. Goto 1

This is as boring for the poor souls in my class as it is unsatisfying for me. I've "only" lost 4 students so far (out of a high point of 19), mostly due to a harsh problem set early on. But I need to come up with a better algorithm.

At least I have a nice subroutine in the "deliver lecture" department. At the beginning of each class, I call on random students to tell me, in their own words, about the concepts we discussed last time. In the spirit of modularity and reuse, I took this feature from a library of little gems by a senior faculty member in my department. Thank you, senior faculty member!

Saturday, September 13, 2008

On the complexity of stuff

Last post got me thinking, which in my case means asking lots of questions and refusing to think through the answers, at least at first. Later, arguably, I will forget all about these very pressing questions and so will never find the answers until one day, I read something written about this elsewhere and bemoan the passing of time and my laziness.

But back to the pressing questions. One afternoon over tea with Dr. Peshkin, I heard him express an excellent question: what is the complexity of physical objects? In computational theory, there are complexity classes for problems. If an abstract machine (whose abilities can be approximated by a modern PC) can find the answer in a relatively short period of time, the problem belongs to one complexity class (called P, for 'polynomial time'). If it requires a machine that's capable of pursuing several solution lines at a time and non-deterministically switching between them, then it belongs to another, intuitively harder class of problems (called NP, for 'nondeterministic polynomial time'). There are more classes, but the point is: you can tell if a problem fits into one of them. This is a useful thing for writing programs: you don't want to tell your computer to do something that's provably going to take from now until the end of time.
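The P-vs-NP distinction is often put in terms of verifying versus finding: checking a proposed answer is cheap, but no one knows how to find one without, in the worst case, trying exponentially many candidates. A tiny subset-sum sketch (my own toy example, not from the post):

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Checking a proposed solution is easy (polynomial time).
    (Membership check ignores multiplicity, for brevity.)"""
    return sum(certificate) == target and all(c in nums for c in certificate)

def solve(nums, target):
    """Finding a solution, as far as anyone knows, requires trying
    exponentially many subsets -- that's the NP part."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

A nondeterministic machine could "guess" the right subset and verify it in one pass; a deterministic one grinds through all 2^n of them.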

So is there a similar classification for objects? Objects are kind of like problems if you try to make them with your new personal fabber (PF). You need to tell the fabber how to make them, of course. And thus I think we can define useful complexity classes for objects in terms of the fabber that can make them and the instructions that it would require.

But what's the complexity of a fabber? Why, it's the class of objects it can make! :-) There's got to be a better way of course, but maybe we can establish equivalence classes, like in computability theory. Push-down automata can recognize context-free languages. A 2D laser printer can build planar objects. OK, this is a really uninformative example, but I have to think more to come up with useful classes in the complexity of stuff.

For now, let's all agree that stuff is pretty complex.