I thought I'd explain the title a little bit.
A couple of days ago I was coding up some graph-making algorithms to visualize the automatic geometry deriver I'm working on at work. I should maybe explain (for those who don't know me) that I work on automatic license/number plate recognition (ALPR/ANPR). Part of it is to check whether the registration number we THINK we read actually makes sense, and part of THAT is to check whether the numbers and letters we think we read are located in the correct spots relative to each other. That's geometry checking, and it's not easy. The difference between some guy in Alabama getting a $20 ticket and some guy in New Mexico getting it can come down to two pixels... and multiplied by tens of thousands of passages per day... you probably get the picture.
Where I was going is that I wanted to draw a little box around the standard deviation on the histogram of errors that our derived model produces in the learning process. Then we can visualize how quickly it learns in a nice, comprehensible way, while it's busy doing its thing.
Standard deviation is, of course: SQRT( SUM_OF_SQUARED_DEVIATIONS / NUMBER_OF_SAMPLES ), where the deviations are measured from the mean. Easy enough, right? So I type it in, compile, and watch the value explode waaay off the scale. Huh-whut? It turns out that our C library thinks that SQRT(x) is actually x*x...
So, that got me on a train of thought that started with square roots, rolled along the etymology of "root", entered a tunnel containing the life and times of Euclid of Alexandria and his book about geometry and the properties of squares, and exited somewhere in the neighborhood of variance squared.
In other words, I went beyond square roots, all the way to... Euclid's Vegetables.