Sugar for the technical mind (Part I)
I thought it would be nice to periodically share some of the papers I've read and enjoyed recently, with a little note for the reader as to whether each is a light read or a dense read.
A Computational Approach to Edge Detection by John Canny, 1986 (Medium/Heavy) : Ah, the famous Canny edge detection algorithm. This was certainly bound to have a spot, as it's a primary subject of another one of my blog posts. There's a fair bit of math in there, but what's interesting isn't so much the derivations as the way he formalized the otherwise fuzzy definition of an edge.
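For the curious, here's a minimal NumPy/SciPy sketch of the pipeline's overall shape. It is not Canny's full algorithm (it skips non-maximum suppression, which thins edges to a single pixel), and `canny_sketch`, the thresholds, and the random test image are all my own stand-ins:

```python
import numpy as np
from scipy import ndimage

def canny_sketch(image, low=0.1, high=0.3, sigma=1.4):
    # 1. Suppress noise with a Gaussian filter.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)

    # 2. Estimate the intensity gradient (Sobel operators).
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() or 1.0

    # 3. Hysteresis: keep weak edge pixels only if their connected
    #    region also contains at least one strong edge pixel.
    strong = magnitude >= high
    weak = magnitude >= low
    labels, _ = ndimage.label(weak)           # connected weak regions
    kept = np.unique(labels[strong])          # regions touching a strong pixel
    return np.isin(labels, kept)

edges = canny_sketch(np.random.rand(64, 64))  # boolean edge map
```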
A New Implementation Technique for Applicative Languages by D. A. Turner, 1979 (Light) : This paper is a nice introduction to combinatory logic, a different way of thinking about programming using constructs called "combinators". No background in CS or math is required; it's a leisure read.
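To give a taste of what a combinator is, here are the paper's S and K combinators written as plain Python lambdas. Turner compiles programs down to a handful of combinators like these; the classic identity S K K x = x is checked below:

```python
S = lambda f: lambda g: lambda x: f(x)(g(x))  # substitution
K = lambda x: lambda y: x                     # constant
I = S(K)(K)                                   # identity, derived rather than defined

assert I(42) == 42
assert K("kept")("dropped") == "kept"
```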
Automatic Audio Segmentation Using a Measure of Audio Novelty by Jonathan Foote, 2000 (Medium) : Obviously a fairly specialized paper, but the idea of a self-similarity matrix, used in this context, is really cool (in my opinion).
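Here's a small NumPy sketch of the idea as I understand it: build a cosine self-similarity matrix from feature frames, then slide a "checkerboard" kernel along its diagonal; peaks in the resulting curve mark likely segment boundaries. The function name and the random feature matrix are mine, not the paper's:

```python
import numpy as np

def novelty_curve(features, kernel_size=16):
    # Cosine self-similarity matrix between all pairs of frames.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    ssm = normed @ normed.T

    # Checkerboard kernel: +1 in the same-segment quadrants, -1 across them.
    half = kernel_size // 2
    sign = np.ones(kernel_size)
    sign[:half] = -1
    kernel = np.outer(sign, sign)

    # Correlate the kernel with the SSM along its main diagonal.
    n = len(features)
    novelty = np.zeros(n)
    for t in range(half, n - half):
        window = ssm[t - half:t + half, t - half:t + half]
        novelty[t] = np.sum(window * kernel)
    return novelty

frames = np.random.rand(200, 12)       # e.g. 200 frames of 12 MFCCs (fake data)
print(novelty_curve(frames).argmax())  # index of the most "novel" frame
```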
Classifier Technology and the Illusion of Progress by David J. Hand, 2006 (Medium) : This deals with machine learning and will mostly be interesting to readers with a minimal background in statistics or some experience with algorithms that attempt to do "fuzzy" tasks, such as recognizing patterns in images or sound. It will (hopefully) remind me that complex doesn't always mean better; it can be beneficial to KISS (Keep It Simple, Stupid).
Seam Carving for Content-Aware Image Resizing by Shai Avidan and Ariel Shamir, 2007 (Light/Medium) : An algorithm is presented to resize images in a way that removes only the least relevant parts, such as the background, while preserving the proportions of important features, such as faces. This has been incorporated into a Photoshop feature and it's really awesome, a simple but ingenious idea.
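The core of the idea fits in a few lines. Here's a minimal sketch of a single seam-carving step: compute a crude gradient "energy", find the lowest-energy vertical seam by dynamic programming, and remove it. Real implementations repeat this for as many seams as needed and handle horizontal seams too; `remove_one_seam` and the simplistic energy function are my own choices:

```python
import numpy as np

def remove_one_seam(image):
    h, w = image.shape
    # Crude energy: sum of absolute gradients; low energy = "boring" pixel.
    energy = np.abs(np.gradient(image, axis=0)) + np.abs(np.gradient(image, axis=1))

    # cost[i, j] = cheapest energy of any seam from the top row down to (i, j).
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)

    # Backtrack the cheapest seam from bottom to top.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))

    # Drop exactly one pixel per row.
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return image[keep].reshape(h, w - 1)

img = np.random.rand(40, 60)       # grayscale stand-in image
print(remove_one_seam(img).shape)  # (40, 59)
```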
Stein's Paradox in Statistics by Bradley Efron and Carl Morris, 1977 (Light) : A statistics paper, not a CS one, but still worth a read. It explains the unintuitive result that a baseball player's past batting average (or any other kind of data) is not the best predictor of future performance - one should also consider the batting averages of other baseball players - even if they are in different leagues!
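If you want to see the paradox happen, here's a quick simulation of the James-Stein estimator the paper discusses: shrinking every observed average toward a common point beats using the raw averages, in total squared error, whenever three or more means are estimated at once. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 10, 10_000
theta = rng.normal(0, 1, size=k)          # true (unknown) means

mse_raw, mse_js = 0.0, 0.0
for _ in range(trials):
    z = theta + rng.normal(0, 1, size=k)   # one noisy observation per mean
    shrink = 1 - (k - 2) / np.sum(z ** 2)  # James-Stein shrinkage factor
    js = shrink * z
    mse_raw += np.sum((z - theta) ** 2)
    mse_js += np.sum((js - theta) ** 2)

# The shrunken estimates win on average, even though per-player they look "biased".
print(f"raw MSE: {mse_raw / trials:.2f}, James-Stein MSE: {mse_js / trials:.2f}")
```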
Temporal Difference Learning and TD-Gammon by Gerald Tesauro, 1995 (Light) : One of the most influential developments in artificial intelligence happened when computers learned not only to beat humans at backgammon, but to create entirely new strategies and ways of thinking about the game. The techniques used are quite interesting, and so are the discussions that follow.
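TD-Gammon itself trains a neural network with TD(λ), but the core temporal-difference idea is simple enough to show in its most basic tabular TD(0) form, here on a tiny random-walk toy problem rather than backgammon; all the parameters are illustrative:

```python
import random

n_states, alpha, episodes = 7, 0.1, 5000
V = [0.0] * n_states                      # value estimates; states 0 and 6 are terminal

for _ in range(episodes):
    s = n_states // 2                     # start in the middle
    while s not in (0, n_states - 1):
        s_next = s + random.choice((-1, 1))
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # TD(0) update: nudge V(s) toward the observed reward + V(s_next).
        V[s] += alpha * (reward + V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:-1]])     # approaches [1/6, 2/6, ..., 5/6]
```

The appeal is that the update needs no model of the game and no waiting until the end of an episode: each state's estimate is pulled toward its successor's, which is what let TD-Gammon learn from self-play alone.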
Hmm, did I forget anything? Ah well, this should do for now.