Here is an interesting Op-Ed in today’s NYT, which touches on the point I made in my earlier post entitled “Ignorance is Bliss”.
I have been harping on this for a while, but Eli Pariser (of MoveOn.org) puts it very eloquently: “There is a new group of gatekeepers in town, and this time, they’re not people, they’re code.”
In Jon Crowcroft’s talk at BU earlier this month, I hinted at this issue — getting information through a social network reduces entropy — and alluded to the need for better “personalization” technology and algorithmics. Pariser’s point is that we should not entrust editorial responsibility (the control of information flow) to code. If we do, then the Internet will have come full circle: it allowed us to bypass “an elite class of editors”, only to let code decide what people see and hear about the world.
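To make the entropy claim concrete, here is a toy sketch (the topic distributions are made-up numbers, purely for illustration): exposure skewed toward what one’s friends share has lower Shannon entropy than uniform exposure across the same topics.

```python
import math

def entropy(p):
    # Shannon entropy (in bits) of a discrete probability distribution
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical exposure distributions over four topics (numbers made up):
unfiltered = [0.25, 0.25, 0.25, 0.25]   # uniform exposure across topics
social_feed = [0.70, 0.20, 0.05, 0.05]  # skewed toward what friends share

print(entropy(unfiltered))   # → 2.0 bits
print(entropy(social_feed))  # ≈ 1.26 bits
```

The filtered feed carries less information in the Shannon sense — fewer surprises, by construction.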
Related to (and influencing my thinking about) the above is Lawrence Lessig’s long-held position that “Code is Law”.
Computer Science is quickly becoming a social science!
Vern Paxson is the 2011 SIGCOMM award winner. This is a great choice, and well deserved. Vern has worked mainly in the areas of Internet Measurement and Security. Of course, those areas often intersect, and he has done some of the best work at that intersection. Vern’s work is notable for asking and answering interesting questions through careful and thorough acquisition and analysis of data (think ‘Freakonomics’ for the Internet). And he’s given back to the community in many ways. Congratulations Vern!
This is a very intriguing study about how social media/interactions may be warping “crowd wisdom” — defined as “the statistical phenomenon by which individual biases cancel each other out, distilling hundreds or thousands of individual guesses into uncannily accurate average answers”. In this study, researchers told test participants about their peers’ guesses. As a result, the group’s collective insight “went awry”. You can think of this as introducing dependencies, and hence biases, into the sample statistics.
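A quick simulation conveys the intuition (the guessing model and parameters below are my own toy assumptions, not the study’s setup): independent guesses average out to near the truth, while guesses pulled toward the running average of earlier guesses become correlated, so individual errors no longer cancel and the crowd’s average drifts.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0   # the quantity the crowd is trying to estimate
NOISE_SD = 20.0      # spread of individual errors

def independent_guesses(n):
    # Classic "wisdom of crowds": each guess is truth plus independent noise,
    # so individual errors cancel out in the average.
    return [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(n)]

def influenced_guesses(n, weight=0.7):
    # Social-influence variant: each participant sees the running average of
    # earlier guesses and pulls their own guess toward it, making the
    # guesses correlated (dependent).
    guesses = [TRUE_VALUE + random.gauss(0, NOISE_SD)]
    for _ in range(n - 1):
        own = TRUE_VALUE + random.gauss(0, NOISE_SD)
        social = sum(guesses) / len(guesses)
        guesses.append((1 - weight) * own + weight * social)
    return guesses

def mean_abs_error(make_guesses, trials=500, n=100):
    # Average error of the crowd's mean guess across many simulated crowds.
    total = 0.0
    for _ in range(trials):
        g = make_guesses(n)
        total += abs(sum(g) / len(g) - TRUE_VALUE)
    return total / trials

print("independent:", mean_abs_error(independent_guesses))
print("influenced: ", mean_abs_error(influenced_guesses))
```

In this toy model the influenced crowd’s average error comes out several times larger: early guesses dominate the running average, so their noise propagates rather than canceling.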
I should try this in a test in CS-350!
Perhaps related to the above is the mounting criticism of “personalization” as introducing biases in what (say) search engines return to different people for the exact same query — Google is now personalizing both its search results and Google News…
There is something to be said for having crowds have consistent views of the world…
This is an interesting analysis of MSFT vs. RHEL vs. CentOS.
I’m continually intrigued by how the location of ‘value’ shifts in the IT industry. Software has this constant potential for being re-architected, and as a result, where the hard problem lies shifts too. This article suggests that the real money-making opportunity in cloud computing right now is in management (i.e., as opposed to in OSes or CPUs). This makes sense to me and jibes with what I saw when I worked in industry.
It reminds me of how IBM, which once made oodles of money selling OSes and CPUs, eventually moved ‘up’ the value chain to selling managed ‘solutions’. I think this was a remarkable reinvention. I wonder if Microsoft can and will eventually do something similar.
Here are mine — comments?
Yang-hua Chu, Sanjay G. Rao, and Hui Zhang. A Case for End System Multicast. In Proceedings of ACM Sigmetrics, June 2000.
Lixin Gao and Jennifer Rexford. Stable Internet Routing without Global Coordination. In Proceedings of ACM Sigmetrics, June 2000.
Thomas Bonald and Laurent Massoulié. Impact of Fairness on Internet Performance. In Proceedings of ACM Sigmetrics, June 2001.
Steven H. Low, Larry Peterson, and Limin Wang. Understanding TCP Vegas: A Duality Model. In Proceedings of ACM Sigmetrics, June 2001.
We are just getting started. Watch this space!