Five or so years ago, I served on a provost committee headed by Jim Collins to look into building a library in front of the College of Communication. At the time, I was one of three dissenting voices about the need for (more) book shelves and robotically operated stacks, instead calling for nice seating, espresso machines, magazine racks, and large wall-mounted LCD displays for visualizations, telepresence, etc. — only to be ridiculed by the library people, including the one quoted in this BU Today article as saying:
During the decades when Mugar acquired books only, Hudson says, “what we had been doing is building more book stacks and taking away seating.” Digitization reversed that, allowing “us to take some of the stacks down and start rebuilding student use space—computers, but also carrels, group interactive spaces, classrooms.” In short, “we’re more than books” and need student work space.
How quickly things change!
The Fifth Amendment states that no person “shall be compelled in any criminal case to be a witness against himself.”
This is not to say that physical evidence such as diaries and other documents is not admissible in court against their authors. Quite the contrary: such evidence is not only admissible, but over the years the courts have ruled that an author may be compelled to surrender it, even if it is incriminating. But what the courts have taken away by limiting the scope of the Fifth Amendment to exclude physical evidence, technology may be giving back, thanks in large part to modern cryptography. In a recent federal appeals ruling, the court decided that forcing a criminal suspect to decrypt previously encrypted hard drives so that their contents can be used by prosecutors breaches the Fifth Amendment right against compelled self-incrimination. This question is not settled by any means; at least one other federal appeals ruling, by a different court on a different case under slightly different circumstances, has come out with the opposite opinion, suggesting that in all likelihood this question will be revisited by the Supreme Court at some point in the future.
[Read more from the Electronic Frontier Foundation]
A fascinating view of the difference between those who are taught to “program” and those who are taught to just “use” a program.
For students taught to use programs rather than to create them, “their bigger problem is that their entire orientation to computing will be from the perspective of users. When a kid is taught software as a subject, she’ll tend to think of it like any other thing she has to learn. Success means learning to behave in the way the program needs her to. Digital technology becomes the immutable… thing, while the student is the moving part, conforming to the needs of the program in order to get a good grade on the test”.
Corollary: Students taught to program see themselves as immutable creators, and see digital technology and its users as the moving parts that conform to their needs, empowering them to control the world around them!
Interesting developments (and political maneuvers) are afoot at the United Nations as world powers jostle for control of the Internet, with China and Russia drawing a line in the sand with a joint document submitted to the General Assembly on an Internet “Code of Conduct”.
Quoting: “To settle any dispute resulting from the application of this Code through peaceful means and refrain from the threat or use of force.” Sounds like somebody is trying to cover their backs.
Here is the link: http://news.dot-nxt.com/2011/09/13/china-russia-security-code-of-conduct
Who said networking is boring?!
Google’s Chief Security Officer (yes, companies now have a CIO and a CSO), Eran Feigenbaum, stirred a debate recently when he questioned the obsession of the US (and other governments) about data sovereignty in outsourced environments. He is quoted as saying: “It is an old way of thinking. Professionals should worry about security and privacy of data, rather than where it is stored.”
What do you think? Should it matter *where* data is stored (or, for that matter, where the pipes carrying it happen to be)? Assuming a cloud provider meets what it promises in its SLA (availability, persistence, proper authentication/encryption, etc.), can you think of vulnerabilities that necessitate that data reside on “American soil”?
The other interesting statement by Google’s CSO concerns the need for encryption of data at rest (i.e., on disk, as opposed to end-to-end through an application): “It is a false sense of security. Crypto people do a good job at cryptography, but a really bad job at key management.”
BU Today graced us this morning with a funny cover story, proclaiming that “BU’s Research Paper Productivity Gets High Marks”, adding that “University in fifth place in national cost per paper analysis”.
The bottom line is that the cost-per-paper at BU is $50K vs. $72K nationally. BU is a bargain when it comes to papers-per-dollar. The silly season must be upon us!
A colleague here at BU suggested that all we have to do to get better papers-per-dollar is to submit to the “World Multiconference on Systemics, Cybernetics and Informatics”. What a joke — and a waste of taxpayer money to fund studies like this… And shame on BU Today for marching in that parade!
Our request for raises is making its way to central administration.
Here is an interesting Op-Ed in today’s NYT, which touches on the point I made in my earlier post entitled “Ignorance is Bliss”.
I have been harping on this for a while, but Eli Pariser (of MoveOn.org) puts it very eloquently: “There is a new group of gatekeepers in town, and this time, they’re not people, they’re code.”
In Jon Crowcroft’s talk at BU earlier this month, I hinted at this issue — getting information through a social network reduces entropy — and alluded to the need for better “personalization” technology and algorithmics. Pariser’s point is that we should not entrust editorial responsibility (the control of information flow) to code. If we do, then the Internet will have merely come full circle: it let us bypass “an elite class of editors”, only to let code decide what people see and hear about the world.
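To make the entropy point concrete, here is a toy sketch (the topic distributions are my own hypothetical numbers, not from Crowcroft’s talk or Pariser’s piece): when a personalization filter concentrates a reader’s exposure onto a few favored topics, the Shannon entropy of what they see drops.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical exposure distributions over five topics.
unfiltered = [0.2, 0.2, 0.2, 0.2, 0.2]       # a broad editorial mix
personalized = [0.7, 0.2, 0.05, 0.03, 0.02]  # a filter's concentrated mix

print("unfiltered:  ", shannon_entropy(unfiltered))    # log2(5) ~ 2.32 bits
print("personalized:", shannon_entropy(personalized))  # noticeably lower
```

Less entropy means less surprise: the filter shows you more of what you already consume, and less of everything else.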
Related to (and influencing my thinking about) the above is the long-held position that “Code is Law” by Lawrence Lessig.
Computer Science is quickly becoming a social science!
This is a very intriguing study about how social media/interactions may be warping “crowd wisdom” — defined as “the statistical phenomenon by which individual biases cancel each other out, distilling hundreds or thousands of individual guesses into uncannily accurate average answers”. In this study, researchers told test participants about their peers’ guesses. As a result, their group insight (a.k.a., group regression to the mean) “went awry”. You can think about this as introducing dependencies, and hence biases in the sample statistics.
I should try this in a test in CS-350!
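A toy simulation (my own sketch, not the researchers’ methodology) of how such dependencies bias the sample statistics: independent guessers’ errors cancel in the mean, while guessers who anchor on their peers’ running average inherit, and preserve, an early bias — and the spread of guesses narrows even as the average drifts off target.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the crowd is estimating

def independent_guesses(n):
    # Unbiased individual noise around the true value; errors cancel in the mean.
    return [TRUE_VALUE + random.gauss(0, 30) for _ in range(n)]

def socially_influenced_guesses(n, weight=0.8):
    # Each participant sees the running mean of earlier guesses and pulls
    # their private estimate toward it, introducing dependence across samples.
    guesses = [TRUE_VALUE + 50]  # a loud, biased first answer anchors the crowd
    for _ in range(n - 1):
        private = TRUE_VALUE + random.gauss(0, 30)
        peer_mean = statistics.mean(guesses)
        guesses.append((1 - weight) * private + weight * peer_mean)
    return guesses

indep = independent_guesses(1000)
social = socially_influenced_guesses(1000)

print("independent mean error:", abs(statistics.mean(indep) - TRUE_VALUE))
print("social mean error:     ", abs(statistics.mean(social) - TRUE_VALUE))
print("spreads (social vs independent):",
      statistics.pstdev(social), statistics.pstdev(indep))
```

The anchoring weight and the size of the initial bias are made-up parameters; the qualitative effect — a narrower but less accurate crowd — is the point.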
Perhaps related to the above is the mounting criticism of “personalization” as introducing biases in what (say) search engines return to different people for the exact same query — Google is now personalizing Google Search and Google News…
There is something to be said for having crowds have consistent views of the world…
The problem with patent law (and the patent application and prosecution processes) is that the outcome of a patent application is binary: either the patent is awarded or not.
Clearly, most patent applications will have some level of innovation when compared to prior art. While the innovation of some patents may be truly revolutionary, for most patents the innovation is typically incremental. The fact that there is “some innovation” means that applicants (and their lawyers) can eventually get their patents to issue. But, once issued, the protection afforded to a patent is the same whether the underlying innovation is revolutionary or incremental.
Incremental innovation should be encouraged, but the protection extended to an incremental invention must be limited. If I come up with an idea six months before it would have been obvious to those “skilled in the art”, do I deserve the same protection afforded to an idea that is twenty years ahead of its time? I don’t think so. And it is precisely that mismatch that makes people (especially those involved in IT and software systems) dismiss the whole patent process.
So, here is an idea whose time has come: Why not allow the patent office to issue patents on a scale that reflects the level of innovation — and accordingly provides the patent holders with protection that is commensurate with their level of innovation? A patent embodying an idea that is 6 months ahead of the curve would get (say) five years of protection, whereas a patent embodying an idea that is 6 years ahead of the curve should get fifty years of protection.
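The post doesn’t spell out the scale, but one way to make it concrete is a capped, roughly linear mapping from estimated lead time to protection term. The rate, cap, and floor below are my own hypothetical parameters, chosen to be consistent with the two example points above:

```python
def protection_years(lead_time_years, rate=10.0, cap=50.0, floor=1.0):
    """Map an invention's estimated lead over the state of the art to a
    protection term: roughly linear, with a floor and a cap.
    (Hypothetical parameters, for illustration only.)"""
    return max(floor, min(rate * lead_time_years, cap))

print(protection_years(0.5))  # 6 months ahead  -> 5 years of protection
print(protection_years(6.0))  # 6 years ahead   -> capped at 50 years
print(protection_years(0.05)) # barely ahead    -> floor of 1 year
```

The hard part, of course, is not the formula but getting examiners (or courts) to estimate lead time at all — which is exactly the judgment the current binary system lets them avoid.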
The adoption of such an approach has the added advantage of making inventors (and inventor wannabes) think twice before rushing to file a patent…