In yesterday’s New York Times, Ellen Ullman argued that we need artificial intelligence, not more testing, to prevent high-volume trading catastrophes like the one at Knight Capital. But artificially intelligent market watchers would be an expensive and possibly unworkable solution. By collaborating instead on an open infrastructure for high-volume algorithmic trading, firms could reduce errors, improve stability, and perhaps avoid expensive regulation.
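To make concrete the kind of safeguard such shared infrastructure could standardize, here is a minimal sketch of a pre-trade risk gate: an order-size and order-rate sanity check with a kill switch, the sort of control that might have contained a runaway-order incident like Knight Capital’s. The class, limits, and behavior are illustrative assumptions, not a description of any existing system.

```python
# Hedged sketch: a hypothetical pre-trade "risk gate" of the kind a shared,
# open trading infrastructure might standardize. All names and limits here
# are illustrative, not any firm's actual controls.
import time

class RiskGate:
    def __init__(self, max_order_qty: int, max_orders_per_sec: int):
        self.max_order_qty = max_order_qty
        self.max_orders_per_sec = max_orders_per_sec
        self._window_start = time.monotonic()
        self._orders_in_window = 0
        self.killed = False  # kill switch: once set, all trading halts

    def allow(self, qty: int) -> bool:
        """Return True only if an order passes every sanity check."""
        if self.killed or qty <= 0 or qty > self.max_order_qty:
            return False
        now = time.monotonic()
        if now - self._window_start >= 1.0:  # start a new one-second window
            self._window_start, self._orders_in_window = now, 0
        if self._orders_in_window >= self.max_orders_per_sec:
            self.killed = True               # runaway order flow: halt
            return False
        self._orders_in_window += 1
        return True

gate = RiskGate(max_order_qty=10_000, max_orders_per_sec=100)
if not gate.allow(qty=500):
    print("order rejected by risk gate")
```

The point of the design is that the check sits in one well-reviewed place in front of every order, rather than being reimplemented, and separately mis-tested, by each firm.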
At the beginning of December, we warned the Copyright Office that operating system vendors would use UEFI secure boot anticompetitively, by colluding with hardware partners to exclude alternative operating systems. As Glyn Moody points out, Microsoft has wasted no time in revising its Windows Hardware Certification Requirements to effectively ban most alternative operating systems on ARM-based devices that ship with Windows 8.
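For readers who want to check where their own machine stands, the firmware’s Secure Boot state is exposed to Linux through efivarfs. The sketch below is a minimal way to read it, assuming a UEFI system with efivarfs mounted at its usual path; the variable name uses the standard EFI global-variable GUID, and the first four bytes of an efivarfs file are attribute flags rather than data.

```python
# Minimal sketch: reading the firmware's Secure Boot state on a UEFI Linux
# system via efivarfs. Assumes efivarfs is mounted at its usual location.
from pathlib import Path

# "SecureBoot" variable under the standard EFI global-variable GUID.
SECURE_BOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    """Return True/False for the Secure Boot state, or None if unknown
    (non-UEFI system, efivarfs not mounted, or variable unreadable)."""
    try:
        data = SECURE_BOOT_VAR.read_bytes()
    except OSError:
        return None
    # Bytes 0-3 are efivarfs attribute flags; byte 4 holds the value.
    if len(data) < 5:
        return None
    return data[4] == 1

if __name__ == "__main__":
    state = secure_boot_enabled()
    print({True: "Secure Boot is enabled",
           False: "Secure Boot is disabled",
           None: "Secure Boot state unavailable"}[state])
```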
Public Safety Is Not a Matter of Private Concern
In a recent article, Slate’s Farhad Manjoo attempts to play down fears that faulty software in car braking systems could cause traffic accidents. Citing numerous studies that conclude “the overwhelming reason we get in crashes is driver error,” Manjoo reasons that “the less driving people do, the fewer people will die on the roads.”
It may well be true that most crashes occur because of intoxication, distraction, or driver fatigue, and that computer-controlled cars may reduce driver error. But Manjoo misses the obvious implication of his own assumptions: “opaque” and “inherently buggy” software that could endanger public safety should be subject to review.
In their January 20th story, “Fearing Hackers Who Leave No Trace,” New York Times reporters John Markoff and Ashlee Vance correctly pointed out that “nations, private corporations, and even bands of rogue programmers are capable of covertly tunneling into information systems” by exploiting bugs in a program’s source code.