[Revised version of an older post] “Soft real-time” is a perfect example of the “soft design” noted in an earlier post. There are perfectly good ways of characterizing quality of service (QOS) assurances precisely. Doug Jensen proposes one possible
Sparc T2 (Niagara 2)
Today’s UT architecture seminar was given by Greg Grohoski from Sun – an updated version of his Hot Chips talk. I’m not a big fan of this approach to chip architecture: 8 processors, each with 8 threads, but they are working
OpenBSD developer notes king’s clothing is “virtual”
Theo de Raadt explains why virtualization does not improve security. How about this: to improve security, you have to have a secure design; a marketing buzzword won’t do the trick. Anyone who has seriously looked at the current generation x86
Distributed shared memory from first principles
[Update 10/16] What is the fundamental performance-limiting factor that has dominated the last 30 years of computer architecture? The obvious answer is the disparity between processor and memory/storage speed. We connect processors to cache, to more cache, to even
more on missed wakeup
Here are some conventions. [Update: typos fixed, Friday] We are concerned with state machines and sequences of events. The prefixes of a sequence include the empty sequence “null” and the sequence itself. Relative state: If “w” is the sequence of
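As a rough formalization of the prefix convention above (the notation here is mine, not necessarily the post’s): for a finite sequence of events

$$w = e_1 e_2 \cdots e_n, \qquad \mathrm{prefixes}(w) = \{\mathrm{null}\} \cup \{\, e_1 \cdots e_k \mid 1 \le k \le n \,\},$$

so that both null and $w$ itself count as prefixes of $w$.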
OSIM Madrid and Value Manifolds
Spent a couple of very interesting days at the OSIM conference in Madrid as part of my consulting for WindRiver, which has a very powerful market position in cellular handsets now – partly due to their acquisition of RTLinux for
Apple’s strategic brilliance
I may be reading too much into it, but Apple looks to have come up with a strategy to pass Microsoft in the next ten years. They are linking their phone, music, and PC business together to form an unavoidable
Carbon neutral processors and ecocidal operating systems
Data centers are reckless consumers of power. Since modern processors leak somewhere in the neighborhood of 40% of their peak power consumption even when idle and since most measurements show that most computers are nearly always idle, that’s a lot
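To make the scale concrete, here is a back-of-the-envelope calculation; only the roughly 40% idle-leakage figure comes from the post, while the 100 W peak power and the fleet size are assumptions of mine:

$$0.4 \times 100\,\mathrm{W} = 40\,\mathrm{W} \text{ at idle}, \qquad 40\,\mathrm{W} \times 8760\,\mathrm{h/yr} \approx 350\,\mathrm{kWh/yr}$$

per machine, or roughly $3.5\,\mathrm{GWh}$ a year for a data center with 10,000 mostly-idle servers.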
Underlying requirement for hard real-time
Simple point that is widely ignored: you need hard real-time capability to offer any meaningful “soft” real-time. Let’s suppose you say that you can drop up to N frames a minute or K < N frames over any 10-second period
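A minimal sketch of what such a drop-budget specification looks like as a runtime check, with hypothetical values for N (drops per minute) and K (drops per 10-second window) chosen only for illustration. Checking the budget is the easy part; actually guaranteeing it requires the bounded worst-case latencies that only hard real-time capability provides, which is the point of the post.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical QOS budget: at most N dropped frames per 60 s,
   and at most K < N dropped frames in any 10 s window. */
#define N_PER_MINUTE  30
#define K_PER_10_SEC   8
#define MAX_DROPS    256               /* ring buffer capacity */

static double drop_times[MAX_DROPS];   /* timestamps of recorded drops, in seconds */
static size_t head, count;

/* Record a dropped frame at time `now` (seconds). */
void record_drop(double now)
{
    drop_times[(head + count) % MAX_DROPS] = now;
    if (count < MAX_DROPS)
        count++;
    else
        head = (head + 1) % MAX_DROPS; /* overwrite the oldest entry */
}

/* Count recorded drops within the last `window` seconds. */
static size_t drops_within(double now, double window)
{
    size_t n = 0;
    for (size_t i = 0; i < count; i++) {
        double t = drop_times[(head + i) % MAX_DROPS];
        if (now - t <= window)
            n++;
    }
    return n;
}

/* True if the drop budget still holds at time `now`. */
bool qos_ok(double now)
{
    return drops_within(now, 60.0) <= N_PER_MINUTE &&
           drops_within(now, 10.0) <= K_PER_10_SEC;
}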
universal machinery
If even 20% of what Peter Gutmann says is so, then I’ve been optimistic in my assessment of DRM.