[cadynce] Meditations on Learning and a Big Back End
Chenyang Lu
lu at cse.wustl.edu
Fri Mar 9 13:47:24 CST 2007
I agree with Raj. If we can (i) certify individual system components via
traditional testing and (ii) provide a sound theory and compatible run-time
infrastructure for *composing* the components into different system
configurations, we will be able to certify numerous combinations of the
components based on the theory instead of testing.
The fundamental benefit is the capability to certify a huge number of
*combinations* by testing only a small number of individual components.
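The arithmetic behind this is stark. A toy calculation (the numbers here are made up for illustration) shows how the number of certifiable combinations dwarfs the number of component tests:

```python
# Illustration (hypothetical numbers): component certification vs. exhaustive
# configuration testing. With 10 component slots in a system and 4
# interchangeable certified variants per slot, exhaustive testing would need
# 4**10 configuration tests, while component-level certification plus a
# composition theory needs only 10 * 4 component tests.

slots = 10          # component positions in a system configuration
variants = 4        # certified alternatives available for each position

configurations = variants ** slots   # every possible composition
component_tests = slots * variants   # one certification per component variant

print(configurations)   # 1048576 distinct configurations covered
print(component_tests)  # 40 individual component certifications
```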
Chenyang
Washington University in St. Louis
http://www.cse.wustl.edu/~lu
_____
From: cadynce-bounces at list.isis.vanderbilt.edu
[mailto:cadynce-bounces at list.isis.vanderbilt.edu] On Behalf Of Raj Rajkumar
Sent: Friday, March 09, 2007 12:54 PM
To: Cross, Joseph
Cc: cadynce at list.isis.vanderbilt.edu
Subject: Re: [cadynce] Meditations on Learning and a Big Back End
Cross, Joseph wrote:
Patrick -
Okay, very cool. We're down to touching up the face powder on our Miss
America contestant.
We are talking about how to deploy a configuration that has never been
tested in the lab.
Right. And looked at another way, we're talking about how to certify a
configuration that has never been tested.
I conjecture that there is no way to do that with anything other than a
statistical assurance. The goal is to first give them 1,000 tested
configurations and use the novel generation method as a backup when
none of them fit.
If we're being Bayesian, where probability is used to express strength
of belief, then you're saying that we'll never achieve 100% certainty
that our system will work. Sure.
But yea verily we never had 100% certainty and we never will. (Drat!
There goes another one of those pesky alpha particles!)
If we say to the Navy "Our process will deploy configurations for which
we have only statistical assurance of correctness," they'll hear "...
configurations that aren't as reliable as your old-school certified
configurations." And that's not true.
Old-school certification provides only statistical assurance of
correctness. And it would be boorish of us to rub our customers' noses
in that fact.
We're talking public relations here, not technology.
Obviously we must not oversell our product. But I believe that we can
honestly say that we're going to provide mechanisms that will enable the
certification of untested configurations. (Then the means by which we
choose one of these certified configurations at run-time is an
engineering detail.) If somebody asks whether the cert board will have
to loosen its standards to accept our configurations, we say no.
In this way we avoid emphasizing the
statistical/not-mathematically-certain nature of the entire
certification process, with or without CADynCE.
- Joe
_______________________________________________
Cadynce mailing list
Cadynce at list.isis.vanderbilt.edu
http://list.isis.vanderbilt.edu/mailman/listinfo/cadynce
Dear Joe, Patrick and Gautam:
(I apologize for missing the telecon yesterday; had a visitor.)
I agree 100% with the above comments, and wanted to add a couple more
thoughts:
In bin-packing, we pack objects and declare victory if all objects fit
within the available bins and no bin constraints (single-dimensional or
multi-dimensional) are violated. In the embedded real-time systems domain,
there are other aspects that are not necessarily captured in the simple
bin-packing model. These include the notion of execution patterns
(periodic, sporadic, or aperiodic), local and end-to-end deadlines,
schedulability requirements, OS overheads, interrupt latencies,
interactions among resources (e.g., processing latency when a network packet
arrives at a network interface and then has to go up the protocol-processing
stack, but the network and CPU resources are scheduled by two different
policies using two different schedulers), and potentially non-negligible
sources of non-determinism (e.g., caching interactions among multiple
concurrent processes). We can (and should) take care of requirements such as
schedulability in the bin-packing phase. I also believe that an
appropriate run-time infrastructure is needed to (a) schedule the various
processes/threads to meet their deadlines when run concurrently, (b)
enforce and isolate the run-time behavior of each process/thread, and (c)
match the policies/mechanisms of the run-time infrastructure to the
assumptions of the bin-packing phase. Together, these make certification
of the feasible (but potentially untested) configurations possible.
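To make point (a) concrete, here is a minimal sketch (hypothetical function
names, not from any CADynCE artifact) of folding schedulability into the
packing phase: a first-fit allocator whose "bin constraint" is the classic
Liu and Layland rate-monotonic utilization bound rather than raw capacity.

```python
def ll_bound(n):
    """Liu & Layland sufficient condition: n periodic tasks are
    rate-monotonic schedulable if total utilization <= n*(2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit_schedulable(task_utils, num_procs):
    """First-fit bin packing where admitting a task to a processor
    requires the bin to remain RM-schedulable, not merely non-full."""
    bins = [[] for _ in range(num_procs)]
    for u in task_utils:
        for b in bins:
            trial = b + [u]
            if sum(trial) <= ll_bound(len(trial)):
                b.append(u)
                break
        else:
            return None  # first-fit found no schedulable placement
    return bins

# Tasks given as CPU utilizations; this sufficient test needs no periods.
placement = first_fit_schedulable([0.4, 0.3, 0.3, 0.2, 0.25], num_procs=2)
```

A real allocator would of course also have to model the OS overheads,
interrupt latencies, and cross-resource interactions listed above; this
sketch only replaces the simple capacity check with a schedulability check.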
Thanks,
---
Raj