[ciao-users] Re: Multithreaded dance_locality_manager method call dispatcher?

William Otte wotte at dre.vanderbilt.edu
Tue May 19 08:52:53 CDT 2015


Hi -

I'll throw in my 2 cents as well.

On 19 May 2015, at 6:58, Hayman, Mark (ES) wrote:

> Hi,
>
> I agree with Johnny that a multiple-LM-process solution to 
> multi-threading is much less complex than diving into the accidental 
> complexities involved in concurrent programming.  Since we use 
> commercial model-based tooling to auto-generate D&C deployment plans 
> for DAnCE, creating new LM processes to host components only takes a 
> few extra seconds via a couple of drag & drop operations in a 
> deployment plan editor GUI.  It's also a great productivity multiplier 
> when programmers without a computer science/engineering background can 
> easily write component business logic for real-world DRE systems 
> without having their code rewritten or "fixed" downstream by software 
> folks to be thread-safe.

I agree strongly with Mark and Johnny.  Most of our research work in 
component-based systems has focused on driving towards a single-threaded, 
non-reentrant programming model.

We have a fork of CIAO that actually enforces this:

* All components are guaranteed to have only a single thread of control, 
regardless of the middleware configuration (or whether you have components 
interacting with multiple middlewares, e.g., DDS and CORBA).  A rough 
sketch of this dispatch idea follows the list.
* Components in a single container may execute concurrently, depending 
on the configuration of the container.
* You can have multiple containers per process, each with its own ORB.
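
To make that first guarantee concrete, here is a rough sketch of the 
basic idea (plain standard C++, not code from our fork; the 
SingleThreadExecutor name is made up purely for illustration): callbacks 
arriving on arbitrary middleware threads are queued and executed by 
exactly one worker thread owned by the component.

// Illustrative sketch only -- not the actual CIAO fork.  It shows one
// way a container could guarantee a single thread of control per
// component: requests arriving from any middleware thread (CORBA, DDS,
// ...) are enqueued and run by exactly one worker thread.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class SingleThreadExecutor {
public:
  SingleThreadExecutor() : worker_([this] { run(); }) {}

  ~SingleThreadExecutor() {
    {
      std::lock_guard<std::mutex> guard(lock_);
      done_ = true;
    }
    ready_.notify_one();
    worker_.join();
  }

  // Called from arbitrary middleware threads; never runs user code here.
  void dispatch(std::function<void()> request) {
    {
      std::lock_guard<std::mutex> guard(lock_);
      queue_.push(std::move(request));
    }
    ready_.notify_one();
  }

private:
  void run() {
    for (;;) {
      std::function<void()> request;
      {
        std::unique_lock<std::mutex> guard(lock_);
        ready_.wait(guard, [this] { return done_ || !queue_.empty(); });
        if (queue_.empty())   // done_ was set and the queue is drained
          return;
        request = std::move(queue_.front());
        queue_.pop();
      }
      request();              // business logic runs on this one thread only
    }
  }

  std::mutex lock_;
  std::condition_variable ready_;
  std::queue<std::function<void()>> queue_;
  bool done_ = false;
  std::thread worker_;
};

int main() {
  SingleThreadExecutor component;
  // Pretend these calls arrive on different ORB / DDS threads.
  std::thread t1([&] { component.dispatch([] { std::cout << "push()\n"; }); });
  std::thread t2([&] { component.dispatch([] { std::cout << "get()\n"; }); });
  t1.join();
  t2.join();
  // Destructor drains any remaining requests, then stops the worker.
}

The point of the pattern is that the component's business logic never 
has to be reentrant or thread-safe, no matter how many ORB or DDS 
threads are delivering requests.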

Sadly I haven't had funded time to fold this work into the public 
release.

>
> I don't know whether it's the case here, but I'm still regularly 
> surprised at the number of "old school" DRE software engineers who 
> continue to design under the assumption that on modern processors the 
> context switch time for what used to be called "heavyweight" processes 
> with their own memory context is prohibitively expensive compared to 
> context switching between "lightweight" threads that share a single 
> memory context within a process.  Thanks to the many hardware advances 
> in modern processors, the difference between the two has virtually 
> disappeared in general use.

I completely agree with Mark's analysis here.  You'll pay slightly 
higher costs for IPC/synchronization (which typically requires a trap 
into the kernel, even in the uncontended case), but I'd argue that for 
the most part the increase in productivity and the decrease in bugs and 
race conditions are worth the cost, except in the most 
performance-critical applications.

Remember: premature optimization is evil.  Implement so your solution is 
safe and correct.  Then profile. Then optimize.
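
If you want to put rough numbers behind this on your own platform, here 
is a small POSIX-only sketch (my own illustration, nothing to do with 
CIAO/DAnCE internals) that times an uncontended in-process mutex 
lock/unlock against a one-byte pipe round trip between two processes:

// Rough, platform-dependent sketch (POSIX only).  Treat the output as a
// ballpark for your machine, not a benchmark of any middleware.
#include <chrono>
#include <cstdio>
#include <mutex>
#include <sys/wait.h>
#include <unistd.h>

int main() {
  const int iterations = 100000;

  // 1. Uncontended mutex: usually stays in user space on modern kernels.
  std::mutex m;
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) {
    m.lock();
    m.unlock();
  }
  auto t1 = std::chrono::steady_clock::now();

  // 2. Pipe round trip: two system calls per process per iteration.
  int to_child[2], to_parent[2];
  if (pipe(to_child) != 0 || pipe(to_parent) != 0) return 1;
  pid_t pid = fork();
  if (pid < 0) return 1;
  if (pid == 0) {                       // child: echo each byte back
    char c;
    for (int i = 0; i < iterations; ++i) {
      read(to_child[0], &c, 1);
      write(to_parent[1], &c, 1);
    }
    _exit(0);
  }
  char c = 'x';
  auto t2 = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) {
    write(to_child[1], &c, 1);
    read(to_parent[0], &c, 1);
  }
  auto t3 = std::chrono::steady_clock::now();
  waitpid(pid, nullptr, 0);

  auto lock_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
  auto pipe_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t3 - t2).count();
  std::printf("mutex lock/unlock: %lld ns/iter\n", (long long)(lock_ns / iterations));
  std::printf("pipe round trip  : %lld ns/iter\n", (long long)(pipe_ns / iterations));
}

On a typical modern machine the pipe round trip costs noticeably more 
per operation than the uncontended lock, but both are usually small 
compared to real component business logic, which is exactly why 
measuring your actual application matters more than guessing.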

/-Will

