[ciao-users] Antwort: Re: EXT :Re: Multithreaded dance_locality_manager method call dispatcher?

Hayman, Mark (ES) mark.hayman at ngc.com
Tue May 19 06:58:38 CDT 2015


Hi,

I agree with Johnny that a multiple-LM-process solution to multi-threading is much less complex than diving into the accidental complexities of concurrent programming.  Since we use commercial model-based tooling to auto-generate D&C deployment plans for DAnCE, creating new LM processes to host components takes only a few extra seconds via a few drag-and-drop operations in a deployment plan editor GUI.  It's also a great productivity multiplier when programmers without a computer science/engineering background can easily write component business logic for real-world DRE systems without having their code rewritten or "fixed" downstream by software folks to make it thread safe.
       
I don't know whether it's the case here, but I'm still regularly surprised at the number of "old school" DRE software engineers who continue to design on the assumption that, on modern processors, the context switch time for what used to be called "heavyweight" processes with their own memory context is prohibitively expensive compared to context switching between "lightweight" threads that share a single memory context within a process.  Thanks to the many HW advances in modern processors, the difference between the two has virtually disappeared in general use.

For instance, a non-virtualized Linux process context switch on a modern Intel processor is in the 1-2 microsecond range, with a thread context switch only up to about 100 nanoseconds faster because the TLB does not have to be flushed.  You can still gain a relatively small advantage with threads by playing with affinity settings and caching, and large cache footprints can negatively affect the context switch time of either.  But in most cases the process-vs.-thread difference in context switch time is negligible, very low either way, and an easy trade to make if SW productivity, reliability and robustness are important to you.  Of course, if you're using an older processor, you may be back to the penalties involved with swapping page tables and such with little HW support.  But given today's HW technology, some of the TAO concurrency strategies are almost OBE (overtaken by events) anyway.

.02
-Mark

-----Original Message-----
From: ciao-users [mailto:ciao-users-bounces at list.isis.vanderbilt.edu] On Behalf Of Johnny Willemsen
Sent: Tuesday, May 19, 2015 2:40 AM
To: ciao-users at list.isis.vanderbilt.edu
Subject: Re: [ciao-users] Antwort: Re: EXT :Re: Multithreaded dance_locality_manager method call dispatcher?

Hi Kai,

Changing the locality manager to run multiple threads is really the more complex solution: it requires that all middleware code and all of your business code be thread safe, which altogether could get pretty complex. Deploying multiple localities makes the deployment plan a little more complex, but that is it; there is no need to review all your designs and code for thread safety.

Best regards,

Johnny



On 05/18/2015 03:42 PM, kai-uwe.schieser at diehl.com wrote:
> Hi there!
> 
> thanks for your quick answers.
> @Mark: Your problem description is close to mine.
> 
> To make things clearer I would like to explain my system structure in 
> more detail.
> I have a dance_locality_manager process running with many different 
> components on a single node. On the 'top-layer' of this system there 
> are components that are accessed via naming service  from a different 
> process (plain CORBA interfaces). Some of the operation calls to 
> these components are long-running, and since they are dispatched by a 
> single thread, they are invoked only one after another in the 
> dance_locality_manager process. Method invocation on the other process is multithreaded.
> 
> I looked into the documentation for some ORB options to handle this topic.
> Do you have some thoughts on "-ORBCollocationStrategy direct" or 
> maybe "-ORBConcurrency thread-per-connection"?
> 
> Definitely, using more dance_locality_manager processes would be a 
> way to solve this. But I would prefer a simpler solution.
> 
> Thanks,
> Kai
> 
> 
> 
> 
> 
> From:        "Hayman, Mark (ES)" <mark.hayman at ngc.com>
> To:        CIAO Users Mailing List <ciao-users at list.isis.vanderbilt.edu>
> Date:        13.05.2015 16:39
> Subject:        Re: [ciao-users] EXT :Re: Multithreaded
> dance_locality_manager method call dispatcher?
> Sent by:        "ciao-users"
> <ciao-users-bounces at list.isis.vanderbilt.edu>
> ------------------------------------------------------------------------
> 
> 
> 
> Kai,
> 
> Will's suggestion to use AMI4CCM will work well in terms of eliminating 
> unnecessary container work thread lockup due to synchronous/blocking 
> calls from component client ports.
> 
> Perhaps you are instead worried about long-running operation 
> implementations on servant/server-side ports rather than client ports? 
> That is, you have two or more such operations on the same component 
> (or on multiple components mapped to the same DAnCE LM component 
> server process) that execute for an extended period before returning 
> to the event queue, preventing other events from being dispatched and 
> processed by that same container.  If so, my suggestion would be to 
> allocate those server ports/operations to two or more different 
> components, if they're currently on just one, and deploy each 
> component to its own DAnCE LM component server process.  That's the easy approach to CIAO+DAnCE multi-threading.
>  Specifically, use multiple LM processes instead of multiple threads 
> in the same process.  It has the nice added decoupling benefit of 
> eliminating all the accidental complexities involved with having to 
> write multi-threaded code and deal with concurrent access issues.
>                
> -Mark
> 
> -----Original Message-----
> From: ciao-users [mailto:ciao-users-bounces at list.isis.vanderbilt.edu] 
> On Behalf Of William Otte
> Sent: Wednesday, May 13, 2015 9:59 AM
> To: CIAO Users Mailing List
> Subject: EXT :Re: [ciao-users] Multithreaded dance_locality_manager 
> method call dispatcher?
> 
> Hi Kai -
> 
> Thanks for using the PRF.  I discarded your message to tao-users, in 
> order to keep this discussion in the appropriate forum, but cleared 
> your moderation flag for that list as well.
> 
> On 13 May 2015, at 8:37, kai-uwe.schieser at diehl.com wrote:
> 
>> CIAO VERSION: 1.2.3
>> TAO VERSION : 2.2.3
>> ACE VERSION : 6.2.3
>>
>> HOST MACHINE and OPERATING SYSTEM:
>> Windows 7
>>
>> $ACE_ROOT/ace/config-win32.h
>>
>> THE PROBLEM AFFECTS: Execution
>>
>> TAO/CIAO are affected.
>>
>> SYNOPSIS:
>> Multithreaded dance_locality_manager method call dispatcher?
>> Interface calls to components in a single dance_locality_manager are 
>> dispatched single threaded.
>>
>> DESCRIPTION:
>>
>> Hi there!
>> I am using CIAO as CCM implementation and have the following problem:
>> So far I deploy my components on a local machine (localhost) and they 
>> are running in a single dance_locality_manager process. It seems that 
>> all incoming interface calls (and also intercomponent calls) are 
>> dispatched by a single thread in the dance_locality_manager. This is 
>> a problem because some interface calls need long processing time. I 
>> would like to call more interface methods in parallel. Is there a TAO 
>> ORB option to enable some kind of multithreaded dispatching?
>> Thanks in advance, Kai.
> 
> That is certainly possible using a svc.conf file (I'll refer you to 
> the TAO documentation for the details on that, but can dig it up if 
> you are still interested).  It is, however, not a currently tested 
> (and thus not supported) configuration, and I don't know whether 
> there will be unintended side effects.
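> For what it's worth, here is a minimal sketch of what such an 
> (untested, as noted above) svc.conf might contain, with the option 
> names as I recall them from the TAO options documentation; please 
> verify against the docs for your TAO 2.2.3 installation:
> 
> ```
> # Handed to dance_locality_manager via -ORBSvcConf svc.conf
> # Dispatch each incoming connection in its own thread:
> static Server_Strategy_Factory "-ORBConcurrency thread-per-connection"
> ```
> 
> As I understand it, "-ORBCollocationStrategy direct" is an ORB_init 
> command-line option rather than a svc.conf entry, and it only affects 
> collocated (in-process) calls, not the dispatch threading of remote ones.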
> 
> A better solution to your problem is to explicitly design for 
> asynchronous behavior for such long-running method calls using AMI4CCM 
> (https://github.com/DOCGroup/ATCD/tree/master/CIAO/connectors/ami4ccm).
> 
> hth,
> /-Will
> _______________________________________________
> ciao-users mailing list
> ciao-users at list.isis.vanderbilt.edu
> http://list.isis.vanderbilt.edu/cgi-bin/mailman/listinfo/ciao-users
> 
> 
> 
> ------------------------------------------------------------------------
> Before printing, think about environmental responsibility.
> 
> Diehl Metering GmbH, Industriestraße 13, 91522 Ansbach Telefon + 49 
> 981 1806 0, Telefax +49 981 1806 615 Sitz der Gesellschaft: Ansbach, 
> Registergericht: Ansbach HRB 69
> Geschäftsführer: Frank Gutzeit (Sprecher), Dr.-Ing. Robert Westphal, 
> Thomas Gastner, Adam Mechel
> 
> 
> The contents of the above mentioned e-mail is not legally binding. 
> This e-mail contains confidential and/or legally protected information.
> Please inform us if you have received this e-mail by mistake and 
> delete it in such a case. Each unauthorized reproduction, disclosure, 
> alteration, distribution and/or publication of this e-mail is strictly 
> prohibited.
> 
> 
> 
> 
> 


