[tao-users] EXTERNAL: Re: How to force explicit parallelism with TAO?

Melton, Jim jim.melton at lmco.com
Mon Apr 20 14:53:26 CDT 2015


I've been wrestling with getting this going, and I've run across the following limitations. I don't know if these are constraints or bugs.

* Linking with libTAO_RTCORBA causes intermittent errors unless libTAO_RTPortableServer is also linked. The major symptom is that at runtime an exception is thrown from resolve_initial_references("RootPOA") while trying to contact the name service via multicast. That is all kinds of wrong for several reasons, but simply adding RTPortableServer keeps it from happening.

* Creating a POA thread pool requires execution in a real-time context. On Solaris 11, this precludes running in a default-configured zone, or as any user but root. The examples in $TAO_ROOT/DevGuideExamples/RTCORBA confirm this: they ran only in the global zone, as root.
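
For reference, here is roughly what I'm doing, stripped to a minimal sketch. This assumes TAO built with the RTCORBA/RTPortableServer libraries; the POA name and pool sizing are just illustrative, and it only compiles against an installed TAO.

```cpp
// Minimal sketch: server-side RTCORBA thread pool bound to a child POA.
// Assumes linking against TAO_RTCORBA *and* TAO_RTPortableServer
// (omitting the latter is what triggered the RootPOA failure above).
#include <tao/ORB.h>
#include <tao/RTCORBA/RTCORBA.h>
#include <tao/RTPortableServer/RTPortableServer.h>

int main (int argc, char *argv[])
{
  CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

  CORBA::Object_var obj = orb->resolve_initial_references ("RTORB");
  RTCORBA::RTORB_var rt_orb = RTCORBA::RTORB::_narrow (obj.in ());

  obj = orb->resolve_initial_references ("RootPOA");
  PortableServer::POA_var root_poa =
    PortableServer::POA::_narrow (obj.in ());

  // Static pool of 5 threads, no dynamic threads, no request buffering.
  // Creating the pool is where the real-time privilege requirement bites
  // (root in the global zone, in my Solaris 11 testing).
  RTCORBA::ThreadpoolId pool_id =
    rt_orb->create_threadpool (0,      // default stack size
                               5,      // static threads
                               0,      // dynamic threads
                               0,      // default priority
                               false,  // allow_request_buffering
                               0, 0);  // buffered request limits

  CORBA::PolicyList policies (1);
  policies.length (1);
  policies[0] = rt_orb->create_threadpool_policy (pool_id);

  PortableServer::POAManager_var mgr = root_poa->the_POAManager ();
  PortableServer::POA_var pool_poa =
    root_poa->create_POA ("ThreadpoolPOA", mgr.in (), policies);

  // ... activate servants under pool_poa, then:
  mgr->activate ();
  orb->run ();
  return 0;
}
```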

I have scoured Google and even the OCI Developer's Guide, but can't seem to get past this. Why is it so hard to get predictable connection semantics from TAO? Surely I'm missing something obvious?
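
The closest I've come without RTCORBA is TAO's thread-per-connection server concurrency, set through the service configurator. A sketch of the svc.conf fragment (untested in our deployment, so take it as an assumption on my part):

```
# svc.conf: give each client connection its own server thread,
# without RTCORBA or real-time privileges.
static Server_Strategy_Factory "-ORBConcurrency thread-per-connection"
```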
--
Jim Melton
Software Architect, Fusion Programs
Lockheed Martin IS&GS
(720) 922-5584


> -----Original Message-----
> From: Phil Mesnier [mailto:mesnier_p at ociweb.com]
> Sent: Friday, March 27, 2015 3:20 PM
> To: Melton, Jim
> Cc: tao-users at list.isis.vanderbilt.edu
> Subject: Re: EXTERNAL: Re: [tao-users] How to force explicit parallelism with
> TAO?
> 
> Hi Jim,
> 
> > On Mar 26, 2015, at 8:00 PM, Melton, Jim <jim.melton at lmco.com> wrote:
> >>
> >> I don't think I have a clear enough understanding of your
> >> architecture. Are you saying that you want a single subscriber /
> >> repository session to have a dedicated thread on the server side?
> >
> > Dedicated would be OK, but not necessary. What I don't want is for client
> requests to be blocked waiting for a server thread to become available (so
> maybe that is what I want). The key thing is that we need to move data
> quickly. The infrastructure must not be a cause for (excessive) delay.
> >
> 
> Maybe POA thread pooling is what you need. We've talked about it a couple
> of times. The dynamic thread pool feature was designed to emulate Orbix. It
> was funded by a defense contractor performing a major migration from
> Orbix to TAO. They also sponsored the ImR redesign, improved redundancy
> services and lightweight load balancing integrated with the naming service.
> 
> 
> >> If you use RTCORBA on the client you can set up exclusive connections
> >> on the repository object references, then with TPC on the server side
> >> you'll have a single subscriber thread interacting with a single repository
> thread.
> >
> > OK. This is sounding familiar. I think we might have done that in our first
> (aborted) attempt to migrate to TAO 10+ years ago, because I've had a bunch
> of commented out RTCORBA stuff in the code for ages, and Orbix didn't
> require or implement the RTCORBA spec.
> >
> 
> Unfortunately we are not privy to Orbix's implementation. Who knows what
> tricks they perform. The best we can do is characterize the desired behavior
> well enough to define requirements for TAO. If you are able to fund
> the effort, I'd be happy to do it.
> 
> >> You keep using past tense describing the problems so is there still an
> issue?
> >
> > There were two problems. One was the interleaving of calls between
> subscribers on the client. Refactoring to make the "do it" part of callback() a
> separate thread resolved those issues.
> >
> > The other is the data starvation of (at least) one of the subscribers during
> heavy load. I don't believe this is resolved, but neither does it happen every
> day.
> >
> 
> OK.
> 
> > On the other hand, there was a race condition, which I've fixed but not yet
> tested, where a subscriber could lose its reference to the publisher. It's
> possible that the race condition, and not unpredictable thread behavior on
> the server, was the cause of the starvation, because, as you point out, the
> evidence is that the server is using some number of threads to service
> requests from the clients, as the policies should enforce.
> >
> > I won't get to test again until next week. I'll let you know if there are
> further issues.
> >
> 
> Ah. Good luck with the testing!
> 
> Best regards,
> Phil
> 
> --
> Phil Mesnier
> Principal Software Engineer and Partner,   http://www.ociweb.com
> Object Computing, Inc.                     +01.314.579.0066 x225
> 
> 
> 
