Subject: [Fwd: Re: Tornado 2.2, cross-compiling woes]
From: Marty Kraimer <[email protected]>
To: [email protected]
Date: Thu, 04 Dec 2003 06:58:15 -0600
Jeff did not send the original message to tech-talk, but I think many of you will find it interesting.

Marty Kraimer
--- Begin Message ---
Subject: RE: Tornado 2.2, cross-compiling woes
From: Jeff Hill <[email protected]>
To: 'Andy Foster' <[email protected]>
Cc: 'Richard Dabney' <[email protected]>, 'Andrew Johnson' <[email protected]>, 'Marty Kraimer' <[email protected]>
Date: Wed, 03 Dec 2003 12:01:27 -0700
> 
> In these days of much faster chips, does optimization still make
> a big difference? Have you any EPICS performance figures between code
> compiled with the optimization on and off?

Of course, computers and networks are faster, but we continue to expect them
to do more work. 

I do see very substantial efficiency differences comparing the CA client
library compiled optimized and not optimized. 
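
A quick way to get a feel for the magnitude is to time the same CPU-bound
loop built both ways. The sketch below is a made-up micro-benchmark of my
own, not a CA measurement; built once with -O0 and once with -O2 it
typically shows a several-fold difference in run time:

    #include <stdio.h>
    #include <time.h>

    /* Made-up micro-benchmark (not a CA measurement).  Printing the
     * checksum keeps the optimizer from deleting the loop outright. */
    static unsigned checksum(const unsigned char *buf, size_t n)
    {
        unsigned sum = 0;
        size_t i;
        for (i = 0; i < n; i++)
            sum = (sum << 1) ^ buf[i];
        return sum;
    }

    int main(void)
    {
        static unsigned char buf[1 << 20];  /* 1 MB, zero-initialized */
        unsigned sum = 0;
        int rep;
        clock_t t0 = clock();
        for (rep = 0; rep < 200; rep++)
            sum ^= checksum(buf, sizeof buf);
        printf("sum=%u elapsed=%.3f s\n",
               sum, (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }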

In general, mistakes made by the optimizer are what I would call hard
failures. That is, if we run through the code that was incorrectly generated,
we will always, 100% of the time, produce an incorrect result. The failure
will rarely, if ever, be intermittent. It is generally easy for a user to
tell us how to reproduce this type of problem.
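
To make the distinction concrete, here is a contrived example of my own,
where the fault lies with the code (signed overflow is undefined in C)
rather than with the compiler, but the behavior changes under optimization
in exactly this deterministic way:

    #include <limits.h>
    #include <stdio.h>

    /* Contrived example: signed overflow is undefined behavior, so an
     * optimizer may legitimately delete this wrap-around check.  The
     * failure is "hard": the optimized binary gives the same answer on
     * every run, so it is trivial to reproduce and report. */
    static int will_wrap(int x)
    {
        return x + 1 < x;   /* at -O2 this may fold to 0 */
    }

    int main(void)
    {
        /* Typically prints 1 built with -O0 and 0 built with -O2,
         * deterministically, run after run. */
        printf("%d\n", will_wrap(INT_MAX));
        return 0;
    }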

In contrast, race conditions in multi-threaded programs are far more
insidious because we can run through the same section of code thousands of
times before we experience the timing window that introduces a failure.
Users must have a strong desire for improved quality, be very observant
about infrequent failures, and be willing to spend the time to produce a
good bug report if this type of problem is to be resolved. It's interesting
to speculate that the issues that Richard observed with an optimized version
of FreeBSD might even have been race conditions occurring only in untested
optimized versions of the kernel. I have certainly seen code before that
failed when the optimizer was turned on, through no fault of the compiler.
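
A minimal sketch of the kind of code I have in mind (a hypothetical
example of my own, not Richard's kernel): a flag polled with no volatile
qualifier and no lock can run correctly unoptimized for years, yet hang
the moment the optimizer hoists the load out of the loop:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical example; build with something like:
     *     cc -O2 -pthread race.c
     * The flag is neither volatile nor protected by synchronization,
     * so the optimizer is entitled to read it once and spin forever. */
    static int done = 0;        /* should be volatile, or better, locked */

    static void *worker(void *arg)
    {
        (void)arg;
        done = 1;               /* unsynchronized store: a data race */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        while (!done)           /* -O2 may hoist this load out of the loop */
            ;
        pthread_join(tid, NULL);
        puts("finished");
        return 0;
    }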

Some would compare using the optimizer to reckless envelope-pushing
activities like overclocking your CPU or driving your car too fast, but
perhaps this is an overstatement of the risks involved. Many types of
failures are routinely detected in large code bases. Failures introduced by the
compiler are, in my humble experience, a fairly small subset of the issues
that we must deal with, and we can deal with this type of failure
efficiently because they are easier to reproduce. If the code is routinely
tested and used with optimizations turned on, then we should converge to a
reliable product.

> I am slightly concerned that if we just supply a piece of code which
> demonstrates this particular problem, how do we know how many
> other subtle problems might be lurking undiscovered in the background
> due to optimization?

In general, we no longer worry much about what happens inside a compiler;
we expect it to generate code that correctly executes the algorithms in the
source. Compilers are very complex, and it is quite possible for them to
make mistakes even when generating non-optimized code.

My experience has been that compiler optimization problems are relatively
rare. Lately, we have certainly seen several significant problems with the
Cygnus version of GCC that WRS is peddling. A possible cause is the rapid
acceleration in activity that occurred when the cloistered GCC team gave up
control to EGCS. I have heard that the 3.0 versions of GCC are much better.
There may also be problems with quality control at Cygnus.

I have been developing with optimizing compilers for some 20 years now. I
have always been able to address the handful of compiler related bugs that
have come up. If we find that the number of problems with the optimizer of a
particular compiler are costing too much of our time then of course we
should turn off the optimization for that compiler, but certainly not for
all compilers. Admittedly, we have recently experienced an unheard-of
cluster of problems with the WRS/Cygnus GCC optimizer, but based on our
long-term experience I am not prepared to give up on it yet. I assume that,
when faced with a simple test case reproducing the problem, WRS will cough
up a patch for the compiler. Otherwise, giving up probably also entails a
rational decision to look for a compiler targeting vxWorks with a more
typical reliability record and fewer maintenance headaches.

Jeff


--- End Message ---
