
Conditional expressions erroneously skip operations for AD<AD<double> > #7

Closed

joaoleal opened this issue Feb 24, 2015 · 7 comments

@joaoleal
The optimize() method in ADFun determines which operations can be skipped when using conditional expressions, so that only the required branch needs to be evaluated.
When the base type is AD<AD<double> > or AD<CG<double> >, these operations must not be skipped.

Here is a file to reproduce the issue:

#! /bin/bash -e
# $Id$
# -----------------------------------------------------------------------------
# CppAD: C++ Algorithmic Differentiation: Copyright (C) 2003-13 Bradley M. Bell
#
# CppAD is distributed under multiple licenses. This distribution is under
# the terms of the 
#                     Eclipse Public License Version 1.0.
#
# A copy of this license is included in the COPYING file of this distribution.
# Please visit http://www.coin-or.org/CppAD/ for information on other licenses.
# -----------------------------------------------------------------------------
cat << EOF
Description
EOF
cat << EOF > bug.$$
#include <cassert>
#include <cmath>
#include <iostream>
#include <cppad/cppad.hpp>
int main(void) {
    using namespace CppAD;

    typedef AD<double> adouble;
    typedef AD<adouble> a2double;

    std::vector<double> x{0, 1};

    /**
     * First tape 
     * (using AD<AD<double> >)
     */
    std::vector<a2double> a2x(x.size());
    for (size_t i = 0; i < a2x.size(); i++) {
        a2x[i] = adouble(x[i]);
    }
    Independent(a2x);

    std::vector<a2double> a2y(1);
    a2double a = a2x[0] * a2x[1];
    a2y[0] = CondExpEq(a2x[0], a2double(1.0), a, a2double(0.0));

    // create f: X -> Y 
    ADFun<adouble> f1(a2x, a2y);
    f1.optimize(); //  <<<<<<<<<<<<<<< causes the issue

    /**
     * Second tape
     * (using AD<double>)
     */
    std::vector<adouble> ax{adouble(1), adouble(0)};
    Independent(ax);

    std::vector<adouble> ay = f1.Forward(0, ax);// <<<<<<<<<<<< assertion fails here!

    CppAD::ADFun<double> f2(ax, ay);

    /**
     * Use second tape
     */
    x = {1, 0.5};

    std::vector<double> y = f2.Forward(0, x);

    //std::cout << y[0] << std::endl;
    assert(std::abs(y[0] - x[0] * x[1]) < 1e-10);

}
EOF
# -----------------------------------------------------------------------------
if [ ! -e build ]
then
    mkdir build
fi
cd build
echo "$0"
name=`echo $0 | sed -e 's|.*/||' -e 's|\..*||'`
mv ../bug.$$ $name.cpp
echo "g++ -I../.. --std=c++11 -g $name.cpp -o $name"
g++ -I../.. --std=c++11 -g $name.cpp -o $name
#
echo "./$name"
if ./$name
then
    echo "OK"
else
    echo "Error"
fi
@bradbell
Contributor

This bug has been verified; see
https://github.com/coin-or/CppAD/blob/master/bug/cond_exp.sh

@bradbell bradbell reopened this Feb 24, 2015
@bradbell
Contributor

bradbell commented Mar 1, 2015

I have added a temporary (undocumented) feature,
f.optimize("no_conditional_skip")
which should work for the case above. This temporary fix is necessary when the base type can correspond to variables. A better solution (which does automatic detection of this variability for each conditional expression) will be added to the wish list.

@joaoleal
Author

joaoleal commented Mar 2, 2015

Thank you!
This fixes the issue, as long as people remember to use the option.
Would it be possible to define a default behavior for specific data types?
Something like this:

template<class Base>
struct CppADOptimizationOptions {
    static inline bool multiLevelADType() {
        return false;
    }
    // more option-related methods here
};

template<class Base>
struct CppADOptimizationOptions<CppAD::AD<Base> > {
    static inline bool multiLevelADType() {
        return true;
    }
};

@bradbell
Contributor

bradbell commented Mar 2, 2015

In the long term, I plan to add information to the tape that informs the
optimizer when a variable's base-type value is IdenticalPar; see
http://www.coin-or.org/CppAD/Doc/base_identical.xml#Identical.IdenticalPar
Even in the multilevel AD case, there can be conditional comparisons
that can be skipped (once this information is available on the tape). I
will not be able to work on this for a few months.

For this reason, I have not made
f.optimize("no_conditional_skip")
part of the CppAD API. If the plan above works, I plan to remove the
option at that point (with a message that explains it is no longer necessary).

Brad.


@bradbell
Contributor

I think that I have fixed this issue with the following commit
b564688
and it is no longer necessary to use the
f.optimize("no_conditional_skip")
option. The complication was the difference between parameters and variables at the AD and Base levels. The doxygen documentation for
https://github.com/coin-or/CppAD/blob/master/cppad/local/cskip_op.hpp
discusses this issue.

@bradbell bradbell reopened this May 27, 2015
@joaoleal
Author

Thank you!
I tested with CppADCodegen and it works fine without the
"no_conditional_skip" argument.


@bradbell
Contributor

bradbell commented Jun 7, 2015

This issue has been fixed.

@bradbell bradbell closed this as completed Jun 7, 2015