<?xml version="1.0" encoding="utf-8"?>
<!--
################################################################################
# HPCC SYSTEMS software Copyright (C) 2012 HPCC Systems®.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
-->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN" "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd">
<book lang="en_US">
<bookinfo>
<title>HPCC Source</title>
<mediaobject>
<imageobject>
<imagedata fileref="images/redswoo0.jpg" />
</imageobject>
</mediaobject>
<author>
<surname>Boca Documentation Team</surname>
</author>
<legalnotice>
<para>
We welcome your comments and feedback about this document via
email to <email>docfeedback@lexisnexis.com</email>. Please include
<emphasis role="bold">Documentation Feedback</emphasis> in the subject
line and reference the document name, page numbers, and current Revision
Number in the text of the message.
</para>
<para>
LexisNexis and the Knowledge Burst logo are registered trademarks
of Reed Elsevier Properties Inc., used under license. Other products and
services may be trademarks or registered trademarks of their respective
companies. All names and example data used in this manual are
fictitious. Any similarity to actual persons, living or dead, is purely
coincidental.
</para>
<para></para>
</legalnotice>
<releaseinfo>
HPCC SYSTEMS software Copyright (C) 2012 HPCC Systems®.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
</releaseinfo>
<date>2011-01-11</date>
<corpname>LexisNexis</corpname>
<copyright>
<year>2011 LexisNexis Risk Solutions. All rights reserved</year>
</copyright>
<mediaobject role="logo">
<imageobject>
<imagedata fileref="images/LN_Horz.gif" scale="45" />
</imageobject>
</mediaobject>
</bookinfo>
<chapter>
<title>Overview</title>
<para>
This manual contains a description of the HPCC sources.
</para>
</chapter>
<chapter>
<title>Getting the sources</title>
<para>
The HPCC Platform sources are hosted on GitHub at
https://github.com/hpcc-systems/HPCC-Platform. You can download a
snapshot of any branch using the download button there, or you can set
up a git clone of the repository. If you are planning to contribute
changes to the system, see the file CONTRIBUTORS at
https://github.com/hpcc-systems/HPCC-Platform/blob/master/CONTRIBUTORS
for information about how to set up a GitHub fork of the project
through which pull-requests can be made.
</para>
</chapter>
<chapter>
<title>Building the system from sources
</title>
<sect1>
<title>Requirements</title>
<para>
The HPCC platform requires a number of third party tools and libraries in order to build.
On Ubuntu 12.04, the following commands will install the required libraries
<programlisting>
sudo apt-get install cmake bison flex libicu-dev libboost-regex-dev \
binutils-dev libxerces-c2-dev libxalan110-dev zlib1g-dev \
libssl-dev libldap2-dev expect libarchive-dev \
libapr1-dev libaprutil1-dev
</programlisting>
</para>
<para>
For building any documentation, the following are also required
<programlisting>
sudo apt-get install docbook
sudo apt-get install xsltproc
sudo apt-get install fop
</programlisting>
</para>
<para>
<emphasis role="bold">NOTE:</emphasis> Installing the above via alternative methods (e.g., building from source) may place installations outside of the searched paths.
</para>
</sect1>
<sect1>
<title>Building the system</title>
<para>
The HPCC system is built using the cross-platform build tool cmake,
which is available for Windows, virtually all flavors of Linux,
FreeBSD, and other platforms. You should install cmake version
2.8.3 or later before building the sources.
On some distros you will need to build cmake from sources if the version
of cmake in the standard repositories for that distro is not modern enough.
It is good practice in cmake to separate the build directory where
objects and executables are made from the source directory, and the
HPCC cmake scripts will enforce this.
To build the sources, create a directory where the built files should
be located, and from that directory, run
<programlisting>
cmake &lt;source directory&gt;
</programlisting>
Depending on your operating system and the compilers installed on it,
this will create a makefile, Visual Studio .sln file, or other build
script for building the system. If cmake was configured to create a
makefile, then you can build simply by typing
<programlisting>
make
</programlisting>
If a Visual Studio solution file was created, you can load it simply
by typing the name:
<programlisting>
lexisnexisrs.sln
</programlisting>
This will load the solution in Visual Studio where you can build in the
usual way.
</para>
</sect1>
<sect1>
<title>Packaging</title>
<para>
To make an installation package on a supported linux system, use the
command
<programlisting>
make package
</programlisting>
This will first do a make to ensure everything is up to date, then will
create the appropriate package for your operating system. Currently supported
package formats are .rpm (for RedHat/CentOS) and .deb (for Debian and
Ubuntu). If the operating system is not one of the above, or is not recognized,
make package will create a tarball.
</para>
<para>
The package installation does not start the service on the machine, so if you
want to give it a go or test it (see below), make sure to start the service manually
and wait until all services are up (mainly wait for EclWatch to come up on port 8010).
</para>
</sect1>
<sect1>
<title>Testing the system</title>
<para>
After compiling, installing the package and starting the services, you can test
the HPCC platform on a single-node setup.
</para>
<sect2>
<title>Unit Tests</title>
<para>
Some components have their own unit tests. Once you have compiled (there is
no need to start the services), you can run them. Supposing you built a Debug
version, from the build directory you can run:
<programlisting>./Debug/bin/roxie -selftest</programlisting>
and
<programlisting>./Debug/bin/eclagent -selftest</programlisting>
</para>
<para>
You can also run the Dali regression self-tests:
<programlisting>./Debug/bin/daregress localhost</programlisting>
</para>
</sect2>
<sect2>
<title>Regression Tests</title>
<para>
After the initial batch of unit tests, which are quick and show only the most
basic errors in the system, you can run the more complete regression tests.
These tests are located in the source directory 'testing/ecl' and you'll need
the HPCC platform up and running to execute them.
</para>
<para>
In order for the regression suite to work, there are some Perl modules that need to be installed as well.
The most efficient method for installing them is to use cpanm, which can itself be installed using the command below
and following the prompted setup instructions. In most cases the suggested defaults are applicable.
<programlisting>
sudo cpan App::cpanminus
</programlisting>
Then install the following list of perl modules:
<programlisting>
sudo cpanm Config::Simple (Required)
sudo cpanm Cwd (Required)
sudo cpanm Exporter (Required)
sudo cpanm File::Compare (Required)
sudo cpanm File::Copy (Required)
sudo cpanm File::Path (Required)
sudo cpanm File::Spec::Functions (Required)
sudo cpanm Getopt::Long (Required)
sudo cpanm IPC::Run (Required)
sudo cpanm Pod::Usage (Required)
sudo cpanm POSIX (Required - However, this is typically
installed by default)
sudo cpanm Text::Diff (Required by the Diff and DiffFull
report types)
sudo cpanm HTML::Entities (Required by the HTML report type)
sudo cpanm Text::Diff::HTML (Required by the HTML report type)
sudo cpanm Template (Required by the HTML report type)
sudo cpanm Term::Prompt (Required if you do not specify a
password in the configuration file)
sudo cpanm Sys::Hostname (Recommended: if available, and it can
find the hostname, the hostname will be
logged)
sudo cpanm Text::Wrap (Optional: if available, makes output of
-listreports neater)
</programlisting>
</para>
<para>
Step 1: Configure your regression suites. This need only be done once.
<programlisting>./runregress -ini=environment.xml</programlisting>
The file 'environment.xml' is normally located in your '/etc/HPCCPlatform'
directory and contains information on how your cluster is set-up, so the
regression engine can reach it. You should see a new file, 'regress.ini'.
Edit it to match your preferred setup.
</para>
<para>
Note: There is a current issue with Roxie tests, so you should comment out
'roxie' in 'setup_clusters'. That will leave you about 650 tests to run.
</para>
<para>
Note 2: There is another issue with eclplus having to live in the current
testing directory. For now, you have to copy or symlink 'eclplus' into that
directory. You can get it from your build directory.
</para>
<para>
Step 2: Create test files. You'll need some files created as part of the
tests. This also needs to be run only once, unless you have cleaned
the files for any reason.
<programlisting>./runregress -setup</programlisting>
There is no reason for this to fail; all queries should execute
successfully.
</para>
<para>
Step 3: Run the regression tests. This takes about 5-10 minutes on a machine
with multiple CPUs/cores. There is an optimum number of parallel
queries; more is not necessarily faster. Start with 50 and adjust up
or down to find a better number for your machine.
<programlisting>./runregress -pq 50 hthor_suite</programlisting>
If some of the queries get locked, pressing CTRL+C won't help. You need to abort
them from the EclWatch interface, or restart the service.
</para>
<para>
If, after it finishes, you want to see the report again, just run:
<programlisting>./runregress -n -report Summary hthor_suite</programlisting>
</para>
<para>
If you want to re-run a single test, just run:
<programlisting>./runregress -n -query anytest.ecl hthor_suite</programlisting>
</para>
<para>
All test results and their expected outputs are in the suite's directory (such
as hthor_suite), in 'out' and 'key' respectively.
</para>
</sect2>
<sect2>
<title>Compiler Tests</title>
<para>
The ECLCC compiler tests rely on two distinct runs: a known good one and your
test build. For normal development, you can safely assume that the OSS/master
branch on GitHub is good. For overnight testing, golden directories need to
be maintained according to the test infrastructure. There are Bash (Linux)
and Batch (Windows) scripts to run the regressions.
</para>
<para>
The basic idea behind these tests is to compare the output files (logs and
XML files) between runs. The log files may change slightly (the comparison
should be good enough to filter out most irrelevant differences), but the XML
files should be identical if nothing has changed. You should only see
differences in the XML where you have changed the code, or where new tests
were added as part of your development.
</para>
<para>
On Linux, there are two steps:
</para>
<para>
Step 1: Check-out OSS/master, compile and run the regressions to populate
the 'golden' directory:
<programlisting>
./regress.sh -t golden -e buildDir/Debug/bin/eclcc
</programlisting>
This will run the regressions in parallel, using as many CPUs as you have,
and using your just-compiled ECLCC, assuming you compiled a Debug version.
</para>
<para>
Step 2: Make your changes (or check out your branch), compile and run again,
this time outputting to a new directory and comparing it to the 'golden' run.
<programlisting>
./regress.sh -t my_branch -c golden -e buildDir/Debug/bin/eclcc
</programlisting>
This will run the regressions in the same way, outputting to the 'my_branch'
directory and comparing it to the golden version, highlighting the differences.
NOTE: If you changed headers that the compiled binaries use, you
must re-install the package (or provide the -i option to the script, pointing
at the new headers).
</para>
<para>
Step 3: Step 2 only listed the differences; now you need to see what they are.
For that, re-run the regression script omitting the compiler, since the only
thing it will do is compare verbosely.
<programlisting>
./regress.sh -t my_branch -c golden
</programlisting>
This will show you all differences, using the same ignore filters as before,
between your two branches. Once you're happy with the differences, commit and
issue a pull-request.
</para>
<para>
TODO: Describe compiler tests on Windows.
</para>
</sect2>
</sect1>
<sect1>
<title>Debugging the system</title>
<para>
On Linux systems, the makefile generated by cmake will build a specific
version (debug or release) of the system depending on the options selected
when cmake is first run in that directory. The default is to build a release
system. In order to build a debug system instead, use the
command
<programlisting>
cmake -DCMAKE_BUILD_TYPE=Debug &lt;source directory&gt;
</programlisting>
You can then run make or make package in the usual way to build the system.
</para>
<para>
On a Windows system, cmake always generates a solution file with both debug and
release target platforms in it, so you can select which one to build within
Visual Studio.
</para>
</sect1>
</chapter>
<chapter>
<title>Coding conventions</title>
<sect1>
<title>Why coding conventions</title>
<para>
Everyone has their own ideas of what the best code formatting style is, but most
would agree that code in a mixture of styles is the worst of all worlds. A
consistent coding style makes unfamiliar code easier to understand and navigate.
In an ideal world, the HPCC sources would adhere perfectly to the coding
standards described here. In reality, there are many places that do not.
These are being cleaned up as and when we find time.
</para>
</sect1>
<sect1>
<title>C++ coding conventions</title>
<para>
Unlike most software projects, HPCC has some very specific
constraints that make many basic design decisions difficult, and the
results often look odd to developers getting acquainted with its code base.
For example, when HPCC was initially developed, most of the common
libraries we have today (such as STL and Boost) were not available or stable
enough.
</para>
<para>
Also, at the beginning, both C++ and Java were being considered as
the language of choice, but development started with C++. So a C++
library that copied most of the behaviour of the Java standard library (at the
time, Java 1.4) was created (see jlib below) to make the transition, should it
ever happen, easier. The transition never happened, but the decisions had been
made and the whole platform is designed on those terms.
</para>
<para>
Most importantly, HPCC's performance constraints can make
seemingly obvious design decisions impractical. One example is the use of
traditional smart pointer implementations (such as boost::shared_ptr or
C++'s auto_ptr), which can lead to a performance hit of up to 20% if used
instead of our internal shared pointer implementation.
</para>
<para>
The last important point to consider is that some
libraries/systems were designed to replace older ones but have not yet
done so. There is a slow movement to deprecate old systems in
favour of consolidating a few as the officially supported ways to use
HPCC (Thor, Roxie), but the old systems could still be used for years in
tests or legacy sub-systems.
</para>
<para>
In a nutshell, expect re-implementations of well-known containers
and algorithms, expect duplicated functionality across sub-systems, and
expect to be required to use less-friendly libraries for the sake of
performance, stability and longevity.
</para>
<para>
For the most part our coding style conventions match those
described at http://geosoft.no/development/cppstyle.html, with a few
exceptions or extensions as noted below.
</para>
<sect2>
<title>Source files</title>
<para>
We use the extension .cpp for C++ source files, and .h or .hpp for header files.
Header files with the .hpp extension should be used for headers that are internal
to a single library, while header files with the .h extension should be used for
the interface that the library exposes. There will typically be one .h file per
library, and one .hpp file per cpp file.
Source file names within a single shared library should share a common prefix to aid
in identifying where they belong.
Header files with extension .ipp (i for internal) and .tpp (t for template) will
be phased out in favour of the scheme described above.
</para>
</sect2>
<sect2>
<title>Java-style</title>
<para>
We adopted a Java-like inheritance model, with macro
substitutions for the basic Java keywords. This changes nothing in the
code, but makes it clearer to the reader what the inheriting class is
doing with its base.
</para>
<para>
<itemizedlist>
<listitem>
<para>
interface (struct): declares an interface (pure virtual class)
</para>
</listitem>
<listitem>
<para>
extends (public): One interface extending another, both are pure virtual
</para>
</listitem>
<listitem>
<para>
implements (public): Concrete class implementing an interface
</para>
</listitem>
</itemizedlist>
</para>
<para>
There is no semantic check, which makes this scheme difficult to
enforce, and has led to code that does not use it being intermixed with
code that does. You should use it whenever possible, and most importantly
in code that already uses it.
</para>
<para>
We also tend to write methods inline, which matches well with
the requirements of C++ templates. We do not, however, enforce the
one-class-per-file rule.
</para>
<para>
See chapter 3.2 for more information on our implementation of
interfaces.
</para>
</sect2>
<sect2>
<title>Identifiers</title>
<para>
Class and interface names are in CamelCase with a leading
capital letter. Interface names should be prefixed with a capital I
followed by another capital. Class names may be prefixed with a C if
there is a corresponding I-prefixed interface name, but need not be
otherwise.
</para>
<para>
Variables, function and method names, and parameters use
camelCase starting with a lower-case letter. Parameters may be
prefixed with an underscore, normally when they are shadowed by local
variables.
</para>
<para>Example:</para>
<para>
<programlisting> class MySQLSuperClass
 {
     void mySQLFunctionIsCool(int _haslocalcopy, bool enablewrite)
     {
         bool haslocalcopy = false;
         if (enablewrite)
             haslocalcopy = _haslocalcopy;
     }
 };
</programlisting>
</para>
</sect2>
<sect2>
<title>Pointers</title>
<para>
Use real pointers when you can, and smart pointers when you have
to. Take extra care to understand the needs of your pointers and
their scope. Most programs can afford a few dangling pointers, but a
high-performance clustering platform cannot.
</para>
<para>
Most importantly, use common sense and a lot of thought. Here
are a few guidelines:
</para>
<para>
<itemizedlist>
<listitem>
<para>
Use real pointers for return values and parameter passing
</para>
</listitem>
<listitem>
<para>
For local variables use real pointers if their lifetime is
guaranteed to be longer than the function (and no exception
is thrown from functions you call), shared pointers otherwise.
</para>
</listitem>
<listitem>
<para>
Use Shared pointers for member variables - unless there is
a strong guarantee the object has a longer lifetime.
</para>
</listitem>
<listitem>
<para>
Create Shared&lt;&gt; with either:
</para>
<itemizedlist>
<listitem>
<para>
Owned&lt;&gt;: if your new pointer will own the
pointer alone (transfer)
</para>
</listitem>
<listitem>
<para>
Linked&lt;&gt;: if you still want to share the
ownership (shared)
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Consider whether your code is critical and use
link/release when necessary
</para>
</listitem>
</itemizedlist>
</para>
<para>
Warning: Direct manipulation of the ownership might
cause Shared&lt;&gt; pointers to lose the pointers, so subsequent
calls to it (like o2-&gt;doIt() after o3 gets ownership) *will* cause
segmentation faults.
</para>
<para>
Refer to chapter 5.3 for more information on our smart pointer
implementation, Shared&lt;&gt;.
</para>
<para>
Methods that return Shared&lt;&gt; pointers, or that use them,
should have a common naming standard.
</para>
<para>
<itemizedlist>
<listitem>
<para>
Foo * queryFoo(): does not return a linked pointer since
lifetime is guaranteed for a set period. Caller should link if it
needs to retain it for longer.
</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para>
Foo * getFoo(): the returned value is linked - it should be
assigned to an Owned pointer, or returned directly.
</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para>
void setFoo(Foo * x): parameters to functions are generally
assumed not to be linked; the callee needs to link them if they
are retained.
</para>
</listitem>
</itemizedlist>
<itemizedlist>
<listitem>
<para>
void setownFoo(Foo * ownedX): some functions do take
pointers that are already linked - here the caller is implicitly
transferring ownership.
</para>
</listitem>
</itemizedlist>
</para>
</sect2>
<sect2>
<title>Indentation</title>
<para>
We use 4 spaces to indent each level. TAB characters should not be used. There is
some discussion about possibly changing to a 2-space indentation convention at some
point in the future.
</para>
<para>
The { that starts a new scope and the corresponding } to close it are placed on a
new line by themselves, and are not indented. This is sometimes known as the Allman
or ANSI style.
</para>
</sect2>
<sect2>
<title>Comments</title>
<para>
We generally believe in the philosophy that well-written code is self-documenting.
Javadoc-formatted comments for classes and interfaces are being added.
</para>
</sect2>
<sect2>
<title>Namespaces</title>
<para>
We do not use namespaces. We probably should, following the Google style guide&apos;s
guidelines - see http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Namespaces
</para>
</sect2>
<sect2>
<title>Other</title>
<para>
We often pretend we are coding in Java and write all our class members inline.
</para>
</sect2>
</sect1>
<sect1>
<title>Other coding conventions</title>
<sect2>
<title>ECL code</title>
<para>
The ECL style guide is published separately.
</para>
</sect2>
<sect2>
<title>Javascript, XML, XSL etc</title>
<para>
We use the commonly accepted conventions for formatting these files.
</para>
</sect2>
</sect1>
</chapter>
<chapter>
<title>Design Patterns</title>
<sect1>
<title>Why Design Patterns?</title>
<para>
Consistent use of design patterns helps make the code easy to understand.
</para>
</sect1>
<sect1>
<title>Interfaces</title>
<para>
While C++ does not have explicit support for interfaces (in the Java sense), an
abstract class with no data members and all functions pure virtual can be used
in the same way.
</para>
<para>
Interfaces are pure virtual classes. They are similar in concept to
Java's interfaces and should be used for public APIs. If you need common
code, use policies (see below).
</para>
<para>
An interface's name must start with an 'I', and the base class for
its concrete implementations should start with a 'C' and have the same
name, e.g.:
</para>
<programlisting> CFoo : implements IFoo { };</programlisting>
<para>
When an interface has multiple implementations, try to stay as
close as possible to this rule. Ex:
</para>
<programlisting> CFooCool : implements IFoo { };
CFooWarm : implements IFoo { };
CFooALot : implements IFoo { };
</programlisting>
<para>
Or, for partial implementation, use something like this:
</para>
<programlisting> CFoo : implements IFoo { };
CFooCool : public CFoo { };
CFooWarm : public CFoo { };
</programlisting>
<para>
Extend current interfaces only on a 'is-a' approach, not to
aggregate functionality. Avoid pollution of public interfaces by having
only the public methods on the most-base interface in the header, and
internal implementation in the source file. Prefer pImpl idiom
(pointer-to-implementation) for functionality-only requirements and
policy based design for interface requirements.
</para>
<para>
Example 1: You want to decouple part of the implementation from
your class, and this part does not implement the interface your
contract requires.
</para>
<programlisting> interface IFoo {
virtual void foo()=0;
};
class CFoo : implements IFoo {
MyImpl *pImpl;
public:
void foo() { pImpl-&gt;doSomething(); }
};
</programlisting>
<para>
Example 2: You want to implement the common part of one (or more)
interface(s) in a range of sub-classes.
</para>
<programlisting> interface ICommon {
virtual void common()=0;
};
interface IFoo : extends ICommon {
virtual void foo()=0;
};
interface IBar : extends ICommon {
virtual void bar()=0;
};
template &lt;class IFACE&gt;
class Base : implements IFACE {
void common() { ... };
}; // Still virtual
class CFoo : Base&lt;IFoo&gt; {
void foo() { 1+1; };
};
class CBar : Base&lt;IBar&gt; {
void bar() { 2+2; };
};
</programlisting>
</sect1>
<sect1>
<title>Reference counted objects</title>
<para>
Shared&lt;&gt; is an in-house smart pointer implementation, close
to boost's intrusive_ptr. It has two derived implementations,
Owned and Linked, which control whether the pointer is linked when the
smart pointer is created from a real pointer (Linked links; Owned takes
over an existing reference). Ex:
</para>
<programlisting> Owned&lt;Foo&gt; = new Foo; // Owns the pointers
Linked&lt;Foo&gt; = myFooParmeter; // Shared ownership
</programlisting>
<para>
Shared&lt;&gt; is thread-safe and uses an atomic reference count
held by each object (rather than by the smart pointer itself, as in
boost's shared_ptr).
</para>
<para>
This means that, to use Shared&lt;&gt;, your class must implement
the IInterface interface, most commonly by extending the CInterface
class (and using the IMPLEMENT_IINTERFACE macro in the public section of
your class declaration).
</para>
<para>
This interface controls how you Link() and Release() the pointer.
This is necessary because in some inner parts of HPCC, the use of a
"really smart" smart pointer would add so many links and releases (on
temporaries, local variables, members, etc.) that it could add up to a
significant performance hit.
</para>
</sect1>
<sect1><title>STL</title><para/></sect1>
</chapter>
<chapter>
<title>Structure of the HPCC source tree</title>
<section>
<title>Introduction</title>
<para/>
</section>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="cmake_modules/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="common/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="dali/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="deployment/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="ecl/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="ecllibrary/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="esp/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="initfiles/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="plugins/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="roxie/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="rtl/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="services/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="system/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="testing/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="thorlcr/sourcedoc.xml" />
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="tools/sourcedoc.xml" />
</chapter>
</book>