What's new for Hyrax 1.16.3
CF option:
1. Enhance the support of handling HDF-EOS2 swath multiple dimension map pairs.
The enhancement includes the support of multiple swaths.
This fix resolves the MOD09/MYD09 issue documented in HFRHANDLER-332.
Note for this enhancement:
1) Limitations
(1) The Latitude and Longitude must be 2 dimensional arrays.
(2) The number of dimension maps must be an even number in a swath.
(3) The handling of MODIS level 1B is still kept as the old way.
(4) When there is one pair of dimension maps in a swath and the
geo-dimensions defined in the dimension maps are only used
by 2-D Latitude and Longitude fields, we still keep the old way.
2) Variable/dimension name conventions
(1) The HDF-EOS2 file contains only one swath
The swath name is not included in the variable names.
For Latitude and Longitude, the interpolated variables are named
"Latitude_1", "Latitude_2", "Longitude_1", "Longitude_2".
The dimension and other variable names are just modified by following the
CF conventions.
A DDS example can be found under the following directory:
(2) The HDF-EOS2 file contains multiple swaths
The swath names are included in the variable and dimension names to
avoid name clashing.
The swath names are added as suffixes to variable and dimension names.
For example:
"temperature_swath1", "Latitude_swath1", "Latitude_swath1_1", etc.
A DDS example can be found under the following directory:
3) For applications that don't want to handle dimension maps, one can change
the BES key from "H4.DisableSwathDimMap=false" to
"H4.DisableSwathDimMap=true".
2. Add a BES key to turn off the handling of HDF-EOS2 swath dimension map.
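As a sketch of how such keys are set (the configuration file location varies by installation and is an assumption here), a bes.conf-style entry looks like:

```
# BES configuration entries (file location is installation-specific)
H4.EnableCF=true
H4.DisableSwathDimMap=true
```

Setting H4.DisableSwathDimMap=true makes the handler skip the dimension map handling described above.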
What's new for Hyrax 1.16.2
Clean up some compiler warnings.
What's new for Hyrax 1.16.1
Default option:
Fix the memory leak caused by handling vdata.
What's new for Hyrax 1.15.4
CF option:
1. Map the AIRS version 6 HDF-EOS Grid/Swath attributes to DAP2.
What's new for Hyrax 1.15.3
CF option:
1. Enhance the support of handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD/MYD21
product is different than other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
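For reference, the CF packing rule that these fixes enforce is unpacked = raw * scale_factor + add_offset. A minimal illustrative sketch (Python; the function name is an assumption, not handler code):

```python
def cf_unpack(raw, scale_factor=1.0, add_offset=0.0):
    # Standard CF decoding rule for packed data.
    return raw * scale_factor + add_offset

print(cf_unpack(100, scale_factor=0.5, add_offset=2.0))  # 52.0
```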
What's new for Hyrax 1.15.0
CF option:
1. Enhance the support of handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD16A3
product is different than other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
What's new for Hyrax 1.14.0
CF option:
1. Enhance the support of handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD16A2
product is different than other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
Default option (not developed by The HDF Group):
1. We fixed several coding issues discovered by the Coverity scan. We
also fixed quite a few memory leaks discovered by valgrind.
What's new for Hyrax 1.13.4
This is a maintenance release. No new features are added and no
outstanding bugs are fixed.
********Special note about the version number********
Since Hyrax 1.13.2, Hyrax just treats each handler as a module,
so we stopped assigning an individual version number to the HDF4 handler.
Updated for version 3.12.2 (November 16, 2016, Hyrax 1.13.2)
1) Minor code re-arrangement by removing the DAP4 macro and other misc. minor code updates.
2) The [Known Issues] section is updated to reflect current findings on issues that are either not
appropriate or not worth the effort for the handler to handle.
[Known Issues]
1) For AMSR-E level-3 data, variables like SI_12km_N_18H_DSC don't have any CF scale/offset,
_FillValue, or units attributes. These attributes have to be retrieved from the product document.
Therefore, the plot generated directly from the OPeNDAP handler may not be correct.
If AMSR-E level 3 HDF4 products are still served by NASA data centers, Hyrax's NcML module
can be used with the HDF4 handler to provide the missing CF attributes and generate the correct
plot. For more information on how to use NcML to provide the missing CF attributes,
please check
2) For the HDF-EOS2 swath, the HDF4 handler identifies the latitude and longitude coordinate variables
based on the variable names rather than via the CF units attribute since one cannot use the HDF-EOS2
library to add attributes for an HDF-EOS2 field. Therefore, the HDF-EOS2 swath field attributes
can only be added and retrieved by using the HDF4 APIs. It is difficult and inefficient for the
handler to use HDF4 APIs when handling the HDF-EOS2 swath fields. Given that all NASA HDF-EOS2
swath files we observed simply use name "Latitude" for the latitude field and "Longitude" for the
longitude field, we believe using variable names "Latitude" and "Longitude" to identify latitude
and longitude under swath geo-location fields is sufficient to serve NASA HDF-EOS2 products. Even
if the handler cannot identify the latitude and longitude as coordinate variables, the handler
will still generate the correct DDS and DAS and Data responses although they may not follow the
CF conventions.
3) DataCache may not work on MacOS in rare cases.
Some HDF4 variables can be cached on disk. The cached file name uses the variable name as the key
to distinguish the files from each other. On a MacOS system that is not configured to be case-sensitive
(use 'diskutil info' to check your OS), when the DataCache BES key is turned on, two legal variable
names like TIME and Time with the same shape may share the same cached file name. This may
cause inconsistent data.
Given that the DataCache feature is turned off by default and this feature is intended only
for Linux, the handler just documents this as a known issue.
Updated for version 3.12.1 (April 29, 2016, Hyrax 1.13.1)
1) Improve the calculation of XDim and YDim for the sinusoidal projection.
2) Improve the handling of BES keys by moving the initialization of BES keys to the Handler constructor.
3) Bug Fixes:
(1) Remove the character dot (.) when the _FillValue is NaN.
In the previous version, the character dot (.) was appended to NaN. This prevented NetCDF Java clients from accessing the DAS.
(2) Correct the (0,360) longitude values when the HDF-EOS2 EnableEOSGeoCacheFile is turned on.
Some HDF-EOS2 products make the longitude range from 0 to 360. When the HDF-EOS2 lat/lon cache is on, the longitude values were not retrieved
correctly in the previous versions. This was discovered by testing an AMSR-E product when updating the testsuite for the handler.
[Known Issues]
1) DataCache may not work on MacOS in rare cases.
Some HDF4 variables can be cached on disk. The cached file name uses the variable name as the key to distinguish the files from each other.
On a MacOS system that is not configured to be case-sensitive (use 'diskutil info' to check your OS),
when the DataCache BES key is turned on, two legal variable names like TIME and Time with the same shape may share the same cached file name.
This may cause inconsistent data.
Given that the DataCache feature is turned off by default and this feature is intended only for Linux,
the handler just documents this as a known issue.
2) Note for version 3.12.0: there are no new features or bug fixes in version 3.12.0; only several warnings were removed.
The version was accidentally bumped from 3.11.7 to 3.12.0.
Updated for version 3.11.7 (15 September 2015, Hyrax 1.13.0, 1.12.2, 1.12.1)
New Features:
1) We added 1-D coordinate variables and CF grid_mapping attributes for HDF-EOS2 grid with Sinusoidal projection.
This request is from LP DAAC.
2) We added the DDS and DAS cache for AIRS version 6 products and the data cache for HDF4 data read via the SDS interface.
Several other improvements were also made for AIRS. This request is from GES DISC.
Bug Fixes:
1) We fixed a bug caused by casting values via pointers between different datatypes.
This was first reported by NSIDC and then by OPeNDAP.
2) We fixed the missing-attribute issue for the HDF-EOS2 swath geo-location variables (variables under Geolocation Fields).
3) We also made the representation of the scale_factor and add_offset CF attributes correct for HDF-EOS2 swath geo-location variables.
You may need to read the information about BES keys in the file to see if the default values
need to be changed for your service.
Updated for version 3.11.6 (15 November 2014, Hyrax 1.11.2, 1.11.1, 1.11.0, 1.10.1, 1.10.0)
In this version, we added the following features:
1) Implement an option to cache HDF-EOS2 grid latitude and longitude values to improve performance.
2) Implement an option not to pass file ids, for compatibility with the NcML module.
3) Improve the access of the DDS by using the file structure obtained in the stage of building the DAS.
4) Add the CF support of AIRS version 6 level 2 (swath).
5) Support the mapping of vgroup attributes to DAP.
6) Support the mapping of HDF-EOS2 swath and grid object (vgroup-like) attributes to DAP.
Bug fixes:
1) Obtained the CF add_offset values for MOD/MYD08_M3 products.
2) Fixed the wrong dimension size when a dimension is unlimited.
3) Mapped vdata field attributes to the correct DAS container.
NASA HDF4 and HDF-EOS2 products that are supported up to this release:
HDF-EOS2: AIRS, MODIS, some MISR, some Merra, some MOPITT
HDF4: TRMM version 6 and version 7, some CERES, some OBPG
Performance enhancement:
AIRS version 6 and MODIS 08_M3-like products.
HDF-EOS2 grid lat/lon values are calculated via the cache.
You may need to read the information about BES keys in the file to see if the default values
need to be changed for your service.
Updated for version 3.11.5 (25 April 2014, Hyrax 1.9.7)
In this version, we fix the following datatype mapping issues:
1) We make HDF4 DFNT_CHAR arrays map to DAP strings for variables.
In the previous versions, DFNT_CHAR was mapped to DAP BYTE for variables.
This caused some NASA HDF4 vdata DFNT_CHAR arrays to be mapped to DAP BYTE arrays, which is not right.
The fix makes the vdata fields of some NASA MISR (MISR_AEROSOL etc.) products map correctly to DAP.
2) We fix the mapping of DFNT_INT8 attribute values to DAP.
In the previous versions, DFNT_INT8 (signed 8-bit integer) was mapped to DAP BYTE (unsigned 8-bit integer).
This caused misrepresentation when the value is < 0.
This fix makes some attribute values of some MODIS or MISR non-physical fields map correctly to DAP.
3) For the attribute _FillValue, we enforce that it is always a number even if the attribute datatype is
DFNT_CHAR or DFNT_UCHAR. In this release, we convert the string representation of a _FillValue to a number.
We also make 'true' the default setting of the H4.DisableStructMetaAttr key. This
improves the performance of generating the DAS.
Updated for version 3.11.4 (1 April 2014, Hyrax 1.9.3)
This version improves I/O by reducing the number of HDF4/HDF-EOS2 file open /
close requests.
The default BES key settings are changed. See USAGE document.
Support for TRMM version 7 (3B43, 3B42, 3B31, 3A26, 3A25, 3A12, 3A11, 2B31,
2A25, 2A23, 2A21, 2A12, 1C21, 1B21, 1B11 and 1B01) and some TRMM level 3
version 6 (CSH and 3A46) products is added. The emphasis is on level 3 grid products.
Some memory leaks detected by valgrind are fixed.
Error handling is greatly improved: resources are released properly when errors occur.
The testsuite is updated accordingly.
All the updates are solely applied to the CF option.
[Known Issues]
Note: HFRHANDLER-??? refers to The HDF Group's JIRA ID.
HFRHANDLER-223: (U)CHAR8 type _FillValue attribute value conversion produces
a string representation instead of the real value when the dataset is of a numeric type.
HFRHANDLER-224: Some attributes of Lat/Lon geolocation fields in HDF-EOS2
swath are dropped in the DAS output.
Updated for version 3.11.3 (1 February 2014, Hyrax 1.9.2)
This version optimizes the handler for MOD08 M3 and AIRS version 6 products with
new BES keys. See the USAGE document for details on the new BES keys.
Updated for version 3.11.2 (18 October 2013)
This version changes BaseType::read().
Updated for version 3.11.1 (10 September 2013)
This version addresses some issues in the code base.
Updated for version 3.11.0 (30 May 2013)
Limitation of handling the MOD14 product is documented under USAGE. MOD14 is not
an HDF-EOS2 product, so having the MOD03 geo-location file will not change the
output of the HDF4 handler.
This version fixes the 7.6 of "Known Issues" documented in README for version
3.10.0. Search for "7.6" in this document for details.
Updated for version 3.10.1 (27 Nov 2012)
This version fixes a bug in reading Int16 type datasets from NASA MODIS 09 products.
Updated for version 3.10.0 (30 Sep 2012)
1. Introduction
The HDF4 handler version 3.10.0 is a major update for the CF support. The
README for this version is *solely* for the CF support.
The README consists of several sections: Usage, New Features, Bug fixes,
Code Improvement, Limitations, and Known Issues.
2. Usage
The current version uses BES keys to make the configuration of the CF option
more flexible and easier. The current version also doesn't require
the HDF-EOS2 library to support the CF conventions for non-HDF-EOS2 HDF4 files.
Check the USAGE document for detailed information.
3. New Features
3.1 Testsuite Expansion
If you configure the handler with '--with-hdfeos2', the 'make check' will
test a new set of HDF-EOS2 test files that are added. Please make sure to
clean everything with 'make distclean' if you want to test with a different
configuration option each time.
Source code is also provided for the new set of HDF-EOS2 test files.
3.2 Naming conventions follow HDF5 handler naming conventions for consistency.
Again, this is for the CF support only.
3.2.1 Non-CF-compliant characters
Any non-CF-compliant character in object and attribute names will be
changed to '_'. There is only one exception: if the first character of a name
is '/', the handler will ignore that first '/'. This is a request from NASA data
centers, since a leading '/' means that a path is prefixed to the name. The first '/'
is ignored for better readability.
The object names include all HDF-EOS2 and HDF4 objects supported by the CF
option. They are HDF-EOS2 swath fields and grid fields, HDF4 SDS, HDF4 Vdata,
HDF4 Vdata fields and corresponding attributes.
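The renaming rule above can be sketched as follows (illustrative Python, not the handler's C++ code; the exact character class is an assumption based on the description):

```python
import re

def cf_sanitize(name):
    # Drop a single leading '/', since it only marks a path prefix.
    if name.startswith('/'):
        name = name[1:]
    # Replace every non-CF-compliant character with '_'.
    return re.sub(r'[^A-Za-z0-9_]', '_', name)

print(cf_sanitize('/Data Fields/Total Ozone(DU)'))  # Data_Fields_Total_Ozone_DU_
```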
3.2.2 Multiple HDF-EOS2 swath and grid
For files with multiple HDF-EOS2 swaths and grids, since we have found only a bunch of
AIRS and MODIS grid products in this category, and so far the field names under
these grids can be distinguished by themselves, we simply keep the original
field names. Grid names are not prefixed.
For example, in AIRS.2002.08.24.L3.RetStd_H008.v4.0.21.0.G06104133343.hdf
there are two fields with similar names:
1) Field under vgroup "ascending": TotalCounts_A
2) Field under vgroup "descending": TotalCounts_D
As you can see, the field "TotalCounts" in each vgroup can be distinguished
by its field name anyway. Prefixing a grid name to a field name, such as
"ascending_TotalCounts_A" and "descending_TotalCounts_D", makes the field name
difficult to read.
3.2.3 Non-HDFEOS2 SDS objects
For pure HDF4 SDS objects, the handler prefixes the vgroup path to the SDS object name.
3.2.4 SDS objects in Hybrid HDF-EOS2 files
For SDS objects in hybrid HDF-EOS2 files, to distinguish these SDS objects
from the HDF-EOS2 grid or swath field names, the handler adds
"_NONEOS" to the object name. This is based on the following fact:
these added SDS objects often share the same names as the HDF-EOS2 fields.
For example, band_number can be found both under an HDF-EOS2 swath and under the
root vgroup. Thus, the name clashing needs to be resolved. Adding "_NONEOS" is a better
way to handle the name clashing than simply adding a "_1".
3.2.5 Vdata object name conventions
The vdata name conventions are subject to change. We will evaluate the current
name conventions after hearing feedback from users. Since the main NASA HDF4
object is SDS, hopefully these Vdata name conventions are not critical.
Vdata object mapped to DAP array
Since we would like to make sure that users can easily figure out the DAP
variables mapped from vdata, we use the following name conventions.
For example, suppose a vdata "level1B" has a field "scan" under the group "g1".
The mapped DAP variable name is "vdata_g1_level1B_vdf_scan". The "vdata_"
prefix tells the user that this is an original vdata. The "_vdf_" tells the
user that the string following "_vdf_" is the vdata field name.
The handler also adds the following attributes to the DAS to serve a similar purpose:
std::string VDdescname = "hdf4_vd_desc";
std::string VDdescvalue = "This is an HDF4 Vdata.";
std::string VDfieldprefix = "Vdata_field_";
These attributes will be generated if the corresponding BES key is turned on.
See USAGE for details.
Vdata fields mapped to DAP attributes
For the current version, if the number of vdata records is less than 10,
the vdata fields will be mapped to DAP attributes. This is based on the fact
that a vdata with a small number of records is mostly metadata (like attributes),
so this kind of mapping may be more reasonable. The attribute names
mapped from vdata fields simply follow the convention below:
<vdata path>_<vdata name>_<vdata field name>
The above name convention simply follows the handling of SDS. The
rationale here is that distinguishing between SDS and Vdata fields in DAP
attributes (DAS) is not as critical as distinguishing between SDS and Vdata
fields in DAP arrays (DDS). However, this may cause user confusion.
We will evaluate this approach in the future based on user feedback.
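The two conventions (the DDS variable names from 3.2.5 and the DAS attribute names above) can be sketched as follows (illustrative Python; the helper names are hypothetical, not handler code):

```python
def vdata_dds_name(group_path, vdata_name, field_name):
    # "vdata_" marks an original vdata; "_vdf_" marks the vdata field name.
    return "vdata_" + group_path + "_" + vdata_name + "_vdf_" + field_name

def vdata_das_attr_name(vdata_path, vdata_name, field_name):
    # <vdata path>_<vdata name>_<vdata field name>, following the SDS handling.
    return vdata_path + "_" + vdata_name + "_" + field_name

print(vdata_dds_name("g1", "level1B", "scan"))  # vdata_g1_level1B_vdf_scan
```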
3.3 Name clashings
Name clashing is handled similarly to the HDF5 handler. Only a name that
actually clashes will be changed to a different name. This is according to
NASA users' request. Previously, all corresponding names would be changed if
a name clash was found in the file.
See 3.2 for the details about the naming conventions.
3.4 New BES Keys
Six BES keys are newly added.
H4.EnableCF is the most important key. It must be set to true if the CF
option is used. The other five keys are valid only if H4.EnableCF is set to
true. For more information, check the USAGE file.
3.5 Dimension maps and scale/offset handling for MODIS swath
3.5.1 Interpolating more fields according to the dimension maps
Previously, we only supported the interpolation of latitude and longitude.
In this release, we add support for other fields such as solar_zenith, etc.,
if a dimension map is used to store the field. Additionally, we also provide an
option to directly read these fields (latitude, longitude, solar zenith, etc.)
from MOD03 or MYD03 files distributed by LAADS. For more information on this
option, check the USAGE file.
3.5.2 Enhance the scale/offset handling for MODIS level 1B fields
According to discussions with NASA users, we found that for fields such
as EV_1KM_RefSB and EV_1KM_Emissive, values greater than 65500 are special
values that should not be considered real data signal. So in this
release, we keep values between 65500 and 65535 unchanged when applying the scale and
offset function. Furthermore, we also calculate valid_min and valid_max for
EV_???_RefSB and EV_???_Emissive to ensure that users can plot data easily
with Panoply and IDV.
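A minimal sketch of this rule (illustrative Python; the helper name and the pass-through treatment of the special values are assumptions based on the description above):

```python
def scale_ev_value(raw, scale, offset):
    # Values in [65500, 65535] are special values, not real data signal:
    # keep them unchanged instead of applying the scale/offset function.
    if 65500 <= raw <= 65535:
        return float(raw)
    # MODIS level 1B radiance/reflectance equation: scale * (raw - offset).
    return scale * (raw - offset)
```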
3.6 Dimensions in MERRA products
In this release, the handler uses HDF-EOS2 latitude and longitude fields as
the coordinate variables. The added HDF4 SDS XDim and YDim are used inside the
file as coordinate variables.
3.7 Vdata mapping
The new handler can handle vdatas robustly either as an attribute in DAS or
as an array in DDS. Review the previous section for details.
Several BES keys are also provided for users to have options on how to
generate vdata DAP output. See USAGE for details.
Vdata subsetting is handled robustly except for some HDF-EOS2 Vdata objects.
See section 7.5 for the known issue.
3.8 Handling general HDF4 files with the CF option
The previous release listed the NASA HDF4 products we handled specially to
follow the CF convention. For other HDF4 products (we call them general HDF4
files), we do the following:
A. Make the object and attribute names follow the CF conventions
B. Follow the default handler's mapping to map SDS. In this way, some SDS
objects that have dimension scales can be visualized by Panoply and IDV.
C. Map Vdata according to 3.7.
4. Bug Fixes
4.1 Attribute names are cleaned up. If an attribute name contains non-alphanumeric
characters like '(' or '%', they are replaced with '_' to meet the
CF naming conventions.
4.2 Products that use the SOM projection are handled correctly, and Panoply can
display MISR data on a block-by-block basis. However, please read 7.11 for
the known issue regarding the interpretation of the final visualization output.
4.3 We continued correcting attributes related to scale/offset. For example,
the handler corrected the "SCALE_FACTOR" and "OFFSET" attributes for the AMSR_E L2A
product by renaming them to "scale_factor" and "add_offset". It also cleaned
any extra spaces in attribute names. We also corrected the rule on how to
apply scale/offset equations in MODIS products (e.g., the Solar_Zenith dataset).
Finally, the handler renames the Number_Type attribute to Number_Type_Orig if
a data field's type is changed by the handler's applying of scale/offset (e.g.,
4.4 Strange ECS metadata sequences are handled. Some MODIS products have
the metadata name sequence "coremetadata.0, coremetadata.0.1, ..." instead of
"coremetadata.0, coremetadata.1, ..."
4.5 The mismatched valid_range attribute is removed from the CERES ES4 product.
Panoply fails to visualize the product if the valid_range attribute in the
lat/lon dataset doesn't match the calculated coordinate variable values
returned by the handler. Thus, the handler removes the valid_range attribute from
coordinate variables.
4.6 There was a bug regarding subsetting the vdata field when the stride is
greater than 1. It was fixed in this release. We also found a similar bug
inside the HDF-EOS2 library regarding the subsetting of 1-D HDF-EOS2 swath.
This should be fixed in the next HDF-EOS2 release.
5. Code Improvement
5.1 Refactored codes
There was a huge code overlap between and . Those code
lines were combined and moved to
5.2 Error handling and debugging
Error handling is improved to ensure that all opened HDF4 API
handles are closed when an error occurs. DEBUG macros are replaced with BESDEBUG or BESLog.
6. Limitations
Again, all the limitations here are for the CF support (when the CF option is
enabled) only.
6.1 Unmapped objects
The handler ignores the mapping of images, palettes, annotations, vgroup
attributes, and HDF-EOS2 swath group and grid group attributes. Note that HDF4 global
attributes (attributes from the SD interface), HDF4 SDS objects, HDF4 Vdata
attributes, HDF-EOS2 global (ECS metadata etc.) and field attributes are mapped
to DAP.
6.2 The handler doesn't handle unlimited dimensions.
The result may be unexpected. (e.g., some fields in CER_ES8_TRMM-PFM_Edition2
product cannot be handled.)
6.3 Non-printable vdata (unsigned) character type data will not appear in DAS.
If a vdata char type column has a non-printable value like '\\005', it will not
appear in the DAS when the vdata is mapped to an attribute because the BES key,
H4.EnableVdata_toAttr, is enabled. See the file USAGE for the usage of the key.
6.4 Vdata with string type is handled on a character-by-character basis as a 2D array.
For example, when the vdata is a string of characters like
the handler represents it as
Byte vdata_PerBlockMetadataTime_vdf_BlockCenterTime[VDFDim0_vdata_PerBlockMetadataTime_vdf_BlockCenterTime = 2][VDFDim1_vdata_PerBlockMetadataTime_vdf_BlockCenterTime = 28] = {{50, 48, 48, 54, 45, 49, 48, 45, 48, 49, 84, 49, 54, 58, 49, 55, 58, 49, 50, 46, 54, 57, 51, 51, 49, 48, 90, 0},{50, 48, 48, 54, 45, 49, 48, 45, 48, 49, 84, 49, 54, 58, 49, 55, 58, 51, 51, 46, 52, 51, 56, 57, 50, 54, 90, 0}};
7. Known Issues
7.1 CER_SRBAVG3_Aqua-FM3-MODIS_Edition2A products have many blank spaces in the
long_name attribute.
These products have datasets with a really long long_name attribute (size 277).
However, most of it is blank spaces in the middle. For example, you'll see
DAS output like below:
String long_name "1.0 Degree Regional Monthly Hourly (200+ blank spaces)
CERES Cloud Properties";
String units "unitless";
This is not a bug in the handler. The data product itself has such a long
long_name attribute.
7.2 Longitude values for products that use LAMAZ projection will differ on i386
On i386 machines, the handler will generate longitude values different from those
on x86_64 machines for products that use the Lambert azimuthal (LAMAZ) projection
near the North Pole or South Pole. For example, the handler will return 0
for a 64-bit machine while it returns -135 for a 32-bit machine at the middle
point of longitude in the NSIDC AMSR_E_L3_5DaySnow_V09_20050126.hdf product:
< Float64 Longitude[YDim = 3][XDim = 3] = {{-135, 180, 135},{-90, 0, 90},
{-45, 0, 45}};
> Float64 Longitude[YDim = 3][XDim = 3] = {{-135, -180, 135},{-90, -135, 90},
{-45, -1.4320576915337e-15, 45}};
This is due to the calculations in the current GCTP library that HDF-EOS2
library uses. However, this will not affect the final visualization because,
for North Pole or South Pole, the longitude can be anything from -180 to 180.
So depending on floating-point accuracy, the handler may get different results for
the longitude of this pixel from GCTP. The longitude value is irrelevant at the North
Pole or South Pole for visualization clients.
7.3 IDV can't visualize SOM projection
MISR products that use the SOM projection have 3D lat/lon. Panoply can
visualize them, but IDV cannot. The handler doesn't treat the 3rd dimension as
a separate coordinate variable, and the coordinates attribute on a dataset includes
only the latitude and longitude variable names.
7.4 Vdata is mapped to attribute if there are less than or equal to 10 records
For example, the DAS output of TRMM data 1B21 will show Vdata as an attribute:
String hdf4_vd_desc "This is an HDF4 Vdata.";
Float32 Vdata_field_transCoef -0.5199999809;
Float32 Vdata_field_receptCoef 0.9900000095;
Float32 Vdata_field_fcifIOchar 0.000000000, 0.3790999949, 0.000000000,
-102.7460022, 0.000000000, 24.00000000, 0.000000000, 226.0000000, 0.000000000,
0.3790999949, 0.000000000, -102.7460022, 0.000000000, 24.00000000, 0.000000000,
This is part of the vdata handling convention, not an issue. However, we
list it here just for the user's convenience. See 3.7 for more information.
7.5 Vdata subsetting in HDF-EOS2 data products may not work.
Subsetting HDF-EOS2 vdata with large step index (e.g. a_vdata[0:999:999])
may not work due to a bug in the HDF-EOS2 library. The bug has been reported to the
HDF-EOS2 developer and should be fixed in the next HDF-EOS2 release. Reading
the entire Vdata is OK.
7.6 DDX generation will fail on the PO.DAAC AVHRR product.
For example, the handler can't generate DDX output for the NASA JPL PO.DAAC AVHRR
product 2006001-2006005.s0454pfrt-bsst.hdf. Please see OPeNDAP ticket #1930
for details.
7.7 It is possible to have name clashing between dimension names and variable
names. Currently, the handler only checks name clashing among the variables
and name clashing among the dimensions, not the combined set. Here's the reason:
Many good COARDS files will have the following layout:
If we want to check name clashing for the combined set, this kind of good
file will always have name clashes. And depending on the code flow, the
final layout may become something like:
These are absolutely bad names for normal users. Instead, if we don't consider
the combined set, the chance of name clashing due to changing the
conflicting coordinate variable names is very rare. So we may not do this at
all until we really find a typical product that causes a problem.
7.8 long_name attribute for <variable name>_NONEOS
The handler generates a long_name attribute to indicate the original variable
name for the SDS objects after renaming them with the _NONEOS suffix. Those
_NONEOS variables appear in hybrid files --- files that have additional HDF4
objects written by the HDF4 APIs on top of the existing HDF-EOS2 files. This
addition of long_name attribute cannot be turned off using the
H4.EnableVdataDescAttr=false key described in USAGE document.
7.9 An empty dimension name creates a variable with an empty name.
It is possible to create a dataset with no dimension name in the HDF-EOS2 library.
In such a case, the handler generates a fake dimension variable without a
dataset name in the DDS, like below:
Int32 [4];
Since there's no dataset name, data reading will also fail.
7.10 Special CF handling for some products
The handler doesn't correct scale_factor/add_offset/_FillValue for every HDF4
product to make it follow the CF conventions.
For example, the handler doesn't apply the scale and offset function (log
equation) for OBPG CZCS Level 3 products.
The handler doesn't insert or correct the fill value attribute for some OBPG
products, so their plots may include fill values if you visualize them. An OPeNDAP
server administrator can fix these easily by using the NcML handler.
The PO.DAAC AVHRR product 2006001-2006005.s0454pfrt-bsst.hdf has "add_off" and
doesn't specify a fill value attribute. Therefore, the final visualization
image will not be correct for such a product.
7.11 The bit shifting required in MISR products is not handled.
Some datasets in MISR products combine two datasets into one, which requires
bit shifting for the correct interpretation of the data. The handler doesn't
perform such an operation, so the final visualization image may not be correct.
(e.g., Blue Radiance in MISR_AM1_GRP_ELLIPSOID_GM_P117_O058421_BA_F03_0024.hdf)
7.12 Subsetting through Hyrax HTML interface will not work on LAMAZ products.
You cannot subset Latitude_1 and Longitude_1 datasets using the HTML form.
Checking the check box will not insert any array subscript text into the
text box. Please see OPeNDAP ticket #2075 for details.
Kent Yang (
Updated for version 3.9.4 (19 Jan, 2011)
If your system is non-i386, such as a 64-bit architecture, please read the
IMPORTANT NOTE in the INSTALL document regarding the '--with-pic' configuration option
(i.e., the '-fPIC' compiler option). You need to build both the HDF4 (and HDF-EOS2)
libraries with the '--with-pic' option if you encounter a linking problem.
1. Fixed the bug in uint16/uint32 type attribute handling.
2. The following bug fixes apply only to the --with-hdfeos2 configuration option.
2.1 Corrected the handling of scale/offset for MODIS products because the
MODIS scale/offset equation is quite different from the CF standard.
There are three different "scale_factor" and "add_offset" equations in MODIS
data files:
1) For MODIS L1B, MODIS 03,05,06,07,08,09A1,17 and ATML2 level 2 swath
products, MCD43B4, MCD43C1, MOD and MYD 43B4 level 3 grid files, the scale
offset equation is
correct_data_value = scale * (raw_data_value - offset).
2) For MODIS 11 level 2 swath products, the scale offset equation is
correct_data_value = scale * raw_data_value + offset.
3) For MODIS 13, MODIS 09GA, and MODIS 09GHK, the scale offset equation is
correct_data_value = (raw_data_value - offset) / scale_factor.
We decide the type based on the group name.
If the group name contains "L1B", "GEO", "BRDF", "0.05Deg", "Reflectance",
"MOD17A2", "North", "mod05", "mod06", "mod07", "mod08", or "atm12",
it is type 1.
If the group name contains "LST", it is type 2.
If the group name contains "VI", "1km_2D", or "L2g_2d", it is type 3.
For type 1, use (raw_value-offset)*scale.
For type 2, use (raw_value*scale+offset).
For type 3, use (raw_value-offset)/scale.
For recalculation of MODIS data, one of the following conditions must be met:
"radiance_scales" and "radiance_offsets" attributes are available, or
"reflectance_scales" and "reflectance_offsets" attributes are available, or
"scale_factor" and "add_offset" attributes are available.
If any of the above conditions is met, recalculation will be applied;
otherwise nothing will happen. If the scale is 1 and the offset is 0, we
skip the recalculation to improve performance.
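The group-name tests and the three equations above can be sketched in Python
(an illustrative sketch; the handler itself is written in C++, and the
function names here are invented for the example):

```python
# Substring lists taken from the description above.
TYPE1_KEYS = ["L1B", "GEO", "BRDF", "0.05Deg", "Reflectance",
              "MOD17A2", "North", "mod05", "mod06", "mod07", "mod08", "atm12"]
TYPE2_KEYS = ["VI", "1km_2D", "L2g_2d"]      # e.g., MODIS 13, 09GA, 09GHK
TYPE3_KEYS = ["LST"]                         # e.g., MODIS 11 level 2 swath

def modis_type(group_name):
    """Classify a group by the substrings its name contains."""
    if any(k in group_name for k in TYPE1_KEYS):
        return 1
    if any(k in group_name for k in TYPE2_KEYS):
        return 2
    if any(k in group_name for k in TYPE3_KEYS):
        return 3
    return 0  # unknown: leave the data unchanged

def apply_scale_offset(raw, scale, offset, mtype):
    """Apply the per-type MODIS scale/offset equation to one raw value."""
    if scale == 1 and offset == 0:           # identity: skip recalculation
        return float(raw)
    if mtype == 1:
        return scale * (raw - offset)
    if mtype == 2:
        return (raw - offset) / scale
    if mtype == 3:
        return scale * raw + offset
    return float(raw)
```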
Data values are adjusted based on their type. If "scale_factor" and
"add_offset" attributes are not available, the "radiance_scales" and
"radiance_offsets" attributes, or the "reflectance_scales" and
"reflectance_offsets" attributes, are used instead.
After adjustment, the data type is uniformly converted to float so as not
to lose precision, even if the original data type is integer. The
"valid_range" attribute is removed accordingly, as it no longer reflects
the actual values.
Since some netCDF visualization tools will apply the linear scale and offset
equation to the data values if the CF "scale_factor" and "add_offset"
attributes appear, these two attributes are renamed to "orig_scale_factor"
and "orig_add_offset" respectively to prevent a second adjustment.
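The renaming can be sketched as follows; this is an illustrative Python
sketch operating on a plain attribute dictionary, not the handler's actual
C++ code:

```python
def rename_cf_scaling_attrs(attrs):
    """Rename CF scaling attributes so generic netCDF clients don't apply
    a second linear adjustment to already-corrected data."""
    renamed = dict(attrs)
    for old, new in (("scale_factor", "orig_scale_factor"),
                     ("add_offset", "orig_add_offset")):
        if old in renamed:
            renamed[new] = renamed.pop(old)
    return renamed
```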
2.2 Latitude and longitude are provided for HDF-EOS2 grid files that use
the Space-Oblique Mercator (SOM) and Lambert Azimuthal Equal Area (LAMAZ)
projections.
We added the support for LAMAZ projection data such as MOD29 from NSIDC.
For grid files using HDF-EOS2 Lambert Azimuthal Equal Area(LAMAZ) projection,
the latitude and longitude values retrieved from the HDF-EOS2 library include
infinite numbers. Those infinite numbers are removed and replaced with new
values through interpolation. Therefore, an HDF-EOS2 grid file with LAMAZ
projection can be served correctly.
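The replacement can be sketched along one grid row as follows; this is an
illustrative Python sketch assuming simple linear interpolation between the
nearest finite neighbors, not the handler's actual algorithm:

```python
import math

def fill_nonfinite(values):
    """Replace non-finite entries (e.g., the infinite lat/lon values
    returned for some LAMAZ cells) by linear interpolation between the
    nearest finite neighbors; edge gaps copy the nearest finite value."""
    vals = list(values)
    finite = [i for i, v in enumerate(vals) if math.isfinite(v)]
    for i, v in enumerate(vals):
        if math.isfinite(v):
            continue
        left = max((j for j in finite if j < i), default=None)
        right = min((j for j in finite if j > i), default=None)
        if left is not None and right is not None:
            t = (i - left) / (right - left)
            vals[i] = vals[left] + t * (vals[right] - vals[left])
        elif left is not None:
            vals[i] = vals[left]
        elif right is not None:
            vals[i] = vals[right]
    return vals
```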
2.3 Fixed a memory release error that occurs on iMac (OS X Lion) with an
STL map from string to string.
2.4 For OBPG L3m products, two additional CF attributes, "scale_factor" and
"add_offset", are added if their scaling function is linear. The values of
these two attributes are copied directly from file attributes, "Slope" and
2.5 Known Bugs:
1) Attribute names are not sanitized if they contain non-CF-compliant
characters such as '('. The NSIDC MOD29 data product is a good example.
2) If different scale/offset rules should be applied to different datasets,
as in the MOD09GA product, the current handler cannot handle them properly.
We apply the scale/offset rule globally on a per-file basis, not on a
per-dataset basis, so CF visualization clients like IDV and Panoply will
not display some datasets correctly: they apply the scale/offset rule
according to the CF convention, which doesn't match MODIS's rule.
Updated for 3.9.3 (21 Aug. 2011)
Fixed a lingering issue with the processing of HDF-EOS attributes when
the handler is not compiled with the HDF-EOS library. The handler was
returning an error because those attributes were not parsing
correctly. In the end, it appears there were two problems. The first
was that some files contain slightly malformed EOS attributes: the
'END' token is missing the 'D'. The second was that this triggered a
bogus error message.
Fixed the nagging 'dim_0' bug where some files with variables that use
similar names for arrays trigger a bug in the code that merges the DAS
into the DDS. The result was that sometimes the code tried to add a
dim_0 attribute to a variable that already had one. I fixed this by
correcting an error in the way the STL's string::find() method was used.
Updated for 3.9.2 (17 Mar. 2011)
In this patch, we added the following three features:
1. We added the mapping of SDS objects that were added with HDF4 APIs to
an HDF-EOS2 file. These SDS objects are normally NOT physical fields,
so they are not supposed to be plotted by Java tools such as IDV and Panoply.
The attributes and values of these SDS objects may be useful for end users.
2. We also fixed a bug in handling MERRA data. In the previous
release, the unit of time was not handled correctly. This release fixes
this bug under the condition that the file name of the MERRA data must
start with MERRA.
3. We also enhanced the support for mapping HDF4 files that use HDF4
SDS dimension scales. In particular, we made a patch specifically for P.O.
DAAC's AVHRR files. Now, with enough heap space, IDV can visualize
AVHRR files via OPeNDAP.
What we haven't done:
1. We haven't mapped the Vdata objects added by using HDF4 APIs to an
HDF-EOS2 file.
2. We haven't handled the plotting of vertical profile files (such
as MOP Level 2 data). More investigation needs to be done on how IDV
can handle them.
3. Other limitations listed in Section III of 3.9.1 that are not
addressed above.
Kent Yang (
Updated for 3.9.1 (14 Sep. 2010)
In this release, we greatly enhance support for access to NASA
HDF-EOS2 and HDF4 products. The whole note for this release includes three
sections:
Section I. Configuration
The handler is enhanced to support the access of many NASA HDF-EOS2
products and some NASA pure HDF4 products by many CF-compliant
visualization clients such as IDV and Panoply. To take advantage of
this feature, one MUST use HDF-EOS2 library and configure with the
following option:
./configure --with-hdf4=<Your HDF4 library path>
--with-hdfeos2=<Your HDF-EOS2 library path>
--prefix=<Your installation path>
Omitting the "--with-hdfeos2" option results in configuring the default
HDF4 OPeNDAP handler. The HDF4 handler with the default options can NOT
make the NASA HDF-EOS2 products and some NASA pure HDF4 products work
with CF-compliant visualization clients.
Some variable paths are pretty long (>15 characters). The COARDS
conventions require that the number of characters in a field name not
exceed 15, so the above configuration option may cause problems for
OPeNDAP clients that still follow the COARDS conventions. To compensate
for that, we provide a configuration option that shortens each name so
that it doesn't exceed 15 characters. To address the potential
name-clashing issue, both options may change some variable names so that
unique variable names are present in the OPeNDAP output. To best
preserve the original variable names, we recommend not using the
--enable-short-name option unless necessary. To configure the handler
with the short-name option, do the following:
./configure --with-hdf4=<Your HDF4 library path>
--with-hdfeos2=<Your HDF-EOS2 library path>
--prefix=<Your installation path> --enable-short-name
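As an illustration of the kind of renaming involved, the sketch below
truncates names to 15 characters and appends a numeric suffix on clashes;
this is an invented example, and the actual --enable-short-name scheme
differs:

```python
def shorten_names(names, limit=15):
    """Truncate names to `limit` characters; when two names truncate to
    the same string, append a numeric suffix to keep the output unique.
    (A sketch only: it does not guard against a suffixed name colliding
    with a literal input name.)"""
    counter = {}
    out = []
    for name in names:
        short = name[:limit]
        n = counter.get(short, 0)
        counter[short] = n + 1
        if n:
            suffix = "_" + str(n)
            short = short[:limit - len(suffix)] + suffix
        out.append(short)
    return out
```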
To find the information on how to build the HDF-EOS2 library, please
To build RPMs by yourself, check the directory 'build_rpms_eosoption'.
Section II. NASA products that are supported to be accessed via Java
and other OPeNDAP visualization clients
The following NASA HDF-EOS2 products are tested with IDV and Panoply;
check the Limitation section for the limitations:
Many MODIS products
AMSR_E/NISE products
The following NASA special HDF4 products are tested with IDV and Panoply;
check the Limitation section for the limitations:
1). TRMM
TRMM Level 1B and Level 2B Swath
TRMM Level 3 Grid 42B and 43B
2). OBPG(Ocean Color)
SeaWiFS/MODIST/MODISA/CZCS/OCTS level 2 and level 3m(l3m)
3). Some LaRC CERES products
CER_ES4_Aqua-FM3_Edition1-CV or similar one
CER_ISCCP-D2like-Day_Aqua-FM3-MODIS or similar one
CER_ISCCP-D2like-GEO_ or similar one
CER_SRBAVG3_Aqua or similar one
CER_SYN_Aqua or similar one
CER_ZAVG or similar one
Section III. Limitations
1. Visualization clients and http header size
1). Visualization slowness or even failures with IDV or Panoply
clients for large fields. We found that for a large variable
array (>50 MB), the visualization of the variable is very slow.
Sometimes, IDV or Panoply may even generate an "out of memory" error.
2). Some NASA HDF files (some CERES files, e.g.) include many (a few
hundred) fields, and the field names are long. This can cause the
HTTP header to exceed the default maximum header size, and a failure
will occur. To serve those files, please increase your maximum HTTP
header size by adding maxHttpHeaderSize="819200" to the line in your
server.xml containing <Connector port="8080" protocol="HTTP/1.1" ...>.
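For example, the Connector element in Tomcat's conf/server.xml would look
like this (a sketch; the port, protocol, and other attribute values shown
are illustrative defaults):

```xml
<!-- conf/server.xml: raise the maximum HTTP header size -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxHttpHeaderSize="819200" />
```

Restart Tomcat after editing server.xml for the change to take effect.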
2. HDF-EOS2 files
1) HDF-EOS2 Lambert Azimuthal Equal Area (LAMAZ) projection grids
For LAMAZ projection data, the latitude and longitude values retrieved
from the HDF-EOS2 library include infinite numbers, so an HDF-EOS2
grid file with the LAMAZ projection cannot be served correctly.
2) Latitude and longitude values that don't follow CF conventions
2.1) Missing (Fill) values inside latitude and longitude fields
Except for the HDF-EOS2 geographic (also called equidirectional,
equirectangular, or equidistant cylindrical) projection, clients may
NOT display the data correctly.
2.2) 3-D latitude and longitude fields
Except for some CERES products listed in Section II, clients may NOT
display the data correctly if the latitude and longitude fields are
3-D arrays.
3) HDF-EOS2 files having additional HDF4 objects
Some HDF-EOS2 files have additional HDF4 objects. The objects may be
vdata or SDS; that is, some contents are added to an HDF-EOS2
file by using the HDF4 APIs directly after the HDF-EOS2 file is
created. The HDF-EOS2 API may not retrieve the added information. Up
to this release, we found that this information is mainly related to
metadata, and that metadata may not be critical for visualizing and
analyzing the real physical data variables. So in this release, those
objects are currently ignored. One should NOTE that attributes of
existing HDF-EOS2 data variables are NOT ignored.
4) Variables stored as 3-D or 4-D arrays
For some variables stored as 3-D or 4-D arrays, the coordinate
variables of the third or fourth dimension are either missing or hard
to find. The handler will use integer values (1, 2, 3, ...) to
represent the third or fourth dimension as levels. Clients can
still visualize the data in a horizontal plane, level by level.
3. Pure HDF4 files
All pure HDF4 (i.e., non-HDF-EOS2) products we've tested are listed
under Section II. Other pure HDF4 products are not tested and may
NOT be visualized by Java OPeNDAP clients.
4. Misc.
For pure HDF4 products, currently attributes in a vgroup(not SDS or SD)
are not mapped to DAP.
To speed up performance, we chose not to generate the structMetadata,
coreMetadata, and archiveMetadata attributes for some CERES products.
For applications that need these, please contact or . You can also post
a message at .
Kent Yang(
Updated for 3.8.1
The OPeNDAP HDF4 Data Handler is enhanced by adding two more configuration
options:
1) --enable-cf
2) --use-hdfeos2=/path/to/hdfeos2_library
The option 1) uses the StructMetadata parser. A valid HDF-EOS2 file always
has the StructMetadata information and the parser can infer geolocation
information from the StructMetadata. By retrieving such information, the
HDF4 handler can generate DAP Grids that OPeNDAP visualization clients can
display on a world map.
Option 2) REQUIRES option 1) and uses the HDF-EOS2 library instead of
the StructMetadata parser. The benefit of using the HDF-EOS2 library is
tremendous: it can support more HDF-EOS2 files by handling different
projections like polar and sinusoidal. In addition, it can detect
anomalies that are common in some HDF-EOS2 files and handle them
intelligently. Thus, we recommend that the server administrator install
the HDF-EOS2 library first and configure the handler with BOTH options
1) and 2).
Please note that the enhanced handler has some limitations.
o No support for using the HDF4 handler cache directory.
o No support for Grids other than the geographic 1-D projection. However,
option 2) will make some Grids with other projections (polar, sinusoidal)
work.
o No vgroup to DAP structure mapping. Thus, files that have the same
dataset name under different vgroups will throw a DDS semantics violation
error.
o No support for files that have the same dimension names with different
dimension sizes. For example, if a swath variable "A" has dimensions
lat=360 and lon=1440 (e.g., A[lat=360][lon=1440]) and another swath
variable "B" has dimensions lat=180 and lon=720 (e.g.,
B[lat=180][lon=720]), the handler will throw an inconsistent-dimension
error.
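The dimension-size consistency rule above can be sketched as follows
(illustrative Python, not the handler's code; the layout mirrors the A/B
example):

```python
def check_dimensions(variables):
    """Raise if any dimension name is used with two different sizes.
    `variables` maps variable name -> {dimension name: size}."""
    sizes = {}
    for var, dims in variables.items():
        for name, size in dims.items():
            if sizes.setdefault(name, size) != size:
                raise ValueError("inconsistent dimension %r in %r: %d vs %d"
                                 % (name, var, sizes[name], size))
    return sizes
```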
Updated for 3.7.12 (16 March 2009)
This is the OPeNDAP HDF4 Data Handler. It is used along with the OPeNDAP
DAP Server.
For information about building the OPeNDAP HDF4 Data Handler, see the
INSTALL file.
This handler uses a configuration parameter, set in the bes.conf file, to
control where copies of some metadata objects built by the server are
cached. By default this cache is in /tmp - you are encouraged to change
that. Set the location using the 'HDF4.CacheDir' parameter. For example,
if you have set the BES.CacheDir parameter to /var/run/bes/cache you might
set HDF4.CacheDir to /var/run/bes/hdf4_cache.
A configuration editing helper script, `', is provided in
this package for easy configuration of the Hyrax BES server, designed to
edit bes.conf. The script is called using:
 [<bes.conf file to modify> [<bes modules dir>]]
The `bes-conf' make target runs the script while trying to select paths
cleverly, and should be called using:
make bes-conf
Test data are also installed, so after installing this handler, Hyrax
will have data to serve providing an easy way to test your new
installation and to see how a working bes.conf should look. To use this,
make sure that you first install the bes, and that dap-server gets
installed too. Finally, every time you install or reinstall handlers,
make sure to restart the BES and OLFS.
This data handler is one component of the OPeNDAP DAP Server; the server
base software is designed to allow any number of handlers to be
configured easily. See the DAP Server README and INSTALL files for
information about configuration, including how to use this handler.
Copyright information: This software was originally written at JPL as
part of the DODS NASA ESIP Federation Cooperative Agreement Notice. The
original copyright described free use of the software 'for research
purposes' although it was not clear what exactly those were. In Spring
of 2003 we (OPeNDAP) sought clarification of that language and
JPL/CalTech asked us to change the copyright to the copyright text now
included in the code.
In Fall of 2005 we decided to release the software under the LGPL, based
on a previous discussion with personnel at JPL.
James Gallagher
Support for HDF data types:
HDF Version: This release of the server supports HDF4.2r1. It also
supports reading/parsing the HDF-EOS attribute
information which is then available to DAP clients.
SDS: This is mapped to a DAP2 Grid (if it has a dimension
scale) or Array (if it lacks a dim scale).
Raster image: This is read via the HDF 4.0 General Raster
interface and is mapped to Array. Each component of a
raster is mapped to a new dimension labeled
accordingly. For example, a 2-dimensional, 3-component
raster is mapped to an m x n x 3 Array.
Vdata: This is mapped to a Sequence, each element of
which is a Structure. Each subfield of the Vdata is
mapped to an element of the Structure. Thus a Vdata
with one field of order 3 would be mapped to a
Sequence of 1 Structure containing 3 base types.
Note: Even though these appear as Sequences, the data
handler does not support applying relational
constraints to them. You can use the array notation to
request a range of elements.
Attributes: HDF attributes on SDSs and rasters are
straightforwardly mapped to DAP attributes (HDF
doesn't yet support Vdata attributes). File
attributes (both SDS and raster) are mapped as
attributes of a DAP variable called "HDF_GLOBAL"
(by analogy to the way DAP handles netCDF global
attributes, i.e., attaching them to "NC_GLOBAL").
Annotations: HDF file annotations are mapped in the DAP to
attribute values of type "String" attached to the
fake DAP variable named "HDF_ANNOT". HDF
annotations on objects are currently not read by
the server.
Vgroups: Vgroups are straight-forwardly mapped to DAP
Structures.
Special characters in HDF identifiers:
A number of non-alphanumeric characters (e.g., space, #, +, -) used in HDF
identifiers are not allowed in the names of DAP objects, object components
or in URLs. The HDF4 data handler therefore deals internally with
translated versions of these identifiers. For the translation, the WWW
convention of escaping such characters by replacing them with "%" followed
by the hexadecimal value of their ASCII code is used. For example, "Raster
Image #1" becomes "Raster%20Image%20%231". These translations should be
transparent to users of the server (but they will be visible in the DDS,
DAS and in any applications which use a client that does not translate the
identifiers back to their original form).
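In Python, urllib.parse implements the same WWW escaping convention and
produces the same result for the example above:

```python
from urllib.parse import quote, unquote

# Replace each character that is illegal in a DAP identifier or URL with
# '%' followed by the two-digit hex value of its ASCII code.
escaped = quote("Raster Image #1", safe="")   # "Raster%20Image%20%231"
restored = unquote(escaped)                   # back to "Raster Image #1"
```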
Known problems:
Handling of floating point attributes:
Because the DAP software encodes attribute values as ASCII strings there
will be a loss of accuracy for floating point attributes. This loss of
accuracy is dependent on the version of the C++ I/O library used in
compiling/linking the software (i.e., the amount of floating point
precision preserved when outputting to ASCII is dependent on the library).
Typically it is very small (e.g., at least six decimal places are
preserved).
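A quick Python illustration of the precision loss (the six-significant-digit
format here stands in for whatever precision the I/O library emits):

```python
# Writing a float attribute as ASCII with limited precision and reading
# it back no longer reproduces the original value exactly.
original = 0.123456789
as_ascii = "%.6g" % original        # six significant digits: "0.123457"
round_trip = float(as_ascii)        # close to, but not equal to, original
```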
Handling of global attributes:
- The server will merge the separate global attributes for the SD and GR
interfaces with any file annotations into one set of global attributes.
These will then be available through any of the global attribute access
routines.
- If the client opens a constrained dataset (e.g., in SDstart), any global
attributes of the unconstrained dataset will not be accessible because the
constraint creates a "virtual dataset" which is a subset of the original
unconstrained dataset.
Todd Karakashian (Todd.K.Karakashian at
Isaac Henry (ike at
Jake Hamby (Jake.Hamby at
NASA/JPL April 1998