What's new for Hyrax 1.16.5
----------------------------
No updates.
What's new for Hyrax 1.16.4
----------------------------
1. Update the handling of escaping special characters per a NASA request.
The '\' and '"' characters are no longer escaped.
What's new for Hyrax 1.16.3
----------------------------
CF option:
1. Enhance the support for handling HDF-EOS2 swath multiple dimension map pairs.
The enhancement includes support for multiple swaths.
This fix solves the MOD09/MYD09 issue documented in HFRHANDLER-332.
Note for this enhancement:
1) Limitations
(1) The Latitude and Longitude must be 2 dimensional arrays.
(2) The number of dimension maps must be an even number in a swath.
(3) The handling of MODIS level 1B is still kept as the old way.
(4) When there is one pair of dimension maps in a swath and the
geo-dimensions defined in the dimension maps are only used
by 2-D Latitude and Longitude fields, we still keep the old way.
2) Variable/dimension name conventions
(1) The HDF-EOS2 file contains only one swath
The swath name is not included in the variable names.
For Latitude and Longitude, the interpolated Latitude and Longitude variables
are named "Latitude_1", "Latitude_2", "Longitude_1", "Longitude_2".
The dimension and other variable names are just modified by following the
CF conventions.
A DDS example can be found under https://github.com/OPENDAP/hdf4_handler/
Go to the following directory:
/bes-testsuite/h4.nasa.with_hdfeos2/MYD09.dds.bescmd.baseline
(2) The HDF-EOS2 file contains multiple swaths
The swath names are included in the variable and dimension names to
avoid name clashing.
The swath names are added as suffixes to variable and dimension names.
Examples:
"temperature_swath1", "Latitude_swath1", "Latitude_swath1_1", etc.
A DDS example can be found under https://github.com/OPENDAP/hdf4_handler/
Go to the following directory:
/bes-testsuite/h4.with_hdfeos2/swath_3_3d_dimmap.dds.bescmd.baseline
3) For applications that don't want to handle dimension maps, one can change
the BES key "H4.DisableSwathDimMap=false" in h4.conf.in to
"H4.DisableSwathDimMap=true".
2. Add a BES key to turn off the handling of HDF-EOS2 swath dimension map.
What's new for Hyrax 1.16.2
----------------------------
Clean up some compiler warnings.
What's new for Hyrax 1.16.1
----------------------------
Default option:
Fix the memory leak caused by handling vdata.
What's new for Hyrax 1.15.4
----------------------------
CF option:
1. Map the AIRS version 6 HDF-EOS Grid/Swath attributes to DAP2.
What's new for Hyrax 1.15.3
----------------------------
CF option:
1. Enhance the support for handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD/MYD21
product differs from that of other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
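For context, the CF conventions unpack data as value = scale_factor * raw + add_offset, while many MODIS products instead define value = scale_factor * (raw - add_offset). A minimal sketch of how the attributes can be rewritten so that CF-style unpacking gives the right result (illustrative Python only; cf_equivalent_attrs is a hypothetical helper, not the handler's actual C++ code):

```python
def cf_equivalent_attrs(scale, offset):
    """Convert MODIS-style attributes, where value = scale * (raw - offset),
    into CF-style equivalents, where value = scale_factor * raw + add_offset.
    Expanding: scale * (raw - offset) = scale * raw + (-scale * offset)."""
    return {"scale_factor": scale, "add_offset": -scale * offset}

# Hypothetical numbers for illustration only.
attrs = cf_equivalent_attrs(scale=0.02, offset=100.0)
raw = 15000
value_cf = attrs["scale_factor"] * raw + attrs["add_offset"]
value_modis = 0.02 * (raw - 100.0)
print(value_cf, value_modis)  # both 298.0
```

Both formulas agree once the attributes are adjusted, which is the point of the correction described above.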
What's new for Hyrax 1.15.0
----------------------------
CF option:
1. Enhance the support for handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD16A3
product differs from that of other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
What's new for Hyrax 1.14.0
----------------------------
CF option:
1. Enhance the support for handling the scale_factor and add_offset to
follow the CF conventions. The scale_factor and add_offset rule for the MOD16A2
product differs from that of other MODIS products. We make an exception
for this product only to ensure the scale_factor and add_offset follow
the CF conventions.
Default option (not developed by The HDF Group):
1. We fixed several coding issues discovered by the Coverity scan. We
also fixed quite a few memory leaks discovered by valgrind.
What's new for Hyrax 1.13.4
----------------------------
This is a maintenance release. No new features are added and no
outstanding bugs are fixed.
*****************************************************
********Special note about the version number********
*****************************************************
Since Hyrax 1.13.2, Hyrax treats each handler as a module,
so we stopped assigning an individual version number to the HDF4 handler.
Updated for version 3.12.2 (November 16, 2016, Hyrax 1.13.2)
1) Minor code re-arrangement by removing the DAP4 macro and other miscellaneous minor code updates.
2) The [Known Issues] section is updated to reflect the current findings: some issues are either not
appropriate or not worth the effort for the handler to handle.
[Known Issues]
1) For AMSR-E level-3 data, variables like SI_12km_N_18H_DSC don't have the CF scale/offset,
_FillValue, or units attributes. These attributes have to be retrieved from the product document.
Therefore, the plot generated directly from the OPeNDAP handler may not be correct.
If AMSR-E level-3 HDF4 products are still served by NASA data centers, Hyrax's NcML module
can be used with the HDF4 handler to provide the missing CF attributes and generate the correct
plot.
For more information on how to use NcML to provide the missing CF attributes,
please check http://hdfeos.org/examples/ncml.php
2) For the HDF-EOS2 swath, the HDF4 handler identifies the latitude and longitude coordinate variables
based on the variable names rather than via the CF units attribute since one cannot use the HDF-EOS2
library to add attributes for an HDF-EOS2 field. Therefore, the HDF-EOS2 swath field attributes
can only be added and retrieved by using the HDF4 APIs. It is difficult and inefficient for the
handler to use HDF4 APIs when handling the HDF-EOS2 swath fields. Given that all NASA HDF-EOS2
swath files we observed simply use name "Latitude" for the latitude field and "Longitude" for the
longitude field, we believe using variable names "Latitude" and "Longitude" to identify latitude
and longitude under swath geo-location fields is sufficient to serve NASA HDF-EOS2 products. Even
if the handler cannot identify the latitude and longitude as coordinate variables, the handler
will still generate the correct DDS and DAS and Data responses although they may not follow the
CF conventions.
3) DataCache may not work on macOS in rare cases.
Some HDF4 variables can be cached on disk. The cached file name uses the variable name as the key
to distinguish the files from each other. On a macOS system that is not configured to be case-sensitive
(use 'diskutil info' to check your OS), when the DataCache BES key is turned on, two legal variable
names like TIME and Time with the same shape may share the same cached file name. This may
make the data inconsistent.
Given that the DataCache feature is turned off by default and this feature is only used
on Linux, the handler just documents this as a known issue.
Updated for version 3.12.1 (April 29, 2016, Hyrax 1.13.1)
1) Improve the calculation of XDim and YDim for the sinusoidal projection.
2) Improve the handling of BES keys by moving the initialization of BES keys to the Handler constructor.
3) Bug Fixes:
(1) Remove the dot character (.) when the _FillValue is NaN.
In the previous version, a dot character (.) was added at the end of NaN. This prevented NetCDF Java clients from accessing the DAS.
(2) Correct the (0,360) longitude values when the HDF-EOS2 EnableEOSGeoCacheFile is turned on.
Some HDF-EOS2 products make the longitude range from 0 to 360. When the HDF-EOS2 lat/lon cache is on, the longitude values were not retrieved
correctly in the previous versions. This was discovered by testing an AMSR-E product when updating the testsuite for the handler.
[Known Issues]
1) DataCache may not work on macOS in rare cases.
Some HDF4 variables can be cached on disk. The cached file name uses the variable name as the key to distinguish the files from each other.
On a macOS system that is not configured to be case-sensitive (use 'diskutil info' to check your OS),
when the DataCache BES key is turned on, two legal variable names like TIME and Time with the same shape may share the same cached file name.
This may make the data inconsistent.
Given that the DataCache feature is turned off by default and this feature is only used on Linux,
the handler just documents this as a known issue.
2) Note for version 3.12.0: there are no new features or bug fixes for version 3.12.0. Only several warnings are removed.
The version was accidentally bumped from 3.11.7 to 3.12.0.
Updated for version 3.11.7 (15 September 2015, Hyrax 1.13.0, 1.12.2, 1.12.1)
New Features:
1) We added 1-D coordinate variables and CF grid_mapping attributes for HDF-EOS2 grid with Sinusoidal projection.
This request is from LP DAAC.
2) We added the DDS and DAS cache for AIRS version 6 products and the data cache for HDF4 data read via SDS interfaces.
Several other improvements were also made for AIRS. This request is from GES DISC.
Bug Fixes:
1) We fixed a bug caused by casting values via pointers between different datatypes.
This was first reported by NSIDC and then by OPeNDAP.
2) We fixed the missing-attribute issue for the HDF-EOS2 swath geo-location variables (variables under Geolocation Fields).
3) We also corrected the representation of the scale_factor and add_offset CF attributes for HDF-EOS2 swath geo-location variables.
******
You may need to read the information about BES keys in the file h4.conf.in to see if the default values
need to be changed for your service.
******
Updated for version 3.11.6 (15 November 2014, Hyrax 1.11.2, 1.11.1, 1.11.0, 1.10.1, 1.10.0)
In this version, we added the following features:
1) Implement an option to cache HDF-EOS2 grid latitude and longitude values to improve performance.
2) Implement an option not to pass file IDs, for compatibility with the NcML module.
3) Improve the access of DDS by using the file structure obtained in the stage of building DAS.
4) Add the CF support of AIRS version 6 level 2(swath).
5) Support the mapping of vgroup attributes to DAP.
6) Support the mapping of HDF-EOS2 swath and grid object(like vgroup) attributes to DAP.
Bug fixes:
1) Obtained the correct CF add_offset values for MOD/MYD08_M3 products.
2) Fixed the wrong dimension size when a dimension is unlimited.
3) Made vdata field attributes map to the correct DAS container.
NASA HDF4 and HDF-EOS2 products that are supported up to this release:
HDF-EOS2: AIRS, MODIS, some MISR, some Merra, some MOPITT
HDF4: TRMM version 6 and version 7, some CERES, some OBPG
Performance enhancement:
AIRS version 6 and MODIS O8_M3-like products
HDF-EOS2 grid lat/lon are calculated via cache
You may need to read the information about BES keys in the file h4.conf.in to see if the default values
need to be changed for your service.
Updated for version 3.11.5 (25 April 2014, Hyrax 1.9.7)
In this version, we fix the following datatype mapping issues:
1) We make HDF4 DFNT_CHAR arrays map to DAP strings for variables.
In the previous versions, DFNT_CHAR was mapped to DAP BYTE for variables.
This caused some NASA HDF4 vdata DFNT_CHAR arrays to be mapped to DAP BYTE arrays, which is not right.
The fix makes the vdata fields of some NASA MISR (MISR_AEROSOL etc.) products map correctly to DAP.
2) We fix the mapping of DFNT_INT8 attribute value to DAP.
In the previous versions, DFNT_INT8(signed 8-bit integer) is mapped to DAP BYTE(unsigned 8-bit integer).
This will cause misrepresentation when the value is < 0.
This fix will make some attribute values of some MODIS or MISR non-physical fields map correctly to DAP.
3) For the attribute _FillValue, we enforce that it is always a number even if the attribute datatype is
DFNT_CHAR or DFNT_UCHAR. In this release, we convert the string representation of a _FillValue to a number.
We also turn on the H4.DisableStructMetaAttr key ('true' is now the default setting in h4.conf.in). This
improves the performance of generating the DAS.
Updated for version 3.11.4 (1 April 2014, Hyrax 1.9.3)
This version improves I/O by reducing the number of HDF4/HDF-EOS2 file open /
close requests.
The default BES key settings are changed. See USAGE document.
Support for TRMM version 7 (3B43, 3B42, 3B31, 3A26, 3A25, 3A12, 3A11, 2B31,
2A25, 2A23, 2A21, 2A12, 1C21, 1B21, 1B11 and 1B01) and some TRMM level 3
version 6 (CSH and 3A46) products is added. The emphasis is on level 3 grid
products.
Some memory leaks detected by valgrind are fixed.
Error handling is greatly improved: resources are released properly when errors
occur.
The testsuite is updated accordingly.
All the updates are solely applied to the CF option.
[Known Issues]
Note: HFRHANDLER-??? refers to The HDF Group's JIRA ID.
HFRHANDLER-223: (U)CHAR8 type _FillValue attribute value conversion produces
a string representation, instead of a real value, when the dataset is of numeric
type.
HFRHANDLER-224: Some attributes of the Lat/Lon geolocation fields in an HDF-EOS2
swath are dropped in the DAS output.
Updated for version 3.11.3 (1 February 2014, Hyrax 1.9.2)
This version optimizes the handler for MOD08 M3 and AIRS version 6 products with
new BES keys. See the USAGE document for details on the new BES keys.
Updated for version 3.11.2 (18 October 2013)
This version changes BaseType::read().
Updated for version 3.11.1 (10 September 2013)
This version addresses some issues in the code base.
Updated for version 3.11.0 (30 May 2013)
The limitation of handling the MOD14 product is documented under USAGE. MOD14 is not
an HDF-EOS2 product, so having the MOD03 geo-location file will not change the
output of the HDF4 handler.
This version fixes item 7.6 of the "Known Issues" documented in the README for version
3.10.0. Search for "7.6" in this document for details.
Updated for version 3.10.1 (27 Nov 2012)
This version fixes a bug for reading Int16 type dataset from NASA MODIS 09
products.
Updated for version 3.10.0 (30 Sep 2012)
1. Introduction
The HDF4 handler version 3.10.0 is a major update for the CF support. The
README for this version is *solely* for the CF support.
The README consists of several sections: Usage, New Features, Bug fixes,
Code Improvement, Limitations, and Known Issues.
2. Usage
The current version uses BES Keys to make the configuration of the CF option
more flexible and easier. The current version also doesn't require
the HDF-EOS2 library to support the CF conventions for non-HDFEOS2 HDF4 files.
Check the USAGE file for detailed information.
3. New Features
3.1 Testsuite Expansion
If you configure the handler with '--with-hdfeos2', the 'make check' will
test a new set of HDF-EOS2 test files that are added. Please make sure to
clean everything with 'make distclean' if you want to test with a different
configuration option each time.
Source code is also provided for the new set of HDF-EOS2 test files.
3.2 Naming conventions follow HDF5 handler naming conventions for consistency.
Again, this is for the CF support only.
3.2.1 Non-CF-compliant characters
Any non-CF-compliant character in object and attribute names will be
changed to '_'. There is only one exception: if the first character of a name
is '/', the handler will ignore the first '/'. This is a request from NASA data
centers since it means that a path is prefixed to the name. The first '/'
is ignored for better readability.
The object names include all HDF-EOS2 and HDF4 objects supported by the CF
option. They are HDF-EOS2 swath fields and grid fields, HDF4 SDS, HDF4 Vdata,
HDF4 Vdata fields and corresponding attributes.
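The renaming rule above can be sketched as follows (an illustrative Python sketch only; the handler itself is written in C++, and cf_safe_name is a hypothetical name):

```python
import re

def cf_safe_name(name):
    """Sketch of the CF renaming rule described above: drop a single
    leading '/', then replace every character that is not a letter,
    digit, or underscore with '_'."""
    if name.startswith("/"):
        name = name[1:]  # the leading '/' is ignored for readability
    return re.sub(r"[^A-Za-z0-9_]", "_", name)

print(cf_safe_name("/Data Fields/Total-Counts(A)"))  # Data_Fields_Total_Counts_A_
print(cf_safe_name("temp_1"))                        # temp_1 (already compliant)
```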
3.2.2 Multiple HDF-EOS2 swath and grid
For multiple HDF-EOS2 swath and grid files, since we have only found a bunch of
AIRS and MODIS grid products in this category, and so far the field names under
these grids can be distinguished by themselves, we simply keep the original
field names. Grid names are not prefixed.
For example, in AIRS.2002.08.24.L3.RetStd_H008.v4.0.21.0.G06104133343.hdf
there are two fields with similar names:
1) Field under vgroup "ascending": TotalCounts_A
2) Field under vgroup "descending": TotalCounts_D
As you can see, the "TotalCounts" field in each vgroup can be distinguished
by its field name anyway. Prefixing a grid name to the field name, such as
"ascending_TotalCounts_A" and "descending_TotalCounts_D", makes the field name
difficult to read.
3.2.3 Non-HDFEOS2 SDS objects
For pure HDF4 SDS objects, the handler prefixes the vgroup path to SDS object
names.
3.2.4 SDS objects in Hybrid HDF-EOS2 files
For SDS objects in hybrid HDF-EOS2 files, to make these SDS objects
distinguishable from the HDF-EOS2 grid or swath field names, the handler adds
"_NONEOS" to the object name. This is based on the following fact:
these added SDS objects often share the same name as the HDF-EOS2 fields.
For example, band_number can be found both under an HDF-EOS2 swath and under the
root vgroup. Thus, the name clashing needs to be resolved. Adding "_NONEOS" is a better
way to handle the name clashing than simply adding a "_1".
3.2.5 Vdata object name conventions
The vdata name conventions are subject to change. We will evaluate the current
name conventions after hearing feedback from users. Since the main NASA HDF4
object is SDS, hopefully these vdata name conventions are not critical.
3.2.5.1 Vdata object mapped to DAP array
Since we would like to make sure that users can easily figure out the DAP
variables mapped from vdata, we use the following name conventions.
For example, a vdata "level1B" has a field "scan" under the group "g1".
The mapped DAP variable name is "vdata_g1_level1B_vdf_scan". The "vdata_"
prefix tells the user that this is an original vdata. The "_vdf_" tells the
user that the string following "_vdf_" is the vdata field name.
The handler also adds the following attributes to the DAS to serve a similar
purpose:
std::string VDdescname = "hdf4_vd_desc";
std::string VDdescvalue = "This is an HDF4 Vdata.";
std::string VDfieldprefix = "Vdata_field_";
These attributes will be generated if the corresponding BES key is turned on.
See USAGE for details.
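The naming scheme described above can be sketched as (a hypothetical Python illustration; vdata_dap_name is not a real handler function):

```python
def vdata_dap_name(group_path, vdata_name, field_name):
    """Build the DAP variable name for a vdata field:
    'vdata_' + <group path> + '_' + <vdata name> + '_vdf_' + <field name>."""
    return "vdata_{}_{}_vdf_{}".format(group_path, vdata_name, field_name)

# The example from the text: vdata "level1B", field "scan", group "g1".
print(vdata_dap_name("g1", "level1B", "scan"))  # vdata_g1_level1B_vdf_scan
```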
3.2.5.2 Vdata fields mapped to DAP attributes
In the current version, if the number of vdata records is less than 10,
the vdata fields will be mapped to DAP attributes. This is based on the fact
that a small number of vdata records mostly represents metadata (like attributes),
so this kind of mapping may be more reasonable. The attribute names
that are mapped from vdata fields simply follow the convention below:
<vdata path>_<vdata name>_<vdata field name>
The above name convention simply follows the handling of SDS. The
rationale is that distinguishing between SDS and vdata fields in DAP
attributes (DAS) is not as critical as distinguishing between SDS and vdata
fields in DAP arrays (DDS). However, this may cause user confusion.
We will evaluate this approach in the future based on user feedback.
3.3 Name clashings
Name clashing is handled similarly to the HDF5 handler. Only a name that
actually clashes will be changed to a different name. This is per
NASA users' request. Previously, all corresponding names would be changed if
a name clash was found in the file.
See 3.2 for the details about the naming conventions.
3.4 New BES Keys
Six new BES keys are added.
H4.EnableCF
H4.EnableMODISMISRVdata
H4.EnableVdata_to_Attr
H4.EnableCERESMERRAShortName
H4.DisableVdataNameclashingCheck
H4.EnableVdataDescAttr
H4.EnableCF is the most important key. It must be set to true if the CF
option is used. The other five keys are valid only if H4.EnableCF is set to
true.
For more information, check the USAGE file.
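For reference, a BES configuration fragment using these keys might look like the following (a sketch only: H4.EnableCF must be true for the CF option, but the other values here are illustrative, not the shipped defaults; check the USAGE file and h4.conf.in for the actual defaults):

```
H4.EnableCF=true
H4.EnableMODISMISRVdata=false
H4.EnableVdata_to_Attr=true
H4.EnableCERESMERRAShortName=true
H4.DisableVdataNameclashingCheck=true
H4.EnableVdataDescAttr=false
```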
3.5 Dimension maps and scale/offset handling for MODIS swath
3.5.1 Interpolating more fields according to the dimension maps
Previously, we only supported the interpolation of latitude and longitude.
In this release, we add support for other fields, such as solar_zenith,
if a dimension map is used to store the field. Additionally, we also provide an
option to directly read these fields (latitude, longitude, solar zenith, etc.)
from MOD03 or MYD03 files distributed by LAADS. For more information on this
option, check the USAGE file.
3.5.2 Enhance the scale/offset handling for MODIS level 1B fields
According to discussions with NASA users, we found that for fields such
as EV_1KM_RefSB and EV_1KM_Emissive, values greater than 65500 are special
values that should not be considered part of the real data signal. So in this
release, we keep values between 65500 and 65535 unchanged when applying the scale and
offset function. Furthermore, we also calculate valid_min and valid_max for
EV_???_RefSB and EV_???_Emissive to assure that users can plot data easily
with Panoply and IDV.
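The special-value handling described above can be sketched as (illustrative Python only; unpack_l1b is a hypothetical name, and the MODIS L1B equation value = scale * (raw - offset) is assumed):

```python
def unpack_l1b(raw, scale, offset, special_min=65500, special_max=65535):
    """Apply the MODIS L1B scale/offset equation, but keep values in
    [65500, 65535] unchanged: they are special values, not real data."""
    if special_min <= raw <= special_max:
        return raw                    # special/fill value: pass through
    return scale * (raw - offset)     # assumed L1B unpacking equation

print(unpack_l1b(65535, 0.01, 0.0))  # 65535 (kept as-is)
print(unpack_l1b(1000, 0.01, 0.0))   # 10.0
```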
3.6 Dimensions in MERRA products
In this release, the handler uses HDF-EOS2 latitude and longitude fields as
the coordinate variables. The added HDF4 SDS XDim and YDim are used inside the
file as coordinate variables.
3.7 Vdata mapping
The new handler can handle vdatas robustly, either as attributes in the DAS or
as arrays in the DDS. Review section 3.2.5.2 for details.
Several BES keys are also provided for users to have options on how to
generate vdata DAP output. See USAGE for details.
Vdata subsetting is handled robustly except for some HDF-EOS2 Vdata objects.
See section 7.5 for the known issue.
3.8 Handling general HDF4 files with the CF option
The previous release listed the NASA HDF4 products we handled specially to
follow the CF convention. For other HDF4 products (we call them general HDF4
files), we do the following:
A. Make the object and attribute names follow the CF conventions
B. Follow the default handler's mapping to map SDS. In this way, some SDS
objects that have dimension scales can be visualized by Panoply and IDV.
C. Map Vdata according to 3.7.
4. Bug Fixes
4.1 Attribute names are cleaned up. If an attribute name contains non-alphanumeric
characters like '(' or '%', they are replaced with '_' to meet the
CF naming conventions.
4.2 Products that use the SOM projection are handled correctly and Panoply can
display MISR data on a block-by-block basis. However, please read 7.11 for
the known issue regarding the interpretation of the final visualization output.
4.3 We continued correcting attributes related to scale/offset. For example,
the handler corrected the "SCALE_FACTOR" and "OFFSET" attributes for the AMSR_E L2A
product by renaming them to "scale_factor" and "add_offset". It also cleaned up
any extra spaces in attribute names. We also corrected the rule on how to
apply scale/offset equations in MODIS products (e.g., the Solar_Zenith dataset).
Finally, the handler renames Number_Type attributes to Number_Type_Orig if a
data field's type is changed by the handler's application of scale/offset (e.g.,
LPDAAC MOD11_L2 LST).
4.4 Strange ECS metadata sequences are handled. Some MODIS products have
the metadata name sequence "coremetadata.0, coremetadata.0.1, ..." instead of
"coremetadata.0, coremetadata.1, ..."
4.5 The mismatched valid_range attribute is removed from the CERES ES4 product.
Panoply fails to visualize the product if the valid_range attribute in the
lat/lon dataset doesn't match the calculated coordinate variable values
returned by the handler. Thus, the handler removes the valid_range attribute from
coordinate variables.
4.6 There was a bug regarding subsetting the vdata field when the stride is
greater than 1. It was fixed in this release. We also found a similar bug
inside the HDF-EOS2 library regarding the subsetting of 1-D HDF-EOS2 swath.
This should be fixed in the next HDF-EOS2 release.
5. Code Improvement
5.1 Refactored code
There was huge code overlap between hdfdesc.cc and HE2CF.cc. Those code
lines are combined and moved to HDFCFUtil.cc.
5.2 Error handling and debugging
Error handling is improved to ensure that all opened HDF4 API handles
are closed when an error occurs. DEBUG macros are replaced with BESDEBUG or BESLog.
6. Limitations
Again, all the limitations here are for the CF support (when the CF option is
enabled) only.
6.1 Unmapped objects
The handler ignores the mapping of image, palette, annotation, vgroup
attributes, and HDF-EOS2 swath group and grid group attributes. Note that HDF4 global
attributes (attributes from SD interfaces), HDF4 SDS objects, HDF4 vdata
attributes, HDF-EOS2 global (ECS metadata etc.) and field attributes are mapped
to DAP.
6.2 The handler doesn't handle unlimited dimensions.
The result may be unexpected. (e.g., some fields in the CER_ES8_TRMM-PFM_Edition2
product cannot be handled.)
6.3 Non-printable vdata (unsigned) character type data will not appear in the DAS.
If a vdata char type column has a non-printable value like '\\005', it will not
appear in the DAS when vdata is mapped to attributes because the BES key,
H4.EnableVdata_to_Attr, is enabled. See the file USAGE for the usage of the key.
6.4 Vdata with string type is handled on a character-by-character basis as a 2-D
array.
For example, when the vdata is a string of characters like
"2006-10-01T16:17:12.693310Z",
the handler represents it as
Byte vdata_PerBlockMetadataTime_vdf_BlockCenterTime[VDFDim0_vdata_PerBlockMetadataTime_vdf_BlockCenterTime = 2][VDFDim1_vdata_PerBlockMetadataTime_vdf_BlockCenterTime = 28] = {{50, 48, 48, 54, 45, 49, 48, 45, 48, 49, 84, 49, 54, 58, 49, 55, 58, 49, 50, 46, 54, 57, 51, 51, 49, 48, 90, 0},{50, 48, 48, 54, 45, 49, 48, 45, 48, 49, 84, 49, 54, 58, 49, 55, 58, 51, 51, 46, 52, 51, 56, 57, 50, 54, 90, 0}};
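On the client side, such a 2-D byte representation can be turned back into strings, for example (an illustrative Python sketch, not part of the handler):

```python
def rows_to_strings(byte_rows):
    """Join each row of byte codes into a string, stopping at the
    NUL (0) terminator if one is present."""
    out = []
    for row in byte_rows:
        chars = []
        for b in row:
            if b == 0:
                break
            chars.append(chr(b))
        out.append("".join(chars))
    return out

# First row of the example above.
row = [50, 48, 48, 54, 45, 49, 48, 45, 48, 49, 84, 49, 54, 58, 49, 55,
       58, 49, 50, 46, 54, 57, 51, 51, 49, 48, 90, 0]
print(rows_to_strings([row]))  # ['2006-10-01T16:17:12.693310Z']
```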
7. Known Issues
7.1 CER_SRBAVG3_Aqua-FM3-MODIS_Edition2A products have many blank spaces in the
long_name attribute.
These products have datasets with a really long long_name attribute of size 277.
However, most of it is blank spaces in the middle. For example, you'll see
DAS output like below:
String long_name "1.0 Degree Regional Monthly Hourly (200+ blank spaces)
CERES Cloud Properties";
String units "unitless";
This is not a bug in the handler. The data product itself has such a long
attribute.
7.2 Longitude values for products that use the LAMAZ projection will differ on the i386
platform.
For i386 machines, the handler will generate longitude values different from
x86_64 machines for products that use the Lambert azimuthal projection (LAMAZ)
near the North Pole or South Pole. For example, the handler will return 0
on a 64-bit machine while it returns -135 on a 32-bit machine for the middle
point of longitude in the NSIDC AMSR_E_L3_5DaySnow_V09_20050126.hdf product:
< Float64 Longitude[YDim = 3][XDim = 3] = {{-135, 180, 135},{-90, 0, 90},
{-45, 0, 45}};
---
> Float64 Longitude[YDim = 3][XDim = 3] = {{-135, -180, 135},{-90, -135, 90},
{-45, -1.4320576915337e-15, 45}};
This is due to the calculations in the current GCTP library that the HDF-EOS2
library uses. However, this will not affect the final visualization because,
at the North Pole or South Pole, the longitude can be anything from -180 to 180.
So depending on floating-point accuracy, the handler may get different results for
the longitude of this pixel from GCTP. The longitude value is irrelevant at the North
Pole or South Pole for visualization clients.
7.3 IDV can't visualize the SOM projection.
MISR products that use the SOM projection have 3-D lat/lon. Although Panoply can
visualize them, IDV cannot. The handler doesn't treat the 3rd dimension as
a separate coordinate variable, and the coordinate attribute on a dataset includes
only the latitude and longitude variable names.
7.4 Vdata is mapped to attributes if there are 10 or fewer records.
For example, the DAS output of the TRMM 1B21 data will show vdata as an attribute:
DATA_GRANULE_PR_CAL_COEF {
String hdf4_vd_desc "This is an HDF4 Vdata.";
Float32 Vdata_field_transCoef -0.5199999809;
Float32 Vdata_field_receptCoef 0.9900000095;
Float32 Vdata_field_fcifIOchar 0.000000000, 0.3790999949, 0.000000000,
-102.7460022, 0.000000000, 24.00000000, 0.000000000, 226.0000000, 0.000000000,
0.3790999949, 0.000000000, -102.7460022, 0.000000000, 24.00000000, 0.000000000,
226.0000000;
}
This is part of the vdata handling convention, not an issue. However, we
list it here for the user's convenience. See 3.7 for more information.
7.5 Vdata subsetting in HDF-EOS2 data products may not work.
Subsetting HDF-EOS2 Vdata with a large step index (e.g. a_vdata[0:999:999])
may not work due to a bug in the HDF-EOS2 library. The bug has been reported
to the HDF-EOS2 developers and should be fixed in the next HDF-EOS2 release.
Reading the entire Vdata is OK.
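For reference, the OPeNDAP constraint syntax is [start:stride:stop] with an
inclusive stop, unlike Python's [start:stop:step]. A small sketch of which
indices a_vdata[0:999:999] selects:

```python
# OPeNDAP array subsetting uses [start:stride:stop] (stop inclusive).
# This helper lists the indices such a constraint selects.

def opendap_indices(start, stride, stop):
    # range() excludes its end point, so add 1 to make stop inclusive
    return list(range(start, stop + 1, stride))

print(opendap_indices(0, 999, 999))  # [0, 999]
```

So a_vdata[0:999:999] asks for just the first and the thousandth records,
which is the kind of large-stride request that triggers the library bug.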
7.6 DDX generation will fail on the PO.DAAC AVHRR product.
For example, the handler can't generate DDX output for the NASA JPL PO.DAAC
AVHRR product 2006001-2006005.s0454pfrt-bsst.hdf. Please see the OPeNDAP ticket #1930
for details.
7.7 It is possible to have name clashing between dimension names and variable
names. Currently, the handler checks for name clashes among the variables
and, separately, among the dimensions, but not across the combined set.
Here's the reason: many good COARDS files have the following layout:
lat[lat=180]
lon[lon=360]
If we checked for name clashes across the combined set, such well-formed
files would always clash, and depending on the code flow, the final layout
might become something like:
lat_1[lat_1=180]
lon_1[lon_1=360].
These are absolutely bad names for normal users. If we don't consider the
combined set, a name clash caused by renaming a conflicting coordinate
variable is very rare, so we will not do this until we find a typical
product that actually causes a problem.
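A hypothetical sketch of the suffix renaming scheme behind names like lat_1
and lon_1 (the handler's real logic differs; note that a renamed name could
itself clash with an existing name, which a real implementation must also
guard against):

```python
# Illustrative suffix-based renaming to make a list of names unique.
# The "_1", "_2" scheme mirrors the lat_1/lon_1 example in the text.

def make_unique(names):
    counts = {}
    out = []
    for n in names:
        if n in counts:
            counts[n] += 1
            out.append(f"{n}_{counts[n]}")  # clash: append a numeric suffix
        else:
            counts[n] = 0
            out.append(n)                   # first occurrence kept as-is
    return out

print(make_unique(["lat", "lon", "lat"]))  # ['lat', 'lon', 'lat_1']
```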
7.8 long_name attribute for <variable name>_NONEOS
The handler generates a long_name attribute to record the original variable
name for SDS objects after renaming them with the _NONEOS suffix. Those
_NONEOS variables appear in hybrid files --- files that have additional HDF4
objects written by the HDF4 APIs on top of existing HDF-EOS2 files. This
addition of the long_name attribute cannot be turned off with the
H4.EnableVdataDescAttr=false key described in the USAGE document.
7.9 Empty dimension name creates a variable with empty name.
The HDF-EOS2 library allows creating a dataset with no dimension name.
In such a case, the handler generates a fake dimension variable without a
dataset name in the DDS, like below:
Int32 [4];
Since there's no dataset name, reading the data will also fail.
7.10 Special CF handlings for some products
The handler doesn't correct scale_factor/add_offset/_FillValue for every HDF4
product to make it follow CF conventions.
For example, the handler doesn't apply the scale and offset function (a log
equation) for OBPG CZCS Level 3 products.
The handler doesn't insert or correct the fill value attribute for some OBPG
products, so their plots may include fill values when visualized. An OPeNDAP
server administrator can fix these easily with the NcML handler.
The PO.DAAC AVHRR product 2006001-2006005.s0454pfrt-bsst.hdf has "add_off" and
doesn't specify a fill value in an attribute. Therefore, the final
visualization image will not be correct for such a product.
7.11 Bit shifting required in MISR product is not handled.
Some datasets in MISR products pack two datasets into one, which requires
bit shifting for the correct interpretation of the data. The handler doesn't
perform this operation, so the final visualization image may not be correct.
(e.g., Blue Radiance in MISR_AM1_GRP_ELLIPSOID_GM_P117_O058421_BA_F03_0024.hdf)
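For the radiance fields, the packing interleaves a scaled radiance with a
2-bit data-quality indicator (RDQI) in one 16-bit word. A sketch of the
unpacking, assuming the top-14-bits/low-2-bits layout described in the MISR
product documentation (verify the layout for your product before relying on
these constants):

```python
# Hypothetical unpacking of a MISR "Radiance/RDQI" 16-bit word.
# Assumed layout: top 14 bits = scaled radiance digital number,
# low 2 bits = RDQI quality flag (0 best .. 3 unusable).

def unpack_radiance_rdqi(word):
    radiance_dn = word >> 2   # drop the 2 quality bits
    rdqi = word & 0x3         # keep only the 2 quality bits
    return radiance_dn, rdqi

dn, q = unpack_radiance_rdqi(0b0000000001100110)
print(dn, q)  # 25 2
```

The handler serves the packed word as-is, so a client sees the combined
value rather than the radiance alone.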
7.12 Subsetting through Hyrax HTML interface will not work on LAMAZ products.
You cannot subset the Latitude_1 and Longitude_1 datasets using the HTML
form. Checking the checkbox will not insert any array subscript text into
the text box. Please see the OPeNDAP ticket #2075 for details.
Kent Yang (myang6@hdfgroup.org)
Updated for version 3.9.4 (19 Jan, 2011)
If your system is non-i386, such as a 64-bit architecture, please read the
IMPORTANT NOTE in the INSTALL document regarding the '--with-pic' configuration
option (i.e., the '-fPIC' compiler option). You need to build both the HDF4
and HDF-EOS2 libraries with the '--with-pic' option if you encounter a
linking problem.
1. Fixed the bug in uint16/uint32 type attribute handling.
2. The following bug fixes apply only to the --with-hdfeos2 configuration option.
2.1 Corrected the handling of scale/offset for MODIS products because the
MODIS scale/offset equations are quite different from the CF standard.
There are three different "scale_factor" and "add_offset" equations in MODIS
data files:
1) For MODIS L1B; MODIS 03, 05, 06, 07, 08, 09A1, 17 and ATML2 level 2 swath
products; and MCD43B4, MCD43C1, MOD and MYD 43B4 level 3 grid files, the
scale/offset equation is
correct_data_value = scale * (raw_data_value - offset).
2) For MODIS 13, MODIS 09GA, and MODIS 09GHK, the scale/offset equation is
correct_data_value = (raw_data_value - offset) / scale_factor.
3) For MODIS 11 level 2 swath products, the equation is
correct_data_value = scale * raw_data_value + offset.
We decide the type based on the group name.
If the group name contains "L1B", "GEO", "BRDF", "0.05Deg", "Reflectance",
"MOD17A2", "North", "mod05", "mod06", "mod07", "mod08", or "atm12",
it is type 1.
If the group name contains "VI", "1km_2D", or "L2g_2d", it is type 2.
If the group name contains "LST", it is type 3.
For type 1, use correct_value = scale * (raw_value - offset).
For type 2, use correct_value = (raw_value - offset) / scale.
For type 3, use correct_value = raw_value * scale + offset.
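The three equations can be sketched as follows. The product examples in the
comments come from the lists above; the numeric values are illustrative:

```python
# Sketch of the three MODIS scale/offset equations described above.

def modis_correct(raw, scale, offset, equation):
    """Apply one of the three MODIS scale/offset equations.

    equation 1: correct = scale * (raw - offset)   (e.g. MODIS L1B)
    equation 2: correct = (raw - offset) / scale   (e.g. MODIS 13)
    equation 3: correct = raw * scale + offset     (e.g. MODIS 11)
    """
    if scale == 1 and offset == 0:   # identity case, skipped for performance
        return float(raw)
    if equation == 1:
        return scale * (raw - offset)
    if equation == 2:
        return (raw - offset) / scale
    if equation == 3:
        return raw * scale + offset
    raise ValueError("unknown equation type")

# e.g. a raw MODIS 11 value of 5000 with scale 0.02, offset 0:
print(modis_correct(5000, 0.02, 0, 3))  # 100.0
```

Note how a generic CF client, which always computes raw * scale + offset,
would produce wrong values for equation types 1 and 2.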
For the recalculation of MODIS values, one of the following conditions must
be met:
"radiance_scales" and "radiance_offsets" attributes are available, or
"reflectance_scales" and "reflectance_offsets" attributes are available, or
"scale_factor" and "add_offset" attributes are available.
If any of these conditions is met, the recalculation is applied; otherwise
nothing happens. If the scale is 1 and the offset is 0, we skip the
recalculation to improve performance.
Data values are adjusted based on their type. If the "scale_factor" and
"add_offset" attributes are not available, the "radiance_scales" and
"radiance_offsets" attributes, or the "reflectance_scales" and
"reflectance_offsets" attributes, are used instead.
After the adjustment, the data type is uniformly converted to float so that
no precision is lost even if the original data type is integer. The
"valid_range" attribute is removed accordingly, as it no longer reflects the
actual values.
Since some netCDF visualization tools apply the linear scale/offset equation
to the data values whenever the CF "scale_factor" and "add_offset" attributes
appear, these two attributes are renamed "orig_scale_factor" and
"orig_add_offset" respectively to prevent a second adjustment.
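To illustrate the double adjustment the rename avoids, a minimal sketch
(the attribute values are made up):

```python
# The handler already served a corrected value; the attributes were
# renamed so a CF client finds no "scale_factor"/"add_offset" to apply.

handler_output = 100.0  # value already corrected by the handler
attrs = {"orig_scale_factor": 0.02, "orig_add_offset": 0.0}

# A CF client only applies scaling when the exact CF names are present:
value = handler_output
if "scale_factor" in attrs or "add_offset" in attrs:
    value = value * attrs.get("scale_factor", 1.0) + attrs.get("add_offset", 0.0)
print(value)  # 100.0 -- no second adjustment
```

Had the attributes kept their CF names, the client would have scaled the
already-corrected value a second time.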
2.2 Latitude and longitude are provided for HDF-EOS2 grid files that use the
Space-Oblique Mercator (SOM) and Lambert Azimuthal Equal Area (LAMAZ)
projections.
We added support for LAMAZ projection data such as MOD29 from NSIDC.
For grid files using the HDF-EOS2 LAMAZ projection, the latitude and
longitude values retrieved from the HDF-EOS2 library include infinite
numbers. Those infinite numbers are removed and replaced with new values
through interpolation. Therefore, an HDF-EOS2 grid file with the LAMAZ
projection can be served correctly.
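The handler's actual interpolation works over 2-D lat/lon grids and is more
involved; a minimal 1-D sketch of the idea of replacing non-finite values
with interpolated neighbors:

```python
import math

# Replace non-finite (inf/nan) values with linear interpolation between
# the nearest finite neighbors; edges are filled with the nearest value.

def fill_nonfinite(values):
    out = list(values)
    n = len(out)
    for i, v in enumerate(out):
        if not math.isfinite(v):
            lo = next((j for j in range(i - 1, -1, -1) if math.isfinite(out[j])), None)
            hi = next((j for j in range(i + 1, n) if math.isfinite(out[j])), None)
            if lo is not None and hi is not None:
                t = (i - lo) / (hi - lo)
                out[i] = out[lo] + t * (out[hi] - out[lo])
            elif lo is not None:
                out[i] = out[lo]   # trailing edge: repeat last finite value
            elif hi is not None:
                out[i] = out[hi]   # leading edge: repeat first finite value
    return out

print(fill_nonfinite([60.0, float("inf"), 70.0]))  # [60.0, 65.0, 70.0]
```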
2.3 Fixed a memory release error that occurs on iMac (OS X Lion) with an STL
string-to-string map.
2.4 For OBPG L3m products, two additional CF attributes, "scale_factor" and
"add_offset", are added if their scaling function is linear. The values of
these two attributes are copied directly from file attributes, "Slope" and
"Intercept".
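A minimal sketch of this copy-through, with made-up Slope/Intercept values;
a CF client then computes physical = scale_factor * raw + add_offset:

```python
# The CF attributes are copied verbatim from the file-level "Slope"
# and "Intercept" attributes of a linearly scaled OBPG L3m product.
# Attribute values here are illustrative, not from a real file.

file_attrs = {"Slope": 0.000717185, "Intercept": -2.0}

cf_attrs = {
    "scale_factor": file_attrs["Slope"],
    "add_offset": file_attrs["Intercept"],
}

raw = 5000
physical = cf_attrs["scale_factor"] * raw + cf_attrs["add_offset"]
print(physical)  # about 1.585925
```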
2.5 Known Bugs:
1) Attribute names are not sanitized if they contain non-CF-compliant
characters such as '('. The NSIDC MOD29 data product is a good example.
2) If different scale/offset rules should be applied to different datasets,
as in the MOD09GA product, the current handler cannot handle them
properly. We apply the scale/offset rule globally on a per-file basis,
not per individual dataset, so CF visualization clients like IDV and
Panoply will not display some datasets correctly: they apply the
scale/offset rule according to the CF convention, which doesn't match
MODIS's scale/offset rule.
Updated for 3.9.3 (21 Aug. 2011)
Fixed a lingering issue with the processing of HDF-EOS attributes when
the handler is not compiled with the HDF-EOS library. The handler was
returning an error because those attributes were not parsing
correctly. In the end, it appears there were two problems. The first
was that some files contain slightly malformed EOS attributes: the
'END' token is missing the 'D'. The second was that this triggered a
bogus error message.
Fixed the nagging 'dim_0' bug where some files with variables that use
similar names for arrays trigger a bug in the code that merges the DAS
into the DDS. The result was that sometimes the code tried to add a
dim_0 attribute to a variable that already had one. This was fixed by
correcting an error in the way the STL's string::find() method was
used.
Updated for 3.9.2 (17 Mar. 2011)
In this patch, we added the following three features:
1. We added the mapping of SDS objects that were added with HDF4 APIs to
an HDF-EOS2 file. These SDS objects are normally NOT physical fields,
so they are not supposed to be plotted by Java tools such as IDV and Panoply.
Their attributes and values may still be useful for end users.
2. We also fixed a bug in handling MERRA data. In the previous release,
the unit of time was not handled correctly. This release fixes the bug,
under the condition that the file name of the MERRA data must start with
MERRA.
3. We also enhanced the support for mapping HDF4 files that use HDF4
SDS dimension scales. In particular, we made a patch specifically for
PO.DAAC's AVHRR files. Now, with enough heap space, IDV can visualize
AVHRR files via OPeNDAP.
What we haven't done:
1. We haven't mapped the Vdata objects added with HDF4 APIs to an
HDF-EOS2 file.
2. We haven't handled the plotting of vertical profile files (such
as MOP Level 2 data). More investigation is needed on how IDV can
handle them.
3. Other limitations listed in Section III of 3.9.1 that are not
addressed above.
Kent Yang (myang6@hdfgroup.org)
Updated for 3.9.1 (14 Sep. 2010)
In this release, we greatly enhanced support for accessing NASA
HDF-EOS2 and HDF4 products. The full note for 3.9.1 includes three
sections.
Section I. Configuration
The handler is enhanced to support access to many NASA HDF-EOS2
products and some NASA pure HDF4 products by many CF-compliant
visualization clients such as IDV and Panoply. To take advantage of
this feature, you MUST use the HDF-EOS2 library and configure with the
following option:
./configure --with-hdf4=<Your HDF4 library path>
--with-hdfeos2=<Your HDF-EOS2 library path>
--prefix=<Your installation path>
Without the "--with-hdfeos2" option, the configure script builds the
default HDF4 OPeNDAP handler. The HDF4 handler with the default options
can NOT make the NASA HDF-EOS2 products and some NASA pure HDF4 products
work with CF-compliant visualization clients.
Some variable paths are pretty long (>15 characters). The COARDS
conventions require that the number of characters in a field not exceed
15, so the above configuration option may cause problems for some
OPeNDAP clients that still follow the COARDS conventions. To compensate,
we provide a configuration option that shortens each name so that it
doesn't exceed 15 characters. To address the potential name clashing
issue, both options may change some variable names so that the variable
names in the OPeNDAP output are unique. To best preserve the original
variable names, we recommend not using the --enable-short-name option
unless necessary. To configure the handler with the short-name option,
do the following:
./configure --with-hdf4=<Your HDF4 library path>
--with-hdfeos2=<Your HDF-EOS2 library path>
--prefix=<Your installation path> --enable-short-name
To find the information on how to build the HDF-EOS2 library, please
check
http://hdfeos.org/software/hdfeos.php#ref_sec:hdf-eos2
To build RPMs by yourself, check the directory 'build_rpms_eosoption'.
Section II. NASA products that are supported to be accessed via Java
and other OPeNDAP visualization clients
The following NASA HDF-EOS2 products are tested with IDV and Panoply,
check the Limitation section for the limitations:
1). NASA GES DISC
AIRS/MERRA/TOMS
2). NASA LAADS/LP DAAC/NSIDC
Many MODIS products
3). NASA NSIDC
AMSR_E/NISE products
4). NASA LaRC
MISR/MOPITT/CERES-TRMM
The following NASA special HDF4 products are tested with IDV and Panoply;
check the Limitation section for the limitations:
1). NASA GES DISC
TRMM Level 1B, Level 2B Swath
TRMM Level 3 Grid 42B and 43B
2). OBPG(Ocean Color)
SeaWiFS/MODIST/MODISA/CZCS/OCTS level 2 and level 3m(l3m)
3). Some LaRC CERES products
CER_AVG_Aqua-FM3-MODIS,CER_AVG_Terra-FM1-MODIS
CER_ES4_Aqua-FM3_Edition1-CV or similar one
CER_ISCCP-D2like-Day_Aqua-FM3-MODIS or similar one
CER_ISCCP-D2like-GEO_ or similar one
CER_SRBAVG3_Aqua or similar one
CER_SYN_Aqua or similar one
CER_ZAVG or similar one
Section III. Limitations
1. Visualization clients and http header size
1). Visualization slowness, or even failures, with IDV or Panoply
clients for big fields. We found that for a big variable
array (>50 MB), visualization of the variable is very slow.
Sometimes IDV or Panoply may even generate an "out of memory" error.
2). Some NASA HDF files (some CERES files, e.g.) include many (a few
hundred) fields, and the field names are long. This can cause the
HTTP header to exceed the default maximum HTTP header size, and a
failure will occur. To serve those files, please increase your maximum
HTTP header size by adding maxHttpHeaderSize="819200" to the
<Connector port="8080" protocol="HTTP/1.1" ...> element in your
server.xml.
2. HDF-EOS2 files
1) HDF-EOS2 Lambert Azimuthal Equal Area (LAMAZ) projection grid
For LAMAZ projection data, the latitude and longitude values retrieved
from the HDF-EOS2 library include infinite numbers, so an HDF-EOS2
grid file with the LAMAZ projection cannot be served correctly.
2) Latitude and longitude values that don't follow CF conventions
2.1) Missing (fill) values inside latitude and longitude fields
Except for the HDF-EOS2 geographic (also called equidirectional,
equirectangular, or equidistant cylindrical) projection, clients may
NOT display the data correctly.
2.2) 3-D latitude and longitude fields
Except for some CERES products listed in Section II, clients may NOT
display the data correctly if the latitude and longitude fields are
3-D arrays.
3) HDF-EOS2 files having additional HDF4 objects
Some HDF-EOS2 files have additional HDF4 objects. The objects may be
Vdata or SDS. That is, some contents are added to an HDF-EOS2