[BUG/ISSUE] Cannot get field FLASH_DENS Error: even though Flash dens file is present. #245

Closed
gopikrishnangs44 opened this issue Mar 13, 2020 · 18 comments

gopikrishnangs44 commented Mar 13, 2020

I have been trying to run a nested simulation at 0.25x0.3125 (about 25x30 km) resolution for my domain, and I am running into a problem with the flash density input.
I tried to simulate the first ten days of January 2018, and the file FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4 is already in the HEMCO directory.
When I run, HEMCO.log reports an error like this:

"HEMCO ERROR: Cannot find field with valid time stamp in /home/cccr/hamza/GEOS-CHEM/ExtData/HEMCO/OFFLINE_LIGHTNING/v2019-01/GEOSFP/2018/FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4 - Cannot get field FLASH_DENS. Please check file name and time (incl. time ran
ERROR LOCATION: HCOIO_READ_STD (hcoio_read_std_mod.F90)
ERROR LOCATION: HCOIO_DataRead (hcoio_dataread_mod.F90)
ERROR LOCATION: ReadList_Fill (hco_readlist_mod.F90)
ERROR LOCATION: ReadList_Read (hco_readlist_mod.F90)
ERROR LOCATION: HCO_RUN (hco_driver_mod.F90)

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
HEMCO ERROR: Error encountered in routine HCOIO_Read_Std!
ERROR LOCATION: HCOIO_READ_STD (hcoio_read_std_mod.F90)
ERROR LOCATION: HCOIO_DataRead (hcoio_dataread_mod.F90)
ERROR LOCATION: ReadList_Fill (hco_readlist_mod.F90)
ERROR LOCATION: ReadList_Read (hco_readlist_mod.F90)
ERROR LOCATION: HCO_RUN (hco_driver_mod.F90)"

How can I solve this issue? I got the same error with a geosfp_2x2.5_tropchem run; there I changed the LDENS and CTH fields to MERRA-2 and it worked.
Making the same change for the 0.25x0.3125 run produces a core dump.
I found the same issue reported on this page for 2013 (issue #153), where the fix was to switch to MERRA-2.
My 2x2.5 simulation works well with that change, but the nested run still gives the error.
Please suggest a solution.

The relevant HEMCO_Config.txt entries:

* FLASH_DENS $ROOT/OFFLINE_LIGHTNING/v2019-01/$MET/$YYYY/FLASH_CTH_$MET_0.25x0.3125_$YYYY_$MM.nc4  LDENS 1989-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1
* CONV_DEPTH $ROOT/OFFLINE_LIGHTNING/v2019-01/$MET/$YYYY/FLASH_CTH_$MET_0.25x0.3125_$YYYY_$MM.nc4  CTH   1989-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1

What I tried changing (and what failed):

* FLASH_DENS $ROOT/OFFLINE_LIGHTNING/v2019-01/MERRA2/$YYYY/FLASH_CTH_MERRA2_0.5x0.625_$YYYY_$MM.nc4  LDENS 1980-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1
* CONV_DEPTH $ROOT/OFFLINE_LIGHTNING/v2019-01/MERRA2/$YYYY/FLASH_CTH_MERRA2_0.5x0.625_$YYYY_$MM.nc4  CTH   1980-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1

I think the input file here is at 0.5x0.625 while the model resolution is 0.25x0.3125, and that mismatch is causing the segmentation fault. However, issue #153 says this combination can be used, yet I still get a core dump.
Note that there is no HEMCO error when substituting MERRA-2.

I would also like to know whether there is any way to use the 2017 data for 2018. I recreated the data by changing the timestamps from 2017 to 2018 and tried that, but it also gives an error.
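For reference, a timestamp shift of this kind can be done with xarray along the following lines (a minimal sketch; the file names are illustrative, and it assumes the raw offsets are stored as "minutes since" a reference date, as shown later in this thread):

    import xarray as xr

    # Open without decoding times, so the raw minute offsets and their
    # "units" attribute can be edited together and stay consistent.
    ds = xr.open_dataset("FLASH_CTH_GEOSFP_0.25x0.3125_2017_01.nc4",
                         decode_times=False)

    # Keep the offsets; only move the reference date from 2017 to 2018.
    ds["time"].attrs["units"] = "minutes since 2018-1-1 00:00:00"

    ds.to_netcdf("FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4")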

yantosca self-assigned this on Mar 16, 2020
yantosca (Contributor) commented:

Thanks for writing. I wonder if the timestamps in this file are incorrect. If you type:

ncdump -cts FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4

you get this output for the time dimension:

    "2018-01-14 22:30", "2018-01-15 01:30", "2018-01-15 04:30", 
    "2018-01-15 07:30", "2018-01-15 10:30", "2018-01-15 13:30", 
    "2018-01-15 16:30", "2018-01-15 19:30", "2018-01-15 22:30", 
    "2018-01-16 01:30", "2018-01-16 04:30", "2018-01-16 07:30", 
    "2018-01-16 10:30", "2018-01-16 13:30", "2018-01-16 16:30", 
    "2018-01-16 19:30", "2018-01-16 22:30", "2018-01-17 01:30", 
    "2018-01-17 04:30", "2018-01-17 07:30", "2018-01-17 10:30", 
    "2018-01-17 13:30", "2018-01-17 16:30", "2018-01-17 19:30", 
    "2018-01-17 22:30", "2018-01-18 01:30", "2018-01-18 04:30", 
    "2018-01-18 07:30", "2018-01-18 10:30", "2018-01-18 13:30", 
    "2018-01-18 16:30", "2018-01-18 19:30", "2018-01-18 22:30", 
    "2018-01-19 01:30", "2018-01-19 04:30", "2018-01-19 07:30", 
    "2018-01-19 10:30", "2018-01-19 13:30", "2018-01-19 16:30", 
    "2018-01-19 19:30", "2018-01-19 22:30", "2018-01-20 01:30", 
    "2018-01-20 04:30", "2018-01-20 07:30", "2018-01-20 10:30", 
    "2018-01-20 13:30", "2018-01-20 16:30", "2018-01-20 19:30", 
    "2018-01-20 22:30", "2018-01-21 01:30", "2018-01-21 04:30", 
    "2018-01-21 07:30", "2018-01-21 10:30", "2018-01-21 13:30", 
    "2018-01-21 16:30", "2018-01-21 19:30", "2018-01-21 22:30", 
    "2018-01-22 01:30", "2018-01-22 04:30", "2018-01-22 07:30", 
    "2018-01-22 10:30", "2018-01-22 13:30", "2018-01-22 16:30", 
    "2018-01-22 19:30", "2018-01-22 22:30", "2018-01-23 01:30", 
    "2018-01-23 04:30", "2018-01-23 07:30", "2018-01-23 10:30", 
    "2018-01-23 13:30", "2018-01-23 16:30", "2018-01-23 19:30", 
    "2018-01-23 22:30", "2018-01-24 01:30", "2018-01-24 04:30", 
    "2018-01-24 07:30", "2018-01-24 10:30", "2018-01-24 13:30", 
    "2018-01-24 16:30", "2018-01-24 19:30", "2018-01-24 22:30", 
    "2018-01-25 01:30", "2018-01-25 04:30", "2018-01-25 07:30", 
    "2018-01-25 10:30", "2018-01-25 13:30", "2018-01-25 16:30", 
    "2018-01-25 19:30", "2018-01-25 22:30", "2018-01-26 01:30", 
    "2018-01-26 04:30", "2018-01-26 07:30", "2018-01-26 10:30", 
    "2018-01-26 13:30", "2018-01-26 16:30", "2018-01-26 19:30", 
    "2018-01-26 22:30", "2018-01-27 01:30", "2018-01-27 04:30", 
    "2018-01-27 07:30", "2018-01-27 10:30", "2018-01-27 13:30", 
    "2018-01-27 16:30", "2018-01-27 19:30", "2018-01-27 22:30", 
    "2018-01-28 01:30", "2018-01-28 04:30", "2018-01-28 07:30", 
    "2018-01-28 10:30", "2018-01-28 13:30", "2018-01-28 16:30", 
    "2018-01-28 19:30", "2018-01-28 22:30", "2018-01-29 01:30", 
    "2018-01-30 16:30", "2018-01-30 19:30", "2018-01-30 22:30", 
    "2018-01-31 01:30", "2018-01-31 04:30", "2018-01-31 07:30", 
    "2018-01-31 10:30", "2018-01-31 13:30", "2018-01-31 16:30", 
    "2018-01-31 19:30", "2018-01-31 22:30", "2018-01-01", "2018-01-01", 
    "2018-01-01", "2018-01-01", "2018-01-01", "2018-01-01", "2018-01-01", 
    "2018-01-01" ;

As you can see, there are multiple time points for 2018-01-01. That is probably confusing the HEMCO I/O. The netCDF time variable has to be monotonically increasing (or decreasing) for COARDS compliance.
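As a quick check, monotonicity can be verified with xarray along these lines (a sketch; it assumes the decoded time values are comparable, which they are for datetime64 coordinates):

    import xarray as xr

    ds = xr.open_dataset("FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4")
    t = ds["time"].values

    # COARDS expects a monotonic time coordinate; report any timestep
    # that does not advance beyond the previous one.
    bad = (t[1:] <= t[:-1]).nonzero()[0]
    print("non-monotonic steps at indices:", bad + 1)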

If you look at the next month

ncdump -cts FLASH_CTH_GEOSFP_0.25x0.3125_2018_02.nc4

you see:

 time = "2018-02-01 01:30", "2018-02-01 04:30", "2018-02-01 07:30", 
    "2018-02-01 10:30", "2018-02-01 13:30", "2018-02-01 16:30", 
    "2018-02-01 19:30", "2018-02-01 22:30", "2018-02-02 01:30", 
    "2018-02-02 04:30", "2018-02-02 07:30", "2018-02-02 10:30", 
    "2018-02-02 13:30", "2018-02-02 16:30", "2018-02-02 19:30", 
    "2018-02-02 22:30", "2018-02-03 01:30", "2018-02-03 04:30", 
    "2018-02-03 07:30", "2018-02-03 10:30", "2018-02-03 13:30", 
    "2018-02-03 16:30", "2018-02-03 19:30", "2018-02-03 22:30", 
    "2018-02-04 01:30", "2018-02-04 04:30", "2018-02-04 07:30", 
    "2018-02-04 10:30", "2018-02-04 13:30", "2018-02-04 16:30", 

which is a steadily increasing time dimension.

@ltmurray: Have you noticed this? Would it be possible for you to recreate the FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4 file?


yantosca commented Mar 16, 2020

More info: if you type:

ncdump -c  FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4

then you see the actual time offsets:

 time = 90, 270, 450, 630, 810, 990, 1170, 1350, 1530, 1710, 1890, 2070, 
    2250, 2430, 2610, 2790, 2970, 3150, 3330, 3510, 3690, 3870, 4050, 4230, 
    4410, 4590, 4770, 4950, 5130, 5310, 5490, 5670, 5850, 6030, 6210, 6390, 
    6570, 6750, 6930, 7110, 7290, 7470, 7650, 7830, 8010, 8190, 8370, 8550, 
    8730, 8910, 9090, 9270, 9450, 9630, 9810, 9990, 10170, 10350, 10530, 
    10710, 10890, 11070, 11250, 11430, 11610, 11790, 11970, 12150, 12330, 
    12510, 12690, 12870, 13050, 13230, 13410, 13590, 13770, 13950, 14130, 
    14310, 14490, 14670, 14850, 15030, 15210, 15390, 15570, 15750, 15930, 
    16110, 16290, 16470, 16650, 16830, 17010, 17190, 17370, 17550, 17730, 
    17910, 18090, 18270, 18450, 18630, 18810, 18990, 19170, 19350, 19530, 
    19710, 19890, 20070, 20250, 20430, 20610, 20790, 20970, 21150, 21330, 
    21510, 21690, 21870, 22050, 22230, 22410, 22590, 22770, 22950, 23130, 
    23310, 23490, 23670, 23850, 24030, 24210, 24390, 24570, 24750, 24930, 
    25110, 25290, 25470, 25650, 25830, 26010, 26190, 26370, 26550, 26730, 
    26910, 27090, 27270, 27450, 27630, 27810, 27990, 28170, 28350, 28530, 
    28710, 28890, 29070, 29250, 29430, 29610, 29790, 29970, 30150, 30330, 
    30510, 30690, 30870, 31050, 31230, 31410, 31590, 31770, 31950, 32130, 
    32310, 32490, 32670, 32850, 33030, 33210, 33390, 33570, 33750, 33930, 
    34110, 34290, 34470, 34650, 34830, 35010, 35190, 35370, 35550, 35730, 
    35910, 36090, 36270, 36450, 36630, 36810, 36990, 37170, 37350, 37530, 
    37710, 37890, 38070, 38250, 38430, 38610, 38790, 38970, 39150, 39330, 
    39510, 39690, 39870, 40050, 40230, 40410, 40590, 40770, 40950, 41130, 
    41310, 41490, 41670, 41850, 42030, 42210, 42390, 42570, 42750, 42930, 
    43110, 43290, 43470, 43650, 43830, 44010, 44190, 44370, 44550, 0, 0, 0, 
    0, 0, 0, 0, 0 ;

from the reference time:

 time:units = "minutes since 2018-1-1 00:00:00"

The last 8 data points are zero, which indicates there is an issue.
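Those trailing zeros can also be spotted programmatically (a sketch, reading the raw offsets without decoding them to dates):

    import xarray as xr

    ds = xr.open_dataset("FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4",
                         decode_times=False)
    t = ds["time"].values  # raw offsets in "minutes since 2018-1-1 00:00:00"

    # Zero offsets anywhere past the first entry indicate unwritten
    # (fill) time slots at the end of the record.
    print("zero offsets at indices:", (t == 0).nonzero()[0])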


gopikrishnangs44 commented Mar 16, 2020

I took the 2017 offline lightning data, changed its timestamps to 2018, and ran the simulations.

The time coordinate of the 2017 data is:

<xarray.DataArray 'time' (time: 248)>
array(['2017-01-01T01:30:00.000000000', '2017-01-01T04:30:00.000000000',
       '2017-01-01T07:30:00.000000000', ..., '2017-01-31T16:30:00.000000000',
       '2017-01-31T19:30:00.000000000', '2017-01-31T22:30:00.000000000'],
      dtype='datetime64[ns]')

and the new time coordinate is:

<xarray.DataArray 'time' (time: 248)>
array(['2018-01-01T01:30:00.000000000', '2018-01-01T04:30:00.000000000',
       '2018-01-01T07:30:00.000000000', ..., '2018-01-31T16:30:00.000000000',
       '2018-01-31T19:30:00.000000000', '2018-01-31T22:30:00.000000000'],
      dtype='datetime64[ns]')

The 0.25x0.3125 simulation then fails as follows:

********************************************
* B e g i n   T i m e   S t e p p i n g !! *
********************************************

---> DATE: 2018/01/02  UTC: 00:00  X-HRS:      0.000000
 HEMCO already called for this timestep. Returning.
NASA-GSFC Tracer Transport Module successfully initialized
HEMCO (VOLCANO): Opening /home/cccr/hamza/GEOS-CHEM/ExtData/HEMCO/VOLCANO/v2019-08/2018/01/so2_volcanic_emissions_Carns.20180102.rc
--- Initialize surface boundary conditions from input file ---
--- Finished initializing surface boundary conditions ---
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% USING O3 COLUMNS FROM THE MET FIELDS! %%% 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     - RDAER: Using online SO4 NH4 NIT!
     - RDAER: Using online BCPI OCPI BCPO OCPO!
     - RDAER: Using online SALA SALC
     - DO_STRAT_CHEM: Linearized strat chemistry at 2018/01/02 00:00
###############################################################################
# Interpolating Linoz fields for jan
###############################################################################
     - LINOZ_CHEM3: Doing LINOZ
**Segmentation fault (core dumped)**

There was no HEMCO error either.
HEMCO.log ended with:

CHEM/Test/run/test/OutputDir/GEOSChem.BoundaryConditions.20180102_0000z.nc4
--> LOCATION: HCOIO_READ_STD (hcoio_read_std_mod.F90)
HEMCO WARNING: Data is treated as unitless, but file attribute suggests it is not: mol mol-1 dry. File: /home/cccr/hamza/GEOS-CHEM/Test/run/test/OutputDir/GEOSChem.BoundaryConditions.20180102_0000z.nc4
[the same LOCATION/WARNING pair repeats ten more times]

The file was read during the run:

**HEMCO: Opening /home/cccr/hamza/GEOS-CHEM/ExtData/HEMCO/OFFLINE_LIGHTNING/v2019-01/GEOSFP/2018/FLASH_CTH_GEOSFP_0.25x0.3125_2018_01.nc4**
HEMCO: Opening ./GEOSChem.Restart.20180102_0000z.nc4
     - Found all CN     met fields for 2011/01/01 00:00
     - Found all A1     met fields for 2018/01/02 00:30
     - Found all A3cld  met fields for 2018/01/02 01:30
     - Found all A3dyn  met fields for 2018/01/02 01:30
     - Found all A3mstC met fields for 2018/01/02 01:30
     - Found all A3mstE met fields for 2018/01/02 01:30
     - Found all I3     met fields for 2018/01/02 00:00
 TMPU1    not found in restart, keep as value at t=0
 SPHU1    not found in restart, keep as value at t=0
 PS1_WET  not found in restart, keep as value at t=0
 PS1_DRY  not found in restart, keep as value at t=0
 DELP_DRY not found in restart, set to zero
     - Found all I3     met fields for 2018/01/02 03:00


ltmurray commented Mar 16, 2020 via email

yantosca (Contributor) commented:

Thanks Lee! We can grab that for the Harvard server. I'll have Jun Meng get it for the ComputeCanada server.


ltmurray commented Mar 16, 2020 via email


gopikrishnangs44 commented Mar 16, 2020

Hi Lee,

Thank you.

I got the data, and it is a single file for 2018.
To read it, do I only need to modify the FLASH_DENS and CONV_DEPTH entries as follows:

* FLASH_DENS $ROOT/OFFLINE_LIGHTNING/v2019-01/$MET/$YYYY/FLASH_CTH_$MET_0.25x0.3125_$YYYY.nc4  LDENS 1989-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1
* CONV_DEPTH $ROOT/OFFLINE_LIGHTNING/v2019-01/$MET/$YYYY/FLASH_CTH_$MET_0.25x0.3125_$YYYY.nc4  CTH   1989-2019/1-12/1-31/0-23/+90minute RFY3 xy  1  * -  1 1

Or do I need to split it into monthly files?
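If splitting turns out to be the way to go, it could be done along these lines (a sketch; the single-file name below is assumed):

    import xarray as xr

    ds = xr.open_dataset("FLASH_CTH_GEOSFP_0.25x0.3125_2018.nc4")

    # Write one file per calendar month, following the
    # FLASH_CTH_<MET>_<RES>_YYYY_MM.nc4 naming that the original
    # HEMCO entries expect.
    for month, dsm in ds.groupby("time.month"):
        dsm.to_netcdf(f"FLASH_CTH_GEOSFP_0.25x0.3125_2018_{month:02d}.nc4")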


ltmurray commented Mar 16, 2020 via email


gopikrishnangs44 commented Mar 16, 2020

The result is still:

********************************************
* B e g i n   T i m e   S t e p p i n g !! *
********************************************

---> DATE: 2018/01/02  UTC: 00:00  X-HRS:      0.000000
 HEMCO already called for this timestep. Returning.
NASA-GSFC Tracer Transport Module successfully initialized
HEMCO (VOLCANO): Opening /home/cccr/hamza/GEOS-CHEM/ExtData/HEMCO/VOLCANO/v2019-08/2018/01/so2_volcanic_emissions_Carns.20180102.rc
--- Initialize surface boundary conditions from input file ---
--- Finished initializing surface boundary conditions ---
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% USING O3 COLUMNS FROM THE MET FIELDS! %%% 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     - RDAER: Using online SO4 NH4 NIT!
     - RDAER: Using online BCPI OCPI BCPO OCPO!
     - RDAER: Using online SALA SALC
     - DO_STRAT_CHEM: Linearized strat chemistry at 2018/01/02 00:00
###############################################################################
# Interpolating Linoz fields for jan
###############################################################################
     - LINOZ_CHEM3: Doing LINOZ
**Segmentation fault (core dumped)**

Attaching my HEMCO config file:
HEMCO_Config.txt


ltmurray commented Mar 16, 2020 via email


gopikrishnangs44 commented Mar 16, 2020

I am getting the same issue. My current settings:

ulimit -c unlimited # coredumpsize
ulimit -u 50000 # maxproc
ulimit -v unlimited # vmemoryuse

export OMP_NUM_THREADS=36
export OMP_STACKSIZE=100MB


ltmurray commented Mar 16, 2020 via email

gopikrishnangs44 (Author) commented:

Thank you.

yantosca (Contributor) commented:

Have you tried to recompile with the debugging flags and rerun? That might give you some clue as to where the error is happening. Please see:

http://wiki.geos-chem.org/Debugging_GEOS-Chem#Debug_options_for_GEOS-Chem_Classic_simulations

yantosca (Contributor) commented:

We also have a chapter on Segmentation faults in our Guide to GEOS-Chem error messages:
http://wiki.geos-chem.org/Segmentation_faults

Basically, a segfault means that the model tried to access memory in a way it is not allowed to. This can happen for several reasons, as described on the wiki.

msulprizio added the category: Bug label on Mar 20, 2020
yuexuyan commented:

What about turning off the lightning inventory? I am experiencing a similar issue with this error.

The last simulation I ran was GEOSFP_CO2_2x2.5 from Aug 2019 to Dec 2019. The lightning inventory was not listed in the HEMCO_Config.rc file by default, but it was required when executing a dry run. Is there any way to turn off this inventory, given that its impact is minor?


yantosca changed the title from "[QUESTION] Cannot get field FLASH_DENS Error: even though Flash dens file is present." to "[BUG/ISSUE] Cannot get field FLASH_DENS Error: even though Flash dens file is present." on Mar 30, 2020
yantosca added the topic: HEMCO Submodule label on Mar 30, 2020
yantosca (Contributor) commented:

We are going to add updates in a future version to make this automated, so users don't need to go into the source code to turn this off. I have added a separate feature request for that update -- please see #279

Also see this issue: #277
