gribout ERROR plev for interpolation above model top – in #9: CCLM

  @redc_migration in #86c4a13

Dear colleagues,

I’m running CCLM simulations over East Asia with ERA-Interim data.
cclm version: cosmo_131108_5.0_clm2
int2lm version: int2lm_131101_2.00_clm1
webpep version: EXTPAR-2.0.2

First, in a smaller domain of roughly 17–42N, 100–130E (see domainsGEscreen.jpg attached below), everything seems fine.

Then, in a larger domain of roughly 6–62N, 65–165E, the gribout error occurred when the model tried to write the first pressure-level output (e.g. lffd1995010106p.nc) after the initial time (e.g. lffd1995010100p.nc), with this error message:

——————————————————————————————
*  PROGRAM TERMINATED BECAUSE OF ERRORS DETECTED
*  IN ROUTINE:  p_int
*  ERROR CODE is 1004
*  plev for interpolation above model top!
——————————————————————————————

My CCLM namelist options and the run log for the larger domain are also attached below.

BUT if I choose not to interpolate model-level data to pressure levels, the run continues without error.

Thank you very much for any suggestions!

Weidan

  @burkhardtrockel in #197732c

I cannot see what could be wrong. It might be a bug in the program. You want to have the variables U, V, W, T, RELHUM, FI, QV on p-levels. Can you check it without W, i.e. yvarpl='FI','QV','T','U','V','RELHUM',
and put the variable 'P' in GRIBOUT group 3 and check the first level in the output?
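A sketch of how those two GRIBOUT groups might look (the pressure-level values and the model-level variable list are example assumptions, not taken from the thread):

```fortran
! Pressure-level output group: yvarpl without W
 &GRIBOUT
    yvarpl = 'FI', 'QV', 'T', 'U', 'V', 'RELHUM',
    plev = 200.0, 300.0, 500.0, 700.0, 850.0, 925.0, 1000.0,   ! hPa, example values
 /

! GRIBOUT group 3 (model-level output): 'P' added so the first level can be checked
 &GRIBOUT
    yvarml = 'U', 'V', 'T', 'P',
 /
```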

  @ulrichschättler in #200a5c4

Hello Weidan,
in your cclm_735_stdout.txt file I see that you get CFL violations during the run. The output for lffd1995010106p.nc is the first output after these violations in which you interpolate to pressure levels.

You should check whether these CFL violations ruin your simulation. Often the interpolation to pressure levels is the first place where the model notices this.

Ciao
Uli

  @vladimirplatonov in #9cd02d5

Dear colleagues, I have faced the same problem recently. I ran a nesting simulation (CCLM-to-CCLM) from 0.165° to 0.025° resolution. After 1 day it crashes with the error ‘plev for interpolation above model top’ (see slurm-1151761.out). I noticed NaN values in the YU* files before the crash. Following your advice about the CFL criterion, I adjusted the dt parameter (decreasing it first to 30 and then to 20), but it did not change the result. Besides, this simulation involves extreme winds (up to 30–40 m/s near the surface); could that also affect the results? Could you give any suggestions about namelist parameters or anything else?
I am also attaching the YU* files, the output at one point (M_Teriberka), and the script (cclm5_arctic_nest_modified.sh).
Thank you for any hints.

  @albertocaldas-alvarez in #c6c3473

Dear all

When running CCLM simulations on a 0.025° resolution grid (~2.8 km) I have encountered the same issue described here. In my case, the job output shows no warning message about a violation of the CFL criterion. Nonetheless, the same NaN values appear in the YUPRMASS file.

I have tried reducing the model time step down to 5 seconds, but this makes no difference. I have also tried choosing a different set of pressure levels to interpolate to, with, obviously, a lower topmost pressure level, but this has no impact on my simulations either.

Does anybody know of a different solution? I was thinking of increasing the number of model levels, since this could be related to the anomalously large wind speeds in my simulations.

Any help would be appreciated.

With kind regards

Alberto

PS I attach the output of my model runs

  @albertocaldas-alvarez in #09475a2

Dear all

I have managed to overcome the ERROR 1004 problem with the interpolation to pressure levels.
Trying a denser set of model levels yielded no difference; instead, I have been experimenting with the parameters in the TUNING namelist.
I have managed to isolate the problem: it arises when the second fast-waves scheme (fast_waves_sc.f90) is adopted, in which case the type of bottom boundary condition needs to be changed accordingly.

So, for example, with the settings ldyn_bbc = .FALSE., rlwidth = 50000.0, nrdtau = 5, iadv_order = 5, itype_bbc_w = 114, itype_fast_waves = 2,

the error would appear, as opposed to the settings

ldyn_bbc = .TRUE., rlwidth = 50000.0, nrdtau = 5, iadv_order = 5, itype_bbc_w = 1, itype_fast_waves = 1,

which entail no error.
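For reference, the failing and working configurations can be written as namelist fragments (a sketch; the group name &DYNCTL is my assumption, since in COSMO these dynamics options are usually set in the DYNCTL group rather than in TUNING):

```fortran
! Failing configuration: fast-waves scheme 2 with itype_bbc_w = 114
 &DYNCTL
    ldyn_bbc = .FALSE., itype_bbc_w = 114, itype_fast_waves = 2,
    rlwidth = 50000.0, nrdtau = 5, iadv_order = 5,
 /

! Working configuration: fast-waves scheme 1 with itype_bbc_w = 1
 &DYNCTL
    ldyn_bbc = .TRUE., itype_bbc_w = 1, itype_fast_waves = 1,
    rlwidth = 50000.0, nrdtau = 5, iadv_order = 5,
 /
```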

Regards

Alberto

  @albertocaldas-alvarez in #72ab728

Dear colleagues,

I am writing again in this thread since, unfortunately, I got the same error message some time steps later in my simulation.
The settings I described in my last contribution were able to delay the appearance of NaN values in YUPRMASS until time step 5568 (previously the run would stop at an earlier time step).

This time, I got some error messages mentioning the CFL criterion. To avoid this problem I reduced the time step to 10 seconds. I am using a 2.8 km grid, so normally 20 seconds should suffice.
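As a back-of-the-envelope check (my own estimate, not from the thread), the horizontal advective CFL condition with jet-level winds of about 100 m/s on a 2.8 km grid gives

```latex
\Delta t \le \frac{\Delta x}{|v|_{\max}}
\quad\Rightarrow\quad
\Delta t \le \frac{2800\ \mathrm{m}}{100\ \mathrm{m\,s^{-1}}} = 28\ \mathrm{s},
```

so 20 s is indeed plausible and 10 s leaves a safety margin; if NaNs still appear at 10 s, the advective time step alone is unlikely to be the cause.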

Having done that, the CFL-violation messages disappear, yet I am still getting the plev interpolation error messages.

I also tried replacing W with OMEGA in the GRIBOUT parameters; nevertheless, the values shown in YUPRMASS still explode to NaNs.

Does anybody know where the problem could be? Or, if anybody has done CCLM simulations on a 2.8 km grid over mountainous areas such as the Alps, could you send me the settings you used?

I attach the settings that I am currently using together with the model debugging output.

Thanks in advance

Alberto

  @hans-jürgenpanitz in #1335223

Hi Alberto,

looking at your data I noticed a few things that seem rather strange to me:
1.) What is the reason for applying a vertical grid spacing of 240 m between the surface and the first level above the surface (see the YUSPECIF file)?
2.) Your T_S and T_SO fields (see your laf file) have very high maximum values (larger than 340 K, see YUCHKDAT). If I interpret your model domain correctly (mostly southern France, part of the Mediterranean Sea, western parts of Switzerland and Italy, and a little bit of Spain), I would not expect such high values.
3.) Minimum T values (3-D temperature on model levels) fall below 200 K in the lowest model level during your simulation (see YUCHKDAT). I would say this is much too low, even for the highest summit of the Alps. (I did not check where the low value occurs.)

My conclusion: your problem is due to some completely unphysical conditions that occur during the simulation and make the model explode. The error message you get is a follow-up error.

Hans-Juergen